Compare commits


611 Commits

Author SHA1 Message Date
Alan Rominger
9745058546 Only block commits if black fails for certain paths (#14531) 2023-10-10 10:12:57 -04:00
Aviral Katiyar
c97a48b165 Fix: #14510 Add alt-text codeblock to Images for Userguide: jobs.rst (#14530)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-09 16:40:56 -06:00
Rohit Raj
259bca0113 docs: Update workflows.rst (#14537) 2023-10-06 15:30:47 -06:00
Aviral Katiyar
92c2b4e983 Fix: #14500 Added alt text to images for Userguide: credential_plugins.rst (#14527)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-06 14:53:23 -06:00
Seth Foster
127a0cff23 Set ip_address to empty string
ip_address cannot be null, so set to
empty instead of None

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-10-05 22:53:16 -04:00
Aviral Katiyar
a0ef25006a Fix: #14499 Added alt text to images for Userguide: applications_auth.rst (#14526)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-05 14:22:10 -06:00
Chris Meyers
50c98a52f7 Update setting_up.rst (#14542) 2023-10-05 15:06:40 -04:00
Michelle McCausland
4008d72af6 issue-14522: Add alt-text codeblock to Images for Userguide: webhooks.rst (#14529)
Signed-off-by: Michelle McCausland <mmccausl@redhat.com>
2023-10-05 17:40:07 +01:00
Alan Rominger
e72e9f94b9 Fix collection test flake due to successful canceled command (#14519) 2023-10-04 09:09:29 -04:00
Sasa Jovicic
9d60b0b9c6 Fix #12815 Direct links to AWX do not reroute the user after authentication (#14399)
Signed-off-by: Sasa993 <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <sjovicic@anexia-it.com>
2023-10-03 16:55:22 -04:00
Aviral Katiyar
05b58c4df6 Fix: #14490 Fixed the required spelling errors (#14507)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
2023-10-03 14:15:13 -06:00
TVo
b1b960fd17 Updated Forum terminology and removed mailing list (#14491) 2023-10-03 19:24:19 +01:00
Jakub Laskowski
3c8f71e559 Fixed wrong arguments order in DomainPasswordGrantAuthorizer (#14441)
Signed-off-by: Jakub Laskowski <jakub.laskowski9@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-10-03 11:54:57 -04:00
Alan Rominger
f5922f76fa DROP unnecessary unpartitioned event tables (#14055) 2023-10-03 11:49:23 -04:00
kurokobo
05582702c6 fix: make type conversions work correctly (related #14487) (#14489)
Signed-off-by: kurokobo <2920259+kurokobo@users.noreply.github.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-30 04:02:10 +00:00
Alan Rominger
1d340c5b4e Add a section for postgres max_connections value (#14482) 2023-09-28 10:28:52 -04:00
TVo
15925f1416 Simplified release notes for AWX (#14485) 2023-09-27 14:50:57 -06:00
Salma Kochay
6e06a20cca add subscription usage page 2023-09-27 10:57:04 -04:00
Hao Liu
bb3acbb8ad Debug log for scheduler commit duration (#14035)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-27 09:46:55 -04:00
Hao Liu
a88e47930c Update django version to address CVE-2023-41164 (#14460) 2023-09-27 09:36:02 -04:00
Hao Liu
a0d4515ba4 Explicitly set collection version during promotion (#14484) 2023-09-26 14:19:22 -04:00
Alan Rominger
770cc10a78 Get rid of names_digest hack no longer needed (#14459) 2023-09-26 12:09:30 -04:00
Alan Rominger
159dd62d84 Add null value handling in create_partition (#14480) 2023-09-25 18:28:44 -04:00
TVo
640e5db9c6 Removed references of IRC and fixed formatting in "Work Items" section. (#14478)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-09-25 11:24:39 -06:00
Alan Rominger
9ed527eb26 Consolidate image and server setup in several checks (#14477) 2023-09-25 09:02:20 -04:00
Alan Rominger
29ad6e1eaa Fix bug, None was used instead of empty for DB outage (#14463) 2023-09-21 14:30:25 -04:00
Alan Rominger
3e607f8964 AAP-15927 Use ATTACH PARTITION to avoid exclusive table lock for events (#14433) 2023-09-21 14:27:04 -04:00
TVo
c9d1a4d063 Added release notes for version 23.1.0 (#14471) 2023-09-21 11:02:38 -06:00
Hao Liu
a290b082db Use ldap container hostname for LDAP config (#14473) 2023-09-21 11:31:51 -04:00
Hao Liu
6d3c22e801 Update how to get involved with matrix and forum (#14472) 2023-09-20 18:33:04 +00:00
Michael Abashian
1f91773a3c Simplify docs string base generation 2023-09-20 13:16:54 -04:00
Hao Liu
7b846e1e49 Add makefile target to load dev image into Kind (#13775)
Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-09-19 13:34:10 -04:00
Don Naro
f7a2de8a07 Contributor guide and adjusted titles (#14447)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
2023-09-18 10:40:47 -06:00
Andrew Klychkov
194c214f03 userguide/execution_environments.rst: replace building paragraphs with ref to Get started EE guide (#14429) 2023-09-15 10:20:46 -04:00
Christian Adams
77e30dd4b2 Add link to script for publishing operator on OperatorHub (#14442) 2023-09-15 09:32:19 -04:00
jessicamack
9d7421b9bc Update README (#14452)
Signed-off-by: jessicamack <jmack@redhat.com>
2023-09-14 20:20:06 +00:00
Alan Rominger
3b8e662916 Remove conditional paths due to conflict with required checks (#14450) 2023-09-14 16:19:42 -04:00
Alan Rominger
aa3228eec9 Fix continue-on-error GH actions bug, always run archive step instead 2023-09-14 19:45:07 +00:00
Alan Rominger
7b0598c7d8 Continue workflow steps to save logs from failed tests (#14448) 2023-09-14 18:23:22 +00:00
Ivan Aragonés Muniesa
49832d6379 don't pass the 'organization' or other fields to the search of the instance group or execution environments (#14223) 2023-09-14 09:31:05 -04:00
Alan Rominger
8feeb5f1fa Allow saving github creds in user folder (#14435) 2023-09-12 15:47:12 -04:00
Michael Abashian
56230ba5d1 Show a toast when the job is already in the process of launching 2023-09-06 16:56:34 -04:00
Michael Abashian
480aaeace5 Prevent the user from launching multiple jobs by rapidly clicking on buttons 2023-09-06 16:56:34 -04:00
Joe Garcia
3eaea396be Add base64 check on JWT from authn 2023-09-06 15:58:36 -04:00
Keith Grant
deef8669c9 rebuild package-lock (#14423) 2023-09-06 12:36:50 -07:00
Don Naro
63223a2cc7 allow list for example secrets in docs 2023-09-06 15:15:58 -04:00
Keith Grant
a28bc2eb3f bump babel dependencies (#14370) 2023-09-06 09:14:04 -07:00
Alan Rominger
09168e5832 Edit docker-compose instructions for correctness (#14418) 2023-09-06 11:55:25 -04:00
Alan Rominger
6df1de4262 Avoid activity stream entries for instance going offline (#14385) 2023-09-06 11:18:52 -04:00
Alan Rominger
e072bb7668 Declare license for unique module that uses BSD-2
Co-authored-by: Maxwell G <maxwell@gtmx.me>
2023-09-06 10:43:25 -04:00
Alan Rominger
ec579fd637 Fix collection metadata license to match intent 2023-09-06 10:43:25 -04:00
Marliana Lara
b95d521162 Update missing inventory error message (#14416) 2023-09-06 10:24:25 -04:00
Rick Elrod
d03a6a809d Enable collection integration tests on GHA
There are a number of changes here:

- Abstract out a GHA composite action for running the dev environment
- Update the e2e tests to use that new abstracted action
- Introduce a new (matrixed) job for running collection integration
  tests. This splits the jobs up based on filename.
- Collect coverage info and generate an html report that people can
  download easily to see collection coverage info.
- Do some hacks to delete the intermediary coverage file artifacts
  which aren't needed after the job finishes.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-09-05 16:10:48 -05:00
TVo
4466976e10 Added relnotes for 23.0.0 (#14409) 2023-09-05 15:07:53 -06:00
Don Naro
5733f78fd8 Add readthedocs configuration (#14413) 2023-09-05 15:07:32 -06:00
Alan Rominger
20fc7c702a Add check for building docsite (#14406) 2023-09-05 16:07:48 -04:00
Lila Yasin
6ce5799689 Incorrect capacity for remote execution nodes 14051 (#14315) 2023-09-05 11:20:36 -04:00
Don Naro
dc81aa46d0 Create AWX docsite with RST content (#14328)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-09-01 09:24:03 -06:00
Alan Rominger
ab3ceaecad Remove extra scheduler state save that does nothing (#14396) 2023-08-31 10:35:07 -04:00
John Westcott IV
1bb4240a6b Allow saml_admin_attr to work in conjunction with SAML Org Map (#14285)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-08-31 09:41:30 -03:00
Rick Elrod
5e105c2cbd [CI] Update GHA actions to sate some warnings emitted by test infrastructure (#14398)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-30 23:58:57 -05:00
Alan Rominger
cdb4f0b7fd Consume job_explanation from runner, fix error reporting error (#13482) 2023-08-30 16:45:50 -04:00
Ivanilson Junior
cf1e448577 Fix undefined property error when task is of type yum/debug and was s… (#14372)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
2023-08-30 15:37:28 -04:00
Andrew Klychkov
224e9e0324 [DOCS] tools/docker-compose/README.md: add way to solve postgresql issue (#14225) 2023-08-30 10:45:50 -04:00
Martin Slemr
660dab439b HostMetrics: Hard auto-cleanup (#14255)
Fix host metric settings

Cleanup_host_metric command with default params

Fix order of host metric cleanups
2023-08-30 09:18:59 -04:00
sean-m-sullivan
5ce2055431 update collection workflow example and tests 2023-08-30 09:15:54 -04:00
Alan Rominger
951bd1cc87 Re-run the updater script after upstream removal of future (#14265) 2023-08-29 15:36:42 -04:00
kurokobo
c9190ebd8f docs: update execution_nodes.md to follow changes for receptor_collection (#14247) 2023-08-29 13:06:54 -04:00
Seth Foster
eb33973fa3 Use receptor collection 2.0.0 2023-08-29 13:06:54 -04:00
Seth Foster
40be2e7b6e Use receptor-collection devel 2023-08-29 13:06:54 -04:00
kialam
485813211a Add toast and delete modal messaging when removing/adding peers. (#14373) 2023-08-29 13:06:54 -04:00
Seth Foster
0a87bf1b5e Apply JS formatting from npm prettier 2023-08-29 13:06:54 -04:00
Seth Foster
fa0e0b2576 Removed unused variable in test_instance_peers 2023-08-29 13:06:54 -04:00
Seth Foster
1d3b2f57ce No longer assert on receptor_host_identifier
receptor_host_identifier can be left out
of group_vars and will default to the
'ansible_host' variable
2023-08-29 13:06:54 -04:00
Seth Foster
0577e1ee79 Setup receptor after podman
It might help to install receptor last, so that
when nodes first connect to the mesh
they already have podman installed and can potentially
run jobs. Otherwise the controller might launch
jobs against nodes that aren't fully set up.
2023-08-29 13:06:54 -04:00
Seth Foster
470ecc4a4f Use itertools product instead of nested loop
Make test case cleaner by using itertools product
instead of the triple nested loop

Replace triple single quotes with triple
double quotes
2023-08-29 13:06:54 -04:00
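The commit above replaces a triple nested loop with itertools.product; a minimal sketch of that pattern (the values below are illustrative, not the actual test parameters) might look like this:

```python
import itertools

node_types = ["control", "hybrid", "execution", "hop"]
enabled_states = [True, False]
listener_ports = [None, 27199]

# One flat loop instead of three nested for-loops.
for node_type, enabled, port in itertools.product(node_types, enabled_states, listener_ports):
    print(node_type, enabled, port)
```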
Seth Foster
965127637b Make ip_address read only
Setting a different value for ip_address
and hostname does not work with the current
way we create receptor certs.
2023-08-29 13:06:54 -04:00
Seth Foster
eba130cf41 Change username to <username> in inventory 2023-08-29 13:06:54 -04:00
Seth Foster
441336301e Ensure ip_address is empty string 2023-08-29 13:06:54 -04:00
Seth Foster
2a0be898e6 Fix detecting if peers changed in serializer
Add a check_peers_changed() utility method
to determine if peers in attrs matches
the current instance peers.

Other changes:
- Set ip_address default to "", and do not
allow null.
2023-08-29 13:06:54 -04:00
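A hypothetical sketch of the check_peers_changed() helper described above, comparing the peers submitted in serializer attrs with the instance's current peers (field and queryset names are assumptions, not the actual AWX code):

```python
def check_peers_changed(attrs, instance):
    """Return True if the peers submitted in serializer attrs differ
    from the instance's current peers (illustrative sketch only)."""
    if "peers" not in attrs:
        return False
    submitted = {peer.pk for peer in attrs["peers"]}
    current = set(instance.peers.values_list("pk", flat=True))
    return submitted != current
```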
Seth Foster
c47acc5988 Change PeersSerializer to SlugRelatedField
Get rid of PeersSerializer and just use SlugRelatedField,
which should be a more straightforward approach.

Other changes:
- cleanup code related to the already-removed api/v2/peers
endpoint
- add "hybrid" node type into more instance_peers test cases
2023-08-29 13:06:54 -04:00
Seth Foster
70ba32b5b2 Do not install ansible-runner or podman on hop nodes 2023-08-29 13:06:54 -04:00
Seth Foster
81e06dace2 Add listener_port to provision_instance
API changes
- cannot change peers or enable
peers_from_control_nodes on VM deployments
- allow setting ip_address
- use ip_address over hostname in the generated
group_vars/all.yml
- Drop api/v2/peers endpoint

DB changes
- add ip_address unique constraint, but ignore "" entries

Other changes
- provision_instance should take listener_port option

Tests
- test that new control nodes don't disturb other peers' relationships
- test ip_address over hostname
2023-08-29 13:06:54 -04:00
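The "ip_address unique constraint, but ignore "" entries" change above maps naturally onto a conditional UniqueConstraint in Django; a hedged sketch with illustrative field names, not the actual migration:

```python
from django.db import models


class Instance(models.Model):
    hostname = models.CharField(max_length=250, unique=True)
    ip_address = models.CharField(max_length=50, blank=True, default="")

    class Meta:
        constraints = [
            # Enforce uniqueness only when ip_address is non-empty.
            models.UniqueConstraint(
                fields=["ip_address"],
                condition=~models.Q(ip_address=""),
                name="unique_ip_address_when_set",
            )
        ]
```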
Seth Foster
3e8202590c Remove Disconnected link state
Dynamically flipping from Established
to Disconnected is not the intended
usage of InstanceLink State.

- Link state starts in Adding and becomes
Established once any control node first sees the link
is in the status KnownConnectionCosts
2023-08-29 13:06:54 -04:00
Seth Foster
ad96a72ebe Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
eb0058268b Revert "Remove duplicate install bundle on InstanceDetail"
This reverts commit cf5ccf53f4322b49b1009ca13e4f025c30529b30.
2023-08-29 13:06:54 -04:00
Seth Foster
2bf6512a8e Do not change link state if Removing
inspect_established_receptor_connections should
not change link state if the current state is Removing.

Other changes:
- rename inspect_execution_nodes to inspect_execution_and_hop_nodes
- Default link state is Adding
- Set min listener_port value to 1024
- inspect_established_receptor_connections now
runs as part of cluster_node_heartbeat task
2023-08-29 13:06:54 -04:00
Seth Foster
855f61a04e Bump migration number 186 to 187 2023-08-29 13:06:54 -04:00
Seth Foster
532e71ff45 Remove extra newlines in install bundle all.yml 2023-08-29 13:06:54 -04:00
Seth Foster
b9ea114cac Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
e41ad82687 optional listener port UI (#14300) 2023-08-29 13:06:54 -04:00
Seth Foster
3bd25c682e Allow setting ip_address for execution nodes 2023-08-29 13:06:54 -04:00
Seth Foster
7169c75b1a receptor_python_packages renamed 2023-08-29 13:06:54 -04:00
kialam
fdb359a67b feature hop node topology updates (#14142) 2023-08-29 13:06:54 -04:00
Seth Foster
ed2a59c1a3 receptor python packages 2023-08-29 13:06:54 -04:00
Jake Jackson
906f8a1dce [hop node] documentation update in execution_nodes for hop nodes (#14215)
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Lila Yasin
6833976c54 [hop node] fix failing ci checks on feature_hop-node branch (#14226) 2023-08-29 13:06:54 -04:00
Seth Foster
d15405eafe Add peers_from for reverse peers M2M
use devel receptor-collection
2023-08-29 13:06:54 -04:00
Lila
6c3bbfc3be Looking to see if revising the path in the static dir resolves failing ci check. 2023-08-29 13:06:54 -04:00
Lila Yasin
2e3e6cbde5 hop node migration file updates (#14196)
rename migration function set_peers_from_control_nodes_true to automatically_peer_from_control_plane
import settings and only run function if settings.IS_K8S is true
set listener_port for control nodes to None
2023-08-29 13:06:54 -04:00
Lila Yasin
54894c14dc Hop node AWX Collection Updates (#14153)
Add hop node support to awx collections
- add peers and peers_from_control_nodes fields
- show new node_type "hop"
- add tests for adding hop nodes via collections

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Seth Foster
2a51f23b7d Add functional API tests
add tests for calling write_receptor_config

add write_receptor_config test

Do not set default listener_port on control node
2023-08-29 13:06:54 -04:00
Jake Jackson
80df31fc4e [hop node] update peer validation logic (#14132) 2023-08-29 13:06:54 -04:00
Lila Yasin
8f8462b38e Marked hop node validation errors for translation (#14116) 2023-08-29 13:06:54 -04:00
Seth Foster
0c41abea0e Make peers field optional 2023-08-29 13:06:54 -04:00
Lila Yasin
3eda1ede8d Migration file to set peers_from_control_ nodes to true for existing execution nodes (#14061) 2023-08-29 13:06:54 -04:00
Jake Jackson
40fca6db57 [hop_node] Validate listener_port is defined for peers (#14056)
add peer listener_port validation and update the install bundle depending on whether listener_port is defined.
2023-08-29 13:06:54 -04:00
Seth Foster
148111a072 Remove task that enables COPR receptor repo (#14088)
do not pip install receptorctl
2023-08-29 13:06:54 -04:00
Lila Yasin
9cad45feac Prevent manual peering of control plane nodes to hop node (#13966) 2023-08-29 13:06:54 -04:00
Seth Foster
6834568c5d Add receptor host identifier to group_vars
Add disconnected link state topology
2023-08-29 13:06:54 -04:00
Lorenzo Tanganelli
f7fdb7fe8d Add peers readonly api and instancelink constraint (#13916)
Add Disconnected link state

introspect_receptor_connections is a periodic
task that examines active receptor connections
and cross-checks them with the InstanceLink info.

Any links that should be active but are not
will be put into a Disconnected state. If
active, it will be in an Established state.

UI - Add hop creation and peers mgmt (#13922)

* add UI for mgmt peers, instance edit and add

* add peer info on detail and bug fix on detail

* remove unused chip and change peer label

* rename lookup, put Instance type disable on edit

---------

Co-authored-by: tanganellilore <lorenzo.tanagnelli@hotmail.it>
2023-08-29 13:06:54 -04:00
Seth Foster
d8abd4912b Add support for hop nodes in API 2023-08-29 13:06:54 -04:00
Alan Rominger
4fbdc412ad Restrict PR body check to just AWX repo 2023-08-29 09:29:30 -04:00
Alan Rominger
db1af57daa Revert "Adding PR check to ensure JIRA links are present"
This reverts commit 3ae6174050.
2023-08-29 09:29:30 -04:00
Hao Liu
ffa59864ee Fix CVE-2023-40267 (#14388)
CVE-2023-40267 GitPython: Insecure non-multi options in clone and clone_from is not blocked https://bugzilla.redhat.com/show_bug.cgi?id=2231474

GitPython before 3.1.32 does not block insecure non-multi options in clone and clone_from. NOTE: this issue exists because of an incomplete fix for CVE-2022-24439.

References:
gitpython-developers/GitPython@ca965ec gitpython-developers/GitPython#1609
2023-08-28 15:35:32 -04:00
bxbrenden
b209bc67b4 Fix typo in description of scm_update_on_launch (#14382) 2023-08-28 16:52:44 +00:00
Chandler Swift
1faea020af Fix default redis url to pass check in redis-py>4.4 (#14344)
Signed-off-by: Chandler Swift <chandler+pearson@chandlerswift.com>
Co-authored-by: Rebeccah Hunter <rhunter@redhat.com>
2023-08-25 09:48:36 -04:00
Pablo Hess
b55a099620 Clarify that the license module requires fetching subs prior (#14351)
Co-authored-by: Pablo N. Hess <phess@redhat.com>
2023-08-23 15:20:47 -04:00
David Danielsson
f6dd3cb988 Enforce mutually exclusive options in credential module of the collection (#14363) 2023-08-23 15:16:06 -04:00
Alan Rominger
c448b87c85 AAP-10891 Apply AWX_TASK_ENV when performing credential plugin lookups (#14271) 2023-08-23 13:26:12 -04:00
Rick Elrod
4dd823121a Update cryptography for CVE-2023-38325 (#14358)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-23 10:54:20 -05:00
Michael Abashian
ec4f10d868 Add location for locales in nginx config 2023-08-22 16:33:00 -04:00
Marliana Lara
2a1dffd363 Fix edit constructed inventory hanging loading state (#14343) 2023-08-21 12:36:36 -04:00
digitalbadger-uk
8c7ab8fcf2 Added required epoch time field for Splunk HEC Event Receiver (#14246)
Signed-off-by: Iain <iain@digitalbadger.com>
2023-08-21 09:44:52 -03:00
Hao Liu
3de8455960 Fix missing trailing / in PUBLIC_PATH for UI
A missing trailing `/` causes the UI to load files from the incorrect static dir location.
2023-08-17 15:16:59 -04:00
Hao Liu
d832e75e99 Fix ui-next build step file path issue
Add the full path to the mv command so that it can be run from ui_next and from the project root.

Additionally, move the file rename to the src build step.
2023-08-17 15:16:59 -04:00
abwalczyk
a89e266feb Fixed task and web docs (#14350) 2023-08-17 12:22:51 -04:00
Hao Liu
8e1516eeb7 Update UI_NEXT build to set PRODUCT and PUBLIC_PATH
https://github.com/ansible/ansible-ui/pull/792 added a configurable public path (which was changed to '/' in https://github.com/ansible/ansible-ui/pull/766/files#diff-2606df06d89b38ff979770f810c3c269083e7c0fbafb27aba7f9ea0297179828L128-R157)

This PR adds the variable when building ui-next
2023-08-16 18:35:12 -04:00
Hao Liu
c7f2fdbe57 Rename ui_next index.html to index_awx.html during build process
Due to a change made in https://github.com/ansible/ansible-ui/pull/766/files#diff-7ae45ad102eab3b6d7e7896acd08c427a9b25b346470d7bc6507b6481575d519R18, awx/ui_next/build/awx/index_awx.html was renamed to awx/ui_next/build/awx/index.html

This PR fixes the problem by renaming the file back
2023-08-16 18:35:12 -04:00
delinea-sagar
c75757bf22 Update python-tss-sdk dependency (#14207)
Signed-off-by: delinea-sagar <sagar.wani@c.delinea.com>
2023-08-16 20:07:35 +00:00
Kevin Pavon
b8ec7c4072 Schedule rruleset fix related #13446 (#13611)
Signed-off-by: Kevin Pavon <7450065+KaraokeKev@users.noreply.github.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-16 16:10:31 -03:00
jbreitwe-rh
bb1c155bc9 Fixed typos (#14347) 2023-08-16 15:05:23 -04:00
Jeff Bradberry
4822dd79fc Revert "Improve performance for awx cli export (#13182)" 2023-08-15 15:55:10 -04:00
jainnikhil30
4cd90163fc make the default JOB_EVENT_BUFFER_SECONDS 1 second (#14335) 2023-08-12 07:49:34 +05:30
Alan Rominger
8dc6ceffee Fix programming error in facts retry merge (#14336) 2023-08-11 13:54:18 -04:00
Alan Rominger
2c7184f9d2 Add a retry to update host facts on deadlocks (#14325) 2023-08-11 11:13:56 -04:00
Martin Slemr
5cf93febaa HostMetricSummaryMonthly: Analytics export 2023-08-11 09:38:23 -04:00
Alan Rominger
284bd8377a Integrate scheduler into dispatcher main loop (#14067)
Dispatcher refactoring to get pg_notify publish payload
  as separate method

Refactor periodic module under dispatcher entirely
  Use real numbers for schedule reference time
  Run based on due_to_run method

Review comments about naming and code comments
2023-08-10 14:43:07 -04:00
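The "run based on due_to_run method" refactor above suggests a schedule object keyed on a real-number reference time; a rough sketch under assumed names, not the dispatcher's actual implementation:

```python
import time


class Schedule:
    """Rough sketch of a periodic schedule driven by a due_to_run() check."""

    def __init__(self, interval):
        self.interval = interval          # seconds between runs
        self.last_run = time.monotonic()  # real-number reference time

    def due_to_run(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_run) >= self.interval

    def mark_run(self, now=None):
        self.last_run = time.monotonic() if now is None else now
```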
Jeff Bradberry
14992cee17 Add in an async task to migrate the data over 2023-08-10 13:48:58 -04:00
Jeff Bradberry
6db663eacb Modify main/0185 to set aside the json fields that might be a problem
Rename them, then create a new clean field of the new jsonb type.
We'll use a task to do the data conversion.
2023-08-10 13:48:58 -04:00
Ivanilson Junior
87bb70bcc0 Remove extra quote from Skipped task status string (#14318)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
Co-authored-by: kialam <digitalanime@gmail.com>
2023-08-09 15:58:46 -07:00
Pablo Hess
c2d02841e8 Allow importing licenses with a missing "usage" attribute (#14326) 2023-08-09 16:41:14 -04:00
onefourfive
e5a6007bf1 fix broken link to upgrade docs. related #11313 (#14296)
Signed-off-by: onefourfive <>
Co-authored-by: onefourfive <unknown>
2023-08-09 15:06:44 -04:00
Alan Rominger
6f9ea1892b AAP-14538 Only process ansible_facts for successful jobs (#14313) 2023-08-04 17:10:14 -04:00
Sean Sullivan
abc56305cc Add Request time out option for collection (#14157)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-03 15:06:04 -03:00
kialam
9bb6786a58 Wait for new label IDs before setting label prompt values. (#14283) 2023-08-03 09:46:46 -04:00
Michael Abashian
aec9a9ca56 Fix rbac around credential access add button (#14290) 2023-08-03 09:18:21 -04:00
John Westcott IV
7e4cf859f5 Added PR check to ensure JIRA links are present (#13839) 2023-08-02 15:28:13 -04:00
mcen1
90c3d8a275 Update example service-account.yml for container group in documentation (#13479)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Nana <35573203+masbahnana@users.noreply.github.com>
2023-08-02 15:27:18 -04:00
lucas-benedito
6d1c8de4ed Fix trial status and host limit with sub (#14237)
Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-08-02 10:27:20 -04:00
Seth Foster
601b62deef bump python-daemon package (#14301) 2023-08-01 01:39:17 +00:00
Seth Foster
131dd088cd fix linting (#14302) 2023-07-31 20:37:37 -04:00
Rick Elrod
445d892050 Drop unused django-taggit dependency (#14241)
This drops the django-taggit dependency and drops the relevant fields
from old migrations.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-31 10:05:27 -05:00
Michael Abashian
35a576f2dd Adds autoComplete attribute to forms that were missing it (#14080) 2023-07-28 09:49:36 -04:00
John Westcott IV
7838641215 Fixed dependencies tag in PR labeler (#14286) 2023-07-28 08:30:30 -04:00
Alan Rominger
ab5cc2e69c Simplifications for DependencyManager (#13533) 2023-07-27 15:42:29 -04:00
John Westcott IV
5a63533967 Added support to collection for named urls (#14205) 2023-07-27 10:22:41 -03:00
Christian Adams
b549ae1efa Only show the product version header when the requester is authenticated (#14135) 2023-07-26 18:38:05 -04:00
Alex Corey
bd0089fd35 fixes docs link for controller versions >= 4.3 (#14287) 2023-07-26 21:54:39 +00:00
Christian Adams
40d18e95c2 Explicitly turn off autocomplete for API login form (#14232) 2023-07-26 15:33:26 -04:00
Andrew Klychkov
191a0f7f2a docs/execution_environments.md: add a link to EE getting started guide (#14263) 2023-07-26 15:05:36 -04:00
eric-zadara
852bb0717c Return back chdir to project sync to support project-local roles/collections
Signed-off-by: eric-zadara <eric@zadarastorage.com>
2023-07-25 09:58:43 -05:00
Alan Rominger
98bfe3f43f Add missing trigger for failed-to-start nodes (#13802) 2023-07-24 12:17:46 -04:00
John Westcott IV
53a7b7818e Updating release process doc for operator hub instructions (#13564) 2023-07-24 15:29:26 +01:00
Gabriel Muniz
e7c7454a3a Remove host update code which can be non performant (#14233) 2023-07-24 09:56:40 -04:00
Homero Pawlowski
63e82aa4a3 Fix collection module docs for names, IDs, and named URLs (#14269) 2023-07-24 08:57:46 -04:00
ZitaNemeckova
fc1b74aa68 Remove extra data for AoC (#14254) 2023-07-19 11:16:53 -04:00
Alan Rominger
ea455df9f4 Only push the production images for main repo (#14261) 2023-07-19 09:51:33 -04:00
Satoe Imaishi
8e2a5ed8ae Require pyyaml >= 6.0.1 (#14262) 2023-07-18 16:25:14 -05:00
Rick Elrod
1d7e54bd39 Wrap Django RedisCache to mute exceptions (#14243)
We introduce a thin wrapper over Django's RedisCache so that the functionality of DJANGO_REDIS_IGNORE_EXCEPTIONS is retained while still being able to drop the django-redis dependency.

Credit to django-redis's implementation for the idea of using a decorator for this and abstracting out the exception handling logic.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-18 15:31:09 -05:00
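The wrapper described above borrows django-redis's decorator idea; a rough sketch under assumed setting and class names (not AWX's actual implementation):

```python
import functools

from django.conf import settings
from django.core.cache.backends.redis import RedisCache
from redis.exceptions import ConnectionError, TimeoutError


def omit_redis_exceptions(method):
    """Swallow Redis connection errors when the ignore flag is set."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        try:
            return method(*args, **kwargs)
        except (ConnectionError, TimeoutError):
            if getattr(settings, "DJANGO_REDIS_IGNORE_EXCEPTIONS", False):
                return None
            raise
    return wrapper


class QuietRedisCache(RedisCache):
    # Wrap the hot-path cache methods with the exception-muting decorator.
    get = omit_redis_exceptions(RedisCache.get)
    set = omit_redis_exceptions(RedisCache.set)
    delete = omit_redis_exceptions(RedisCache.delete)
```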
Cristiano Nicolai
83df056f71 Small doc fixes for workflow and task manager (#14242) 2023-07-18 19:23:48 +00:00
Rick Elrod
48edb15a03 Prevent Dispatcher deadlock when Redis disappears (#14249)
This fixes https://github.com/ansible/awx/issues/14245 which has
more information about this issue.

This change addresses both:
- A clashing signal handler (registering a callback to fire when
  the task manager times out, and hitting that callback in cases
  where we didn't expect to). Make dispatcher timeout use
  SIGUSR1, not SIGTERM.
- Metrics not being reported should not make us crash, so that is
  now fixed as well.

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-07-18 10:43:46 -05:00
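The signal-handler clash described above (a timeout callback firing where shutdown handling was expected) comes down to registering the timeout callback on a dedicated signal; a minimal illustration, not the dispatcher's actual code:

```python
import signal
import time


def on_task_manager_timeout(signum, frame):
    # Fired only for the dedicated timeout signal, never during shutdown.
    print("task manager timed out; recovering")


def on_shutdown(signum, frame):
    print("received SIGTERM; shutting down")
    raise SystemExit(0)


signal.signal(signal.SIGUSR1, on_task_manager_timeout)
signal.signal(signal.SIGTERM, on_shutdown)

while True:
    time.sleep(1)  # stand-in for the main loop
```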
John Westcott IV
8ddc19a927 Changing how associations work in awx collection (#13626)
Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-07-17 14:16:55 -03:00
Sean Sullivan
b021ad7b28 Allow job_template collection module to set verbosity to 5 (#14244) 2023-07-17 09:48:14 -05:00
Rick Elrod
b8ba2feecd Tell Makefile and pre-commit.sh that they are bash
On some systems, /bin/sh is a bash symlink and running it will launch
bash in sh compatibility mode. However, bash-specific syntax will still
work in this mode (for example using == or pipefail).

But on systems where /bin/sh is a symlink to another shell (think:
Debian-based), those bashisms might not be available.

Set the shell in the Makefile, so that it uses bash (since it is already
depending on bash, even though it is calling it as /bin/sh by default),
and add a shebang to pre-commit.sh for the same reason.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-14 12:06:55 -05:00
Rick Elrod
8cfb704f86 Migrate from django-redis to Django's built-in Redis caching support (#14210)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-13 12:16:16 -05:00
John Westcott IV
efcac860de Upgrade django to 4.2.3 (#14228) 2023-07-13 08:52:50 -04:00
Martin Slemr
6c5590e0e6 HostMetricSummaryMonthly command + views + scheduled task (#13999)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-07-12 16:40:09 -04:00
Erez Tamam
0edcd688a2 add organization column notification template list (#13998) 2023-07-12 15:11:47 -04:00
Alan Rominger
b8c48f7d50 Restore pre-upgrade pg_notify notification behavior (#14222) 2023-07-11 16:23:53 -04:00
John Westcott IV
07e30a3d5f Refined release documentation (#14221) 2023-07-10 19:45:34 +00:00
John Westcott IV
cb5a8aa194 Fix black pre-commit hook (#14212) 2023-07-06 16:36:50 -04:00
Seth Foster
8b49f910c7 Add settings.RECEPTOR_LOG_LEVEL, update work signing key path (#14098) 2023-07-06 11:39:30 -04:00
kialam
a4f808df34 Schedules form - pass time prop as string. (#14206) 2023-07-06 07:57:55 -07:00
Alan Rominger
82abd18927 Fix DELETE 500 KeyError due to eventless model events (#14172) 2023-07-05 15:37:52 -04:00
John Westcott IV
5e9d514e5e Added CSRF Origin in settings (#14062) 2023-07-05 15:18:23 -04:00
Rick Elrod
4a34ee1f1e Add optional pgbouncer to dev environment (#14083)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-05 13:41:47 -05:00
John Westcott IV
3624fe2cac Add combined roles/collection requirements on project sync (#14081) 2023-07-05 13:25:44 -03:00
Cesar Francisco San Nicolas Martinez
0f96d9aca2 Rename/relocate receptor crt in install bundle (#14201) 2023-07-05 14:50:55 +02:00
Shane McDonald
989b80e771 Fix selinux errors with Redis mount in dev env 2023-07-03 09:57:01 -04:00
John Westcott IV
cc64be937d Fix spelling errors in readme of awx_collection/tools
Signed-off-by: John Westcott <john.westcott.iv@redhat.com>
2023-06-30 15:41:47 -04:00
John Westcott IV
94183d602c Enhancing vault integration
Added persistent storage

Auto-create vault and awx via playbooks

Create a new pattern for custom containers where we can do initialization

Auto-install roles needed for plumbing via the Makefile
2023-06-30 10:05:15 -04:00
Vidya Nambiar
ac4ef141bf Fix filter experience when assigning access to teams (#14175) 2023-06-29 15:15:32 -04:00
jainnikhil30
86f6b54eec add the bulk api swagger topic for API reference docs (#14181) 2023-06-28 21:55:38 +05:30
Michael Abashian
bd8108b27c Fixed bug where a weekly rrule string without a BYDAY would result in the UI throwing a TypeError (#14182) 2023-06-28 11:10:49 -04:00
Alan Rominger
aed96fb365 Use the proper queryset to filter project update events (#14166) 2023-06-26 21:41:08 -04:00
Alan Rominger
fe2da52eec Upgrade Github actions issue labeler to fix 404 errors (#14163) 2023-06-26 17:14:53 -04:00
Alan Rominger
974465e46a Add hashivault option as docker-compose optional container (#14161)
Co-authored-by: Sarabraj Singh <singh.sarabraj@gmail.com>
2023-06-26 15:48:58 -04:00
Alan Rominger
c736986023 Try to fix CI by adding dropped coreapi lib (#14165) 2023-06-26 15:11:12 -04:00
Akira Yokochi
6b381aa79e Add example for ad_hoc_command module (#14106) 2023-06-23 11:59:16 -04:00
Alan Rominger
755e55ec70 Remove reference to unmaintained runner image (#14143) 2023-06-23 10:15:11 -04:00
Rick Elrod
255c2e4172 [wsrelay] Give connection tasks time to clean up
When we close/cancel a connection to a web node, give the task time to
clean up after itself and cleanly exit. Otherwise, the Python GC might
clean up the task too early and this leads to ugly log messages like
this: "Task was destroyed but it is pending!"

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-23 00:56:24 -05:00
Alan Rominger
aa8437fd77 Tooling for running collection tests locally ad hoc (#14160) 2023-06-22 13:32:09 -04:00
Akira Yokochi
66f14bfe8f Using execution_environment option in ad_hoc_command module (#14105) 2023-06-22 13:10:01 -04:00
Gabriel Muniz
721a2002dc Add --interval to launch monitor command (#14068)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-06-22 11:07:26 -03:00
Seth Foster
af39b2cd3f Rename work signing private key filename (#14156) 2023-06-21 19:50:04 -04:00
Lorenzo Tanganelli
cdd48dd7cd Add instance_groups on resource_list_param_keys in awx_collection (#14146) 2023-06-21 19:29:14 +00:00
Sean Sullivan
d3de884baf In collection, give changed status in workflow_job_template when destroying nodes (#13928) 2023-06-21 15:17:53 -04:00
Benjamin Dudas
fa8968b95b Fix for Save on the Jobs settings page not responding (#14103)
Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-06-21 15:14:31 -04:00
Jesse Wattenbarger
897a19e127 Add None check back to get_post_fields (#14155) 2023-06-21 12:37:59 -04:00
Artsiom Musin
4bae961b5f Improve performance for awx cli export (#13182)
Co-authored-by: Jesse Wattenbarger <jwattenb@redhat.com>
2023-06-21 10:49:22 -04:00
Seth Foster
900c4fd8f1 Rename work signing private key filename (#14151) 2023-06-21 09:52:58 -04:00
Akira Yokochi
4d5bbd7065 Fixed typo in integration test for group module (#14140) 2023-06-21 09:28:01 -04:00
Gabriel Muniz
fb8fadc7f9 Add new ANSIBLE_COLLECTIONS_PATH in preparation for deprecation of plural version (#14079) 2023-06-20 10:32:18 -03:00
John Westcott IV
ba99ddfd82 Fix PR and issue labeler job permissions (#14134) 2023-06-15 18:56:40 +00:00
Gabriel Muniz
9676a95e05 Add AWS Secretsmanager plugin (#13778)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-06-15 10:12:02 -04:00
Gabriel Muniz
36d6ed9cac Removed automatic failure of job template launch when last project update is failed and update on launch is enabled (#13796)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-06-15 10:11:13 -04:00
Gabriel Muniz
875f1a82e4 Add dynamically configurable debug settings (#14008)
Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-06-15 09:31:54 -04:00
Rick Elrod
db71b63829 Address comments from @jjwatt
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-14 17:40:15 -04:00
John Westcott IV
cd4d83acb7 Compensating for NUL unicode characters
NUL characters are not allowed in text fields in the database

We used to strip them out of stdout but the exception changed

And we want to be sure to strip them out of JSONBlob fields
2023-06-14 17:40:15 -04:00
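PostgreSQL text columns reject NUL (\x00) bytes, so the compensation described above amounts to stripping them before save; a simple illustrative helper (names are not from the codebase):

```python
def strip_nul(value):
    """Recursively remove NUL characters from strings, dicts, and lists."""
    if isinstance(value, str):
        return value.replace("\x00", "")
    if isinstance(value, dict):
        return {key: strip_nul(val) for key, val in value.items()}
    if isinstance(value, list):
        return [strip_nul(item) for item in value]
    return value


print(strip_nul("stdout with a stray \x00 byte"))
```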
John Westcott IV
7e25a694f3 Making all non-complicated JSONBlobs JSONFields 2023-06-14 17:40:15 -04:00
John Westcott IV
baca43ee62 Performing test maintainance 2023-06-14 17:40:15 -04:00
John Westcott IV
3b69552260 Forcing our JSONField to use text instead of Jsonb data 2023-06-14 17:40:15 -04:00
Rick Elrod
f9bd780d62 [wsrelay] Port back to psycopg3
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-14 17:40:15 -04:00
John Westcott IV
a665d96026 Replacing psycopg2.copy_expert with psycopg3.copy 2023-06-14 17:40:15 -04:00
John Westcott IV
e47d30974c Removing psycopg2 references 2023-06-14 17:40:15 -04:00
John Westcott IV
2b8ed66f3e Updating old migration for psycopg3 2023-06-14 17:40:15 -04:00
John Westcott IV
dfe8b3b16b Removes psycopg2 in favor of psycopg3 2023-06-14 17:40:15 -04:00
Artsiom Musin
c738d0788e Check for a list of all option instead of string (#14046) 2023-06-14 15:41:06 -04:00
Jesse Wattenbarger
0c2d589109 Lazy init VERSION vars in Makefile (#14093) 2023-06-14 15:00:38 -04:00
Sean Sullivan
a47bbb5479 bugfix collection role module target_teams and instance_groups options (#14119) 2023-06-14 17:53:24 +00:00
Shane McDonald
4b4b73c02a Fix ARM builds (#14125) 2023-06-14 16:40:59 +00:00
John Westcott IV
d1d08fe499 Changed pin of rsyslog version (#14117) 2023-06-13 16:33:25 -04:00
Hao Liu
7e7a9f541c Remove install bundle download restriction (#14092) 2023-06-12 16:08:44 -04:00
kialam
98d67e2133 Update Patternfly and related deps. (#14086) 2023-06-12 12:35:26 -07:00
Alan Rominger
7a36041bf2 Remove whitespace artifacts from black with f-strings (#14112) 2023-06-12 11:52:22 -04:00
Hao Liu
b96564da55 Rename/relocate receptor cert and keys (#14091) 2023-06-09 12:57:04 -04:00
Seth Foster
044d6bf97c Fix task_system logs twice (#14096) 2023-06-07 16:50:56 -04:00
delinea-sagar
d357c1162f Awx.credential plugin.tss (#13985) 2023-06-07 19:36:15 +00:00
Darshan
3c22fc9242 Fix : awx.awx.group preserve hosts fails when there are no hosts (#13913)
Co-authored-by: Sean Sullivan <ssulliva@redhat.com>
2023-06-07 15:24:59 -04:00
Seth Foster
8c86092bf5 Remove random UUIDs from swagger json (#14089) 2023-06-06 10:44:15 -04:00
Cesar Francisco San Nicolas Martinez
081206965c Generate random UUID by default for added remote nodes (#14074) 2023-06-06 12:36:28 +02:00
Rick Elrod
036f85cd80 Two silly internal cleanups
- Nix an unused function from run_dispatcher. This stopped being used
  in 558e92806b but was never removed.

- Fix a typo in run_ws_heartbeat: hearbeat -> heartbeat that has existed
  since the beginning of this daemon.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-05 14:46:25 -05:00
Gabriel Muniz
6976ac9273 Add management command to precreate partitioned tables (#14076) 2023-06-05 18:20:53 +00:00
rakesh561
9009a21a32 Update Mesh.js to allow for running AWX at non-root path (URL prefixing) (#14020)
Co-authored-by: Michael Abashian <mabashia@redhat.com>
2023-06-05 11:46:12 -04:00
Shane McDonald
aafd4df288 Fix /api/swagger endpoint (available only in development mode) (#13197)
Co-authored-by: John Westcott IV <john.westcott.iv@redhat.com>
2023-06-02 12:58:21 -04:00
John Westcott IV
844666df4c Send real client remote address in TACACS+ authentication packet (#14077)
Co-authored-by: ekougs <ekougs@gmail.com>
2023-06-02 10:03:56 -04:00
Rick Elrod
0ae720244c [rsyslog] Enable disk-assisted queuing on output (#14005)
Right now we only enable queuing on the rsyslog main_queue. This adds a
parameter to also enable it on the omhttp output action. As omhttp can
take time to process messages (e.g. blocking on the result of its HTTP
requests), this change allows for queuing messages up and hopefully
preventing some messages from getting lost when the log server is slow
to respond.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-06-01 22:37:45 -05:00
Alex Corey
b70fa88b78 Adds RTL tests to new component, and to Instances List (#12927) 2023-06-01 19:19:24 +00:00
Alan Rominger
fbaeb90268 Apply conservative database connection reduction changes (#14066)
This is expected to free up 4 additional database connections per traditional node,
  compared to roughly 12 in total before this change

Of these, 3 are accomplished by reusing existing connections for recently added services,
  and 1 is obtained by closing the connection of the idle callback receiver main process

Signed-off-by: jessicamack <jmack@redhat.com>
Co-authored-by: jessicamack <jmack@redhat.com>
2023-06-01 14:59:18 -04:00
Michael Abashian
2a549c0b23 Removes dependabot for opening ui dependency pr's (#14075) 2023-06-01 14:30:02 -04:00
Alan Rominger
2c320cb16d Manually run subquery for parent event updates (#14044)
Fixes a long query when processing playbook_on_stats events
2023-06-01 07:55:56 -04:00
lucas-benedito
434595481c AAP-8038 - enable/disable services on reboot (#13415)
Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-05-31 19:24:30 +00:00
sll552
444d05447e Fix ovirt source (#12882) 2023-05-31 15:22:58 -04:00
Michael Abashian
fbe202bdbf Adds missing rel="noopener noreferrer" to each link element with target="_blank" (#13959) 2023-05-31 13:49:39 -04:00
Michael Abashian
d89cad0d9e Adds managed_by_policy checkbox to instances form. Adds warnings when associating or disassociating instances from instance groups. (#13994) 2023-05-31 12:31:55 -04:00
Marliana Lara
bdfd6f47ff Use PATCH request when updating wf nodes (#14063) 2023-05-31 12:30:58 -04:00
Gabriel Muniz
ae7be2eea1 Add instance_group to bulk api (#13982)
Co-authored-by: Elijah DeLee <kdelee@redhat.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-26 09:09:44 -03:00
Baptiste Agasse
8957a84738 Related #13336 - DNS resolution is preventing awx_collection from working with http[s]_proxy (#13524)
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-05-24 20:00:07 +00:00
Rick Elrod
bac124004f Rename heartbeet daemon to ws_heartbeat (#14041)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-24 13:27:55 -05:00
Joel Tenta
f46c7452d1 Spelling and codespelling corrections from community PR
- Made the choice not to pull in the CI tools due to the possibility of it blocking PRs.

Co-Authored By: Lila Yasin <89486372+djyasin@users.noreply.github.com>
2023-05-24 10:06:42 -04:00
John Westcott IV
098861d906 Updated sqlparse library (#13962)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-24 08:09:29 -03:00
John Westcott IV
daf39dc77e Adding capability of pretty error pages (#13852)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-23 14:05:38 -03:00
Hao Liu
00d8291d40 Change logging setting for task analytic scheduler (#14031) 2023-05-23 13:01:12 -04:00
Rick Elrod
88d1a484fa [dev docs] Re-document websockets infrastructure (#13992)
Re-add documentation for how AWX websockets and channels work, in the post-web/task split world.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-22 16:41:23 -05:00
Michael Abashian
5afdfb1135 Escape parenthesis in labeler for tech preview ui label 2023-05-18 15:00:19 -04:00
Michael Abashian
2f15cc5170 Updates issue_labeler.yml to handle tech preview ui auto-labeling 2023-05-18 14:46:36 -04:00
Michael Abashian
f15d40286c Adds a component label for the tech preview ui in bug_report.yml 2023-05-18 14:45:27 -04:00
Alan Rominger
f58c44590d Remove unused settings and associated code (#13898) 2023-05-18 10:05:59 -04:00
Alan Rominger
ef99770383 Add subsystem metrics for the dispatcher (#13989)
This adds a handful of metrics to /api/v2/metrics/ recorded from the dispatcher main process

Adds logic in the dispatcher periodic tasks to calculate these for the last collection interval
Reports worker count, task count, scale up events, and availability

Add data to demo grafana dashboard
2023-05-17 14:29:31 -04:00
John Westcott IV
84f67c7f82 Merge pull request #13961 from ansible/feature_django_upgrade_psycopg2
Upgrade to Django 4.2 LTS
2023-05-17 11:45:53 -04:00
Alan Rominger
433c28caa8 Materialize label page after getting 204 code (#14010) 2023-05-16 16:12:18 -04:00
Rick Elrod
fa05f55512 [collection] Fix sanity tests on ansible-core 2.15 (#14007)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-15 14:39:14 -05:00
Alan Rominger
0d5c0bcb91 Skip constructed_inventory in a more correct loop (#14004) 2023-05-15 13:48:59 -04:00
Rick Elrod
f3fa75d832 [wsrelay] Handle heartbeet shutdown and redis drop (#13991)
This fixes two different exceptions in wsrelay.

* One resulted from heartbeet gaining the ability in #13858 to gracefully
  shut down. When we saw the message come through, we didn't fully
  clean up the connection to the web node.

* The second resulted when Redis disappeared. We still want to exit in
  that case, but it's better to log a message and exit gracefully
  instead of crashing out.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-15 10:46:23 -05:00
John Westcott IV
285b7b0e5f Fixing deprecated use of QuerySet.iterator() after prefetch_related() without specifying chunk_size 2023-05-11 11:45:47 -04:00
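For context on the deprecation this commit addresses: Django 4.1 requires an explicit chunk_size when iterator() follows prefetch_related(). A hedged illustration using Django's own auth User model, not AWX's actual queryset:

```python
from django.contrib.auth.models import User


def iter_users():
    # chunk_size is required when iterator() follows prefetch_related().
    queryset = User.objects.prefetch_related("groups")
    for user in queryset.iterator(chunk_size=1000):
        yield user
```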
John Westcott IV
08e8147374 Removing deprecated django.utils.timezone.utc alias in favor of datetime.timezone.utc 2023-05-11 11:45:47 -04:00
John Westcott IV
09bd398a9e Replacing deprecated index_together with new indexes 2023-05-11 11:45:47 -04:00
John Westcott IV
8d6f50fae8 Upgrading django to 4.2 LTS 2023-05-11 11:45:15 -04:00
John Westcott IV
ecfbcb641e Adding upgrade to django-oauth-toolkit pre-migration 2023-05-11 11:43:33 -04:00
Shane McDonald
e434b1e0f3 Merge pull request #13987 from fosterseth/fix_ui_csp
Fix content security policy
2023-05-11 11:03:09 -04:00
Seth Foster
66c3acf777 Fix content security policy 2023-05-11 10:42:23 -04:00
John Westcott IV
ed1983bd8c Merge pull request #13977 from john-westcott-iv/awxkit_import_fix
Skip constructed_inventory endpoint in awxkit import
2023-05-11 09:04:32 -04:00
John Westcott IV
5c4277958c Merge pull request #13976 from john-westcott-iv/collection_job_wait_remove_depreciated_field_check
Change the job_wait integration test
2023-05-11 08:29:50 -04:00
John Westcott IV
7e4da7efa2 Updated pycryptography (#13964)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-05-11 09:25:56 -03:00
Christian Adams
7b1cb281c2 Merge pull request #13980 from rooftopcellist/extract-ui-next-strings
Update make target for extracting strings to do so for ui_next too
2023-05-10 23:18:44 -04:00
Christian M. Adams
dee39f3f1c Update make target for extracting strings to do so for ui_next too 2023-05-10 19:20:21 -04:00
John Westcott IV
ba7f97f84b Skip constructed_inventory endpoint in awxkit import 2023-05-10 14:24:27 -04:00
Alan Rominger
85e7189ee3 Add error handling to scm_version.py script (#13521)
raise Exception in the case that return code is non-zero

this approach has shown itself to be the most consistently reliable across multiple ecosystems
2023-05-10 14:20:56 -04:00
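The error handling described above boils down to raising when the SCM command's return code is non-zero; an illustrative sketch (the command and message wording are placeholders, not the script's exact contents):

```python
import subprocess


def get_scm_version():
    result = subprocess.run(
        ["git", "describe", "--tags", "--always"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Fail loudly instead of continuing with empty or bogus output.
        raise Exception(f"git describe failed ({result.returncode}): {result.stderr.strip()}")
    return result.stdout.strip()
```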
Alan Rominger
06430741ab Fix 400 error from job labels sublist (#13972)
This was caused by an incorrect parent_key ref from label to job;
  this also applies to workflow_job labels

This fixes a regression introduced by a recent merge (#13957)
2023-05-10 11:37:59 -04:00
John Westcott IV
cf091d7836 Change job_wait collection test to always try and delete created objects 2023-05-10 11:13:20 -04:00
John Westcott IV
a66acd87e6 Removes test of deprecated fields that have been removed from job_wait collection 2023-05-10 11:10:07 -04:00
Shane McDonald
595b4e3876 Merge pull request #13956 from shanemcd/get-your-strings-together
Clean up string formatting issues from black migration
2023-05-10 10:14:09 -04:00
Rick Elrod
74c46568c1 [wsrelay] switch from psycopg 3 to asyncpg (#13965)
Due to dependency issues specifically around upgrading to Django 4.2, we
cannot feasibly depend on both psycopg2 and psycopg3. The only
place currently using psycopg3 was wsrelay.

Change wsrelay to use the asyncpg library and psycopg2 instead.

Tested locally on kind with a dev build of awx.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-10 09:10:35 -05:00
Shane McDonald
05e9b29460 Merge pull request #13963 from Akasurde/doc_fix
Minor typo fix in docs
2023-05-10 08:33:01 -04:00
Shane McDonald
f1196fc019 Clean up string formatting issues from black migration 2023-05-10 08:19:23 -04:00
John Westcott IV
7f020052db Make state exists universal in collection (#13890)
Make state: exists available for all API modules

Make state:exists return the ID just like it would if it created the resource
2023-05-10 09:05:29 -03:00
Rick Elrod
53260213ba Issue template: Remind people to use security@ (#13971)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-05-09 11:00:02 -05:00
Abhijeet Kasurde
7d1ee37689 Minor typo fix in docs
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2023-05-08 07:47:07 -07:00
Seth Foster
45c13c25a4 Set receptor log level to info (#13958) 2023-05-05 15:01:21 -04:00
Alan Rominger
ba0e9831d2 Fix bug with parent_key filtering (#13957)
This was making host sub-list views non-functional,
  specifically for constructed and smart inventory;
  those views would always return 0 results before this fix
2023-05-05 14:10:55 -04:00
Shane McDonald
92dce85468 Merge pull request #13955 from shanemcd/dark-processed
Add missing comma in host_status_counts list
2023-05-05 10:55:47 -04:00
Shane McDonald
77139e4138 Add missing comma in host_status_counts list 2023-05-05 08:02:38 -04:00
Sarah Akus
b28e14c630 Merge pull request #13941 from vidyanambiar/freq-details
Fix for incorrect value for 'Run on' field in frequency details
2023-05-02 13:19:06 -04:00
Alan Rominger
bf5594e338 Merge pull request #13930 from sean-m-sullivan/collection_role_update
In collection, allow roles to be added to multiple teams and users
2023-05-02 12:54:22 -04:00
Alan Rominger
f012a69c93 Allow running AWX checks on forks (#13938) 2023-05-02 11:47:29 -04:00
sean-m-sullivan
0fb334e372 collection, allow roles to be added to multiple teams and users 2023-05-02 07:34:38 -04:00
Vidya Nambiar
b7c5cbac3f Fix for 'Run on' field in frequency details 2023-05-01 17:03:51 -04:00
Sarah Akus
eb7407593f Merge pull request #13915 from marshmalien/10877-dup-freq-types-schedule
Show schedule details warning when RRule is unsupported
2023-04-28 14:21:23 -04:00
Sarah Akus
287596234c Merge pull request #13874 from marshmalien/8898-fix-update-vault-credentials
Fix vault credential update error when vault_id is missing
2023-04-28 13:50:46 -04:00
Sarah Akus
ee7b3470da Merge pull request #13873 from marshmalien/10799-bug-prompt-launch-credential-type-dropdown-complete
Fix screen crash when changing credential type in launch prompt dropdown
2023-04-28 13:25:40 -04:00
Jessica Steurer
0faa1c8a24 Merge branch 'devel' into 8898-fix-update-vault-credentials 2023-04-28 10:37:15 -03:00
Alan Rominger
77175d2862 Consolidate get_queryset methods (#13906)
In a prior merge, we added the ability to set filter_read_permission = False on a view so that it does not filter the sublist the view is showing.

This logic already existed in a highly duplicated form among a number of views, so this deletes those methods in favor of the flag.
2023-04-28 09:10:18 -04:00
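A hedged sketch of how a sublist view might opt out of that read filtering via the flag named above; the imports, view name, and field choices are assumptions about the AWX codebase, not the merged code:

```python
from awx.api.generics import SubListAPIView      # assumed location
from awx.main.models import Job, Label           # assumed models


class JobLabelList(SubListAPIView):
    model = Label
    parent_model = Job
    relationship = "labels"
    # Replaces the duplicated get_queryset() overrides that skipped
    # per-object read filtering for this sublist.
    filter_read_permission = False
```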
Klaas Demter
22464a5838 Enhance secret retrieval documentation (#13914) 2023-04-26 19:32:40 +00:00
Sarah Akus
3919ea6270 Merge pull request #13905 from vidyanambiar/topology-rbac
Make Topology view and Instances visible only to system admin/auditor
2023-04-26 15:13:32 -04:00
Marliana Lara
9d9f650051 Show schedule details warning when RRule is unsupported 2023-04-26 14:49:43 -04:00
jessicamack
66a3cb6b09 Merge pull request #13858 from jessicamack/13322-catch-sigterm
Catch SIGTERM or SIGINT and send offline message
2023-04-26 12:24:34 -04:00
jessicamack
d282393035 change exit code
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
jessicamack
6ea3b20912 reverse previous commit to break into separate PR
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
jessicamack
3025ef0dfa move with block inside of while to free up persistent db connection
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
jessicamack
397d58c459 removed TODO. moved signal catches to handle()
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
jessicamack
d739a4a90a updated black and ran again to fix lint formatting
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
jessicamack
3fe64ad101 fix signal handler. black reformats
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
jessicamack
919d1e5d40 catch SIGTERM or SIGINT and send offline message
Signed-off-by: jessicamack <jmack@redhat.com>
2023-04-26 11:37:59 -04:00
John Westcott IV
7fda4b0675 Merge pull request #13903 from john-westcott-iv/collection_intergration_tests
Enhance collection integration tests
2023-04-26 09:08:00 -04:00
Gabriel Muniz
d8af19d169 Fix organization not showing all galaxy credentials for org admin (#13676)
* Fix organization not showing all galaxy credentials for org admin

* Add basic test to ensure counts

* refactored approach to allow removal of redundant code

* Allow configurable prefetch_related

* implicitly get related fields

* Removed extra queryset code
2023-04-25 15:33:42 -04:00
Vidya Nambiar
1821e540f7 Merge branch 'devel' into topology-rbac 2023-04-25 15:32:17 -04:00
Vidya Nambiar
77be6c7495 tests 2023-04-25 14:18:05 -04:00
John Westcott IV
baed869d93 Remove project_manual integration test
This test can no longer be performed without manual intervention because of how jobs are now run in EEs
2023-04-25 13:49:50 -04:00
John Westcott IV
b87ff45c07 Enhance collection test
ad_hoc_command_cancel can no longer time out on a cancel (it happens sub-second), so remove the unneeded block

Modified all tests to respect the test_id parameter so that all tests can be run together under a single ID

Fix a check in group since group2 is removed from being a subgroup of group1

The UI now allows propagating subgroups to the inventory, which we may want to support within the collection

Only run the instance integration test if we are running on k8s, and assume we are not by default

Fix hard-coded names in manual_project
2023-04-25 13:48:37 -04:00
Alan Rominger
7acc0067f5 Remove Ansible config override to validate group names (#13837) 2023-04-25 13:37:13 -04:00
Alan Rominger
0a13762f11 Use separate module for pytest settings (#13895)
* Use separate module for test settings

* Further refine some pre-existing comments in settings

* Add CACHES to setting snapshot exceptions to accommodate changed load order
2023-04-25 13:31:46 -04:00
Vidya Nambiar
2c673c8f1f Make Topology view and Instances visible only to system admin/auditor 2023-04-25 12:44:27 -04:00
John Westcott IV
8c187c74fc Adding "password": "$encrypted$" to user serializer (#13704)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-04-25 10:18:01 -03:00
Jesse Wattenbarger
2ce9440bab Merge pull request #13896 from jjwatt/jjwatt-pyver
Fallback on PYTHON path in Makefile
2023-04-24 10:10:30 -04:00
Jesse Wattenbarger
765487390f Fallback on PYTHON path in Makefile
- Change default PYTHON in Makefile to be ranked choice
- Fix `PYTHON_VERSION` target that expects just a word
- Use native GNU Make `$(subst ,,)` instead of `sed`
- Add 'version-for-buildyml' target to simplify ci

If I understand correctly, this change should make
'$(PYTHON)' work how we want it to everywhere. Before
this change, on developers' machines that don't have
a 'python3.9' in their path, make would fail. With this
change, we will prefer python3.9 if it's available, but
we'll take python3 otherwise.
2023-04-21 09:50:05 -04:00
Alan Rominger
086722149c Avoid recursive include of DEFAULT_SETTINGS, add sanity test (#13236)
* Avoid recursive include of DEFAULT_SETTINGS, add sanity test to avoid similar surprises

* Implement review comments for more clear code order and readability

* Clarify comment about order of app name, which is last in order so that it can modify user settings
2023-04-20 15:15:34 -04:00
Sarah Akus
c10ada6f44 Merge pull request #13876 from marshmalien/9668-adhoc-credentials-search
Fix credentials search in adhoc prompt modal
2023-04-20 13:41:36 -04:00
Sarah Akus
b350cd053d Merge pull request #13886 from marshmalien/fix-wf-approval-job-details
Fix incorrect workflow approval job details
2023-04-20 13:31:32 -04:00
Alan Rominger
d0acb1c53f Delete cp of local_settings.py file in test running, because path no longer exists (#13894)
* Change reference to moved local_settings.py file

* Do not apply local_settings to test runner
2023-04-20 13:19:00 -04:00
Hao Liu
f61b73010a Merge pull request #13889 from TheRealHaoLiu/egg-liminate
Remove unnecessary egg-link linking
2023-04-19 17:12:28 -04:00
Hao Liu
adb89cd48f Remove unnecessary egg-link linking
we link awx.egg-link from `tools/docker-compose/awx.egg-link` to `/tmp/awx.egg-link`, then we move `/tmp/awx.egg-link` to `/var/lib/awx/venv/awx/lib/python3.9/site-packages/awx.egg-link`

bonus... now we don't have to set PYTHON=python3.9
2023-04-19 16:36:51 -04:00
Hao Liu
3e509b3d55 Merge pull request #13883 from ZitaNemeckova/remove_inventories_from_host_metrics
Remove Inventories column for now
2023-04-19 15:41:32 -04:00
Hao Liu
f0badea9d3 Merge pull request #13888 from TheRealHaoLiu/correct-make-call-make
Make target should not call make directly
2023-04-19 15:38:58 -04:00
Hao Liu
6a1ec0dc89 Merge pull request #13887 from TheRealHaoLiu/no-make-run-stuff-in-docker-compose
Stop using make to start awx processes part 1
2023-04-19 15:35:32 -04:00
Hao Liu
329fb88bbb Make target should not call make directly
https://www.gnu.org/software/make/manual/html_node/MAKE-Variable.html

make target should always call make with $(MAKE)
2023-04-19 15:01:16 -04:00
Hao Liu
177f8cb7b2 Stop using make to start processes
part 1...

we don't need to run awx processes through make
because awx-manage uses awx-python, which already activates the correct venv
2023-04-19 14:51:38 -04:00
Marliana Lara
b43107a5e9 Fix credentials search in adhoc prompt modal 2023-04-19 13:59:08 -04:00
Marliana Lara
4857685e1c Fix vault credential update server error 2023-04-19 13:58:39 -04:00
Marliana Lara
8ba1a2bcf7 Reset search params when prompt launch credential type dropdown changes
* Fix credential validation bugs
2023-04-19 13:58:11 -04:00
Marliana Lara
e7c80fe1e8 Fix incorrect workflow approval job details 2023-04-19 13:57:05 -04:00
Hao Liu
33f1c35292 Merge pull request #13658 from TheRealHaoLiu/different-dockerfile
Use different dockerfile for docker-compose-build
2023-04-19 12:12:54 -04:00
Hao Liu
ba899324f2 Merge pull request #13856 from TheRealHaoLiu/kube-dev-autoreload
Auto reload services in kube dev env
2023-04-19 12:08:52 -04:00
Hao Liu
9c236eb8dd Merge pull request #13882 from TheRealHaoLiu/link-launch-n-supervisord
Link launch script and supervisor conf in kube dev
2023-04-19 12:03:22 -04:00
Zita Nemeckova
36559a4539 Remove Inventories column for now. Revert this commit once the backend is ready. 2023-04-19 15:55:02 +02:00
Hao Liu
7a4b3ed139 Merge pull request #13881 from TheRealHaoLiu/fix-copy
Fix copy API
2023-04-19 09:39:39 -04:00
Gabriel Muniz
cd5cc64d6a Fix 500 on missing inventory for provisioning callbacks (#13862)
* Fix 500 on missing inventory for provisioning callbacks

* Added test to cover bug fix

* Reworded msg to clarify what is missing to start the callback
2023-04-19 09:27:41 -04:00
Hao Liu
71a11ea3ad Link launch script and supervisor conf in kube dev
Linking the launch script and supervisor conf file in the kube development environment so we no longer have to rebuild kube devel images for supervisor conf file and launch script changes
2023-04-18 23:22:53 -04:00
Hao Liu
cfbbc4cb92 Auto reload services in kube dev env 2023-04-18 23:15:47 -04:00
Hao Liu
592920ee51 Use different dockerfile for docker-compose-build
- use different dockerfile for awx_devel and awx image
- make all Dockerfile* targets PHONY (because it's cheap to run)
- fix HEADLESS not working for awx-kube-build
2023-04-18 21:45:31 -04:00
Hao Liu
b75b84e282 Merge pull request #13725 from l3acon/collection-existential-state-for-credential-module
[collection] Add "exists" state for credential module
2023-04-18 20:51:14 -04:00
Sarah Akus
f4b80c70e3 Merge pull request #13849 from marshmalien/10854-instances-403-error
Check user permissions before fetching system settings
2023-04-18 16:41:40 -04:00
Hao Liu
9870187af5 Fix copy API
In a web/task split deployment the web and task containers no longer share the same redis cache

In the original code we used the redis cache to pass the list of sub objects that need to be copied to the new object

In this PR we extracted the logic that computes the sub_object_list and moved it into the deep_copy_model_obj task
2023-04-18 16:03:04 -04:00
Michael Abashian
bbb436ddbb Merge pull request #13872 from mabashian/remove-codemirror
Removes unused codemirror dependency
2023-04-18 15:27:12 -04:00
Michael Abashian
abf915fafe Removes more unnecessary licenses 2023-04-18 15:06:19 -04:00
Michael Abashian
481814991e Remove codemirror licenses 2023-04-18 15:06:18 -04:00
Michael Abashian
e94ee8f8d7 Removes unused codemirror dependency 2023-04-18 15:06:18 -04:00
John Westcott IV
e660f62a59 Merge pull request #13875 from john-westcott-iv/fix_assumed_databases
Fixing issue where we assumed DATABASES would be defined
2023-04-18 14:21:17 -04:00
Keith Grant
a2a04002b6 Merge pull request #13869 from keithjgrant/persistent-filter-race-condition
Rework PersistentFilter to avoid double API call
2023-04-18 11:13:19 -07:00
John Westcott IV
93117c8264 Fixing issue where we assumed DATABASES would be defined 2023-04-18 13:57:17 -04:00
Keith J. Grant
b8118ac86a remove outdated tests 2023-04-18 10:04:28 -07:00
Keith J. Grant
c08f1ddcaa rework PersistentFilter to avoid double API call 2023-04-18 10:04:28 -07:00
Matthew Fernandez
d57f549a4c Merge branch 'devel' into collection-existential-state-for-credential-module 2023-04-18 09:51:54 -06:00
matt
93e6f974f6 remove redundant loop 2023-04-18 09:51:20 -06:00
John Westcott IV
32f7dfece1 Changing check for all in awx.awx.export (#13854) 2023-04-18 10:29:25 -03:00
Michael Abashian
68b32b9b4f Merge branch 'devel' into 10854-instances-403-error 2023-04-17 10:14:44 -04:00
Alan Rominger
886ba1ea7f Merge pull request #13860 from AlanCoding/move_test
Move integration tests to be consistent with the rest
2023-04-14 10:36:44 -04:00
Alex Corey
b128f05a37 Merge pull request #11076 from tongtie/fix-choose-project-scmType-manual-international
fix: Internationalization causes the project to be unable to choose manual option
2023-04-14 09:57:08 -04:00
Alan Rominger
36c9c9cdc4 Move integration tests to be consistent with the rest 2023-04-14 09:51:53 -04:00
Alan Rominger
342e9197b8 Customize application_name for different connections in dispatcher service (#13074)
* Introduce new method in settings, import in-line w NOQA mark

* Further refine the app_name to use shorter service names like dispatcher

* Clean up listener logic, change some names
2023-04-13 22:36:36 -04:00
John Westcott IV
2205664fb4 Merge pull request #13857 from john-westcott-iv/add_tacacs_plus
Adding tacacs+ container for testing
2023-04-13 16:15:32 -04:00
John Westcott IV
7cdf471894 Fix sat instance var (#13851)
* add the fallback satellite_instance_var_id

* Removing unnecessary whitespace

---------

Co-authored-by: Nikhil Jain <jainnikhil30@gmail.com>
2023-04-13 17:14:06 -03:00
John Westcott IV
8719648ff5 Adding tacacs+ container for testing 2023-04-13 15:02:08 -04:00
Dien Nguyen
c1455ee125 bugfix: add scm_branch to optional_args for workflow_launch (#13254)
* add scm_branch to optional_args

* add in limits

* Update workflow_launch.py

remove json from import to pass linting.

---------

Co-authored-by: dien nguyen <nguyen.d@gmail.comn>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-04-13 15:36:38 -03:00
Joe Garcia
11d5e5c7d4 Fixes #13402 allow user defined key retrieval from CYBR (#13411)
* Fixed #13402 allow user defined key retrieval from CYBR

* Add default value to object_property

* Raise ValueError if object_property not in response

* Raise KeyError instead of ValueError
2023-04-13 13:11:37 -04:00
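The lookup behavior described above (an optional object_property with a default, and KeyError when the requested property is missing) can be sketched roughly as follows; the helper name, the "Content" fallback key, and the response shape are assumptions for illustration, not the plugin's actual code:

    def extract_secret_field(response, object_property=""):
        # If no property is requested, fall back to the bare secret content (assumed key).
        if not object_property:
            return response["Content"]
        if object_property not in response:
            # Raise KeyError (rather than ValueError) when the requested property is absent.
            raise KeyError(
                f"Property {object_property!r} not found in object; "
                f"available keys: {list(response)}"
            )
        return response[object_property]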
John Westcott IV
fba4e06c50 Adding basic validation for local passwords (#13789)
* Adding basic validation for local passwords

* Adding edit screen

* Fixing tests
2023-04-13 10:02:52 -03:00
Hao Liu
12a4c301b8 Merge pull request #13721 from sscheib-rh/feat-add_secret_field_dsv_lookup
Add missing filtering mechanism for the Thycotic Devops Vault credential lookup
2023-04-13 08:58:59 -04:00
Hao Liu
8a1cdf859e Merge pull request #12627 from vician/tss-domain
Added domain entry and authorizer for TSS
2023-04-12 16:33:46 -04:00
Steffen Scheib
2f68317e5f Fixing api-lint error 2023-04-12 16:07:00 -04:00
Steffen Scheib
0f4bac7aed Add missing filtering mechanism for the Thycotic Devops Vault credential lookup 2023-04-12 16:07:00 -04:00
John Westcott IV
e42461d96f Merge pull request #13807 from sean-m-sullivan/credential_doc
update credential list examples in awx collection
2023-04-12 15:40:06 -04:00
sean-m-sullivan
9b716235a2 update credential list examples in awx collection 2023-04-12 15:19:11 -04:00
John Westcott IV
eb704dbaad Merge pull request #13838 from john-westcott-iv/oweel_additional_tests
Added more tests for different modules
2023-04-12 13:14:37 -04:00
Marliana Lara
105609ec20 Check user permissions before fetching system settings 2023-04-12 11:19:37 -04:00
John Westcott IV
9b390a624f Merge pull request #13831 from slemrmartin/analytics-api-permissions
Analytics API: Permissions for System Auditor
2023-04-12 10:37:26 -04:00
Martin Slemr
0046ce5e69 Analytics API: Permissions for System Auditor 2023-04-12 15:40:12 +02:00
Hao Liu
b80d0ae85b Merge pull request #13840 from AlanCoding/one_less_connection
Get rid of 1 perpetually unused connection in our app
2023-04-12 09:30:51 -04:00
Hao Liu
1c0142f75c Merge pull request #13841 from AlanCoding/tower_processes
Add run-clear-cache to tower-processes for auto-reload
2023-04-12 08:54:34 -04:00
Alan Rominger
1ea6d15ee3 Add run-clear-cache to tower-processes for auto-reload 2023-04-11 17:05:41 -04:00
Alan Rominger
3cd5d59d87 Get rid of 1 perpetually unused connection in our app 2023-04-11 17:04:59 -04:00
Alexander Komarov
d32a5905e8 Remove unused imports 2023-04-11 16:23:03 -04:00
Alexander Komarov
e53a5da91e Add more tests for different modules 2023-04-11 16:21:50 -04:00
Hao Liu
1a56272eaf Merge pull request #13767 from Ladas/analytics_export_subscription_id
Analytics export other subs attrs
2023-04-11 15:55:26 -04:00
John Westcott IV
3975028bd4 Merge pull request #12952 from sashashura/patch-1
ci: workflows security hardening
2023-04-11 15:51:07 -04:00
Seth Foster
1c51ef8a69 Store serialized metrics locally (#13833) 2023-04-11 15:06:48 -04:00
Michael Abashian
6b0fe8d137 Merge pull request #13766 from tanganellilore/fix_lang
Fix locale UI error
2023-04-11 14:51:55 -04:00
matt
4a3d437b32 spaces for pep8 2023-04-11 11:35:36 -06:00
Michael Abashian
23f3ab6a66 Merge branch 'devel' into fix_lang 2023-04-11 11:41:12 -04:00
Seth Foster
ffa3cd1fff Add troubleshooting to execution node docs (#13826) 2023-04-11 10:58:11 -04:00
John Westcott IV
236de7e209 Merge pull request #13827 from john-westcott-iv/remove_future_pin
Unpinning python library for future
2023-04-11 08:16:53 -04:00
Ladislav Smola
4e5cce8d15 Analytics export other subs attrs
We'll also export subscription_id since pool_id is not
enough in certain cases.

Then also export usage and account number
2023-04-10 21:47:32 -04:00
Matthew Fernandez
184719e9f2 Merge branch 'devel' into collection-existential-state-for-credential-module 2023-04-10 15:31:11 -06:00
John Westcott IV
6c9e2502a5 Unpinning future 2023-04-10 12:25:15 -04:00
Michael Abashian
0b1b866128 Fixes bug where attempting to edit a schedule with stringified extra_data threw error (#13795) 2023-04-10 09:33:25 -03:00
Hao Liu
80ebe13841 Merge pull request #13825 from TheRealHaoLiu/fix-dependency-conflict
Fix importlib-metadata dependency conflict
2023-04-07 13:17:49 -04:00
Hao Liu
328880609b Fix importlib-metadata dependency conflict
rerun requirements/updator.sh to regenerate requirements.txt and fix the conflict introduced by https://github.com/ansible/ansible-runner/pull/1224
2023-04-07 11:48:34 -04:00
John Westcott IV
71c307ab8a Merge pull request #13808 from ansible/feature_on-premise-analytics
Proxy analytics requests through AWX API
2023-04-07 11:46:14 -04:00
John Westcott IV
3ce68ced1e Merge pull request #13809 from ansible/feature_usage-collection-pt2
Enhance usage metrics collection
2023-04-07 11:44:59 -04:00
Martin Slemr
20817789bd HostMetric task param check 2023-04-07 08:56:03 -04:00
Salma Kochay
2b63b55b34 UI test fixes for hiding subscription details 2023-04-07 08:56:03 -04:00
Salma Kochay
64923e12fc show/hide host metric subscription details 2023-04-07 08:56:03 -04:00
Martin Slemr
6d4f92e1e8 HostMetric Cleanup task 2023-04-07 08:56:03 -04:00
Martin Slemr
fff6fa7d7a Additional Licensing values 2023-04-07 08:56:03 -04:00
Martin Slemr
44db4587be Analytics upload: HostMetrics hybrid sync 2023-04-07 08:56:03 -04:00
Martin Slemr
dc0958150a Adding analytics to root API page 2023-04-07 08:54:56 -04:00
John Westcott IV
9f27436c75 Adding basic unit/functional tests 2023-04-07 08:54:56 -04:00
John Westcott IV
e60869e653 Consolidating similar methods 2023-04-07 08:54:56 -04:00
John Westcott IV
51e19d9d0b Adding all endpoints to /api/v2/analytics/ 2023-04-07 08:54:56 -04:00
Martin Slemr
0fea29ad4d Analytics API: OPTIONS proxy and response links update 2023-04-07 08:54:56 -04:00
Martin Slemr
0a40b758c3 Analytics API: Paths, headers and Error handling 2023-04-07 08:54:56 -04:00
Martin Slemr
1191458d80 Analytics API: Basics 2023-04-07 08:54:56 -04:00
Hao Liu
c0491a7b10 Merge pull request #13816 from TheRealHaoLiu/workaround-failed-make-requirements_awx
Temporary workaround for make requirements_awx failure and fix license test
2023-04-07 00:07:13 -04:00
Hao Liu
14e613bc92 Fix failed license check
psycopg2 also starts with psycopg

Co-Authored-By: Gabriel Muniz <gmuniz@redhat.com>
2023-04-06 23:35:24 -04:00
Hao Liu
98e37383c2 Temporary workaround for make requirements_awx failure 2023-04-06 22:14:51 -04:00
John Westcott IV
9e336d55e4 Merge pull request #13805 from john-westcott-iv/fix_closing_colors
Do not add closing color tags if --no-color was specified
2023-04-06 08:41:49 -04:00
John Westcott IV
0e68caf0f7 Do not add closing color tags if --no-color was specified 2023-04-05 12:03:15 -04:00
Hao Liu
c9c150b5a6 Merge pull request #13799 from TheRealHaoLiu/fix-supervisor-conf-file
Fix supervisor conf file inconsistency
2023-04-05 11:07:05 -04:00
Hao Liu
f97605430b Merge pull request #13804 from TheRealHaoLiu/heartbeet-logging
Add log handler and file for heartbeet
2023-04-05 11:06:32 -04:00
Hao Liu
454f31f6a4 Add log handler and file for heartbeet 2023-04-05 10:38:35 -04:00
Hao Liu
f62bf6a4c3 Fix supervisor conf file inconsistency 2023-04-05 10:32:02 -04:00
John Westcott IV
a0dafbfd8c Merge pull request #13803 from john-westcott-iv/try_and_fix_checks
Adding import of centos repo key for dnf
2023-04-05 10:04:55 -04:00
John Westcott IV
b5c052b2e6 Adding import of centos repo key for dnf 2023-04-05 09:38:02 -04:00
Rick Elrod
1e690fcd7f Only use constr. inv URL when req comes from it (#13797)
When the API request is for /inventories/id use that as the URL in the
API response. When the request is for /constructed_inventories/id use
that.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-04-04 15:26:52 +00:00
Lorenzo Tanganelli
479d0c2b12 add instance_groups on cli and awx.awx.role (#13784) 2023-04-04 10:09:48 -04:00
Lorenzo Tanganelli
ede185504c fix js error in case of locale not exists 2023-04-03 21:03:14 +02:00
Alan Rominger
2db29e5ce2 Merge pull request #13786 from AlanCoding/refresh_refresh_refresh
Fix docker-clean target, accounting for slashes
2023-03-30 14:20:04 -04:00
Alan Rominger
7bb0d32be1 Fix docker-clean file, accounting for slashes 2023-03-30 13:46:15 -04:00
Hao Liu
acb22f0131 Merge pull request #13423 from ansible/feature_web-task-split
Allow web and task container to be deployed in separate deployment on Kubernetes
2023-03-30 12:52:22 -04:00
Rick Elrod
4f99a170be Nix websocket docs for now
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-30 08:48:50 -04:00
Hao Liu
17f5c4b8e6 Modify dev make target name to clarify intention
these make targets are for starting the different daemons within the kube/docker development environment; updating the names to better reflect their intention

also added comments above the make target to describe what they do

note: these comments show up when running `make help`
2023-03-30 08:47:18 -04:00
Oleksii Baranov
598f9e2a55 Add host_metrics page to the awxkit 2023-03-30 08:46:17 +02:00
Hao Liu
d33573b29c Merge pull request #13603 from jjwatt/jjwatt-fix-clean-languages 2023-03-29 22:49:13 -04:00
Hao Liu
bc55bcf3a2 Rename SUPERVISOR_CONFIG_PATH
previously this was used so that tasks running in the task container could reach into the web container to restart rsyslog

now that the web container and task container are split there's no longer a way to do that, so I renamed this env var to reflect what it now refers to

which is the supervisor conf file of the currently running container
2023-03-29 22:09:19 -04:00
Hao Liu
6c0c1f6853 Rename launch script for launching awx web
launch_awx.sh, which this PR renames, is now only used for launching the awx web container; renaming it reflects its purpose

also remove the no-longer-needed creation of the rsyslog conf, as rsyslog is no longer in the web container

Update Dockerfile.j2
2023-03-29 22:09:19 -04:00
Hao Liu
0cc02d311f Rename supervisor.conf.j2 to be descriptive
The supervisor.conf.j2 file is the template for the web container's supervisor.conf file; renaming it to supervisor_web.conf makes it clearer that it is used for the web container
2023-03-29 22:09:19 -04:00
jessicamack
13b9a6c5e3 Remove unused import
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
Lila
ac2f2039f5 Fix cache-clear for kube dev env
Missing conditional for when running in kube development environment
2023-03-29 22:09:19 -04:00
Hao Liu
c8c8ed1775 Raise ValueError when no ready and enabled task instance 2023-03-29 22:09:19 -04:00
thedoubl3j
6267469709 remove rsyslog_configurer from dispatcher as it is already being handled, add rsyslog_configurer to tower_processes 2023-03-29 22:09:19 -04:00
Lila
a1e39f71fc Removed errant comments. 2023-03-29 22:09:19 -04:00
Hao Liu
4b0acaf7a1 Add back missing rsyslog.conf file 2023-03-29 22:09:19 -04:00
Hao Liu
968267287b Catch SynchronousOnlyOperation and get setting async
If trying to get a setting from an async context (in daphne), catch the SynchronousOnlyOperation error and retry in a thread
2023-03-29 22:09:19 -04:00
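A minimal sketch of the pattern described above, assuming a hypothetical get_setting_safe wrapper; the real implementation may differ:

    from concurrent.futures import ThreadPoolExecutor

    from django.conf import settings
    from django.core.exceptions import SynchronousOnlyOperation

    def get_setting_safe(name, default=None):
        # Reading a setting may hit the database; from an async context (e.g. under
        # daphne) Django raises SynchronousOnlyOperation, so retry the lookup on a
        # worker thread where blocking access is allowed.
        try:
            return getattr(settings, name, default)
        except SynchronousOnlyOperation:
            with ThreadPoolExecutor(max_workers=1) as pool:
                return pool.submit(getattr, settings, name, default).result()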
Hao Liu
25303ee625 Only select task instance that are ready and enabled
When selecting a queue for a task instance to run a task, only select task instances that are ready and enabled
2023-03-29 22:09:19 -04:00
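The selection rule above (together with the "Raise ValueError when no ready and enabled task instance" change a few commits earlier) can be sketched like this; the attribute names enabled, node_state, and hostname are assumptions for illustration:

    def choose_task_queue(instances):
        # Only consider instances that are both enabled and in a ready state.
        ready = [i for i in instances if i.enabled and i.node_state == "ready"]
        if not ready:
            raise ValueError("No ready and enabled task instance to run the task")
        return ready[0].hostname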
jessicamack
8c5e2237f4 import typing to fix lint issue
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
57d009199d removed unused imports. fix exception message
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
24cbf39a93 fix heartbeet ascii lint issue
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
95f1ef70a7 update licenses to include new requirement
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
680e2bcc0a remove out of date test code
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
Hao Liu
cd3f7666be add get_task_queuename
get_local_queuename will return the pod name of the instance

now that web and task are in different pods, when the web container queues a task it would be put into a queue without a task worker to execute the task
2023-03-29 22:09:19 -04:00
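A rough sketch of what a get_task_queuename helper might do under the constraint described above; the import path, node_type values, and field names are assumptions, not the actual AWX code:

    def get_task_queuename():
        from awx.main.models import Instance  # assumed import path

        # Pick a queue owned by an instance that actually runs task workers, rather
        # than this pod's own name, which in a split web/task deployment may have
        # no dispatcher attached.
        task_node = (
            Instance.objects.filter(node_type__in=["control", "hybrid"], enabled=True)
            .order_by("hostname")
            .first()
        )
        if task_node is None:
            raise RuntimeError("No task-capable instance available to queue work on")
        return task_node.hostname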
Hao Liu
049fb4eff5 fix job relaunch error
AttributeError: 'Settings' object has no attribute 'INSTALL_UUID'
2023-03-29 22:09:19 -04:00
Hao Liu
7cef4e6db7 clear settings cache after changing DISABLE_LOCAL_AUTH 2023-03-29 22:09:19 -04:00
jessicamack
da004da68a make reconfigure_rsyslog a task
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
jessicamack
b29f2f88d0 updated tests to be in line with clear_setting_cache changes
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
jessicamack
52a8a90c0e remove changes used for dev testing
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
Hao Liu
7cb890b603 minor fix-up due to merge conflict 2023-03-29 22:09:18 -04:00
Jessica Mack
78652bdd71 add functionality back to cache clear method
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
Jessica Mack
29d222be83 removed rsyslog queue, updated logger level
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
Lila Yasin
e7fa730f81 Removed some commented out code and adjusted a few loggers to make more sense contextually. (#13424) 2023-03-29 22:09:18 -04:00
Seth Foster
33f070081c Send subsystem metrics via wsrelay (#13333)
Works by adding a dedicated producer in wsrelay that looks for
local django channels messages with group "metrics". The producer
sends this to the consumer running in the web container.

The consumer running in the web container handles the message by
pushing it into the local redis instance.

The django view that handles a request at the /api/v2/metrics
endpoint will load this data from redis, format it, and return the
response.
2023-03-29 22:09:18 -04:00
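The producer side of the flow described above could look roughly like the sketch below, using Django Channels' group_send; the message "type" key and the function name are assumptions:

    from asgiref.sync import async_to_sync
    from channels.layers import get_channel_layer

    def publish_subsystem_metrics(serialized_payload):
        # Broadcast serialized metrics on the "metrics" channels group; whichever
        # consumer handles it on the web side can push the data into the web
        # container's local redis for the /api/v2/metrics view to read back.
        channel_layer = get_channel_layer()
        async_to_sync(channel_layer.group_send)(
            "metrics",
            {"type": "internal.message", "text": serialized_payload},
        )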
Rick Elrod
44463402a8 [wsrelay] attempt to standardize logging levels
This needs some work, but it's a start.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
93c2c56612 [wsrelay] Copy the message payload before we relay
We internally manipulate the message payload a bit (to know whether we
are originating it on the task side or the web system is originating
it). But when we get the message, we actually get a reference to the
dict containing the payload.

Other producers in wsrelay might still be acting on the message and
deciding whether or not to relay it. So we need to manipulate and send a
*copy* of the message, and leave the original alone.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
91bf49cdb3 Remove auto-reconnect logic from wsrelay
We no longer need to do this from wsrelay, as it will automatically try
to reconnect when it hears the next beacon from heartbeet.

This also cleans up the logic for what we do when we want to delete a
node we previously knew about.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
704759d29a add wsrelay to tower-processes
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
513f433f17 Add comment for new psycopg dep
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
5f41003fb1 Prevent looping issue when task/web share a Redis
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
2e0f25150c Start of heartbeet daemon
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
4f5bc992a0 fix merge from devel - wsbroadcast -> wsrelay
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
a9e7508e92 WIP: Make wsrelay listen for pg_notify heartbeat
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
1c2eb22956 Remove some debug code and modify logging a bit
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
a987249ca6 dedent a block that was clearly meant to be de-dented
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Shane McDonald
ab6d56c24e initial PoC for wsrelay
Checkpoint
2023-03-29 22:04:43 -04:00
Jessica Mack
c4ce5d0afa updated supervisor to include cache-clear
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
43f4872fec these methods don't need to be class methods
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
cb31973d59 switched to using the built in task processing
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
9f959ca3d4 removed unneeded launch file and Dockerfile change
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
454d6d28e7 mock additional pg_notify use in test
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
8b70fef743 removed unused import
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
026b8f05d7 added launch file, docker, and supervisor changes
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
d8e591cd69 added cache-clear service. update dispatcher queues
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Hao Liu
38cc193aea update launch_awx_rsyslog.sh permission to +x (#13399)
Signed-off-by: Hao Liu <haoli@redhat.com>
2023-03-29 22:04:43 -04:00
Lila Yasin
65b3e0226d Created new rsyslog launch file. (#13327)
* Created new rsyslog launch file.
* Rsyslog conf work.
* Refining how we're calling rsyslog conf.
* Removed rsyslog so it no longer launches in the web container.
* Added the new launch_awx_rsyslog.sh to the /usr/bin
2023-03-29 22:04:43 -04:00
jessicamack
b5e04a4cb3 AWX code changes for rsyslog decoupling (#13222)
* add management command and logging for new daemon
* switch tasks over to calling pg_notify
* add daemon to docker-compose and supervisor
* renamed handle_setting_changes and moved notify call
* removed initial rsyslog configure from dispatcher
* add logging and clear cache before reconfigure
* add notify to delete
* moved pg_notify to own function
* update tests impacted by rsyslog change
* changed over to new pg_notify method

Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Christian Adams
c89c2892c4 Merge pull request #13749 from fosterseth/mintls13false
Allow TLS 1.2 for Receptor connections
2023-03-29 19:20:09 -04:00
Alan Rominger
5080a5530c Merge pull request #13448 from ansible/feature_constructed-inventory
Allow for using Ansible's `constructed` inventory plugin to dynamically group hosts from AWX inventories
2023-03-29 09:27:21 -04:00
Rick Elrod
77743ef406 [collection] Example for constructed inventories (#13755)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Marliana Lara
f792fea048 Add more constructed inventory hint examples 2023-03-28 11:20:24 -05:00
Alan Rominger
16ad27099e [constructed-inventory] Save facts on model for original host (#13700)
* Save facts on model for original host

Redirect to original host for ansible facts

Use current inventory hosts for facts instance_id filter
Thanks to Gabe for identifying this bug

* Fix spelling of queryset

Co-authored-by: Rick Elrod <rick@elrod.me>

* Fix sign error with facts expiry - from review

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Alan Rominger
3f5a4cb6f1 [constructed-inventory] Backlink events to real hosts and summaries to both hosts (#13718)
* Backlink events to real hosts and summaries to both hosts

* Prevent error when original host is deleted during job run

* No duplicate entries, review suggestion from Rick

* Change word tense in help text, dict style adjustments

From code review

Co-authored-by: Rick Elrod <rick@elrod.me>

* Back out new variable for constructed host id

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Alan Rominger
b88d9f4731 Force overwrite all vars for constructed inventory (#13731) 2023-03-28 11:20:24 -05:00
Alan Rominger
62b79b1959 Point constructed inventory URL to special view (#13730) 2023-03-28 11:20:24 -05:00
Alan Rominger
be5a2bbe61 Fail inventory updates with unmatched limits (#13726) 2023-03-28 11:20:24 -05:00
Rick Elrod
84edbed5ec [constructed-inventory] Fix some validation for constructed inv sources (#13727)
- When updating, we need the original object so we can make sure we
  aren't changing things we shouldn't be.
- We want to allow source_vars and limit, but not much else.
- We want to block everything else (at least, if it doesn't match what
  is in the original object...to allow the collection to work properly).
- Add two functional tests.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Alan Rominger
aa631a1ba7 [constructed-inventory] Allow filtering based on facts (#13678)
* initial functional filter-on-facts functionality

* Move facts to its own module to make interface more coherent

* Update test
2023-03-28 11:20:24 -05:00
Alan Rominger
771b831da8 Fail constructed inventory if ANY source is unparsed 2023-03-28 11:20:24 -05:00
Alan Rominger
ce4c1c11b3 Remove towervars from constructed inventory hosts (#13686) 2023-03-28 11:20:24 -05:00
Marliana Lara
054a70bda4 Filter constructed inventory hosts from smart inventory host lookup 2023-03-28 11:20:24 -05:00
Rick Elrod
ab0463bf2a Ordered m2m for Inventory/Inventory relationship (#13602)
Including changes to our custom Ordered m2m field which previously broke
if the source and target model was the same.

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-03-28 11:20:24 -05:00
Marliana Lara
2bffddb5fb Add constructed inventory edit form 2023-03-28 11:20:24 -05:00
Marliana Lara
d576e65858 Add constructed inventory add form 2023-03-28 11:20:24 -05:00
Marliana Lara
e3d167dfd1 Hide constructed and smart inventories in Inventory Lookup 2023-03-28 11:20:24 -05:00
Alex Corey
ba9533f0e2 Adds constructed inventory groups and related groups. 2023-03-28 11:20:24 -05:00
Alex Corey
e7a739c3d7 Creates constructed inventory host lists by reusing, and renaming smart inventory host list components. 2023-03-28 11:20:24 -05:00
Marliana Lara
ab3a9a0364 Update inventory details after inventory source sync 2023-03-28 11:20:24 -05:00
Marliana Lara
7dd1bc04c4 Add constructed inventory detail's sync button 2023-03-28 11:20:24 -05:00
Gabe Muniz
8c4e943af0 refactored to use is_valid_relation instead of post 2023-03-28 11:20:24 -05:00
Gabe Muniz
7112da9cdc Various validations for const. inv. serialization
- prevent constructed inventory host, group, inventory_source creation
- disable deleting constructed inventory hosts
- remove the ability to add constructed inventory sources
- remove ability to add constructed inventories to constructed inventories
- block updates to constructed source type
- added tests for group/host/source creation
2023-03-28 11:20:24 -05:00
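The validations listed above share a common shape: reject direct membership changes when the parent inventory is constructed. A minimal sketch, assuming a DRF serializer context and an inventory object with a kind field:

    from rest_framework import serializers

    def reject_constructed_membership_change(inventory, action):
        # Hosts, groups, and inventory sources of a constructed inventory are managed
        # by the constructed source itself, so direct API changes are blocked.
        if getattr(inventory, "kind", "") == "constructed":
            raise serializers.ValidationError(
                f"Cannot {action} this resource on a constructed inventory."
            )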
Marliana Lara
7a74437651 Add constructed inventory CRUD and subtab routes
* Add constructed inventory API model
 * Add constructed inventory detail view
 * Add util to switch inventory url based on "kind"
2023-03-28 11:20:24 -05:00
Hao Liu
e22967d28d add constructed kind to inventory module
- add kind 'constructed' to inventory module
- add 'input_inventories' field to inventory module

Co-authored-by: Rick Elrod <rick@elrod.me>
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Gabe Muniz
df6bb5a8b8 Refactor original hosts, add related field
Also rename source_inventories to input_inventories
2023-03-28 11:20:24 -05:00
Gabe Muniz
aa06940df5 force kind to readonly field and set kind to constructed in create 2023-03-28 11:20:24 -05:00
Alan Rominger
3e5467b472 [constructed-inventory] Add constructed inventory docs and do minor field updates (#13487)
* Add constructed inventory docs and do minor field updates

Add verbosity field to the constructed views

automatically set update_on_launch for the auto-created constructed inventory source
2023-03-28 11:20:24 -05:00
Alan Rominger
c2fe06dd95 [constructed-inventory] Use control plane EE for constructed inventory and hack temporary image (#13474)
* Use control plane EE for constructed inventory and hack temporary image

* Update page registry to work with new endpoints
2023-03-28 11:20:24 -05:00
Gabe Muniz
510f54b904 adding limit to inventory_source collection module 2023-03-28 11:20:24 -05:00
Alan Rominger
57e005b775 Start on new constructed inventory API view
Make the GET function work at most basic level

Basic functionality of updating working

Add functional test for the GET and PATCH views

Add constructed inventory list view for direct creation

Add limit field to constructed inventory serializer
2023-03-28 11:20:24 -05:00
Gabe Muniz
aad260bb41 edit new migration for deprecation of host_filter 2023-03-28 11:20:24 -05:00
Gabe Muniz
e3d39a2728 push limit to inventory sources
move limit field from InventorySourceSerializer to InventorySourceOptionsSerializer (#13464)

InventorySourceOptionsSerializer is the parent for both InventorySourceSerializer and InventoryUpdateSerializer

The limit option needs to be exposed to both inventory_source and inventory_update

Co-Authored-By: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-03-28 11:17:17 -05:00
Alan Rominger
f59ced57bc Model and task changes for constructed inventory
Add in required setting about empty groups
2023-03-28 11:17:17 -05:00
Hao Liu
7f085e159f Merge pull request #13712 from ansible/feature_usage-collection
Allow soft deletion of HostMetrics and add usage collection utility
2023-03-28 12:16:02 -04:00
Seth Foster
db2253601d Allow TLS 1.2 for Receptor connections
- Required for FIPS environment where TLS 1.3 is
not supported
- TLS 1.3 can still be used if the nodes
both agree to use during handshake.
2023-03-27 11:07:30 -04:00
Klaas Demter
32a5186eea Fixes #6556 Expose SOCIAL_AUTH_USERNAME_IS_FULL_EMAIL (#13641)
Signed-off-by: Klaas Demter <Klaas-@users.noreply.github.com>
2023-03-27 11:30:40 -03:00
matt
b0c416334f add test coverage 2023-03-23 15:44:00 -06:00
Aparna Karve
c30c9cbdbe Remove --until option 2023-03-23 14:13:16 -04:00
Martin Slemr
8ec6e556a1 HostMetricSummaryMonthly API commented out 2023-03-23 14:13:16 -04:00
Hao Liu
382f98ceed Fixing migration files 2023-03-23 14:13:03 -04:00
Aparna Karve
fbd5d79428 Added internal batch processing for up to 10k rows
If --rows_per_file is > 10k, rows are fetched in batches of 10k
2023-03-23 14:06:56 -04:00
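The batching described above can be sketched as follows; the function name and the HostMetric field names are assumptions for illustration:

    import csv

    BATCH_SIZE = 10_000

    def export_host_metrics(queryset, path, rows_per_file=None):
        # Stream rows out in chunks of at most 10k so a large (or omitted)
        # --rows_per_file value never pulls everything into memory at once.
        limit = rows_per_file if rows_per_file is not None else queryset.count()
        written = 0
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["hostname", "first_automation", "last_automation"])
            while written < limit:
                chunk = min(BATCH_SIZE, limit - written)
                batch = list(queryset[written:written + chunk])
                if not batch:
                    break
                for row in batch:
                    writer.writerow([row.hostname, row.first_automation, row.last_automation])
                written += len(batch)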
Aparna Karve
878008a9c5 make rows_per_file optional parameter
Removed 2 SQL statements that gave info on the row count,
which warranted many other changes
2023-03-23 14:06:56 -04:00
Aparna Karve
132fe5e443 Remove pandas use csv. Also, remove anonymization 2023-03-23 14:06:56 -04:00
Aparna Karve
311cea5a4a CLI for host usage collection 2023-03-23 14:06:56 -04:00
Zita Nemeckova
88bb6e5a6a Fix test failure 2023-03-23 14:06:56 -04:00
Zita Nemeckova
c117ca66d5 Show HostMetrics only for specific subscription
SUBSCRIPTION_USAGE_MODEL: 'unique_managed_hosts'

Fixes https://issues.redhat.com/browse/AA-1613
2023-03-23 14:06:56 -04:00
Zita Nemeckova
c20e8eb712 Prettier 2023-03-23 14:06:56 -04:00
Zita Nemeckova
5be90fd36b Do not show deleted host metrics 2023-03-23 14:06:56 -04:00
Zita Nemeckova
32a56311e6 Fix linting issues 2023-03-23 14:06:56 -04:00
Zita Nemeckova
610f75fcb1 Update routeConfig test to be according to RBAC 2023-03-23 14:06:56 -04:00
Zita Nemeckova
179868dff2 Add possibility to select and delete HostMetrics 2023-03-23 14:06:56 -04:00
Zita Nemeckova
9f3c4f6240 RBAC: only superuser and auditor can see HostMetrics 2023-03-23 14:06:56 -04:00
Zita Nemeckova
d40fdd77ad Fix filter to take only hostname__icontains and disable advanced search 2023-03-23 14:06:56 -04:00
Zita Nemeckova
9135ff2f77 Add HostMetrics routes to the test 2023-03-23 14:06:56 -04:00
Zita Nemeckova
8d46d32944 UI 2023-03-23 14:06:56 -04:00
Martin Slemr
ae0c1730bb Subscription_usage_model in analytics/config.json 2023-03-23 14:06:55 -04:00
Martin Slemr
9badbf0b4e Compliance computation settings 2023-03-23 14:06:55 -04:00
Martin Slemr
7285d82f00 HostMetric migration 2023-03-23 14:06:55 -04:00
Alan Rominger
e38f87eb1d Remove custom API filters and suggest solution via templates 2023-03-23 14:06:55 -04:00
Martin Slemr
e6050804f9 HostMetric review,migration,permissions 2023-03-23 14:06:55 -04:00
Martin Slemr
f919178734 HostMetricSummaryMonthly API and Migrations 2023-03-23 14:06:55 -04:00
Martin Slemr
05f918e666 HostMetric compliance computation 2023-03-23 14:06:55 -04:00
Martin Slemr
b18ad77035 Host Metrics update/soft delete 2023-03-23 14:06:55 -04:00
Martin Slemr
d80759cd7a HostMetrics migration 2023-03-23 14:06:55 -04:00
Martin Slemr
ef4e77d78f Host Metrics List API 2023-03-23 14:06:55 -04:00
Shane McDonald
bf98f62654 Merge pull request #13705 from jainnikhil30/dont_use_githubusercontent
Don't use githubusercontent for containers.conf and podman-containers.conf
2023-03-23 11:58:58 -04:00
Marliana Lara
1f9925cf51 Fix automation analytics link in license page (#13225) 2023-03-23 08:02:16 -03:00
Hao Liu
4bf8366687 Merge pull request #13743 from TheRealHaoLiu/ui-next-non-phony
Turn ui-next make targets non-PHONY
2023-03-22 21:05:18 -04:00
Hao Liu
21b4755587 Turn make ui-next target non-PHONY
this allows you to pre-build ui_next outside of the container, and it won't try to rebuild when you build the awx image

`make ui-next` will no longer rebuild if awx/ui_next/build exists
2023-03-22 20:38:54 -04:00
Seth Foster
b4163dd00f Update node affinity description (#13741) 2023-03-22 20:54:08 +00:00
Hao Liu
6908f415a1 Merge pull request #13660 from ansible/feature_ui-next
Introducing tech preview of the new AWX UI
2023-03-21 14:09:47 -04:00
Hao Liu
746cd4bf77 Add note to indicate ui-next is imported target 2023-03-21 13:43:13 -04:00
Hao Liu
39ea162aa9 Update UI_NEXT help text in UI 2023-03-21 13:43:13 -04:00
Hao Liu
5bd00adb59 Update UI_NEXT README
also clean up some small things
2023-03-21 13:43:13 -04:00
matt
7c4aedf716 exit from module 2023-03-20 13:36:24 -06:00
Alan Rominger
28b1c62275 Fix bug with awx collection manual type alias (#13671)
* Fix bug with manual type alias

* Add unit test for creating manual project with path
2023-03-20 15:26:34 -04:00
Vishali Sanghishetty
f3cdf368df Merge pull request #13693 from mabashian/12651-workflow-convergence
Fixes bug where editing a node always defaulted to all convergence
2023-03-20 15:08:52 -04:00
Michael Abashian
4302348e8e Fixes bug where editing a node always defaulted to all convergence 2023-03-20 14:33:44 -04:00
Hao Liu
cd6cb3352e fail UI_NEXT make src if variable not set 2023-03-20 14:05:58 -04:00
Hao Liu
d1895bb92e PHONY all UI_NEXT build target
- they were all PHONY to start with, and all targets are written to be rerunnable
2023-03-20 14:05:58 -04:00
Hao Liu
8d47644659 Move placeholder index_awx.html out of build dir
- move placeholder index_awx.html out of ui_next build dir
- copy index_awx.html to build dir during development bootstrap if UI_NEXT has not been built
2023-03-20 14:05:58 -04:00
matt
76f03b9adc add exists to awx.awx.credential 2023-03-20 09:59:24 -06:00
Oleksii Baranov
46227f14a1 Add logging and reduce migration to one operation 2023-03-20 14:19:30 +01:00
Oleksii Baranov
2d114a4d16 Add migration for new cyberark plugin names 2023-03-20 14:19:30 +01:00
lucas-benedito
7deddabea6 8049-expose execution node var for playbook (#13418)
Expose execution node var for playbook

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-03-17 15:12:25 -04:00
Gabriel Muniz
e15f4de0dd Fix race with heartbeat and reaper logic (#13713)
* Fix race with heartbeat and reaper logic

* Fix tests to fail when over drift over heartbeat time

* replaced modified with started time for reap() code and added test

* fixed logic bug and cleaned up tests

* Added comments to tests to call out reasoning
2023-03-17 14:24:31 -04:00
Kia Lam
f558957538 Commit .po files. 2023-03-17 09:41:29 -07:00
John Westcott IV
fa3920d3a3 Adding default index_awx.html in case user forgets to build ui-next 2023-03-17 11:11:22 -04:00
Hao Liu
48a04bff5a add new UI icons 2023-03-16 23:37:30 -04:00
Kia Lam
c30760aaa9 Fix brandname in banner. 2023-03-16 23:37:30 -04:00
Michael Abashian
3636c5e95e Adds missing mock for fetching the brand name 2023-03-16 23:37:30 -04:00
Hao Liu
ae0d868681 make dev-env test pass 2023-03-16 23:37:30 -04:00
Hao Liu
edbed92c95 Refine UI_NEXT Makefile and update README 2023-03-16 23:37:30 -04:00
Hao Liu
b75b098ee9 throw 404 when UI_NEXT false 2023-03-16 23:34:30 -04:00
Michael Abashian
4f2f345e23 Fix use of brandName 2023-03-16 23:34:30 -04:00
Michael Abashian
41a4551c91 Only show tech preview banner when config.ui_next is true. Use brandName variable in tech preview banner. 2023-03-16 23:34:30 -04:00
Hao Liu
229dbe0905 Add ui_next to /api/v2/config
- Add ui_next to /api/v2/config
- enable banner to show up for normal users since /api/v2/settings is only available to admin users
2023-03-16 23:34:30 -04:00
Michael Abashian
d137086870 Adds UI bits for new UI_NEXT system setting 2023-03-16 23:34:30 -04:00
Hao Liu
f53aa2d26b Build and serve UI_NEXT
- Add new makefile for building ui_next
- Add setting to toggle ui_next
- Add URL path for displaying ui_next
- Update collectstatic and template dir config to serve ui_next
2023-03-16 23:34:30 -04:00
Kia Lam
42c848b57b Add banner to dashboard page.
Co-Authored-By: kialam <2293210+kialam@users.noreply.github.com>
2023-03-16 23:23:21 -04:00
jainnikhil30
64b0e09e87 don't use githubusercontent for containers.conf and podman-containers.conf 2023-03-16 18:04:20 +05:30
Jesse Wattenbarger
af6549ffcd Fix a bug in clean languages
The `$` was not escaped for make or shell.
2023-02-21 07:52:49 -05:00
Alex
b3bda415da build: harden label_issue.yml permissions
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-25 18:12:14 +02:00
Alex
21291b53fd build: harden label_pr.yml permissions
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-25 18:10:53 +02:00
Alex
3eb748ff1f build: harden promote.yml permissions
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-25 18:07:10 +02:00
Martin Vician
6d2c10ad02 Added domain item and authorizer for TSS 2022-08-05 14:13:12 +01:00
tongtie
ede9d961da fix: Internationalization causes the project to be unable to choose the manual option 2021-09-14 22:20:52 +08:00
1494 changed files with 59353 additions and 20625 deletions

View File

@@ -19,6 +19,8 @@ body:
required: true
- label: I understand that AWX is open source software provided for free and that I might not receive a timely response.
required: true
- label: I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
required: true
- type: textarea
id: summary
@@ -42,6 +44,7 @@ body:
label: Select the relevant components
options:
- label: UI
- label: UI (tech preview)
- label: API
- label: Docs
- label: Collection

View File

@@ -0,0 +1,28 @@
name: Setup images for AWX
description: Builds new awx_devel image
inputs:
github-token:
description: GitHub Token for registry access
required: true
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build

View File

@@ -0,0 +1,73 @@
name: Run AWX docker-compose
description: Runs AWX with `make docker-compose`
inputs:
github-token:
description: GitHub Token to pass to awx_devel_image
required: true
build-ui:
description: Should the UI be built?
required: false
default: false
type: boolean
outputs:
ip:
description: The IP of the tools_awx_1 container
value: ${{ steps.data.outputs.ip }}
admin-token:
description: OAuth token for admin user
value: ${{ steps.data.outputs.admin_token }}
runs:
using: composite
steps:
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
- name: Upgrade ansible-core
shell: bash
run: python3 -m pip install --upgrade ansible-core
- name: Install system deps
shell: bash
run: sudo apt-get install -y gettext
- name: Start AWX
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
make docker-compose
- name: Update default AWX password
shell: bash
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Build UI
# This must be a string comparison in composite actions:
# https://github.com/actions/runner/issues/2238
if: ${{ inputs.build-ui == 'true' }}
shell: bash
run: |
docker exec -i tools_awx_1 sh <<-EOSH
make ui-devel
EOSH
- name: Get instance data
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT

View File

@@ -0,0 +1,19 @@
name: Upload logs
description: Upload logs from `make docker-compose` devel environment to GitHub as an artifact
inputs:
log-filename:
description: "*Unique* name of the log file"
required: true
runs:
using: composite
steps:
- name: Get AWX logs
shell: bash
run: |
docker logs tools_awx_1 > ${{ inputs.log-filename }}
- name: Upload AWX logs as artifact
uses: actions/upload-artifact@v3
with:
name: docker-compose-logs
path: ${{ inputs.log-filename }}

View File

@@ -1,19 +0,0 @@
version: 2
updates:
- package-ecosystem: "npm"
directory: "/awx/ui"
schedule:
interval: "monthly"
open-pull-requests-limit: 5
allow:
- dependency-type: "production"
reviewers:
- "AlexSCorey"
- "keithjgrant"
- "kialam"
- "mabashian"
- "marshmalien"
labels:
- "component:ui"
- "dependencies"
target-branch: "devel"

View File

@@ -6,6 +6,8 @@ needs_triage:
- "Feature Summary"
"component:ui":
- "\\[X\\] UI"
"component:ui_next":
- "\\[X\\] UI \\(tech preview\\)"
"component:api":
- "\\[X\\] API"
"component:docs":

View File

@@ -15,5 +15,5 @@
"dependencies":
- any: ["awx/ui/package.json"]
- any: ["awx/requirements/*.txt"]
- any: ["awx/requirements/requirements.in"]
- any: ["requirements/*.txt"]
- any: ["requirements/requirements.in"]

View File

@@ -3,7 +3,7 @@ name: CI
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_TAG_BASE: ghcr.io/${{ github.repository_owner }}
DEV_DOCKER_OWNER: ${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
on:
pull_request:
@@ -35,29 +35,40 @@ jobs:
- name: ui-test-general
command: make ui-test-general
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
dev-env:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run smoke test
run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
awx-operator:
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-operator
path: awx-operator
@@ -67,7 +78,7 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -102,7 +113,7 @@ jobs:
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
@@ -114,3 +125,137 @@ jobs:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
target-regex:
- name: a-h
regex: ^[a-h]
- name: i-p
regex: ^[i-p]
- name: r-z0-9
regex: ^[r-z0-9]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: collection-integration-${{ matrix.target-regex.name }}.log
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
needs:
- collection-integration
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
with:
path: coverage
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY
echo 'Download the HTML artifacts to view the coverage report.' >> $GITHUB_STEP_SUMMARY
# This is a huge hack, there's no official action for removing artifacts currently.
# Also ACTIONS_RUNTIME_URL and ACTIONS_RUNTIME_TOKEN aren't available in normal run
# steps, so we have to use github-script to get them.
#
# The advantage of doing this, though, is that we save on artifact storage space.
- name: Get secret artifact runtime URL
uses: actions/github-script@v6
id: get-runtime-url
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_URL } = process.env;
return ACTIONS_RUNTIME_URL;
- name: Get secret artifact runtime token
uses: actions/github-script@v6
id: get-runtime-token
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_TOKEN } = process.env;
return ACTIONS_RUNTIME_TOKEN;
- name: Remove intermediary artifacts
env:
ACTIONS_RUNTIME_URL: ${{ steps.get-runtime-url.outputs.result }}
ACTIONS_RUNTIME_TOKEN: ${{ steps.get-runtime-token.outputs.result }}
run: |
echo "::add-mask::${ACTIONS_RUNTIME_TOKEN}"
artifacts=$(
curl -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" \
${ACTIONS_RUNTIME_URL}_apis/pipelines/workflows/${{ github.run_id }}/artifacts?api-version=6.0-preview \
| jq -r '.value | .[] | select(.name | startswith("coverage-")) | .url'
)
for artifact in $artifacts; do
curl -i -X DELETE -H "Accept: application/json;api-version=6.0-preview" -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" "$artifact"
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage

View File

@@ -16,7 +16,7 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
@@ -28,7 +28,7 @@ jobs:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -48,8 +48,11 @@ jobs:
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
- name: Push image
- name: Push development images
run: |
docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
- name: Push AWX k8s image, only for upstream and feature branches
run: docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
if: endsWith(github.repository, '/awx')

16
.github/workflows/docs.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
---
name: Docsite CI
on:
pull_request:
jobs:
docsite-build:
name: docsite test build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: install tox
run: pip install tox
- name: Assure docs can be built
run: tox -e docs

View File

@@ -19,41 +19,20 @@ jobs:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
- uses: ./.github/actions/run_awx_devel
id: awx
with:
python-version: ${{ env.py_version }}
- name: Install system deps
run: sudo apt-get install -y gettext
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build UI
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make ui-devel
- name: Start AWX
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make docker-compose &> make-docker-compose-output.log &
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
@@ -65,18 +44,6 @@ jobs:
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Update default AWX password
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5;
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
@@ -86,7 +53,7 @@ jobs:
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
@@ -102,8 +69,7 @@ jobs:
-w /e2e \
awx-pf-tests run --project .
- name: Save AWX logs
uses: actions/upload-artifact@v2
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
name: AWX-logs-${{ matrix.job }}
path: make-docker-compose-output.log
log-filename: e2e-${{ matrix.job }}.log

View File

@@ -6,6 +6,10 @@ on:
- opened
- reopened
permissions:
contents: write # to fetch code
issues: write # to label issues
jobs:
triage:
runs-on: ubuntu-latest
@@ -13,7 +17,7 @@ jobs:
steps:
- name: Label Issue
uses: github/issue-labeler@v2.4.1
uses: github/issue-labeler@v3.1
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
not-before: 2021-12-07T07:00:00Z
@@ -24,7 +28,7 @@ jobs:
runs-on: ubuntu-latest
name: Label Issue - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests

View File

@@ -7,6 +7,10 @@ on:
- reopened
- synchronize
permissions:
contents: write # to determine modified files (actions/labeler)
pull-requests: write # to add labels to PRs (actions/labeler)
jobs:
triage:
runs-on: ubuntu-latest
@@ -23,7 +27,7 @@ jobs:
runs-on: ubuntu-latest
name: Label PR - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests

View File

@@ -7,6 +7,7 @@ on:
types: [opened, edited, reopened, synchronize]
jobs:
pr-check:
if: github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')
name: Scan PR description for semantic versioning keywords
runs-on: ubuntu-latest
permissions:

View File

@@ -8,19 +8,22 @@ on:
release:
types: [published]
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
promote:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -37,8 +40,12 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ github.event.release.tag_name }}
COLLECTION_TEMPLATE_VERSION: true
run: |
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
make build_collection
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \

View File

@@ -44,7 +44,7 @@ jobs:
exit 0
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
@@ -52,18 +52,18 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- name: Checkout awx-logos
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-logos
path: awx-logos
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator

View File

@@ -17,13 +17,13 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}

10
.gitignore vendored
View File

@@ -157,7 +157,15 @@ use_dev_supervisor.txt
*.unison.tmp
*.#
/awx/ui/.ui-built
/Dockerfile
/_build/
/_build_kube_dev/
/Dockerfile
/Dockerfile.dev
/Dockerfile.kube-dev
awx/ui_next/src
awx/ui_next/build
# Docs build stuff
docs/docsite/build/
_readthedocs/

5
.gitleaks.toml Normal file
View File

@@ -0,0 +1,5 @@
[allowlist]
description = "Documentation contains example secrets and passwords"
paths = [
"docs/docsite/rst/administration/oauth2_token_auth.rst",
]

15
.readthedocs.yaml Normal file
View File

@@ -0,0 +1,15 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
os: ubuntu-22.04
tools:
python: >-
3.11
commands:
- pip install --user tox
- python3 -m tox -e docs
- mkdir -p _readthedocs/html/
- mv docs/docsite/build/html/* _readthedocs/html/

View File

@@ -10,6 +10,7 @@ ignore: |
tools/docker-compose/_sources
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
extends: default

View File

@@ -4,6 +4,6 @@
Early versions of AWX did not support seamless upgrades between major versions and required the use of a backup and restore tool to perform upgrades.
Users who wish to upgrade modern AWX installations should follow the instructions at:
As of version 18.0, `awx-operator` is the preferred install/upgrade method. Users who wish to upgrade modern AWX installations should follow the instructions at:
https://github.com/ansible/awx/blob/devel/INSTALL.md#upgrading-from-previous-versions
https://github.com/ansible/awx-operator/blob/devel/docs/upgrade/upgrading.md

View File

@@ -31,7 +31,7 @@ If your issue isn't considered high priority, then please be patient as it may t
`state:needs_info` The issue needs more information. This could be more debug output, more specifics about the system such as version information. Any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familar with an area of the code base the issue is for.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familiar with an area of the code base the issue is for.
`state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.

View File

@@ -6,6 +6,7 @@ recursive-include awx/templates *.html
recursive-include awx/api/templates *.md *.html *.yml
recursive-include awx/ui/build *.html
recursive-include awx/ui/build *
recursive-include awx/ui_next/build *
recursive-include awx/playbooks *.yml
recursive-include awx/lib/site-packages *
recursive-include awx/plugins *.ps1

171
Makefile
View File

@@ -1,12 +1,16 @@
PYTHON ?= python3.9
-include awx/ui_next/Makefile
PYTHON := $(notdir $(shell for i in python3.9 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker-compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py)
VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py)
# ansible-test requires semver compatible version, so we allow overrides to hack it
COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
@@ -25,6 +29,8 @@ COLLECTION_TEMPLATE_VERSION ?= false
# NOTE: This defaults the container image version to the branch that's active
COMPOSE_TAG ?= $(GIT_BRANCH)
MAIN_NODE_TYPE ?= hybrid
# If set to true docker-compose will also start a pgbouncer instance and use it
PGBOUNCER ?= false
# If set to true docker-compose will also start a keycloak instance
KEYCLOAK ?= false
# If set to true docker-compose will also start an ldap instance
@@ -35,17 +41,24 @@ SPLUNK ?= false
PROMETHEUS ?= false
# If set to true docker-compose will also start a grafana instance
GRAFANA ?= false
# If set to true docker-compose will also start a hashicorp vault instance
VAULT ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
VENV_BASE ?= /var/lib/awx/venv
DEV_DOCKER_TAG_BASE ?= ghcr.io/ansible
DEV_DOCKER_OWNER ?= ansible
# Docker will only accept lowercase, so github names like Paul need to be paul
DEV_DOCKER_OWNER_LOWER = $(shell echo $(DEV_DOCKER_OWNER) | tr A-Z a-z)
DEV_DOCKER_TAG_BASE ?= ghcr.io/$(DEV_DOCKER_OWNER_LOWER)
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
# Python packages to install only from source (not from binary wheels)
# Comma separated list
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==7.0.5 wheel==0.38.4
@@ -66,7 +79,7 @@ I18N_FLAG_FILE = .i18n_built
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit github_ci_setup github_ci_runner
.git/hooks/pre-commit
clean-tmp:
rm -rf tmp/
@@ -84,7 +97,7 @@ clean-schema:
clean-languages:
rm -f $(I18N_FLAG_FILE)
find ./awx/locale/ -type f -regex ".*\.mo$" -delete
find ./awx/locale/ -type f -regex '.*\.mo$$' -delete
## Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
@@ -215,12 +228,6 @@ daphne:
fi; \
daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer
wsbroadcast:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_wsbroadcast
## Run to start the background task dispatcher for development.
dispatcher:
@if [ "$(VENV_BASE)" ]; then \
@@ -228,7 +235,6 @@ dispatcher:
fi; \
$(PYTHON) manage.py run_dispatcher
## Run to start the zeromq callback receiver
receiver:
@if [ "$(VENV_BASE)" ]; then \
@@ -245,6 +251,34 @@ jupyter:
fi; \
$(MANAGEMENT_COMMAND) shell_plus --notebook
## Start the rsyslog configurer process in background in development environment.
run-rsyslog-configurer:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_rsyslog_configurer
## Start cache_clear process in background in development environment.
run-cache-clear:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_cache_clear
## Start the wsrelay process in background in development environment.
run-wsrelay:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_wsrelay
## Start the heartbeat process in background in development environment.
run-ws-heartbeat:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_ws_heartbeat
reports:
mkdir -p $@
@@ -271,13 +305,13 @@ swagger: reports
check: black
api-lint:
BLACK_ARGS="--check" make black
BLACK_ARGS="--check" $(MAKE) black
flake8 awx
yamllint -s .
## Run egg_info_dev to generate awx.egg-info for development.
awx-link:
[ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/tools/scripts/egg_info_dev
cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/$(PYTHON)/site-packages/awx.egg-link
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
PYTEST_ARGS ?= -n auto
@@ -290,21 +324,10 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
## Login to Github container image registry, pull image, then build image.
github_ci_setup:
# GITHUB_ACTOR is automatic github actions env var
# CI_GITHUB_TOKEN is defined in .github files
echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
make docker-compose-build
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
github_ci_runner: github_ci_setup docker-runner
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
@@ -346,11 +369,11 @@ test_collection_sanity:
rm -rf $(COLLECTION_INSTALL)
if ! [ -x "$(shell command -v ansible-test)" ]; then pip install ansible-core; fi
ansible --version
COLLECTION_VERSION=1.0.0 make install_collection
COLLECTION_VERSION=1.0.0 $(MAKE) install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
test_unit:
@if [ "$(VENV_BASE)" ]; then \
@@ -418,7 +441,7 @@ ui-devel: awx/ui/node_modules
cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css; \
cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js; \
cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media; \
fi
fi
ui-devel-instrumented: awx/ui/node_modules
$(NPM_BIN) --prefix awx/ui --loglevel warn run start-instrumented
@@ -445,11 +468,12 @@ ui-test-general:
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui/ test-general --runInBand
# NOTE: The make target ui-next is imported from awx/ui_next/Makefile
HEADLESS ?= no
ifeq ($(HEADLESS), yes)
dist/$(SDIST_TAR_FILE):
else
dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE)
dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE) ui-next
endif
$(PYTHON) -m build -s
ln -sf $(SDIST_TAR_FILE) dist/awx.tar.gz
@@ -491,15 +515,20 @@ docker-compose-sources: .git/hooks/pre-commit
-e control_plane_node_count=$(CONTROL_PLANE_NODE_COUNT) \
-e execution_node_count=$(EXECUTION_NODE_COUNT) \
-e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
-e enable_pgbouncer=$(PGBOUNCER) \
-e enable_keycloak=$(KEYCLOAK) \
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) $(EXTRA_SOURCES_ANSIBLE_OPTS)
-e enable_grafana=$(GRAFANA) \
-e enable_vault=$(VAULT) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT);
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources
@@ -530,19 +559,28 @@ docker-compose-container-group-clean:
fi
rm -rf tools/docker-compose-minikube/_sources/
## Base development image build
docker-compose-build:
ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True -e receptor_image=$(RECEPTOR_IMAGE)
DOCKER_BUILDKIT=1 docker build -t $(DEVEL_IMAGE_NAME) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
.PHONY: Dockerfile.dev
## Generate Dockerfile.dev for awx_devel image
Dockerfile.dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.dev \
-e build_dev=True \
-e receptor_image=$(RECEPTOR_IMAGE)
## Build awx_devel image for docker compose development environment
docker-compose-build: Dockerfile.dev
DOCKER_BUILDKIT=1 docker build \
-f Dockerfile.dev \
-t $(DEVEL_IMAGE_NAME) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
docker-clean:
-$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker volume rm -f tools_awx_db tools_vault_1 tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose
@@ -554,7 +592,7 @@ docker-compose-cluster-elk: awx/projects docker-compose-sources
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-container-group:
MINIKUBE_CONTAINER_GROUP=true make docker-compose
MINIKUBE_CONTAINER_GROUP=true $(MAKE) docker-compose
clean-elk:
docker stop tools_kibana_1
@@ -571,12 +609,36 @@ VERSION:
@echo "awx: $(VERSION)"
PYTHON_VERSION:
@echo "$(PYTHON)" | sed 's:python::'
@echo "$(subst python,,$(PYTHON))"
.PHONY: version-for-buildyml
version-for-buildyml:
@echo $(firstword $(subst +, ,$(VERSION)))
# version-for-buildyml prints a special version string for build.yml,
# chopping off the sha after the '+' sign.
# tools/ansible/build.yml was doing this: make print-VERSION | cut -d + -f -1
# This does the same thing in native make without
# the pipe or the extra processes, and now the pb does `make version-for-buildyml`
# Example:
# 22.1.1.dev38+g523c0d9781 becomes 22.1.1.dev38
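For illustration only, the Make expression above does the same thing as this small Python sketch (the version string is the example from the comment):

# Python equivalent of $(firstword $(subst +, ,$(VERSION))):
# keep only the part of a setuptools-scm version before the first '+'.
version = "22.1.1.dev38+g523c0d9781"  # example value from the comment above
print(version.split("+", 1)[0])       # prints 22.1.1.dev38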
.PHONY: Dockerfile
## Generate Dockerfile for awx image
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml -e receptor_image=$(RECEPTOR_IMAGE)
ansible-playbook tools/ansible/dockerfile.yml \
-e receptor_image=$(RECEPTOR_IMAGE) \
-e headless=$(HEADLESS)
## Build awx image for deployment on Kubernetes environment.
awx-kube-build: Dockerfile
DOCKER_BUILDKIT=1 docker build -f Dockerfile \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
.PHONY: Dockerfile.kube-dev
## Generate Docker.kube-dev for awx_kube_devel image
Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.kube-dev \
@@ -591,13 +653,9 @@ awx-kube-dev-build: Dockerfile.kube-dev
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
## Build awx image for deployment on Kubernetes environment.
awx-kube-build: Dockerfile
DOCKER_BUILDKIT=1 docker build -f Dockerfile \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
# Translation TASKS
# --------------------------------------
@@ -605,10 +663,12 @@ awx-kube-build: Dockerfile
## generate UI .pot file, an empty template of strings yet to be translated
pot: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-template --clean
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-template --clean
## generate UI .po files for each locale (will update translated strings for `en`)
po: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-strings -- --clean
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings -- --clean
## generate API django .pot .po
messages:
@@ -617,6 +677,7 @@ messages:
fi; \
$(PYTHON) manage.py makemessages -l en_us --keep-pot
.PHONY: print-%
print-%:
@echo $($*)
@@ -628,12 +689,12 @@ HELP_FILTER=.PHONY
## Display help targets
help:
@printf "Available targets:\n"
@make -s help/generate | grep -vE "\w($(HELP_FILTER))"
@$(MAKE) -s help/generate | grep -vE "\w($(HELP_FILTER))"
## Display help for all targets
help/all:
@printf "Available targets:\n"
@make -s help/generate
@$(MAKE) -s help/generate
## Generate help output from MAKEFILE_LIST
help/generate:
@@ -654,3 +715,7 @@ help/generate:
} \
{ lastLine = $$0 }' $(MAKEFILE_LIST) | sort -u
@printf "\n"
## Display help for ui-next targets
help/ui-next:
@$(MAKE) -s help MAKEFILE_LIST="awx/ui_next/Makefile"

View File

@@ -1,5 +1,5 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![IRC Chat - #ansible-awx](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://libera.chat)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -37,5 +37,6 @@ Get Involved
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
- Join the `#ansible-awx` channel on irc.libera.chat
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)

View File

@@ -52,39 +52,14 @@ try:
except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django # noqa: F401
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
pass
else:
from django.db.backends.base import schema
from django.db.models import indexes
from django.db.backends.utils import names_digest
from django.db import connection
if HAS_DJANGO is True:
# See upgrade blocker note in requirements/README.md
try:
names_digest('foo', 'bar', 'baz', length=8)
except ValueError:
def names_digest(*args, length):
"""
Generate a 32-bit digest of a set of arguments that can be used to shorten
identifying names. Support for use in FIPS environments.
"""
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(arg.encode())
return h.hexdigest()[:length]
schema.names_digest = names_digest
indexes.names_digest = names_digest
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.

View File

@@ -347,7 +347,7 @@ class FieldLookupBackend(BaseFilterBackend):
args.append(Q(**{k: v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_('Cannot apply role_level filter to this list because its model ' 'does not use roles for access control.'))
raise ParseError(_('Cannot apply role_level filter to this list because its model does not use roles for access control.'))
args.append(Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name)))
if or_filters:
q = Q()

View File

@@ -5,13 +5,11 @@
import inspect
import logging
import time
import uuid
# Django
from django.conf import settings
from django.contrib.auth import views as auth_views
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.core.exceptions import FieldDoesNotExist
from django.db import connection, transaction
from django.db.models.fields.related import OneToOneRel
@@ -35,7 +33,7 @@ from rest_framework.negotiation import DefaultContentNegotiation
# AWX
from awx.api.filters import FieldLookupBackend
from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
from awx.main.access import access_registry
from awx.main.access import optimize_queryset
from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
from awx.main.utils.db import get_all_field_names
from awx.main.utils.licensing import server_product_name
@@ -171,7 +169,7 @@ class APIView(views.APIView):
self.__init_request_error__ = exc
except UnsupportedMediaType as exc:
exc.detail = _(
'You did not use correct Content-Type in your HTTP request. ' 'If you are using our REST API, the Content-Type must be application/json'
'You did not use correct Content-Type in your HTTP request. If you are using our REST API, the Content-Type must be application/json'
)
self.__init_request_error__ = exc
return drf_request
@@ -234,7 +232,8 @@ class APIView(views.APIView):
response = super(APIView, self).finalize_response(request, response, *args, **kwargs)
time_started = getattr(self, 'time_started', None)
response['X-API-Product-Version'] = get_awx_version()
if request.user.is_authenticated:
response['X-API-Product-Version'] = get_awx_version()
response['X-API-Product-Name'] = server_product_name()
response['X-API-Node'] = settings.CLUSTER_HOST_ID
@@ -364,12 +363,7 @@ class GenericAPIView(generics.GenericAPIView, APIView):
return self.queryset._clone()
elif self.model is not None:
qs = self.model._default_manager
if self.model in access_registry:
access_class = access_registry[self.model]
if access_class.select_related:
qs = qs.select_related(*access_class.select_related)
if access_class.prefetch_related:
qs = qs.prefetch_related(*access_class.prefetch_related)
qs = optimize_queryset(qs)
return qs
else:
return super(GenericAPIView, self).get_queryset()
@@ -512,6 +506,9 @@ class SubListAPIView(ParentMixin, ListAPIView):
# And optionally (user must have given access permission on parent object
# to view sublist):
# parent_access = 'read'
# filter_read_permission sets whether or not to override the default intersection behavior
# implemented here
filter_read_permission = True
def get_description_context(self):
d = super(SubListAPIView, self).get_description_context()
@@ -526,12 +523,16 @@ class SubListAPIView(ParentMixin, ListAPIView):
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model).distinct()
sublist_qs = self.get_sublist_queryset(parent)
return qs & sublist_qs
if not self.filter_read_permission:
return optimize_queryset(self.get_sublist_queryset(parent))
qs = self.request.user.get_queryset(self.model)
if hasattr(self, 'parent_key'):
# This is vastly preferable for ReverseForeignKey relationships
return qs.filter(**{self.parent_key: parent})
return qs.distinct() & self.get_sublist_queryset(parent).distinct()
def get_sublist_queryset(self, parent):
return getattrd(parent, self.relationship).distinct()
return getattrd(parent, self.relationship)
class DestroyAPIView(generics.DestroyAPIView):
@@ -580,15 +581,6 @@ class SubListCreateAPIView(SubListAPIView, ListCreateAPIView):
d.update({'parent_key': getattr(self, 'parent_key', None)})
return d
def get_queryset(self):
if hasattr(self, 'parent_key'):
# Prefer this filtering because ForeignKey allows us more assumptions
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(**{self.parent_key: parent})
return super(SubListCreateAPIView, self).get_queryset()
def create(self, request, *args, **kwargs):
# If the object ID was not specified, it probably doesn't exist in the
# DB yet. We want to see if we can create it. The URL may choose to
@@ -967,16 +959,11 @@ class CopyAPIView(GenericAPIView):
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
new_obj.admin_role.members.add(request.user)
if sub_objs:
# store the copied object dict into cache, because it's
# often too large for postgres' notification bus
# (which has a default maximum message size of 8k)
key = 'deep-copy-{}'.format(str(uuid.uuid4()))
cache.set(key, sub_objs, timeout=3600)
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):
permission_check_func = (type(self).__module__, type(self).__name__, 'deep_copy_permission_check_func')
trigger_delayed_deep_copy(
self.model.__module__, self.model.__name__, obj.pk, new_obj.pk, request.user.pk, key, permission_check_func=permission_check_func
self.model.__module__, self.model.__name__, obj.pk, new_obj.pk, request.user.pk, permission_check_func=permission_check_func
)
serializer = self._get_copy_return_serializer(new_obj)
headers = {'Location': new_obj.get_absolute_url(request=request)}

View File

@@ -71,7 +71,7 @@ class Metadata(metadata.SimpleMetadata):
'url': _('URL for this {}.'),
'related': _('Data structure with URLs of related resources.'),
'summary_fields': _(
'Data structure with name/description for related resources. ' 'The output for some objects may be limited for performance reasons.'
'Data structure with name/description for related resources. The output for some objects may be limited for performance reasons.'
),
'created': _('Timestamp when this {} was created.'),
'modified': _('Timestamp when this {} was last modified.'),

View File

@@ -25,6 +25,7 @@ __all__ = [
'UserPermission',
'IsSystemAdminOrAuditor',
'WorkflowApprovalPermission',
'AnalyticsPermission',
]
@@ -250,3 +251,16 @@ class IsSystemAdminOrAuditor(permissions.BasePermission):
class WebhookKeyPermission(permissions.BasePermission):
def has_object_permission(self, request, view, obj):
return request.user.can_access(view.model, 'admin', obj, request.data)
class AnalyticsPermission(permissions.BasePermission):
"""
Allows GET/POST/OPTIONS to system admins and system auditors.
"""
def has_permission(self, request, view):
if not (request.user and request.user.is_authenticated):
return False
if request.method in ["GET", "POST", "OPTIONS"]:
return request.user.is_superuser or request.user.is_system_auditor
return request.user.is_superuser
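For context, a DRF permission class like the new AnalyticsPermission above is enabled on a view through permission_classes. The sketch below is hypothetical: the view name is made up and the awx.api.permissions import path is an assumption, not something this diff states.

# Hypothetical wiring of the new permission into a DRF view; names are illustrative.
from rest_framework.response import Response
from rest_framework.views import APIView

from awx.api.permissions import AnalyticsPermission  # assumed module path


class AnalyticsExampleView(APIView):
    # GET/POST/OPTIONS: superusers and system auditors; all other methods: superusers only.
    permission_classes = [AnalyticsPermission]

    def get(self, request, format=None):
        return Response({"detail": "analytics payload would go here"})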

View File

@@ -56,6 +56,8 @@ from awx.main.models import (
ExecutionEnvironment,
Group,
Host,
HostMetric,
HostMetricSummaryMonthly,
Instance,
InstanceGroup,
InstanceLink,
@@ -156,6 +158,7 @@ SUMMARIZABLE_FK_FIELDS = {
'kind',
),
'host': DEFAULT_SUMMARY_FIELDS,
'constructed_host': DEFAULT_SUMMARY_FIELDS,
'group': DEFAULT_SUMMARY_FIELDS,
'default_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
'execution_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
@@ -189,6 +192,11 @@ SUMMARIZABLE_FK_FIELDS = {
}
# These fields can be edited on a constructed inventory's generated source (possibly by using the constructed
# inventory's special API endpoint, but also by using the inventory sources endpoint).
CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS = ('source_vars', 'update_cache_timeout', 'limit', 'verbosity')
def reverse_gfk(content_object, request):
"""
Computes a reverse for a GenericForeignKey field.
@@ -212,7 +220,7 @@ class CopySerializer(serializers.Serializer):
view = self.context.get('view', None)
obj = view.get_object()
if name == obj.name:
raise serializers.ValidationError(_('The original object is already named {}, a copy from' ' it cannot have the same name.'.format(name)))
raise serializers.ValidationError(_('The original object is already named {}, a copy from it cannot have the same name.'.format(name)))
return attrs
@@ -752,7 +760,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
class UnifiedJobSerializer(BaseSerializer):
show_capabilities = ['start', 'delete']
event_processing_finished = serializers.BooleanField(
help_text=_('Indicates whether all of the events generated by this ' 'unified job have been saved to the database.'), read_only=True
help_text=_('Indicates whether all of the events generated by this unified job have been saved to the database.'), read_only=True
)
class Meta:
@@ -946,7 +954,7 @@ class UnifiedJobStdoutSerializer(UnifiedJobSerializer):
class UserSerializer(BaseSerializer):
password = serializers.CharField(required=False, default='', write_only=True, help_text=_('Write-only field used to change the password.'))
password = serializers.CharField(required=False, default='', help_text=_('Field used to change the password.'))
ldap_dn = serializers.CharField(source='profile.ldap_dn', read_only=True)
external_account = serializers.SerializerMethodField(help_text=_('Set if the account is managed by an external service'))
is_system_auditor = serializers.BooleanField(default=False)
@@ -973,7 +981,12 @@ class UserSerializer(BaseSerializer):
def to_representation(self, obj):
ret = super(UserSerializer, self).to_representation(obj)
ret.pop('password', None)
if self.get_external_account(obj):
# If this is an external account it shouldn't have a password field
ret.pop('password', None)
else:
# If its an internal account lets assume there is a password and return $encrypted$ to the user
ret['password'] = '$encrypted$'
if obj and type(self) is UserSerializer:
ret['auth'] = obj.social_auth.values('provider', 'uid')
return ret
@@ -987,13 +1000,31 @@ class UserSerializer(BaseSerializer):
django_validate_password(value)
if not self.instance and value in (None, ''):
raise serializers.ValidationError(_('Password required for new User.'))
# Check if a password is too long
password_max_length = User._meta.get_field('password').max_length
if len(value) > password_max_length:
raise serializers.ValidationError(_('Password max length is {}'.format(password_max_length)))
if getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH', 0) and len(value) < getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH'):
raise serializers.ValidationError(_('Password must be at least {} characters long.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH'))))
if getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS', 0) and sum(c.isdigit() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS'):
raise serializers.ValidationError(_('Password must contain at least {} digits.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS'))))
if getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER', 0) and sum(c.isupper() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER'):
raise serializers.ValidationError(
_('Password must contain at least {} uppercase characters.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER')))
)
if getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL', 0) and sum(not c.isalnum() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL'):
raise serializers.ValidationError(
_('Password must contain at least {} special characters.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL')))
)
return value
def _update_password(self, obj, new_password):
# For now we're not raising an error, just not saving password for
# users managed by LDAP who already have an unusable password set.
# Get external password will return something like ldap or enterprise or None if the user isn't external. We only want to allow a password update for a None option
if new_password and not self.get_external_account(obj):
if new_password and new_password != '$encrypted$' and not self.get_external_account(obj):
obj.set_password(new_password)
obj.save(update_fields=['password'])
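For context, the new password checks above only take effect when the corresponding settings are non-zero. A minimal sketch of how a deployment might opt in; the setting names come from the validation code above, the values are illustrative:

# Illustrative values only; each setting defaults to 0 (check disabled).
LOCAL_PASSWORD_MIN_LENGTH = 12   # minimum overall length
LOCAL_PASSWORD_MIN_DIGITS = 2    # at least this many digits
LOCAL_PASSWORD_MIN_UPPER = 1     # at least this many uppercase characters
LOCAL_PASSWORD_MIN_SPECIAL = 1   # at least this many non-alphanumeric characters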
@@ -1548,7 +1579,7 @@ class ProjectPlaybooksSerializer(ProjectSerializer):
class ProjectInventoriesSerializer(ProjectSerializer):
inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, ' 'not comprehensive.'))
inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, not comprehensive.'))
class Meta:
model = Project
@@ -1598,8 +1629,8 @@ class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
fields = ('*', 'host_status_counts', 'playbook_counts')
def get_playbook_counts(self, obj):
task_count = obj.project_update_events.filter(event='playbook_on_task_start').count()
play_count = obj.project_update_events.filter(event='playbook_on_play_start').count()
task_count = obj.get_event_queryset().filter(event='playbook_on_task_start').count()
play_count = obj.get_event_queryset().filter(event='playbook_on_play_start').count()
data = {'play_count': play_count, 'task_count': task_count}
@@ -1670,13 +1701,8 @@ class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
res.update(
dict(
hosts=self.reverse('api:inventory_hosts_list', kwargs={'pk': obj.pk}),
groups=self.reverse('api:inventory_groups_list', kwargs={'pk': obj.pk}),
root_groups=self.reverse('api:inventory_root_groups_list', kwargs={'pk': obj.pk}),
variable_data=self.reverse('api:inventory_variable_data', kwargs={'pk': obj.pk}),
script=self.reverse('api:inventory_script_view', kwargs={'pk': obj.pk}),
tree=self.reverse('api:inventory_tree_view', kwargs={'pk': obj.pk}),
inventory_sources=self.reverse('api:inventory_inventory_sources_list', kwargs={'pk': obj.pk}),
update_inventory_sources=self.reverse('api:inventory_inventory_sources_update', kwargs={'pk': obj.pk}),
activity_stream=self.reverse('api:inventory_activity_stream_list', kwargs={'pk': obj.pk}),
job_templates=self.reverse('api:inventory_job_template_list', kwargs={'pk': obj.pk}),
ad_hoc_commands=self.reverse('api:inventory_ad_hoc_commands_list', kwargs={'pk': obj.pk}),
@@ -1687,8 +1713,18 @@ class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
labels=self.reverse('api:inventory_label_list', kwargs={'pk': obj.pk}),
)
)
if obj.kind in ('', 'constructed'):
# links not relevant for the "old" smart inventory
res['groups'] = self.reverse('api:inventory_groups_list', kwargs={'pk': obj.pk})
res['root_groups'] = self.reverse('api:inventory_root_groups_list', kwargs={'pk': obj.pk})
res['update_inventory_sources'] = self.reverse('api:inventory_inventory_sources_update', kwargs={'pk': obj.pk})
res['inventory_sources'] = self.reverse('api:inventory_inventory_sources_list', kwargs={'pk': obj.pk})
res['tree'] = self.reverse('api:inventory_tree_view', kwargs={'pk': obj.pk})
if obj.organization:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization.pk})
if obj.kind == 'constructed':
res['input_inventories'] = self.reverse('api:inventory_input_inventories', kwargs={'pk': obj.pk})
res['constructed_url'] = self.reverse('api:constructed_inventory_detail', kwargs={'pk': obj.pk})
return res
def to_representation(self, obj):
@@ -1730,6 +1766,91 @@ class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
return super(InventorySerializer, self).validate(attrs)
class ConstructedFieldMixin(serializers.Field):
def get_attribute(self, instance):
if not hasattr(instance, '_constructed_inv_src'):
instance._constructed_inv_src = instance.inventory_sources.first()
inv_src = instance._constructed_inv_src
return super().get_attribute(inv_src) # yoink
class ConstructedCharField(ConstructedFieldMixin, serializers.CharField):
pass
class ConstructedIntegerField(ConstructedFieldMixin, serializers.IntegerField):
pass
class ConstructedInventorySerializer(InventorySerializer):
source_vars = ConstructedCharField(
required=False,
default=None,
allow_blank=True,
help_text=_('The source_vars for the related auto-created inventory source, special to constructed inventory.'),
)
update_cache_timeout = ConstructedIntegerField(
required=False,
allow_null=True,
min_value=0,
default=None,
help_text=_('The cache timeout for the related auto-created inventory source, special to constructed inventory'),
)
limit = ConstructedCharField(
required=False,
default=None,
allow_blank=True,
help_text=_('The limit to restrict the returned hosts for the related auto-created inventory source, special to constructed inventory.'),
)
verbosity = ConstructedIntegerField(
required=False,
allow_null=True,
min_value=0,
max_value=2,
default=None,
help_text=_('The verbosity level for the related auto-created inventory source, special to constructed inventory'),
)
class Meta:
model = Inventory
fields = ('*', '-host_filter') + CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS
read_only_fields = ('*', 'kind')
def pop_inv_src_data(self, data):
inv_src_data = {}
for field in CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS:
if field in data:
# values always need to be removed, as they are not valid for Inventory model
value = data.pop(field)
# null is not valid for any of those fields, taken as not-provided
if value is not None:
inv_src_data[field] = value
return inv_src_data
def apply_inv_src_data(self, inventory, inv_src_data):
if inv_src_data:
update_fields = []
inv_src = inventory.inventory_sources.first()
for field, value in inv_src_data.items():
setattr(inv_src, field, value)
update_fields.append(field)
if update_fields:
inv_src.save(update_fields=update_fields)
def create(self, validated_data):
validated_data['kind'] = 'constructed'
inv_src_data = self.pop_inv_src_data(validated_data)
inventory = super().create(validated_data)
self.apply_inv_src_data(inventory, inv_src_data)
return inventory
def update(self, obj, validated_data):
inv_src_data = self.pop_inv_src_data(validated_data)
obj = super().update(obj, validated_data)
self.apply_inv_src_data(obj, inv_src_data)
return obj
class InventoryScriptSerializer(InventorySerializer):
class Meta:
fields = ()
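For context, the ConstructedInventorySerializer above exposes the auto-created inventory source's editable fields directly on the constructed inventory. A hypothetical sketch of setting them over the API follows; the endpoint path, host, credentials, inventory id and the source_vars content are assumptions for illustration, not taken from this diff.

# Hypothetical sketch: update constructed-inventory-only fields in one PATCH.
import requests

AWX_URL = "https://localhost:8043"   # assumed dev instance
INVENTORY_ID = 42                    # assumed constructed inventory id

resp = requests.patch(
    f"{AWX_URL}/api/v2/constructed_inventories/{INVENTORY_ID}/",  # assumed path
    auth=("admin", "password"),
    verify=False,  # self-signed certificate in a dev environment
    json={
        # These map onto the related auto-created inventory source, per the serializer above.
        "limit": "groupA,groupB",
        "update_cache_timeout": 0,
        "verbosity": 1,
        "source_vars": "plugin: constructed",  # illustrative content only
    },
)
resp.raise_for_status()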
@@ -1783,6 +1904,9 @@ class HostSerializer(BaseSerializerWithVariables):
ansible_facts=self.reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.pk}),
)
)
if obj.inventory.kind == 'constructed':
res['original_host'] = self.reverse('api:host_detail', kwargs={'pk': obj.instance_id})
res['ansible_facts'] = self.reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.instance_id})
if obj.inventory:
res['inventory'] = self.reverse('api:inventory_detail', kwargs={'pk': obj.inventory.pk})
if obj.last_job:
@@ -1804,6 +1928,10 @@ class HostSerializer(BaseSerializerWithVariables):
group_list = [{'id': g.id, 'name': g.name} for g in obj.groups.all().order_by('id')[:5]]
group_cnt = obj.groups.count()
d.setdefault('groups', {'count': group_cnt, 'results': group_list})
if obj.inventory.kind == 'constructed':
summaries_qs = obj.constructed_host_summaries
else:
summaries_qs = obj.job_host_summaries
d.setdefault(
'recent_jobs',
[
@@ -1814,7 +1942,7 @@ class HostSerializer(BaseSerializerWithVariables):
'status': j.job.status,
'finished': j.job.finished,
}
for j in obj.job_host_summaries.select_related('job__job_template').order_by('-created').defer('job__extra_vars', 'job__artifacts')[:5]
for j in summaries_qs.select_related('job__job_template').order_by('-created').defer('job__extra_vars', 'job__artifacts')[:5]
],
)
return d
@@ -1839,8 +1967,8 @@ class HostSerializer(BaseSerializerWithVariables):
return value
def validate_inventory(self, value):
if value.kind == 'smart':
raise serializers.ValidationError({"detail": _("Cannot create Host for Smart Inventory")})
if value.kind in ('constructed', 'smart'):
raise serializers.ValidationError({"detail": _("Cannot create Host for Smart or Constructed Inventories")})
return value
def validate_variables(self, value):
@@ -1938,8 +2066,8 @@ class GroupSerializer(BaseSerializerWithVariables):
return value
def validate_inventory(self, value):
if value.kind == 'smart':
raise serializers.ValidationError({"detail": _("Cannot create Group for Smart Inventory")})
if value.kind in ('constructed', 'smart'):
raise serializers.ValidationError({"detail": _("Cannot create Group for Smart or Constructed Inventories")})
return value
def to_representation(self, obj):
@@ -2062,7 +2190,7 @@ class BulkHostCreateSerializer(serializers.Serializer):
host_data = []
for r in result:
item = {k: getattr(r, k) for k in return_keys}
if not settings.IS_TESTING_MODE:
if settings.DATABASES and ('sqlite3' not in settings.DATABASES.get('default', {}).get('ENGINE')):
# sqlite acts different with bulk_create -- it doesn't return the id of the objects
# to get it, you have to do an additional query, which is not useful for our tests
item['url'] = reverse('api:host_detail', kwargs={'pk': r.id})
@@ -2138,6 +2266,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
'custom_virtualenv',
'timeout',
'verbosity',
'limit',
)
read_only_fields = ('*', 'custom_virtualenv')
@@ -2244,8 +2373,8 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
return value
def validate_inventory(self, value):
if value and value.kind == 'smart':
raise serializers.ValidationError({"detail": _("Cannot create Inventory Source for Smart Inventory")})
if value and value.kind in ('constructed', 'smart'):
raise serializers.ValidationError({"detail": _("Cannot create Inventory Source for Smart or Constructed Inventories")})
return value
# TODO: remove when old 'credential' fields are removed
@@ -2289,9 +2418,16 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
if get_field_from_model_or_attrs('source') == 'scm':
if self.instance and self.instance.source == 'constructed':
allowed_fields = CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS
for field in attrs:
if attrs[field] != getattr(self.instance, field) and field not in allowed_fields:
raise serializers.ValidationError({"error": _("Cannot change field '{}' on a constructed inventory source.").format(field)})
elif get_field_from_model_or_attrs('source') == 'scm':
if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None:
raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")})
elif get_field_from_model_or_attrs('source') == 'constructed':
raise serializers.ValidationError({"error": _('constructed not a valid source for inventory')})
else:
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path', 'scm_branch']))
if redundant_scm_fields:
@@ -2769,7 +2905,7 @@ class CredentialSerializer(BaseSerializer):
):
if getattr(self.instance, related_objects).count() > 0:
raise ValidationError(
_('You cannot change the credential type of the credential, as it may break the functionality' ' of the resources using it.')
_('You cannot change the credential type of the credential, as it may break the functionality of the resources using it.')
)
return credential_type
@@ -2789,7 +2925,7 @@ class CredentialSerializerCreate(CredentialSerializer):
default=None,
write_only=True,
allow_null=True,
help_text=_('Write-only field used to add user to owner role. If provided, ' 'do not give either team or organization. Only valid for creation.'),
help_text=_('Write-only field used to add user to owner role. If provided, do not give either team or organization. Only valid for creation.'),
)
team = serializers.PrimaryKeyRelatedField(
queryset=Team.objects.all(),
@@ -2797,14 +2933,14 @@ class CredentialSerializerCreate(CredentialSerializer):
default=None,
write_only=True,
allow_null=True,
help_text=_('Write-only field used to add team to owner role. If provided, ' 'do not give either user or organization. Only valid for creation.'),
help_text=_('Write-only field used to add team to owner role. If provided, do not give either user or organization. Only valid for creation.'),
)
organization = serializers.PrimaryKeyRelatedField(
queryset=Organization.objects.all(),
required=False,
default=None,
allow_null=True,
help_text=_('Inherit permissions from organization roles. If provided on creation, ' 'do not give either user or team.'),
help_text=_('Inherit permissions from organization roles. If provided on creation, do not give either user or team.'),
)
class Meta:
@@ -2826,7 +2962,7 @@ class CredentialSerializerCreate(CredentialSerializer):
if len(owner_fields) > 1:
received = ", ".join(sorted(owner_fields))
raise serializers.ValidationError(
{"detail": _("Only one of 'user', 'team', or 'organization' should be provided, " "received {} fields.".format(received))}
{"detail": _("Only one of 'user', 'team', or 'organization' should be provided, received {} fields.".format(received))}
)
if attrs.get('team'):
@@ -3097,7 +3233,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
if get_field_from_model_or_attrs('host_config_key') and not inventory:
raise serializers.ValidationError({'host_config_key': _("Cannot enable provisioning callback without an inventory set.")})
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
prompting_error_message = _("You must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
@@ -3486,7 +3622,7 @@ class SystemJobSerializer(UnifiedJobSerializer):
try:
return obj.result_stdout
except StdoutMaxBytesExceeded as e:
return _("Standard Output too large to display ({text_size} bytes), " "only download supported for sizes over {supported_size} bytes.").format(
return _("Standard Output too large to display ({text_size} bytes), only download supported for sizes over {supported_size} bytes.").format(
text_size=e.total, supported_size=e.supported
)
@@ -4033,6 +4169,7 @@ class JobHostSummarySerializer(BaseSerializer):
'-description',
'job',
'host',
'constructed_host',
'host_name',
'changed',
'dark',
@@ -4399,7 +4536,7 @@ class JobLaunchSerializer(BaseSerializer):
if cred.unique_hash() in provided_mapping.keys():
continue # User replaced credential with new of same type
errors.setdefault('credentials', []).append(
_('Removing {} credential at launch time without replacement is not supported. ' 'Provided list lacked credential(s): {}.').format(
_('Removing {} credential at launch time without replacement is not supported. Provided list lacked credential(s): {}.').format(
cred.unique_hash(display=True), ', '.join([str(c) for c in removed_creds])
)
)
@@ -4549,12 +4686,11 @@ class BulkJobNodeSerializer(WorkflowJobNodeSerializer):
# many-to-many fields
credentials = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
labels = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
# TODO: Use instance group role added via PR 13584(once merged), for now everything related to instance group is commented
# instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
class Meta:
model = WorkflowJobNode
fields = ('*', 'credentials', 'labels') # m2m fields are not canonical for WJ nodes, TODO: add instance_groups once supported
fields = ('*', 'credentials', 'labels', 'instance_groups') # m2m fields are not canonical for WJ nodes
def validate(self, attrs):
return super(LaunchConfigurationBaseSerializer, self).validate(attrs)
@@ -4614,21 +4750,21 @@ class BulkJobLaunchSerializer(serializers.Serializer):
requested_use_execution_environments = {job['execution_environment'] for job in attrs['jobs'] if 'execution_environment' in job}
requested_use_credentials = set()
requested_use_labels = set()
# requested_use_instance_groups = set()
requested_use_instance_groups = set()
for job in attrs['jobs']:
for cred in job.get('credentials', []):
requested_use_credentials.add(cred)
for label in job.get('labels', []):
requested_use_labels.add(label)
# for instance_group in job.get('instance_groups', []):
# requested_use_instance_groups.add(instance_group)
for instance_group in job.get('instance_groups', []):
requested_use_instance_groups.add(instance_group)
key_to_obj_map = {
"unified_job_template": {obj.id: obj for obj in UnifiedJobTemplate.objects.filter(id__in=requested_ujts)},
"inventory": {obj.id: obj for obj in Inventory.objects.filter(id__in=requested_use_inventories)},
"credentials": {obj.id: obj for obj in Credential.objects.filter(id__in=requested_use_credentials)},
"labels": {obj.id: obj for obj in Label.objects.filter(id__in=requested_use_labels)},
# "instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"execution_environment": {obj.id: obj for obj in ExecutionEnvironment.objects.filter(id__in=requested_use_execution_environments)},
}
@@ -4655,7 +4791,7 @@ class BulkJobLaunchSerializer(serializers.Serializer):
self.check_list_permission(Credential, requested_use_credentials, 'use_role')
self.check_list_permission(Label, requested_use_labels)
# self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(ExecutionEnvironment, requested_use_execution_environments) # TODO: change if roles introduced
jobs_object = self.get_objectified_jobs(attrs, key_to_obj_map)
@@ -4702,7 +4838,7 @@ class BulkJobLaunchSerializer(serializers.Serializer):
node_m2m_object_types_to_through_model = {
'credentials': WorkflowJobNode.credentials.through,
'labels': WorkflowJobNode.labels.through,
# 'instance_groups': WorkflowJobNode.instance_groups.through,
'instance_groups': WorkflowJobNode.instance_groups.through,
}
node_deferred_attr_names = (
'limit',
@@ -4755,9 +4891,9 @@ class BulkJobLaunchSerializer(serializers.Serializer):
if field_name in node_m2m_objects[node_identifier] and field_name == 'labels':
for label in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(label=label, workflowjobnode=node_m2m_objects[node_identifier]['node']))
# if obj_type in node_m2m_objects[node_identifier] and obj_type == 'instance_groups':
# for instance_group in node_m2m_objects[node_identifier][obj_type]:
# through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if field_name in node_m2m_objects[node_identifier] and field_name == 'instance_groups':
for instance_group in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if through_model_objects:
through_model.objects.bulk_create(through_model_objects)
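For context, the changes above let each job in a bulk launch carry its own instance_groups in addition to credentials and labels. A hypothetical request sketch; the endpoint path, host, credentials and ids are assumptions for illustration.

# Hypothetical bulk job launch payload using the per-job m2m fields enabled above.
import requests

resp = requests.post(
    "https://localhost:8043/api/v2/bulk/job_launch/",  # assumed path and host
    auth=("admin", "password"),
    verify=False,
    json={
        "name": "Nightly patching",
        "jobs": [
            {
                "unified_job_template": 7,
                "credentials": [3],
                "labels": [1],
                "instance_groups": [2],  # now accepted per job
            },
            {"unified_job_template": 9, "instance_groups": [2]},
        ],
    },
)
resp.raise_for_status()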
@@ -4882,7 +5018,7 @@ class NotificationTemplateSerializer(BaseSerializer):
for subevent in event_messages:
if subevent not in ('running', 'approved', 'timed_out', 'denied'):
error_list.append(
_("Workflow Approval event '{}' invalid, must be one of " "'running', 'approved', 'timed_out', or 'denied'").format(subevent)
_("Workflow Approval event '{}' invalid, must be one of 'running', 'approved', 'timed_out', or 'denied'").format(subevent)
)
continue
subevent_messages = event_messages[subevent]
@@ -5220,10 +5356,16 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
class InstanceLinkSerializer(BaseSerializer):
class Meta:
model = InstanceLink
fields = ('source', 'target', 'link_state')
fields = ('id', 'url', 'related', 'source', 'target', 'link_state')
source = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
target = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
source = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
def get_related(self, obj):
res = super(InstanceLinkSerializer, self).get_related(obj)
res['source_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.source.id})
res['target_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.target.id})
return res
class InstanceNodeSerializer(BaseSerializer):
@@ -5240,6 +5382,7 @@ class InstanceSerializer(BaseSerializer):
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
health_check_pending = serializers.SerializerMethodField()
peers = serializers.SlugRelatedField(many=True, required=False, slug_field="hostname", queryset=Instance.objects.all())
class Meta:
model = Instance
@@ -5276,6 +5419,8 @@ class InstanceSerializer(BaseSerializer):
'node_state',
'ip_address',
'listener_port',
'peers',
'peers_from_control_nodes',
)
extra_kwargs = {
'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
@@ -5299,7 +5444,7 @@ class InstanceSerializer(BaseSerializer):
res = super(InstanceSerializer, self).get_related(obj)
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if settings.IS_K8S and obj.node_type in (Instance.Types.EXECUTION,):
if obj.node_type in [Instance.Types.EXECUTION, Instance.Types.HOP]:
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
@@ -5328,22 +5473,57 @@ class InstanceSerializer(BaseSerializer):
def get_health_check_pending(self, obj):
return obj.health_check_pending
def validate(self, data):
if self.instance:
if self.instance.node_type == Instance.Types.HOP:
raise serializers.ValidationError("Hop node instances may not be changed.")
else:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only create instances on Kubernetes or OpenShift.")
return data
def validate(self, attrs):
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
def check_peers_changed():
'''
return True if
- 'peers' in attrs
- the instance's peers differ from the peers in attrs
'''
return self.instance and 'peers' in attrs and set(self.instance.peers.all()) != set(attrs['peers'])
if not self.instance and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only create instances on Kubernetes or OpenShift."))
node_type = get_field_from_model_or_attrs("node_type")
peers_from_control_nodes = get_field_from_model_or_attrs("peers_from_control_nodes")
listener_port = get_field_from_model_or_attrs("listener_port")
peers = attrs.get('peers', [])
if peers_from_control_nodes and node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("peers_from_control_nodes can only be enabled for execution or hop nodes."))
if node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
if check_peers_changed():
raise serializers.ValidationError(
_("Setting peers manually for control nodes is not allowed. Enable peers_from_control_nodes on the hop and execution nodes instead.")
)
if not listener_port and peers_from_control_nodes:
raise serializers.ValidationError(_("Field listener_port must be a valid integer when peers_from_control_nodes is enabled."))
if not listener_port and self.instance and self.instance.peers_from.exists():
raise serializers.ValidationError(_("Field listener_port must be a valid integer when other nodes peer to it."))
for peer in peers:
if peer.listener_port is None:
raise serializers.ValidationError(_("Field listener_port must be set on peer ") + peer.hostname + ".")
if not settings.IS_K8S:
if check_peers_changed():
raise serializers.ValidationError(_("Cannot change peers."))
return super().validate(attrs)
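The rules above interact in a few non-obvious ways (control and hybrid nodes reject manual peers, and listener_port becomes mandatory as soon as anything peers to the node). A minimal plain-Python restatement may make the intent easier to follow; check_peer_rules and the dict-shaped peers are hypothetical stand-ins for the serializer and Instance objects, illustration only:

# Illustration only: a standalone restatement of the peer rules above.
def check_peer_rules(node_type, peers_from_control_nodes, listener_port, peers):
    errors = []
    if peers_from_control_nodes and node_type not in ("execution", "hop"):
        errors.append("peers_from_control_nodes requires an execution or hop node")
    if node_type in ("control", "hybrid") and peers:
        errors.append("peers may not be set manually on control or hybrid nodes")
    if peers_from_control_nodes and not listener_port:
        errors.append("listener_port is required when peers_from_control_nodes is enabled")
    for peer in peers:
        if peer.get("listener_port") is None:
            errors.append("peer %s has no listener_port" % peer["hostname"])
    return errors

# A hop node that control nodes should peer to needs a listener port:
print(check_peer_rules("hop", True, None, []))
# An execution node peering to a hop node that listens on 27199 passes:
print(check_peer_rules("execution", False, None, [{"hostname": "hop1", "listener_port": 27199}]))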
def validate_node_type(self, value):
if not self.instance:
if value not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only create execution nodes.")
else:
if self.instance.node_type != value:
raise serializers.ValidationError("Cannot change node type.")
if not self.instance and value not in [Instance.Types.HOP, Instance.Types.EXECUTION]:
raise serializers.ValidationError(_("Can only create execution or hop nodes."))
if self.instance and self.instance.node_type != value:
raise serializers.ValidationError(_("Cannot change node type."))
return value
@@ -5351,30 +5531,41 @@ class InstanceSerializer(BaseSerializer):
if self.instance:
if value != self.instance.node_state:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only change the state on Kubernetes or OpenShift.")
raise serializers.ValidationError(_("Can only change the state on Kubernetes or OpenShift."))
if value != Instance.States.DEPROVISIONING:
raise serializers.ValidationError("Can only change instances to the 'deprovisioning' state.")
if self.instance.node_type not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only deprovision execution nodes.")
raise serializers.ValidationError(_("Can only change instances to the 'deprovisioning' state."))
if self.instance.node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("Can only deprovision execution or hop nodes."))
else:
if value and value != Instance.States.INSTALLED:
raise serializers.ValidationError("Can only create instances in the 'installed' state.")
raise serializers.ValidationError(_("Can only create instances in the 'installed' state."))
return value
def validate_hostname(self, value):
"""
- Hostname cannot be "localhost" - but can be something like localhost.domain
- Cannot change the hostname of an already-instantiated and initialized Instance object
Cannot change the hostname
"""
if self.instance and self.instance.hostname != value:
raise serializers.ValidationError("Cannot change hostname.")
raise serializers.ValidationError(_("Cannot change hostname."))
return value
def validate_listener_port(self, value):
if self.instance and self.instance.listener_port != value:
raise serializers.ValidationError("Cannot change listener port.")
"""
Cannot change the listener port, except to set it when it is currently None or to clear it back to None
"""
if value and self.instance and self.instance.listener_port and self.instance.listener_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
return value
def validate_peers_from_control_nodes(self, value):
"""
Can only enable for K8S based deployments
"""
if value and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only be enabled on Kubernetes or Openshift."))
return value
@@ -5382,7 +5573,45 @@ class InstanceSerializer(BaseSerializer):
class InstanceHealthCheckSerializer(BaseSerializer):
class Meta:
model = Instance
read_only_fields = ('uuid', 'hostname', 'version', 'last_health_check', 'errors', 'cpu', 'memory', 'cpu_capacity', 'mem_capacity', 'capacity')
read_only_fields = (
'uuid',
'hostname',
'ip_address',
'version',
'last_health_check',
'errors',
'cpu',
'memory',
'cpu_capacity',
'mem_capacity',
'capacity',
)
fields = read_only_fields
class HostMetricSerializer(BaseSerializer):
show_capabilities = ['delete']
class Meta:
model = HostMetric
fields = (
"id",
"hostname",
"url",
"first_automation",
"last_automation",
"last_deleted",
"automated_counter",
"deleted_counter",
"deleted",
"used_in_inventories",
)
class HostMetricSummaryMonthlySerializer(BaseSerializer):
class Meta:
model = HostMetricSummaryMonthly
read_only_fields = ("id", "date", "license_consumed", "license_capacity", "hosts_added", "hosts_deleted", "indirectly_managed_hosts")
fields = read_only_fields
@@ -5396,7 +5625,7 @@ class InstanceGroupSerializer(BaseSerializer):
instances = serializers.SerializerMethodField()
is_container_group = serializers.BooleanField(
required=False,
help_text=_('Indicates whether instances in this group are containerized.' 'Containerized groups have a designated Openshift or Kubernetes cluster.'),
help_text=_('Indicates whether instances in this group are containerized. Containerized groups have a designated Openshift or Kubernetes cluster.'),
)
# NOTE: help_text is duplicated from field definitions, no obvious way of
# both defining field details here and also getting the field's help_text
@@ -5407,7 +5636,7 @@ class InstanceGroupSerializer(BaseSerializer):
required=False,
initial=0,
label=_('Policy Instance Percentage'),
help_text=_("Minimum percentage of all instances that will be automatically assigned to " "this group when new instances come online."),
help_text=_("Minimum percentage of all instances that will be automatically assigned to this group when new instances come online."),
)
policy_instance_minimum = serializers.IntegerField(
default=0,
@@ -5415,7 +5644,7 @@ class InstanceGroupSerializer(BaseSerializer):
required=False,
initial=0,
label=_('Policy Instance Minimum'),
help_text=_("Static minimum number of Instances that will be automatically assign to " "this group when new instances come online."),
help_text=_("Static minimum number of Instances that will be automatically assign to this group when new instances come online."),
)
max_concurrent_jobs = serializers.IntegerField(
default=0,


@@ -1,16 +1,10 @@
import json
import warnings
from coreapi.document import Object, Link
from rest_framework import exceptions
from rest_framework.permissions import AllowAny
from rest_framework.renderers import CoreJSONRenderer
from rest_framework.response import Response
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from rest_framework.views import APIView
from rest_framework_swagger import renderers
from drf_yasg.views import get_schema_view
from drf_yasg import openapi
class SuperUserSchemaGenerator(SchemaGenerator):
@@ -55,43 +49,15 @@ class AutoSchema(DRFAuthSchema):
return description
class SwaggerSchemaView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
renderer_classes = [CoreJSONRenderer, renderers.OpenAPIRenderer, renderers.SwaggerUIRenderer]
def get(self, request):
generator = SuperUserSchemaGenerator(title='Ansible Automation Platform controller API', patterns=None, urlconf=None)
schema = generator.get_schema(request=request)
# python core-api doesn't support the deprecation yet, so track it
# ourselves and return it in a response header
_deprecated = []
# By default, DRF OpenAPI serialization places all endpoints in
# a single node based on their root path (/api). Instead, we want to
# group them by topic/tag so that they're categorized in the rendered
# output
document = schema._data.pop('api')
for path, node in document.items():
if isinstance(node, Object):
for action in node.values():
topic = getattr(action, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if isinstance(action, Object):
for link in action.links.values():
if link.deprecated:
_deprecated.append(link.url)
elif isinstance(node, Link):
topic = getattr(node, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if not schema:
raise exceptions.ValidationError('The schema generator did not return a schema Document')
return Response(schema, headers={'X-Deprecated-Paths': json.dumps(_deprecated)})
schema_view = get_schema_view(
openapi.Info(
title="Snippets API",
default_version='v1',
description="Test description",
terms_of_service="https://www.google.com/policies/terms/",
contact=openapi.Contact(email="contact@snippets.local"),
license=openapi.License(name="BSD License"),
),
public=True,
permission_classes=[AllowAny],
)


@@ -0,0 +1,18 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% endifmeth %}
{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:
Make a DELETE request to this resource to soft-delete this {{ model_verbose_name }}.
A soft deletion will mark the `deleted` field as true and exclude the host
metric from license calculations.
This may be undone if the same hostname is automated again later.
{% endifmeth %}
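For reference, a hedged sketch of exercising the soft-delete described above from a client; the base URL, credentials, and the metric id are placeholders, not values from this changeset:

# Illustration only: soft-deleting host metric 42 through the API.
import requests

resp = requests.delete(
    "https://awx.example.com/api/v2/host_metrics/42/",
    auth=("admin", "password"),
)
print(resp.status_code)  # expect 204; the record is marked deleted, not removed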


@@ -2,21 +2,36 @@ receptor_user: awx
receptor_group: awx
receptor_verify: true
receptor_tls: true
receptor_mintls13: false
{% if instance.node_type == "execution" %}
receptor_work_commands:
ansible-runner:
command: ansible-runner
params: worker
allowruntimeparams: true
verifysignature: true
custom_worksign_public_keyfile: receptor/work-public-key.pem
additional_python_packages:
- ansible-runner
{% endif %}
custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/receptor-ca.crt
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
receptor_protocol: 'tcp'
{% if instance.listener_port %}
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_dependencies:
- python39-pip
{% else %}
receptor_listener: false
{% endif %}
{% if peers %}
receptor_peers:
{% for peer in peers %}
- host: {{ peer.host }}
port: {{ peer.port }}
protocol: tcp
{% endfor %}
{% endif %}
{% verbatim %}
podman_user: "{{ receptor_user }}"
podman_group: "{{ receptor_group }}"


@@ -1,20 +1,16 @@
{% verbatim %}
---
- hosts: all
become: yes
tasks:
- name: Create the receptor user
user:
{% verbatim %}
name: "{{ receptor_user }}"
{% endverbatim %}
shell: /bin/bash
- name: Enable Copr repo for Receptor
command: dnf copr enable ansible-awx/receptor -y
{% if instance.node_type == "execution" %}
- import_role:
name: ansible.receptor.podman
{% endif %}
- import_role:
name: ansible.receptor.setup
- name: Install ansible-runner
pip:
name: ansible-runner
executable: pip3.9
{% endverbatim %}


@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 1.1.0
version: 2.0.0

awx/api/urls/analytics.py (new file, 31 lines)

@@ -0,0 +1,31 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
import awx.api.views.analytics as analytics
urls = [
re_path(r'^$', analytics.AnalyticsRootView.as_view(), name='analytics_root_view'),
re_path(r'^authorized/$', analytics.AnalyticsAuthorizedView.as_view(), name='analytics_authorized'),
re_path(r'^reports/$', analytics.AnalyticsReportsList.as_view(), name='analytics_reports_list'),
re_path(r'^report/(?P<slug>[\w-]+)/$', analytics.AnalyticsReportDetail.as_view(), name='analytics_report_detail'),
re_path(r'^report_options/$', analytics.AnalyticsReportOptionsList.as_view(), name='analytics_report_options_list'),
re_path(r'^adoption_rate/$', analytics.AnalyticsAdoptionRateList.as_view(), name='analytics_adoption_rate'),
re_path(r'^adoption_rate_options/$', analytics.AnalyticsAdoptionRateList.as_view(), name='analytics_adoption_rate_options'),
re_path(r'^event_explorer/$', analytics.AnalyticsEventExplorerList.as_view(), name='analytics_event_explorer'),
re_path(r'^event_explorer_options/$', analytics.AnalyticsEventExplorerList.as_view(), name='analytics_event_explorer_options'),
re_path(r'^host_explorer/$', analytics.AnalyticsHostExplorerList.as_view(), name='analytics_host_explorer'),
re_path(r'^host_explorer_options/$', analytics.AnalyticsHostExplorerList.as_view(), name='analytics_host_explorer_options'),
re_path(r'^job_explorer/$', analytics.AnalyticsJobExplorerList.as_view(), name='analytics_job_explorer'),
re_path(r'^job_explorer_options/$', analytics.AnalyticsJobExplorerList.as_view(), name='analytics_job_explorer_options'),
re_path(r'^probe_templates/$', analytics.AnalyticsProbeTemplatesList.as_view(), name='analytics_probe_templates_explorer'),
re_path(r'^probe_templates_options/$', analytics.AnalyticsProbeTemplatesList.as_view(), name='analytics_probe_templates_options'),
re_path(r'^probe_template_for_hosts/$', analytics.AnalyticsProbeTemplateForHostsList.as_view(), name='analytics_probe_template_for_hosts_explorer'),
re_path(r'^probe_template_for_hosts_options/$', analytics.AnalyticsProbeTemplateForHostsList.as_view(), name='analytics_probe_template_for_hosts_options'),
re_path(r'^roi_templates/$', analytics.AnalyticsRoiTemplatesList.as_view(), name='analytics_roi_templates_explorer'),
re_path(r'^roi_templates_options/$', analytics.AnalyticsRoiTemplatesList.as_view(), name='analytics_roi_templates_options'),
]
__all__ = ['urls']


@@ -0,0 +1,10 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import HostMetricList, HostMetricDetail
urls = [re_path(r'^$', HostMetricList.as_view(), name='host_metric_list'), re_path(r'^(?P<pk>[0-9]+)/$', HostMetricDetail.as_view(), name='host_metric_detail')]
__all__ = ['urls']


@@ -6,7 +6,10 @@ from django.urls import re_path
from awx.api.views.inventory import (
InventoryList,
InventoryDetail,
ConstructedInventoryDetail,
ConstructedInventoryList,
InventoryActivityStreamList,
InventoryInputInventoriesList,
InventoryJobTemplateList,
InventoryAccessList,
InventoryObjectRolesList,
@@ -37,6 +40,7 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/script/$', InventoryScriptView.as_view(), name='inventory_script_view'),
re_path(r'^(?P<pk>[0-9]+)/tree/$', InventoryTreeView.as_view(), name='inventory_tree_view'),
re_path(r'^(?P<pk>[0-9]+)/inventory_sources/$', InventoryInventorySourcesList.as_view(), name='inventory_inventory_sources_list'),
re_path(r'^(?P<pk>[0-9]+)/input_inventories/$', InventoryInputInventoriesList.as_view(), name='inventory_input_inventories'),
re_path(r'^(?P<pk>[0-9]+)/update_inventory_sources/$', InventoryInventorySourcesUpdate.as_view(), name='inventory_inventory_sources_update'),
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', InventoryActivityStreamList.as_view(), name='inventory_activity_stream_list'),
re_path(r'^(?P<pk>[0-9]+)/job_templates/$', InventoryJobTemplateList.as_view(), name='inventory_job_template_list'),
@@ -48,4 +52,10 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/copy/$', InventoryCopy.as_view(), name='inventory_copy'),
]
__all__ = ['urls']
# Constructed inventory special views
constructed_inventory_urls = [
re_path(r'^$', ConstructedInventoryList.as_view(), name='constructed_inventory_list'),
re_path(r'^(?P<pk>[0-9]+)/$', ConstructedInventoryDetail.as_view(), name='constructed_inventory_detail'),
]
__all__ = ['urls', 'constructed_inventory_urls']


@@ -30,6 +30,7 @@ from awx.api.views import (
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
HostMetricSummaryMonthlyList,
)
from awx.api.views.bulk import (
@@ -41,15 +42,17 @@ from awx.api.views.bulk import (
from awx.api.views.mesh_visualizer import MeshVisualizer
from awx.api.views.metrics import MetricsView
from awx.api.views.analytics import AWX_ANALYTICS_API_PREFIX
from .organization import urls as organization_urls
from .user import urls as user_urls
from .project import urls as project_urls
from .project_update import urls as project_update_urls
from .inventory import urls as inventory_urls
from .inventory import urls as inventory_urls, constructed_inventory_urls
from .execution_environments import urls as execution_environment_urls
from .team import urls as team_urls
from .host import urls as host_urls
from .host_metric import urls as host_metric_urls
from .group import urls as group_urls
from .inventory_source import urls as inventory_source_urls
from .inventory_update import urls as inventory_update_urls
@@ -80,7 +83,7 @@ from .oauth2 import urls as oauth2_urls
from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -117,7 +120,10 @@ v2_urls = [
re_path(r'^project_updates/', include(project_update_urls)),
re_path(r'^teams/', include(team_urls)),
re_path(r'^inventories/', include(inventory_urls)),
re_path(r'^constructed_inventories/', include(constructed_inventory_urls)),
re_path(r'^hosts/', include(host_urls)),
re_path(r'^host_metrics/', include(host_metric_urls)),
re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^groups/', include(group_urls)),
re_path(r'^inventory_sources/', include(inventory_source_urls)),
re_path(r'^inventory_updates/', include(inventory_update_urls)),
@@ -141,6 +147,7 @@ v2_urls = [
re_path(r'^unified_job_templates/$', UnifiedJobTemplateList.as_view(), name='unified_job_template_list'),
re_path(r'^unified_jobs/$', UnifiedJobList.as_view(), name='unified_job_list'),
re_path(r'^activity_stream/', include(activity_stream_urls)),
re_path(rf'^{AWX_ANALYTICS_API_PREFIX}/', include(analytics_urls)),
re_path(r'^workflow_approval_templates/', include(workflow_approval_template_urls)),
re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
@@ -159,10 +166,13 @@ urlpatterns = [
]
if MODE == 'development':
# Only include these if we are in the development environment
from awx.api.swagger import SwaggerSchemaView
urlpatterns += [re_path(r'^swagger/$', SwaggerSchemaView.as_view(), name='swagger_view')]
from awx.api.swagger import schema_view
from awx.api.urls.debug import urls as debug_urls
urlpatterns += [re_path(r'^debug/', include(debug_urls))]
urlpatterns += [
re_path(r'^swagger(?P<format>\.json|\.yaml)/$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
re_path(r'^swagger/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
re_path(r'^redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
]


@@ -17,7 +17,6 @@ from collections import OrderedDict
from urllib3.exceptions import ConnectTimeoutError
# Django
from django.conf import settings
from django.core.exceptions import FieldError, ObjectDoesNotExist
@@ -30,7 +29,7 @@ from django.utils.safestring import mark_safe
from django.utils.timezone import now
from django.views.decorators.csrf import csrf_exempt
from django.template.loader import render_to_string
from django.http import HttpResponse
from django.http import HttpResponse, HttpResponseRedirect
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import gettext_lazy as _
@@ -63,7 +62,7 @@ from wsgiref.util import FileWrapper
# AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
from awx.main.access import get_user_queryset, HostAccess
from awx.main.access import get_user_queryset
from awx.api.generics import (
APIView,
BaseUsersList,
@@ -342,17 +341,18 @@ class InstanceDetail(RetrieveUpdateAPIView):
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they are unwanted on the detail view
data.pop('listener_port', None)
data.pop('node_type', None)
data.pop('hostname', None)
data.pop('ip_address', None)
return super(InstanceDetail, self).update_raw_data(data)
def update(self, request, *args, **kwargs):
r = super(InstanceDetail, self).update(request, *args, **kwargs)
if status.is_success(r.status_code):
obj = self.get_object()
obj.set_capacity_value()
obj.save(update_fields=['capacity'])
capacity_changed = obj.set_capacity_value()
if capacity_changed:
obj.save(update_fields=['capacity'])
r.data = serializers.InstanceSerializer(obj, context=self.get_serializer_context()).to_representation(obj)
return r
@@ -566,7 +566,7 @@ class LaunchConfigCredentialsBase(SubListAttachDetachAPIView):
if self.relationship not in ask_mapping:
return {"msg": _("Related template cannot accept {} on launch.").format(self.relationship)}
elif sub.passwords_needed:
return {"msg": _("Credential that requires user input on launch " "cannot be used in saved launch configuration.")}
return {"msg": _("Credential that requires user input on launch cannot be used in saved launch configuration.")}
ask_field_name = ask_mapping[self.relationship]
@@ -795,13 +795,7 @@ class ExecutionEnvironmentActivityStreamList(SubListAPIView):
parent_model = models.ExecutionEnvironment
relationship = 'activitystream_set'
search_fields = ('changes',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(execution_environment=parent)
filter_read_permission = False
class ProjectList(ListCreateAPIView):
@@ -1548,6 +1542,40 @@ class HostRelatedSearchMixin(object):
return ret
class HostMetricList(ListAPIView):
name = _("Host Metrics List")
model = models.HostMetric
serializer_class = serializers.HostMetricSerializer
permission_classes = (IsSystemAdminOrAuditor,)
search_fields = ('hostname', 'deleted')
def get_queryset(self):
return self.model.objects.all()
class HostMetricDetail(RetrieveDestroyAPIView):
name = _("Host Metric Detail")
model = models.HostMetric
serializer_class = serializers.HostMetricSerializer
permission_classes = (IsSystemAdminOrAuditor,)
def delete(self, request, *args, **kwargs):
self.get_object().soft_delete()
return Response(status=status.HTTP_204_NO_CONTENT)
class HostMetricSummaryMonthlyList(ListAPIView):
name = _("Host Metrics Summary Monthly")
model = models.HostMetricSummaryMonthly
serializer_class = serializers.HostMetricSummaryMonthlySerializer
permission_classes = (IsSystemAdminOrAuditor,)
search_fields = ('date',)
def get_queryset(self):
return self.model.objects.all()
class HostList(HostRelatedSearchMixin, ListCreateAPIView):
always_allow_superuser = False
model = models.Host
@@ -1576,6 +1604,8 @@ class HostDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
def delete(self, request, *args, **kwargs):
if self.get_object().inventory.pending_deletion:
return Response({"error": _("The inventory for this host is already being deleted.")}, status=status.HTTP_400_BAD_REQUEST)
if self.get_object().inventory.kind == 'constructed':
return Response({"error": _("Delete constructed inventory hosts from input inventory.")}, status=status.HTTP_400_BAD_REQUEST)
return super(HostDetail, self).delete(request, *args, **kwargs)
@@ -1583,6 +1613,14 @@ class HostAnsibleFactsDetail(RetrieveAPIView):
model = models.Host
serializer_class = serializers.AnsibleFactsSerializer
def get(self, request, *args, **kwargs):
obj = self.get_object()
if obj.inventory.kind == 'constructed':
# If this is a constructed inventory host, it is not the source of truth about facts
# redirect to the original input inventory host instead
return HttpResponseRedirect(reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.instance_id}, request=self.request))
return super().get(request, *args, **kwargs)
class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIView):
model = models.Host
@@ -1590,13 +1628,7 @@ class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIVie
parent_model = models.Inventory
relationship = 'hosts'
parent_key = 'inventory'
def get_queryset(self):
inventory = self.get_parent_object()
qs = getattrd(inventory, self.relationship).all()
# Apply queryset optimizations
qs = qs.select_related(*HostAccess.select_related).prefetch_related(*HostAccess.prefetch_related)
return qs
filter_read_permission = False
class HostGroupsList(SubListCreateAttachDetachAPIView):
@@ -2469,7 +2501,7 @@ class JobTemplateSurveySpec(GenericAPIView):
return Response(
dict(
error=_(
"$encrypted$ is a reserved keyword for password question defaults, " "survey question {idx} is type {survey_item[type]}."
"$encrypted$ is a reserved keyword for password question defaults, survey question {idx} is type {survey_item[type]}."
).format(**context)
),
status=status.HTTP_400_BAD_REQUEST,
@@ -2537,16 +2569,7 @@ class JobTemplateCredentialsList(SubListCreateAttachDetachAPIView):
serializer_class = serializers.CredentialSerializer
parent_model = models.JobTemplate
relationship = 'credentials'
def get_queryset(self):
# Return the full list of credentials
parent = self.get_parent_object()
self.check_parent_access(parent)
sublist_qs = getattrd(parent, self.relationship)
sublist_qs = sublist_qs.prefetch_related(
'created_by', 'modified_by', 'admin_role', 'use_role', 'read_role', 'admin_role__parents', 'admin_role__members'
)
return sublist_qs
filter_read_permission = False
def is_valid_relation(self, parent, sub, created=False):
if sub.unique_hash() in [cred.unique_hash() for cred in parent.credentials.all()]:
@@ -2648,7 +2671,10 @@ class JobTemplateCallback(GenericAPIView):
# Permission class should have already validated host_config_key.
job_template = self.get_object()
# Attempt to find matching hosts based on remote address.
matching_hosts = self.find_matching_hosts()
if job_template.inventory:
matching_hosts = self.find_matching_hosts()
else:
return Response({"msg": _("Cannot start automatically, an inventory is required.")}, status=status.HTTP_400_BAD_REQUEST)
# If the host is not found, update the inventory before trying to
# match again.
inventory_sources_already_updated = []
@@ -2733,6 +2759,7 @@ class JobTemplateInstanceGroupsList(SubListAttachDetachAPIView):
serializer_class = serializers.InstanceGroupSerializer
parent_model = models.JobTemplate
relationship = 'instance_groups'
filter_read_permission = False
class JobTemplateAccessList(ResourceAccessList):
@@ -2823,16 +2850,7 @@ class WorkflowJobTemplateNodeChildrenBaseList(EnforceParentRelationshipMixin, Su
relationship = ''
enforce_parent_relationship = 'workflow_job_template'
search_fields = ('unified_job_template__name', 'unified_job_template__description')
'''
Limit the set of WorkflowJobTemplateNodes to the related nodes of specified by
'relationship'
'''
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).all()
filter_read_permission = False
def is_valid_relation(self, parent, sub, created=False):
if created:
@@ -2907,14 +2925,7 @@ class WorkflowJobNodeChildrenBaseList(SubListAPIView):
parent_model = models.WorkflowJobNode
relationship = ''
search_fields = ('unified_job_template__name', 'unified_job_template__description')
#
# Limit the set of WorkflowJobNodes to the related nodes of specified by self.relationship
#
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).all()
filter_read_permission = False
class WorkflowJobNodeSuccessNodesList(WorkflowJobNodeChildrenBaseList):
@@ -3093,11 +3104,8 @@ class WorkflowJobTemplateWorkflowNodesList(SubListCreateAPIView):
relationship = 'workflow_job_template_nodes'
parent_key = 'workflow_job_template'
search_fields = ('unified_job_template__name', 'unified_job_template__description')
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).order_by('id')
ordering = ('id',)  # ensure ordering by id for consistency
filter_read_permission = False
class WorkflowJobTemplateJobsList(SubListAPIView):
@@ -3189,11 +3197,8 @@ class WorkflowJobWorkflowNodesList(SubListAPIView):
relationship = 'workflow_job_nodes'
parent_key = 'workflow_job'
search_fields = ('unified_job_template__name', 'unified_job_template__description')
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).order_by('id')
ordering = ('id',)  # ensure ordering by id for consistency
filter_read_permission = False
class WorkflowJobCancel(GenericCancelView):
@@ -3328,7 +3333,6 @@ class JobLabelList(SubListAPIView):
serializer_class = serializers.LabelSerializer
parent_model = models.Job
relationship = 'labels'
parent_key = 'job'
class WorkflowJobLabelList(JobLabelList):
@@ -3507,11 +3511,7 @@ class BaseJobHostSummariesList(SubListAPIView):
relationship = 'job_host_summaries'
name = _('Job Host Summaries List')
search_fields = ('host_name',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).select_related('job', 'job__job_template', 'host')
filter_read_permission = False
class HostJobHostSummariesList(BaseJobHostSummariesList):
@@ -4055,7 +4055,7 @@ class UnifiedJobStdout(RetrieveAPIView):
return super(UnifiedJobStdout, self).retrieve(request, *args, **kwargs)
except models.StdoutMaxBytesExceeded as e:
response_message = _(
"Standard Output too large to display ({text_size} bytes), " "only download supported for sizes over {supported_size} bytes."
"Standard Output too large to display ({text_size} bytes), only download supported for sizes over {supported_size} bytes."
).format(text_size=e.total, supported_size=e.supported)
if request.accepted_renderer.format == 'json':
return Response({'range': {'start': 0, 'end': 1, 'absolute_end': 1}, 'content': response_message})

awx/api/views/analytics.py (new file, 296 lines)

@@ -0,0 +1,296 @@
import requests
import logging
import urllib.parse as urlparse
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from django.utils import translation
from awx.api.generics import APIView, Response
from awx.api.permissions import AnalyticsPermission
from awx.api.versioning import reverse
from awx.main.utils import get_awx_version
from rest_framework import status
from collections import OrderedDict
AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
AWX_ANALYTICS_API_PREFIX = 'analytics'
ERROR_UPLOAD_NOT_ENABLED = "analytics-upload-not-enabled"
ERROR_MISSING_URL = "missing-url"
ERROR_MISSING_USER = "missing-user"
ERROR_MISSING_PASSWORD = "missing-password"
ERROR_NO_DATA_OR_ENTITLEMENT = "no-data-or-entitlement"
ERROR_NOT_FOUND = "not-found"
ERROR_UNAUTHORIZED = "unauthorized"
ERROR_UNKNOWN = "unknown"
ERROR_UNSUPPORTED_METHOD = "unsupported-method"
logger = logging.getLogger('awx.api.views.analytics')
class MissingSettings(Exception):
"""Settings are not correct Exception"""
pass
class GetNotAllowedMixin(object):
def get(self, request, format=None):
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
class AnalyticsRootView(APIView):
permission_classes = (AnalyticsPermission,)
name = _('Automation Analytics')
swagger_topic = 'Automation Analytics'
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized')
data['reports'] = reverse('api:analytics_reports_list')
data['report_options'] = reverse('api:analytics_report_options_list')
data['adoption_rate'] = reverse('api:analytics_adoption_rate')
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options')
data['event_explorer'] = reverse('api:analytics_event_explorer')
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options')
data['host_explorer'] = reverse('api:analytics_host_explorer')
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options')
data['job_explorer'] = reverse('api:analytics_job_explorer')
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options')
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer')
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options')
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer')
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options')
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer')
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options')
return Response(data)
class AnalyticsGenericView(APIView):
"""
Example:
headers = {
'Content-Type': 'application/json',
}
params = {
'limit': '20',
'offset': '0',
'sort_by': 'name:asc',
}
json_data = {
'limit': '20',
'offset': '0',
'sort_options': 'name',
'sort_order': 'asc',
'tags': [],
'slug': [],
'name': [],
'description': '',
}
response = requests.post(f'{AUTOMATION_ANALYTICS_API_URL}/reports/', params=params,
headers=headers, json=json_data)
return Response(response.json(), status=response.status_code)
"""
permission_classes = (AnalyticsPermission,)
@staticmethod
def _request_headers(request):
headers = {}
for header in ['Content-Type', 'Content-Length', 'Accept-Encoding', 'User-Agent', 'Accept']:
if request.headers.get(header, None):
headers[header] = request.headers.get(header)
headers['X-Rh-Analytics-Source'] = 'controller'
headers['X-Rh-Analytics-Source-Version'] = get_awx_version()
headers['Accept-Language'] = translation.get_language()
return headers
@staticmethod
def _get_analytics_path(request_path):
parts = request_path.split(f'{AWX_ANALYTICS_API_PREFIX}/')
path_specific = parts[-1]
return f"{AUTOMATION_ANALYTICS_API_URL_PATH}/{path_specific}"
def _get_analytics_url(self, request_path):
analytics_path = self._get_analytics_path(request_path)
url = getattr(settings, 'AUTOMATION_ANALYTICS_URL', None)
if not url:
raise MissingSettings(ERROR_MISSING_URL)
url_parts = urlparse.urlsplit(url)
analytics_url = urlparse.urlunsplit([url_parts.scheme, url_parts.netloc, analytics_path, url_parts.query, url_parts.fragment])
return analytics_url
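A standalone sketch of what this path translation does, with a placeholder AUTOMATION_ANALYTICS_URL standing in for the real setting; illustration only:

# Illustration only: how a controller request path maps to the upstream URL.
import urllib.parse as urlparse

AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
AWX_ANALYTICS_API_PREFIX = "analytics"
AUTOMATION_ANALYTICS_URL = "https://console.example.com/api/tower-analytics/v1"  # placeholder

def analytics_url(request_path):
    path_specific = request_path.split(f"{AWX_ANALYTICS_API_PREFIX}/")[-1]
    analytics_path = f"{AUTOMATION_ANALYTICS_API_URL_PATH}/{path_specific}"
    parts = urlparse.urlsplit(AUTOMATION_ANALYTICS_URL)
    return urlparse.urlunsplit([parts.scheme, parts.netloc, analytics_path, parts.query, parts.fragment])

print(analytics_url("/api/v2/analytics/job_explorer/"))
# https://console.example.com/api/tower-analytics/v1/job_explorer/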
@staticmethod
def _get_setting(setting_name, default, error_message):
setting = getattr(settings, setting_name, default)
if not setting:
raise MissingSettings(error_message)
return setting
@staticmethod
def _error_response(keyword, message=None, remote=True, remote_status_code=None, status_code=status.HTTP_403_FORBIDDEN):
text = {"error": {"remote": remote, "remote_status": remote_status_code, "keyword": keyword}}
if message:
text["error"]["message"] = message
return Response(text, status=status_code)
def _error_response_404(self, response):
try:
json_response = response.json()
# Subscription/entitlement problem or missing tenant data in AA db => HTTP 403
message = json_response.get('error', None)
if message:
return self._error_response(ERROR_NO_DATA_OR_ENTITLEMENT, message, remote=True, remote_status_code=response.status_code)
# Standard 404 problem => HTTP 404
message = json_response.get('detail', None) or response.text
except requests.exceptions.JSONDecodeError:
# Unexpected text => still HTTP 404
message = response.text
return self._error_response(ERROR_NOT_FOUND, message, remote=True, remote_status_code=status.HTTP_404_NOT_FOUND, status_code=status.HTTP_404_NOT_FOUND)
@staticmethod
def _update_response_links(json_response):
if not json_response.get('links', None):
return
for key, value in json_response['links'].items():
if value:
json_response['links'][key] = value.replace(AUTOMATION_ANALYTICS_API_URL_PATH, f"/api/v2/{AWX_ANALYTICS_API_PREFIX}")
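The same rewrite shown on a made-up links payload, so the before/after is visible; illustration only:

# Illustration only: rewriting upstream pagination links so clients keep
# calling the controller proxy.
UPSTREAM_PREFIX = "/api/tower-analytics/v1"
LOCAL_PREFIX = "/api/v2/analytics"

links = {"next": UPSTREAM_PREFIX + "/reports/?limit=20&offset=20", "previous": None}
rewritten = {k: v.replace(UPSTREAM_PREFIX, LOCAL_PREFIX) if v else v for k, v in links.items()}
print(rewritten)
# {'next': '/api/v2/analytics/reports/?limit=20&offset=20', 'previous': None}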
def _forward_response(self, response):
try:
content_type = response.headers.get('content-type', '')
if content_type.find('application/json') != -1:
json_response = response.json()
self._update_response_links(json_response)
return Response(json_response, status=response.status_code)
except Exception as e:
logger.error(f"Analytics API: Response error: {e}")
return Response(response.content, status=response.status_code)
def _send_to_analytics(self, request, method):
try:
headers = self._request_headers(request)
self._get_setting('INSIGHTS_TRACKING_STATE', False, ERROR_UPLOAD_NOT_ENABLED)
url = self._get_analytics_url(request.path)
rh_user = self._get_setting('REDHAT_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('REDHAT_PASSWORD', None, ERROR_MISSING_PASSWORD)
if method not in ["GET", "POST", "OPTIONS"]:
return self._error_response(ERROR_UNSUPPORTED_METHOD, method, remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
else:
response = requests.request(
method,
url,
auth=(rh_user, rh_password),
verify=settings.INSIGHTS_CERT_PATH,
params=request.query_params,
headers=headers,
json=request.data,
timeout=(31, 31),
)
#
# Missing or wrong user/pass
#
if response.status_code == status.HTTP_401_UNAUTHORIZED:
text = (response.text or '').rstrip("\n")
return self._error_response(ERROR_UNAUTHORIZED, text, remote=True, remote_status_code=response.status_code)
#
# Not found, No entitlement or No data in Analytics
#
elif response.status_code == status.HTTP_404_NOT_FOUND:
return self._error_response_404(response)
#
# Successful responses, and errors other than 401/404, are forwarded as-is
#
else:
return self._forward_response(response)
except MissingSettings as e:
logger.warning(f"Analytics API: Setting missing: {e.args[0]}")
return self._error_response(e.args[0], remote=False)
except requests.exceptions.RequestException as e:
logger.error(f"Analytics API: Request error: {e}")
return self._error_response(ERROR_UNKNOWN, str(e), remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
except Exception as e:
logger.error(f"Analytics API: Error: {e}")
return self._error_response(ERROR_UNKNOWN, str(e), remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
class AnalyticsGenericListView(AnalyticsGenericView):
def get(self, request, format=None):
return self._send_to_analytics(request, method="GET")
def post(self, request, format=None):
return self._send_to_analytics(request, method="POST")
def options(self, request, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsGenericDetailView(AnalyticsGenericView):
def get(self, request, slug, format=None):
return self._send_to_analytics(request, method="GET")
def post(self, request, slug, format=None):
return self._send_to_analytics(request, method="POST")
def options(self, request, slug, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsAuthorizedView(AnalyticsGenericListView):
name = _("Authorized")
class AnalyticsReportsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Reports")
swagger_topic = "Automation Analytics"
class AnalyticsReportDetail(AnalyticsGenericDetailView):
name = _("Report")
class AnalyticsReportOptionsList(AnalyticsGenericListView):
name = _("Report Options")
class AnalyticsAdoptionRateList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Adoption Rate")
class AnalyticsEventExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Event Explorer")
class AnalyticsHostExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Host Explorer")
class AnalyticsJobExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Job Explorer")
class AnalyticsProbeTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Templates")
class AnalyticsProbeTemplateForHostsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Template For Hosts")
class AnalyticsRoiTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("ROI Templates")


@@ -1,5 +1,7 @@
from collections import OrderedDict
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer
from rest_framework.reverse import reverse
@@ -18,6 +20,9 @@ from awx.api import (
class BulkView(APIView):
name = _('Bulk')
swagger_topic = 'Bulk'
permission_classes = [IsAuthenticated]
renderer_classes = [
renderers.BrowsableAPIRenderer,


@@ -6,6 +6,8 @@ import io
import ipaddress
import os
import tarfile
import time
import re
import asn1
from awx.api import serializers
@@ -40,6 +42,8 @@ RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
# │ │ └── receptor.key
# │ └── work-public-key.pem
# └── requirements.yml
class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle')
model = models.Instance
@@ -49,56 +53,54 @@ class InstanceInstallBundle(GenericAPIView):
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()
if instance_obj.node_type not in ('execution',):
if instance_obj.node_type not in ('execution', 'hop'):
return Response(
data=dict(msg=_('Install bundle can only be generated for execution nodes.')),
data=dict(msg=_('Install bundle can only be generated for execution or hop nodes.')),
status=status.HTTP_400_BAD_REQUEST,
)
with io.BytesIO() as f:
with tarfile.open(fileobj=f, mode='w:gz') as tar:
# copy /etc/receptor/tls/ca/receptor-ca.crt to receptor/tls/ca in the tar file
tar.add(
os.path.realpath('/etc/receptor/tls/ca/receptor-ca.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/receptor-ca.crt"
)
# copy /etc/receptor/tls/ca/mesh-CA.crt to receptor/tls/ca in the tar file
tar.add(os.path.realpath('/etc/receptor/tls/ca/mesh-CA.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/mesh-CA.crt")
# copy /etc/receptor/signing/work-public-key.pem to receptor/work-public-key.pem
tar.add('/etc/receptor/signing/work-public-key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work-public-key.pem")
# copy /etc/receptor/work_public_key.pem to receptor/work_public_key.pem
tar.add('/etc/receptor/work_public_key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work_public_key.pem")
# generate and write the receptor key to receptor/tls/receptor.key in the tar file
key, cert = generate_receptor_tls(instance_obj)
def tar_addfile(tarinfo, filecontent):
tarinfo.mtime = time.time()
tarinfo.size = len(filecontent)
tar.addfile(tarinfo, io.BytesIO(filecontent))
key_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.key")
key_tarinfo.size = len(key)
tar.addfile(key_tarinfo, io.BytesIO(key))
tar_addfile(key_tarinfo, key)
cert_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.crt")
cert_tarinfo.size = len(cert)
tar.addfile(cert_tarinfo, io.BytesIO(cert))
tar_addfile(cert_tarinfo, cert)
# generate and write install_receptor.yml to the tar file
playbook = generate_playbook().encode('utf-8')
playbook = generate_playbook(instance_obj).encode('utf-8')
playbook_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/install_receptor.yml")
playbook_tarinfo.size = len(playbook)
tar.addfile(playbook_tarinfo, io.BytesIO(playbook))
tar_addfile(playbook_tarinfo, playbook)
# generate and write inventory.yml to the tar file
inventory_yml = generate_inventory_yml(instance_obj).encode('utf-8')
inventory_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/inventory.yml")
inventory_yml_tarinfo.size = len(inventory_yml)
tar.addfile(inventory_yml_tarinfo, io.BytesIO(inventory_yml))
tar_addfile(inventory_yml_tarinfo, inventory_yml)
# generate and write group_vars/all.yml to the tar file
group_vars = generate_group_vars_all_yml(instance_obj).encode('utf-8')
group_vars_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/group_vars/all.yml")
group_vars_tarinfo.size = len(group_vars)
tar.addfile(group_vars_tarinfo, io.BytesIO(group_vars))
tar_addfile(group_vars_tarinfo, group_vars)
# generate and write requirements.yml to the tar file
requirements_yml = generate_requirements_yml().encode('utf-8')
requirements_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/requirements.yml")
requirements_yml_tarinfo.size = len(requirements_yml)
tar.addfile(requirements_yml_tarinfo, io.BytesIO(requirements_yml))
tar_addfile(requirements_yml_tarinfo, requirements_yml)
# respond with the tarfile
f.seek(0)
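A minimal, self-contained sketch of the in-memory tar pattern used by tar_addfile above; the file name and content are made up, illustration only:

# Illustration only: build a gzipped tar entirely in a BytesIO buffer.
import io
import tarfile
import time

content = b"receptor_user: awx\n"
with io.BytesIO() as f:
    with tarfile.open(fileobj=f, mode="w:gz") as tar:
        info = tarfile.TarInfo("node1_install_bundle/group_vars/all.yml")
        info.mtime = time.time()
        info.size = len(content)
        tar.addfile(info, io.BytesIO(content))
    f.seek(0)
    print(len(f.getvalue()), "bytes of gzipped tar data")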
@@ -107,8 +109,10 @@ class InstanceInstallBundle(GenericAPIView):
return response
def generate_playbook():
return render_to_string("instance_install_bundle/install_receptor.yml")
def generate_playbook(instance_obj):
playbook_yaml = render_to_string("instance_install_bundle/install_receptor.yml", context=dict(instance=instance_obj))
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', playbook_yaml)
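Jinja conditionals that render to nothing leave blank lines behind in the generated YAML; a tiny illustration of the collapse applied above:

# Illustration only: collapsing the blank lines left by Jinja blocks.
import re

rendered = "---\n\n\n- hosts: all\n\n  become: yes\n"
print(re.sub(r"\n+", "\n", rendered))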
def generate_requirements_yml():
@@ -120,7 +124,12 @@ def generate_inventory_yml(instance_obj):
def generate_group_vars_all_yml(instance_obj):
return render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj))
peers = []
for instance in instance_obj.peers.all():
peers.append(dict(host=instance.hostname, port=instance.listener_port))
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj, peers=peers))
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', all_yaml)
def generate_receptor_tls(instance_obj):
@@ -161,14 +170,14 @@ def generate_receptor_tls(instance_obj):
.sign(key, hashes.SHA256())
)
# sign csr with the receptor ca key from /etc/receptor/ca/receptor-ca.key
with open('/etc/receptor/tls/ca/receptor-ca.key', 'rb') as f:
# sign the CSR with the receptor CA key from /etc/receptor/tls/ca/mesh-CA.key
with open('/etc/receptor/tls/ca/mesh-CA.key', 'rb') as f:
ca_key = serialization.load_pem_private_key(
f.read(),
password=None,
)
with open('/etc/receptor/tls/ca/receptor-ca.crt', 'rb') as f:
with open('/etc/receptor/tls/ca/mesh-CA.crt', 'rb') as f:
ca_cert = x509.load_pem_x509_certificate(f.read())
cert = (


@@ -14,6 +14,7 @@ from django.utils.translation import gettext_lazy as _
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
from rest_framework import serializers
# AWX
from awx.main.models import ActivityStream, Inventory, JobTemplate, Role, User, InstanceGroup, InventoryUpdateEvent, InventoryUpdate
@@ -31,6 +32,7 @@ from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.serializers import (
InventorySerializer,
ConstructedInventorySerializer,
ActivityStreamSerializer,
RoleSerializer,
InstanceGroupSerializer,
@@ -79,7 +81,9 @@ class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIVie
# Do not allow changes to an Inventory kind.
if kind is not None and obj.kind != kind:
return Response(dict(error=_('You cannot turn a regular inventory into a "smart" inventory.')), status=status.HTTP_405_METHOD_NOT_ALLOWED)
return Response(
dict(error=_('You cannot turn a regular inventory into a "smart" or "constructed" inventory.')), status=status.HTTP_405_METHOD_NOT_ALLOWED
)
return super(InventoryDetail, self).update(request, *args, **kwargs)
def destroy(self, request, *args, **kwargs):
@@ -94,6 +98,29 @@ class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIVie
return Response(dict(error=_("{0}".format(e))), status=status.HTTP_400_BAD_REQUEST)
class ConstructedInventoryDetail(InventoryDetail):
serializer_class = ConstructedInventorySerializer
class ConstructedInventoryList(InventoryList):
serializer_class = ConstructedInventorySerializer
def get_queryset(self):
r = super().get_queryset()
return r.filter(kind='constructed')
class InventoryInputInventoriesList(SubListAttachDetachAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Inventory
relationship = 'input_inventories'
def is_valid_relation(self, parent, sub, created=False):
if sub.kind == 'constructed':
raise serializers.ValidationError({'error': 'You cannot add a constructed inventory to another constructed inventory.'})
class InventoryActivityStreamList(SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer


@@ -50,7 +50,7 @@ class UnifiedJobDeletionMixin(object):
return Response({"error": _("Job has not finished processing events.")}, status=status.HTTP_400_BAD_REQUEST)
else:
# if it has been > 1 minute, events are probably lost
logger.warning('Allowing deletion of {} through the API without all events ' 'processed.'.format(obj.log_format))
logger.warning('Allowing deletion of {} through the API without all events processed.'.format(obj.log_format))
# Manually cascade delete events if unpartitioned job
if obj.has_unpartitioned_events:


@@ -61,12 +61,6 @@ class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_queryset(self):
qs = Organization.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role')
qs = qs.prefetch_related('created_by', 'modified_by')
return qs
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
@@ -207,6 +201,7 @@ class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
serializer_class = InstanceGroupSerializer
parent_model = Organization
relationship = 'instance_groups'
filter_read_permission = False
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
@@ -214,6 +209,7 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
serializer_class = CredentialSerializer
parent_model = Organization
relationship = 'galaxy_credentials'
filter_read_permission = False
def is_valid_relation(self, parent, sub, created=False):
if sub.kind != 'galaxy_api_token':


@@ -20,6 +20,7 @@ from rest_framework import status
import requests
from awx import MODE
from awx.api.generics import APIView
from awx.conf.registry import settings_registry
from awx.main.analytics import all_collectors
@@ -54,6 +55,8 @@ class ApiRootView(APIView):
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
if MODE == 'development':
data['swagger'] = drf_reverse('api:schema-swagger-ui')
return Response(data)
@@ -98,10 +101,13 @@ class ApiVersionRootView(APIView):
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['metrics'] = reverse('api:metrics_view', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['constructed_inventory'] = reverse('api:constructed_inventory_list', request=request)
data['inventory_sources'] = reverse('api:inventory_source_list', request=request)
data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['host_metrics'] = reverse('api:host_metric_list', request=request)
data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
@@ -122,6 +128,7 @@ class ApiVersionRootView(APIView):
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
data['analytics'] = reverse('api:analytics_root_view', request=request)
return Response(data)
@@ -272,6 +279,9 @@ class ApiV2ConfigView(APIView):
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
# Guarding against settings.UI_NEXT being set to a non-boolean value
ui_next_state = settings.UI_NEXT if settings.UI_NEXT in (True, False) else False
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
@@ -280,6 +290,7 @@ class ApiV2ConfigView(APIView):
analytics_status=pendo_state,
analytics_collectors=all_collectors(),
become_methods=PRIVILEGE_ESCALATION_METHODS,
ui_next=ui_next_state,
)
# If LDAP is enabled, user_ldap_fields will return a list of field

View File

@@ -114,7 +114,7 @@ class WebhookReceiverBase(APIView):
# Ensure that the full contents of the request are captured for multiple uses.
request.body
logger.debug("headers: {}\n" "data: {}\n".format(request.headers, request.data))
logger.debug("headers: {}\ndata: {}\n".format(request.headers, request.data))
obj = self.get_object()
self.check_signature(obj)


@@ -14,7 +14,7 @@ class ConfConfig(AppConfig):
def ready(self):
self.module.autodiscover()
if not set(sys.argv) & {'migrate', 'check_migrations'}:
if not set(sys.argv) & {'migrate', 'check_migrations', 'showmigrations'}:
from .settings import SettingsWrapper
SettingsWrapper.initialize()


@@ -0,0 +1,17 @@
# Generated by Django 4.2 on 2023-06-09 19:51
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('conf', '0009_rename_proot_settings'),
]
operations = [
migrations.AlterField(
model_name='setting',
name='value',
field=models.JSONField(null=True),
),
]


@@ -8,7 +8,6 @@ import json
from django.db import models
# AWX
from awx.main.fields import JSONBlob
from awx.main.models.base import CreatedModifiedModel, prevent_search
from awx.main.utils import encrypt_field
from awx.conf import settings_registry
@@ -18,7 +17,7 @@ __all__ = ['Setting']
class Setting(CreatedModifiedModel):
key = models.CharField(max_length=255)
value = JSONBlob(null=True)
value = models.JSONField(null=True)
user = prevent_search(models.ForeignKey('auth.User', related_name='settings', default=None, null=True, editable=False, on_delete=models.CASCADE))
def __str__(self):


@@ -5,11 +5,13 @@ import threading
import time
import os
from concurrent.futures import ThreadPoolExecutor
# Django
from django.conf import LazySettings
from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured
from django.core.exceptions import ImproperlyConfigured, SynchronousOnlyOperation
from django.db import transaction, connection
from django.db.utils import Error as DBError, ProgrammingError
from django.utils.functional import cached_property
@@ -157,7 +159,7 @@ class EncryptedCacheProxy(object):
obj_id = self.cache.get(Setting.get_cache_id_key(key), default=empty)
if obj_id is empty:
logger.info('Efficiency notice: Corresponding id not stored in cache %s', Setting.get_cache_id_key(key))
obj_id = getattr(self._get_setting_from_db(key), 'pk', None)
obj_id = getattr(_get_setting_from_db(self.registry, key), 'pk', None)
elif obj_id == SETTING_CACHE_NONE:
obj_id = None
return method(TransientSetting(pk=obj_id, value=value), 'value')
@@ -166,11 +168,6 @@ class EncryptedCacheProxy(object):
# a no-op; it just returns the provided value
return value
def _get_setting_from_db(self, key):
field = self.registry.get_setting_field(key)
if not field.read_only:
return Setting.objects.filter(key=key, user__isnull=True).order_by('pk').first()
def __getattr__(self, name):
return getattr(self.cache, name)
@@ -186,6 +183,22 @@ def get_settings_to_cache(registry):
return dict([(key, SETTING_CACHE_NOTSET) for key in get_writeable_settings(registry)])
# First attempt to get the setting from the database synchronously.
# If called from an async context, fall back to fetching it from the database in a worker thread.
def _get_setting_from_db(registry, key):
def get_settings_from_db_sync(registry, key):
field = registry.get_setting_field(key)
if not field.read_only or key == 'INSTALL_UUID':
return Setting.objects.filter(key=key, user__isnull=True).order_by('pk').first()
try:
return get_settings_from_db_sync(registry, key)
except SynchronousOnlyOperation:
with ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(get_settings_from_db_sync, registry, key)
return future.result()
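A generic sketch of the sync-in-a-thread fallback pattern, with stand-ins for the Django exception and the database call since the real ones require a configured Django environment; illustration only:

# Illustration only: fall back to a worker thread when a synchronous call
# is not allowed in the current (async) context.
from concurrent.futures import ThreadPoolExecutor

class SynchronousOnlyOperation(Exception):
    pass

def get_setting_sync(key):
    return f"value-of-{key}"  # pretend this hits the database

def get_setting(key, in_async_context=False):
    try:
        if in_async_context:
            raise SynchronousOnlyOperation()
        return get_setting_sync(key)
    except SynchronousOnlyOperation:
        with ThreadPoolExecutor(max_workers=1) as executor:
            return executor.submit(get_setting_sync, key).result()

print(get_setting("INSTALL_UUID", in_async_context=True))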
def get_cache_value(value):
"""Returns the proper special cache setting for a value
based on instance type.
@@ -345,7 +358,7 @@ class SettingsWrapper(UserSettingsHolder):
setting_id = None
# this value is read-only, however we *do* want to fetch its value from the database
if not field.read_only or name == 'INSTALL_UUID':
setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
setting = _get_setting_from_db(self.registry, name)
if setting:
if getattr(field, 'encrypted', False):
value = decrypt_field(setting, 'value')
@@ -405,6 +418,10 @@ class SettingsWrapper(UserSettingsHolder):
"""Get value while accepting the in-memory cache if key is available"""
with _ctit_db_wrapper(trans_safe=True):
return self._get_local(name)
# If the last line did not return, that means we hit a database error
# in that case, we should not have a local cache value
# thus, return empty as a signal to use the default
return empty
def __getattr__(self, name):
value = empty
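
For orientation, below is a minimal, self-contained sketch of the fallback pattern introduced in _get_setting_from_db above. The SynchronousOnlyOperation class and fetch_sync helper here are stand-ins rather than AWX code: when the synchronous path is refused inside an async context, the same query is retried on a one-worker thread pool.

import asyncio
from concurrent.futures import ThreadPoolExecutor


class SynchronousOnlyOperation(Exception):
    """Stand-in for django.core.exceptions.SynchronousOnlyOperation."""


def fetch_sync(key):
    # Pretend this is the ORM query: Django raises SynchronousOnlyOperation
    # when such a query runs in a thread that has a running event loop.
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return f'value-for-{key}'
    raise SynchronousOnlyOperation('You cannot call this from an async context')


def get_setting(key):
    # Same shape as the patched _get_setting_from_db: try the synchronous path
    # first, and only push the query to a worker thread when it is refused.
    try:
        return fetch_sync(key)
    except SynchronousOnlyOperation:
        with ThreadPoolExecutor(max_workers=1) as executor:
            return executor.submit(fetch_sync, key).result()


async def main():
    return get_setting('INSTALL_UUID')


print(get_setting('INSTALL_UUID'))  # plain synchronous caller
print(asyncio.run(main()))          # async caller transparently falls back to a thread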

View File

@@ -94,9 +94,7 @@ def test_setting_singleton_retrieve_readonly(api_request, dummy_setting):
@pytest.mark.django_db
def test_setting_singleton_update(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch('awx.conf.views.clear_setting_cache'):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 3})
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert response.data['FOO_BAR'] == 3
@@ -112,7 +110,7 @@ def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, du
# sure that the _Forbidden validator doesn't get used for the
# fields. See also https://github.com/ansible/awx/issues/4099.
with dummy_setting('FOO_BAR', field_class=sso_fields.SAMLOrgAttrField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request(
'patch',
@@ -126,7 +124,7 @@ def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, du
@pytest.mark.django_db
def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=4, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 5})
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
@@ -136,7 +134,7 @@ def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy
@pytest.mark.django_db
def test_setting_singleton_update_dont_change_encrypted_mark(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.CharField, encrypted=True, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 'password'})
assert Setting.objects.get(key='FOO_BAR').value.startswith('$encrypted$')
@@ -155,16 +153,14 @@ def test_setting_singleton_update_runs_custom_validate(api_request, dummy_settin
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), dummy_validate(
'foobar', func_raising_exception
), mock.patch('awx.conf.views.handle_setting_changes'):
), mock.patch('awx.conf.views.clear_setting_cache'):
response = api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 23})
assert response.status_code == 400
@pytest.mark.django_db
def test_setting_singleton_delete(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch('awx.conf.views.clear_setting_cache'):
api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert not response.data['FOO_BAR']
@@ -173,7 +169,7 @@ def test_setting_singleton_delete(api_request, dummy_setting):
@pytest.mark.django_db
def test_setting_singleton_delete_no_read_only_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=23, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))

View File

@@ -35,7 +35,7 @@ class TestStringListBooleanField:
field = StringListBooleanField()
with pytest.raises(ValidationError) as e:
field.to_internal_value(value)
assert e.value.detail[0] == "Expected None, True, False, a string or list " "of strings but got {} instead.".format(type(value))
assert e.value.detail[0] == "Expected None, True, False, a string or list of strings but got {} instead.".format(type(value))
@pytest.mark.parametrize("value_in, value_known", FIELD_VALUES)
def test_to_representation_valid(self, value_in, value_known):
@@ -48,7 +48,7 @@ class TestStringListBooleanField:
field = StringListBooleanField()
with pytest.raises(ValidationError) as e:
field.to_representation(value)
assert e.value.detail[0] == "Expected None, True, False, a string or list " "of strings but got {} instead.".format(type(value))
assert e.value.detail[0] == "Expected None, True, False, a string or list of strings but got {} instead.".format(type(value))
class TestListTuplesField:
@@ -67,7 +67,7 @@ class TestListTuplesField:
field = ListTuplesField()
with pytest.raises(ValidationError) as e:
field.to_internal_value(value)
assert e.value.detail[0] == "Expected a list of tuples of max length 2 " "but got {} instead.".format(t)
assert e.value.detail[0] == "Expected a list of tuples of max length 2 but got {} instead.".format(t)
class TestStringListPathField:

View File

@@ -13,6 +13,7 @@ from unittest import mock
from django.conf import LazySettings
from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
from django.db.utils import Error as DBError
from django.utils.translation import gettext_lazy as _
import pytest
@@ -331,3 +332,18 @@ def test_in_memory_cache_works(settings):
with mock.patch.object(settings, '_get_local') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_not_called()
@pytest.mark.defined_in_file(AWX_VAR=[])
def test_getattr_with_database_error(settings):
"""
If a setting is defined via the registry and has a null-ish default which is not None,
then referencing that setting during a database outage should give that default.
This is regression testing for a bug where it would return None.
"""
settings.registry.register('AWX_VAR', field_class=fields.StringListField, default=[], category=_('System'), category_slug='system')
settings._awx_conf_memoizedcache.clear()
with mock.patch('django.db.backends.base.base.BaseDatabaseWrapper.ensure_connection') as mock_ensure:
mock_ensure.side_effect = DBError('for test')
assert settings.AWX_VAR == []

View File

@@ -26,10 +26,11 @@ from awx.api.generics import APIView, GenericAPIView, ListAPIView, RetrieveUpdat
from awx.api.permissions import IsSystemAdminOrAuditor
from awx.api.versioning import reverse
from awx.main.utils import camelcase_to_underscore
from awx.main.tasks.system import handle_setting_changes
from awx.main.tasks.system import clear_setting_cache
from awx.conf.models import Setting
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
from awx.conf import settings_registry
from awx.main.utils.external_logging import reconfigure_rsyslog
SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name'))
@@ -118,7 +119,10 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.save(update_fields=['value'])
settings_change_list.append(key)
if settings_change_list:
connection.on_commit(lambda: handle_setting_changes.delay(settings_change_list))
connection.on_commit(lambda: clear_setting_cache.delay(settings_change_list))
if any([setting.startswith('LOG_AGGREGATOR') for setting in settings_change_list]):
# call notify to rsyslog. no data is needed so the payload is empty
reconfigure_rsyslog.delay()
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
@@ -133,7 +137,10 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.delete()
settings_change_list.append(setting.key)
if settings_change_list:
connection.on_commit(lambda: handle_setting_changes.delay(settings_change_list))
connection.on_commit(lambda: clear_setting_cache.delay(settings_change_list))
if any([setting.startswith('LOG_AGGREGATOR') for setting in settings_change_list]):
# call notify to rsyslog. no data is needed so the payload is empty
reconfigure_rsyslog.delay()
# When TOWER_URL_BASE is deleted from the API, reset it to the hostname
# used to make the request as a default.

View File

@@ -366,9 +366,9 @@ class BaseAccess(object):
report_violation = lambda message: None
else:
report_violation = lambda message: logger.warning(message)
if validation_info.get('trial', False) is True or validation_info['instance_count'] == 10: # basic 10 license
if validation_info.get('trial', False) is True:
def report_violation(message):
def report_violation(message): # noqa
raise PermissionDenied(message)
if check_expiration and validation_info.get('time_remaining', None) is None:
@@ -2234,7 +2234,7 @@ class WorkflowJobAccess(BaseAccess):
if not node_access.can_add({'reference_obj': node}):
wj_add_perm = False
if not wj_add_perm and self.save_messages:
self.messages['workflow_job_template'] = _('You do not have permission to the workflow job ' 'resources required for relaunch.')
self.messages['workflow_job_template'] = _('You do not have permission to the workflow job resources required for relaunch.')
return wj_add_perm
def can_cancel(self, obj):
@@ -2952,3 +2952,19 @@ class WorkflowApprovalTemplateAccess(BaseAccess):
for cls in BaseAccess.__subclasses__():
access_registry[cls.model] = cls
access_registry[UnpartitionedJobEvent] = UnpartitionedJobEventAccess
def optimize_queryset(queryset):
"""
A utility method in case you already have a queryset and just want to
apply the standard optimizations for that model.
In other words, use this if you do not want to start from filtered_queryset for some reason.
"""
if not queryset.model or queryset.model not in access_registry:
return queryset
access_class = access_registry[queryset.model]
if access_class.select_related:
queryset = queryset.select_related(*access_class.select_related)
if access_class.prefetch_related:
queryset = queryset.prefetch_related(*access_class.prefetch_related)
return queryset

View File

@@ -4,11 +4,11 @@ import logging
# AWX
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
def send_subsystem_metrics():
Metrics().send_metrics()

View File

@@ -65,7 +65,7 @@ class FixedSlidingWindow:
return sum(self.buckets.values()) or 0
class BroadcastWebsocketStatsManager:
class RelayWebsocketStatsManager:
def __init__(self, event_loop, local_hostname):
self._local_hostname = local_hostname
@@ -74,7 +74,7 @@ class BroadcastWebsocketStatsManager:
self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME
def new_remote_host_stats(self, remote_hostname):
self._stats[remote_hostname] = BroadcastWebsocketStats(self._local_hostname, remote_hostname)
self._stats[remote_hostname] = RelayWebsocketStats(self._local_hostname, remote_hostname)
return self._stats[remote_hostname]
def delete_remote_host_stats(self, remote_hostname):
@@ -107,7 +107,7 @@ class BroadcastWebsocketStatsManager:
return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))
class BroadcastWebsocketStats:
class RelayWebsocketStats:
def __init__(self, local_hostname, remote_hostname):
self._local_hostname = local_hostname
self._remote_hostname = remote_hostname

View File

@@ -6,7 +6,7 @@ import platform
import distro
from django.db import connection
from django.db.models import Count
from django.db.models import Count, Min
from django.conf import settings
from django.contrib.sessions.models import Session
from django.utils.timezone import now, timedelta
@@ -35,7 +35,7 @@ data _since_ the last report date - i.e., new data in the last 24 hours)
"""
def trivial_slicing(key, since, until, last_gather):
def trivial_slicing(key, since, until, last_gather, **kwargs):
if since is not None:
return [(since, until)]
@@ -48,7 +48,7 @@ def trivial_slicing(key, since, until, last_gather):
return [(last_entry, until)]
def four_hour_slicing(key, since, until, last_gather):
def four_hour_slicing(key, since, until, last_gather, **kwargs):
if since is not None:
last_entry = since
else:
@@ -69,6 +69,54 @@ def four_hour_slicing(key, since, until, last_gather):
start = end
def host_metric_slicing(key, since, until, last_gather, **kwargs):
"""
Slicing doesn't start 4 weeks ago; instead, the whole table is sent monthly or on the first run
"""
from awx.main.models.inventory import HostMetric
if since is not None:
return [(since, until)]
from awx.conf.models import Setting
# Check if full sync should be done
full_sync_enabled = kwargs.get('full_sync_enabled', False)
last_entry = None
if not full_sync_enabled:
#
# If not, try incremental sync first
#
last_entries = Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_ENTRIES').first()
last_entries = json.loads((last_entries.value if last_entries is not None else '') or '{}', object_hook=datetime_hook)
last_entry = last_entries.get(key)
if not last_entry:
#
# If not done before, switch to full sync
#
full_sync_enabled = True
if full_sync_enabled:
#
# Find the lowest date for full sync
#
min_dates = HostMetric.objects.aggregate(min_last_automation=Min('last_automation'), min_last_deleted=Min('last_deleted'))
if min_dates['min_last_automation'] and min_dates['min_last_deleted']:
last_entry = min(min_dates['min_last_automation'], min_dates['min_last_deleted'])
elif min_dates['min_last_automation'] or min_dates['min_last_deleted']:
last_entry = min_dates['min_last_automation'] or min_dates['min_last_deleted']
if not last_entry:
# empty table
return []
start, end = last_entry, None
while start < until:
end = min(start + timedelta(days=30), until)
yield (start, end)
start = end
def _identify_lower(key, since, until, last_gather):
from awx.conf.models import Setting
@@ -83,7 +131,7 @@ def _identify_lower(key, since, until, last_gather):
return lower, last_entries
@register('config', '1.4', description=_('General platform configuration.'))
@register('config', '1.6', description=_('General platform configuration.'))
def config(since, **kwargs):
license_info = get_license()
install_type = 'traditional'
@@ -107,10 +155,13 @@ def config(since, **kwargs):
'subscription_name': license_info.get('subscription_name'),
'sku': license_info.get('sku'),
'support_level': license_info.get('support_level'),
'usage': license_info.get('usage'),
'product_name': license_info.get('product_name'),
'valid_key': license_info.get('valid_key'),
'satellite': license_info.get('satellite'),
'pool_id': license_info.get('pool_id'),
'subscription_id': license_info.get('subscription_id'),
'account_number': license_info.get('account_number'),
'current_instances': license_info.get('current_instances'),
'automated_instances': license_info.get('automated_instances'),
'automated_since': license_info.get('automated_since'),
@@ -119,6 +170,7 @@ def config(since, **kwargs):
'compliant': license_info.get('compliant'),
'date_warning': license_info.get('date_warning'),
'date_expired': license_info.get('date_expired'),
'subscription_usage_model': getattr(settings, 'SUBSCRIPTION_USAGE_MODEL', ''), # 1.5+
'free_instances': license_info.get('free_instances', 0),
'total_licensed_instances': license_info.get('instance_count', 0),
'license_expiry': license_info.get('time_remaining', 0),
@@ -347,7 +399,10 @@ def _copy_table(table, query, path):
file_path = os.path.join(path, table + '_table.csv')
file = FileSplitter(filespec=file_path)
with connection.cursor() as cursor:
cursor.copy_expert(query, file)
with cursor.copy(query) as copy:
while data := copy.read():
byte_data = bytes(data)
file.write(byte_data.decode())
return file.file_list()
@@ -536,3 +591,42 @@ def workflow_job_template_node_table(since, full_path, **kwargs):
) always_nodes ON main_workflowjobtemplatenode.id = always_nodes.from_workflowjobtemplatenode_id
ORDER BY main_workflowjobtemplatenode.id ASC) TO STDOUT WITH CSV HEADER'''
return _copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path)
@register(
'host_metric_table', '1.0', format='csv', description=_('Host Metric data, incremental/full sync'), expensive=host_metric_slicing, full_sync_interval=30
)
def host_metric_table(since, full_path, until, **kwargs):
host_metric_query = '''COPY (SELECT main_hostmetric.id,
main_hostmetric.hostname,
main_hostmetric.first_automation,
main_hostmetric.last_automation,
main_hostmetric.last_deleted,
main_hostmetric.deleted,
main_hostmetric.automated_counter,
main_hostmetric.deleted_counter,
main_hostmetric.used_in_inventories
FROM main_hostmetric
WHERE (main_hostmetric.last_automation > '{}' AND main_hostmetric.last_automation <= '{}') OR
(main_hostmetric.last_deleted > '{}' AND main_hostmetric.last_deleted <= '{}')
ORDER BY main_hostmetric.id ASC) TO STDOUT WITH CSV HEADER'''.format(
since.isoformat(), until.isoformat(), since.isoformat(), until.isoformat()
)
return _copy_table(table='host_metric', query=host_metric_query, path=full_path)
@register('host_metric_summary_monthly_table', '1.0', format='csv', description=_('HostMetricSummaryMonthly export, full sync'), expensive=trivial_slicing)
def host_metric_summary_monthly_table(since, full_path, **kwargs):
query = '''
COPY (SELECT main_hostmetricsummarymonthly.id,
main_hostmetricsummarymonthly.date,
main_hostmetricsummarymonthly.license_capacity,
main_hostmetricsummarymonthly.license_consumed,
main_hostmetricsummarymonthly.hosts_added,
main_hostmetricsummarymonthly.hosts_deleted,
main_hostmetricsummarymonthly.indirectly_managed_hosts
FROM main_hostmetricsummarymonthly
ORDER BY main_hostmetricsummarymonthly.id ASC) TO STDOUT WITH CSV HEADER
'''
return _copy_table(table='host_metric_summary_monthly', query=query, path=full_path)
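
The windowing behavior of host_metric_slicing above is easier to see in isolation. The sketch below (dates are made up) walks from the oldest relevant timestamp up to until in chunks of at most 30 days, which is what keeps each host-metric COPY query bounded.

from datetime import datetime, timedelta


def thirty_day_slices(last_entry, until):
    # Mirrors the tail of host_metric_slicing: yield (start, end) windows of
    # at most 30 days until the upper bound is reached.
    start = last_entry
    while start < until:
        end = min(start + timedelta(days=30), until)
        yield (start, end)
        start = end


for window in thirty_day_slices(datetime(2023, 1, 1), datetime(2023, 3, 15)):
    print(window)
# (2023-01-01, 2023-01-31), (2023-01-31, 2023-03-02), (2023-03-02, 2023-03-15)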

View File

@@ -52,7 +52,7 @@ def all_collectors():
}
def register(key, version, description=None, format='json', expensive=None):
def register(key, version, description=None, format='json', expensive=None, full_sync_interval=None):
"""
A decorator used to register a function as a metric collector.
@@ -71,6 +71,7 @@ def register(key, version, description=None, format='json', expensive=None):
f.__awx_analytics_description__ = description
f.__awx_analytics_type__ = format
f.__awx_expensive__ = expensive
f.__awx_full_sync_interval__ = full_sync_interval
return f
return decorate
@@ -259,10 +260,19 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
# These slicer functions may return a generator. The `since` parameter is
# allowed to be None, and will fall back to LAST_ENTRIES[key] or to
# LAST_GATHER (truncated appropriately to match the 4-week limit).
#
# Or it can force full table sync if interval is given
kwargs = dict()
full_sync_enabled = False
if func.__awx_full_sync_interval__:
last_full_sync = last_entries.get(f"{key}_full")
full_sync_enabled = not last_full_sync or last_full_sync < now() - timedelta(days=func.__awx_full_sync_interval__)
kwargs['full_sync_enabled'] = full_sync_enabled
if func.__awx_expensive__:
slices = func.__awx_expensive__(key, since, until, last_gather)
slices = func.__awx_expensive__(key, since, until, last_gather, **kwargs)
else:
slices = collectors.trivial_slicing(key, since, until, last_gather)
slices = collectors.trivial_slicing(key, since, until, last_gather, **kwargs)
for start, end in slices:
files = func(start, full_path=gather_dir, until=end)
@@ -301,6 +311,12 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
succeeded = False
logger.exception("Could not generate metric {}".format(filename))
# update full sync timestamp if successfully shipped
if full_sync_enabled and collection_type != 'dry-run' and succeeded:
with disable_activity_stream():
last_entries[f"{key}_full"] = now()
settings.AUTOMATION_ANALYTICS_LAST_ENTRIES = json.dumps(last_entries, cls=DjangoJSONEncoder)
if collection_type != 'dry-run':
if succeeded:
for fpath in tarfiles:
@@ -359,9 +375,7 @@ def ship(path):
s.headers = get_awx_http_client_headers()
s.headers.pop('Content-Type')
with set_environ(**settings.AWX_TASK_ENV):
response = s.post(
url, files=files, verify="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31)
)
response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31))
# Accept 2XX status_codes
if response.status_code >= 300:
logger.error('Upload failed with status {}, {}'.format(response.status_code, response.text))
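
A rough sketch of how the new full_sync_interval plumbing fits together, using a stand-in register decorator and a pretend last-sync timestamp (only the attribute name is taken from the code above): the decorator just attaches metadata, and gather() later compares the stored <key>_full timestamp against the interval.

from datetime import datetime, timedelta, timezone


def register(key, version, full_sync_interval=None, **_):
    # Stand-in for the analytics register decorator: it only attaches the
    # metadata that gather() inspects later.
    def decorate(f):
        f.__awx_analytics_key__ = key
        f.__awx_analytics_version__ = version
        f.__awx_full_sync_interval__ = full_sync_interval
        return f
    return decorate


@register('host_metric_table', '1.0', full_sync_interval=30)
def host_metric_table(since, until, **kwargs):
    return []


def now():
    return datetime.now(timezone.utc)


# gather() decides whether to force a full sync from the stored timestamp
last_full_sync = now() - timedelta(days=45)  # pretend value read from the last-entries setting
interval = host_metric_table.__awx_full_sync_interval__
full_sync_enabled = not last_full_sync or last_full_sync < now() - timedelta(days=interval)
print(full_sync_enabled)  # True: more than 30 days since the last full sync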

View File

@@ -9,7 +9,7 @@ from django.apps import apps
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
root_key = 'awx_metrics'
root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
@@ -209,6 +209,11 @@ class Metrics:
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
]
# turn metric list into dictionary with the metric name as a key
self.METRICS = {}
@@ -264,13 +269,6 @@ class Metrics:
data[field] = self.METRICS[field].decode(self.conn)
return data
def store_metrics(self, data_json):
# called when receiving metrics from other instances
data = json.loads(data_json)
if self.instance_name != data['instance']:
logger.debug(f"{self.instance_name} received subsystem metrics from {data['instance']}")
self.conn.set(root_key + "_instance_" + data['instance'], data['metrics'])
def should_pipe_execute(self):
if self.metrics_have_changed is False:
return False
@@ -305,13 +303,15 @@ class Metrics:
try:
current_time = time.time()
if current_time - self.previous_send_metrics.decode(self.conn) > self.send_metrics_interval:
serialized_metrics = self.serialize_local_metrics()
payload = {
'instance': self.instance_name,
'metrics': self.serialize_local_metrics(),
'metrics': serialized_metrics,
}
# store a local copy as well
self.store_metrics(json.dumps(payload))
# store the serialized data locally as well, so that load_other_metrics will read it
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics)
emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time)
self.previous_send_metrics.store_value(self.conn)
finally:

awx/main/cache.py (new file, 87 lines)
View File

@@ -0,0 +1,87 @@
import functools
from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django.core.cache.backends.redis import RedisCache
from redis.exceptions import ConnectionError, ResponseError, TimeoutError
import socket
# This list comes from what django-redis ignores and the behavior we are trying
# to retain while dropping the dependency on django-redis.
IGNORED_EXCEPTIONS = (TimeoutError, ResponseError, ConnectionError, socket.timeout)
CONNECTION_INTERRUPTED_SENTINEL = object()
def optionally_ignore_exceptions(func=None, return_value=None):
if func is None:
return functools.partial(optionally_ignore_exceptions, return_value=return_value)
@functools.wraps(func)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except IGNORED_EXCEPTIONS as e:
if settings.DJANGO_REDIS_IGNORE_EXCEPTIONS:
return return_value
raise e.__cause__ or e
return wrapper
class AWXRedisCache(RedisCache):
"""
We just want to wrap the upstream RedisCache class so that we can ignore
the exceptions that it raises when the cache is unavailable.
"""
@optionally_ignore_exceptions
def add(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
return super().add(key, value, timeout, version)
@optionally_ignore_exceptions(return_value=CONNECTION_INTERRUPTED_SENTINEL)
def _get(self, key, default=None, version=None):
return super().get(key, default, version)
def get(self, key, default=None, version=None):
value = self._get(key, default, version)
if value is CONNECTION_INTERRUPTED_SENTINEL:
return default
return value
@optionally_ignore_exceptions
def set(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
return super().set(key, value, timeout, version)
@optionally_ignore_exceptions
def touch(self, key, timeout=DEFAULT_TIMEOUT, version=None):
return super().touch(key, timeout, version)
@optionally_ignore_exceptions
def delete(self, key, version=None):
return super().delete(key, version)
@optionally_ignore_exceptions
def get_many(self, keys, version=None):
return super().get_many(keys, version)
@optionally_ignore_exceptions
def has_key(self, key, version=None):
return super().has_key(key, version)
@optionally_ignore_exceptions
def incr(self, key, delta=1, version=None):
return super().incr(key, delta, version)
@optionally_ignore_exceptions
def set_many(self, data, timeout=DEFAULT_TIMEOUT, version=None):
return super().set_many(data, timeout, version)
@optionally_ignore_exceptions
def delete_many(self, keys, version=None):
return super().delete_many(keys, version)
@optionally_ignore_exceptions
def clear(self):
return super().clear()
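
To illustrate the degraded-Redis behavior the new cache module is after, here is a self-contained sketch of the optionally_ignore_exceptions pattern. A module-level flag stands in for settings.DJANGO_REDIS_IGNORE_EXCEPTIONS, and the flaky function and MISSING sentinel are made up: a failed backend call returns the sentinel instead of raising, so a get() wrapper can fall back to the caller's default.

import functools

IGNORE_EXCEPTIONS = True  # stands in for settings.DJANGO_REDIS_IGNORE_EXCEPTIONS


def optionally_ignore_exceptions(func=None, return_value=None):
    # Same shape as the decorator above: usable bare or with a custom sentinel
    # return value for the degraded path.
    if func is None:
        return functools.partial(optionally_ignore_exceptions, return_value=return_value)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ConnectionError:
            if IGNORE_EXCEPTIONS:
                return return_value
            raise
    return wrapper


MISSING = object()


@optionally_ignore_exceptions(return_value=MISSING)
def flaky_get(key):
    raise ConnectionError('redis is down')


print(flaky_get('stats') is MISSING)  # True: the outage is swallowed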

View File

@@ -10,7 +10,7 @@ from rest_framework import serializers
# AWX
from awx.conf import fields, register, register_validate
from awx.main.models import ExecutionEnvironment
from awx.main.constants import SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS
logger = logging.getLogger('awx.main.conf')
@@ -94,6 +94,20 @@ register(
category_slug='system',
)
register(
'CSRF_TRUSTED_ORIGINS',
default=[],
field_class=fields.StringListField,
label=_('CSRF Trusted Origins List'),
help_text=_(
"If the service is behind a reverse proxy/load balancer, use this setting "
"to configure the schema://addresses from which the service should trust "
"Origin header values. "
),
category=_('System'),
category_slug='system',
)
register(
'LICENSE',
field_class=fields.DictField,
@@ -684,11 +698,28 @@ register(
field_class=fields.IntegerField,
default=1,
min_value=1,
label=_('Maximum disk persistance for external log aggregation (in GB)'),
label=_('Maximum disk persistence for external log aggregation (in GB)'),
help_text=_(
'Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting.'
'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. '
'Notably, this is used for the rsyslogd main queue (for input messages).'
),
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB',
field_class=fields.IntegerField,
default=1,
min_value=1,
label=_('Maximum disk persistence for rsyslogd action queuing (in GB)'),
help_text=_(
'Amount of data to store (in gigabytes) if an rsyslog action takes time '
'to process an incoming message (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
),
category=_('Logging'),
category_slug='logging',
@@ -795,6 +826,91 @@ register(
category_slug='bulk',
)
register(
'UI_NEXT',
field_class=fields.BooleanField,
default=False,
label=_('Enable Preview of New User Interface'),
help_text=_('Enable preview of new user interface.'),
category=_('System'),
category_slug='system',
)
register(
'SUBSCRIPTION_USAGE_MODEL',
field_class=fields.ChoiceField,
choices=[
('', _('Default model for AWX - no subscription. Deletion of host_metrics will not be considered for purposes of managed host counting')),
(
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS,
_('Usage based on unique managed nodes in a large historical time frame and delete functionality for no longer used managed nodes'),
),
],
default='',
allow_blank=True,
label=_('Defines subscription usage model and shows Host Metrics'),
category=_('System'),
category_slug='system',
)
register(
'CLEANUP_HOST_METRICS_LAST_TS',
field_class=fields.DateTimeField,
label=_('Last cleanup date for HostMetrics'),
allow_null=True,
category=_('System'),
category_slug='system',
)
register(
'HOST_METRIC_SUMMARY_TASK_LAST_TS',
field_class=fields.DateTimeField,
label=_('Last computing date of HostMetricSummaryMonthly'),
allow_null=True,
category=_('System'),
category_slug='system',
)
register(
'AWX_CLEANUP_PATHS',
field_class=fields.BooleanField,
label=_('Enable or Disable tmp dir cleanup'),
default=True,
help_text=_('Enable or Disable TMP Dir cleanup'),
category=('Debug'),
category_slug='debug',
)
register(
'AWX_REQUEST_PROFILE',
field_class=fields.BooleanField,
label=_('Debug Web Requests'),
default=False,
help_text=_('Debug web request python timing'),
category=('Debug'),
category_slug='debug',
)
register(
'DEFAULT_CONTAINER_RUN_OPTIONS',
field_class=fields.StringListField,
label=_('Container Run Options'),
default=['--network', 'slirp4netns:enable_ipv6=true'],
help_text=_("List of options to pass to podman run example: ['--network', 'slirp4netns:enable_ipv6=true', '--log-level', 'debug']"),
category=('Jobs'),
category_slug='jobs',
)
register(
'RECEPTOR_RELEASE_WORK',
field_class=fields.BooleanField,
label=_('Release Receptor Work'),
default=True,
help_text=_('Release receptor work'),
category=('Debug'),
category_slug='debug',
)
def logging_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'):

View File

@@ -38,6 +38,8 @@ STANDARD_INVENTORY_UPDATE_ENV = {
'ANSIBLE_INVENTORY_EXPORT': 'True',
# Redirecting output to stderr allows JSON parsing to still work with -vvv
'ANSIBLE_VERBOSE_TO_STDERR': 'True',
# if ansible-inventory --limit is used for an inventory import, unmatched should be a failure
'ANSIBLE_HOST_PATTERN_MISMATCH': 'error',
}
CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL
@@ -63,7 +65,7 @@ ENV_BLOCKLIST = frozenset(
'INVENTORY_HOSTVARS',
'AWX_HOST',
'PROJECT_REVISION',
'SUPERVISOR_WEB_CONFIG_PATH',
'SUPERVISOR_CONFIG_PATH',
)
)
@@ -106,3 +108,9 @@ JOB_VARIABLE_PREFIXES = [
ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE = (
'\u001b[31m \u001b[1m This can be caused if the version of ansible-runner in your execution environment is out of date.\u001b[0m'
)
# Values for setting SUBSCRIPTION_USAGE_MODEL
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS = 'unique_managed_hosts'
# Shared prefetch to use for creating a queryset for the purpose of writing or saving facts
HOST_FACTS_FIELDS = ('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id')

View File

@@ -3,6 +3,7 @@ import logging
import time
import hmac
import asyncio
import redis
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
@@ -80,7 +81,7 @@ class WebsocketSecretAuthHelper:
WebsocketSecretAuthHelper.verify_secret(secret)
class BroadcastConsumer(AsyncJsonWebsocketConsumer):
class RelayConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
try:
WebsocketSecretAuthHelper.is_authorized(self.scope)
@@ -100,6 +101,21 @@ class BroadcastConsumer(AsyncJsonWebsocketConsumer):
async def internal_message(self, event):
await self.send(event['text'])
async def receive_json(self, data):
(group, message) = unwrap_broadcast_msg(data)
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)
async def consumer_subscribe(self, event):
await self.send_json(event)
async def consumer_unsubscribe(self, event):
await self.send_json(event)
class EventConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
@@ -128,6 +144,11 @@ class EventConsumer(AsyncJsonWebsocketConsumer):
self.channel_name,
)
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.unsubscribe", "groups": list(current_groups), "origin_channel": self.channel_name},
)
@database_sync_to_async
def user_can_see_object_id(self, user_access, oid):
# At this point user is a channels.auth.UserLazyObject object
@@ -176,9 +197,20 @@ class EventConsumer(AsyncJsonWebsocketConsumer):
self.channel_name,
)
if len(old_groups):
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.unsubscribe", "groups": list(old_groups), "origin_channel": self.channel_name},
)
new_groups_exclusive = new_groups - current_groups
for group_name in new_groups_exclusive:
await self.channel_layer.group_add(group_name, self.channel_name)
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.subscribe", "groups": list(new_groups), "origin_channel": self.channel_name},
)
self.scope['session']['groups'] = new_groups
await self.send_json({"groups_current": list(new_groups), "groups_left": list(old_groups), "groups_joined": list(new_groups_exclusive)})
@@ -200,9 +232,11 @@ def _dump_payload(payload):
return None
def emit_channel_notification(group, payload):
from awx.main.wsbroadcast import wrap_broadcast_msg # noqa
def unwrap_broadcast_msg(payload: dict):
return (payload['group'], payload['message'])
def emit_channel_notification(group, payload):
payload_dumped = _dump_payload(payload)
if payload_dumped is None:
return
@@ -212,16 +246,6 @@ def emit_channel_notification(group, payload):
run_sync(
channel_layer.group_send(
group,
{"type": "internal.message", "text": payload_dumped},
)
)
run_sync(
channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{
"type": "internal.message",
"text": wrap_broadcast_msg(group, payload_dumped),
},
{"type": "internal.message", "text": payload_dumped, "needs_relay": True},
)
)

View File

@@ -54,6 +54,12 @@ aim_inputs = {
'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),
},
{'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},
{
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Default: Content. Ex: Username, Address, etc.'),
},
{
'id': 'reason',
'label': _('Reason'),
@@ -74,6 +80,7 @@ def aim_backend(**kwargs):
app_id = kwargs['app_id']
object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format']
object_property = kwargs.get('object_property', '')
reason = kwargs.get('reason', None)
if webservice_id == '':
webservice_id = 'AIMWebService'
@@ -98,7 +105,18 @@ def aim_backend(**kwargs):
allow_redirects=False,
)
raise_for_status(res)
return res.json()['Content']
# CCP returns the property name capitalized, username is camel case
# so we need to handle that case
if object_property == '':
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property not in res:
raise KeyError('Property {} not found in object'.format(object_property))
else:
object_property = object_property.capitalize()
return res.json()[object_property]
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)
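
The property-name handling added to aim_backend boils down to the small helper sketched below (simplified; the missing-property check in the real code is skipped): CCP capitalizes property names and uses camel case for UserName, and an empty value falls back to Content.

def normalize_property(object_property: str) -> str:
    # Default to Content; special-case the camel-cased UserName; otherwise
    # capitalize to match how CCP names object properties.
    if object_property == '':
        return 'Content'
    if object_property.lower() == 'username':
        return 'UserName'
    return object_property.capitalize()


print(normalize_property(''), normalize_property('username'), normalize_property('address'))
# Content UserName Address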

View File

@@ -0,0 +1,65 @@
import boto3
from botocore.exceptions import ClientError
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
secrets_manager_inputs = {
'fields': [
{
'id': 'aws_access_key',
'label': _('AWS Access Key'),
'type': 'string',
},
{
'id': 'aws_secret_key',
'label': _('AWS Secret Key'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'region_name',
'label': _('AWS Secrets Manager Region'),
'type': 'string',
'help_text': _('Region in which the secrets manager is located'),
},
{
'id': 'secret_name',
'label': _('AWS Secret Name'),
'type': 'string',
},
],
'required': ['aws_access_key', 'aws_secret_key', 'region_name', 'secret_name'],
}
def aws_secretsmanager_backend(**kwargs):
secret_name = kwargs['secret_name']
region_name = kwargs['region_name']
aws_secret_access_key = kwargs['aws_secret_key']
aws_access_key_id = kwargs['aws_access_key']
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager', region_name=region_name, aws_secret_access_key=aws_secret_access_key, aws_access_key_id=aws_access_key_id
)
try:
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
except ClientError as e:
raise e
# Secrets Manager decrypts the secret value using the associated KMS CMK
# Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
else:
secret = get_secret_value_response['SecretBinary']
return secret
aws_secretmanager_plugin = CredentialPlugin('AWS Secrets Manager lookup', inputs=secrets_manager_inputs, backend=aws_secretsmanager_backend)

View File

@@ -4,6 +4,8 @@ from urllib.parse import urljoin, quote
from django.utils.translation import gettext_lazy as _
import requests
import base64
import binascii
conjur_inputs = {
@@ -50,6 +52,13 @@ conjur_inputs = {
}
def _is_base64(s: str) -> bool:
try:
return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
except binascii.Error:
return False
def conjur_backend(**kwargs):
url = kwargs['url']
api_key = kwargs['api_key']
@@ -77,7 +86,7 @@ def conjur_backend(**kwargs):
token = resp.content.decode('utf-8')
lookup_kwargs = {
'headers': {'Authorization': 'Token token="{}"'.format(token)},
'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
'allow_redirects': False,
}
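
The intent of the Conjur change is that the Authorization header always carries a base64 token: a raw token gets encoded, and an already-encoded one is left alone. A minimal sketch with made-up token values:

import base64
import binascii


def _is_base64(s: str) -> bool:
    try:
        return base64.b64encode(base64.b64decode(s.encode('utf-8'))) == s.encode('utf-8')
    except binascii.Error:
        return False


def normalize_token(token: str) -> str:
    # Re-encode only when the API returned a raw (non-base64) value.
    return token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8')


print(normalize_token('cGxhaW4tdG9rZW4='))  # already base64, left alone
print(normalize_token('plain-token'))       # raw value gets encoded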

View File

@@ -35,8 +35,14 @@ dsv_inputs = {
'type': 'string',
'help_text': _('The secret path e.g. /test/secret1'),
},
{
'id': 'secret_field',
'label': _('Secret Field'),
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
],
'required': ['tenant', 'client_id', 'client_secret', 'path'],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'],
}
if settings.DEBUG:
@@ -52,5 +58,5 @@ if settings.DEBUG:
dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault',
dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path']),
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip
)

View File

@@ -265,6 +265,8 @@ def kv_backend(**kwargs):
if secret_key:
try:
if (secret_key != 'data') and (secret_key not in json['data']) and ('data' in json['data']):
return json['data']['data'][secret_key]
return json['data'][secret_key]
except KeyError:
raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path))

View File

@@ -1,7 +1,10 @@
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.server import PasswordGrantAuthorizer, SecretServer, ServerSecret
try:
from delinea.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
except ImportError:
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
tss_inputs = {
'fields': [
@@ -17,6 +20,12 @@ tss_inputs = {
'help_text': _('The (Application) user username'),
'type': 'string',
},
{
'id': 'domain',
'label': _('Domain'),
'help_text': _('The (Application) user domain'),
'type': 'string',
},
{
'id': 'password',
'label': _('Password'),
@@ -44,7 +53,12 @@ tss_inputs = {
def tss_backend(**kwargs):
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
if kwargs.get("domain"):
authorizer = DomainPasswordGrantAuthorizer(
base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
)
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)
secret_dict = secret_server.get_secret(kwargs['secret_id'])
secret = ServerSecret(**secret_dict)

View File

@@ -63,7 +63,7 @@ class RecordedQueryLog(object):
if not os.path.isdir(self.dest):
os.makedirs(self.dest)
progname = ' '.join(sys.argv)
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsbroadcast'):
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsrelay'):
if match in progname:
progname = match
break
@@ -87,7 +87,7 @@ class RecordedQueryLog(object):
)
log.commit()
log.execute(
'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) ' 'VALUES (?, ?, ?, ?, ?, ?, ?);',
'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) VALUES (?, ?, ?, ?, ?, ?, ?);',
(os.getpid(), version, ' '.join(sys.argv), seconds, sql, explain, bt),
)
log.commit()

View File

@@ -1,12 +1,14 @@
import psycopg2
import os
import psycopg
import select
from contextlib import contextmanager
from awx.settings.application_name import get_application_name
from django.conf import settings
from django.db import connection as pg_connection
NOT_READY = ([], [], [])
@@ -14,9 +16,36 @@ def get_local_queuename():
return settings.CLUSTER_HOST_ID
def get_task_queuename():
if os.getenv('AWX_COMPONENT') != 'web':
return settings.CLUSTER_HOST_ID
from awx.main.models.ha import Instance
random_task_instance = (
Instance.objects.filter(
node_type__in=(Instance.Types.CONTROL, Instance.Types.HYBRID),
node_state=Instance.States.READY,
enabled=True,
)
.only('hostname')
.order_by('?')
.first()
)
if random_task_instance is None:
raise ValueError('No task instances are READY and Enabled.')
return random_task_instance.hostname
class PubSub(object):
def __init__(self, conn):
def __init__(self, conn, select_timeout=None):
self.conn = conn
if select_timeout is None:
self.select_timeout = 5
else:
self.select_timeout = select_timeout
def listen(self, channel):
with self.conn.cursor() as cur:
@@ -30,25 +59,42 @@ class PubSub(object):
with self.conn.cursor() as cur:
cur.execute('SELECT pg_notify(%s, %s);', (channel, payload))
def events(self, select_timeout=5, yield_timeouts=False):
@staticmethod
def current_notifies(conn):
"""
Altered version of .notifies method from psycopg library
This removes the outer while True loop so that we only process
queued notifications
"""
with conn.lock:
try:
ns = conn.wait(psycopg.generators.notifies(conn.pgconn))
except psycopg.errors._NO_TRACEBACK as ex:
raise ex.with_traceback(None)
enc = psycopg._encodings.pgconn_encoding(conn.pgconn)
for pgn in ns:
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
yield n
def events(self, yield_timeouts=False):
if not self.conn.autocommit:
raise RuntimeError('Listening for events can only be done in autocommit mode')
while True:
if select.select([self.conn], [], [], select_timeout) == NOT_READY:
if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
if yield_timeouts:
yield None
else:
self.conn.poll()
while self.conn.notifies:
yield self.conn.notifies.pop(0)
notification_generator = self.current_notifies(self.conn)
for notification in notification_generator:
yield notification
def close(self):
self.conn.close()
@contextmanager
def pg_bus_conn(new_connection=False):
def pg_bus_conn(new_connection=False, select_timeout=None):
'''
Any listeners probably want to establish a new database connection,
separate from the Django connection used for queries, because that will prevent
@@ -60,12 +106,12 @@ def pg_bus_conn(new_connection=False):
'''
if new_connection:
conf = settings.DATABASES['default']
conn = psycopg2.connect(
dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf.get("OPTIONS", {})
)
# Django connection.cursor().connection doesn't have autocommit=True on by default
conn.set_session(autocommit=True)
conf = settings.DATABASES['default'].copy()
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy()
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} password={conf['PASSWORD']} port={conf['PORT']}"
conn = psycopg.connect(connection_data, autocommit=True, **conf['OPTIONS'])
else:
if pg_connection.connection is None:
pg_connection.connect()
@@ -73,7 +119,7 @@ def pg_bus_conn(new_connection=False):
raise RuntimeError('Unexpectedly could not connect to postgres for pg_notify actions')
conn = pg_connection.connection
pubsub = PubSub(conn)
pubsub = PubSub(conn, select_timeout=select_timeout)
yield pubsub
if new_connection:
conn.close()
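
For orientation on the psycopg 3 migration, a minimal listener using psycopg's documented notifies() generator is sketched below; the DSN and channel name are placeholders. The PubSub class above instead drives the lower-level notification generator directly so it can keep honoring a select() timeout.

import psycopg

DSN = 'dbname=awx user=awx password=secret host=localhost'  # placeholder connection string
CHANNEL = 'awx_demo_channel'  # placeholder channel name

with psycopg.connect(DSN, autocommit=True) as listener, psycopg.connect(DSN, autocommit=True) as sender:
    listener.execute(f'LISTEN {CHANNEL}')
    sender.execute('SELECT pg_notify(%s, %s)', (CHANNEL, '{"control": "status"}'))
    for notification in listener.notifies():
        print(notification.channel, notification.payload)
        break  # stop after the first message for this demonstration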

View File

@@ -6,7 +6,7 @@ from django.conf import settings
from django.db import connection
import redis
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
from . import pg_bus_conn
@@ -21,7 +21,7 @@ class Control(object):
if service not in self.services:
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_local_queuename()
self.queuename = host or get_task_queuename()
def status(self, *args, **kwargs):
r = redis.Redis.from_url(settings.BROKER_URL)
@@ -40,6 +40,9 @@ class Control(object):
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
@@ -52,14 +55,14 @@ class Control(object):
if not connection.get_autocommit():
raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')
with pg_bus_conn() as conn:
with pg_bus_conn(select_timeout=timeout) as conn:
conn.listen(reply_queue)
send_data = {'control': command, 'reply_to': reply_queue}
if extra_data:
send_data.update(extra_data)
conn.notify(self.queuename, json.dumps(send_data))
for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
for reply in conn.events(yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError(f"{self.service} did not reply within {timeout}s")

View File

@@ -1,53 +1,142 @@
import logging
import os
import time
from multiprocessing import Process
import yaml
from datetime import datetime
from django.conf import settings
from django.db import connections
from schedule import Scheduler
from django_guid import set_guid
from django_guid.utils import generate_guid
from awx.main.dispatch.worker import TaskWorker
logger = logging.getLogger('awx.main.dispatch.periodic')
class Scheduler(Scheduler):
def run_continuously(self):
idle_seconds = max(1, min(self.jobs).period.total_seconds() / 2)
class ScheduledTask:
"""
Class representing schedules, very loosely modeled after python schedule library Job
the idea of this class is to:
- only deal in relative times (time since the scheduler global start)
- only deal in integer math for target runtimes, but float for current relative time
def run():
ppid = os.getppid()
logger.warning('periodic beat started')
while True:
if os.getppid() != ppid:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
pid = os.getpid()
logger.warning(f'periodic beat exiting gracefully pid:{pid}')
raise SystemExit()
try:
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
set_guid(generate_guid())
self.run_pending()
except Exception:
logger.exception('encountered an error while scheduling periodic tasks')
time.sleep(idle_seconds)
Missed schedule policy:
Invariant target times are maintained, meaning that if interval=10s offset=0
and it runs at t=7s, then it calls for next run in 3s.
However, if a complete interval has passed, that is counted as a missed run,
and missed runs are abandoned (no catch-up runs).
"""
process = Process(target=run)
process.daemon = True
process.start()
def __init__(self, name: str, data: dict):
# parameters need for schedule computation
self.interval = int(data['schedule'].total_seconds())
self.offset = 0 # offset relative to start time this schedule begins
self.index = 0 # number of periods of the schedule that has passed
# parameters that do not affect scheduling logic
self.last_run = None # time of last run, only used for debug
self.completed_runs = 0 # number of times schedule is known to run
self.name = name
self.data = data # used by caller to know what to run
@property
def next_run(self):
"Time until the next run with t=0 being the global_start of the scheduler class"
return (self.index + 1) * self.interval + self.offset
def due_to_run(self, relative_time):
return bool(self.next_run <= relative_time)
def expected_runs(self, relative_time):
return int((relative_time - self.offset) / self.interval)
def mark_run(self, relative_time):
self.last_run = relative_time
self.completed_runs += 1
new_index = self.expected_runs(relative_time)
if new_index > self.index + 1:
logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
self.index = new_index
def missed_runs(self, relative_time):
"Number of times job was supposed to ran but failed to, only used for debug"
missed_ct = self.expected_runs(relative_time) - self.completed_runs
# if this is currently due to run do not count that as a missed run
if missed_ct and self.due_to_run(relative_time):
missed_ct -= 1
return missed_ct
def run_continuously():
scheduler = Scheduler()
for task in settings.CELERYBEAT_SCHEDULE.values():
apply_async = TaskWorker.resolve_callable(task['task']).apply_async
total_seconds = task['schedule'].total_seconds()
scheduler.every(total_seconds).seconds.do(apply_async)
scheduler.run_continuously()
class Scheduler:
def __init__(self, schedule):
"""
Expects schedule in the form of a dictionary like
{
'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
}
Only the schedule nearest-second value is used for scheduling,
the rest of the data is for use by the caller to know what to run.
"""
self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
min_interval = min(job.interval for job in self.jobs)
num_jobs = len(self.jobs)
# this is intentionally opinionated against spammy schedules
# a core goal is to spread out the scheduled tasks (for worker management)
# and high-frequency schedules just do not work with that
if num_jobs > min_interval:
raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
# evenly space out jobs over the base interval
for i, job in enumerate(self.jobs):
job.offset = (i * min_interval) // num_jobs
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self):
relative_time = time.time() - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
to_run.append(job)
logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
job.mark_run(relative_time)
return to_run
def time_until_next_run(self):
relative_time = time.time() - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
# careful not to give 0 or negative values to the select timeout, which has unclear interpretation
logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
return 0.1
elif delta > 20.0:
logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
return 20.0
logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
return delta
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
now = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = time.time() - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)
data['total_schedules'] = len(self.jobs)
data['schedule_list'] = dict(
[
(
job.name,
dict(
last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
next_run_in_seconds=round(job.next_run - relative_time, 3),
offset_in_seconds=job.offset,
completed_runs=job.completed_runs,
missed_runs=job.missed_runs(relative_time),
),
)
for job in sorted(self.jobs, key=lambda job: job.interval)
]
)
return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)
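
To make the spacing behavior of the rewritten scheduler concrete, the sketch below reproduces the offset computation from Scheduler.__init__ with made-up task names and intervals: jobs are spread evenly across the shortest interval so they do not all fire on the same tick.

from datetime import timedelta

schedule = {
    'gather_analytics': {'schedule': timedelta(seconds=30)},
    'cluster_heartbeat': {'schedule': timedelta(seconds=60)},
    'send_metrics': {'schedule': timedelta(seconds=20)},
}

intervals = [int(d['schedule'].total_seconds()) for d in schedule.values()]
min_interval, num_jobs = min(intervals), len(intervals)
assert num_jobs <= min_interval, 'too many schedules for the shortest interval'

# Evenly space the jobs over the shortest interval, as Scheduler.__init__ does.
offsets = {name: (i * min_interval) // num_jobs for i, name in enumerate(schedule)}
print(offsets)  # {'gather_analytics': 0, 'cluster_heartbeat': 6, 'send_metrics': 13}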

View File

@@ -339,6 +339,17 @@ class AutoscalePool(WorkerPool):
# but if the task takes longer than the time defined here, we will force it to stop here
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
# initialize some things for subsystem metrics periodic gathering
# the AutoscalePool class does not save these to redis directly, but reports via produce_subsystem_metrics
self.scale_up_ct = 0
self.worker_count_max = 0
def produce_subsystem_metrics(self, metrics_object):
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
self.worker_count_max = len(self.workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
@@ -406,16 +417,16 @@ class AutoscalePool(WorkerPool):
# the task manager to never do more work
current_task = w.current_task
if current_task and isinstance(current_task, dict):
endings = ['tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager']
endings = ('tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager')
current_task_name = current_task.get('task', '')
if any(current_task_name.endswith(e) for e in endings):
if current_task_name.endswith(endings):
if 'started' not in current_task:
w.managed_tasks[current_task['uuid']]['started'] = time.time()
age = time.time() - current_task['started']
w.managed_tasks[current_task['uuid']]['age'] = age
if age > self.task_manager_timeout:
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGTERM to {w.pid}')
os.kill(w.pid, signal.SIGTERM)
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGUSR1 to {w.pid}')
os.kill(w.pid, signal.SIGUSR1)
for m in orphaned:
# if all the workers are dead, spawn at least one
@@ -443,7 +454,12 @@ class AutoscalePool(WorkerPool):
idx = random.choice(range(len(self.workers)))
return idx, self.workers[idx]
else:
return super(AutoscalePool, self).up()
self.scale_up_ct += 1
ret = super(AutoscalePool, self).up()
new_worker_ct = len(self.workers)
if new_worker_ct > self.worker_count_max:
self.worker_count_max = new_worker_ct
return ret
def write(self, preferred_queue, body):
if 'guid' in body:

View File
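A hedged sketch of the reporting pattern introduced above: the pool keeps its counters locally and hands them to a metrics object exposing set(); the worker-count high-water mark is reset to the current pool size after each report. DummyMetrics below is illustrative only and is not the real awx.main.analytics.subsystem_metrics API.

class DummyMetrics:
    """Stand-in for the subsystem metrics object; only set() is assumed here."""

    def __init__(self):
        self.values = {}

    def set(self, key, value):
        self.values[key] = value


class PoolCounters:
    def __init__(self, workers=None):
        self.scale_up_ct = 0       # incremented every time up() adds a worker
        self.worker_count_max = 0  # high-water mark between metric reports
        self.workers = workers or []

    def produce_subsystem_metrics(self, metrics_object):
        metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
        metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
        # after reporting, the high-water mark falls back to the current size
        self.worker_count_max = len(self.workers)


metrics = DummyMetrics()
pool = PoolCounters(workers=['w1', 'w2'])
pool.scale_up_ct, pool.worker_count_max = 3, 7
pool.produce_subsystem_metrics(metrics)
print(metrics.values, pool.worker_count_max)  # counters reported, high-water mark reset to 2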

@@ -73,15 +73,15 @@ class task:
return cls.apply_async(args, kwargs)
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
def get_async_body(cls, args=None, kwargs=None, uuid=None, **kw):
"""
Get the python dict to become JSON data in the pg_notify message
This same message gets passed over the dispatcher IPC queue to workers
If a task is submitted to a multiprocessing pool, skipping pg_notify, this might be used directly
"""
task_id = uuid or str(uuid4())
args = args or []
kwargs = kwargs or {}
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
guid = get_guid()
if guid:
@@ -89,6 +89,16 @@ class task:
if bind_kwargs:
obj['bind_kwargs'] = bind_kwargs
obj.update(**kw)
return obj
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
if callable(queue):
queue = queue()
if not is_testing():
@@ -116,4 +126,5 @@ class task:
setattr(fn, 'name', cls.name)
setattr(fn, 'apply_async', cls.apply_async)
setattr(fn, 'delay', cls.delay)
setattr(fn, 'get_async_body', cls.get_async_body)
return fn

View File
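For reference, the body built by get_async_body above is a plain dict that is either JSON-serialized for pg_notify or handed directly to the worker pool. A rough, hedged sketch of that shape, using a hypothetical task name (awx.main.tasks.system.example_task is not a real task):

import time
from uuid import uuid4

def example_async_body(args=None, kwargs=None, uuid=None, **kw):
    # mirrors task.get_async_body: uuid, args, kwargs, dotted task name, publish time
    body = {
        'uuid': uuid or str(uuid4()),
        'args': args or [],
        'kwargs': kwargs or {},
        'task': 'awx.main.tasks.system.example_task',  # hypothetical task path
        'time_pub': time.time(),
    }
    body.update(**kw)
    return body

print(example_async_body(args=[42], guid='request-guid'))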

@@ -79,9 +79,11 @@ def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=No
else:
hostname = instance.hostname
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
jobs = UnifiedJob.objects.filter(
Q(status='running', modified__lte=ref_time) & (Q(execution_node=hostname) | Q(controller_node=hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
)
base_Q = Q(status='running') & (Q(execution_node=hostname) | Q(controller_node=hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
if ref_time:
jobs = UnifiedJob.objects.filter(base_Q & Q(started__lte=ref_time))
else:
jobs = UnifiedJob.objects.filter(base_Q)
if excluded_uuids:
jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
for j in jobs:

View File

@@ -7,17 +7,21 @@ import signal
import sys
import redis
import json
import psycopg2
import psycopg
import time
from uuid import UUID
from queue import Empty as QueueEmpty
from datetime import timedelta
from django import db
from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch.periodic import Scheduler
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime
from awx.main.utils.db import set_connection_name
import awx.main.analytics.subsystem_metrics as s_metrics
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -62,10 +66,12 @@ class AWXConsumerBase(object):
def control(self, body):
logger.warning(f'Received control signal:\n{body}')
control = body.get('control')
if control in ('status', 'running', 'cancel'):
if control in ('status', 'schedule', 'running', 'cancel'):
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
if control == 'schedule':
msg = self.scheduler.debug()
elif control == 'running':
msg = []
for worker in self.pool.workers:
@@ -91,16 +97,11 @@ class AWXConsumerBase(object):
else:
logger.error('unrecognized control message: {}'.format(control))
def process_task(self, body):
def dispatch_task(self, body):
"""This will place the given body into a worker queue to run method decorated as a task"""
if isinstance(body, dict):
body['time_ack'] = time.time()
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
if len(self.pool):
if "uuid" in body and body['uuid']:
try:
@@ -114,15 +115,24 @@ class AWXConsumerBase(object):
self.pool.write(queue, body)
self.total_messages += 1
def process_task(self, body):
"""Routes the task details in body as either a control task or a task-task"""
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
self.dispatch_task(body)
@log_excess_runtime(logger)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
try:
self.redis.set(f'awx_{self.name}_statistics', self.pool.debug())
self.last_stats = time.time()
except Exception:
logger.exception(f"encountered an error communicating with redis to store {self.name} statistics")
self.last_stats = time.time()
self.last_stats = time.time()
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
@@ -141,29 +151,72 @@ class AWXConsumerRedis(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start()
logger.info(f'Callback receiver started with pid={os.getpid()}')
db.connection.close() # logs use database, so close connection
while True:
logger.debug(f'{os.getpid()} is alive')
time.sleep(60)
class AWXConsumerPG(AWXConsumerBase):
def __init__(self, *args, **kwargs):
def __init__(self, *args, schedule=None, **kwargs):
super().__init__(*args, **kwargs)
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTIME_TOLERANCE
# if no successful loops have run since startup, then we should fail right away
self.pg_is_down = True # set so that we fail if we get database errors on startup
self.pg_down_time = time.time() - self.pg_max_wait # allow no grace period
self.last_cleanup = time.time()
init_time = time.time()
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period
self.last_cleanup = init_time
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
if schedule:
schedule = schedule.copy()
else:
schedule = {}
# add control tasks to be run on regular schedules
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
schedule['pool_cleanup'] = {'control': self.pool.cleanup, 'schedule': timedelta(seconds=60)}
# record subsystem metrics for the dispatcher
schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
self.scheduler = Scheduler(schedule)
def record_metrics(self):
current_time = time.time()
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
def run_periodic_tasks(self):
self.record_statistics() # maintains time buffer in method
"""
Run general periodic logic, and return the maximum time in seconds
before the next requested run.
This may be called more often than that when events are consumed,
so it needs to stay very efficient.
"""
try:
self.record_statistics() # maintains time buffer in method
except Exception as exc:
logger.warning(f'Failed to save dispatcher statistics {exc}')
if time.time() - self.last_cleanup > 60: # same as cluster_node_heartbeat
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
self.pool.cleanup()
self.last_cleanup = time.time()
for job in self.scheduler.get_and_mark_pending():
if 'control' in job.data:
try:
job.data['control']()
except Exception:
logger.exception(f'Error running control task {job.data}')
elif 'task' in job.data:
body = self.worker.resolve_callable(job.data['task']).get_async_body()
# bypasses pg_notify for scheduled tasks
self.dispatch_task(body)
self.pg_is_down = False
self.listen_start = time.time()
return self.scheduler.time_until_next_run()
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
@@ -179,17 +232,21 @@ class AWXConsumerPG(AWXConsumerBase):
if init is False:
self.worker.on_start()
init = True
# run_periodic_tasks runs scheduled actions and gives the time until the next scheduled action
# this is saved to the conn (PubSub) object in order to modify read timeout in-loop
conn.select_timeout = self.run_periodic_tasks()
# this is the main operational loop for awx-manage run_dispatcher
for e in conn.events(yield_timeouts=True):
self.listen_cumulative_time += time.time() - self.listen_start # for metrics
if e is not None:
self.process_task(json.loads(e.payload))
self.run_periodic_tasks()
self.pg_is_down = False
conn.select_timeout = self.run_periodic_tasks()
if self.should_stop:
return
except psycopg2.InterfaceError:
except psycopg.InterfaceError:
logger.warning("Stale Postgres message bus connection, reconnecting")
continue
except (db.DatabaseError, psycopg2.OperationalError):
except (db.DatabaseError, psycopg.OperationalError):
# If we have attained steady state operation, tolerate short-term database hiccups
if not self.pg_is_down:
logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s")
@@ -219,6 +276,7 @@ class BaseWorker(object):
def work_loop(self, queue, finished, idx, *args):
ppid = os.getppid()
signal_handler = WorkerSignalHandler()
set_connection_name('worker') # set application_name to distinguish from other dispatcher processes
while not signal_handler.kill_now:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
@@ -230,8 +288,8 @@ class BaseWorker(object):
break
except QueueEmpty:
continue
except Exception as e:
logger.error("Exception on worker {}, restarting: ".format(idx) + str(e))
except Exception:
logger.exception("Exception on worker {}, reconnecting: ".format(idx))
continue
try:
for conn in db.connections.all():

View File
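Grounded in the AWXConsumerPG constructor above, a schedule is a dict keyed by name where each entry carries either a 'control' callable (run in the dispatcher parent process) or a 'task' dotted path (resolved and dispatched to workers, bypassing pg_notify), plus a 'schedule' timedelta. A hedged sketch of such a dict; the dotted task path here is hypothetical:

from datetime import timedelta

def example_cleanup():
    print('pool cleanup would run here')

example_schedule = {
    # control entries call a local method inside the dispatcher parent process
    'pool_cleanup': {'control': example_cleanup, 'schedule': timedelta(seconds=60)},
    # task entries are resolved via resolve_callable() and sent to the worker pool
    'example_heartbeat': {'task': 'awx.main.tasks.system.example_heartbeat', 'schedule': timedelta(seconds=20)},
}

for name, entry in example_schedule.items():
    print(name, entry['schedule'].total_seconds())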

@@ -191,7 +191,9 @@ class CallbackBrokerWorker(BaseWorker):
e._retry_count = retry_count
# special sanitization logic for postgres treatment of NUL 0x00 char
if (retry_count == 1) and isinstance(exc_indv, ValueError) and ("\x00" in e.stdout):
# This used to check the class of the exception, but with the psycopg3 upgrade it could appear
# as either DataError or ValueError, so now let's just check if it's there.
if (retry_count == 1) and ("\x00" in e.stdout):
e.stdout = e.stdout.replace("\x00", "")
if retry_count >= self.INDIVIDUAL_EVENT_RETRIES:

View File

@@ -26,8 +26,8 @@ class TaskWorker(BaseWorker):
`awx.main.dispatch.publish`.
"""
@classmethod
def resolve_callable(cls, task):
@staticmethod
def resolve_callable(task):
"""
Transform a dotted notation task into an imported, callable function, e.g.,
@@ -46,7 +46,8 @@ class TaskWorker(BaseWorker):
return _call
def run_callable(self, body):
@staticmethod
def run_callable(body):
"""
Given some AMQP message, import the correct Python code and run it.
"""

View File
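The resolve_callable docstring above describes turning a dotted task name into an imported callable. A much-simplified sketch of that idea with importlib, not the actual TaskWorker implementation (which also deals with the @task registration):

import importlib

def resolve_dotted_callable(task):
    # 'package.module.function' -> import the module, fetch the attribute
    module_name, _, func_name = task.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, func_name)

# usage with a stdlib path so the sketch runs anywhere
fn = resolve_dotted_callable('os.path.join')
print(fn('a', 'b'))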

@@ -67,10 +67,60 @@ def __enum_validate__(validator, enums, instance, schema):
Draft4Validator.VALIDATORS['enum'] = __enum_validate__
import logging
logger = logging.getLogger('awx.main.fields')
class JSONBlob(JSONField):
# Cringe... a JSONField that is backed by a TextField.
# This field was a legacy custom field type that, tl;dr, was a TextField.
# Over the years, with Django upgrades, we were able to move to a JSONField instead of the custom field.
# However, we didn't want to force large customers with millions of events to convert from text to json during an upgrade,
# so we keep this field type backed by a TextField.
def get_internal_type(self):
return "TextField"
# postgres uses a Jsonb field as the default backend
# with psycopg2 it was using a psycopg2._json.Json class internally
# with psycopg3 it uses a psycopg.types.json.Jsonb class internally
# The binary class was not compatible with a text field, so we are going to override these next two methods and ensure we are using a string
def from_db_value(self, value, expression, connection):
if value is None:
return value
if isinstance(value, str):
try:
return json.loads(value)
except Exception as e:
logger.error(f"Failed to load JSONField {self.name}: {e}")
return value
def get_db_prep_value(self, value, connection, prepared=False):
if not prepared:
value = self.get_prep_value(value)
try:
# Null characters are not allowed in text fields, and JSONBlobs are JSON data but saved as text,
# so we want to make sure we strip out any null characters. Also note, these "should" be escaped by the dumps process:
# >>> my_obj = { 'test': '\x00' }
# >>> import json
# >>> json.dumps(my_obj)
# '{"test": "\\u0000"}'
# But just to be safe, let's remove them if they are there. \x00 and \u0000 are the same:
# >>> string = "\x00"
# >>> "\u0000" in string
# True
dumped_value = json.dumps(value)
if "\x00" in dumped_value:
dumped_value = dumped_value.replace("\x00", '')
return dumped_value
except Exception as e:
logger.error(f"Failed to dump JSONField {self.name}: {e} value: {value}")
return value
# Based on AutoOneToOneField from django-annoying:
# https://bitbucket.org/offline/django-annoying/src/a0de8b294db3/annoying/fields.py
@@ -800,7 +850,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
def validate_env_var_allowed(self, env_var):
if env_var.startswith('ANSIBLE_'):
raise django_exceptions.ValidationError(
_('Environment variable {} may affect Ansible configuration so its ' 'use is not allowed in credentials.').format(env_var),
_('Environment variable {} may affect Ansible configuration so its use is not allowed in credentials.').format(env_var),
code='invalid',
params={'value': env_var},
)
@@ -954,6 +1004,16 @@ class OrderedManyToManyDescriptor(ManyToManyDescriptor):
def get_queryset(self):
return super(OrderedManyRelatedManager, self).get_queryset().order_by('%s__position' % self.through._meta.model_name)
def add(self, *objects):
if len(objects) > 1:
raise RuntimeError('Ordered many-to-many fields do not support multiple objects')
return super().add(*objects)
def remove(self, *objects):
if len(objects) > 1:
raise RuntimeError('Ordered many-to-many fields do not support multiple objects')
return super().remove(*objects)
return OrderedManyRelatedManager
return add_custom_queryset_to_many_related_manager(
@@ -971,13 +1031,12 @@ class OrderedManyToManyField(models.ManyToManyField):
by a special `position` column on the M2M table
"""
def _update_m2m_position(self, sender, **kwargs):
if kwargs.get('action') in ('post_add', 'post_remove'):
order_with_respect_to = None
for field in sender._meta.local_fields:
if isinstance(field, models.ForeignKey) and isinstance(kwargs['instance'], field.related_model):
order_with_respect_to = field.name
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: kwargs['instance'].pk})):
def _update_m2m_position(self, sender, instance, action, **kwargs):
if action in ('post_add', 'post_remove'):
descriptor = getattr(instance, self.name)
order_with_respect_to = descriptor.source_field_name
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: instance.pk})):
if ig.position != i:
ig.position = i
ig.save()

View File
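A quick standalone illustration of the NUL-character behaviour described in the JSONBlob comments above: json.dumps escapes a raw \x00 to the six-character sequence \u0000, so a dumped value normally contains no literal NUL, and the field strips any stray ones defensively before the text hits Postgres.

import json

value = {'test': '\x00trailing'}

dumped = json.dumps(value)
print(dumped)              # {"test": "\u0000trailing"} -- dumps escapes the NUL
print('\x00' in dumped)    # False: the literal NUL is gone once escaped

# the field still strips raw NULs defensively, mirroring get_db_prep_value
raw = 'broken\x00string'
print(raw.replace('\x00', ''))  # 'brokenstring'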

@@ -23,7 +23,7 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would ' 'be removed)')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))

View File

@@ -0,0 +1,22 @@
from django.core.management.base import BaseCommand
from django.conf import settings
from awx.main.tasks.host_metrics import HostMetricTask
class Command(BaseCommand):
"""
This command provides a cleanup task for the HostMetric model.
There are two modes, which run in the following order:
- soft cleanup
- - Perform soft-deletion of all host metrics last automated 12 months ago or before.
This is the same as issuing a DELETE request to /api/v2/host_metrics/N/ for all host metrics that match the criteria.
- - updates columns delete, deleted_counter and last_deleted
- hard cleanup
- - Permanently erase from the database all host metrics last automated 36 months ago or before.
This operation happens after the soft deletion has finished.
"""
help = 'Run soft and hard-deletion of HostMetrics'
def handle(self, *args, **options):
HostMetricTask().cleanup(soft_threshold=settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, hard_threshold=settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD)

View File

@@ -17,10 +17,7 @@ from django.utils.timezone import now
# AWX
from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob, WorkflowJob, Notification
def unified_job_class_to_event_table_name(job_class):
return f'main_{job_class().event_class.__name__.lower()}'
from awx.main.utils import unified_job_class_to_event_table_name
def partition_table_name(job_class, dt):
@@ -152,7 +149,7 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would ' 'be removed)')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')
@@ -198,14 +195,35 @@ class Command(BaseCommand):
delete_meta.delete_jobs()
return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)
def _cascade_delete_job_events(self, model, pk_list):
def _handle_unpartitioned_events(self, model, pk_list):
"""
If unpartitioned job events remain, it will cascade those from jobs in pk_list
if the unpartitioned table is no longer necessary, it will drop the table
"""
tblname = unified_job_class_to_event_table_name(model)
rel_name = model().event_parent_key
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
row = cursor.fetchone()
if row is None:
self.logger.debug(f'Unpartitioned table for {rel_name} does not exist, you are fully migrated')
return
if pk_list:
with connection.cursor() as cursor:
tblname = unified_job_class_to_event_table_name(model)
pk_list_csv = ','.join(map(str, pk_list))
rel_name = model().event_parent_key
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
with connection.cursor() as cursor:
# same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}"')
row = cursor.fetchone()
last_created = row[0]
if last_created:
self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
else:
self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
if (last_created is None) or (last_created < self.cutoff):
self.logger.warning(f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}')
cursor.execute(f'DROP TABLE _unpartitioned_{tblname}')
def cleanup_jobs(self):
batch_size = 100000
@@ -230,7 +248,7 @@ class Command(BaseCommand):
_, results = qs_batch.delete()
deleted += results['main.Job']
self._cascade_delete_job_events(Job, pk_list)
self._handle_unpartitioned_events(Job, pk_list)
return skipped, deleted
@@ -253,7 +271,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(AdHocCommand, pk_list)
self._handle_unpartitioned_events(AdHocCommand, pk_list)
skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -281,7 +299,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(ProjectUpdate, pk_list)
self._handle_unpartitioned_events(ProjectUpdate, pk_list)
skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -309,7 +327,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(InventoryUpdate, pk_list)
self._handle_unpartitioned_events(InventoryUpdate, pk_list)
skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -333,7 +351,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(SystemJob, pk_list)
self._handle_unpartitioned_events(SystemJob, pk_list)
skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted

View File

@@ -44,7 +44,7 @@ class Command(BaseCommand):
'- To list all (now deprecated) custom virtual environments run:',
'awx-manage list_custom_venvs',
'',
'- To export the contents of a (deprecated) virtual environment, ' 'run the following command while supplying the path as an argument:',
'- To export the contents of a (deprecated) virtual environment, run the following command while supplying the path as an argument:',
'awx-manage export_custom_venv /path/to/venv',
'',
'- Run these commands with `-q` to remove tool tips.',

View File

@@ -13,7 +13,7 @@ class Command(BaseCommand):
Deprovision a cluster node
"""
help = 'Remove instance from the database. ' 'Specify `--hostname` to use this command.'
help = 'Remove instance from the database. Specify `--hostname` to use this command.'
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help='Hostname used during provisioning')

View File

@@ -1,5 +1,6 @@
from django.core.management.base import BaseCommand, CommandError
from awx.main.tasks.system import clear_setting_cache
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
@@ -31,5 +32,7 @@ class Command(BaseCommand):
else:
raise CommandError('Please pass --enable flag to allow local auth or --disable flag to disable local auth')
clear_setting_cache.delay(['DISABLE_LOCAL_AUTH'])
def handle(self, **options):
self._enable_disable_auth(options.get('enable'), options.get('disable'))

View File

@@ -1,53 +1,230 @@
from django.core.management.base import BaseCommand
import datetime
from django.core.serializers.json import DjangoJSONEncoder
from awx.main.models.inventory import HostMetric
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.main.analytics.collectors import config
import json
import sys
import tempfile
import tarfile
import csv
CSV_PREFERRED_ROW_COUNT = 500000
BATCHED_FETCH_COUNT = 10000
class Command(BaseCommand):
help = 'This is for offline licensing usage'
def host_metric_queryset(self, result, offset=0, limit=BATCHED_FETCH_COUNT):
list_of_queryset = list(
result.values(
'id',
'hostname',
'first_automation',
'last_automation',
'last_deleted',
'automated_counter',
'deleted_counter',
'deleted',
'used_in_inventories',
).order_by('first_automation')[offset : offset + limit]
)
return list_of_queryset
def host_metric_summary_monthly_queryset(self, result, offset=0, limit=BATCHED_FETCH_COUNT):
list_of_queryset = list(
result.values(
'id',
'date',
'license_consumed',
'license_capacity',
'hosts_added',
'hosts_deleted',
'indirectly_managed_hosts',
).order_by(
'date'
)[offset : offset + limit]
)
return list_of_queryset
def paginated_db_retrieval(self, type, filter_kwargs, rows_per_file):
offset = 0
list_of_queryset = []
while True:
if type == 'host_metric':
result = HostMetric.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_queryset(result, offset, rows_per_file)
elif type == 'host_metric_summary_monthly':
result = HostMetricSummaryMonthly.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_summary_monthly_queryset(result, offset, rows_per_file)
if not list_of_queryset:
break
else:
yield list_of_queryset
offset += len(list_of_queryset)
def controlled_db_retrieval(self, type, filter_kwargs, offset=0, fetch_count=BATCHED_FETCH_COUNT):
if type == 'host_metric':
result = HostMetric.objects.filter(**filter_kwargs)
return self.host_metric_queryset(result, offset, fetch_count)
elif type == 'host_metric_summary_monthly':
result = HostMetricSummaryMonthly.objects.filter(**filter_kwargs)
return self.host_metric_summary_monthly_queryset(result, offset, fetch_count)
def write_to_csv(self, csv_file, list_of_queryset, always_header, first_write=False, mode='a'):
with open(csv_file, mode, newline='') as output_file:
try:
keys = list_of_queryset[0].keys() if list_of_queryset else []
dict_writer = csv.DictWriter(output_file, keys)
if always_header or first_write:
dict_writer.writeheader()
dict_writer.writerows(list_of_queryset)
except Exception as e:
print(e)
def csv_for_tar(self, temp_dir, type, filter_kwargs, rows_per_file, always_header=True):
for index, list_of_queryset in enumerate(self.paginated_db_retrieval(type, filter_kwargs, rows_per_file)):
csv_file = f'{temp_dir}/{type}{index+1}.csv'
arcname_file = f'{type}{index+1}.csv'
first_write = index == 0
self.write_to_csv(csv_file, list_of_queryset, always_header, first_write, 'w')
yield csv_file, arcname_file
def csv_for_tar_batched_fetch(self, temp_dir, type, filter_kwargs, rows_per_file, always_header=True):
csv_iteration = 1
offset = 0
rows_written_per_csv = 0
to_fetch = BATCHED_FETCH_COUNT
while True:
list_of_queryset = self.controlled_db_retrieval(type, filter_kwargs, offset, to_fetch)
if not list_of_queryset:
break
csv_file = f'{temp_dir}/{type}{csv_iteration}.csv'
arcname_file = f'{type}{csv_iteration}.csv'
self.write_to_csv(csv_file, list_of_queryset, always_header)
offset += to_fetch
rows_written_per_csv += to_fetch
always_header = False
remaining_rows_per_csv = rows_per_file - rows_written_per_csv
if not remaining_rows_per_csv:
yield csv_file, arcname_file
rows_written_per_csv = 0
always_header = True
to_fetch = BATCHED_FETCH_COUNT
csv_iteration += 1
elif remaining_rows_per_csv < BATCHED_FETCH_COUNT:
to_fetch = remaining_rows_per_csv
if rows_written_per_csv:
yield csv_file, arcname_file
def config_for_tar(self, options, temp_dir):
config_json = json.dumps(config(options.get('since')))
config_file = f'{temp_dir}/config.json'
arcname_file = 'config.json'
with open(config_file, 'w') as f:
f.write(config_json)
return config_file, arcname_file
def output_json(self, options, filter_kwargs):
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in self.csv_for_tar(temp_dir, options.get('json', 'host_metric'), filter_kwargs, BATCHED_FETCH_COUNT, True):
csv_file = csv_detail[0]
with open(csv_file) as f:
reader = csv.DictReader(f)
rows = list(reader)
json_result = json.dumps(rows, cls=DjangoJSONEncoder)
print(json_result)
def output_csv(self, options, filter_kwargs):
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in self.csv_for_tar(temp_dir, options.get('csv', 'host_metric'), filter_kwargs, BATCHED_FETCH_COUNT, False):
csv_file = csv_detail[0]
with open(csv_file) as f:
sys.stdout.write(f.read())
def output_tarball(self, options, filter_kwargs):
always_header = True
rows_per_file = options['rows_per_file'] or CSV_PREFERRED_ROW_COUNT
tar = tarfile.open("./host_metrics.tar.gz", "w:gz")
if rows_per_file <= BATCHED_FETCH_COUNT:
csv_function = self.csv_for_tar
else:
csv_function = self.csv_for_tar_batched_fetch
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in csv_function(temp_dir, 'host_metric', filter_kwargs, rows_per_file, always_header):
tar.add(csv_detail[0], arcname=csv_detail[1])
for csv_detail in csv_function(temp_dir, 'host_metric_summary_monthly', filter_kwargs, rows_per_file, always_header):
tar.add(csv_detail[0], arcname=csv_detail[1])
config_file, arcname_file = self.config_for_tar(options, temp_dir)
tar.add(config_file, arcname=arcname_file)
tar.close()
def add_arguments(self, parser):
parser.add_argument('--since', type=datetime.datetime.fromisoformat, help='Start Date in ISO format YYYY-MM-DD')
parser.add_argument('--until', type=datetime.datetime.fromisoformat, help='End Date in ISO format YYYY-MM-DD')
parser.add_argument('--json', action='store_true', help='Select output as JSON')
parser.add_argument('--json', type=str, const='host_metric', nargs='?', help='Select output as JSON for host_metric or host_metric_summary_monthly')
parser.add_argument('--csv', type=str, const='host_metric', nargs='?', help='Select output as CSV for host_metric or host_metric_summary_monthly')
parser.add_argument('--tarball', action='store_true', help=f'Package CSV files into a tar with up to {CSV_PREFERRED_ROW_COUNT} rows')
parser.add_argument('--rows_per_file', type=int, help=f'Split rows into chunks of up to {CSV_PREFERRED_ROW_COUNT} per file')
def handle(self, *args, **options):
since = options.get('since')
until = options.get('until')
if since is None and until is None:
print("No Arguments received")
return None
if since is not None and since.tzinfo is None:
since = since.replace(tzinfo=datetime.timezone.utc)
if until is not None and until.tzinfo is None:
until = until.replace(tzinfo=datetime.timezone.utc)
filter_kwargs = {}
if since is not None:
filter_kwargs['last_automation__gte'] = since
if until is not None:
filter_kwargs['last_automation__lte'] = until
result = HostMetric.objects.filter(**filter_kwargs)
filter_kwargs_host_metrics_summary = {}
if since is not None:
filter_kwargs_host_metrics_summary['date__gte'] = since
if options['rows_per_file'] and options.get('rows_per_file') > CSV_PREFERRED_ROW_COUNT:
print(f"rows_per_file exceeds the allowable limit of {CSV_PREFERRED_ROW_COUNT}.")
return
# if --json flag is set, output the result in json format
if options['json']:
list_of_queryset = list(result.values('hostname', 'first_automation', 'last_automation'))
json_result = json.dumps(list_of_queryset, cls=DjangoJSONEncoder)
print(json_result)
self.output_json(options, filter_kwargs)
elif options['csv']:
self.output_csv(options, filter_kwargs)
elif options['tarball']:
self.output_tarball(options, filter_kwargs)
# --json flag is not set, output in plain text
else:
print(f"Total Number of hosts automated: {len(result)}")
for item in result:
print(f"Printing up to {BATCHED_FETCH_COUNT} automated hosts:")
result = HostMetric.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_queryset(result, 0, BATCHED_FETCH_COUNT)
for item in list_of_queryset:
print(
"Hostname : {hostname} | first_automation : {first_automation} | last_automation : {last_automation}".format(
hostname=item.hostname, first_automation=item.first_automation, last_automation=item.last_automation
hostname=item['hostname'], first_automation=item['first_automation'], last_automation=item['last_automation']
)
)
return

View File
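A small runnable sketch (under stated assumptions, not the command itself) of the batching arithmetic in csv_for_tar_batched_fetch above: rows are fetched BATCHED_FETCH_COUNT at a time, a new CSV file starts whenever the per-file budget rows_per_file is reached, and the final fetch for a file is shrunk so the budget is never overshot.

BATCHED_FETCH_COUNT = 10000

def fetch_plan(total_rows, rows_per_file):
    """Yield (csv_index, rows_in_fetch) pairs mimicking the batched-fetch loop."""
    csv_iteration, written, to_fetch = 1, 0, BATCHED_FETCH_COUNT
    while total_rows > 0:
        fetched = min(to_fetch, total_rows)
        total_rows -= fetched
        written += fetched
        yield csv_iteration, fetched
        remaining = rows_per_file - written
        if remaining <= 0:
            csv_iteration += 1
            written, to_fetch = 0, BATCHED_FETCH_COUNT
        elif remaining < BATCHED_FETCH_COUNT:
            to_fetch = remaining

# 25,000 rows with a 15,000-row budget: file 1 gets 10k + 5k, file 2 gets 10k
for idx, n in fetch_plan(25000, 15000):
    print(idx, n)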

@@ -0,0 +1,9 @@
from django.core.management.base import BaseCommand
from awx.main.tasks.host_metrics import HostMetricSummaryMonthlyTask
class Command(BaseCommand):
help = 'Compute HostMetricSummaryMonthly'
def handle(self, *args, **options):
HostMetricSummaryMonthlyTask().execute()

View File

@@ -458,12 +458,19 @@ class Command(BaseCommand):
# TODO: We disable variable overwrite here in case user-defined inventory variables get
# mangled. But we still need to figure out a better way of processing multiple inventory
# update variables mixing with each other.
all_obj = self.inventory
db_variables = all_obj.variables_dict
db_variables.update(self.all_group.variables)
if db_variables != all_obj.variables_dict:
all_obj.variables = json.dumps(db_variables)
all_obj.save(update_fields=['variables'])
# issue for this: https://github.com/ansible/awx/issues/11623
if self.inventory.kind == 'constructed' and self.inventory_source.overwrite_vars:
# NOTE: we had to add an exception case to not merge variables
# in order to keep constructed inventory coherent
db_variables = self.all_group.variables
else:
db_variables = self.inventory.variables_dict
db_variables.update(self.all_group.variables)
if db_variables != self.inventory.variables_dict:
self.inventory.variables = json.dumps(db_variables)
self.inventory.save(update_fields=['variables'])
logger.debug('Inventory variables updated from "all" group')
else:
logger.debug('Inventory variables unmodified')
@@ -522,16 +529,32 @@ class Command(BaseCommand):
def _update_db_host_from_mem_host(self, db_host, mem_host):
# Update host variables.
db_variables = db_host.variables_dict
if self.overwrite_vars:
db_variables = mem_host.variables
else:
db_variables.update(mem_host.variables)
mem_variables = mem_host.variables
update_fields = []
# Update host instance_id.
instance_id = self._get_instance_id(mem_variables)
if instance_id != db_host.instance_id:
old_instance_id = db_host.instance_id
db_host.instance_id = instance_id
update_fields.append('instance_id')
if self.inventory.kind == 'constructed':
# remove towervars so the constructed hosts do not have extra variables
for prefix in ('host', 'tower'):
for var in ('remote_{}_enabled', 'remote_{}_id'):
mem_variables.pop(var.format(prefix), None)
if self.overwrite_vars:
db_variables = mem_variables
else:
db_variables.update(mem_variables)
if db_variables != db_host.variables_dict:
db_host.variables = json.dumps(db_variables)
update_fields.append('variables')
# Update host enabled flag.
enabled = self._get_enabled(mem_host.variables)
enabled = self._get_enabled(mem_variables)
if enabled is not None and db_host.enabled != enabled:
db_host.enabled = enabled
update_fields.append('enabled')
@@ -540,12 +563,6 @@ class Command(BaseCommand):
old_name = db_host.name
db_host.name = mem_host.name
update_fields.append('name')
# Update host instance_id.
instance_id = self._get_instance_id(mem_host.variables)
if instance_id != db_host.instance_id:
old_instance_id = db_host.instance_id
db_host.instance_id = instance_id
update_fields.append('instance_id')
# Update host and display message(s) on what changed.
if update_fields:
db_host.save(update_fields=update_fields)
@@ -654,13 +671,19 @@ class Command(BaseCommand):
mem_host = self.all_group.all_hosts[mem_host_name]
import_vars = mem_host.variables
host_desc = import_vars.pop('_awx_description', 'imported')
host_attrs = dict(variables=json.dumps(import_vars), description=host_desc)
host_attrs = dict(description=host_desc)
enabled = self._get_enabled(mem_host.variables)
if enabled is not None:
host_attrs['enabled'] = enabled
if self.instance_id_var:
instance_id = self._get_instance_id(mem_host.variables)
host_attrs['instance_id'] = instance_id
if self.inventory.kind == 'constructed':
# remove towervars so the constructed hosts do not have extra variables
for prefix in ('host', 'tower'):
for var in ('remote_{}_enabled', 'remote_{}_id'):
import_vars.pop(var.format(prefix), None)
host_attrs['variables'] = json.dumps(import_vars)
try:
sanitize_jinja(mem_host_name)
except ValueError as e:

View File
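A tiny standalone illustration of the towervars stripping added above for constructed inventories; the two nested loops remove exactly these four keys and leave everything else alone:

import_vars = {
    'remote_host_enabled': True,
    'remote_host_id': 7,
    'remote_tower_enabled': True,
    'remote_tower_id': 7,
    'ansible_host': '10.0.0.5',
}

for prefix in ('host', 'tower'):
    for var in ('remote_{}_enabled', 'remote_{}_id'):
        import_vars.pop(var.format(prefix), None)

print(import_vars)  # only {'ansible_host': '10.0.0.5'} remains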

@@ -22,7 +22,7 @@ class Command(BaseCommand):
'# Discovered Virtual Environments:',
'\n'.join(venvs),
'',
'- To export the contents of a (deprecated) virtual environment, ' 'run the following command while supplying the path as an argument:',
'- To export the contents of a (deprecated) virtual environment, run the following command while supplying the path as an argument:',
'awx-manage export_custom_venv /path/to/venv',
'',
'- To view the connections a (deprecated) virtual environment had in the database, run the following command while supplying the path as an argument:',

View File

@@ -44,16 +44,18 @@ class Command(BaseCommand):
for x in ig.instances.all():
color = '\033[92m'
end_color = '\033[0m'
if x.capacity == 0 and x.node_type != 'hop':
color = '\033[91m'
if not x.enabled:
color = '\033[90m[DISABLED] '
if no_color:
color = ''
end_color = ''
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}\033[0m')
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}{end_color}')
print()

View File

@@ -0,0 +1,27 @@
from django.utils.timezone import now
from django.core.management.base import BaseCommand, CommandParser
from datetime import timedelta
from awx.main.utils.common import create_partition, unified_job_class_to_event_table_name
from awx.main.models import Job, SystemJob, ProjectUpdate, InventoryUpdate, AdHocCommand
class Command(BaseCommand):
"""Command used to precreate database partitions to avoid pg_dump locks"""
def add_arguments(self, parser: CommandParser) -> None:
parser.add_argument('--count', dest='count', action='store', help='The number of hours of partitions to create', type=int, default=1)
def _create_partitioned_tables(self, count):
tables = list()
for model in (Job, SystemJob, ProjectUpdate, InventoryUpdate, AdHocCommand):
tables.append(unified_job_class_to_event_table_name(model))
start = now()
while count > 0:
for table in tables:
create_partition(table, start)
print(f'Created partitions for {table} {start}')
start = start + timedelta(hours=1)
count -= 1
def handle(self, **options):
self._create_partitioned_tables(count=options.get('count'))

View File
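A runnable sketch of the hour-by-hour stepping used by _create_partitioned_tables above, with a stand-in for create_partition that only records its arguments (the real helper lives in awx.main.utils.common):

from datetime import datetime, timedelta, timezone

created = []

def fake_create_partition(table, start):
    # stand-in for awx.main.utils.common.create_partition
    created.append((table, start.strftime('%Y-%m-%d %H:00')))

def create_partitioned_tables(tables, count, start=None):
    start = start or datetime.now(timezone.utc)
    while count > 0:
        for table in tables:
            fake_create_partition(table, start)
        start = start + timedelta(hours=1)
        count -= 1

create_partitioned_tables(['main_jobevent'], count=3)
print(created)  # three consecutive hourly partitions for the one table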

@@ -25,17 +25,20 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help="Hostname used during provisioning")
parser.add_argument('--listener_port', dest='listener_port', type=int, help="Receptor listener port")
parser.add_argument('--node_type', type=str, default='hybrid', choices=['control', 'execution', 'hop', 'hybrid'], help="Instance Node type")
parser.add_argument('--uuid', type=str, help="Instance UUID")
def _register_hostname(self, hostname, node_type, uuid):
def _register_hostname(self, hostname, node_type, uuid, listener_port):
if not hostname:
if not settings.AWX_AUTO_DEPROVISION_INSTANCES:
raise CommandError('Registering with values from settings is only intended for use in K8s installs')
from awx.main.management.commands.register_queue import RegisterQueue
(changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', uuid=settings.SYSTEM_UUID)
(changed, instance) = Instance.objects.register(
ip_address=os.environ.get('MY_POD_IP'), listener_port=listener_port, node_type='control', node_uuid=settings.SYSTEM_UUID
)
RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
RegisterQueue(
settings.DEFAULT_EXECUTION_QUEUE_NAME,
@@ -48,7 +51,7 @@ class Command(BaseCommand):
max_concurrent_jobs=settings.DEFAULT_EXECUTION_QUEUE_MAX_CONCURRENT_JOBS,
).register()
else:
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, uuid=uuid)
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid, listener_port=listener_port)
if changed:
print("Successfully registered instance {}".format(hostname))
else:
@@ -58,6 +61,6 @@ class Command(BaseCommand):
@transaction.atomic
def handle(self, **options):
self.changed = False
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'))
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'), options.get('listener_port'))
if self.changed:
print("(changed: True)")

View File

@@ -0,0 +1,33 @@
import logging
import json
from django.core.management.base import BaseCommand
from awx.main.dispatch import pg_bus_conn
from awx.main.dispatch.worker.task import TaskWorker
logger = logging.getLogger('awx.main.cache_clear')
class Command(BaseCommand):
"""
Cache Clear
Runs as a management command and starts a daemon that listens for a pg_notify message to clear the cache.
"""
help = 'Launch the cache clear daemon'
def handle(self, *arg, **options):
try:
with pg_bus_conn() as conn:
conn.listen("tower_settings_change")
for e in conn.events(yield_timeouts=True):
if e is not None:
body = json.loads(e.payload)
logger.info(f"Cache clear request received. Clearing now, payload: {e.payload}")
TaskWorker.run_callable(body)
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in cache clear main loop')
raise

View File
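The daemon above deserializes whatever arrives on the tower_settings_change channel and hands it to TaskWorker.run_callable. A hedged sketch of what such a payload could look like, reusing the get_async_body shape from earlier in this diff; the dotted path matches the clear_setting_cache import shown above, but the exact body is an assumption:

import json
import time
from uuid import uuid4

# Assumed payload shape, mirroring task.get_async_body() earlier in this diff.
payload = json.dumps({
    'uuid': str(uuid4()),
    'args': [['DISABLE_LOCAL_AUTH']],   # one positional arg: the setting names to clear
    'kwargs': {},
    'task': 'awx.main.tasks.system.clear_setting_cache',
    'time_pub': time.time(),
})

body = json.loads(payload)
print(body['task'], body['args'])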

@@ -4,28 +4,22 @@ import logging
import yaml
from django.conf import settings
from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
from awx.main.dispatch import periodic
logger = logging.getLogger('awx.main.dispatch')
def construct_bcast_queue_name(common_name):
return common_name + '_' + settings.CLUSTER_HOST_ID
class Command(BaseCommand):
help = 'Launch the task dispatcher'
def add_arguments(self, parser):
parser.add_argument('--status', dest='status', action='store_true', help='print the internal state of any running dispatchers')
parser.add_argument('--schedule', dest='schedule', action='store_true', help='print the current status of schedules being run by the dispatcher')
parser.add_argument('--running', dest='running', action='store_true', help='print the UUIDs of any tasks managed by this dispatcher')
parser.add_argument(
'--reload',
@@ -47,6 +41,9 @@ class Command(BaseCommand):
if options.get('status'):
print(Control('dispatcher').status())
return
if options.get('schedule'):
print(Control('dispatcher').schedule())
return
if options.get('running'):
print(Control('dispatcher').running())
return
@@ -63,21 +60,11 @@ class Command(BaseCommand):
print(Control('dispatcher').cancel(cancel_data))
return
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and cache connections (that way lies race conditions)
django_connection.close()
django_cache.close()
# spawn a daemon thread to periodically enqueue scheduled tasks
# (like the node heartbeat)
periodic.run_continuously()
consumer = None
try:
queues = ['tower_broadcast_all', get_local_queuename()]
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4))
queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4), schedule=settings.CELERYBEAT_SCHEDULE)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')

Some files were not shown because too many files have changed in this diff.