Compare commits


176 Commits

Author SHA1 Message Date
Ratan Gulati
40c2b700fe Fix: #14523 Add alt-text codeblock to Images for workflow_template.rst (#14604)
* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* Update workflow_templates.rst

* Revised proposed alt text for workflow_templates.rst

---------

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-11-07 10:51:11 -07:00
Thanhnguyet Vo
71d548f9e5 Removed references to images that were deleted. 2023-11-07 08:55:27 -07:00
Thanhnguyet Vo
dd98963f86 Updated images - Workflow Templates chapter of Userguide. 2023-11-07 08:55:27 -07:00
TVo
4b467dfd8d Revised proposed alt text for insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
456b56778e Update insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
5b3cb20f92 Update insights.rst 2023-11-07 08:14:34 -07:00
TVo
d7086a3c88 Revised the proposed Alt text for main_menu.rst 2023-11-06 13:09:26 -07:00
Ratan Gulati
21e7ab078c Fix: #14511 Add alt-text codeblock to Images for Userguide: main_menu.rst
Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
2023-11-06 13:09:26 -07:00
Elijah DeLee
946ca0b3b8 fix wsrelay connection in ipv6 environments 2023-11-06 13:58:41 -05:00
TVo
b831dbd608 Removed mailing list from triage_replies.md 2023-11-03 14:30:30 -06:00
Thanhnguyet Vo
943e455f9d Re-do for PR #14595 to fix CI issues. 2023-11-03 08:35:22 -06:00
Seth Foster
53bc88abe2 Fix python_paths error in CI (#14622)
Remove outdated lines from pytest.ini

Was causing KeyError 'python_paths' in CI

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-11-03 09:36:21 -04:00
Rick Elrod
3b4d95633e [rsyslog] remove main_queue, add more action queue params (#14532)
* [rsyslog] remove main_queue, add more action queue params

Signed-off-by: Rick Elrod <rick@elrod.me>

* Remove now-unused LOG_AGGREGATOR_MAX_DISK_USAGE_GB, add LOG_AGGREGATOR_ACTION_QUEUE_SIZE

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-10-31 14:49:17 -04:00
Alan Rominger
93c329d9d5 Fix cancel bug - WorkflowManager cancel in transaction (#14608)
This fixes a bug where jobs within a workflow job were not canceled
  when the workflow job was canceled by the user

The fix is to submit the cancel request as part of the transaction
  that the WorkflowManager commits its work in. This requires sending
  the message without expecting a reply, so the control-with-reply
  cancel becomes a plain control function.
2023-10-30 15:30:18 -04:00
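
A minimal sketch of the described fix, with assumed helper names (the actual WorkflowManager internals are not shown in this log): the cancel is issued inside the manager's own transaction and is fire-and-forget, since a control-with-reply would block on state the transaction has not committed yet.

    from django.db import transaction

    def cancel_jobs_of_workflow(workflow_job, control):
        # Runs as part of the same transaction the WorkflowManager commits in.
        with transaction.atomic():
            for job in workflow_job.pending_node_jobs():  # hypothetical accessor
                job.status = 'canceled'
                job.save(update_fields=['status'])
                # Send the cancel without expecting a reply.
                control.cancel(job.id)  # assumed fire-and-forget control function
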
Hao Liu
f4c53aaf22 Update receptor-collection version to 2.0.2 (#14613) 2023-10-30 17:24:02 +00:00
Alan Rominger
333ef76cbd Send notifications for dependency failures (#14603)
* Send notifications for dependency failures

* Delete tests for deleted method

* Remove another test for removed method
2023-10-30 10:42:37 -04:00
Alan Rominger
fc0b58fd04 Fix bug that prevented dispatcher exit with downed DB (#14469)
* Separate handling of original SIGTERM and SIGINT
2023-10-26 14:34:25 -04:00
Andrii Zakurenyi
bef0a8b23a Fix DevOps Secrets Vault credential plugin to work with python-dsv-sdk>=1.0.4
Signed-off-by: Andrii Zakurenyi <andrii.zakurenyi@c.delinea.com>
2023-10-25 15:48:24 -04:00
lmo5
a5f33456b6 Fix missing service account secret in docker-compose-minikube role (#14596)
* Fix missing service account secret

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-10-25 19:27:21 +00:00
Surav Shrestha
21fb395912 fix typos in docs/development/minikube.md 2023-10-25 15:23:23 -04:00
jessicamack
44255f378d Fix extra_vars bug in ansible.controller.ad_hoc_command (#14585)
* convert to valid type for serializer

* check that extra_vars are in request

* remove doubled line

* add integration test for change

* move change to the ad_hoc_command module

Signed-off-by: jessicamack <jmack@redhat.com>

* fix imports

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-10-25 10:38:45 -04:00
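
A minimal sketch of the fix as described (helper and field names are assumptions): include extra_vars only when the request supplies them, serialized to a type the serializer accepts.

    import json

    def build_ad_hoc_payload(params):
        payload = {
            'module_name': params['module_name'],
            'module_args': params.get('module_args', ''),
        }
        extra_vars = params.get('extra_vars')
        if extra_vars is not None:  # only send extra_vars present in the request
            payload['extra_vars'] = json.dumps(extra_vars)  # a type the serializer accepts
        return payload
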
Parikshit Adhikari
71a6d48612 Fix: typos inside /docs directory (#14594)
fix typos inside docs
2023-10-24 19:01:21 +00:00
nmiah1
b7e5f5d1e1 Typo in export.py example (#14598) 2023-10-24 18:33:38 +00:00
Alan Rominger
b6b167627c Fix Boolean values defaulting to False in collection (#14493)
* Fix Boolean values defaulting to False in collection

* Remove null values in other cases, fix null handling for WFJT nodes

* Only remove null values if it is a boolean field

* Reset changes to WFJT node field processing

* Use test content from sean-m-sullivan to fix lookups in assert
2023-10-24 14:29:16 -04:00
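
A minimal sketch of the null handling described above (function and field names are invented): null values are stripped only for boolean fields, so an omitted boolean no longer gets posted as False.

    def prune_null_booleans(params, boolean_fields):
        # Drop only null-valued booleans; other nulls keep their meaning.
        return {k: v for k, v in params.items() if v is not None or k not in boolean_fields}

    payload = prune_null_booleans(
        {'name': 'Demo', 'allow_simultaneous': None, 'ask_limit_on_launch': True},
        boolean_fields={'allow_simultaneous', 'ask_limit_on_launch'},
    )
    # -> {'name': 'Demo', 'ask_limit_on_launch': True}
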
Hao Liu
20f5b255c9 Fix "upgrade in progress" status page not showing up while migration is in progress (#14579)
The web container does not need to wait for migrations.

If the database is running and responsive but migrations have not finished, the container now starts serving, and users get the upgrading page.

Previously, wait-for-migration prevented nginx and uwsgi from starting up to serve the "upgrade in progress" status page.
2023-10-24 14:27:09 -04:00
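
A minimal sketch of the underlying condition (assuming a configured Django environment; this log does not show AWX's actual implementation): the page should be served while unapplied migrations remain.

    from django.db import connection
    from django.db.migrations.executor import MigrationExecutor

    def migrations_pending() -> bool:
        # True while unapplied migrations remain, i.e. while the
        # "upgrade in progress" page should be shown.
        executor = MigrationExecutor(connection)
        return bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
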
Oleksii Baranov
3bcf46555d Fix swagger generation on rhel (#14317) (#14589) 2023-10-24 14:19:02 -04:00
Don Naro
94703ccf84 Pip compile docsite requirements (#14449)
Co-authored-by: Sviatoslav Sydorenko <578543+webknjaz@users.noreply.github.com>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
2023-10-24 12:53:41 -04:00
BHANUTEJA
6cdea1909d Alt text for Execution Env section of Userguide (#14576)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 18:48:07 +00:00
Mike Mwanje
f133580172 Adds alt text to instance_groups.rst images (#14571)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 16:11:17 +00:00
Kishan Mehta
4b90a7fcd1 Add alt text for image directives in credential_types.rst (#14551)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 09:36:05 -06:00
Marliana Lara
95bfedad5b Format constructed inventory hint example as valid YAML (#14568) 2023-10-20 10:24:47 -04:00
Kishan Mehta
1081f2d8e9 Add alt text for image directives in credentials.rst (#14550)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 14:13:49 +00:00
Kishan Mehta
c4ab54d7f3 Add alt text for image directives in job_capacity.rst & job_slices.rst (#14549)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 13:34:04 +00:00
Hao Liu
bcefcd8cf8 Remove specific version for receptorctl (#14593) 2023-10-19 22:49:42 -04:00
Kishan Mehta
0bd057529d Add alt text for image directives in job_templates.rst (#14548)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
2023-10-19 20:24:32 +00:00
Sayyed Faisal Ali
a82c03e2e2 added alt-text in projects.rst (#14544)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-19 12:39:58 -06:00
TVo
447ac77535 Corrected missing text replacement directives (#14592) 2023-10-19 16:36:41 +00:00
Andrew Klychkov
72d0928f1b [DOCS] EE guide: fix a ref to Get started with EE (#14587) 2023-10-19 03:30:21 -04:00
Deepshri M
6d727d4bc4 Adding alt text for image (#14541)
Signed-off-by: Deepshri M <deepshrim613@gmail.com>
2023-10-17 14:53:18 -06:00
Rohit Raj
6040e44d9d docs: Update teams.rst (#14539)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 20:16:09 +00:00
Rohit Raj
b99ce5cd62 docs: Update users.rst (#14538)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 14:58:40 +00:00
Rohit Raj
ba8a90c55f docs: Update security.rst (#14540)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-16 17:56:46 -06:00
Sayyed Faisal Ali
7ee2172517 added alt-text in project-sign.rst (#14545)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-16 09:25:34 -06:00
Alan Rominger
07f49f5925 AAP-16926 Delete unpartitioned tables in a separate transaction (#14572) 2023-10-13 15:50:51 -04:00
Hao Liu
376993077a Removing mailing list from get involved (#14580) 2023-10-13 17:49:34 +00:00
Hao Liu
48f586bac4 Make wait-for-migrations wait forever (#14566) 2023-10-13 13:48:12 +00:00
Surendran
16dab57c63 Added alt-text for images in notifications.rst (#14555)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:22:37 -06:00
Surendran
75a71492fd Added alt-text for images in organizations.rst (#14556)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:15:45 -06:00
Hao Liu
e9bd99c1ff Fix CVE-2023-43665 (#14561) 2023-10-12 14:00:32 -04:00
Daniel Gonçalves
56878b4910 Add customizable batch_size for cleanup_activitystream and cleanup_jobs (#14412)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2023-10-11 20:09:16 +00:00
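
A sketch of invoking the commands with the new option through Django's call_command (the exact option name is an assumption based on the commit title):

    from django.core.management import call_command

    call_command('cleanup_jobs', days=30, batch_size=100000)
    call_command('cleanup_activitystream', days=90, batch_size=100000)
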
Alan Rominger
19ca480078 Upgrade client library for dsv since tss already landed (#14362) 2023-10-11 16:01:22 -04:00
Steffen Scheib
64eb963025 Cleaning SOS report passwords (#14557) 2023-10-11 19:54:28 +00:00
Will Thames
dc34d0887a Execution environment image should not be required (#14488) 2023-10-11 15:39:51 -04:00
Andrew Klychkov
160634fb6f ee_reference.rst: refer to Builder's definition docs instead of duplicating its content (#14562) 2023-10-11 13:54:12 +01:00
Alan Rominger
9745058546 Only block commits if black fails for certain paths (#14531) 2023-10-10 10:12:57 -04:00
Aviral Katiyar
c97a48b165 Fix: #14510 Add alt-text codeblock to Images for Userguide: jobs.rst (#14530)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-09 16:40:56 -06:00
Rohit Raj
259bca0113 docs: Update workflows.rst (#14537) 2023-10-06 15:30:47 -06:00
Aviral Katiyar
92c2b4e983 Fix: #14500 Added alt text to images for Userguide: credential_plugins.rst (#14527)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-06 14:53:23 -06:00
Seth Foster
127a0cff23 Set ip_address to empty string
ip_address cannot be null, so set to
empty instead of None

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-10-05 22:53:16 -04:00
Aviral Katiyar
a0ef25006a Fix: #14499 Added alt text to images for Userguide: applications_auth.rst (#14526)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-05 14:22:10 -06:00
Chris Meyers
50c98a52f7 Update setting_up.rst (#14542) 2023-10-05 15:06:40 -04:00
Michelle McCausland
4008d72af6 issue-14522: Add alt-text codeblock to Images for Userguide: webhooks.rst (#14529)
Signed-off-by: Michelle McCausland <mmccausl@redhat.com>
2023-10-05 17:40:07 +01:00
Alan Rominger
e72e9f94b9 Fix collection test flake due to successful canceled command (#14519) 2023-10-04 09:09:29 -04:00
Sasa Jovicic
9d60b0b9c6 Fix #12815 Direct links to AWX do not reroute the user after authentication (#14399)
Signed-off-by: Sasa993 <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <sjovicic@anexia-it.com>
2023-10-03 16:55:22 -04:00
Aviral Katiyar
05b58c4df6 Fix: #14490 Fixed the required spelling errors (#14507)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
2023-10-03 14:15:13 -06:00
TVo
b1b960fd17 Updated Forum terminology and removed mailing list (#14491) 2023-10-03 19:24:19 +01:00
Jakub Laskowski
3c8f71e559 Fixed wrong arguments order in DomainPasswordGrantAuthorizer (#14441)
Signed-off-by: Jakub Laskowski <jakub.laskowski9@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-10-03 11:54:57 -04:00
Alan Rominger
f5922f76fa DROP unnecessary unpartioned event tables (#14055) 2023-10-03 11:49:23 -04:00
kurokobo
05582702c6 fix: make type conversions work correctly (related #14487) (#14489)
Signed-off-by: kurokobo <2920259+kurokobo@users.noreply.github.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-30 04:02:10 +00:00
Alan Rominger
1d340c5b4e Add a section for postgres max_connections value (#14482) 2023-09-28 10:28:52 -04:00
TVo
15925f1416 Simplified release notes for AWX (#14485) 2023-09-27 14:50:57 -06:00
Salma Kochay
6e06a20cca add subscription usage page 2023-09-27 10:57:04 -04:00
Hao Liu
bb3acbb8ad Debug log for scheduler commit duration (#14035)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-27 09:46:55 -04:00
Hao Liu
a88e47930c Update django version to address CVE-2023-41164 (#14460) 2023-09-27 09:36:02 -04:00
Hao Liu
a0d4515ba4 Explicitly set collection version during promotion (#14484) 2023-09-26 14:19:22 -04:00
Alan Rominger
770cc10a78 Get rid of names_digest hack no longer needed (#14459) 2023-09-26 12:09:30 -04:00
Alan Rominger
159dd62d84 Add null value handling in create_partition (#14480) 2023-09-25 18:28:44 -04:00
TVo
640e5db9c6 Removed references of IRC and fixed formatting in "Work Items" section. (#14478)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-09-25 11:24:39 -06:00
Alan Rominger
9ed527eb26 Consolidate image and server setup in several checks (#14477) 2023-09-25 09:02:20 -04:00
Alan Rominger
29ad6e1eaa Fix bug, None was used instead of empty for DB outage (#14463) 2023-09-21 14:30:25 -04:00
Alan Rominger
3e607f8964 AAP-15927 Use ATTACH PARTITION to avoid exclusive table lock for events (#14433) 2023-09-21 14:27:04 -04:00
TVo
c9d1a4d063 Added release notes for version 23.1.0 (#14471) 2023-09-21 11:02:38 -06:00
Hao Liu
a290b082db Use ldap container hostname for LDAP config (#14473) 2023-09-21 11:31:51 -04:00
Hao Liu
6d3c22e801 Update how to get involved with matrix and forum (#14472) 2023-09-20 18:33:04 +00:00
Michael Abashian
1f91773a3c Simplify docs string base generation 2023-09-20 13:16:54 -04:00
Hao Liu
7b846e1e49 Add makefile target to load dev image into Kind (#13775)
Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-09-19 13:34:10 -04:00
Don Naro
f7a2de8a07 Contributor guide and adjusted titles (#14447)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
2023-09-18 10:40:47 -06:00
Andrew Klychkov
194c214f03 userguide/execution_environments.rst: replace building paragraphs with ref to Get started EE guide (#14429) 2023-09-15 10:20:46 -04:00
Christian Adams
77e30dd4b2 Add link to script for publishing operator on OperatorHub (#14442) 2023-09-15 09:32:19 -04:00
jessicamack
9d7421b9bc Update README (#14452)
Signed-off-by: jessicamack <jmack@redhat.com>
2023-09-14 20:20:06 +00:00
Alan Rominger
3b8e662916 Remove conditional paths due to conflict with required checks (#14450) 2023-09-14 16:19:42 -04:00
Alan Rominger
aa3228eec9 Fix continue-on-error GH actions bug, always run archive step instead 2023-09-14 19:45:07 +00:00
Alan Rominger
7b0598c7d8 Continue workflow steps to save logs from failed tests (#14448) 2023-09-14 18:23:22 +00:00
Ivan Aragonés Muniesa
49832d6379 don't pass the 'organization' or other fields to the search of the instance group or execution environments (#14223) 2023-09-14 09:31:05 -04:00
Alan Rominger
8feeb5f1fa Allow saving github creds in user folder (#14435) 2023-09-12 15:47:12 -04:00
Michael Abashian
56230ba5d1 Show a toast when the job is already in the process of launching 2023-09-06 16:56:34 -04:00
Michael Abashian
480aaeace5 Prevent the user from launching multiple jobs by rapidly clicking on buttons 2023-09-06 16:56:34 -04:00
Joe Garcia
3eaea396be Add base64 check on JWT from authn 2023-09-06 15:58:36 -04:00
Keith Grant
deef8669c9 rebuild package-lock (#14423) 2023-09-06 12:36:50 -07:00
Don Naro
63223a2cc7 allow list for example secrets in docs 2023-09-06 15:15:58 -04:00
Keith Grant
a28bc2eb3f bump babel dependencies (#14370) 2023-09-06 09:14:04 -07:00
Alan Rominger
09168e5832 Edit docker-compose instructions for correctness (#14418) 2023-09-06 11:55:25 -04:00
Alan Rominger
6df1de4262 Avoid activity stream entries for instance going offline (#14385) 2023-09-06 11:18:52 -04:00
Alan Rominger
e072bb7668 Declare license for unique module that uses BSD-2
Co-authored-by: Maxwell G <maxwell@gtmx.me>
2023-09-06 10:43:25 -04:00
Alan Rominger
ec579fd637 Fix collection metadata license to match intent 2023-09-06 10:43:25 -04:00
Marliana Lara
b95d521162 Update missing inventory error message (#14416) 2023-09-06 10:24:25 -04:00
Rick Elrod
d03a6a809d Enable collection integration tests on GHA
There are a number of changes here:

- Abstract out a GHA composite action for running the dev environment
- Update the e2e tests to use that new abstracted action
- Introduce a new (matrixed) job for running collection integration
  tests. This splits the jobs up based on filename.
- Collect coverage info and generate an html report that people can
  download easily to see collection coverage info.
- Do some hacks to delete the intermediary coverage file artifacts
  which aren't needed after the job finishes.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-09-05 16:10:48 -05:00
TVo
4466976e10 Added relnotes for 23.0.0 (#14409) 2023-09-05 15:07:53 -06:00
Don Naro
5733f78fd8 Add readthedocs configuration (#14413) 2023-09-05 15:07:32 -06:00
Alan Rominger
20fc7c702a Add check for building docsite (#14406) 2023-09-05 16:07:48 -04:00
Lila Yasin
6ce5799689 Incorrect capacity for remote execution nodes 14051 (#14315) 2023-09-05 11:20:36 -04:00
Don Naro
dc81aa46d0 Create AWX docsite with RST content (#14328)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-09-01 09:24:03 -06:00
Alan Rominger
ab3ceaecad Remove extra scheduler state save that does nothing (#14396) 2023-08-31 10:35:07 -04:00
John Westcott IV
1bb4240a6b Allow saml_admin_attr to work in conjunction with SAML Org Map (#14285)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-08-31 09:41:30 -03:00
Rick Elrod
5e105c2cbd [CI] Update GHA actions to sate some warnings emitted by test infrastructure (#14398)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-30 23:58:57 -05:00
Alan Rominger
cdb4f0b7fd Consume job_explanation from runner, fix error reporting error (#13482) 2023-08-30 16:45:50 -04:00
Ivanilson Junior
cf1e448577 Fix undefined property error when task is of type yum/debug and was s… (#14372)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
2023-08-30 15:37:28 -04:00
Andrew Klychkov
224e9e0324 [DOCS] tools/docker-compose/README.md: add way to solve postgresql issue (#14225) 2023-08-30 10:45:50 -04:00
Martin Slemr
660dab439b HostMetrics: Hard auto-cleanup (#14255)
Fix host metric settings

Cleanup_host_metric command with default params

Fix order of host metric cleanups
2023-08-30 09:18:59 -04:00
sean-m-sullivan
5ce2055431 update collection workflow example and tests 2023-08-30 09:15:54 -04:00
Alan Rominger
951bd1cc87 Re-run the updater script after upstream removal of future (#14265) 2023-08-29 15:36:42 -04:00
kurokobo
c9190ebd8f docs: update execution_nodes.md to follow changes for receptor_collection (#14247) 2023-08-29 13:06:54 -04:00
Seth Foster
eb33973fa3 Use receptor collection 2.0.0 2023-08-29 13:06:54 -04:00
Seth Foster
40be2e7b6e Use receptor-collection devel 2023-08-29 13:06:54 -04:00
kialam
485813211a Add toast and delete modal messaging when removing/adding peers. (#14373) 2023-08-29 13:06:54 -04:00
Seth Foster
0a87bf1b5e Apply JS formatting from npm prettier 2023-08-29 13:06:54 -04:00
Seth Foster
fa0e0b2576 Removed unused variable in test_instance_peers 2023-08-29 13:06:54 -04:00
Seth Foster
1d3b2f57ce No longer assert on receptor_host_identifier
receptor_host_identifier can be left out
of group_vars and will default to the
'ansible_host' variable
2023-08-29 13:06:54 -04:00
Seth Foster
0577e1ee79 Setup receptor after podman
Might help to install receptor last,
that way when nodes are first connected to the mesh
they already have podman installed and can potentially
run jobs. Otherwise it might be possible for controller
to launch jobs against nodes that aren't fully set up.
2023-08-29 13:06:54 -04:00
Seth Foster
470ecc4a4f Use itertools product instead of nested loop
Make test case cleaner by using itertools product
instead of the triple nested loop

Replace triple single quotes with triple
double quotes
2023-08-29 13:06:54 -04:00
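
For illustration, the refactor pattern this commit describes (the parameter values here are invented):

    from itertools import product

    node_types = ('control', 'hybrid', 'execution', 'hop')
    listener_ports = (None, 27199)
    peers_flags = (True, False)

    # One flat loop over the cartesian product replaces a triple nested loop.
    for node_type, port, flag in product(node_types, listener_ports, peers_flags):
        print(node_type, port, flag)  # stand-in for the parametrized test body
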
Seth Foster
965127637b Make ip_address read only
Setting a different value for ip_address
and hostname does not work with the current
way we create receptor certs.
2023-08-29 13:06:54 -04:00
Seth Foster
eba130cf41 Change username to <username> in inventory 2023-08-29 13:06:54 -04:00
Seth Foster
441336301e Ensure ip_address is empty string 2023-08-29 13:06:54 -04:00
Seth Foster
2a0be898e6 Fix detecting if peers changed in serializer
Add a check_peers_changed() utility method
to determine whether the peers in attrs match
the current instance peers.

Other changes:
- Set ip_address default to "", and do not
allow null.
2023-08-29 13:06:54 -04:00
Seth Foster
c47acc5988 Change PeersSerializer to SlugRelatedField
Get rid of PeersSerializer and just use SlugRelatedField,
which should be a more straightforward approach.

Other changes:
- cleanup code related to the already-removed api/v2/peers
endpoint
- add "hybrid" node type into more instance_peers test cases
2023-08-29 13:06:54 -04:00
Seth Foster
70ba32b5b2 Do not install ansible-runner or podman on hop nodes 2023-08-29 13:06:54 -04:00
Seth Foster
81e06dace2 Add listener_port to provision_instance
API changes
- cannot change peers or enable
peers_from_control_nodes on VM deployments
- allow setting ip_address
- use ip_address over hostname in the generated
group_vars/all.yml
- Drop api/v2/peers endpoint

DB changes
- add ip_address unique constraint, but ignore "" entries

Other changes
- provision_instance should take listener_port option

Tests
- test that new controls doesn't disturb other peers
relationships
- test ip_address over hostname
2023-08-29 13:06:54 -04:00
Seth Foster
3e8202590c Remove Disconnected link state
Dynamically flipping from Established
to Disconnected is not the intended
usage of InstanceLink State.

- Link state starts in Adding and becomes
Established once any control node first sees the link
is in the status KnownConnectionCosts
2023-08-29 13:06:54 -04:00
Seth Foster
ad96a72ebe Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
eb0058268b Revert "Remove duplicate install bundle on InstanceDetail"
This reverts commit cf5ccf53f4322b49b1009ca13e4f025c30529b30.
2023-08-29 13:06:54 -04:00
Seth Foster
2bf6512a8e Do not change link state if Removing
inspect_established_receptor_connections should
not change link state if the current state is Removing.

Other changes:
- rename inspect_execution_nodes to inspect_execution_and_hop_nodes
- Default link state is Adding
- Set min listener_port value to 1024
- inspect_established_receptor_connections now
runs as part of cluster_node_heartbeat task
2023-08-29 13:06:54 -04:00
Seth Foster
855f61a04e Bump migration number 186 to 187 2023-08-29 13:06:54 -04:00
Seth Foster
532e71ff45 Remove extra newlines in install bundle all.yml 2023-08-29 13:06:54 -04:00
Seth Foster
b9ea114cac Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
e41ad82687 optional listener port UI (#14300) 2023-08-29 13:06:54 -04:00
Seth Foster
3bd25c682e Allow setting ip_address for execution nodes 2023-08-29 13:06:54 -04:00
Seth Foster
7169c75b1a receptor_python_packages renamed 2023-08-29 13:06:54 -04:00
kialam
fdb359a67b feature hop node topology updates (#14142) 2023-08-29 13:06:54 -04:00
Seth Foster
ed2a59c1a3 receptor python packages 2023-08-29 13:06:54 -04:00
Jake Jackson
906f8a1dce [hop node] documentation update in execution_nodes for hop nodes (#14215)
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Lila Yasin
6833976c54 [hop node] fix failing ci checks on feature_hop-node branch (#14226) 2023-08-29 13:06:54 -04:00
Seth Foster
d15405eafe Add peers_from for reverse peers M2M
use devel receptor-collection
2023-08-29 13:06:54 -04:00
Lila
6c3bbfc3be Looking to see if revising the path in the static dir resolves failing ci check. 2023-08-29 13:06:54 -04:00
Lila Yasin
2e3e6cbde5 hop node migration file updates (#14196)
- rename migration function set_peers_from_control_nodes_true to automatically_peer_from_control_plane
- import settings and only run the function if settings.IS_K8S is true
- set listener_port for control nodes to None 2023-08-29 13:06:54 -04:00
Lila Yasin
54894c14dc Hop node AWX Collection Updates (#14153)
Add hop node support to awx collections
- add peers and peers_from_control_nodes fields
- show new node_type "hop"
- add tests for adding hop nodes via collections

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Seth Foster
2a51f23b7d Add functional API tests
add tests for calling write_receptor_config

add write_receptor_config test

Do not set default listener_port on control node
2023-08-29 13:06:54 -04:00
Jake Jackson
80df31fc4e [hop node] update peer validation logic (#14132) 2023-08-29 13:06:54 -04:00
Lila Yasin
8f8462b38e Marked hop node validation errors for translation (#14116) 2023-08-29 13:06:54 -04:00
Seth Foster
0c41abea0e Make peers field optional 2023-08-29 13:06:54 -04:00
Lila Yasin
3eda1ede8d Migration file to set peers_from_control_ nodes to true for existing execution nodes (#14061) 2023-08-29 13:06:54 -04:00
Jake Jackson
40fca6db57 [hop_node] Validate listener_port is defined for peers (#14056)
add peer listener_port validation, and update the install bundle depending on whether listener_port is defined.
2023-08-29 13:06:54 -04:00
Seth Foster
148111a072 Remove task that enables COPR receptor repo (#14088)
do not pip install receptorctl
2023-08-29 13:06:54 -04:00
Lila Yasin
9cad45feac Prevent manual peering of control plane nodes to hop node (#13966) 2023-08-29 13:06:54 -04:00
Seth Foster
6834568c5d Add receptor host identifier to group_vars
Add disconnected link state topology
2023-08-29 13:06:54 -04:00
Lorenzo Tanganelli
f7fdb7fe8d Add peers readonly api and instancelink constraint (#13916)
Add Disconnected link state

introspect_receptor_connections is a periodic
task that examines active receptor connections
and cross-checks it with the InstanceLink info.

Any links that should be active but are not
will be put into a Disconnected state. If
active, it will be in an Established state.

UI - Add hop creation and peers mgmt (#13922)

* add UI for mgmt peers, instance edit and add

* add peer info on detail and bug fix on detail

* remove unused chip and change peer label

* rename lookup, put Instance type disable on edit

---------

Co-authored-by: tanganellilore <lorenzo.tanagnelli@hotmail.it>
2023-08-29 13:06:54 -04:00
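
A minimal sketch of that cross-check (model access is simplified; the state names follow the message above):

    def inspect_receptor_connections(links, active_pairs):
        """links: objects with .source, .target, .link_state;
        active_pairs: set of (source, target) hostname pairs receptor reports as connected."""
        for link in links:
            if (link.source, link.target) in active_pairs:
                link.link_state = 'Established'
            else:
                # Should be active but is not: mark it Disconnected.
                link.link_state = 'Disconnected'
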
Seth Foster
d8abd4912b Add support in hop nodes in API 2023-08-29 13:06:54 -04:00
Alan Rominger
4fbdc412ad Restrict PR body check to just AWX repo 2023-08-29 09:29:30 -04:00
Alan Rominger
db1af57daa Revert "Adding PR check to ensure JIRA links are present"
This reverts commit 3ae6174050.
2023-08-29 09:29:30 -04:00
Hao Liu
ffa59864ee Fix CVE-2023-40267 (#14388)
CVE-2023-40267 GitPython: Insecure non-multi options in clone and clone_from is not blocked https://bugzilla.redhat.com/show_bug.cgi?id=2231474

GitPython before 3.1.32 does not block insecure non-multi options in clone and clone_from. NOTE: this issue exists because of an incomplete fix for CVE-2022-24439.

References:
gitpython-developers/GitPython@ca965ec gitpython-developers/GitPython#1609
2023-08-28 15:35:32 -04:00
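
For illustration, the pattern the upgrade guards against (the paths here are invented): before 3.1.32, a dangerous option passed as a plain keyword argument, rather than via multi_options, slipped past GitPython's unsafe-option check.

    from git import Repo  # GitPython >= 3.1.32

    # This should now raise git.exc.UnsafeOptionError instead of running /tmp/payload:
    Repo.clone_from(
        'https://github.com/ansible/awx.git',
        '/tmp/awx-clone',
        upload_pack='/tmp/payload',  # single (non-multi) option, blocked by the fix
    )
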
bxbrenden
b209bc67b4 Fix typo in description of scm_update_on_launch (#14382) 2023-08-28 16:52:44 +00:00
Chandler Swift
1faea020af Fix default redis url to pass check in redis-py>4.4 (#14344)
Signed-off-by: Chandler Swift <chandler+pearson@chandlerswift.com>
Co-authored-by: Rebeccah Hunter <rhunter@redhat.com>
2023-08-25 09:48:36 -04:00
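
A minimal illustration of the check (the socket path is invented): redis-py >= 4.4 requires an explicit scheme in from_url.

    import redis

    redis.Redis.from_url('unix:///var/run/redis/redis.sock')  # accepted: scheme present
    # redis.Redis.from_url('/var/run/redis/redis.sock')       # ValueError: must use redis://, rediss://, or unix://
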
Pablo Hess
b55a099620 Clarify that the license module requires fetching subs prior (#14351)
Co-authored-by: Pablo N. Hess <phess@redhat.com>
2023-08-23 15:20:47 -04:00
David Danielsson
f6dd3cb988 Enforce mutually exclusive options in credential module of the collection (#14363) 2023-08-23 15:16:06 -04:00
Alan Rominger
c448b87c85 AAP-10891 Apply AWX_TASK_ENV when performing credential plugin lookups (#14271) 2023-08-23 13:26:12 -04:00
Rick Elrod
4dd823121a Update cryptography for CVE-2023-38325 (#14358)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-23 10:54:20 -05:00
Michael Abashian
ec4f10d868 Add location for locales in nginx config 2023-08-22 16:33:00 -04:00
956 changed files with 22393 additions and 2080 deletions


@@ -0,0 +1,28 @@
name: Setup images for AWX
description: Builds new awx_devel image
inputs:
  github-token:
    description: GitHub Token for registry access
    required: true
runs:
  using: composite
  steps:
    - name: Get python version from Makefile
      shell: bash
      run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
    - name: Log in to registry
      shell: bash
      run: |
        echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
    - name: Pre-pull latest devel image to warm cache
      shell: bash
      run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
    - name: Build image for current source checkout
      shell: bash
      run: |
        DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
        COMPOSE_TAG=${{ github.base_ref }} \
        make docker-compose-build


@@ -0,0 +1,73 @@
name: Run AWX docker-compose
description: Runs AWX with `make docker-compose`
inputs:
  github-token:
    description: GitHub Token to pass to awx_devel_image
    required: true
  build-ui:
    description: Should the UI be built?
    required: false
    default: false
    type: boolean
outputs:
  ip:
    description: The IP of the tools_awx_1 container
    value: ${{ steps.data.outputs.ip }}
  admin-token:
    description: OAuth token for admin user
    value: ${{ steps.data.outputs.admin_token }}
runs:
  using: composite
  steps:
    - name: Build awx_devel image for running checks
      uses: ./.github/actions/awx_devel_image
      with:
        github-token: ${{ inputs.github-token }}
    - name: Upgrade ansible-core
      shell: bash
      run: python3 -m pip install --upgrade ansible-core
    - name: Install system deps
      shell: bash
      run: sudo apt-get install -y gettext
    - name: Start AWX
      shell: bash
      run: |
        DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
        COMPOSE_TAG=${{ github.base_ref }} \
        COMPOSE_UP_OPTS="-d" \
        make docker-compose
    - name: Update default AWX password
      shell: bash
      run: |
        while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
        do
          echo "Waiting for AWX..."
          sleep 5
        done
        echo "AWX is up, updating the password..."
        docker exec -i tools_awx_1 sh <<-EOSH
          awx-manage update_password --username=admin --password=password
        EOSH
    - name: Build UI
      # This must be a string comparison in composite actions:
      # https://github.com/actions/runner/issues/2238
      if: ${{ inputs.build-ui == 'true' }}
      shell: bash
      run: |
        docker exec -i tools_awx_1 sh <<-EOSH
          make ui-devel
        EOSH
    - name: Get instance data
      id: data
      shell: bash
      run: |
        AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
        ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
        echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
        echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT


@@ -0,0 +1,19 @@
name: Upload logs
description: Upload logs from `make docker-compose` devel environment to GitHub as an artifact
inputs:
  log-filename:
    description: "*Unique* name of the log file"
    required: true
runs:
  using: composite
  steps:
    - name: Get AWX logs
      shell: bash
      run: |
        docker logs tools_awx_1 > ${{ inputs.log-filename }}
    - name: Upload AWX logs as artifact
      uses: actions/upload-artifact@v3
      with:
        name: docker-compose-logs
        path: ${{ inputs.log-filename }}


@@ -7,8 +7,8 @@
## PRs/Issues
### Visit our mailing list
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
### Visit the Forum or Matrix
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on either the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com) or the [Ansible Community Forum](https://forum.ansible.com/tag/awx)?
### Denied Submission


@@ -35,29 +35,40 @@ jobs:
- name: ui-test-general
command: make ui-test-general
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run check ${{ matrix.tests.name }}
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
dev-env:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run smoke test
run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
awx-operator:
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-operator
path: awx-operator
@@ -67,7 +78,7 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -102,7 +113,7 @@ jobs:
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
@@ -114,3 +125,137 @@ jobs:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
target-regex:
- name: a-h
regex: ^[a-h]
- name: i-p
regex: ^[i-p]
- name: r-z0-9
regex: ^[r-z0-9]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: collection-integration-${{ matrix.target-regex.name }}.log
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
needs:
- collection-integration
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
with:
path: coverage
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY
echo 'Download the HTML artifacts to view the coverage report.' >> $GITHUB_STEP_SUMMARY
# This is a huge hack, there's no official action for removing artifacts currently.
# Also ACTIONS_RUNTIME_URL and ACTIONS_RUNTIME_TOKEN aren't available in normal run
# steps, so we have to use github-script to get them.
#
# The advantage of doing this, though, is that we save on artifact storage space.
- name: Get secret artifact runtime URL
uses: actions/github-script@v6
id: get-runtime-url
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_URL } = process.env;
return ACTIONS_RUNTIME_URL;
- name: Get secret artifact runtime token
uses: actions/github-script@v6
id: get-runtime-token
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_TOKEN } = process.env;
return ACTIONS_RUNTIME_TOKEN;
- name: Remove intermediary artifacts
env:
ACTIONS_RUNTIME_URL: ${{ steps.get-runtime-url.outputs.result }}
ACTIONS_RUNTIME_TOKEN: ${{ steps.get-runtime-token.outputs.result }}
run: |
echo "::add-mask::${ACTIONS_RUNTIME_TOKEN}"
artifacts=$(
curl -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" \
${ACTIONS_RUNTIME_URL}_apis/pipelines/workflows/${{ github.run_id }}/artifacts?api-version=6.0-preview \
| jq -r '.value | .[] | select(.name | startswith("coverage-")) | .url'
)
for artifact in $artifacts; do
curl -i -X DELETE -H "Accept: application/json;api-version=6.0-preview" -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" "$artifact"
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage


@@ -16,7 +16,7 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
@@ -28,7 +28,7 @@ jobs:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}

.github/workflows/docs.yml (new file, +16)

@@ -0,0 +1,16 @@
---
name: Docsite CI

on:
  pull_request:

jobs:
  docsite-build:
    name: docsite test build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: install tox
        run: pip install tox
      - name: Assure docs can be built
        run: tox -e docs


@@ -19,41 +19,20 @@ jobs:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
- uses: ./.github/actions/run_awx_devel
id: awx
with:
python-version: ${{ env.py_version }}
- name: Install system deps
run: sudo apt-get install -y gettext
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build UI
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make ui-devel
- name: Start AWX
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make docker-compose &> make-docker-compose-output.log &
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
@@ -65,18 +44,6 @@ jobs:
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Update default AWX password
run: |
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
do
echo "Waiting for AWX..."
sleep 5;
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
@@ -86,7 +53,7 @@ jobs:
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
@@ -102,8 +69,7 @@ jobs:
-w /e2e \
awx-pf-tests run --project .
- name: Save AWX logs
uses: actions/upload-artifact@v2
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
name: AWX-logs-${{ matrix.job }}
path: make-docker-compose-output.log
log-filename: e2e-${{ matrix.job }}.log


@@ -28,7 +28,7 @@ jobs:
runs-on: ubuntu-latest
name: Label Issue - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests


@@ -27,7 +27,7 @@ jobs:
runs-on: ubuntu-latest
name: Label PR - Community
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests


@@ -7,6 +7,7 @@ on:
types: [opened, edited, reopened, synchronize]
jobs:
pr-check:
if: github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')
name: Scan PR description for semantic versioning keywords
runs-on: ubuntu-latest
permissions:


@@ -1,35 +0,0 @@
---
name: Check body for reference to jira
on:
pull_request:
branches:
- release_**
jobs:
pr-check:
if: github.repository_owner == 'ansible' && github.repository != 'awx'
name: Scan PR description for JIRA links
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Check for JIRA lines
env:
PR_BODY: ${{ github.event.pull_request.body }}
run: |
echo "$PR_BODY" | grep "JIRA: None" > no_jira
echo "$PR_BODY" | grep "JIRA: https://.*[0-9]+"> jira
exit 0
# We exit 0 and set the shell to prevent the returns from the greps from failing this step
# See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash {0}
- name: Check for exactly one item
run: |
if [ $(cat no_jira jira | wc -l) != 1 ] ; then
echo "The PR body must contain exactly one of [ 'JIRA: None' or 'JIRA: <one or more links>' ]"
echo "We counted $(cat no_jira jira | wc -l)"
exit 255;
else
exit 0;
fi


@@ -17,13 +17,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
@@ -40,8 +40,12 @@ jobs:
if: ${{ github.repository_owner != 'ansible' }}
- name: Build collection and publish to galaxy
env:
COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
COLLECTION_VERSION: ${{ github.event.release.tag_name }}
COLLECTION_TEMPLATE_VERSION: true
run: |
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
make build_collection
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \


@@ -44,7 +44,7 @@ jobs:
exit 0
- name: Checkout awx
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: awx
@@ -52,18 +52,18 @@ jobs:
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- name: Checkout awx-logos
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ansible/awx-logos
path: awx-logos
- name: Checkout awx-operator
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/awx-operator
path: awx-operator


@@ -17,13 +17,13 @@ jobs:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}

.gitignore (+4)

@@ -165,3 +165,7 @@ use_dev_supervisor.txt
awx/ui_next/src
awx/ui_next/build
# Docs build stuff
docs/docsite/build/
_readthedocs/

.gitleaks.toml (new file, +5)

@@ -0,0 +1,5 @@
[allowlist]
description = "Documentation contains example secrets and passwords"
paths = [
"docs/docsite/rst/administration/oauth2_token_auth.rst",
]

.pip-tools.toml (new file, +5)

@@ -0,0 +1,5 @@
[tool.pip-tools]
resolver = "backtracking"
allow-unsafe = true
strip-extras = true
quiet = true

.readthedocs.yaml (new file, +15)

@@ -0,0 +1,15 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: >-
      3.11
  commands:
    - pip install --user tox
    - python3 -m tox -e docs
    - mkdir -p _readthedocs/html/
    - mv docs/docsite/build/html/* _readthedocs/html/


@@ -10,6 +10,7 @@ ignore: |
tools/docker-compose/_sources
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
extends: default


@@ -6,6 +6,7 @@ DOCKER_COMPOSE ?= docker-compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
KIND_BIN ?= $(shell which kind)
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
@@ -78,7 +79,7 @@ I18N_FLAG_FILE = .i18n_built
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit github_ci_setup github_ci_runner
.git/hooks/pre-commit
clean-tmp:
rm -rf tmp/
@@ -323,21 +324,10 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
## Login to Github container image registry, pull image, then build image.
github_ci_setup:
# GITHUB_ACTOR is automatic github actions env var
# CI_GITHUB_TOKEN is defined in .github files
echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
$(MAKE) docker-compose-build
## Runs AWX_DOCKER_CMD inside a new docker container.
docker-runner:
docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
github_ci_runner: github_ci_setup docker-runner
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
@@ -383,7 +373,7 @@ test_collection_sanity:
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
test_unit:
@if [ "$(VENV_BASE)" ]; then \
@@ -664,6 +654,9 @@ awx-kube-dev-build: Dockerfile.kube-dev
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
kind-dev-load: awx-kube-dev-build
$(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
# Translation TASKS
# --------------------------------------


@@ -1,5 +1,5 @@
[![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![IRC Chat - #ansible-awx](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://libera.chat)
[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -30,12 +30,12 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
- Join the `#ansible-awx` channel on irc.libera.chat
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
- Join the [Ansible Community Forum](https://forum.ansible.com)


@@ -52,39 +52,14 @@ try:
except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django # noqa: F401
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
pass
else:
from django.db.backends.base import schema
from django.db.models import indexes
from django.db.backends.utils import names_digest
from django.db import connection
if HAS_DJANGO is True:
# See upgrade blocker note in requirements/README.md
try:
names_digest('foo', 'bar', 'baz', length=8)
except ValueError:
def names_digest(*args, length):
"""
Generate a 32-bit digest of a set of arguments that can be used to shorten
identifying names. Support for use in FIPS environments.
"""
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(arg.encode())
return h.hexdigest()[:length]
schema.names_digest = names_digest
indexes.names_digest = names_digest
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.


@@ -3233,7 +3233,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
if get_field_from_model_or_attrs('host_config_key') and not inventory:
raise serializers.ValidationError({'host_config_key': _("Cannot enable provisioning callback without an inventory set.")})
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
prompting_error_message = _("You must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
@@ -5356,10 +5356,16 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
class InstanceLinkSerializer(BaseSerializer):
class Meta:
model = InstanceLink
fields = ('source', 'target', 'link_state')
fields = ('id', 'url', 'related', 'source', 'target', 'link_state')
source = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
target = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
source = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
target = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
def get_related(self, obj):
res = super(InstanceLinkSerializer, self).get_related(obj)
res['source_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.source.id})
res['target_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.target.id})
return res
class InstanceNodeSerializer(BaseSerializer):
@@ -5376,6 +5382,7 @@ class InstanceSerializer(BaseSerializer):
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
health_check_pending = serializers.SerializerMethodField()
peers = serializers.SlugRelatedField(many=True, required=False, slug_field="hostname", queryset=Instance.objects.all())
class Meta:
model = Instance
@@ -5412,6 +5419,8 @@ class InstanceSerializer(BaseSerializer):
'node_state',
'ip_address',
'listener_port',
'peers',
'peers_from_control_nodes',
)
extra_kwargs = {
'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
@@ -5464,22 +5473,57 @@ class InstanceSerializer(BaseSerializer):
def get_health_check_pending(self, obj):
return obj.health_check_pending
def validate(self, data):
if self.instance:
if self.instance.node_type == Instance.Types.HOP:
raise serializers.ValidationError("Hop node instances may not be changed.")
else:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only create instances on Kubernetes or OpenShift.")
return data
def validate(self, attrs):
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
def check_peers_changed():
'''
return True if
- 'peers' in attrs
- instance peers do not match peers in attrs
'''
return self.instance and 'peers' in attrs and set(self.instance.peers.all()) != set(attrs['peers'])
if not self.instance and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only create instances on Kubernetes or OpenShift."))
node_type = get_field_from_model_or_attrs("node_type")
peers_from_control_nodes = get_field_from_model_or_attrs("peers_from_control_nodes")
listener_port = get_field_from_model_or_attrs("listener_port")
peers = attrs.get('peers', [])
if peers_from_control_nodes and node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("peers_from_control_nodes can only be enabled for execution or hop nodes."))
if node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
if check_peers_changed():
raise serializers.ValidationError(
_("Setting peers manually for control nodes is not allowed. Enable peers_from_control_nodes on the hop and execution nodes instead.")
)
if not listener_port and peers_from_control_nodes:
raise serializers.ValidationError(_("Field listener_port must be a valid integer when peers_from_control_nodes is enabled."))
if not listener_port and self.instance and self.instance.peers_from.exists():
raise serializers.ValidationError(_("Field listener_port must be a valid integer when other nodes peer to it."))
for peer in peers:
if peer.listener_port is None:
raise serializers.ValidationError(_("Field listener_port must be set on peer ") + peer.hostname + ".")
if not settings.IS_K8S:
if check_peers_changed():
raise serializers.ValidationError(_("Cannot change peers."))
return super().validate(attrs)
def validate_node_type(self, value):
if not self.instance:
if value not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only create execution nodes.")
else:
if self.instance.node_type != value:
raise serializers.ValidationError("Cannot change node type.")
if not self.instance and value not in [Instance.Types.HOP, Instance.Types.EXECUTION]:
raise serializers.ValidationError(_("Can only create execution or hop nodes."))
if self.instance and self.instance.node_type != value:
raise serializers.ValidationError(_("Cannot change node type."))
return value
@@ -5487,30 +5531,41 @@ class InstanceSerializer(BaseSerializer):
if self.instance:
if value != self.instance.node_state:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only change the state on Kubernetes or OpenShift.")
raise serializers.ValidationError(_("Can only change the state on Kubernetes or OpenShift."))
if value != Instance.States.DEPROVISIONING:
raise serializers.ValidationError("Can only change instances to the 'deprovisioning' state.")
if self.instance.node_type not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only deprovision execution nodes.")
raise serializers.ValidationError(_("Can only change instances to the 'deprovisioning' state."))
if self.instance.node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
raise serializers.ValidationError(_("Can only deprovision execution or hop nodes."))
else:
if value and value != Instance.States.INSTALLED:
raise serializers.ValidationError("Can only create instances in the 'installed' state.")
raise serializers.ValidationError(_("Can only create instances in the 'installed' state."))
return value
def validate_hostname(self, value):
"""
- Hostname cannot be "localhost" - but can be something like localhost.domain
- Cannot change the hostname of an already-instantiated & initialized Instance object
Cannot change the hostname
"""
if self.instance and self.instance.hostname != value:
raise serializers.ValidationError("Cannot change hostname.")
raise serializers.ValidationError(_("Cannot change hostname."))
return value
def validate_listener_port(self, value):
if self.instance and self.instance.listener_port != value:
raise serializers.ValidationError("Cannot change listener port.")
"""
Cannot change the listener port, unless going from None to an integer or vice versa
"""
if value and self.instance and self.instance.listener_port and self.instance.listener_port != value:
raise serializers.ValidationError(_("Cannot change listener port."))
return value
def validate_peers_from_control_nodes(self, value):
"""
Can only enable for K8S based deployments
"""
if value and not settings.IS_K8S:
raise serializers.ValidationError(_("Can only be enabled on Kubernetes or Openshift."))
return value
@@ -5518,7 +5573,19 @@ class InstanceSerializer(BaseSerializer):
class InstanceHealthCheckSerializer(BaseSerializer):
class Meta:
model = Instance
read_only_fields = ('uuid', 'hostname', 'version', 'last_health_check', 'errors', 'cpu', 'memory', 'cpu_capacity', 'mem_capacity', 'capacity')
read_only_fields = (
'uuid',
'hostname',
'ip_address',
'version',
'last_health_check',
'errors',
'cpu',
'memory',
'cpu_capacity',
'mem_capacity',
'capacity',
)
fields = read_only_fields

View File

@@ -3,21 +3,35 @@ receptor_group: awx
receptor_verify: true
receptor_tls: true
receptor_mintls13: false
{% if instance.node_type == "execution" %}
receptor_work_commands:
ansible-runner:
command: ansible-runner
params: worker
allowruntimeparams: true
verifysignature: true
additional_python_packages:
- ansible-runner
{% endif %}
custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
receptor_protocol: 'tcp'
{% if instance.listener_port %}
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_dependencies:
- python39-pip
{% else %}
receptor_listener: false
{% endif %}
{% if peers %}
receptor_peers:
{% for peer in peers %}
- host: {{ peer.host }}
port: {{ peer.port }}
protocol: tcp
{% endfor %}
{% endif %}
{% verbatim %}
podman_user: "{{ receptor_user }}"
podman_group: "{{ receptor_group }}"

View File

@@ -1,20 +1,16 @@
{% verbatim %}
---
- hosts: all
become: yes
tasks:
- name: Create the receptor user
user:
{% verbatim %}
name: "{{ receptor_user }}"
{% endverbatim %}
shell: /bin/bash
- name: Enable Copr repo for Receptor
command: dnf copr enable ansible-awx/receptor -y
{% if instance.node_type == "execution" %}
- import_role:
name: ansible.receptor.podman
{% endif %}
- import_role:
name: ansible.receptor.setup
- name: Install ansible-runner
pip:
name: ansible-runner
executable: pip3.9
{% endverbatim %}

View File

@@ -1,4 +1,4 @@
---
collections:
- name: ansible.receptor
version: 1.1.0
version: 2.0.2

View File

@@ -341,17 +341,18 @@ class InstanceDetail(RetrieveUpdateAPIView):
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they are unwanted on the detail view
data.pop('listener_port', None)
data.pop('node_type', None)
data.pop('hostname', None)
data.pop('ip_address', None)
return super(InstanceDetail, self).update_raw_data(data)
def update(self, request, *args, **kwargs):
r = super(InstanceDetail, self).update(request, *args, **kwargs)
if status.is_success(r.status_code):
obj = self.get_object()
obj.set_capacity_value()
obj.save(update_fields=['capacity'])
capacity_changed = obj.set_capacity_value()
if capacity_changed:
obj.save(update_fields=['capacity'])
r.data = serializers.InstanceSerializer(obj, context=self.get_serializer_context()).to_representation(obj)
return r

View File

@@ -6,6 +6,8 @@ import io
import ipaddress
import os
import tarfile
import time
import re
import asn1
from awx.api import serializers
@@ -40,6 +42,8 @@ RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
# │ │ └── receptor.key
# │ └── work-public-key.pem
# └── requirements.yml
class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle')
model = models.Instance
@@ -49,9 +53,9 @@ class InstanceInstallBundle(GenericAPIView):
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()
if instance_obj.node_type not in ('execution',):
if instance_obj.node_type not in ('execution', 'hop'):
return Response(
data=dict(msg=_('Install bundle can only be generated for execution nodes.')),
data=dict(msg=_('Install bundle can only be generated for execution or hop nodes.')),
status=status.HTTP_400_BAD_REQUEST,
)
@@ -66,37 +70,37 @@ class InstanceInstallBundle(GenericAPIView):
# generate and write the receptor key to receptor/tls/receptor.key in the tar file
key, cert = generate_receptor_tls(instance_obj)
def tar_addfile(tarinfo, filecontent):
tarinfo.mtime = time.time()
tarinfo.size = len(filecontent)
tar.addfile(tarinfo, io.BytesIO(filecontent))
key_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.key")
key_tarinfo.size = len(key)
tar.addfile(key_tarinfo, io.BytesIO(key))
tar_addfile(key_tarinfo, key)
cert_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.crt")
cert_tarinfo.size = len(cert)
tar.addfile(cert_tarinfo, io.BytesIO(cert))
tar_addfile(cert_tarinfo, cert)
# generate and write install_receptor.yml to the tar file
playbook = generate_playbook().encode('utf-8')
playbook = generate_playbook(instance_obj).encode('utf-8')
playbook_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/install_receptor.yml")
playbook_tarinfo.size = len(playbook)
tar.addfile(playbook_tarinfo, io.BytesIO(playbook))
tar_addfile(playbook_tarinfo, playbook)
# generate and write inventory.yml to the tar file
inventory_yml = generate_inventory_yml(instance_obj).encode('utf-8')
inventory_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/inventory.yml")
inventory_yml_tarinfo.size = len(inventory_yml)
tar.addfile(inventory_yml_tarinfo, io.BytesIO(inventory_yml))
tar_addfile(inventory_yml_tarinfo, inventory_yml)
# generate and write group_vars/all.yml to the tar file
group_vars = generate_group_vars_all_yml(instance_obj).encode('utf-8')
group_vars_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/group_vars/all.yml")
group_vars_tarinfo.size = len(group_vars)
tar.addfile(group_vars_tarinfo, io.BytesIO(group_vars))
tar_addfile(group_vars_tarinfo, group_vars)
# generate and write requirements.yml to the tar file
requirements_yml = generate_requirements_yml().encode('utf-8')
requirements_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/requirements.yml")
requirements_yml_tarinfo.size = len(requirements_yml)
tar.addfile(requirements_yml_tarinfo, io.BytesIO(requirements_yml))
tar_addfile(requirements_yml_tarinfo, requirements_yml)
# respond with the tarfile
f.seek(0)
@@ -105,8 +109,10 @@ class InstanceInstallBundle(GenericAPIView):
return response
def generate_playbook():
return render_to_string("instance_install_bundle/install_receptor.yml")
def generate_playbook(instance_obj):
playbook_yaml = render_to_string("instance_install_bundle/install_receptor.yml", context=dict(instance=instance_obj))
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', playbook_yaml)
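The substitution collapses the blank lines that empty Jinja {% if %} blocks leave behind in the rendered playbook; a quick illustration with an invented input string:
import re

rendered = "---\n\n\n- hosts: all\n\n  tasks:\n"
re.sub(r'\n+', '\n', rendered)  # -> '---\n- hosts: all\n  tasks:\n'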
def generate_requirements_yml():
@@ -118,7 +124,12 @@ def generate_inventory_yml(instance_obj):
def generate_group_vars_all_yml(instance_obj):
return render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj))
peers = []
for instance in instance_obj.peers.all():
peers.append(dict(host=instance.hostname, port=instance.listener_port))
all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj, peers=peers))
# collapse consecutive newlines into a single newline
return re.sub(r'\n+', '\n', all_yaml)
def generate_receptor_tls(instance_obj):

View File

@@ -418,6 +418,10 @@ class SettingsWrapper(UserSettingsHolder):
"""Get value while accepting the in-memory cache if key is available"""
with _ctit_db_wrapper(trans_safe=True):
return self._get_local(name)
# If the last line did not return, that means we hit a database error.
# In that case, we should not have a local cache value,
# so return empty as a signal to use the default.
return empty
def __getattr__(self, name):
value = empty

View File

@@ -13,6 +13,7 @@ from unittest import mock
from django.conf import LazySettings
from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
from django.db.utils import Error as DBError
from django.utils.translation import gettext_lazy as _
import pytest
@@ -331,3 +332,18 @@ def test_in_memory_cache_works(settings):
with mock.patch.object(settings, '_get_local') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_not_called()
@pytest.mark.defined_in_file(AWX_VAR=[])
def test_getattr_with_database_error(settings):
"""
If a setting is defined via the registry and has a null-ish default which is not None,
then referencing that setting during a database outage should give that default.
This is regression testing for a bug where it would return None.
"""
settings.registry.register('AWX_VAR', field_class=fields.StringListField, default=[], category=_('System'), category_slug='system')
settings._awx_conf_memoizedcache.clear()
with mock.patch('django.db.backends.base.base.BaseDatabaseWrapper.ensure_connection') as mock_ensure:
mock_ensure.side_effect = DBError('for test')
assert settings.AWX_VAR == []

View File

@@ -694,16 +694,18 @@ register(
category_slug='logging',
)
register(
'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
'LOG_AGGREGATOR_ACTION_QUEUE_SIZE',
field_class=fields.IntegerField,
default=1,
default=131072,
min_value=1,
label=_('Maximum disk persistence for external log aggregation (in GB)'),
label=_('Maximum number of messages that can be stored in the log action queue'),
help_text=_(
'Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. '
'Notably, this is used for the rsyslogd main queue (for input messages).'
'Defines how large the rsyslog action queue can grow in number of messages '
'stored. This can have an impact on memory utilization. When the queue '
'reaches 75% of this number, the queue will start writing to disk '
'(queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and '
'DEBUG messages will start to be discarded (queue.discardMark with '
'queue.discardSeverity=5).'
),
category=_('Logging'),
category_slug='logging',
@@ -718,8 +720,7 @@ register(
'Amount of data to store (in gigabytes) if an rsyslog action takes time '
'to process an incoming message (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
'It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
),
category=_('Logging'),
category_slug='logging',

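For the default action queue size, the documented watermarks work out as follows (a quick arithmetic sketch; rsyslog derives these internally via queue.highWatermark and queue.discardMark):
# Illustrative arithmetic only, based on the defaults above.
queue_size = 131072                      # LOG_AGGREGATOR_ACTION_QUEUE_SIZE default
high_watermark = int(queue_size * 0.75)  # 98304 messages: queue starts writing to disk
discard_mark = int(queue_size * 0.90)    # 117964 messages: NOTICE/INFO/DEBUG are discarded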
View File

@@ -4,6 +4,8 @@ from urllib.parse import urljoin, quote
from django.utils.translation import gettext_lazy as _
import requests
import base64
import binascii
conjur_inputs = {
@@ -50,6 +52,13 @@ conjur_inputs = {
}
def _is_base64(s: str) -> bool:
try:
return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
except binascii.Error:
return False
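The round-trip comparison only holds for strings that are already canonical base64, so raw tokens fall through to the encoding branch used below (example values invented):
_is_base64("dG9rZW4=")   # True: decodes and re-encodes to itself, used as-is
_is_base64("raw token")  # False: round-trip fails, so the token gets base64-encoded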
def conjur_backend(**kwargs):
url = kwargs['url']
api_key = kwargs['api_key']
@@ -77,7 +86,7 @@ def conjur_backend(**kwargs):
token = resp.content.decode('utf-8')
lookup_kwargs = {
'headers': {'Authorization': 'Token token="{}"'.format(token)},
'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
'allow_redirects': False,
}

View File

@@ -2,25 +2,28 @@ from .plugin import CredentialPlugin
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.vault import SecretsVault
from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
dsv_inputs = {
'fields': [
{
'id': 'tenant',
'label': _('Tenant'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretservercloud.com'),
'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
'type': 'string',
},
{
'id': 'tld',
'label': _('Top-level Domain (TLD)'),
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretservercloud.com'),
'choices': ['ca', 'com', 'com.au', 'com.sg', 'eu'],
'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
'choices': ['ca', 'com', 'com.au', 'eu'],
'default': 'com',
},
{'id': 'client_id', 'label': _('Client ID'), 'type': 'string'},
{
'id': 'client_id',
'label': _('Client ID'),
'type': 'string',
},
{
'id': 'client_secret',
'label': _('Client Secret'),
@@ -51,12 +54,26 @@ if settings.DEBUG:
'id': 'url_template',
'label': _('URL template'),
'type': 'string',
'default': 'https://{}.secretsvaultcloud.{}/v1',
'default': 'https://{}.secretsvaultcloud.{}',
}
)
dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault',
dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip
)
def dsv_backend(**kwargs):
tenant_name = kwargs['tenant']
tenant_tld = kwargs.get('tld', 'com')
tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
client_id = kwargs['client_id']
client_secret = kwargs['client_secret']
secret_path = kwargs['path']
secret_field = kwargs['secret_field']
tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
return dsv_secret['data'][secret_field]
dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
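A sketch of what the new backend does with its inputs; tenant, credential, and path values are invented for illustration:
# Hypothetical lookup; the URL resolves to https://ex.secretsvaultcloud.com
secret_value = dsv_backend(
    tenant="ex",
    tld="com",
    client_id="my-client-id",    # assumption: a DSV client credential
    client_secret="my-secret",   # assumption
    path="devops/db",            # assumption: path of the secret in the vault
    secret_field="password",
)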

View File

@@ -54,7 +54,9 @@ tss_inputs = {
def tss_backend(**kwargs):
if kwargs.get("domain"):
authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
authorizer = DomainPasswordGrantAuthorizer(
base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
)
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)

View File

@@ -37,8 +37,11 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
def cancel(self, task_ids, with_reply=True):
if with_reply:
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
else:
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
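The with_reply=False mode mirrors how UnifiedJob now cancels from inside a transaction (see the model change further down); a hedged sketch of both call styles, with the node and task id invented:
ctl = Control('dispatcher', 'awx-control-1')  # assumption: second arg targets a specific node
ctl.cancel(['abc-123'])                       # waits for a reply confirming the cancel
ctl.cancel(['abc-123'], with_reply=False)     # fire-and-forget, safe inside a transaction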

View File

@@ -89,8 +89,9 @@ class AWXConsumerBase(object):
if task_ids and not msg:
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
if reply_queue is not None:
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()

View File

@@ -24,6 +24,9 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=500, metavar='X', help='Remove activity stream events in batch of X events. Defaults to 500.'
)
def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))
@@ -48,7 +51,7 @@ class Command(BaseCommand):
else:
pks_to_delete.add(asobj.pk)
# Cleanup objects in batches instead of deleting each one individually.
if len(pks_to_delete) >= 500:
if len(pks_to_delete) >= self.batch_size:
ActivityStream.objects.filter(pk__in=pks_to_delete).delete()
n_deleted_items += len(pks_to_delete)
pks_to_delete.clear()
@@ -63,4 +66,5 @@ class Command(BaseCommand):
self.days = int(options.get('days', 30))
self.cutoff = now() - datetime.timedelta(days=self.days)
self.dry_run = bool(options.get('dry_run', False))
self.batch_size = int(options.get('batch_size', 500))
self.cleanup_activitystream()
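With the new flag wired through, the command can be driven like this (the batch size below is illustrative):
from django.core.management import call_command

# Equivalent to: awx-manage cleanup_activitystream --days 30 --batch-size 1000 --dry-run
call_command('cleanup_activitystream', days=30, batch_size=1000, dry_run=True)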

View File

@@ -1,22 +1,22 @@
from awx.main.models import HostMetric
from django.core.management.base import BaseCommand
from django.conf import settings
from awx.main.tasks.host_metrics import HostMetricTask
class Command(BaseCommand):
"""
Run soft-deleting of HostMetrics
This command provides the cleanup task for the HostMetric model.
There are two modes, which run in the following order:
- soft cleanup
  - Performs soft-deletion of all host metrics last automated 12 months ago or before.
    This is the same as issuing a DELETE request to /api/v2/host_metrics/N/ for all host metrics that match the criteria.
  - Updates the columns deleted, deleted_counter and last_deleted.
- hard cleanup
  - Permanently erases from the database all host metrics soft-deleted 36 months ago or before.
    This operation happens after the soft deletion has finished.
"""
help = 'Run soft-deleting of HostMetrics'
def add_arguments(self, parser):
parser.add_argument('--months-ago', type=int, dest='months-ago', action='store', help='Threshold in months for soft-deleting')
help = 'Run soft and hard-deletion of HostMetrics'
def handle(self, *args, **options):
months_ago = options.get('months-ago') or None
if not months_ago:
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
HostMetric.cleanup_task(months_ago)
HostMetricTask().cleanup(soft_threshold=settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, hard_threshold=settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD)

View File

@@ -9,6 +9,7 @@ import re
# Django
from django.apps import apps
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction, connection
from django.db.models import Min, Max
@@ -150,6 +151,9 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
parser.add_argument(
'--batch-size', dest='batch_size', type=int, default=100000, metavar='X', help='Remove jobs in batch of X jobs. Defaults to 100000.'
)
parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')
@@ -195,18 +199,58 @@ class Command(BaseCommand):
delete_meta.delete_jobs()
return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)
def _cascade_delete_job_events(self, model, pk_list):
def has_unpartitioned_table(self, model):
tblname = unified_job_class_to_event_table_name(model)
with connection.cursor() as cursor:
cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
row = cursor.fetchone()
if row is None:
return False
return True
def _delete_unpartitioned_table(self, model):
"If the unpartitioned table is no longer necessary, it will drop the table"
tblname = unified_job_class_to_event_table_name(model)
if not self.has_unpartitioned_table(model):
self.logger.debug(f'Table _unpartitioned_{tblname} does not exist; you are fully migrated.')
return
with connection.cursor() as cursor:
# same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}";')
row = cursor.fetchone()
last_created = row[0]
if last_created:
self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
else:
self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
if (last_created is None) or (last_created < self.cutoff):
self.logger.warning(
f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}\n'
'WARNING - this will happen in a separate transaction so a failure will not roll back prior cleanup'
)
with connection.cursor() as cursor:
cursor.execute(f'DROP TABLE _unpartitioned_{tblname};')
def _delete_unpartitioned_events(self, model, pk_list):
"If unpartitioned job events remain, it will cascade those from jobs in pk_list"
tblname = unified_job_class_to_event_table_name(model)
rel_name = model().event_parent_key
# Bail if the unpartitioned table does not exist anymore
if not self.has_unpartitioned_table(model):
return
# Table still exists, delete individual unpartitioned events
if pk_list:
with connection.cursor() as cursor:
tblname = unified_job_class_to_event_table_name(model)
self.logger.debug(f'Deleting {len(pk_list)} events from _unpartitioned_{tblname}, use a longer cleanup window to delete the table.')
pk_list_csv = ','.join(map(str, pk_list))
rel_name = model().event_parent_key
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv});")
def cleanup_jobs(self):
batch_size = 100000
# Hack to avoid doing N+1 queries as each item in the Job query set does
# an individual query to get the underlying UnifiedJob.
Job.polymorphic_super_sub_accessors_replaced = True
@@ -221,13 +265,14 @@ class Command(BaseCommand):
deleted = 0
info = qs.aggregate(min=Min('id'), max=Max('id'))
if info['min'] is not None:
for start in range(info['min'], info['max'] + 1, batch_size):
qs_batch = qs.filter(id__gte=start, id__lte=start + batch_size)
for start in range(info['min'], info['max'] + 1, self.batch_size):
qs_batch = qs.filter(id__gte=start, id__lte=start + self.batch_size)
pk_list = qs_batch.values_list('id', flat=True)
_, results = qs_batch.delete()
deleted += results['main.Job']
self._cascade_delete_job_events(Job, pk_list)
# Avoid dropping the job event table in case we have interacted with it already
self._delete_unpartitioned_events(Job, pk_list)
return skipped, deleted
@@ -250,7 +295,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(AdHocCommand, pk_list)
self._delete_unpartitioned_events(AdHocCommand, pk_list)
skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -278,7 +323,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(ProjectUpdate, pk_list)
self._delete_unpartitioned_events(ProjectUpdate, pk_list)
skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -306,7 +351,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(InventoryUpdate, pk_list)
self._delete_unpartitioned_events(InventoryUpdate, pk_list)
skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -330,7 +375,7 @@ class Command(BaseCommand):
deleted += 1
if not self.dry_run:
self._cascade_delete_job_events(SystemJob, pk_list)
self._delete_unpartitioned_events(SystemJob, pk_list)
skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@@ -375,12 +420,12 @@ class Command(BaseCommand):
skipped += Notification.objects.filter(created__gte=self.cutoff).count()
return skipped, deleted
@transaction.atomic
def handle(self, *args, **options):
self.verbosity = int(options.get('verbosity', 1))
self.init_logging()
self.days = int(options.get('days', 90))
self.dry_run = bool(options.get('dry_run', False))
self.batch_size = int(options.get('batch_size', 100000))
try:
self.cutoff = now() - datetime.timedelta(days=self.days)
except OverflowError:
@@ -402,19 +447,29 @@ class Command(BaseCommand):
del s.receivers[:]
s.sender_receivers_cache.clear()
for m in model_names:
if m not in models_to_cleanup:
continue
with transaction.atomic():
for m in models_to_cleanup:
skipped, deleted = getattr(self, 'cleanup_%s' % m)()
skipped, deleted = getattr(self, 'cleanup_%s' % m)()
func = getattr(self, 'cleanup_%s_partition' % m, None)
if func:
skipped_partition, deleted_partition = func()
skipped += skipped_partition
deleted += deleted_partition
func = getattr(self, 'cleanup_%s_partition' % m, None)
if func:
skipped_partition, deleted_partition = func()
skipped += skipped_partition
deleted += deleted_partition
if self.dry_run:
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
if self.dry_run:
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
# Deleting unpartitioned tables cannot be done in same transaction as updates to related tables
if not self.dry_run:
with transaction.atomic():
for m in models_to_cleanup:
unified_job_class_name = m[:-1].title().replace('Management', 'System').replace('_', '')
unified_job_class = apps.get_model('main', unified_job_class_name)
try:
unified_job_class().event_class
except (NotImplementedError, AttributeError):
continue # no need to run this for models without events
self._delete_unpartitioned_table(unified_job_class)

View File

@@ -25,17 +25,20 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help="Hostname used during provisioning")
parser.add_argument('--listener_port', dest='listener_port', type=int, help="Receptor listener port")
parser.add_argument('--node_type', type=str, default='hybrid', choices=['control', 'execution', 'hop', 'hybrid'], help="Instance Node type")
parser.add_argument('--uuid', type=str, help="Instance UUID")
def _register_hostname(self, hostname, node_type, uuid):
def _register_hostname(self, hostname, node_type, uuid, listener_port):
if not hostname:
if not settings.AWX_AUTO_DEPROVISION_INSTANCES:
raise CommandError('Registering with values from settings only intended for use in K8s installs')
from awx.main.management.commands.register_queue import RegisterQueue
(changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', node_uuid=settings.SYSTEM_UUID)
(changed, instance) = Instance.objects.register(
ip_address=os.environ.get('MY_POD_IP'), listener_port=listener_port, node_type='control', node_uuid=settings.SYSTEM_UUID
)
RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
RegisterQueue(
settings.DEFAULT_EXECUTION_QUEUE_NAME,
@@ -48,7 +51,7 @@ class Command(BaseCommand):
max_concurrent_jobs=settings.DEFAULT_EXECUTION_QUEUE_MAX_CONCURRENT_JOBS,
).register()
else:
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid)
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid, listener_port=listener_port)
if changed:
print("Successfully registered instance {}".format(hostname))
else:
@@ -58,6 +61,6 @@ class Command(BaseCommand):
@transaction.atomic
def handle(self, **options):
self.changed = False
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'))
self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'), options.get('listener_port'))
if self.changed:
print("(changed: True)")

View File

@@ -115,21 +115,25 @@ class InstanceManager(models.Manager):
return node[0]
raise RuntimeError("No instance found with the current cluster host id")
def register(self, node_uuid=None, hostname=None, ip_address=None, node_type='hybrid', defaults=None):
def register(self, node_uuid=None, hostname=None, ip_address="", listener_port=None, node_type='hybrid', defaults=None):
if not hostname:
hostname = settings.CLUSTER_HOST_ID
if not ip_address:
ip_address = ""
with advisory_lock('instance_registration_%s' % hostname):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
# detect any instances with the same IP address.
# if one exists, set it to None
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = None
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
# if one exists, set it to ""
if ip_address:
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = ""
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
# Return existing instance that matches hostname or UUID (default to UUID)
if node_uuid is not None and node_uuid != UUID_DEFAULT and self.filter(uuid=node_uuid).exists():
@@ -157,6 +161,9 @@ class InstanceManager(models.Manager):
if instance.node_type != node_type:
instance.node_type = node_type
update_fields.append('node_type')
if instance.listener_port != listener_port:
instance.listener_port = listener_port
update_fields.append('listener_port')
if update_fields:
instance.save(update_fields=update_fields)
return (True, instance)
@@ -167,12 +174,11 @@ class InstanceManager(models.Manager):
create_defaults = {
'node_state': Instance.States.INSTALLED,
'capacity': 0,
'listener_port': 27199,
}
if defaults is not None:
create_defaults.update(defaults)
uuid_option = {'uuid': node_uuid if node_uuid is not None else uuid.uuid4()}
if node_type == 'execution' and 'version' not in create_defaults:
create_defaults['version'] = RECEPTOR_PENDING
instance = self.create(hostname=hostname, ip_address=ip_address, node_type=node_type, **create_defaults, **uuid_option)
instance = self.create(hostname=hostname, ip_address=ip_address, listener_port=listener_port, node_type=node_type, **create_defaults, **uuid_option)
return (True, instance)
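With listener_port threaded through, registering a node can carry the port in a single call; a sketch mirroring the provisioning command above (hostname and port invented):
changed, instance = Instance.objects.register(
    hostname="exec1.example.org",
    node_type="execution",
    listener_port=27199,
)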

View File

@@ -0,0 +1,75 @@
# Generated by Django 4.2.3 on 2023-08-04 20:50
import django.core.validators
from django.db import migrations, models
from django.conf import settings
def automatically_peer_from_control_plane(apps, schema_editor):
if settings.IS_K8S:
Instance = apps.get_model('main', 'Instance')
Instance.objects.filter(node_type='execution').update(peers_from_control_nodes=True)
Instance.objects.filter(node_type='control').update(listener_port=None)
class Migration(migrations.Migration):
dependencies = [
('main', '0186_drop_django_taggit'),
]
operations = [
migrations.AlterModelOptions(
name='instancelink',
options={'ordering': ('id',)},
),
migrations.AddField(
model_name='instance',
name='peers_from_control_nodes',
field=models.BooleanField(default=False, help_text='If True, control plane cluster nodes should automatically peer to it.'),
),
migrations.AlterField(
model_name='instance',
name='ip_address',
field=models.CharField(blank=True, default='', max_length=50),
),
migrations.AlterField(
model_name='instance',
name='listener_port',
field=models.PositiveIntegerField(
blank=True,
default=None,
help_text='Port that Receptor will listen for incoming connections on.',
null=True,
validators=[django.core.validators.MinValueValidator(1024), django.core.validators.MaxValueValidator(65535)],
),
),
migrations.AlterField(
model_name='instance',
name='peers',
field=models.ManyToManyField(related_name='peers_from', through='main.InstanceLink', to='main.instance'),
),
migrations.AlterField(
model_name='instancelink',
name='link_state',
field=models.CharField(
choices=[('adding', 'Adding'), ('established', 'Established'), ('removing', 'Removing')],
default='adding',
help_text='Indicates the current life cycle stage of this peer link.',
max_length=16,
),
),
migrations.AddConstraint(
model_name='instance',
constraint=models.UniqueConstraint(
condition=models.Q(('ip_address', ''), _negated=True),
fields=('ip_address',),
name='unique_ip_address_not_empty',
violation_error_message='Field ip_address must be unique.',
),
),
migrations.AddConstraint(
model_name='instancelink',
constraint=models.CheckConstraint(check=models.Q(('source', models.F('target')), _negated=True), name='source_and_target_can_not_be_equal'),
),
migrations.RunPython(automatically_peer_from_control_plane),
]

View File

@@ -17,6 +17,7 @@ from jinja2 import sandbox
from django.db import models
from django.utils.translation import gettext_lazy as _, gettext_noop
from django.core.exceptions import ValidationError
from django.conf import settings
from django.utils.encoding import force_str
from django.utils.functional import cached_property
from django.utils.timezone import now
@@ -30,7 +31,7 @@ from awx.main.fields import (
CredentialTypeInjectorField,
DynamicCredentialInputField,
)
from awx.main.utils import decrypt_field, classproperty
from awx.main.utils import decrypt_field, classproperty, set_environ
from awx.main.utils.safe_yaml import safe_dump
from awx.main.utils.execution_environments import to_container_path
from awx.main.validators import validate_ssh_private_key
@@ -1252,7 +1253,9 @@ class CredentialInputSource(PrimordialModel):
backend_kwargs[field_name] = value
backend_kwargs.update(self.metadata)
return backend(**backend_kwargs)
with set_environ(**settings.AWX_TASK_ENV):
return backend(**backend_kwargs)
def get_absolute_url(self, request=None):
view_name = 'api:credential_input_source_detail'

View File

@@ -12,13 +12,14 @@ from django.dispatch import receiver
from django.utils.translation import gettext_lazy as _
from django.conf import settings
from django.utils.timezone import now, timedelta
from django.db.models import Sum
from django.db.models import Sum, Q
import redis
from solo.models import SingletonModel
# AWX
from awx import __version__ as awx_application_version
from awx.main.utils import is_testing
from awx.api.versioning import reverse
from awx.main.fields import ImplicitRoleField
from awx.main.managers import InstanceManager, UUID_DEFAULT
@@ -70,16 +71,33 @@ class InstanceLink(BaseModel):
REMOVING = 'removing', _('Removing')
link_state = models.CharField(
choices=States.choices, default=States.ESTABLISHED, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
choices=States.choices, default=States.ADDING, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
)
class Meta:
unique_together = ('source', 'target')
ordering = ("id",)
constraints = [models.CheckConstraint(check=~models.Q(source=models.F('target')), name='source_and_target_can_not_be_equal')]
class Instance(HasPolicyEditsMixin, BaseModel):
"""A model representing an AWX instance running against this database."""
class Meta:
app_label = 'main'
ordering = ("hostname",)
constraints = [
models.UniqueConstraint(
fields=["ip_address"],
condition=~Q(ip_address=""), # don't apply to constraint to empty entries
name="unique_ip_address_not_empty",
violation_error_message=_("Field ip_address must be unique."),
)
]
def __str__(self):
return self.hostname
objects = InstanceManager()
# Fields set in instance registration
@@ -87,10 +105,8 @@ class Instance(HasPolicyEditsMixin, BaseModel):
hostname = models.CharField(max_length=250, unique=True)
ip_address = models.CharField(
blank=True,
null=True,
default=None,
default="",
max_length=50,
unique=True,
)
# Auto-fields, implementation is different from BaseModel
created = models.DateTimeField(auto_now_add=True)
@@ -169,16 +185,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
)
listener_port = models.PositiveIntegerField(
blank=True,
default=27199,
validators=[MinValueValidator(1), MaxValueValidator(65535)],
null=True,
default=None,
validators=[MinValueValidator(1024), MaxValueValidator(65535)],
help_text=_("Port that Receptor will listen for incoming connections on."),
)
peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'))
class Meta:
app_label = 'main'
ordering = ("hostname",)
peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'), related_name='peers_from')
peers_from_control_nodes = models.BooleanField(default=False, help_text=_("If True, control plane cluster nodes should automatically peer to it."))
POLICY_FIELDS = frozenset(('managed_by_policy', 'hostname', 'capacity_adjustment'))
@@ -275,10 +289,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
if update_last_seen:
update_fields += ['last_seen']
if perform_save:
self.save(update_fields=update_fields)
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
self.save(update_fields=update_fields)
return update_fields
def set_capacity_value(self):
old_val = self.capacity
"""Sets capacity according to capacity adjustment rule (no save)"""
if self.enabled and self.node_type != 'hop':
lower_cap = min(self.mem_capacity, self.cpu_capacity)
@@ -286,6 +304,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.capacity = lower_cap + (higher_cap - lower_cap) * self.capacity_adjustment
else:
self.capacity = 0
return int(self.capacity) != int(old_val) # return True if value changed
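Returning the changed flag lets callers skip redundant writes; this is the pattern InstanceDetail.update adopts earlier in this changeset:
# Caller-side sketch enabled by the boolean return value.
if instance.set_capacity_value():
    instance.save(update_fields=['capacity'])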
def refresh_capacity_fields(self):
"""Update derived capacity fields from cpu and memory (no save)"""
@@ -293,8 +312,8 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.cpu_capacity = 0
self.mem_capacity = 0 # formula has a non-zero offset, so we make sure it is 0 for hop nodes
else:
self.cpu_capacity = get_cpu_effective_capacity(self.cpu)
self.mem_capacity = get_mem_effective_capacity(self.memory)
self.cpu_capacity = get_cpu_effective_capacity(self.cpu, is_control_node=bool(self.node_type in (Instance.Types.CONTROL, Instance.Types.HYBRID)))
self.mem_capacity = get_mem_effective_capacity(self.memory, is_control_node=bool(self.node_type in (Instance.Types.CONTROL, Instance.Types.HYBRID)))
self.set_capacity_value()
def save_health_data(self, version=None, cpu=0, memory=0, uuid=None, update_last_seen=False, errors=''):
@@ -317,12 +336,17 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.version = version
update_fields.append('version')
new_cpu = get_corrected_cpu(cpu)
if self.node_type == Instance.Types.EXECUTION:
new_cpu = cpu
new_memory = memory
else:
new_cpu = get_corrected_cpu(cpu)
new_memory = get_corrected_memory(memory)
if new_cpu != self.cpu:
self.cpu = new_cpu
update_fields.append('cpu')
new_memory = get_corrected_memory(memory)
if new_memory != self.memory:
self.memory = new_memory
update_fields.append('memory')
@@ -464,21 +488,50 @@ def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs
instance.set_default_policy_fields()
def schedule_write_receptor_config(broadcast=True):
from awx.main.tasks.receptor import write_receptor_config # prevents circular import
# broadcast to all control instances to update their receptor configs
if broadcast:
connection.on_commit(lambda: write_receptor_config.apply_async(queue='tower_broadcast_all'))
else:
if not is_testing():
write_receptor_config() # just run locally
@receiver(post_save, sender=Instance)
def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION,):
'''
Here we link control nodes to hop or execution nodes based on the
peers_from_control_nodes field.
write_receptor_config should be called on each control node when:
1. new node is created with peers_from_control_nodes enabled
2. a node changes its value of peers_from_control_nodes
3. a new control node comes online and has instances to peer to
'''
if created and settings.IS_K8S and instance.node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
inst = Instance.objects.filter(peers_from_control_nodes=True)
if set(instance.peers.all()) != set(inst):
instance.peers.set(inst)
schedule_write_receptor_config(broadcast=False)
if settings.IS_K8S and instance.node_type in [Instance.Types.HOP, Instance.Types.EXECUTION]:
if instance.node_state == Instance.States.DEPROVISIONING:
from awx.main.tasks.receptor import remove_deprovisioned_node # prevents circular import
# wait for jobs on the node to complete, then delete the
# node and kick off write_receptor_config
connection.on_commit(lambda: remove_deprovisioned_node.apply_async([instance.hostname]))
if instance.node_state == Instance.States.INSTALLED:
from awx.main.tasks.receptor import write_receptor_config # prevents circular import
# broadcast to all control instances to update their receptor configs
connection.on_commit(lambda: write_receptor_config.apply_async(queue='tower_broadcast_all'))
else:
control_instances = set(Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID]))
if instance.peers_from_control_nodes:
if (control_instances & set(instance.peers_from.all())) != set(control_instances):
instance.peers_from.add(*control_instances)
schedule_write_receptor_config() # keep method separate to make pytest mocking easier
else:
if set(control_instances) & set(instance.peers_from.all()):
instance.peers_from.remove(*control_instances)
schedule_write_receptor_config()
if created or instance.has_policy_changes():
schedule_policy_task()
@@ -493,6 +546,8 @@ def on_instance_group_deleted(sender, instance, using, **kwargs):
@receiver(post_delete, sender=Instance)
def on_instance_deleted(sender, instance, using, **kwargs):
schedule_policy_task()
if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION, Instance.Types.HOP) and instance.peers_from_control_nodes:
schedule_write_receptor_config()
class UnifiedJobTemplateInstanceGroupMembership(models.Model):

View File

@@ -10,7 +10,6 @@ import copy
import os.path
from urllib.parse import urljoin
import dateutil.relativedelta
import yaml
# Django
@@ -890,23 +889,6 @@ class HostMetric(models.Model):
self.deleted = False
self.save(update_fields=['deleted'])
@classmethod
def cleanup_task(cls, months_ago):
try:
months_ago = int(months_ago)
if months_ago <= 0:
raise ValueError()
last_automation_before = now() - dateutil.relativedelta.relativedelta(months=months_ago)
logger.info(f'cleanup_host_metrics: soft-deleting records last automated before {last_automation_before}')
HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=models.F('deleted_counter') + 1, last_deleted=now()
)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
except (TypeError, ValueError):
logger.error(f"cleanup_host_metrics: months_ago({months_ago}) has to be a positive integer value")
class HostMetricSummaryMonthly(models.Model):
"""

View File

@@ -1439,6 +1439,11 @@ class UnifiedJob(
if not self.celery_task_id:
return
canceled = []
if not connection.get_autocommit():
# this condition is purpose-written for the task manager, when it cancels jobs in workflows
ControlDispatcher('dispatcher', self.controller_node).cancel([self.celery_task_id], with_reply=False)
return True # task manager itself needs to act under assumption that cancel was received
try:
# Use control and reply mechanism to cancel and obtain confirmation
timeout = 5

View File

@@ -124,6 +124,13 @@ class TaskBase:
self.record_aggregate_metrics()
sys.exit(1)
def get_local_metrics(self):
data = {}
for k, metric in self.subsystem_metrics.METRICS.items():
if k.startswith(self.prefix) and metric.metric_has_changed:
data[k[len(self.prefix) + 1 :]] = metric.current_value
return data
def schedule(self):
# Always be able to restore the original signal handler if we finish
original_sigusr1 = signal.getsignal(signal.SIGUSR1)
@@ -146,10 +153,14 @@ class TaskBase:
signal.signal(signal.SIGUSR1, original_sigusr1)
commit_start = time.time()
logger.debug(f"Commiting {self.prefix} Scheduler changes")
if self.prefix == "task_manager":
self.subsystem_metrics.set(f"{self.prefix}_commit_seconds", time.time() - commit_start)
local_metrics = self.get_local_metrics()
self.record_aggregate_metrics()
logger.debug(f"Finishing {self.prefix} Scheduler")
logger.debug(f"Finished {self.prefix} Scheduler, timing data:\n{local_metrics}")
class WorkflowManager(TaskBase):
@@ -259,6 +270,9 @@ class WorkflowManager(TaskBase):
job.status = 'failed'
job.save(update_fields=['status', 'job_explanation'])
job.websocket_emit_status('failed')
# NOTE: sending notification templates here performs slightly worse;
# this is not yet optimized in the same way as for the TaskManager
job.send_notification_templates('failed')
ScheduleWorkflowManager().schedule()
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
@@ -419,6 +433,25 @@ class TaskManager(TaskBase):
self.tm_models = TaskManagerModels()
self.controlplane_ig = self.tm_models.instance_groups.controlplane_ig
def process_job_dep_failures(self, task):
"""If job depends on a job that has failed, mark as failed and handle misc stuff."""
for dep in task.dependent_jobs.all():
# if we detect a failed or error dependency, go ahead and fail this task.
if dep.status in ("error", "failed"):
task.status = 'failed'
logger.warning(f'Task manager: failing task {task.id} because its dependency {dep.id} failed')
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
self.pre_start_failed.append(task.id)
return True
return False
def job_blocked_by(self, task):
# TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
# in the old task manager this was handled as a method on each task object outside of the graph and
@@ -430,20 +463,6 @@ class TaskManager(TaskBase):
for dep in task.dependent_jobs.all():
if dep.status in ACTIVE_STATES:
return dep
# if we detect a failed or error dependency, go ahead and fail this
# task. The errback on the dependency takes some time to trigger,
# and we don't want the task to enter running state if its
# dependency has failed or errored.
elif dep.status in ("error", "failed"):
task.status = 'failed'
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
return dep
return None
@@ -463,7 +482,6 @@ class TaskManager(TaskBase):
if self.start_task_limit == 0:
# schedule another run immediately after this task manager
ScheduleTaskManager().schedule()
from awx.main.tasks.system import handle_work_error, handle_work_success
task.status = 'waiting'
@@ -474,7 +492,7 @@ class TaskManager(TaskBase):
task.job_explanation += ' '
task.job_explanation += 'Task failed pre-start check.'
task.save()
# TODO: run error handler to fail sub-tasks and send notifications
self.pre_start_failed.append(task.id)
else:
if type(task) is WorkflowJob:
task.status = 'running'
@@ -496,19 +514,16 @@ class TaskManager(TaskBase):
# apply_async does a NOTIFY to the channel dispatcher is listening to
# postgres will treat this as part of the transaction, which is what we want
if task.status != 'failed' and type(task) is not WorkflowJob:
task_actual = {'type': get_type_for_model(type(task)), 'id': task.id}
task_cls = task._get_task_class()
task_cls.apply_async(
[task.pk],
opts,
queue=task.get_queue_name(),
uuid=task.celery_task_id,
callbacks=[{'task': handle_work_success.name, 'kwargs': {'task_actual': task_actual}}],
errbacks=[{'task': handle_work_error.name, 'kwargs': {'task_actual': task_actual}}],
)
# In exception cases, like a job failing pre-start checks, we send the websocket status message
# for jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
# In exception cases, like a job failing pre-start checks, we send the websocket status message.
# For jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
if task.status != 'waiting':
task.websocket_emit_status(task.status) # adds to on_commit
@@ -529,6 +544,11 @@ class TaskManager(TaskBase):
if self.timed_out():
logger.warning("Task manager has reached time out while processing pending jobs, exiting loop early")
break
has_failed = self.process_job_dep_failures(task)
if has_failed:
continue
blocked_by = self.job_blocked_by(task)
if blocked_by:
self.subsystem_metrics.inc(f"{self.prefix}_tasks_blocked", 1)
@@ -642,6 +662,11 @@ class TaskManager(TaskBase):
reap_job(j, 'failed')
def process_tasks(self):
# maintain a list of jobs that went to an early failure state,
# meaning the dispatcher never received these jobs,
# so we have to handle notifications for them here
self.pre_start_failed = []
running_tasks = [t for t in self.all_tasks if t.status in ['waiting', 'running']]
self.process_running_tasks(running_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_running_processed", len(running_tasks))
@@ -651,6 +676,11 @@ class TaskManager(TaskBase):
self.process_pending_tasks(pending_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(pending_tasks))
if self.pre_start_failed:
from awx.main.tasks.system import handle_failure_notifications
handle_failure_notifications.delay(self.pre_start_failed)
def timeout_approval_node(self, task):
if self.timed_out():
logger.warning("Task manager has reached time out while processing approval nodes, exiting loop early")

View File

@@ -208,9 +208,10 @@ class RunnerCallback:
# We opened a connection just for that save, close it here now
connections.close_all()
elif status_data['status'] == 'error':
result_traceback = status_data.get('result_traceback', None)
if result_traceback:
self.delay_update(result_traceback=result_traceback)
for field_name in ('result_traceback', 'job_explanation'):
field_value = status_data.get(field_name, None)
if field_value:
self.delay_update(**{field_name: field_value})
def artifacts_handler(self, artifact_dir):
self.artifacts_processed = True

awx/main/tasks/helpers.py (new file)
View File

@@ -0,0 +1,10 @@
from django.utils.timezone import now
from rest_framework.fields import DateTimeField
def is_run_threshold_reached(setting, threshold_seconds):
last_time = DateTimeField().to_internal_value(setting) if setting else None
if not last_time:
return True
else:
return (now() - last_time).total_seconds() > threshold_seconds
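Illustrative behavior of the helper (timestamp invented; DRF's DateTimeField parses ISO 8601 strings):
is_run_threshold_reached(None, 86400)                    # True: no recorded run yet
is_run_threshold_reached('2023-01-01T00:00:00Z', 86400)  # True: more than a day has passed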

View File

@@ -3,33 +3,90 @@ from dateutil.relativedelta import relativedelta
import logging
from django.conf import settings
from django.db.models import Count
from django.db.models import Count, F
from django.db.models.functions import TruncMonth
from django.utils.timezone import now
from rest_framework.fields import DateTimeField
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.main.tasks.helpers import is_run_threshold_reached
from awx.conf.license import get_license
logger = logging.getLogger('awx.main.tasks.host_metric_summary_monthly')
logger = logging.getLogger('awx.main.tasks.host_metrics')
@task(queue=get_task_queuename)
def cleanup_host_metrics():
if is_run_threshold_reached(getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', None), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400):
logger.info(f"Executing cleanup_host_metrics, last ran at {getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', '---')}")
HostMetricTask().cleanup(
soft_threshold=getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12),
hard_threshold=getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36),
)
logger.info("Finished cleanup_host_metrics")
@task(queue=get_task_queuename)
def host_metric_summary_monthly():
"""Run cleanup host metrics summary monthly task each week"""
if _is_run_threshold_reached(
getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400
):
if is_run_threshold_reached(getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400):
logger.info(f"Executing host_metric_summary_monthly, last ran at {getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', '---')}")
HostMetricSummaryMonthlyTask().execute()
logger.info("Finished host_metric_summary_monthly")
def _is_run_threshold_reached(setting, threshold_seconds):
last_time = DateTimeField().to_internal_value(setting) if setting else DateTimeField().to_internal_value('1970-01-01')
class HostMetricTask:
"""
This class provides the cleanup task for the HostMetric model.
There are two modes:
- soft cleanup (updates the columns deleted, deleted_counter and last_deleted)
- hard cleanup (deletes from the db)
"""
return (now() - last_time).total_seconds() > threshold_seconds
def cleanup(self, soft_threshold=None, hard_threshold=None):
"""
Main entrypoint, runs either soft cleanup, hard cleanup or both
:param soft_threshold: (int)
:param hard_threshold: (int)
"""
if hard_threshold is not None:
self.hard_cleanup(hard_threshold)
if soft_threshold is not None:
self.soft_cleanup(soft_threshold)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
@staticmethod
def soft_cleanup(threshold=None):
if threshold is None:
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
try:
threshold = int(threshold)
except (ValueError, TypeError) as e:
raise type(e)("soft_threshold has to be convertible to number") from e
last_automation_before = now() - relativedelta(months=threshold)
rows = HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=F('deleted_counter') + 1, last_deleted=now()
)
logger.info(f'cleanup_host_metrics: soft-deleted records last automated before {last_automation_before}, affected rows: {rows}')
@staticmethod
def hard_cleanup(threshold=None):
if threshold is None:
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36)
try:
threshold = int(threshold)
except (ValueError, TypeError) as e:
raise type(e)("hard_threshold has to be convertible to number") from e
last_deleted_before = now() - relativedelta(months=threshold)
queryset = HostMetric.objects.filter(deleted=True, last_deleted__lt=last_deleted_before)
rows = queryset.delete()
logger.info(f'cleanup_host_metrics: hard-deleted records which were soft deleted before {last_deleted_before}, affected rows: {rows[0]}')
class HostMetricSummaryMonthlyTask:

View File
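To make the two cleanup modes concrete, the cutoff arithmetic works out as below; this is a sketch assuming the default thresholds of 12 months (soft) and 36 months (hard) named in the diff.

from datetime import datetime, timezone
from dateutil.relativedelta import relativedelta

now = datetime.now(timezone.utc)
soft_cutoff = now - relativedelta(months=12)  # last_automation older than this: soft-delete
hard_cutoff = now - relativedelta(months=36)  # last_deleted older than this: hard-delete
print(f"soft-delete metrics not automated since {soft_cutoff:%Y-%m-%d}")
print(f"hard-delete metrics soft-deleted before {hard_cutoff:%Y-%m-%d}")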

@@ -74,6 +74,8 @@ from awx.main.utils.common import (
extract_ansible_vars,
get_awx_version,
create_partition,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.conf.license import get_license
from awx.main.utils.handlers import SpecialInventoryHandler
@@ -450,6 +452,12 @@ class BaseTask(object):
instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version'])
# Run task manager appropriately for speculative dependencies
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
def should_use_fact_cache(self):
return False
@@ -1873,6 +1881,8 @@ class RunSystemJob(BaseTask):
if system_job.job_type in ('cleanup_jobs', 'cleanup_activitystream'):
if 'days' in json_vars:
args.extend(['--days', str(json_vars.get('days', 60))])
if 'batch_size' in json_vars:
args.extend(['--batch-size', str(json_vars['batch_size'])])
if 'dry_run' in json_vars and json_vars['dry_run']:
args.extend(['--dry-run'])
if system_job.job_type == 'cleanup_jobs':

View File
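The RunSystemJob hunk above only appends flags that are present in the job's extra vars. A self-contained, slightly condensed sketch of that assembly (plain dict in, argv fragment out):

def build_cleanup_args(json_vars):
    args = []
    if 'days' in json_vars:
        args.extend(['--days', str(json_vars.get('days', 60))])
    if 'batch_size' in json_vars:
        args.extend(['--batch-size', str(json_vars['batch_size'])])
    if json_vars.get('dry_run'):
        args.append('--dry-run')
    return args

assert build_cleanup_args({'days': 30, 'batch_size': 500, 'dry_run': True}) == [
    '--days', '30', '--batch-size', '500', '--dry-run'
]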

@@ -30,6 +30,7 @@ from awx.main.tasks.signals import signal_state, signal_callback, SignalExit
from awx.main.models import Instance, InstanceLink, UnifiedJob
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task
from awx.main.utils.pglock import advisory_lock
# Receptorctl
from receptorctl.socket_interface import ReceptorControl
@@ -431,16 +432,16 @@ class AWXReceptorJob:
# massive, only ask for last 1000 bytes
startpos = max(stdout_size - 1000, 0)
resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id, startpos=startpos, return_socket=True, return_sockfile=True)
resultsock.setblocking(False) # this makes resultfile reads non blocking
lines = resultfile.readlines()
receptor_output = b"".join(lines).decode()
if receptor_output:
self.task.runner_callback.delay_update(result_traceback=receptor_output)
self.task.runner_callback.delay_update(result_traceback=f'Worker output:\n{receptor_output}')
elif detail:
self.task.runner_callback.delay_update(result_traceback=detail)
self.task.runner_callback.delay_update(result_traceback=f'Receptor detail:\n{detail}')
else:
logger.warning(f'No result details or output from {self.task.instance.log_format}, status:\n{state_name}')
except Exception:
logger.exception(f'Work results error from job id={self.task.instance.id} work_unit={self.task.instance.work_unit_id}')
raise RuntimeError(detail)
return res
@@ -675,26 +676,41 @@ RECEPTOR_CONFIG_STARTER = (
)
@task()
def write_receptor_config():
lock = FileLock(__RECEPTOR_CONF_LOCKFILE)
with lock:
receptor_config = list(RECEPTOR_CONFIG_STARTER)
def should_update_config(instances):
'''
Checks that the list of instances matches the list of tcp-peers in the config.
'''
current_config = read_receptor_config() # this gets receptor conf lock
current_peers = []
for config_entry in current_config:
for key, value in config_entry.items():
if key.endswith('-peer'):
current_peers.append(value['address'])
intended_peers = [f"{i.hostname}:{i.listener_port}" for i in instances]
logger.debug(f"Peers current {current_peers} intended {intended_peers}")
if set(current_peers) == set(intended_peers):
return False  # config file is already up to date
this_inst = Instance.objects.me()
instances = Instance.objects.filter(node_type=Instance.Types.EXECUTION)
existing_peers = {link.target_id for link in InstanceLink.objects.filter(source=this_inst)}
new_links = []
for instance in instances:
peer = {'tcp-peer': {'address': f'{instance.hostname}:{instance.listener_port}', 'tls': 'tlsclient'}}
receptor_config.append(peer)
if instance.id not in existing_peers:
new_links.append(InstanceLink(source=this_inst, target=instance, link_state=InstanceLink.States.ADDING))
return True
InstanceLink.objects.bulk_create(new_links)
with open(__RECEPTOR_CONF, 'w') as file:
yaml.dump(receptor_config, file, default_flow_style=False)
def generate_config_data():
# returns two values
# receptor config - based on current database peers
# should_update - If True, receptor_config differs from the receptor conf file on disk
instances = Instance.objects.filter(node_type__in=(Instance.Types.EXECUTION, Instance.Types.HOP), peers_from_control_nodes=True)
receptor_config = list(RECEPTOR_CONFIG_STARTER)
for instance in instances:
peer = {'tcp-peer': {'address': f'{instance.hostname}:{instance.listener_port}', 'tls': 'tlsclient'}}
receptor_config.append(peer)
should_update = should_update_config(instances)
return receptor_config, should_update
def reload_receptor():
logger.warning("Receptor config changed, reloading receptor")
# This needs to be outside of the lock because this function itself will acquire the lock.
receptor_ctl = get_receptor_ctl()
@@ -710,8 +726,29 @@ def write_receptor_config():
else:
raise RuntimeError("Receptor reload failed")
links = InstanceLink.objects.filter(source=this_inst, target__in=instances, link_state=InstanceLink.States.ADDING)
links.update(link_state=InstanceLink.States.ESTABLISHED)
@task()
def write_receptor_config():
"""
This task runs async on each control node, K8S only.
It is triggered whenever a remote node is added or removed, or if peers_from_control_nodes
is flipped.
It is possible for write_receptor_config to be called multiple times, for example
when new instances are added in quick succession. To serialize those runs, each
control node first grabs a DB advisory lock specific to just that control node
(i.e. multiple control nodes can still run this function at the same time, since
each one only writes its local receptor config file).
"""
with advisory_lock(f"{settings.CLUSTER_HOST_ID}_write_receptor_config", wait=True):
# Config file needs to be updated
receptor_config, should_update = generate_config_data()
if should_update:
lock = FileLock(__RECEPTOR_CONF_LOCKFILE)
with lock:
with open(__RECEPTOR_CONF, 'w') as file:
yaml.dump(receptor_config, file, default_flow_style=False)
reload_receptor()
@task(queue=get_task_queuename)
@@ -731,6 +768,3 @@ def remove_deprovisioned_node(hostname):
# This will as a side effect also delete the InstanceLinks that are tied to it.
Instance.objects.filter(hostname=hostname).delete()
# Update the receptor configs for all of the control-plane.
write_receptor_config.apply_async(queue='tower_broadcast_all')

View File
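The core of should_update_config is a set comparison between the tcp-peer addresses already on disk and the addresses the database says should exist. A standalone sketch of that comparison, with hypothetical hostnames:

def peers_in(config):
    # Any entry whose key ends in '-peer' contributes its address.
    return {v['address'] for entry in config for k, v in entry.items() if k.endswith('-peer')}

current_config = [{'tcp-peer': {'address': 'hop1:6789', 'tls': 'tlsclient'}}]
intended = {'hop1:6789', 'hop2:6789'}
assert peers_in(current_config) != intended  # hop2 was added, so the file is stale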

@@ -16,7 +16,9 @@ class SignalExit(Exception):
class SignalState:
def reset(self):
self.sigterm_flag = False
self.is_active = False
self.sigint_flag = False
self.is_active = False # for nested context managers
self.original_sigterm = None
self.original_sigint = None
self.raise_exception = False
@@ -24,23 +26,36 @@ class SignalState:
def __init__(self):
self.reset()
def set_flag(self, *args):
"""Method to pass into the python signal.signal method to receive signals"""
self.sigterm_flag = True
def raise_if_needed(self):
if self.raise_exception:
self.raise_exception = False # so it is not raised a second time in error handling
raise SignalExit()
def set_sigterm_flag(self, *args):
self.sigterm_flag = True
self.raise_if_needed()
def set_sigint_flag(self, *args):
self.sigint_flag = True
self.raise_if_needed()
def connect_signals(self):
self.original_sigterm = signal.getsignal(signal.SIGTERM)
self.original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, self.set_flag)
signal.signal(signal.SIGINT, self.set_flag)
signal.signal(signal.SIGTERM, self.set_sigterm_flag)
signal.signal(signal.SIGINT, self.set_sigint_flag)
self.is_active = True
def restore_signals(self):
signal.signal(signal.SIGTERM, self.original_sigterm)
signal.signal(signal.SIGINT, self.original_sigint)
# if we got a signal while the context manager was active, chain to the original handlers.
if self.sigterm_flag:
if callable(self.original_sigterm):
self.original_sigterm()
if self.sigint_flag:
if callable(self.original_sigint):
self.original_sigint()
self.reset()
@@ -48,7 +63,7 @@ signal_state = SignalState()
def signal_callback():
return signal_state.sigterm_flag
return bool(signal_state.sigterm_flag or signal_state.sigint_flag)
def with_signal_handling(f):

View File
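The restore path above chains to whatever handler was installed before the context manager, but only if it is callable (SIG_DFL and SIG_IGN are not). A minimal standalone sketch of that flag-then-chain behavior on POSIX:

import os
import signal

received = {'sigterm': False}

def set_flag(*args):
    received['sigterm'] = True

original = signal.getsignal(signal.SIGTERM)
signal.signal(signal.SIGTERM, set_flag)
os.kill(os.getpid(), signal.SIGTERM)  # deliver SIGTERM to ourselves
assert received['sigterm']
signal.signal(signal.SIGTERM, original)  # restore
if received['sigterm'] and callable(original):
    original(signal.SIGTERM, None)  # chain to the previous handler, if it was a Python one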

@@ -48,22 +48,16 @@ from awx.main.models import (
Inventory,
SmartInventoryMembership,
Job,
HostMetric,
convert_jsonfields,
)
from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename, reaper
from awx.main.utils.common import (
get_type_for_model,
ignore_inventory_computed_fields,
ignore_inventory_group_removal,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.main.utils.common import ignore_inventory_computed_fields, ignore_inventory_group_removal
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
from awx.main.tasks.helpers import is_run_threshold_reached
from awx.main.tasks.receptor import get_receptor_ctl, worker_info, worker_cleanup, administrative_workunit_reaper, write_receptor_config
from awx.main.consumers import emit_channel_notification
from awx.main import analytics
@@ -368,9 +362,7 @@ def send_notifications(notification_list, job_id=None):
@task(queue=get_task_queuename)
def gather_analytics():
from awx.conf.models import Setting
if is_run_threshold_reached(Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_GATHER').first(), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
if is_run_threshold_reached(getattr(settings, 'AUTOMATION_ANALYTICS_LAST_GATHER', None), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
analytics.gather()
@@ -427,29 +419,6 @@ def cleanup_images_and_files():
_cleanup_images_and_files()
@task(queue=get_task_queuename)
def cleanup_host_metrics():
"""Run cleanup host metrics ~each month"""
# TODO: move whole method to host_metrics in follow-up PR
from awx.conf.models import Setting
if is_run_threshold_reached(
Setting.objects.filter(key='CLEANUP_HOST_METRICS_LAST_TS').first(), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400
):
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
logger.info("Executing cleanup_host_metrics")
HostMetric.cleanup_task(months_ago)
logger.info("Finished cleanup_host_metrics")
def is_run_threshold_reached(setting, threshold_seconds):
from rest_framework.fields import DateTimeField
last_time = DateTimeField().to_internal_value(setting.value) if setting and setting.value else DateTimeField().to_internal_value('1970-01-01')
return (now() - last_time).total_seconds() > threshold_seconds
@task(queue=get_task_queuename)
def cluster_node_health_check(node):
"""
@@ -491,7 +460,6 @@ def execution_node_health_check(node):
data = worker_info(node)
prior_capacity = instance.capacity
instance.save_health_data(
version='ansible-runner-' + data.get('runner_version', '???'),
cpu=data.get('cpu_count', 0),
@@ -512,13 +480,37 @@ def execution_node_health_check(node):
return data
def inspect_execution_nodes(instance_list):
with advisory_lock('inspect_execution_nodes_lock', wait=False):
node_lookup = {inst.hostname: inst for inst in instance_list}
def inspect_established_receptor_connections(mesh_status):
'''
Flips link state from ADDING to ESTABLISHED.
If an InstanceLink's source and target match an entry
in KnownConnectionCosts, the link is flipped to ESTABLISHED.
'''
from awx.main.models import InstanceLink
all_links = InstanceLink.objects.filter(link_state=InstanceLink.States.ADDING)
if not all_links.exists():
return
active_receptor_conns = mesh_status['KnownConnectionCosts']
update_links = []
for link in all_links:
if link.link_state != InstanceLink.States.REMOVING:
if link.target.hostname in active_receptor_conns.get(link.source.hostname, {}):
if link.link_state is not InstanceLink.States.ESTABLISHED:
link.link_state = InstanceLink.States.ESTABLISHED
update_links.append(link)
InstanceLink.objects.bulk_update(update_links, ['link_state'])
def inspect_execution_and_hop_nodes(instance_list):
with advisory_lock('inspect_execution_and_hop_nodes_lock', wait=False):
node_lookup = {inst.hostname: inst for inst in instance_list}
ctl = get_receptor_ctl()
mesh_status = ctl.simple_command('status')
inspect_established_receptor_connections(mesh_status)
nowtime = now()
workers = mesh_status['Advertisements']
@@ -576,7 +568,7 @@ def cluster_node_heartbeat(dispatch_time=None, worker_tasks=None):
this_inst = inst
break
inspect_execution_nodes(instance_list)
inspect_execution_and_hop_nodes(instance_list)
for inst in list(instance_list):
if inst == this_inst:
@@ -765,66 +757,21 @@ def awx_periodic_scheduler():
new_unified_job.save(update_fields=['status', 'job_explanation'])
new_unified_job.websocket_emit_status("failed")
emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
state.save()
def schedule_manager_success_or_error(instance):
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
@task(queue=get_task_queuename)
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
schedule_manager_success_or_error(instance)
@task(queue=get_task_queuename)
def handle_work_error(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
subtasks = instance.get_jobs_fail_chain() # reverse of dependent_jobs mostly
logger.debug(f'Executing error task id {task_actual["id"]}, subtasks: {[subtask.id for subtask in subtasks]}')
deps_of_deps = {}
for subtask in subtasks:
if subtask.celery_task_id != instance.celery_task_id and not subtask.cancel_flag and not subtask.status in ('successful', 'failed'):
# If there are multiple in the dependency chain, A->B->C, and this was called for A, blame B for clarity
blame_job = deps_of_deps.get(subtask.id, instance)
subtask.status = 'failed'
subtask.failed = True
if not subtask.job_explanation:
subtask.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(blame_job)),
blame_job.name,
blame_job.id,
)
subtask.save()
subtask.websocket_emit_status("failed")
for sub_subtask in subtask.get_jobs_fail_chain():
deps_of_deps[sub_subtask.id] = subtask
# We only send 1 job complete message since all the job completion message
# handling does is trigger the scheduler. If we extend the functionality of
# what the job complete message handler does then we may want to send a
# completion event for each job here.
schedule_manager_success_or_error(instance)
def handle_failure_notifications(task_ids):
"""A task-ified version of the method that sends notifications."""
found_task_ids = set()
for instance in UnifiedJob.objects.filter(id__in=task_ids):
found_task_ids.add(instance.id)
try:
instance.send_notification_templates('failed')
except Exception:
logger.exception(f'Error preparing notifications for task {instance.id}')
deleted_tasks = set(task_ids) - found_task_ids
if deleted_tasks:
logger.warning(f'Could not send notifications for {deleted_tasks} because they were not found in the database')
@task(queue=get_task_queuename)

View File
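inspect_established_receptor_connections above keys off the receptor mesh status: a link counts as established once its target shows up under its source in KnownConnectionCosts. A sketch with a hypothetical two-node mesh:

mesh_status = {
    'KnownConnectionCosts': {
        'awx-control-1': {'hop1': 1.0},
        'hop1': {'awx-control-1': 1.0, 'exec1': 1.0},
    }
}
active = mesh_status['KnownConnectionCosts']

def is_established(source, target):
    return target in active.get(source, {})

assert is_established('awx-control-1', 'hop1')
assert not is_established('awx-control-1', 'exec1')  # only reachable through hop1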

@@ -84,5 +84,6 @@ def test_custom_hostname_regex(post, admin_user):
"hostname": value[0],
"node_type": "execution",
"node_state": "installed",
"peers": [],
}
post(url=url, user=admin_user, data=data, expect=value[1])

View File

@@ -0,0 +1,342 @@
import pytest
import yaml
import itertools
from unittest import mock
from django.db.utils import IntegrityError
from awx.api.versioning import reverse
from awx.main.models import Instance
from awx.api.views.instance_install_bundle import generate_group_vars_all_yml
def has_peer(group_vars, peer):
peers = group_vars.get('receptor_peers', [])
for p in peers:
if f"{p['host']}:{p['port']}" == peer:
return True
return False
@pytest.mark.django_db
class TestPeers:
@pytest.fixture(autouse=True)
def configure_settings(self, settings):
settings.IS_K8S = True
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_prevent_peering_to_self(self, node_type):
"""
cannot peer to self
"""
control_instance = Instance.objects.create(hostname='abc', node_type=node_type)
with pytest.raises(IntegrityError):
control_instance.peers.add(control_instance)
@pytest.mark.parametrize('node_type', ['control', 'hybrid', 'hop', 'execution'])
def test_creating_node(self, node_type, admin_user, post):
"""
can only add hop and execution nodes via API
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type},
user=admin_user,
expect=400 if node_type in ['control', 'hybrid'] else 201,
)
def test_changing_node_type(self, admin_user, patch):
"""
cannot change node type
"""
hop = Instance.objects.create(hostname='abc', node_type="hop")
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_type": "execution"},
user=admin_user,
expect=400,
)
@pytest.mark.parametrize('node_type', ['hop', 'execution'])
def test_listener_port_null(self, node_type, admin_user, post):
"""
listener_port can be None
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type, "listener_port": None},
user=admin_user,
expect=201,
)
@pytest.mark.parametrize('node_type, allowed', [('control', False), ('hybrid', False), ('hop', True), ('execution', True)])
def test_peers_from_control_nodes_allowed(self, node_type, allowed, post, admin_user):
"""
only hop and execution nodes can have peers_from_control_nodes set to True
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "peers_from_control_nodes": True, "node_type": node_type, "listener_port": 6789},
user=admin_user,
expect=201 if allowed else 400,
)
def test_listener_port_is_required(self, admin_user, post):
"""
if adding an instance to the peers list, that instance must have listener_port set
"""
Instance.objects.create(hostname='abc', node_type="hop", listener_port=None)
post(
url=reverse('api:instance_list'),
data={"hostname": "ex", "peers_from_control_nodes": False, "node_type": "execution", "listener_port": None, "peers": ["abc"]},
user=admin_user,
expect=400,
)
def test_peers_from_control_nodes_listener_port_enabled(self, admin_user, post):
"""
if peers_from_control_nodes is True, listener_port must be an integer.
Assert that all other combinations are allowed
"""
for index, item in enumerate(itertools.product(['hop', 'execution'], [True, False], [None, 6789])):
node_type, peers_from, listener_port = item
# only disallowed case is when peers_from is True and listener port is None
disallowed = peers_from and not listener_port
post(
url=reverse('api:instance_list'),
data={"hostname": f"abc{index}", "peers_from_control_nodes": peers_from, "node_type": node_type, "listener_port": listener_port},
user=admin_user,
expect=400 if disallowed else 201,
)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_disallow_modifying_peers_control_nodes(self, node_type, admin_user, patch):
"""
for control nodes, the peers field should not be
modified directly via patch.
"""
control = Instance.objects.create(hostname='abc', node_type=node_type)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', peers_from_control_nodes=False, listener_port=6789)
assert [hop1] == list(control.peers.all()) # only hop1 should be peered
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop2"]},
user=admin_user,
expect=400, # cannot add peers directly
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop1"]},
user=admin_user,
expect=200, # patching with current peers list should be okay
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": []},
user=admin_user,
expect=400, # cannot remove peers directly
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={},
user=admin_user,
expect=200, # patching without data should be fine too
)
# patch hop2
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers_from_control_nodes": True},
user=admin_user,
expect=200,  # flipping peers_from_control_nodes on hop2 is allowed
)
assert {hop1, hop2} == set(control.peers.all()) # hop1 and hop2 should now be peered from control node
def test_disallow_changing_hostname(self, admin_user, patch):
"""
cannot change hostname
"""
hop = Instance.objects.create(hostname='hop', node_type='hop')
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"hostname": "hop2"},
user=admin_user,
expect=400,
)
def test_disallow_changing_node_state(self, admin_user, patch):
"""
only allow setting to deprovisioning
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', node_state='installed')
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "deprovisioning"},
user=admin_user,
expect=200,
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "ready"},
user=admin_user,
expect=400,
)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_control_node_automatically_peers(self, node_type):
"""
a new control node should automatically peer to the hop node;
the peer should be removed if the hop node is deleted
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
control = Instance.objects.create(hostname='abc', node_type=node_type)
assert hop in control.peers.all()
hop.delete()
assert not control.peers.exists()
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_control_node_retains_other_peers(self, node_type):
"""
if a new node comes online, other peer relationships should
remain intact
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop1.peers.add(hop2)
# a control node is added
Instance.objects.create(hostname='control', node_type=node_type, listener_port=None)
assert hop1.peers.exists()
def test_group_vars(self, get, admin_user):
"""
control > hop1 > hop2 < execution
"""
control = Instance.objects.create(hostname='control', node_type='control', listener_port=None)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
execution = Instance.objects.create(hostname='execution', node_type='execution', listener_port=6789)
execution.peers.add(hop2)
hop1.peers.add(hop2)
control_vars = yaml.safe_load(generate_group_vars_all_yml(control))
hop1_vars = yaml.safe_load(generate_group_vars_all_yml(hop1))
hop2_vars = yaml.safe_load(generate_group_vars_all_yml(hop2))
execution_vars = yaml.safe_load(generate_group_vars_all_yml(execution))
# control group vars assertions
assert has_peer(control_vars, 'hop1:6789')
assert not has_peer(control_vars, 'hop2:6789')
assert not has_peer(control_vars, 'execution:6789')
assert not control_vars.get('receptor_listener', False)
# hop1 group vars assertions
assert has_peer(hop1_vars, 'hop2:6789')
assert not has_peer(hop1_vars, 'execution:6789')
assert hop1_vars.get('receptor_listener', False)
# hop2 group vars assertions
assert not has_peer(hop2_vars, 'hop1:6789')
assert not has_peer(hop2_vars, 'execution:6789')
assert hop2_vars.get('receptor_listener', False)
assert hop2_vars.get('receptor_peers', []) == []
# execution group vars assertions
assert has_peer(execution_vars, 'hop2:6789')
assert not has_peer(execution_vars, 'hop1:6789')
assert execution_vars.get('receptor_listener', False)
def test_write_receptor_config_called(self):
"""
Assert that write_receptor_config is called
when certain instances are created, or if
peers_from_control_nodes changes.
In general, write_receptor_config should only
be called when necessary, as it will reload
receptor backend connections which is not trivial.
"""
with mock.patch('awx.main.models.ha.schedule_write_receptor_config') as write_method:
# new control instance but nothing to peer to (no)
control = Instance.objects.create(hostname='control1', node_type='control')
write_method.assert_not_called()
# new hop node with peers_from_control_nodes False (no)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop1.delete()
write_method.assert_not_called()
# new hop node with peers_from_control_nodes True (yes)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
write_method.assert_called()
write_method.reset_mock()
# new control instance but with something to peer to (yes)
Instance.objects.create(hostname='control2', node_type='control')
write_method.assert_called()
write_method.reset_mock()
# new hop node with peers_from_control_nodes False and peered to another hop node (no)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop2.peers.add(hop1)
hop2.delete()
write_method.assert_not_called()
# changing peers_from_control_nodes to False (yes)
hop1.peers_from_control_nodes = False
hop1.save()
write_method.assert_called()
write_method.reset_mock()
# deleting hop node that has peers_from_control_nodes to False (no)
hop1.delete()
write_method.assert_not_called()
# deleting control nodes (no)
control.delete()
write_method.assert_not_called()
def test_write_receptor_config_data(self):
"""
Assert the correct peers are included in data that will
be written to receptor.conf
"""
from awx.main.tasks.receptor import RECEPTOR_CONFIG_STARTER
with mock.patch('awx.main.tasks.receptor.read_receptor_config', return_value=list(RECEPTOR_CONFIG_STARTER)):
from awx.main.tasks.receptor import generate_config_data
_, should_update = generate_config_data()
assert not should_update
# not peered, so config file should not be updated
for i in range(3):
Instance.objects.create(hostname=f"exNo-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=False)
_, should_update = generate_config_data()
assert not should_update
# peered, so config file should be updated
expected_peers = []
for i in range(3):
expected_peers.append(f"hop-{i}:6789")
Instance.objects.create(hostname=f"hop-{i}", node_type='hop', listener_port=6789, peers_from_control_nodes=True)
for i in range(3):
expected_peers.append(f"exYes-{i}:6789")
Instance.objects.create(hostname=f"exYes-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=True)
new_config, should_update = generate_config_data()
assert should_update
peers = []
for entry in new_config:
for key, value in entry.items():
if key == "tcp-peer":
peers.append(value['address'])
assert set(expected_peers) == set(peers)

View File
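The has_peer helper at the top of this test file encodes the shape group_vars take when rendered from the install bundle. A sketch with a hypothetical vars dict:

def has_peer(group_vars, peer):
    return any(f"{p['host']}:{p['port']}" == peer for p in group_vars.get('receptor_peers', []))

execution_vars = {'receptor_peers': [{'host': 'hop2', 'port': 6789}], 'receptor_listener': True}
assert has_peer(execution_vars, 'hop2:6789')
assert not has_peer(execution_vars, 'hop1:6789')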

@@ -0,0 +1,78 @@
import pytest
from awx.main.tasks.host_metrics import HostMetricTask
from awx.main.models.inventory import HostMetric
from awx.main.tests.factories.fixtures import mk_host_metric
from dateutil.relativedelta import relativedelta
from django.conf import settings
from django.utils import timezone
@pytest.mark.django_db
def test_no_host_metrics():
"""No-crash test"""
assert HostMetric.objects.count() == 0
HostMetricTask().cleanup(soft_threshold=0, hard_threshold=0)
HostMetricTask().cleanup(soft_threshold=24, hard_threshold=42)
assert HostMetric.objects.count() == 0
@pytest.mark.django_db
def test_delete_exception():
"""Crash test"""
with pytest.raises(ValueError):
HostMetricTask().soft_cleanup("")
with pytest.raises(TypeError):
HostMetricTask().hard_cleanup(set())
@pytest.mark.django_db
@pytest.mark.parametrize('threshold', [settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, 20])
def test_soft_delete(threshold):
"""Metrics with last_automation < threshold are updated to deleted=True"""
mk_host_metric('host_1', first_automation=ago(months=1), last_automation=ago(months=1), deleted=False)
mk_host_metric('host_2', first_automation=ago(months=1), last_automation=ago(months=1), deleted=True)
mk_host_metric('host_3', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=-1), deleted=False)
mk_host_metric('host_4', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=-1), deleted=True)
mk_host_metric('host_5', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=1), deleted=False)
mk_host_metric('host_6', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=1), deleted=True)
mk_host_metric('host_7', first_automation=ago(months=1), last_automation=ago(months=42), deleted=False)
mk_host_metric('host_8', first_automation=ago(months=1), last_automation=ago(months=42), deleted=True)
assert HostMetric.objects.count() == 8
assert HostMetric.active_objects.count() == 4
for i in range(2):
HostMetricTask().cleanup(soft_threshold=threshold)
assert HostMetric.objects.count() == 8
hostnames = set(HostMetric.objects.filter(deleted=False).order_by('hostname').values_list('hostname', flat=True))
assert hostnames == {'host_1', 'host_3'}
@pytest.mark.django_db
@pytest.mark.parametrize('threshold', [settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD, 20])
def test_hard_delete(threshold):
"""Metrics with last_deleted < threshold and deleted=True are deleted from the db"""
mk_host_metric('host_1', first_automation=ago(months=1), last_deleted=ago(months=1), deleted=False)
mk_host_metric('host_2', first_automation=ago(months=1), last_deleted=ago(months=1), deleted=True)
mk_host_metric('host_3', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=-1), deleted=False)
mk_host_metric('host_4', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=-1), deleted=True)
mk_host_metric('host_5', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=1), deleted=False)
mk_host_metric('host_6', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=1), deleted=True)
mk_host_metric('host_7', first_automation=ago(months=1), last_deleted=ago(months=42), deleted=False)
mk_host_metric('host_8', first_automation=ago(months=1), last_deleted=ago(months=42), deleted=True)
assert HostMetric.objects.count() == 8
assert HostMetric.active_objects.count() == 4
for i in range(2):
HostMetricTask().cleanup(hard_threshold=threshold)
assert HostMetric.objects.count() == 6
hostnames = set(HostMetric.objects.order_by('hostname').values_list('hostname', flat=True))
assert hostnames == {'host_1', 'host_2', 'host_3', 'host_4', 'host_5', 'host_7'}
def ago(months=0, hours=0):
return timezone.now() - relativedelta(months=months, hours=hours)

View File

@@ -76,3 +76,24 @@ def test_hashivault_handle_auth_kubernetes():
def test_hashivault_handle_auth_not_enough_args():
with pytest.raises(Exception):
hashivault.handle_auth()
class TestDelineaImports:
"""
These modules have a try-except for ImportError which allows using the older library,
but we do not want the awx_devel image to have the older library,
so these tests are designed to fail if the imports wind up using the fallback.
"""
def test_dsv_import(self):
from awx.main.credential_plugins.dsv import SecretsVault # noqa
# assert this module as opposed to older thycotic.secrets.vault
assert SecretsVault.__module__ == 'delinea.secrets.vault'
def test_tss_import(self):
from awx.main.credential_plugins.tss import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret # noqa
for cls in (DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret):
# assert this module as opposed to older thycotic.secrets.server
assert cls.__module__ == 'delinea.secrets.server'

View File
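The guarded import that docstring refers to can be sketched generically: prefer the new delinea package and fall back to the legacy thycotic one. This loader is illustrative only, not the plugin's actual code:

import importlib

def load_vault_module():
    # Try the preferred package first, then the legacy fallback.
    for name in ('delinea.secrets.vault', 'thycotic.secrets.vault'):
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError('neither the delinea nor the thycotic SDK is installed')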

@@ -37,9 +37,9 @@ def test_orphan_unified_job_creation(instance, inventory):
@pytest.mark.django_db
@mock.patch('awx.main.tasks.system.inspect_execution_nodes', lambda *args, **kwargs: None)
@mock.patch('awx.main.models.ha.get_cpu_effective_capacity', lambda cpu: 8)
@mock.patch('awx.main.models.ha.get_mem_effective_capacity', lambda mem: 62)
@mock.patch('awx.main.tasks.system.inspect_execution_and_hop_nodes', lambda *args, **kwargs: None)
@mock.patch('awx.main.models.ha.get_cpu_effective_capacity', lambda cpu, is_control_node: 8)
@mock.patch('awx.main.models.ha.get_mem_effective_capacity', lambda mem, is_control_node: 62)
def test_job_capacity_and_with_inactive_node():
i = Instance.objects.create(hostname='test-1')
i.save_health_data('18.0.1', 2, 8000)

View File

@@ -5,8 +5,8 @@ import tempfile
import shutil
from awx.main.tasks.jobs import RunJob
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files, handle_work_error
from awx.main.models import Instance, Job, InventoryUpdate, ProjectUpdate
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files
from awx.main.models import Instance, Job
@pytest.fixture
@@ -73,17 +73,3 @@ def test_does_not_run_reaped_job(mocker, mock_me):
job.refresh_from_db()
assert job.status == 'failed'
mock_run.assert_not_called()
@pytest.mark.django_db
def test_handle_work_error_nested(project, inventory_source):
pu = ProjectUpdate.objects.create(status='failed', project=project, celery_task_id='1234')
iu = InventoryUpdate.objects.create(status='pending', inventory_source=inventory_source, source='scm')
job = Job.objects.create(status='pending')
iu.dependent_jobs.add(pu)
job.dependent_jobs.add(pu, iu)
handle_work_error({'type': 'project_update', 'id': pu.id})
iu.refresh_from_db()
job.refresh_from_db()
assert iu.job_explanation == f'Previous Task Failed: {{"job_type": "project_update", "job_name": "", "job_id": "{pu.id}"}}'
assert job.job_explanation == f'Previous Task Failed: {{"job_type": "inventory_update", "job_name": "", "job_id": "{iu.id}"}}'

View File

@@ -47,7 +47,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
]
),
),
@@ -61,7 +61,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
]
),
),
@@ -75,7 +75,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
]
),
),
@@ -89,7 +89,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -103,7 +103,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -117,7 +117,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -131,7 +131,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -145,7 +145,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -159,7 +159,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
]
),
),
@@ -173,7 +173,7 @@ data_loggly = {
'\n'.join(
[
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
]
),
),

View File

@@ -36,7 +36,9 @@ def test_SYSTEM_TASK_ABS_MEM_conversion(value, converted_value, mem_capacity):
mock_settings.IS_K8S = True
assert convert_mem_str_to_bytes(value) == converted_value
assert get_corrected_memory(-1) == converted_value
assert get_mem_effective_capacity(-1) == mem_capacity
assert get_mem_effective_capacity(1, is_control_node=True) == mem_capacity
# SYSTEM_TASK_ABS_MEM should not affect memory and capacity for execution nodes
assert get_mem_effective_capacity(2147483648, is_control_node=False) == 20
@pytest.mark.parametrize(
@@ -58,4 +60,6 @@ def test_SYSTEM_TASK_ABS_CPU_conversion(value, converted_value, cpu_capacity):
mock_settings.SYSTEM_TASK_FORKS_CPU = 4
assert convert_cpu_str_to_decimal_cpu(value) == converted_value
assert get_corrected_cpu(-1) == converted_value
assert get_cpu_effective_capacity(-1) == cpu_capacity
assert get_cpu_effective_capacity(-1, is_control_node=True) == cpu_capacity
# SYSTEM_TASK_ABS_CPU should not affect cpu count and capacity for execution nodes
assert get_cpu_effective_capacity(2.0, is_control_node=False) == 8

View File

@@ -1,8 +1,43 @@
import signal
import functools
from awx.main.tasks.signals import signal_state, signal_callback, with_signal_handling
def pytest_sigint():
pytest_sigint.called_count += 1
def pytest_sigterm():
pytest_sigterm.called_count += 1
def tmp_signals_for_test(func):
"""
When our internal signal handlers run, they call the original signal
handlers once their own work is finished.
This would normally crash the test runner, because the original handlers
shut down the process.
So this is a decorator that safely replaces the existing signal handlers
with no-op handlers so that tests do not crash.
"""
@functools.wraps(func)
def wrapper():
original_sigterm = signal.getsignal(signal.SIGTERM)
original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, pytest_sigterm)
signal.signal(signal.SIGINT, pytest_sigint)
pytest_sigterm.called_count = 0
pytest_sigint.called_count = 0
func()
signal.signal(signal.SIGTERM, original_sigterm)
signal.signal(signal.SIGINT, original_sigint)
return wrapper
@tmp_signals_for_test
def test_outer_inner_signal_handling():
"""
Even if the flag is set in the outer context, its value should persist in the inner context
@@ -15,17 +50,22 @@ def test_outer_inner_signal_handling():
@with_signal_handling
def f1():
assert signal_callback() is False
signal_state.set_flag()
signal_state.set_sigterm_flag()
assert signal_callback()
f2()
original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1()
assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 1
assert pytest_sigint.called_count == 0
@tmp_signals_for_test
def test_inner_outer_signal_handling():
"""
Even if the flag is set in the inner context, its value should persist in the outer context
@@ -34,7 +74,7 @@ def test_inner_outer_signal_handling():
@with_signal_handling
def f2():
assert signal_callback() is False
signal_state.set_flag()
signal_state.set_sigint_flag()
assert signal_callback()
@with_signal_handling
@@ -45,6 +85,10 @@ def test_inner_outer_signal_handling():
original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1()
assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 1

View File

@@ -143,13 +143,6 @@ def test_send_notifications_job_id(mocker):
assert UnifiedJob.objects.get.called_with(id=1)
def test_work_success_callback_missing_job():
task_data = {'type': 'project_update', 'id': 9999}
with mock.patch('django.db.models.query.QuerySet.get') as get_mock:
get_mock.side_effect = ProjectUpdate.DoesNotExist()
assert system.handle_work_success(task_data) is None
@mock.patch('awx.main.models.UnifiedJob.objects.get')
@mock.patch('awx.main.models.Notification.objects.filter')
def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker):

View File

@@ -23,7 +23,7 @@ from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
from django.utils.dateparse import parse_datetime
from django.utils.translation import gettext_lazy as _
from django.utils.functional import cached_property
from django.db import connection, transaction, ProgrammingError
from django.db import connection, transaction, ProgrammingError, IntegrityError
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor, ManyToManyDescriptor
from django.db.models.query import QuerySet
@@ -768,14 +768,13 @@ def get_corrected_cpu(cpu_count): # formerly get_cpu_capacity
return cpu_count # no correction
def get_cpu_effective_capacity(cpu_count):
def get_cpu_effective_capacity(cpu_count, is_control_node=False):
from django.conf import settings
cpu_count = get_corrected_cpu(cpu_count)
settings_forkcpu = getattr(settings, 'SYSTEM_TASK_FORKS_CPU', None)
env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
if is_control_node:
cpu_count = get_corrected_cpu(cpu_count)
if env_forkcpu:
forkcpu = int(env_forkcpu)
elif settings_forkcpu:
@@ -834,6 +833,7 @@ def get_corrected_memory(memory):
# Runner returns memory in bytes
# so we convert memory from settings to bytes as well.
if env_absmem is not None:
return convert_mem_str_to_bytes(env_absmem)
elif settings_absmem is not None:
@@ -842,14 +842,13 @@ def get_corrected_memory(memory):
return memory
def get_mem_effective_capacity(mem_bytes):
def get_mem_effective_capacity(mem_bytes, is_control_node=False):
from django.conf import settings
mem_bytes = get_corrected_memory(mem_bytes)
settings_mem_mb_per_fork = getattr(settings, 'SYSTEM_TASK_FORKS_MEM', None)
env_mem_mb_per_fork = os.getenv('SYSTEM_TASK_FORKS_MEM', None)
if is_control_node:
mem_bytes = get_corrected_memory(mem_bytes)
if env_mem_mb_per_fork:
mem_mb_per_fork = int(env_mem_mb_per_fork)
elif settings_mem_mb_per_fork:
@@ -1165,13 +1164,24 @@ def create_partition(tblname, start=None):
try:
with transaction.atomic():
with connection.cursor() as cursor:
cursor.execute(f"SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = '{tblname}_{partition_label}');")
row = cursor.fetchone()
if row is not None:
for val in row: # should only have 1
if val is True:
logger.debug(f'Event partition table {tblname}_{partition_label} already exists')
return
cursor.execute(
f'CREATE TABLE IF NOT EXISTS {tblname}_{partition_label} '
f'PARTITION OF {tblname} '
f'FOR VALUES FROM (\'{start_timestamp}\') to (\'{end_timestamp}\');'
f'CREATE TABLE {tblname}_{partition_label} (LIKE {tblname} INCLUDING DEFAULTS INCLUDING CONSTRAINTS); '
f'ALTER TABLE {tblname} ATTACH PARTITION {tblname}_{partition_label} '
f'FOR VALUES FROM (\'{start_timestamp}\') TO (\'{end_timestamp}\');'
)
except ProgrammingError as e:
logger.debug(f'Caught known error due to existing partition: {e}')
except (ProgrammingError, IntegrityError) as e:
if 'already exists' in str(e):
logger.info(f'Caught known error due to partition creation race: {e}')
else:
raise
def cleanup_new_process(func):

View File
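The create_partition change above trades CREATE TABLE IF NOT EXISTS for an explicit ATTACH PARTITION and treats "already exists" as a benign race between dispatchers. A database-free sketch of that error-handling shape, where FakeDBError stands in for Django's ProgrammingError/IntegrityError and the table name is hypothetical:

class FakeDBError(Exception):
    pass

def create_partition_once(execute, sql):
    try:
        execute(sql)
    except FakeDBError as e:
        # Two dispatchers can race to attach the same partition; tolerate it.
        if 'already exists' in str(e):
            print(f'Caught known error due to partition creation race: {e}')
        else:
            raise

def racing_execute(sql):
    raise FakeDBError('relation "main_jobevent_20231101" already exists')

create_partition_once(racing_execute, 'ALTER TABLE main_jobevent ATTACH PARTITION ...')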

@@ -17,11 +17,26 @@ def construct_rsyslog_conf_template(settings=settings):
port = getattr(settings, 'LOG_AGGREGATOR_PORT', '')
protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', '')
timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
max_disk_space_main_queue = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1)
action_queue_size = getattr(settings, 'LOG_AGGREGATOR_ACTION_QUEUE_SIZE', 131072)
max_disk_space_action_queue = getattr(settings, 'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB', 1)
spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')
queue_options = [
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxDiskSpace="{max_disk_space_action_queue}g"', # overall disk space for all queue files
'queue.maxFileSize="100m"', # individual file size
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
'queue.syncqueuefiles="on"', # (f)sync when checkpoint occurs
'queue.checkpointInterval="1000"', # Update disk queue every 1000 messages
f'queue.size="{action_queue_size}"', # max number of messages in queue
f'queue.highwaterMark="{int(action_queue_size * 0.75)}"', # 75% of queue.size
f'queue.discardMark="{int(action_queue_size * 0.9)}"', # 90% of queue.size
'queue.discardSeverity="5"', # Only discard notice, info, debug if we must discard anything
]
if not os.access(spool_directory, os.W_OK):
spool_directory = '/var/lib/awx'
@@ -33,7 +48,6 @@ def construct_rsyslog_conf_template(settings=settings):
'$WorkDirectory /var/lib/awx/rsyslog',
f'$MaxMessageSize {max_bytes}',
'$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space_main_queue}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
'module(load="imuxsock" SysSock.Use="off")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
@@ -79,12 +93,7 @@ def construct_rsyslog_conf_template(settings=settings):
'action.resumeRetryCount="-1"',
'template="awx"',
f'action.resumeInterval="{timeout}"',
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxdiskspace="{max_disk_space_action_queue}g"',
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
]
] + queue_options
if error_log_file:
params.append(f'errorfile="{error_log_file}"')
if parsed.path:
@@ -112,9 +121,18 @@ def construct_rsyslog_conf_template(settings=settings):
params = ' '.join(params)
parts.extend(['module(load="omhttp")', f'action({params})'])
elif protocol and host and port:
parts.append(
f'action(type="omfwd" target="{host}" port="{port}" protocol="{protocol}" action.resumeRetryCount="-1" action.resumeInterval="{timeout}" template="awx")' # noqa
)
params = [
'type="omfwd"',
f'target="{host}"',
f'port="{port}"',
f'protocol="{protocol}"',
'action.resumeRetryCount="-1"',
f'action.resumeInterval="{timeout}"',
'template="awx"',
] + queue_options
params = ' '.join(params)
parts.append(f'action({params})')
else:
parts.append('action(type="omfile" file="/dev/null")') # rsyslog needs *at least* one valid action to start
tmpl = '\n'.join(parts)
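With the options factored into one list, both the omhttp and omfwd actions inherit identical spooling behavior, and the watermarks stay a fixed fraction of a single setting. A quick sketch of the arithmetic with the shipped default LOG_AGGREGATOR_ACTION_QUEUE_SIZE:

```python
# Watermark math behind queue_options, using the shipped default.
action_queue_size = 131072
high_watermark = int(action_queue_size * 0.75)  # 98304 (75% of queue.size)
discard_mark = int(action_queue_size * 0.9)     # 117964 (90% of queue.size)

print(f'queue.size="{action_queue_size}" '
      f'queue.highwaterMark="{high_watermark}" '
      f'queue.discardMark="{discard_mark}"')
```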


@@ -3,6 +3,8 @@ import logging
import asyncio
from typing import Dict
import ipaddress
import aiohttp
from aiohttp import client_exceptions
import aioredis
@@ -71,7 +73,16 @@ class WebsocketRelayConnection:
if not self.channel_layer:
self.channel_layer = get_channel_layer()
uri = f"{self.protocol}://{self.remote_host}:{self.remote_port}/websocket/relay/"
# Figure out whether remote_host is an IP address; IPv6 addresses must be wrapped in brackets for the URI
uri_hostname = self.remote_host
try:
# ip_address() raises ValueError if self.remote_host is a hostname like example.com rather than an IPv4/IPv6 address
if isinstance(ipaddress.ip_address(uri_hostname), ipaddress.IPv6Address):
uri_hostname = f"[{uri_hostname}]"
except ValueError:
pass
uri = f"{self.protocol}://{uri_hostname}:{self.remote_port}/websocket/relay/"
timeout = aiohttp.ClientTimeout(total=10)
secret_val = WebsocketSecretAuthHelper.construct_secret()
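The bracketing rule is easy to verify in isolation. A standalone sketch of the same check:

```python
import ipaddress

def uri_host(host: str) -> str:
    """Wrap bare IPv6 addresses in brackets; pass hostnames and IPv4 through."""
    try:
        # ip_address() raises ValueError for hostnames such as example.com
        if isinstance(ipaddress.ip_address(host), ipaddress.IPv6Address):
            return f'[{host}]'
    except ValueError:
        pass
    return host

assert uri_host('10.0.0.5') == '10.0.0.5'
assert uri_host('example.com') == 'example.com'
assert uri_host('::1') == '[::1]'
```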


@@ -470,13 +470,13 @@ CELERYBEAT_SCHEDULE = {
'receptor_reaper': {'task': 'awx.main.tasks.system.awx_receptor_workunit_reaper', 'schedule': timedelta(seconds=60)},
'send_subsystem_metrics': {'task': 'awx.main.analytics.analytics_tasks.send_subsystem_metrics', 'schedule': timedelta(seconds=20)},
'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.system.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.host_metrics.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)},
}
# Django Caching Configuration
DJANGO_REDIS_IGNORE_EXCEPTIONS = True
CACHES = {'default': {'BACKEND': 'awx.main.cache.AWXRedisCache', 'LOCATION': 'unix:/var/run/redis/redis.sock?db=1'}}
CACHES = {'default': {'BACKEND': 'awx.main.cache.AWXRedisCache', 'LOCATION': 'unix:///var/run/redis/redis.sock?db=1'}}
# Social Auth configuration.
SOCIAL_AUTH_STRATEGY = 'social_django.strategy.DjangoStrategy'
@@ -796,7 +796,7 @@ LOG_AGGREGATOR_ENABLED = False
LOG_AGGREGATOR_TCP_TIMEOUT = 5
LOG_AGGREGATOR_VERIFY_CERT = True
LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1 # Main queue
LOG_AGGREGATOR_ACTION_QUEUE_SIZE = 131072
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB = 1 # Action queue
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False
@@ -1049,7 +1049,7 @@ UI_NEXT = True
# - 'unique_managed_hosts': Compliant = automated - deleted hosts (using /api/v2/host_metrics/)
SUBSCRIPTION_USAGE_MODEL = ''
# Host metrics cleanup - last time of the cleanup run (soft-deleting records)
# Host metrics cleanup - last time of the task/command run
CLEANUP_HOST_METRICS_LAST_TS = None
# Host metrics cleanup - minimal interval between two cleanups in days
CLEANUP_HOST_METRICS_INTERVAL = 30 # days


@@ -87,7 +87,7 @@ def _update_user_orgs(backend, desired_org_state, orgs_to_create, user=None):
is_member_expression = org_opts.get(user_type, None)
remove_members = bool(org_opts.get('remove_{}'.format(user_type), remove))
has_role = _update_m2m_from_expression(user, is_member_expression, remove_members)
desired_org_state[organization_name][role_name] = has_role
desired_org_state[organization_name][role_name] = desired_org_state[organization_name].get(role_name, False) or has_role
def _update_user_teams(backend, desired_team_state, teams_to_create, user=None):
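Because the new assignment ORs the incoming value with whatever is already recorded, a role granted by one source (say, the SAML attribute) can no longer be silently reverted when a later source (say, ORGANIZATION_MAP) evaluates to False; sources can only add roles. A minimal sketch of the merge semantics:

```python
def merge_role(state, org, role, has_role):
    # Never downgrade: keep True once any source has granted the role.
    state[org][role] = state[org].get(role, False) or has_role

desired_org_state = {'o1_alias': {'admin_role': True}}
merge_role(desired_org_state, 'o1_alias', 'admin_role', False)   # no revert
merge_role(desired_org_state, 'o1_alias', 'member_role', True)   # adds role
assert desired_org_state == {'o1_alias': {'admin_role': True, 'member_role': True}}
```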


@@ -637,3 +637,75 @@ class TestSAMLUserFlags:
}
assert expected == _check_flag(user, 'superuser', attributes, user_flags_settings)
@pytest.mark.django_db
def test__update_user_orgs_org_map_and_saml_attr():
"""
This combines two other tests: the org membership is defined by both
the ORGANIZATION_MAP and the SOCIAL_AUTH_SAML_ORGANIZATION_ATTR at the same time
"""
# This data will make the user a member
class BackendClass:
s = {
'ORGANIZATION_MAP': {
'Default1': {
'remove': True,
'remove_admins': True,
'users': 'foobar',
'remove_users': True,
'organization_alias': 'o1_alias',
}
}
}
def setting(self, key):
return self.s[key]
backend = BackendClass()
setting = {
'saml_attr': 'memberOf',
'saml_admin_attr': 'admins',
'saml_auditor_attr': 'auditors',
'remove': True,
'remove_admins': True,
}
# This data from the server will make the user an admin of the organization
kwargs = {
'username': 'foobar',
'uid': 'idp:cmeyers@redhat.com',
'request': {u'SAMLResponse': [], u'RelayState': [u'idp']},
'is_new': False,
'response': {
'session_index': '_0728f0e0-b766-0135-75fa-02842b07c044',
'idp_name': u'idp',
'attributes': {
'admins': ['Default1'],
},
},
'social': None,
'strategy': None,
'new_association': False,
}
this_user = User.objects.create(username='foobar')
with override_settings(SOCIAL_AUTH_SAML_ORGANIZATION_ATTR=setting):
desired_org_state = {}
orgs_to_create = []
# this should add user as an admin of the org
_update_user_orgs_by_saml_attr(backend, desired_org_state, orgs_to_create, **kwargs)
assert desired_org_state['o1_alias']['admin_role'] is True
assert set(orgs_to_create) == set(['o1_alias'])
# this should add user as a member of the org without reverting the admin status
_update_user_orgs(backend, desired_org_state, orgs_to_create, this_user)
assert desired_org_state['o1_alias']['member_role'] is True
assert desired_org_state['o1_alias']['admin_role'] is True
assert set(orgs_to_create) == set(['o1_alias'])

awx/ui/package-lock.json (generated)

File diff suppressed because it is too large.


@@ -33,12 +33,12 @@
"styled-components": "5.3.6"
},
"devDependencies": {
"@babel/core": "^7.16.10",
"@babel/eslint-parser": "^7.16.5",
"@babel/eslint-plugin": "^7.16.5",
"@babel/plugin-syntax-jsx": "7.16.7",
"@babel/polyfill": "^7.8.7",
"@babel/preset-react": "7.16.7",
"@babel/core": "^7.22.9",
"@babel/eslint-parser": "^7.22.9",
"@babel/eslint-plugin": "^7.22.10",
"@babel/plugin-syntax-jsx": "^7.22.5",
"@babel/polyfill": "^7.12.1",
"@babel/preset-react": "^7.22.5",
"@cypress/instrument-cra": "^1.4.0",
"@lingui/cli": "^3.7.1",
"@lingui/loader": "3.15.0",


@@ -5,7 +5,11 @@
<title data-cy="migration-title">{{ title }}</title>
<meta
http-equiv="Content-Security-Policy"
content="default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'nonce-{{ csp_nonce }}' *.pendo.io; img-src 'self' *.pendo.io data:;"
content="default-src 'self';
connect-src 'self' ws: wss:;
style-src 'self' 'unsafe-inline';
script-src 'self' 'nonce-{{ csp_nonce }}' *.pendo.io;
img-src 'self' *.pendo.io data:;"
/>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge" />


@@ -33,6 +33,7 @@ import Roles from './models/Roles';
import Root from './models/Root';
import Schedules from './models/Schedules';
import Settings from './models/Settings';
import SubscriptionUsage from './models/SubscriptionUsage';
import SystemJobs from './models/SystemJobs';
import SystemJobTemplates from './models/SystemJobTemplates';
import Teams from './models/Teams';
@@ -82,6 +83,7 @@ const RolesAPI = new Roles();
const RootAPI = new Root();
const SchedulesAPI = new Schedules();
const SettingsAPI = new Settings();
const SubscriptionUsageAPI = new SubscriptionUsage();
const SystemJobsAPI = new SystemJobs();
const SystemJobTemplatesAPI = new SystemJobTemplates();
const TeamsAPI = new Teams();
@@ -132,6 +134,7 @@ export {
RootAPI,
SchedulesAPI,
SettingsAPI,
SubscriptionUsageAPI,
SystemJobsAPI,
SystemJobTemplatesAPI,
TeamsAPI,


@@ -0,0 +1,16 @@
import Base from '../Base';
class SubscriptionUsage extends Base {
constructor(http) {
super(http);
this.baseUrl = 'api/v2/host_metric_summary_monthly/';
}
readSubscriptionUsageChart(dateRange) {
return this.http.get(
`${this.baseUrl}?date__gte=${dateRange}&order_by=date&page_size=100`
);
}
}
export default SubscriptionUsage;
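The model is a thin wrapper around one filtered list endpoint. Expressed as a raw HTTP call (base URL and credentials are placeholders, not part of the change):

```python
import requests

resp = requests.get(
    'https://awx.example.com/api/v2/host_metric_summary_monthly/',
    params={'date__gte': '2023-01-01', 'order_by': 'date', 'page_size': 100},
    auth=('admin', 'password'),  # illustrative credentials only
)
resp.raise_for_status()
print(resp.json()['results'][:2])
```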


@@ -11,6 +11,7 @@ import {
WorkflowJobsAPI,
WorkflowJobTemplatesAPI,
} from 'api';
import useToast, { AlertVariant } from 'hooks/useToast';
import AlertModal from '../AlertModal';
import ErrorDetail from '../ErrorDetail';
import LaunchPrompt from '../LaunchPrompt';
@@ -45,8 +46,22 @@ function LaunchButton({ resource, children }) {
const [isLaunching, setIsLaunching] = useState(false);
const [resourceCredentials, setResourceCredentials] = useState([]);
const [error, setError] = useState(null);
const { addToast, Toast, toastProps } = useToast();
const showToast = () => {
addToast({
id: resource.id,
title: t`A job has already been launched`,
variant: AlertVariant.info,
hasTimeout: true,
});
};
const handleLaunch = async () => {
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
const readLaunch =
resource.type === 'workflow_job_template'
@@ -104,6 +119,11 @@ function LaunchButton({ resource, children }) {
};
const launchWithParams = async (params) => {
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
try {
let jobPromise;
@@ -141,6 +161,10 @@ function LaunchButton({ resource, children }) {
let readRelaunch;
let relaunch;
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
if (resource.type === 'inventory_update') {
// We'll need to handle the scenario where the src no longer exists
@@ -197,6 +221,7 @@ function LaunchButton({ resource, children }) {
handleRelaunch,
isLaunching,
})}
<Toast {...toastProps} />
{error && (
<AlertModal
isOpen={error}
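All three entry points (launch, launch-with-params, relaunch) now share the same guard: if a launch is already in flight, surface the toast and return instead of firing a duplicate request. A language-neutral sketch of the pattern, in Python:

```python
class Launcher:
    """Illustrative double-submit guard, not AWX's actual implementation."""

    def __init__(self, notify):
        self.is_launching = False
        self.notify = notify

    def launch(self):
        if self.is_launching:
            self.notify('A job has already been launched')
            return
        self.is_launching = True
        try:
            ...  # perform the launch request
        finally:
            self.is_launching = False

launcher = Launcher(print)
launcher.launch()
```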


@@ -223,6 +223,10 @@ function Lookup(props) {
const Item = shape({
id: number.isRequired,
});
const InstanceItem = shape({
id: number.isRequired,
hostname: string.isRequired,
});
Lookup.propTypes = {
id: string,
@@ -230,7 +234,13 @@ Lookup.propTypes = {
modalDescription: oneOfType([string, node]),
onChange: func.isRequired,
onUpdate: func,
value: oneOfType([Item, arrayOf(Item), object]),
value: oneOfType([
Item,
arrayOf(Item),
object,
InstanceItem,
arrayOf(InstanceItem),
]),
multiple: bool,
required: bool,
onBlur: func,


@@ -0,0 +1,212 @@
import React, { useCallback, useEffect } from 'react';
import { arrayOf, string, func, bool, shape, oneOfType } from 'prop-types';
import { withRouter } from 'react-router-dom';
import { t } from '@lingui/macro';
import { FormGroup, Chip } from '@patternfly/react-core';
import { InstancesAPI } from 'api';
import { Instance } from 'types';
import { getSearchableKeys } from 'components/PaginatedTable';
import { getQSConfig, parseQueryString, mergeParams } from 'util/qs';
import useRequest from 'hooks/useRequest';
import Popover from '../Popover';
import OptionsList from '../OptionsList';
import Lookup from './Lookup';
import LookupErrorMessage from './shared/LookupErrorMessage';
import FieldWithPrompt from '../FieldWithPrompt';
const QS_CONFIG = getQSConfig('instances', {
page: 1,
page_size: 5,
order_by: 'hostname',
});
function PeersLookup({
id,
value,
onChange,
tooltip,
className,
required,
history,
fieldName,
multiple,
validate,
columns,
isPromptableField,
promptId,
promptName,
formLabel,
typePeers,
instance_details,
}) {
const {
result: { instances, count, relatedSearchableKeys, searchableKeys },
request: fetchInstances,
error,
isLoading,
} = useRequest(
useCallback(async () => {
const params = parseQueryString(QS_CONFIG, history.location.search);
const peersFilter = {};
if (typePeers) {
peersFilter.not__node_type = ['control', 'hybrid'];
if (instance_details) {
if (instance_details.id) {
peersFilter.not__id = instance_details.id;
peersFilter.not__hostname = instance_details.peers;
}
}
}
const [{ data }, actionsResponse] = await Promise.all([
InstancesAPI.read(
mergeParams(params, {
...peersFilter,
})
),
InstancesAPI.readOptions(),
]);
return {
instances: data.results,
count: data.count,
relatedSearchableKeys: (
actionsResponse?.data?.related_search_fields || []
).map((val) => val.slice(0, -8)),
searchableKeys: getSearchableKeys(actionsResponse.data.actions?.GET),
};
}, [history.location, typePeers, instance_details]),
{
instances: [],
count: 0,
relatedSearchableKeys: [],
searchableKeys: [],
}
);
useEffect(() => {
fetchInstances();
}, [fetchInstances]);
const renderLookup = () => (
<>
<Lookup
id={fieldName}
header={formLabel}
value={value}
onChange={onChange}
onUpdate={fetchInstances}
fieldName={fieldName}
validate={validate}
qsConfig={QS_CONFIG}
multiple={multiple}
required={required}
isLoading={isLoading}
label={formLabel}
renderItemChip={({ item, removeItem, canDelete }) => (
<Chip
key={item.id}
onClick={() => removeItem(item)}
isReadOnly={!canDelete}
>
{item.hostname}
</Chip>
)}
renderOptionsList={({ state, dispatch, canDelete }) => (
<OptionsList
value={state.selectedItems}
options={instances}
optionCount={count}
columns={columns}
header={formLabel}
displayKey="hostname"
searchColumns={[
{
name: t`Hostname`,
key: 'hostname__icontains',
isDefault: true,
},
]}
sortColumns={[
{
name: t`Hostname`,
key: 'hostname',
},
]}
searchableKeys={searchableKeys}
relatedSearchableKeys={relatedSearchableKeys}
multiple={multiple}
label={formLabel}
name={fieldName}
qsConfig={QS_CONFIG}
readOnly={!canDelete}
selectItem={(item) => dispatch({ type: 'SELECT_ITEM', item })}
deselectItem={(item) => dispatch({ type: 'DESELECT_ITEM', item })}
/>
)}
/>
<LookupErrorMessage error={error} />
</>
);
return isPromptableField ? (
<FieldWithPrompt
fieldId={id}
label={formLabel}
promptId={promptId}
promptName={promptName}
tooltip={tooltip}
>
{renderLookup()}
</FieldWithPrompt>
) : (
<FormGroup
className={className}
label={formLabel}
labelIcon={tooltip && <Popover content={tooltip} />}
fieldId={id}
>
{renderLookup()}
</FormGroup>
);
}
PeersLookup.propTypes = {
id: string,
value: arrayOf(Instance).isRequired,
tooltip: string,
onChange: func.isRequired,
className: string,
required: bool,
validate: func,
multiple: bool,
fieldName: string,
columns: arrayOf(Object),
formLabel: string,
instance_details: oneOfType([Instance, shape({})]),
typePeers: bool,
};
PeersLookup.defaultProps = {
id: 'instances',
tooltip: '',
className: '',
required: false,
validate: () => undefined,
fieldName: 'instances',
columns: [
{
key: 'hostname',
name: t`Hostname`,
},
{
key: 'node_type',
name: t`Node Type`,
},
],
formLabel: t`Instances`,
instance_details: {},
multiple: true,
typePeers: false,
};
export default withRouter(PeersLookup);


@@ -0,0 +1,137 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { Formik } from 'formik';
import { InstancesAPI } from 'api';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import PeersLookup from './PeersLookup';
jest.mock('../../api');
const mockedInstances = {
count: 1,
results: [
{
id: 2,
name: 'Foo',
image: 'quay.io/ansible/awx-ee',
pull: 'missing',
},
],
};
const instances = [
{
id: 1,
hostname: 'awx_1',
type: 'instance',
url: '/api/v2/instances/1/',
related: {
named_url: '/api/v2/instances/awx_1/',
jobs: '/api/v2/instances/1/jobs/',
instance_groups: '/api/v2/instances/1/instance_groups/',
peers: '/api/v2/instances/1/peers/',
},
summary_fields: {
user_capabilities: {
edit: false,
},
links: [],
},
uuid: '00000000-0000-0000-0000-000000000000',
created: '2023-04-26T22:06:46.766198Z',
modified: '2023-04-26T22:06:46.766217Z',
last_seen: '2023-04-26T23:12:02.857732Z',
health_check_started: null,
health_check_pending: false,
last_health_check: '2023-04-26T23:01:13.941693Z',
errors: 'Instance received normal shutdown signal',
capacity_adjustment: '1.00',
version: '0.1.dev33237+g1fdef52',
capacity: 0,
consumed_capacity: 0,
percent_capacity_remaining: 0,
jobs_running: 0,
jobs_total: 0,
cpu: '8.0',
memory: 8011055104,
cpu_capacity: 0,
mem_capacity: 0,
enabled: true,
managed_by_policy: true,
node_type: 'hybrid',
node_state: 'installed',
ip_address: null,
listener_port: 27199,
peers: [],
peers_from_control_nodes: false,
},
];
describe('PeersLookup', () => {
let wrapper;
beforeEach(() => {
InstancesAPI.read.mockResolvedValue({
data: mockedInstances,
});
});
afterEach(() => {
jest.clearAllMocks();
});
test('should render successfully without instance_details (for new added instance)', async () => {
InstancesAPI.readOptions.mockReturnValue({
data: {
actions: {
GET: {},
POST: {},
},
related_search_fields: [],
},
});
await act(async () => {
wrapper = mountWithContexts(
<Formik>
<PeersLookup value={instances} onChange={() => {}} />
</Formik>
);
});
wrapper.update();
expect(InstancesAPI.read).toHaveBeenCalledTimes(1);
expect(wrapper.find('PeersLookup')).toHaveLength(1);
expect(wrapper.find('FormGroup[label="Instances"]').length).toBe(1);
expect(wrapper.find('Checkbox[aria-label="Prompt on launch"]').length).toBe(
0
);
});
test('should render successfully with instance_details for edit instance', async () => {
InstancesAPI.readOptions.mockReturnValue({
data: {
actions: {
GET: {},
POST: {},
},
related_search_fields: [],
},
});
await act(async () => {
wrapper = mountWithContexts(
<Formik>
<PeersLookup
value={instances}
instance_details={instances[0]}
onChange={() => {}}
/>
</Formik>
);
});
wrapper.update();
expect(InstancesAPI.read).toHaveBeenCalledTimes(1);
expect(wrapper.find('PeersLookup')).toHaveLength(1);
expect(wrapper.find('FormGroup[label="Instances"]').length).toBe(1);
expect(wrapper.find('Checkbox[aria-label="Prompt on launch"]').length).toBe(
0
);
});
});


@@ -8,3 +8,4 @@ export { default as ApplicationLookup } from './ApplicationLookup';
export { default as HostFilterLookup } from './HostFilterLookup';
export { default as OrganizationLookup } from './OrganizationLookup';
export { default as ExecutionEnvironmentLookup } from './ExecutionEnvironmentLookup';
export { default as PeersLookup } from './PeersLookup';


@@ -125,19 +125,24 @@ const Item = shape({
name: string.isRequired,
url: string,
});
const InstanceItem = shape({
id: oneOfType([number, string]).isRequired,
hostname: string.isRequired,
url: string,
});
OptionsList.propTypes = {
deselectItem: func.isRequired,
displayKey: string,
isSelectedDraggable: bool,
multiple: bool,
optionCount: number.isRequired,
options: arrayOf(Item).isRequired,
options: oneOfType([arrayOf(Item), arrayOf(InstanceItem)]).isRequired,
qsConfig: QSConfig.isRequired,
renderItemChip: func,
searchColumns: SearchColumns,
selectItem: func.isRequired,
sortColumns: SortColumns,
value: arrayOf(Item).isRequired,
value: oneOfType([arrayOf(Item), arrayOf(InstanceItem)]).isRequired,
};
OptionsList.defaultProps = {
isSelectedDraggable: false,


@@ -75,6 +75,7 @@ function SessionProvider({ children }) {
const [sessionCountdown, setSessionCountdown] = useState(0);
const [authRedirectTo, setAuthRedirectTo] = useState('/');
const [isUserBeingLoggedOut, setIsUserBeingLoggedOut] = useState(false);
const [isRedirectLinkReceived, setIsRedirectLinkReceived] = useState(false);
const {
request: fetchLoginRedirectOverride,
@@ -99,6 +100,7 @@ function SessionProvider({ children }) {
const logout = useCallback(async () => {
setIsUserBeingLoggedOut(true);
setIsRedirectLinkReceived(false);
if (!isSessionExpired.current) {
setAuthRedirectTo('/logout');
window.localStorage.setItem(SESSION_USER_ID, null);
@@ -112,6 +114,18 @@ function SessionProvider({ children }) {
return <Redirect to="/login" />;
}, [setSessionTimeout, setSessionCountdown]);
useEffect(() => {
const unlisten = history.listen((location, action) => {
if (action === 'POP') {
setIsRedirectLinkReceived(true);
}
});
return () => {
unlisten(); // ensure that the listener is removed when the component unmounts
};
}, [history]);
useEffect(() => {
if (!isAuthenticated(document.cookie)) {
return () => {};
@@ -176,6 +190,8 @@ function SessionProvider({ children }) {
logout,
sessionCountdown,
setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
}),
[
authRedirectTo,
@@ -186,6 +202,8 @@ function SessionProvider({ children }) {
logout,
sessionCountdown,
setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
]
);


@@ -17,6 +17,7 @@ import Organizations from 'screens/Organization';
import Projects from 'screens/Project';
import Schedules from 'screens/Schedule';
import Settings from 'screens/Setting';
import SubscriptionUsage from 'screens/SubscriptionUsage/SubscriptionUsage';
import Teams from 'screens/Team';
import Templates from 'screens/Template';
import TopologyView from 'screens/TopologyView';
@@ -61,6 +62,11 @@ function getRouteConfig(userProfile = {}) {
path: '/host_metrics',
screen: HostMetrics,
},
{
title: <Trans>Subscription Usage</Trans>,
path: '/subscription_usage',
screen: SubscriptionUsage,
},
],
},
{
@@ -189,6 +195,7 @@ function getRouteConfig(userProfile = {}) {
'unique_managed_hosts'
) {
deleteRoute('host_metrics');
deleteRoute('subscription_usage');
}
if (userProfile?.isSuperUser || userProfile?.isSystemAuditor)
return routeConfig;
@@ -197,6 +204,7 @@ function getRouteConfig(userProfile = {}) {
deleteRoute('management_jobs');
deleteRoute('topology_view');
deleteRoute('instances');
deleteRoute('subscription_usage');
if (userProfile?.isOrgAdmin) return routeConfig;
if (!userProfile?.isNotificationAdmin) deleteRoute('notification_templates');


@@ -31,6 +31,7 @@ describe('getRouteConfig', () => {
'/activity_stream',
'/workflow_approvals',
'/host_metrics',
'/subscription_usage',
'/templates',
'/credentials',
'/projects',
@@ -61,6 +62,7 @@ describe('getRouteConfig', () => {
'/activity_stream',
'/workflow_approvals',
'/host_metrics',
'/subscription_usage',
'/templates',
'/credentials',
'/projects',


@@ -32,12 +32,10 @@ describe('<InstanceAdd />', () => {
await waitForElement(wrapper, 'isLoading', (el) => el.length === 0);
await act(async () => {
wrapper.find('InstanceForm').prop('handleSubmit')({
name: 'new Foo',
node_type: 'hop',
});
});
expect(InstancesAPI.create).toHaveBeenCalledWith({
name: 'new Foo',
node_type: 'hop',
});
expect(history.location.pathname).toBe('/instances/13/details');


@@ -1,6 +1,6 @@
import React, { useCallback, useEffect, useState } from 'react';
import { useHistory, useParams } from 'react-router-dom';
import { useHistory, useParams, Link } from 'react-router-dom';
import { t, Plural } from '@lingui/macro';
import {
Button,
@@ -116,6 +116,7 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
setBreadcrumb(instance);
}
}, [instance, setBreadcrumb]);
const { error: healthCheckError, request: fetchHealthCheck } = useRequest(
useCallback(async () => {
const { status } = await InstancesAPI.healthCheck(id);
@@ -205,13 +206,42 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
}
/>
<Detail label={t`Node Type`} value={instance.node_type} />
<Detail label={t`Host`} value={instance.ip_address} />
<Detail label={t`Listener Port`} value={instance.listener_port} />
{(isExecutionNode || isHopNode) && (
<>
{instance.related?.install_bundle && (
<Detail
label={t`Install Bundle`}
value={
<Tooltip content={t`Click to download bundle`}>
<Button
component="a"
isSmall
href={`${instance.related?.install_bundle}`}
target="_blank"
variant="secondary"
dataCy="install-bundle-download-button"
rel="noopener noreferrer"
>
<DownloadIcon />
</Button>
</Tooltip>
}
/>
)}
<Detail
label={t`Peers from control nodes`}
value={instance.peers_from_control_nodes ? t`On` : t`Off`}
/>
</>
)}
{!isHopNode && (
<>
<Detail
label={t`Policy Type`}
value={instance.managed_by_policy ? t`Auto` : t`Manual`}
/>
<Detail label={t`Host`} value={instance.ip_address} />
<Detail label={t`Running Jobs`} value={instance.jobs_running} />
<Detail label={t`Total Jobs`} value={instance.jobs_total} />
{instanceGroups && (
@@ -246,26 +276,6 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
}
value={formatHealthCheckTimeStamp(instance.last_health_check)}
/>
{instance.related?.install_bundle && (
<Detail
label={t`Install Bundle`}
value={
<Tooltip content={t`Click to download bundle`}>
<Button
component="a"
isSmall
href={`${instance.related?.install_bundle}`}
target="_blank"
variant="secondary"
dataCy="install-bundle-download-button"
rel="noopener noreferrer"
>
<DownloadIcon />
</Button>
</Tooltip>
}
/>
)}
<Detail
label={t`Capacity Adjustment`}
dataCy="capacity-adjustment"
@@ -327,9 +337,20 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
/>
)}
</DetailList>
{!isHopNode && (
<CardActionsRow>
{config?.me?.is_superuser && isK8s && isExecutionNode && (
<CardActionsRow>
{config?.me?.is_superuser && isK8s && (isExecutionNode || isHopNode) && (
<Button
ouiaId="instance-detail-edit-button"
aria-label={t`edit`}
component={Link}
to={`/instances/${id}/edit`}
>
{t`Edit`}
</Button>
)}
{config?.me?.is_superuser &&
isK8s &&
(isExecutionNode || isHopNode) && (
<RemoveInstanceButton
dataCy="remove-instance-button"
itemsToRemove={[instance]}
@@ -337,32 +358,31 @@ function InstanceDetail({ setBreadcrumb, isK8s }) {
onRemove={removeInstances}
/>
)}
{isExecutionNode && (
<Tooltip content={t`Run a health check on the instance`}>
<Button
isDisabled={
!config?.me?.is_superuser || instance.health_check_pending
}
variant="primary"
ouiaId="health-check-button"
onClick={fetchHealthCheck}
isLoading={instance.health_check_pending}
spinnerAriaLabel={t`Running health check`}
>
{instance.health_check_pending
? t`Running health check`
: t`Run health check`}
</Button>
</Tooltip>
)}
<InstanceToggle
css="display: inline-flex;"
fetchInstances={fetchDetails}
instance={instance}
dataCy="enable-instance"
/>
</CardActionsRow>
)}
{isExecutionNode && (
<Tooltip content={t`Run a health check on the instance`}>
<Button
isDisabled={
!config?.me?.is_superuser || instance.health_check_pending
}
variant="primary"
ouiaId="health-check-button"
onClick={fetchHealthCheck}
isLoading={instance.health_check_pending}
spinnerAriaLabel={t`Running health check`}
>
{instance.health_check_pending
? t`Running health check`
: t`Run health check`}
</Button>
</Tooltip>
)}
<InstanceToggle
css="display: inline-flex;"
fetchInstances={fetchDetails}
instance={instance}
dataCy="enable-instance"
/>
</CardActionsRow>
{error && (
<AlertModal


@@ -0,0 +1,105 @@
import React, { useState, useCallback, useEffect } from 'react';
import { t } from '@lingui/macro';
import { useHistory, useParams, Link } from 'react-router-dom';
import { Card, PageSection } from '@patternfly/react-core';
import useRequest from 'hooks/useRequest';
import ContentError from 'components/ContentError';
import ContentLoading from 'components/ContentLoading';
import { CardBody } from 'components/Card';
import { InstancesAPI } from 'api';
import InstanceForm from '../Shared/InstanceForm';
function InstanceEdit({ setBreadcrumb }) {
const history = useHistory();
const { id } = useParams();
const [formError, setFormError] = useState();
const detailsUrl = `/instances/${id}/details`;
const handleSubmit = async (values) => {
try {
await InstancesAPI.update(id, values);
history.push(detailsUrl);
} catch (err) {
setFormError(err);
}
};
const handleCancel = () => {
history.push(detailsUrl);
};
const {
isLoading,
error,
request: fetchDetail,
result: { instance, peers },
} = useRequest(
useCallback(async () => {
const [{ data: instance_detail }, { data: peers_detail }] =
await Promise.all([
InstancesAPI.readDetail(id),
InstancesAPI.readPeers(id),
]);
return {
instance: instance_detail,
peers: peers_detail.results,
};
}, [id]),
{
instance: {},
peers: [],
}
);
useEffect(() => {
fetchDetail();
}, [fetchDetail]);
useEffect(() => {
if (instance) {
setBreadcrumb(instance);
}
}, [instance, setBreadcrumb]);
if (isLoading) {
return (
<CardBody>
<ContentLoading />
</CardBody>
);
}
if (error) {
return (
<CardBody>
<ContentError error={error}>
{error?.response?.status === 404 && (
<span>
{t`Instance not found.`}{' '}
<Link to="/instances">{t`View all Instances.`}</Link>
</span>
)}
</ContentError>
</CardBody>
);
}
return (
<PageSection>
<Card>
<InstanceForm
instance={instance}
instance_peers={peers}
isEdit
submitError={formError}
handleSubmit={handleSubmit}
handleCancel={handleCancel}
/>
</Card>
</PageSection>
);
}
export default InstanceEdit;
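The screen loads the instance and its peer list concurrently and only renders the form once both requests resolve. A Python analogue of that fan-out (the two read functions are stand-ins for InstancesAPI.readDetail/readPeers, not real endpoints):

```python
import asyncio

async def read_instance(instance_id):  # stand-in for InstancesAPI.readDetail
    return {'id': instance_id, 'hostname': 'awx_1'}

async def read_peers(instance_id):     # stand-in for InstancesAPI.readPeers
    return {'results': []}

async def fetch_detail(instance_id):
    instance, peers = await asyncio.gather(
        read_instance(instance_id),
        read_peers(instance_id),
    )
    return {'instance': instance, 'peers': peers['results']}

print(asyncio.run(fetch_detail(42)))
```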


@@ -0,0 +1,149 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { createMemoryHistory } from 'history';
import useDebounce from 'hooks/useDebounce';
import { InstancesAPI } from 'api';
import {
mountWithContexts,
waitForElement,
} from '../../../../testUtils/enzymeHelpers';
import InstanceEdit from './InstanceEdit';
jest.mock('../../../api');
jest.mock('../../../hooks/useDebounce');
jest.mock('react-router-dom', () => ({
...jest.requireActual('react-router-dom'),
useParams: () => ({
id: 42,
}),
}));
const instanceData = {
id: 42,
hostname: 'awx_1',
type: 'instance',
url: '/api/v2/instances/1/',
related: {
named_url: '/api/v2/instances/awx_1/',
jobs: '/api/v2/instances/1/jobs/',
instance_groups: '/api/v2/instances/1/instance_groups/',
peers: '/api/v2/instances/1/peers/',
},
summary_fields: {
user_capabilities: {
edit: false,
},
links: [],
},
uuid: '00000000-0000-0000-0000-000000000000',
created: '2023-04-26T22:06:46.766198Z',
modified: '2023-04-26T22:06:46.766217Z',
last_seen: '2023-04-26T23:12:02.857732Z',
health_check_started: null,
health_check_pending: false,
last_health_check: '2023-04-26T23:01:13.941693Z',
errors: 'Instance received normal shutdown signal',
capacity_adjustment: '1.00',
version: '0.1.dev33237+g1fdef52',
capacity: 0,
consumed_capacity: 0,
percent_capacity_remaining: 0,
jobs_running: 0,
jobs_total: 0,
cpu: '8.0',
memory: 8011055104,
cpu_capacity: 0,
mem_capacity: 0,
enabled: true,
managed_by_policy: true,
node_type: 'hybrid',
node_state: 'installed',
ip_address: null,
listener_port: 27199,
peers: [],
peers_from_control_nodes: false,
};
const instanceDataWithPeers = {
results: [instanceData],
};
const updatedInstance = {
node_type: 'hop',
peers: ['test-peer'],
};
describe('<InstanceEdit/>', () => {
let wrapper;
let history;
beforeAll(async () => {
useDebounce.mockImplementation((fn) => fn);
history = createMemoryHistory();
InstancesAPI.readDetail.mockResolvedValue({ data: instanceData });
InstancesAPI.readPeers.mockResolvedValue({ data: instanceDataWithPeers });
await act(async () => {
wrapper = mountWithContexts(
<InstanceEdit
instance={instanceData}
peers={instanceDataWithPeers}
isEdit
setBreadcrumb={() => {}}
/>,
{
context: { router: { history } },
}
);
});
expect(InstancesAPI.readDetail).toBeCalledWith(42);
});
afterAll(() => {
jest.clearAllMocks();
});
test('initially renders successfully', async () => {
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
expect(wrapper.find('InstanceEdit')).toHaveLength(1);
});
test('handleSubmit should call the api and redirect to details page', async () => {
await act(async () => {
wrapper.find('InstanceForm').invoke('handleSubmit')(updatedInstance);
});
expect(InstancesAPI.update).toHaveBeenCalledWith(42, updatedInstance);
expect(history.location.pathname).toEqual('/instances/42/details');
});
test('should navigate to instance details when cancel is clicked', async () => {
await act(async () => {
wrapper.find('button[aria-label="Cancel"]').simulate('click');
});
expect(history.location.pathname).toEqual('/instances/42/details');
});
test('should navigate to instance details after successful submission', async () => {
await act(async () => {
wrapper.find('InstanceForm').invoke('handleSubmit')(updatedInstance);
});
wrapper.update();
expect(wrapper.find('submitError').length).toBe(0);
expect(history.location.pathname).toEqual('/instances/42/details');
});
test('failed form submission should show an error message', async () => {
const error = {
response: {
data: { detail: 'An error occurred' },
},
};
InstancesAPI.update.mockImplementationOnce(() => Promise.reject(error));
await act(async () => {
wrapper.find('InstanceForm').invoke('handleSubmit')(updatedInstance);
});
wrapper.update();
expect(wrapper.find('FormSubmitError').length).toBe(1);
});
});


@@ -0,0 +1 @@
export { default } from './InstanceEdit';


@@ -138,7 +138,7 @@ function InstanceListItem({
rowIndex,
isSelected,
onSelect,
disable: !isExecutionNode,
disable: !(isExecutionNode || isHopNode),
}}
dataLabel={t`Selected`}
/>


@@ -1,17 +1,24 @@
import React, { useCallback, useEffect } from 'react';
import React, { useCallback, useEffect, useState } from 'react';
import { t } from '@lingui/macro';
import { CardBody } from 'components/Card';
import PaginatedTable, {
getSearchableKeys,
HeaderCell,
HeaderRow,
ToolbarAddButton,
} from 'components/PaginatedTable';
import { getQSConfig, parseQueryString } from 'util/qs';
import DisassociateButton from 'components/DisassociateButton';
import AssociateModal from 'components/AssociateModal';
import ErrorDetail from 'components/ErrorDetail';
import AlertModal from 'components/AlertModal';
import useToast, { AlertVariant } from 'hooks/useToast';
import { getQSConfig, parseQueryString, mergeParams } from 'util/qs';
import { useLocation, useParams } from 'react-router-dom';
import useRequest from 'hooks/useRequest';
import useRequest, { useDismissableError } from 'hooks/useRequest';
import DataListToolbar from 'components/DataListToolbar';
import { InstancesAPI } from 'api';
import useExpanded from 'hooks/useExpanded';
import useSelected from 'hooks/useSelected';
import InstancePeerListItem from './InstancePeerListItem';
const QS_CONFIG = getQSConfig('peer', {
@@ -20,27 +27,36 @@ const QS_CONFIG = getQSConfig('peer', {
order_by: 'hostname',
});
function InstancePeerList() {
function InstancePeerList({ setBreadcrumb }) {
const location = useLocation();
const { id } = useParams();
const [isModalOpen, setIsModalOpen] = useState(false);
const { addToast, Toast, toastProps } = useToast();
const readInstancesOptions = useCallback(
() => InstancesAPI.readOptions(id),
[id]
);
const {
isLoading,
error: contentError,
request: fetchPeers,
result: { peers, count, relatedSearchableKeys, searchableKeys },
result: { instance, peers, count, relatedSearchableKeys, searchableKeys },
} = useRequest(
useCallback(async () => {
const params = parseQueryString(QS_CONFIG, location.search);
const [
{ data: detail },
{
data: { results, count: itemNumber },
},
actions,
] = await Promise.all([
InstancesAPI.readDetail(id),
InstancesAPI.readPeers(id, params),
InstancesAPI.readOptions(),
]);
return {
instance: detail,
peers: results,
count: itemNumber,
relatedSearchableKeys: (actions?.data?.related_search_fields || []).map(
@@ -50,6 +66,7 @@ function InstancePeerList() {
};
}, [id, location]),
{
instance: {},
peers: [],
count: 0,
relatedSearchableKeys: [],
@@ -61,18 +78,98 @@ function InstancePeerList() {
fetchPeers();
}, [fetchPeers]);
useEffect(() => {
if (instance) {
setBreadcrumb(instance);
}
}, [instance, setBreadcrumb]);
const { expanded, isAllExpanded, handleExpand, expandAll } =
useExpanded(peers);
const { selected, isAllSelected, handleSelect, clearSelected, selectAll } =
useSelected(peers);
const fetchInstancesToAssociate = useCallback(
(params) =>
InstancesAPI.read(
mergeParams(params, {
...{ not__id: id },
...{ not__node_type: ['control', 'hybrid'] },
...{ not__hostname: instance.peers },
})
),
[id, instance]
);
const {
isLoading: isAssociateLoading,
request: handlePeerAssociate,
error: associateError,
} = useRequest(
useCallback(
async (instancesPeerToAssociate) => {
const selected_hostname = instancesPeerToAssociate.map(
(obj) => obj.hostname
);
const new_peers = [
...new Set([...instance.peers, ...selected_hostname]),
];
await InstancesAPI.update(instance.id, { peers: new_peers });
fetchPeers();
addToast({
id: instancesPeerToAssociate,
title: t`${selected_hostname} added as a peer. Please be sure to run the install bundle for ${instance.hostname} again in order to see changes take effect.`,
variant: AlertVariant.success,
hasTimeout: true,
});
},
[instance, fetchPeers, addToast]
)
);
const {
isLoading: isDisassociateLoading,
request: handlePeersDisassociate,
error: disassociateError,
} = useRequest(
useCallback(async () => {
const new_peers = [];
const selected_hostname = selected.map((obj) => obj.hostname);
for (let i = 0; i < instance.peers.length; i++) {
if (!selected_hostname.includes(instance.peers[i])) {
new_peers.push(instance.peers[i]);
}
}
await InstancesAPI.update(instance.id, { peers: new_peers });
fetchPeers();
addToast({
title: t`${selected_hostname} removed. Please be sure to run the install bundle for ${instance.hostname} again in order to see changes take effect.`,
variant: AlertVariant.success,
hasTimeout: true,
});
}, [instance, selected, fetchPeers, addToast])
);
const { error, dismissError } = useDismissableError(
associateError || disassociateError
);
const isHopNode = instance.node_type === 'hop';
const isExecutionNode = instance.node_type === 'execution';
return (
<CardBody>
<PaginatedTable
contentError={contentError}
hasContentLoading={isLoading}
hasContentLoading={
isLoading || isDisassociateLoading || isAssociateLoading
}
items={peers}
itemCount={count}
pluralizedItemName={t`Peers`}
qsConfig={QS_CONFIG}
onRowClick={handleSelect}
clearSelected={clearSelected}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
toolbarSearchColumns={[
@@ -101,13 +198,36 @@ function InstancePeerList() {
renderToolbar={(props) => (
<DataListToolbar
{...props}
isAllSelected={isAllSelected}
onSelectAll={selectAll}
isAllExpanded={isAllExpanded}
onExpandAll={expandAll}
qsConfig={QS_CONFIG}
additionalControls={[
(isExecutionNode || isHopNode) && (
<ToolbarAddButton
ouiaId="add-instance-peers-button"
key="associate"
defaultLabel={t`Associate`}
onClick={() => setIsModalOpen(true)}
/>
),
(isExecutionNode || isHopNode) && (
<DisassociateButton
verifyCannotDisassociate={false}
key="disassociate"
onDisassociate={handlePeersDisassociate}
itemsToDisassociate={selected}
modalTitle={t`Remove instance from peers?`}
/>
),
]}
/>
)}
renderRow={(peer, index) => (
<InstancePeerListItem
isSelected={selected.some((row) => row.id === peer.id)}
onSelect={() => handleSelect(peer)}
isExpanded={expanded.some((row) => row.id === peer.id)}
onExpand={() => handleExpand(peer)}
key={peer.id}
@@ -116,6 +236,35 @@ function InstancePeerList() {
/>
)}
/>
{isModalOpen && (
<AssociateModal
header={t`Instances`}
fetchRequest={fetchInstancesToAssociate}
isModalOpen={isModalOpen}
onAssociate={handlePeerAssociate}
onClose={() => setIsModalOpen(false)}
title={t`Select Instances`}
optionsRequest={readInstancesOptions}
displayKey="hostname"
columns={[
{ key: 'hostname', name: t`Name` },
{ key: 'node_type', name: t`Node Type` },
]}
/>
)}
<Toast {...toastProps} />
{error && (
<AlertModal
isOpen={error}
onClose={dismissError}
title={t`Error!`}
variant="error"
>
{associateError && t`Failed to associate peer.`}
{disassociateError && t`Failed to remove peers.`}
<ErrorDetail error={error} />
</AlertModal>
)}
</CardBody>
);
}
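Both mutations reduce to list arithmetic on instance.peers followed by a single update of the instance: associate is a de-duplicated union, disassociate filters out the selected hostnames. Sketched in Python:

```python
current_peers = ['hopA', 'execB']
selected = ['execB', 'execC']

# associate: union without duplicates (order is irrelevant to the API)
associated = sorted(set(current_peers) | set(selected))
# disassociate: drop any peer that was selected for removal
remaining = [p for p in current_peers if p not in selected]

assert associated == ['execB', 'execC', 'hopA']
assert remaining == ['hopA']
```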


@@ -10,6 +10,8 @@ import { Detail, DetailList } from 'components/DetailList';
function InstancePeerListItem({
peerInstance,
isSelected,
onSelect,
isExpanded,
onExpand,
rowIndex,
@@ -33,7 +35,14 @@ function InstancePeerListItem({
}}
/>
)}
<Td />
<Td
select={{
rowIndex,
isSelected,
onSelect,
}}
dataLabel={t`Selected`}
/>
<Td id={labelId} dataLabel={t`Name`}>
<Link to={`/instances/${peerInstance.id}/details`}>
<b>{peerInstance.hostname}</b>


@@ -7,6 +7,7 @@ import PersistentFilters from 'components/PersistentFilters';
import { InstanceList } from './InstanceList';
import Instance from './Instance';
import InstanceAdd from './InstanceAdd';
import InstanceEdit from './InstanceEdit';
function Instances() {
const [breadcrumbConfig, setBreadcrumbConfig] = useState({
@@ -20,8 +21,11 @@ function Instances() {
}
setBreadcrumbConfig({
'/instances': t`Instances`,
'/instances/add': t`Create new Instance`,
[`/instances/${instance.id}`]: `${instance.hostname}`,
[`/instances/${instance.id}/details`]: t`Details`,
[`/instances/${instance.id}/peers`]: t`Peers`,
[`/instances/${instance.id}/edit`]: t`Edit Instance`,
});
}, []);
@@ -30,7 +34,10 @@ function Instances() {
<ScreenHeader streamType="instance" breadcrumbConfig={breadcrumbConfig} />
<Switch>
<Route path="/instances/add">
<InstanceAdd />
<InstanceAdd setBreadcrumb={buildBreadcrumbConfig} />
</Route>
<Route path="/instances/:id/edit" key="edit">
<InstanceEdit setBreadcrumb={buildBreadcrumbConfig} />
</Route>
<Route path="/instances/:id">
<Instance setBreadcrumb={buildBreadcrumbConfig} />


@@ -1,6 +1,6 @@
import React from 'react';
import React, { useCallback } from 'react';
import { t } from '@lingui/macro';
import { Formik } from 'formik';
import { Formik, useField, useFormikContext } from 'formik';
import { Form, FormGroup, CardBody } from '@patternfly/react-core';
import { FormColumnLayout } from 'components/FormLayout';
import FormField, {
@@ -8,9 +8,31 @@ import FormField, {
CheckboxField,
} from 'components/FormField';
import FormActionGroup from 'components/FormActionGroup';
import AnsibleSelect from 'components/AnsibleSelect';
import { PeersLookup } from 'components/Lookup';
import { required } from 'util/validators';
function InstanceFormFields() {
const INSTANCE_TYPES = [
{ id: 'execution', name: t`Execution` },
{ id: 'hop', name: t`Hop` },
];
function InstanceFormFields({ isEdit }) {
const [instanceTypeField, instanceTypeMeta, instanceTypeHelpers] = useField({
name: 'node_type',
validate: required(t`Set a value for this field`),
});
const { setFieldValue } = useFormikContext();
const [peersField, peersMeta, peersHelpers] = useField('peers');
const handlePeersUpdate = useCallback(
(value) => {
setFieldValue('peers', value);
},
[setFieldValue]
);
return (
<>
<FormField
@@ -20,6 +42,7 @@ function InstanceFormFields() {
type="text"
validate={required(null)}
isRequired
isDisabled={isEdit}
/>
<FormField
id="instance-description"
@@ -40,16 +63,47 @@ function InstanceFormFields() {
label={t`Listener Port`}
name="listener_port"
type="number"
tooltip={t`Select the port that Receptor will listen on for incoming connections. Default is 27199.`}
isRequired
tooltip={t`Select the port that Receptor will listen on for incoming connections, e.g. 27199.`}
/>
<FormField
id="instance-type"
<FormGroup
fieldId="instance-type"
label={t`Instance Type`}
name="node_type"
type="text"
tooltip={t`Sets the role that this instance will play within mesh topology. Default is "execution."`}
isDisabled
validated={
!instanceTypeMeta.touched || !instanceTypeMeta.error
? 'default'
: 'error'
}
helperTextInvalid={instanceTypeMeta.error}
isRequired
>
<AnsibleSelect
{...instanceTypeField}
id="node_type"
data={INSTANCE_TYPES.map((type) => ({
key: type.id,
value: type.id,
label: type.name,
}))}
onChange={(event, value) => {
instanceTypeHelpers.setValue(value);
}}
isDisabled={isEdit}
/>
</FormGroup>
<PeersLookup
helperTextInvalid={peersMeta.error}
isValid={!peersMeta.touched || !peersMeta.error}
onBlur={() => peersHelpers.setTouched()}
onChange={handlePeersUpdate}
value={peersField.value}
tooltip={t`Select the peer instances.`}
fieldName="peers"
formLabel={t`Peers`}
multiple
typePeers
id="peers"
isRequired
/>
<FormGroup fieldId="instance-option-checkboxes" label={t`Options`}>
<CheckboxField
@@ -64,6 +118,12 @@ function InstanceFormFields() {
label={t`Managed by Policy`}
tooltip={t`Controls whether or not this instance is managed by policy. If enabled, the instance will be available for automatic assignment to and unassignment from instance groups based on policy rules.`}
/>
<CheckboxField
id="peers_from_control_nodes"
name="peers_from_control_nodes"
label={t`Peers from control nodes`}
tooltip={t`If enabled, control nodes will peer to this instance automatically. If disabled, the instance will be connected only to associated peers.`}
/>
</FormGroup>
</>
);
@@ -71,6 +131,8 @@ function InstanceFormFields() {
function InstanceForm({
instance = {},
instance_peers = [],
isEdit = false,
submitError,
handleCancel,
handleSubmit,
@@ -79,22 +141,29 @@ function InstanceForm({
<CardBody>
<Formik
initialValues={{
hostname: '',
description: '',
node_type: 'execution',
node_state: 'installed',
listener_port: 27199,
enabled: true,
managed_by_policy: true,
hostname: instance.hostname || '',
description: instance.description || '',
node_type: instance.node_type || 'execution',
node_state: instance.node_state || 'installed',
listener_port: instance.listener_port,
enabled: instance.enabled ?? true, // ?? so an explicit false is preserved
managed_by_policy: instance.managed_by_policy ?? true,
peers_from_control_nodes: instance.peers_from_control_nodes || false,
peers: instance_peers,
}}
onSubmit={(values) => {
handleSubmit(values);
handleSubmit({
...values,
listener_port:
values.listener_port === '' ? null : values.listener_port,
peers: values.peers.map((peer) => peer.hostname || peer),
});
}}
>
{(formik) => (
<Form autoComplete="off" onSubmit={formik.handleSubmit}>
<FormColumnLayout>
<InstanceFormFields instance={instance} />
<InstanceFormFields isEdit={isEdit} />
<FormSubmitError error={submitError} />
<FormActionGroup
onCancel={handleCancel}


@@ -65,7 +65,7 @@ describe('<InstanceForm />', () => {
expect(handleCancel).toBeCalled();
});
test('should call handleSubmit when Cancel button is clicked', async () => {
test('should call handleSubmit when Save button is clicked', async () => {
expect(handleSubmit).not.toHaveBeenCalled();
await act(async () => {
wrapper.find('input#hostname').simulate('change', {
@@ -74,9 +74,6 @@ describe('<InstanceForm />', () => {
wrapper.find('input#instance-description').simulate('change', {
target: { value: 'This is a repeat song', name: 'description' },
});
wrapper.find('input#instance-port').simulate('change', {
target: { value: 'This is a repeat song', name: 'listener_port' },
});
});
wrapper.update();
expect(
@@ -91,9 +88,10 @@ describe('<InstanceForm />', () => {
enabled: true,
managed_by_policy: true,
hostname: 'new Foo',
listener_port: 'This is a repeat song',
node_state: 'installed',
node_type: 'execution',
peers_from_control_nodes: false,
peers: [],
});
});
});


@@ -33,7 +33,8 @@ function RemoveInstanceButton({ itemsToRemove, onRemove, isK8s }) {
const [removeDetails, setRemoveDetails] = useState(null);
const [isLoading, setIsLoading] = useState(false);
const cannotRemove = (item) => item.node_type !== 'execution';
const cannotRemove = (item) =>
!(item.node_type === 'execution' || item.node_type === 'hop');
const toggleModal = async (isOpen) => {
setRemoveDetails(null);
@@ -175,7 +176,7 @@ function RemoveInstanceButton({ itemsToRemove, onRemove, isK8s }) {
</Button>,
]}
>
<div>{t`This action will remove the following instances:`}</div>
<div>{t`This action will remove the following instance, and you may need to rerun the install bundle for any instance that was previously connected to it:`}</div>
{itemsToRemove.map((item) => (
<span key={item.id} id={`item-to-be-removed-${item.id}`}>
<strong>{item.hostname}</strong>


@@ -302,9 +302,9 @@ function HostsByProcessorTypeExample() {
const hostsByProcessorLimit = `intel_hosts`;
const hostsByProcessorSourceVars = `plugin: constructed
strict: true
groups:
intel_hosts: "GenuineIntel" in ansible_processor`;
strict: true
groups:
intel_hosts: "'GenuineIntel' in ansible_processor"`;
return (
<FormFieldGroupExpandable
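Without the outer quotes the example is not even valid YAML: a double-quoted scalar followed by bare tokens fails to parse, whereas the corrected form delivers the whole Jinja condition to the constructed-inventory plugin as one string. A small check, assuming PyYAML:

```python
import yaml

good = 'intel_hosts: "\'GenuineIntel\' in ansible_processor"'
bad = 'intel_hosts: "GenuineIntel" in ansible_processor'

print(yaml.safe_load(good))  # {'intel_hosts': "'GenuineIntel' in ansible_processor"}
try:
    yaml.safe_load(bad)      # stray tokens after a quoted scalar
except yaml.YAMLError as exc:
    print('invalid YAML:', type(exc).__name__)
```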


@@ -45,7 +45,7 @@ describe('<ConstructedInventoryHint />', () => {
);
expect(navigator.clipboard.writeText).toHaveBeenCalledWith(
expect.stringContaining(
'intel_hosts: "GenuineIntel" in ansible_processor'
`intel_hosts: \"'GenuineIntel' in ansible_processor\"`
)
);
});


@@ -53,13 +53,9 @@ const getStdOutValue = (hostEvent) => {
const res = hostEvent?.event_data?.res;
let stdOut;
if (taskAction === 'debug' && res.result && res.result.stdout) {
if (taskAction === 'debug' && res?.result?.stdout) {
stdOut = res.result.stdout;
} else if (
taskAction === 'yum' &&
res.results &&
Array.isArray(res.results)
) {
} else if (taskAction === 'yum' && Array.isArray(res?.results)) {
stdOut = res.results.join('\n');
} else if (res?.stdout) {
stdOut = Array.isArray(res.stdout) ? res.stdout.join(' ') : res.stdout;
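The optional chaining collapses the explicit null checks into one expression per branch. The same fallback chain, sketched in Python (the task-action branching is omitted for brevity):

```python
def get_stdout(res):
    """Mirror of the UI's stdout extraction from a host event's res payload."""
    if not isinstance(res, dict):
        return None
    result = res.get('result') or {}
    if result.get('stdout'):                  # debug-style events
        return result['stdout']
    if isinstance(res.get('results'), list):  # yum-style list results
        return '\n'.join(res['results'])
    stdout = res.get('stdout')
    return ' '.join(stdout) if isinstance(stdout, list) else stdout
```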

Some files were not shown because too many files have changed in this diff.