Compare commits

...

197 Commits

Author SHA1 Message Date
jessicamack
a689f87f1c add licenses 2023-12-06 12:31:14 -05:00
jessicamack
7501ad6836 add django-ansible-base
Signed-off-by: jessicamack <jmack@redhat.com>
2023-12-06 12:31:14 -05:00
Don Naro
dd00bbba42 separate tox calls in readthedocs config (#14673) 2023-12-06 17:17:12 +00:00
Rick Elrod
fe6bac6d9e [CI] Reduce GHA timeouts from 6h default (#14704)
* [CI] Reduce GHA timeouts from 6h default

The goal here is to never interfere with a real run (so most of the
timeout-minutes values seem rather high) but to avoid having 6h long
runs if something goes crazy and never ends.

Signed-off-by: Rick Elrod <rick@elrod.me>

* Do bash hackery instead

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-12-06 14:43:45 +00:00
jainnikhil30
87abbd4b10 Fix the bulk Job Launch Integration test in awx collection (#14702)
* fix the integration tests
2023-12-06 18:50:33 +05:30
lucas-benedito
fb04e5d9f6 Fixing wsrelay connection loop (#14692)
* Fixing wsrelay connection loop

* The loop was being interrupted when reaching the return statements, causing a race condition that would make nodes remain disconnected from their websockets
* Added log messages for the previous return state to improve the logging from this state.

* Added logging for malformed payload

* Update awx/main/wsrelay.py

Co-authored-by: Rick Elrod <rick@elrod.me>

* Moved logmsg outside condition

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-12-04 09:33:05 -05:00
Hao Liu
478e2cb28d Fix awx collection publishing on galaxy (#14642)
The --location (-L) parameter prompts curl to submit a new request if the URL is a redirect.

After moving to galaxy-NG, without -L the curl check falsely returns 302 for any version.

Co-authored-by: John Barker <john@johnrbarker.com>
2023-11-29 20:28:22 +00:00
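A hedged illustration of the behavior described above (the version in the URL is made up, and this is not the exact CI command): without -L, %{http_code} reports the first response, which for the redirecting galaxy-NG download URL is always 302; with -L, curl follows the redirect and reports the final status, so a missing release surfaces as a 404 instead of a false 302.

    # hypothetical release tarball; the real workflow builds the URL from the release tag
    url="https://galaxy.ansible.com/download/awx-awx-23.5.0.tar.gz"
    curl    --head -s -o /dev/null -w '%{http_code}\n' "$url"   # 302: the redirect itself
    curl -L --head -s -o /dev/null -w '%{http_code}\n' "$url"   # final status after following the redirect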
Chris Meyers
2ac304d289 allow pytest --migrations to succeed (#14663)
* allow pytest --migrations to succeed

* We actually subvert migrations from running in test via pytest.ini
  --no-migrations option. This has led to bit rot for the sqlite
  migrations happy path. This changeset pays off that tech debt and
  allows for an sqlite migration happy path.
* This paves the way for programmatic invocation of individual migrations
  and weaving of the creation of resources (i.e. Instance, Job Template,
  etc). With this, a developer can instantiate various database states,
  trigger a migration, assert the state of the db, and then have pytest
  rollback all of that.
* I will note that in practice, running these migrations is dog shit
  slow BUT this work also opens up the possibility of saving and
  re-using sqlite3 database files. Normally, caching is not THE answer
  and causes more harm than good. But in this case, our migrations are
  mostly write-once (I say mostly because this change set violates
  that :) so cache invalidation isn't a major issue.

* functional test for migrations on sqlite

* We commonly subvert running migrations in test land. Test land uses
  sqlite. By not constantly exercising this code path it atrophies. The
  smoke test here is to continuously exercise that code path.
* Add ci test to run migration tests separately, they take =~ 2-3
  minutes each on my laptop.
* The smoke tests also serves as an example of how to write migration
  tests.

* run migration tests in ci
2023-11-17 13:33:08 -05:00
Don Naro
3e5851f3af Upgrade doc requirements (#14669)
* upgrade when pip compiling doc reqs

* upgrade doc requirements
2023-11-16 13:04:21 -07:00
Alan Rominger
adb1b12074 Update RBAC docs, remove unused get_permissions (#14492)
* Update RBAC docs, remove unused get_permissions

* Add back in section for get_roles_on_resource
2023-11-16 11:29:33 -05:00
Alan Rominger
8fae20c48a Remove unused methods we attach to user model (#14668) 2023-11-16 11:21:21 -05:00
Hao Liu
ec364cc60e Make vault init more idempotent (#14664)
Currently, if you clean up the docker volume for vault and bring the docker-compose development environment back up with vault enabled, we will not initialize vault because the secret files still exist.

This change attempts to initialize vault regardless and updates the secret file if vault is initialized.
2023-11-16 09:43:45 -06:00
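A rough sketch of the idempotent pattern this describes, assuming the stock vault CLI and jq (the file path is hypothetical and the real docker-compose tooling may differ): ask the running vault whether it is initialized instead of trusting leftover local secret files.

    # assumes VAULT_ADDR points at the development vault container
    if [ "$(vault status -format=json | jq -r .initialized)" = "false" ]; then
        # hypothetical location for the dev-environment secret file
        vault operator init -format=json > tools/docker-compose/_sources/secrets/vault_init.json
    fi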
TVo
1cfd51764e Added missing pointers to release notes (#14659)
* Replaced with larger graphic.

* Revert "Replaced with larger graphic."

This reverts commit 1214b00052.

* Added missing pointers to release notes.
2023-11-15 14:24:11 -07:00
Steffen Scheib
0b8fedfd04 Adding the possibility to decode base64 encoded strings to Delinea's Devops Secret Vault (DSV) (#14646)
Adding the possibility to decode base64 encoded strings to Delinea's Devops Secret Vault (DSV).
This is necessary as uploading files to DSV is not possible (and not meant to be) and files should be added base64 encoded.
The commit is making sure to remain backward compatible (no secret decoding), as a default is supplied.

This has been tested with DSV and works for secrets that are base64 encoded and secrets that are not base64 encoded (which is the default).

Signed-off-by: Steffen Scheib <sscheib@redhat.com>
2023-11-15 15:28:34 -05:00
Don Naro
72a8173462 issue #14653 heading does not render correctly (#14665) 2023-11-15 15:05:52 -05:00
Tong He
873b1fbe07 Set subscription type as developer for developer subscriptions. (#14584)
* Set subscription type as developer for developer subscriptions.

Signed-off-by: Tong He <the@redhat.com>

* Set subscription type as developer for developer subscription manifests.

Signed-off-by: Tong He <the@redhat.com>

* Remedy the wrong character used to assign the value.

Signed-off-by: Tong He <the@redhat.com>

* Reformat licensing.py by black.

Signed-off-by: Tong He <the@redhat.com>

---------

Signed-off-by: Tong He <the@redhat.com>
2023-11-15 10:33:57 +00:00
Alan Rominger
1f36e84b45 Correctly handle case where unpartitioned table does not exist (#14648) 2023-11-14 08:38:48 -05:00
TVo
8c4bff2b86 Replaced with larger graphic. (#14647) 2023-11-13 09:55:04 -06:00
lucas-benedito
14f636af84 Setting credential_type as required (#14651)
* Setting credential_type as required

* Added test for missing credential_type in credential module

* Corrected test assertion

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-11-13 09:54:32 -06:00
Don Naro
0057c8daf6 Docs: Include REST API reference content from swagger.json (#14607) 2023-11-11 08:33:41 -05:00
TVo
d8a28b3c06 Added alt text for settings-menu.rst (#14639)
* Re-do for PR #14595 to fix CI issues.

* Added alt text to settings-menu.rst

* Update docs/docsite/rst/common/settings-menu.rst

Co-authored-by: Don Naro <dnaro@redhat.com>

---------

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-11-08 15:17:57 -07:00
Ratan Gulati
40c2b700fe Fix: #14523 Add alt-text codeblock to Images for workflow_template.rst (#14604)
* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* add alt to images in workflow_templates.rst

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>

* Update workflow_templates.rst

* Revised proposed alt text for workflow_templates.rst

---------

Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-11-07 10:51:11 -07:00
Thanhnguyet Vo
71d548f9e5 Removed references to images that were deleted. 2023-11-07 08:55:27 -07:00
Thanhnguyet Vo
dd98963f86 Updated images - Workflow Templates chapter of Userguide. 2023-11-07 08:55:27 -07:00
TVo
4b467dfd8d Revised proposed alt text for insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
456b56778e Update insights.rst 2023-11-07 08:14:34 -07:00
BHANUTEJA
5b3cb20f92 Update insights.rst 2023-11-07 08:14:34 -07:00
TVo
d7086a3c88 Revised the proposed Alt text for main_menu.rst 2023-11-06 13:09:26 -07:00
Ratan Gulati
21e7ab078c Fix: #14511 Add alt-text codeblock to Images for Userguide: main_menu.rst
Signed-off-by: Ratan Gulati <ratangulati.dev@gmail.com>
2023-11-06 13:09:26 -07:00
Elijah DeLee
946ca0b3b8 fix wsrelay connection in ipv6 environments 2023-11-06 13:58:41 -05:00
TVo
b831dbd608 Removed mailing list from triage_replies.md 2023-11-03 14:30:30 -06:00
Thanhnguyet Vo
943e455f9d Re-do for PR #14595 to fix CI issues. 2023-11-03 08:35:22 -06:00
Seth Foster
53bc88abe2 Fix python_paths error in CI (#14622)
Remove outdated lines from pytest.ini

Was causing KeyError 'python_paths' in CI

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-11-03 09:36:21 -04:00
Rick Elrod
3b4d95633e [rsyslog] remove main_queue, add more action queue params (#14532)
* [rsyslog] remove main_queue, add more action queue params

Signed-off-by: Rick Elrod <rick@elrod.me>

* Remove now-unused LOG_AGGREGATOR_MAX_DISK_USAGE_GB, add LOG_AGGREGATOR_ACTION_QUEUE_SIZE

Signed-off-by: Rick Elrod <rick@elrod.me>

---------

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-10-31 14:49:17 -04:00
Alan Rominger
93c329d9d5 Fix cancel bug - WorkflowManager cancel in transaction (#14608)
This fixes a bug where jobs within a workflow job were not canceled
  when the workflow job was canceled by the user.

The fix is to submit the cancel request as part of the transaction
  that the WorkflowManager commits its work in. This requires that we
  send the message without expecting a reply, so the control-with-reply
  cancel becomes a plain control function.
2023-10-30 15:30:18 -04:00
Hao Liu
f4c53aaf22 Update receptor-collection version to 2.0.2 (#14613) 2023-10-30 17:24:02 +00:00
Alan Rominger
333ef76cbd Send notifications for dependency failures (#14603)
* Send notifications for dependency failures

* Delete tests for deleted method

* Remove another test for removed method
2023-10-30 10:42:37 -04:00
Alan Rominger
fc0b58fd04 Fix bug that prevented dispatcher exit with downed DB (#14469)
* Separate handling of original SIGTERM and SIGINT
2023-10-26 14:34:25 -04:00
Andrii Zakurenyi
bef0a8b23a Fix DevOps Secrets Vault credential plugin to work with python-dsv-sdk>=1.0.4
Signed-off-by: Andrii Zakurenyi <andrii.zakurenyi@c.delinea.com>
2023-10-25 15:48:24 -04:00
lmo5
a5f33456b6 Fix missing service account secret in docker-compose-minikube role (#14596)
* Fix missing service account secret

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-10-25 19:27:21 +00:00
Surav Shrestha
21fb395912 fix typos in docs/development/minikube.md 2023-10-25 15:23:23 -04:00
jessicamack
44255f378d Fix extra_vars bug in ansible.controller.ad_hoc_command (#14585)
* convert to valid type for serializer

* check that extra_vars are in request

* remove doubled line

* add integration test for change

* move change to the ad_hoc_command module

Signed-off-by: jessicamack <jmack@redhat.com>

* fix imports

Signed-off-by: jessicamack <jmack@redhat.com>

---------

Signed-off-by: jessicamack <jmack@redhat.com>
2023-10-25 10:38:45 -04:00
Parikshit Adhikari
71a6d48612 Fix: typos inside /docs directory (#14594)
fix typos inside docs
2023-10-24 19:01:21 +00:00
nmiah1
b7e5f5d1e1 Typo in export.py example (#14598) 2023-10-24 18:33:38 +00:00
Alan Rominger
b6b167627c Fix Boolean values defaulting to False in collection (#14493)
* Fix Boolean values defaulting to False in collection

* Remove null values in other cases, fix null handling for WFJT nodes

* Only remove null values if it is a boolean field

* Reset changes to WFJT node field processing

* Use test content from sean-m-sullivan to fix lookups in assert
2023-10-24 14:29:16 -04:00
Hao Liu
20f5b255c9 Fix "upgrade in progress" status page not showing up while migration is in progress (#14579)
The web container does not need to wait for migrations.

If the database is running and responsive but migrations have not finished, the web container will start serving and users will get the upgrading page.

The wait-for-migrations step prevented nginx and uwsgi from starting up to serve the "upgrade in progress" status page.
2023-10-24 14:27:09 -04:00
Oleksii Baranov
3bcf46555d Fix swagger generation on rhel (#14317) (#14589) 2023-10-24 14:19:02 -04:00
Don Naro
94703ccf84 Pip compile docsite requirements (#14449)
Co-authored-by: Sviatoslav Sydorenko <578543+webknjaz@users.noreply.github.com>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
2023-10-24 12:53:41 -04:00
BHANUTEJA
6cdea1909d Alt text for Execution Env section of Userguide (#14576)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 18:48:07 +00:00
Mike Mwanje
f133580172 Adds alt text to instance_groups.rst images (#14571)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 16:11:17 +00:00
Kishan Mehta
4b90a7fcd1 Add alt text for image directives in credential_types.rst (#14551)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 09:36:05 -06:00
Marliana Lara
95bfedad5b Format constructed inventory hint example as valid YAML (#14568) 2023-10-20 10:24:47 -04:00
Kishan Mehta
1081f2d8e9 Add alt text for image directives in credentials.rst (#14550)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 14:13:49 +00:00
Kishan Mehta
c4ab54d7f3 Add alt text for image directives in job_capacity.rst & job_slices.rst (#14549)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-20 13:34:04 +00:00
Hao Liu
bcefcd8cf8 Remove specific version for receptorctl (#14593) 2023-10-19 22:49:42 -04:00
Kishan Mehta
0bd057529d Add alt text for image directives in job_templates.rst (#14548)
Co-authored-by: Kishan Mehta <kishan@scrapinghub.com>
2023-10-19 20:24:32 +00:00
Sayyed Faisal Ali
a82c03e2e2 added alt-text in projects.rst (#14544)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-19 12:39:58 -06:00
TVo
447ac77535 Corrected missing text replacement directives (#14592) 2023-10-19 16:36:41 +00:00
Andrew Klychkov
72d0928f1b [DOCS] EE guide: fix a ref to Get started with EE (#14587) 2023-10-19 03:30:21 -04:00
Deepshri M
6d727d4bc4 Adding alt text for image (#14541)
Signed-off-by: Deepshri M <deepshrim613@gmail.com>
2023-10-17 14:53:18 -06:00
Rohit Raj
6040e44d9d docs: Update teams.rst (#14539)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 20:16:09 +00:00
Rohit Raj
b99ce5cd62 docs: Update users.rst (#14538)
Co-authored-by: TVo <thavo@redhat.com>
2023-10-17 14:58:40 +00:00
Rohit Raj
ba8a90c55f docs: Update security.rst (#14540)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-16 17:56:46 -06:00
Sayyed Faisal Ali
7ee2172517 added alt-text in project-sign.rst (#14545)
Signed-off-by: c0de-slayer <fsali315@gmail.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-10-16 09:25:34 -06:00
Alan Rominger
07f49f5925 AAP-16926 Delete unpartitioned tables in a separate transaction (#14572) 2023-10-13 15:50:51 -04:00
Hao Liu
376993077a Removing mailing list from get involved (#14580) 2023-10-13 17:49:34 +00:00
Hao Liu
48f586bac4 Make wait-for-migrations wait forever (#14566) 2023-10-13 13:48:12 +00:00
Surendran
16dab57c63 Added alt-text for images in notifications.rst (#14555)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:22:37 -06:00
Surendran
75a71492fd Added alt-text for images in organizations.rst (#14556)
Signed-off-by: Surendran Gokul <surendrangokul55@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-12 15:15:45 -06:00
Hao Liu
e9bd99c1ff Fix CVE-2023-43665 (#14561) 2023-10-12 14:00:32 -04:00
Daniel Gonçalves
56878b4910 Add customizable batch_size for cleanup_activitystream and cleanup_jobs (#14412)
Signed-off-by: Daniel Gonçalves <daniel.gonc@lves.fr>
2023-10-11 20:09:16 +00:00
Alan Rominger
19ca480078 Upgrade client library for dsv since tss already landed (#14362) 2023-10-11 16:01:22 -04:00
Steffen Scheib
64eb963025 Cleaning SOS report passwords (#14557) 2023-10-11 19:54:28 +00:00
Will Thames
dc34d0887a Execution environment image should not be required (#14488) 2023-10-11 15:39:51 -04:00
Andrew Klychkov
160634fb6f ee_reference.rst: refer to Builder's definition docs instead of duplicating its content (#14562) 2023-10-11 13:54:12 +01:00
Alan Rominger
9745058546 Only block commits if black fails for certain paths (#14531) 2023-10-10 10:12:57 -04:00
Aviral Katiyar
c97a48b165 Fix: #14510 Add alt-text codeblock to Images for Userguide: jobs.rst (#14530)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-09 16:40:56 -06:00
Rohit Raj
259bca0113 docs: Update workflows.rst (#14537) 2023-10-06 15:30:47 -06:00
Aviral Katiyar
92c2b4e983 Fix: #14500 Added alt text to images for Userguide: credential_plugins.rst (#14527)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-06 14:53:23 -06:00
Seth Foster
127a0cff23 Set ip_address to empty string
ip_address cannot be null, so set to
empty instead of None

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2023-10-05 22:53:16 -04:00
Aviral Katiyar
a0ef25006a Fix: #14499 Added alt text to images for Userguide: applications_auth.rst (#14526)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-10-05 14:22:10 -06:00
Chris Meyers
50c98a52f7 Update setting_up.rst (#14542) 2023-10-05 15:06:40 -04:00
Michelle McCausland
4008d72af6 issue-14522: Add alt-text codeblock to Images for Userguide: webhooks.rst (#14529)
Signed-off-by: Michelle McCausland <mmccausl@redhat.com>
2023-10-05 17:40:07 +01:00
Alan Rominger
e72e9f94b9 Fix collection test flake due to successful canceled command (#14519) 2023-10-04 09:09:29 -04:00
Sasa Jovicic
9d60b0b9c6 Fix #12815 Direct links to AWX do not reroute the user after authentication (#14399)
Signed-off-by: Sasa993 <jovicic.sasa@hotmail.com>
Co-authored-by: Sasa Jovicic <sjovicic@anexia-it.com>
2023-10-03 16:55:22 -04:00
Aviral Katiyar
05b58c4df6 Fix : #14490 Fixed the required spelling errors (#14507)
Signed-off-by: maskboyAvi <aviralofficial1729@gmail.com>
2023-10-03 14:15:13 -06:00
TVo
b1b960fd17 Updated Forum terminology and removed mailing list (#14491) 2023-10-03 19:24:19 +01:00
Jakub Laskowski
3c8f71e559 Fixed wrong arguments order in DomainPasswordGrantAuthorizer (#14441)
Signed-off-by: Jakub Laskowski <jakub.laskowski9@gmail.com>
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-10-03 11:54:57 -04:00
Alan Rominger
f5922f76fa DROP unnecessary unpartitioned event tables (#14055) 2023-10-03 11:49:23 -04:00
kurokobo
05582702c6 fix: make type conversions work correctly (related #14487) (#14489)
Signed-off-by: kurokobo <2920259+kurokobo@users.noreply.github.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-30 04:02:10 +00:00
Alan Rominger
1d340c5b4e Add a section for postgres max_connections value (#14482) 2023-09-28 10:28:52 -04:00
TVo
15925f1416 Simplified release notes for AWX (#14485) 2023-09-27 14:50:57 -06:00
Salma Kochay
6e06a20cca add subscription usage page 2023-09-27 10:57:04 -04:00
Hao Liu
bb3acbb8ad Debug log for scheduler commit duration (#14035)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-09-27 09:46:55 -04:00
Hao Liu
a88e47930c Update django version to address CVE-2023-41164 (#14460) 2023-09-27 09:36:02 -04:00
Hao Liu
a0d4515ba4 Explicitly set collection version during promotion (#14484) 2023-09-26 14:19:22 -04:00
Alan Rominger
770cc10a78 Get rid of names_digest hack no longer needed (#14459) 2023-09-26 12:09:30 -04:00
Alan Rominger
159dd62d84 Add null value handling in create_partition (#14480) 2023-09-25 18:28:44 -04:00
TVo
640e5db9c6 Removed references of IRC and fixed formatting in "Work Items" section. (#14478)
Co-authored-by: Don Naro <dnaro@redhat.com>
2023-09-25 11:24:39 -06:00
Alan Rominger
9ed527eb26 Consolidate image and server setup in several checks (#14477) 2023-09-25 09:02:20 -04:00
Alan Rominger
29ad6e1eaa Fix bug, None was used instead of empty for DB outage (#14463) 2023-09-21 14:30:25 -04:00
Alan Rominger
3e607f8964 AAP-15927 Use ATTACH PARTITION to avoid exclusive table lock for events (#14433) 2023-09-21 14:27:04 -04:00
TVo
c9d1a4d063 Added release notes for version 23.1.0 (#14471) 2023-09-21 11:02:38 -06:00
Hao Liu
a290b082db Use ldap container hostname for LDAP config (#14473) 2023-09-21 11:31:51 -04:00
Hao Liu
6d3c22e801 Update how to get involved with matrix and forum (#14472) 2023-09-20 18:33:04 +00:00
Michael Abashian
1f91773a3c Simplify docs string base generation 2023-09-20 13:16:54 -04:00
Hao Liu
7b846e1e49 Add makefile target to load dev image into Kind (#13775)
Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-09-19 13:34:10 -04:00
Don Naro
f7a2de8a07 Contributor guide and adjusted titles (#14447)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
2023-09-18 10:40:47 -06:00
Andrew Klychkov
194c214f03 userguide/execution_environments.rst: replace building paragraphs with ref to Get started EE guide (#14429) 2023-09-15 10:20:46 -04:00
Christian Adams
77e30dd4b2 Add link to script for publishing operator on OperatorHub (#14442) 2023-09-15 09:32:19 -04:00
jessicamack
9d7421b9bc Update README (#14452)
Signed-off-by: jessicamack <jmack@redhat.com>
2023-09-14 20:20:06 +00:00
Alan Rominger
3b8e662916 Remove conditional paths due to conflict with required checks (#14450) 2023-09-14 16:19:42 -04:00
Alan Rominger
aa3228eec9 Fix continue-on-error GH actions bug, always run archive step instead 2023-09-14 19:45:07 +00:00
Alan Rominger
7b0598c7d8 Continue workflow steps to save logs from failed tests (#14448) 2023-09-14 18:23:22 +00:00
Ivan Aragonés Muniesa
49832d6379 don't pass the 'organization' or other fields to the search of the instance group or execution environments (#14223) 2023-09-14 09:31:05 -04:00
Alan Rominger
8feeb5f1fa Allow saving github creds in user folder (#14435) 2023-09-12 15:47:12 -04:00
Michael Abashian
56230ba5d1 Show a toast when the job is already in the process of launching 2023-09-06 16:56:34 -04:00
Michael Abashian
480aaeace5 Prevent the user from launching multiple jobs by rapidly clicking on buttons 2023-09-06 16:56:34 -04:00
Joe Garcia
3eaea396be Add base64 check on JWT from authn 2023-09-06 15:58:36 -04:00
Keith Grant
deef8669c9 rebuild package-lock (#14423) 2023-09-06 12:36:50 -07:00
Don Naro
63223a2cc7 allow list for example secrets in docs 2023-09-06 15:15:58 -04:00
Keith Grant
a28bc2eb3f bump babel dependencies (#14370) 2023-09-06 09:14:04 -07:00
Alan Rominger
09168e5832 Edit docker-compose instructions for correctness (#14418) 2023-09-06 11:55:25 -04:00
Alan Rominger
6df1de4262 Avoid activity stream entries for instance going offline (#14385) 2023-09-06 11:18:52 -04:00
Alan Rominger
e072bb7668 Declare license for unique module that uses BSD-2
Co-authored-by: Maxwell G <maxwell@gtmx.me>
2023-09-06 10:43:25 -04:00
Alan Rominger
ec579fd637 Fix collection metadata license to match intent 2023-09-06 10:43:25 -04:00
Marliana Lara
b95d521162 Update missing inventory error message (#14416) 2023-09-06 10:24:25 -04:00
Rick Elrod
d03a6a809d Enable collection integration tests on GHA
There are a number of changes here:

- Abstract out a GHA composite action for running the dev environment
- Update the e2e tests to use that new abstracted action
- Introduce a new (matrixed) job for running collection integration
  tests. This splits the jobs up based on filename.
- Collect coverage info and generate an html report that people can
  download easily to see collection coverage info.
- Do some hacks to delete the intermediary coverage file artifacts
  which aren't needed after the job finishes.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-09-05 16:10:48 -05:00
TVo
4466976e10 Added relnotes for 23.0.0 (#14409) 2023-09-05 15:07:53 -06:00
Don Naro
5733f78fd8 Add readthedocs configuration (#14413) 2023-09-05 15:07:32 -06:00
Alan Rominger
20fc7c702a Add check for building docsite (#14406) 2023-09-05 16:07:48 -04:00
Lila Yasin
6ce5799689 Incorrect capacity for remote execution nodes 14051 (#14315) 2023-09-05 11:20:36 -04:00
Don Naro
dc81aa46d0 Create AWX docsite with RST content (#14328)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
Co-authored-by: TVo <thavo@redhat.com>
2023-09-01 09:24:03 -06:00
Alan Rominger
ab3ceaecad Remove extra scheduler state save that does nothing (#14396) 2023-08-31 10:35:07 -04:00
John Westcott IV
1bb4240a6b Allow saml_admin_attr to work in conjunction with SAML Org Map (#14285)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-08-31 09:41:30 -03:00
Rick Elrod
5e105c2cbd [CI] Update GHA actions to sate some warnings emitted by test infrastructure (#14398)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-30 23:58:57 -05:00
Alan Rominger
cdb4f0b7fd Consume job_explanation from runner, fix error reporting error (#13482) 2023-08-30 16:45:50 -04:00
Ivanilson Junior
cf1e448577 Fix undefined property error when task is of type yum/debug and was s… (#14372)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
2023-08-30 15:37:28 -04:00
Andrew Klychkov
224e9e0324 [DOCS] tools/docker-compose/README.md: add way to solve postgresql issue (#14225) 2023-08-30 10:45:50 -04:00
Martin Slemr
660dab439b HostMetrics: Hard auto-cleanup (#14255)
Fix host metric settings

Cleanup_host_metric command with default params

Fix order of host metric cleanups
2023-08-30 09:18:59 -04:00
sean-m-sullivan
5ce2055431 update collection workflow example and tests 2023-08-30 09:15:54 -04:00
Alan Rominger
951bd1cc87 Re-run the updater script after upstream removal of future (#14265) 2023-08-29 15:36:42 -04:00
kurokobo
c9190ebd8f docs: update execution_nodes.md to follow changes for receptor_collection (#14247) 2023-08-29 13:06:54 -04:00
Seth Foster
eb33973fa3 Use receptor collection 2.0.0 2023-08-29 13:06:54 -04:00
Seth Foster
40be2e7b6e Use receptor-collection devel 2023-08-29 13:06:54 -04:00
kialam
485813211a Add toast and delete modal messaging when removing/adding peers. (#14373) 2023-08-29 13:06:54 -04:00
Seth Foster
0a87bf1b5e Apply JS formatting from npm prettier 2023-08-29 13:06:54 -04:00
Seth Foster
fa0e0b2576 Removed unused variable in test_instance_peers 2023-08-29 13:06:54 -04:00
Seth Foster
1d3b2f57ce No longer assert on receptor_host_identifier
receptor_host_identifier can be left out
of group_vars and will default to the
'ansible_host' variable
2023-08-29 13:06:54 -04:00
Seth Foster
0577e1ee79 Setup receptor after podman
Might help to install receptor last,
that way when nodes are first connected to the mesh
they already have podman installed and can potentially
run jobs. Otherwise it might be possible for controller
to launch jobs against nodes that aren't fully set up.
2023-08-29 13:06:54 -04:00
Seth Foster
470ecc4a4f Use itertools product instead of nested loop
Make test case cleaner by using itertools product
instead of the triple nested loop

Replace triple single quotes with triple
double quotes
2023-08-29 13:06:54 -04:00
Seth Foster
965127637b Make ip_address read only
Setting a different value for ip_address
and hostname does not work with the current
way we create receptor certs.
2023-08-29 13:06:54 -04:00
Seth Foster
eba130cf41 Change username to <username> in inventory 2023-08-29 13:06:54 -04:00
Seth Foster
441336301e Ensure ip_address is empty string 2023-08-29 13:06:54 -04:00
Seth Foster
2a0be898e6 Fix detecting if peers changed in serializer
Add a check_peers_changed() utility method
to determine if peers in attrs matches
the current instance peers.

Other changes:
- Set ip_address default to "", and do not
allow null.
2023-08-29 13:06:54 -04:00
Seth Foster
c47acc5988 Change PeersSerializer to SlugRelatedField
Get rid of PeersSerializer and just use SlugRelatedField,
which should be a more straightforward approach.

Other changes:
- cleanup code related to the already-removed api/v2/peers
endpoint
- add "hybrid" node type into more instance_peers test cases
2023-08-29 13:06:54 -04:00
Seth Foster
70ba32b5b2 Do not install ansible-runner or podman on hop nodes 2023-08-29 13:06:54 -04:00
Seth Foster
81e06dace2 Add listener_port to provision_instance
API changes
- cannot change peers or enable
peers_from_control_nodes on VM deployments
- allow setting ip_address
- use ip_address over hostname in the generated
group_vars/all.yml
- Drop api/v2/peers endpoint

DB changes
- add ip_address unique constraint, but ignore "" entries

Other changes
- provision_instance should take listener_port option

Tests
- test that new controls don't disturb other peers' relationships
- test ip_address over hostname
2023-08-29 13:06:54 -04:00
Seth Foster
3e8202590c Remove Disconnected link state
Dynamically flipping from Established
to Disconnected is not the intended
usage of InstanceLink State.

- Link state starts in Adding and becomes
Established once any control node first sees the link
is in the status KnownConnectionCosts
2023-08-29 13:06:54 -04:00
Seth Foster
ad96a72ebe Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
eb0058268b Revert "Remove duplicate install bundle on InstanceDetail"
This reverts commit cf5ccf53f4322b49b1009ca13e4f025c30529b30.
2023-08-29 13:06:54 -04:00
Seth Foster
2bf6512a8e Do not change link state if Removing
inspect_established_receptor_connections should
not change link state if the current state is Removing.

Other changes:
- rename inspect_execution_nodes to inspect_execution_and_hop_nodes
- Default link state is Adding
- Set min listener_port value to 1024
- inspect_established_receptor_connections now
runs as part of cluster_node_heartbeat task
2023-08-29 13:06:54 -04:00
Seth Foster
855f61a04e Bump migration number 186 to 187 2023-08-29 13:06:54 -04:00
Seth Foster
532e71ff45 Remove extra newlines in install bundle all.yml 2023-08-29 13:06:54 -04:00
Seth Foster
b9ea114cac Remove duplicate install bundle on InstanceDetail 2023-08-29 13:06:54 -04:00
Seth Foster
e41ad82687 optional listener port UI (#14300) 2023-08-29 13:06:54 -04:00
Seth Foster
3bd25c682e Allow setting ip_address for execution nodes 2023-08-29 13:06:54 -04:00
Seth Foster
7169c75b1a receptor_python_packages renamed 2023-08-29 13:06:54 -04:00
kialam
fdb359a67b feature hop node topology updates (#14142) 2023-08-29 13:06:54 -04:00
Seth Foster
ed2a59c1a3 receptor python packages 2023-08-29 13:06:54 -04:00
Jake Jackson
906f8a1dce [hop node] documentation update in execution_nodes for hop nodes (#14215)
Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Lila Yasin
6833976c54 [hop node] fix failing ci checks on feature_hop-node branch (#14226) 2023-08-29 13:06:54 -04:00
Seth Foster
d15405eafe Add peers_from for reverse peers M2M
use devel receptor-collection
2023-08-29 13:06:54 -04:00
Lila
6c3bbfc3be Looking to see if revising the path in the static dir resolves failing ci check. 2023-08-29 13:06:54 -04:00
Lila Yasin
2e3e6cbde5 hop node migration file updates (#14196)
rename migration function set_peers_from_control_nodes_true to automatically_peer_from_control_plane
import settings and only run function if settings.IS_K8S is true
set listener_port for control nodes to None
2023-08-29 13:06:54 -04:00
Lila Yasin
54894c14dc Hop node AWX Collection Updates (#14153)
Add hop node support to awx collections
- add peers and peers_from_control_nodes fields
- show new node_type "hop"
- add tests for adding hop nodes via collections

Co-authored-by: Seth Foster <fosterseth@users.noreply.github.com>
2023-08-29 13:06:54 -04:00
Seth Foster
2a51f23b7d Add functional API tests
add tests for calling write_receptor_config

add write_receptor_config test

Do not set default listener_port on control node
2023-08-29 13:06:54 -04:00
Jake Jackson
80df31fc4e [hop node] update peer validation logic (#14132) 2023-08-29 13:06:54 -04:00
Lila Yasin
8f8462b38e Marked hop node validation errors for translation (#14116) 2023-08-29 13:06:54 -04:00
Seth Foster
0c41abea0e Make peers field optional 2023-08-29 13:06:54 -04:00
Lila Yasin
3eda1ede8d Migration file to set peers_from_control_ nodes to true for existing execution nodes (#14061) 2023-08-29 13:06:54 -04:00
Jake Jackson
40fca6db57 [hop_node] Validate listener_port is defined for peers (#14056)
add peer listener_port validation and update install bundle if listener_port is defined or not defined.
2023-08-29 13:06:54 -04:00
Seth Foster
148111a072 Remove task that enables COPR receptor repo (#14088)
do not pip install receptorctl
2023-08-29 13:06:54 -04:00
Lila Yasin
9cad45feac Prevent manual peering of control plane nodes to hop node (#13966) 2023-08-29 13:06:54 -04:00
Seth Foster
6834568c5d Add receptor host identifier to group_vars
Add disconnected link state topology
2023-08-29 13:06:54 -04:00
Lorenzo Tanganelli
f7fdb7fe8d Add peers readonly api and instancelink constraint (#13916)
Add Disconnected link state

introspect_receptor_connections is a periodic
task that examines active receptor connections
and cross-checks them with the InstanceLink info.

Any links that should be active but are not
will be put into a Disconnected state. If
active, it will be in an Established state.

UI - Add hop creation and peers mgmt (#13922)

* add UI for mgmt peers, instance edit and add

* add peer info on detail and bug fix on detail

* remove unused chip and change peer label

* rename lookup, put Instance type disable on edit

---------

Co-authored-by: tanganellilore <lorenzo.tanagnelli@hotmail.it>
2023-08-29 13:06:54 -04:00
Seth Foster
d8abd4912b Add support for hop nodes in API 2023-08-29 13:06:54 -04:00
Alan Rominger
4fbdc412ad Restrict PR body check to just AWX repo 2023-08-29 09:29:30 -04:00
Alan Rominger
db1af57daa Revert "Adding PR check to ensure JIRA links are present"
This reverts commit 3ae6174050.
2023-08-29 09:29:30 -04:00
Hao Liu
ffa59864ee Fix CVE-2023-40267 (#14388)
CVE-2023-40267 GitPython: Insecure non-multi options in clone and clone_from is not blocked https://bugzilla.redhat.com/show_bug.cgi?id=2231474

GitPython before 3.1.32 does not block insecure non-multi options in clone and clone_from. NOTE: this issue exists because of an incomplete fix for CVE-2022-24439.

References:
gitpython-developers/GitPython@ca965ec gitpython-developers/GitPython#1609
2023-08-28 15:35:32 -04:00
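The remediation implied by the advisory text above, as a generic sketch (the repository pins its own requirements files, which are not shown here):

    # upgrade to the first GitPython release that blocks insecure non-multi clone options
    python3 -m pip install --upgrade 'GitPython>=3.1.32'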
bxbrenden
b209bc67b4 Fix typo in description of scm_update_on_launch (#14382) 2023-08-28 16:52:44 +00:00
Chandler Swift
1faea020af Fix default redis url to pass check in redis-py>4.4 (#14344)
Signed-off-by: Chandler Swift <chandler+pearson@chandlerswift.com>
Co-authored-by: Rebeccah Hunter <rhunter@redhat.com>
2023-08-25 09:48:36 -04:00
Pablo Hess
b55a099620 Clarify that the license module requires fetching subs prior (#14351)
Co-authored-by: Pablo N. Hess <phess@redhat.com>
2023-08-23 15:20:47 -04:00
David Danielsson
f6dd3cb988 Enforce mutually exclusive options in credential module of the collection (#14363) 2023-08-23 15:16:06 -04:00
Alan Rominger
c448b87c85 AAP-10891 Apply AWX_TASK_ENV when performing credential plugin lookups (#14271) 2023-08-23 13:26:12 -04:00
Rick Elrod
4dd823121a Update cryptography for CVE-2023-38325 (#14358)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-08-23 10:54:20 -05:00
Michael Abashian
ec4f10d868 Add location for locales in nginx config 2023-08-22 16:33:00 -04:00
988 changed files with 23031 additions and 1999 deletions


@@ -0,0 +1,28 @@
name: Setup images for AWX
description: Builds new awx_devel image
inputs:
github-token:
description: GitHub Token for registry access
required: true
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Log in to registry
shell: bash
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull latest devel image to warm cache
shell: bash
run: docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
- name: Build image for current source checkout
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
make docker-compose-build


@@ -0,0 +1,77 @@
name: Run AWX docker-compose
description: Runs AWX with `make docker-compose`
inputs:
github-token:
description: GitHub Token to pass to awx_devel_image
required: true
build-ui:
description: Should the UI be built?
required: false
default: false
type: boolean
outputs:
ip:
description: The IP of the tools_awx_1 container
value: ${{ steps.data.outputs.ip }}
admin-token:
description: OAuth token for admin user
value: ${{ steps.data.outputs.admin_token }}
runs:
using: composite
steps:
- name: Build awx_devel image for running checks
uses: ./.github/actions/awx_devel_image
with:
github-token: ${{ inputs.github-token }}
- name: Upgrade ansible-core
shell: bash
run: python3 -m pip install --upgrade ansible-core
- name: Install system deps
shell: bash
run: sudo apt-get install -y gettext
- name: Start AWX
shell: bash
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} \
COMPOSE_TAG=${{ github.base_ref }} \
COMPOSE_UP_OPTS="-d" \
make docker-compose
- name: Update default AWX password
shell: bash
run: |
SECONDS=0
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]; do
if [[ $SECONDS -gt 600 ]]; then
echo "Timing out, AWX never came up"
exit 1
fi
echo "Waiting for AWX..."
sleep 5
done
echo "AWX is up, updating the password..."
docker exec -i tools_awx_1 sh <<-EOSH
awx-manage update_password --username=admin --password=password
EOSH
- name: Build UI
# This must be a string comparison in composite actions:
# https://github.com/actions/runner/issues/2238
if: ${{ inputs.build-ui == 'true' }}
shell: bash
run: |
docker exec -i tools_awx_1 sh <<-EOSH
make ui-devel
EOSH
- name: Get instance data
id: data
shell: bash
run: |
AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
ADMIN_TOKEN=$(docker exec -i tools_awx_1 awx-manage create_oauth2_token --user admin)
echo "ip=$AWX_IP" >> $GITHUB_OUTPUT
echo "admin_token=$ADMIN_TOKEN" >> $GITHUB_OUTPUT


@@ -0,0 +1,19 @@
name: Upload logs
description: Upload logs from `make docker-compose` devel environment to GitHub as an artifact
inputs:
log-filename:
description: "*Unique* name of the log file"
required: true
runs:
using: composite
steps:
- name: Get AWX logs
shell: bash
run: |
docker logs tools_awx_1 > ${{ inputs.log-filename }}
- name: Upload AWX logs as artifact
uses: actions/upload-artifact@v3
with:
name: docker-compose-logs
path: ${{ inputs.log-filename }}


@@ -7,8 +7,8 @@
 ## PRs/Issues
-### Visit our mailing list
+### Visit the Forum or Matrix
-- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
+- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on either the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com) or the [Ansible Community Forum](https://forum.ansible.com/tag/awx)?
 ### Denied Submission


@@ -11,6 +11,7 @@ jobs:
 common-tests:
 name: ${{ matrix.tests.name }}
 runs-on: ubuntu-latest
+timeout-minutes: 60
 permissions:
 packages: write
 contents: read
@@ -20,6 +21,8 @@
 tests:
 - name: api-test
 command: /start_tests.sh
+- name: api-migrations
+command: /start_tests.sh test_migrations
 - name: api-lint
 command: /var/lib/awx/venv/awx/bin/tox -e linters
 - name: api-swagger
@@ -35,29 +38,42 @@
 - name: ui-test-general
 command: make ui-test-general
 steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
+- name: Build awx_devel image for running checks
+uses: ./.github/actions/awx_devel_image
+with:
+github-token: ${{ secrets.GITHUB_TOKEN }}
 - name: Run check ${{ matrix.tests.name }}
-run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner
+run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make docker-runner
 dev-env:
 runs-on: ubuntu-latest
+timeout-minutes: 60
 steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
+- uses: ./.github/actions/run_awx_devel
+id: awx
+with:
+build-ui: false
+github-token: ${{ secrets.GITHUB_TOKEN }}
 - name: Run smoke test
-run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
+run: ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v
 awx-operator:
 runs-on: ubuntu-latest
+timeout-minutes: 60
 steps:
 - name: Checkout awx
-uses: actions/checkout@v2
+uses: actions/checkout@v3
 with:
 path: awx
 - name: Checkout awx-operator
-uses: actions/checkout@v2
+uses: actions/checkout@v3
 with:
 repository: ansible/awx-operator
 path: awx-operator
@@ -67,7 +83,7 @@ jobs:
 run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
 - name: Install python ${{ env.py_version }}
-uses: actions/setup-python@v2
+uses: actions/setup-python@v4
 with:
 python-version: ${{ env.py_version }}
@@ -99,10 +115,11 @@
 collection-sanity:
 name: awx_collection sanity
 runs-on: ubuntu-latest
+timeout-minutes: 30
 strategy:
 fail-fast: false
 steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
 # The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
 - name: Upgrade ansible-core
@@ -114,3 +131,139 @@ jobs:
 # needed due to cgroupsv2. This is fixed, but a stable release
 # with the fix has not been made yet.
 ANSIBLE_TEST_PREFER_PODMAN: 1
collection-integration:
name: awx_collection integration
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
target-regex:
- name: a-h
regex: ^[a-h]
- name: i-p
regex: ^[i-p]
- name: r-z0-9
regex: ^[r-z0-9]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: false
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install dependencies for running tests
run: |
python3 -m pip install -e ./awxkit/
python3 -m pip install -r awx_collection/requirements.txt
- name: Run integration tests
run: |
echo "::remove-matcher owner=python::" # Disable annoying annotations from setup-python
echo '[general]' > ~/.tower_cli.cfg
echo 'host = https://${{ steps.awx.outputs.ip }}:8043' >> ~/.tower_cli.cfg
echo 'oauth_token = ${{ steps.awx.outputs.admin-token }}' >> ~/.tower_cli.cfg
echo 'verify_ssl = false' >> ~/.tower_cli.cfg
TARGETS="$(ls awx_collection/tests/integration/targets | grep '${{ matrix.target-regex.regex }}' | tr '\n' ' ')"
make COLLECTION_VERSION=100.100.100-git COLLECTION_TEST_TARGET="--coverage --requirements $TARGETS" test_collection_integration
env:
ANSIBLE_TEST_PREFER_PODMAN: 1
# Upload coverage report as artifact
- uses: actions/upload-artifact@v3
if: always()
with:
name: coverage-${{ matrix.target-regex.name }}
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: collection-integration-${{ matrix.target-regex.name }}.log
collection-integration-coverage-combine:
name: combine awx_collection integration coverage
runs-on: ubuntu-latest
timeout-minutes: 10
needs:
- collection-integration
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Download coverage artifacts
uses: actions/download-artifact@v3
with:
path: coverage
- name: Combine coverage
run: |
make COLLECTION_VERSION=100.100.100-git install_collection
mkdir -p ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage
cd coverage
for i in coverage-*; do
cp -rv $i/* ~/.ansible/collections/ansible_collections/awx/awx/tests/output/coverage/
done
cd ~/.ansible/collections/ansible_collections/awx/awx
ansible-test coverage combine --requirements
ansible-test coverage html
echo '## AWX Collection Integration Coverage' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
ansible-test coverage report >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
echo '## AWX Collection Integration Coverage HTML' >> $GITHUB_STEP_SUMMARY
echo 'Download the HTML artifacts to view the coverage report.' >> $GITHUB_STEP_SUMMARY
# This is a huge hack, there's no official action for removing artifacts currently.
# Also ACTIONS_RUNTIME_URL and ACTIONS_RUNTIME_TOKEN aren't available in normal run
# steps, so we have to use github-script to get them.
#
# The advantage of doing this, though, is that we save on artifact storage space.
- name: Get secret artifact runtime URL
uses: actions/github-script@v6
id: get-runtime-url
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_URL } = process.env;
return ACTIONS_RUNTIME_URL;
- name: Get secret artifact runtime token
uses: actions/github-script@v6
id: get-runtime-token
with:
result-encoding: string
script: |
const { ACTIONS_RUNTIME_TOKEN } = process.env;
return ACTIONS_RUNTIME_TOKEN;
- name: Remove intermediary artifacts
env:
ACTIONS_RUNTIME_URL: ${{ steps.get-runtime-url.outputs.result }}
ACTIONS_RUNTIME_TOKEN: ${{ steps.get-runtime-token.outputs.result }}
run: |
echo "::add-mask::${ACTIONS_RUNTIME_TOKEN}"
artifacts=$(
curl -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" \
${ACTIONS_RUNTIME_URL}_apis/pipelines/workflows/${{ github.run_id }}/artifacts?api-version=6.0-preview \
| jq -r '.value | .[] | select(.name | startswith("coverage-")) | .url'
)
for artifact in $artifacts; do
curl -i -X DELETE -H "Accept: application/json;api-version=6.0-preview" -H "Authorization: Bearer $ACTIONS_RUNTIME_TOKEN" "$artifact"
done
- name: Upload coverage report as artifact
uses: actions/upload-artifact@v3
with:
name: awx-collection-integration-coverage-html
path: ~/.ansible/collections/ansible_collections/awx/awx/tests/output/reports/coverage


@@ -12,11 +12,12 @@ jobs:
 push:
 if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
 runs-on: ubuntu-latest
+timeout-minutes: 60
 permissions:
 packages: write
 contents: read
 steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
 - name: Get python version from Makefile
 run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
@@ -28,7 +29,7 @@
 OWNER: '${{ github.repository_owner }}'
 - name: Install python ${{ env.py_version }}
-uses: actions/setup-python@v2
+uses: actions/setup-python@v4
 with:
 python-version: ${{ env.py_version }}

.github/workflows/docs.yml

@@ -0,0 +1,17 @@
---
name: Docsite CI
on:
pull_request:
jobs:
docsite-build:
name: docsite test build
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v3
- name: install tox
run: pip install tox
- name: Assure docs can be built
run: tox -e docs


@@ -19,41 +19,20 @@ jobs:
 job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
 steps:
-- uses: actions/checkout@v2
-- name: Get python version from Makefile
-run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
-- name: Install python ${{ env.py_version }}
-uses: actions/setup-python@v2
+- uses: actions/checkout@v3
+- uses: ./.github/actions/run_awx_devel
+id: awx
 with:
-python-version: ${{ env.py_version }}
-- name: Install system deps
-run: sudo apt-get install -y gettext
-- name: Log in to registry
-run: |
-echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
-- name: Pre-pull image to warm build cache
-run: |
-docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ github.base_ref }}
-- name: Build UI
-run: |
-DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make ui-devel
-- name: Start AWX
-run: |
-DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ github.base_ref }} make docker-compose &> make-docker-compose-output.log &
+build-ui: true
+github-token: ${{ secrets.GITHUB_TOKEN }}
 - name: Pull awx_cypress_base image
 run: |
 docker pull quay.io/awx/awx_cypress_base:latest
 - name: Checkout test project
-uses: actions/checkout@v2
+uses: actions/checkout@v3
 with:
 repository: ${{ github.repository_owner }}/tower-qa
 ssh-key: ${{ secrets.QA_REPO_KEY }}
@@ -65,18 +44,6 @@ jobs:
 cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
 docker build -t awx-pf-tests .
-- name: Update default AWX password
-run: |
-while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -k https://localhost:8043/api/v2/ping/)" != "200" ]]
-do
-echo "Waiting for AWX..."
-sleep 5;
-done
-echo "AWX is up, updating the password..."
-docker exec -i tools_awx_1 sh <<-EOSH
-awx-manage update_password --username=admin --password=password
-EOSH
 - name: Run E2E tests
 env:
 CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
@@ -86,7 +53,7 @@
 export COMMIT_INFO_SHA=$GITHUB_SHA
 export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
 cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
-AWX_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' tools_awx_1)
+AWX_IP=${{ steps.awx.outputs.ip }}
 printenv > .env
 echo "Executing tests:"
 docker run \
@@ -102,8 +69,7 @@
 -w /e2e \
 awx-pf-tests run --project .
-- name: Save AWX logs
-uses: actions/upload-artifact@v2
+- uses: ./.github/actions/upload_awx_devel_logs
+if: always()
 with:
-name: AWX-logs-${{ matrix.job }}
-path: make-docker-compose-output.log
+log-filename: e2e-${{ matrix.job }}.log


@@ -9,6 +9,7 @@ on:
 jobs:
 push:
 runs-on: ubuntu-latest
+timeout-minutes: 20
 permissions:
 packages: write
 contents: read


@@ -13,6 +13,7 @@ permissions:
 jobs:
 triage:
 runs-on: ubuntu-latest
+timeout-minutes: 20
 name: Label Issue
 steps:
@@ -26,9 +27,10 @@
 community:
 runs-on: ubuntu-latest
+timeout-minutes: 20
 name: Label Issue - Community
 steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
 - uses: actions/setup-python@v4
 - name: Install python requests
 run: pip install requests


@@ -14,6 +14,7 @@ permissions:
 jobs:
 triage:
 runs-on: ubuntu-latest
+timeout-minutes: 20
 name: Label PR
 steps:
@@ -25,9 +26,10 @@
 community:
 runs-on: ubuntu-latest
+timeout-minutes: 20
 name: Label PR - Community
 steps:
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
 - uses: actions/setup-python@v4
 - name: Install python requests
 run: pip install requests


@@ -7,8 +7,10 @@ on:
 types: [opened, edited, reopened, synchronize]
 jobs:
 pr-check:
+if: github.repository_owner == 'ansible' && endsWith(github.repository, 'awx')
 name: Scan PR description for semantic versioning keywords
 runs-on: ubuntu-latest
+timeout-minutes: 20
 permissions:
 packages: write
 contents: read


@@ -1,35 +0,0 @@
---
name: Check body for reference to jira
on:
pull_request:
branches:
- release_**
jobs:
pr-check:
if: github.repository_owner == 'ansible' && github.repository != 'awx'
name: Scan PR description for JIRA links
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Check for JIRA lines
env:
PR_BODY: ${{ github.event.pull_request.body }}
run: |
echo "$PR_BODY" | grep "JIRA: None" > no_jira
echo "$PR_BODY" | grep "JIRA: https://.*[0-9]+"> jira
exit 0
# We exit 0 and set the shell to prevent the returns from the greps from failing this step
# See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash {0}
- name: Check for exactly one item
run: |
if [ $(cat no_jira jira | wc -l) != 1 ] ; then
echo "The PR body must contain exactly one of [ 'JIRA: None' or 'JIRA: <one or more links>' ]"
echo "We counted $(cat no_jira jira | wc -l)"
exit 255;
else
exit 0;
fi


@@ -13,17 +13,18 @@ permissions:
 jobs:
 promote:
 if: endsWith(github.repository, '/awx')
 runs-on: ubuntu-latest
+timeout-minutes: 90
 steps:
 - name: Checkout awx
-uses: actions/checkout@v2
+uses: actions/checkout@v3
 - name: Get python version from Makefile
 run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
 - name: Install python ${{ env.py_version }}
-uses: actions/setup-python@v2
+uses: actions/setup-python@v4
 with:
 python-version: ${{ env.py_version }}
@@ -40,9 +41,13 @@
 if: ${{ github.repository_owner != 'ansible' }}
 - name: Build collection and publish to galaxy
+env:
+COLLECTION_NAMESPACE: ${{ env.collection_namespace }}
+COLLECTION_VERSION: ${{ github.event.release.tag_name }}
+COLLECTION_TEMPLATE_VERSION: true
 run: |
-COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
+make build_collection
-if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
+if [ "$(curl -L --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
 echo "Galaxy release already done"; \
 else \
 ansible-galaxy collection publish \


@@ -23,6 +23,7 @@ jobs:
stage: stage:
if: endsWith(github.repository, '/awx') if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 90
permissions: permissions:
packages: write packages: write
contents: write contents: write
@@ -44,7 +45,7 @@ jobs:
exit 0 exit 0
- name: Checkout awx - name: Checkout awx
uses: actions/checkout@v2 uses: actions/checkout@v3
with: with:
path: awx path: awx
@@ -52,18 +53,18 @@ jobs:
         run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
       - name: Install python ${{ env.py_version }}
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
         with:
           python-version: ${{ env.py_version }}
       - name: Checkout awx-logos
-        uses: actions/checkout@v2
+        uses: actions/checkout@v3
         with:
           repository: ansible/awx-logos
           path: awx-logos
       - name: Checkout awx-operator
-        uses: actions/checkout@v2
+        uses: actions/checkout@v3
         with:
           repository: ${{ github.repository_owner }}/awx-operator
           path: awx-operator


@@ -9,6 +9,7 @@ jobs:
     name: Update Dependabot Prs
     if: contains(github.event.pull_request.labels.*.name, 'dependencies') && contains(github.event.pull_request.labels.*.name, 'component:ui')
     runs-on: ubuntu-latest
+    timeout-minutes: 20
     steps:
       - name: Checkout branch


@@ -13,17 +13,18 @@ on:
 jobs:
   push:
     runs-on: ubuntu-latest
+    timeout-minutes: 60
     permissions:
       packages: write
       contents: read
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Get python version from Makefile
         run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
       - name: Install python ${{ env.py_version }}
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
         with:
           python-version: ${{ env.py_version }}

.gitignore

@@ -165,3 +165,7 @@ use_dev_supervisor.txt
 awx/ui_next/src
 awx/ui_next/build
+# Docs build stuff
+docs/docsite/build/
+_readthedocs/

.gitleaks.toml (new file)

@@ -0,0 +1,5 @@
+[allowlist]
+description = "Documentation contains example secrets and passwords"
+paths = [
+    "docs/docsite/rst/administration/oauth2_token_auth.rst",
+]

.pip-tools.toml (new file)

@@ -0,0 +1,5 @@
+[tool.pip-tools]
+resolver = "backtracking"
+allow-unsafe = true
+strip-extras = true
+quiet = true

.readthedocs.yaml (new file)

@@ -0,0 +1,16 @@
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+version: 2
+build:
+  os: ubuntu-22.04
+  tools:
+    python: >-
+      3.11
+  commands:
+    - pip install --user tox
+    - python3 -m tox -e docs --notest -v
+    - python3 -m tox -e docs --skip-pkg-install -q
+    - mkdir -p _readthedocs/html/
+    - mv docs/docsite/build/html/* _readthedocs/html/


@@ -10,6 +10,7 @@ ignore: |
   tools/docker-compose/_sources
   # django template files
   awx/api/templates/instance_install_bundle/**
+  .readthedocs.yaml
 extends: default


@@ -6,6 +6,7 @@ DOCKER_COMPOSE ?= docker-compose
 OFFICIAL ?= no
 NODE ?= node
 NPM_BIN ?= npm
+KIND_BIN ?= $(shell which kind)
 CHROMIUM_BIN=/tmp/chrome-linux/chrome
 GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
 MANAGEMENT_COMMAND ?= awx-manage
@@ -78,7 +79,7 @@ I18N_FLAG_FILE = .i18n_built
     sdist \
     ui-release ui-devel \
     VERSION PYTHON_VERSION docker-compose-sources \
-    .git/hooks/pre-commit github_ci_setup github_ci_runner
+    .git/hooks/pre-commit
 clean-tmp:
     rm -rf tmp/
@@ -323,21 +324,16 @@ test:
     cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
     awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
-## Login to Github container image registry, pull image, then build image.
-github_ci_setup:
-    # GITHUB_ACTOR is automatic github actions env var
-    # CI_GITHUB_TOKEN is defined in .github files
-    echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
-    docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
-    $(MAKE) docker-compose-build
+test_migrations:
+    if [ "$(VENV_BASE)" ]; then \
+        . $(VENV_BASE)/awx/bin/activate; \
+    fi; \
+    PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider --migrations -m migration_test $(PYTEST_ARGS) $(TEST_DIRS)
 ## Runs AWX_DOCKER_CMD inside a new docker container.
 docker-runner:
     docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
-## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
-github_ci_runner: github_ci_setup docker-runner
 test_collection:
     rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
     if [ "$(VENV_BASE)" ]; then \
@@ -383,7 +379,7 @@ test_collection_sanity:
     cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
 test_collection_integration: install_collection
-    cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
+    cd $(COLLECTION_INSTALL) && ansible-test integration -vvv $(COLLECTION_TEST_TARGET)
 test_unit:
     @if [ "$(VENV_BASE)" ]; then \
@@ -664,6 +660,9 @@ awx-kube-dev-build: Dockerfile.kube-dev
         -t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
+kind-dev-load: awx-kube-dev-build
+    $(KIND_BIN) load docker-image $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG)
 # Translation TASKS
 # --------------------------------------
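
For context on the new test_migrations target, here is a minimal, hypothetical sketch of the kind of test it would select with `-m migration_test`; the marker name comes from the target above, while the app label, migration name, and assertions are placeholders that rely only on stock Django migration-executor APIs.

# Hypothetical sketch of a test picked up by `py.test --migrations -m migration_test` (placeholder names):
import pytest
from django.db import connection
from django.db.migrations.executor import MigrationExecutor

@pytest.mark.migration_test
@pytest.mark.django_db
def test_migrate_to_known_state():
    executor = MigrationExecutor(connection)
    target = [("main", "0001_initial")]            # placeholder migration target
    executor.migrate(target)                       # roll the schema to that state
    executor.loader.build_graph()                  # refresh the loader after migrating
    assert executor.migration_plan(target) == []   # nothing left to apply for that target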


@@ -1,5 +1,5 @@
 [![CI](https://github.com/ansible/awx/actions/workflows/ci.yml/badge.svg?branch=devel)](https://github.com/ansible/awx/actions/workflows/ci.yml) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
-[![IRC Chat - #ansible-awx](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://libera.chat)
+[![Ansible Matrix](https://img.shields.io/badge/matrix-Ansible%20Community-blueviolet.svg?logo=matrix)](https://chat.ansible.im/#/welcome) [![Ansible Discourse](https://img.shields.io/badge/discourse-Ansible%20Community-yellowgreen.svg?logo=discourse)](https://forum.ansible.com)
 <img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -30,12 +30,12 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
 Code of Conduct
 ---------------
 We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
 Get Involved
 ------------
 We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
-- Join the `#ansible-awx` channel on irc.libera.chat
-- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
+- Join the [Ansible AWX channel on Matrix](https://matrix.to/#/#awx:ansible.com)
+- Join the [Ansible Community Forum](https://forum.ansible.com)


@@ -52,39 +52,14 @@ try:
 except ImportError:  # pragma: no cover
     MODE = 'production'
-import hashlib
 try:
     import django  # noqa: F401
-    HAS_DJANGO = True
 except ImportError:
-    HAS_DJANGO = False
+    pass
 else:
-    from django.db.backends.base import schema
-    from django.db.models import indexes
-    from django.db.backends.utils import names_digest
     from django.db import connection
-if HAS_DJANGO is True:
-    # See upgrade blocker note in requirements/README.md
-    try:
-        names_digest('foo', 'bar', 'baz', length=8)
-    except ValueError:
-        def names_digest(*args, length):
-            """
-            Generate a 32-bit digest of a set of arguments that can be used to shorten
-            identifying names. Support for use in FIPS environments.
-            """
-            h = hashlib.md5(usedforsecurity=False)
-            for arg in args:
-                h.update(arg.encode())
-            return h.hexdigest()[:length]
-        schema.names_digest = names_digest
-        indexes.names_digest = names_digest
 def find_commands(management_dir):
     # Modified version of function from django/core/management/__init__.py.


@@ -3233,7 +3233,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
         if get_field_from_model_or_attrs('host_config_key') and not inventory:
             raise serializers.ValidationError({'host_config_key': _("Cannot enable provisioning callback without an inventory set.")})
-        prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
+        prompting_error_message = _("You must either set a default value or ask to prompt on launch.")
         if project is None:
             raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
         elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
@@ -5356,10 +5356,16 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
 class InstanceLinkSerializer(BaseSerializer):
     class Meta:
         model = InstanceLink
-        fields = ('source', 'target', 'link_state')
+        fields = ('id', 'url', 'related', 'source', 'target', 'link_state')
-    source = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
-    target = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
+    source = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
+    target = serializers.SlugRelatedField(slug_field="hostname", queryset=Instance.objects.all())
+    def get_related(self, obj):
+        res = super(InstanceLinkSerializer, self).get_related(obj)
+        res['source_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.source.id})
+        res['target_instance'] = self.reverse('api:instance_detail', kwargs={'pk': obj.target.id})
+        return res
 class InstanceNodeSerializer(BaseSerializer):
@@ -5376,6 +5382,7 @@ class InstanceSerializer(BaseSerializer):
     jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
     jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
     health_check_pending = serializers.SerializerMethodField()
+    peers = serializers.SlugRelatedField(many=True, required=False, slug_field="hostname", queryset=Instance.objects.all())
     class Meta:
         model = Instance
@@ -5412,6 +5419,8 @@ class InstanceSerializer(BaseSerializer):
             'node_state',
             'ip_address',
             'listener_port',
+            'peers',
+            'peers_from_control_nodes',
         )
         extra_kwargs = {
             'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
@@ -5464,22 +5473,57 @@ class InstanceSerializer(BaseSerializer):
     def get_health_check_pending(self, obj):
         return obj.health_check_pending
-    def validate(self, data):
-        if self.instance:
-            if self.instance.node_type == Instance.Types.HOP:
-                raise serializers.ValidationError("Hop node instances may not be changed.")
-        else:
-            if not settings.IS_K8S:
-                raise serializers.ValidationError("Can only create instances on Kubernetes or OpenShift.")
-        return data
+    def validate(self, attrs):
+        def get_field_from_model_or_attrs(fd):
+            return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
+        def check_peers_changed():
+            '''
+            return True if
+            - 'peers' in attrs
+            - instance peers matches peers in attrs
+            '''
+            return self.instance and 'peers' in attrs and set(self.instance.peers.all()) != set(attrs['peers'])
+        if not self.instance and not settings.IS_K8S:
+            raise serializers.ValidationError(_("Can only create instances on Kubernetes or OpenShift."))
+        node_type = get_field_from_model_or_attrs("node_type")
+        peers_from_control_nodes = get_field_from_model_or_attrs("peers_from_control_nodes")
+        listener_port = get_field_from_model_or_attrs("listener_port")
+        peers = attrs.get('peers', [])
+        if peers_from_control_nodes and node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
+            raise serializers.ValidationError(_("peers_from_control_nodes can only be enabled for execution or hop nodes."))
+        if node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
+            if check_peers_changed():
+                raise serializers.ValidationError(
+                    _("Setting peers manually for control nodes is not allowed. Enable peers_from_control_nodes on the hop and execution nodes instead.")
+                )
+        if not listener_port and peers_from_control_nodes:
+            raise serializers.ValidationError(_("Field listener_port must be a valid integer when peers_from_control_nodes is enabled."))
+        if not listener_port and self.instance and self.instance.peers_from.exists():
+            raise serializers.ValidationError(_("Field listener_port must be a valid integer when other nodes peer to it."))
+        for peer in peers:
+            if peer.listener_port is None:
+                raise serializers.ValidationError(_("Field listener_port must be set on peer ") + peer.hostname + ".")
+        if not settings.IS_K8S:
+            if check_peers_changed():
+                raise serializers.ValidationError(_("Cannot change peers."))
+        return super().validate(attrs)
     def validate_node_type(self, value):
-        if not self.instance:
-            if value not in (Instance.Types.EXECUTION,):
-                raise serializers.ValidationError("Can only create execution nodes.")
-        else:
-            if self.instance.node_type != value:
-                raise serializers.ValidationError("Cannot change node type.")
+        if not self.instance and value not in [Instance.Types.HOP, Instance.Types.EXECUTION]:
+            raise serializers.ValidationError(_("Can only create execution or hop nodes."))
+        if self.instance and self.instance.node_type != value:
+            raise serializers.ValidationError(_("Cannot change node type."))
         return value
@@ -5487,30 +5531,41 @@ class InstanceSerializer(BaseSerializer):
         if self.instance:
             if value != self.instance.node_state:
                 if not settings.IS_K8S:
-                    raise serializers.ValidationError("Can only change the state on Kubernetes or OpenShift.")
+                    raise serializers.ValidationError(_("Can only change the state on Kubernetes or OpenShift."))
                 if value != Instance.States.DEPROVISIONING:
-                    raise serializers.ValidationError("Can only change instances to the 'deprovisioning' state.")
+                    raise serializers.ValidationError(_("Can only change instances to the 'deprovisioning' state."))
-                if self.instance.node_type not in (Instance.Types.EXECUTION,):
-                    raise serializers.ValidationError("Can only deprovision execution nodes.")
+                if self.instance.node_type not in (Instance.Types.EXECUTION, Instance.Types.HOP):
+                    raise serializers.ValidationError(_("Can only deprovision execution or hop nodes."))
         else:
             if value and value != Instance.States.INSTALLED:
-                raise serializers.ValidationError("Can only create instances in the 'installed' state.")
+                raise serializers.ValidationError(_("Can only create instances in the 'installed' state."))
         return value
     def validate_hostname(self, value):
         """
-        - Hostname cannot be "localhost" - but can be something like localhost.domain
-        - Cannot change the hostname of an-already instantiated & initialized Instance object
+        Cannot change the hostname
         """
         if self.instance and self.instance.hostname != value:
-            raise serializers.ValidationError("Cannot change hostname.")
+            raise serializers.ValidationError(_("Cannot change hostname."))
         return value
     def validate_listener_port(self, value):
-        if self.instance and self.instance.listener_port != value:
-            raise serializers.ValidationError("Cannot change listener port.")
+        """
+        Cannot change listener port, unless going from none to integer, and vice versa
+        """
+        if value and self.instance and self.instance.listener_port and self.instance.listener_port != value:
+            raise serializers.ValidationError(_("Cannot change listener port."))
+        return value
+    def validate_peers_from_control_nodes(self, value):
+        """
+        Can only enable for K8S based deployments
+        """
+        if value and not settings.IS_K8S:
+            raise serializers.ValidationError(_("Can only be enabled on Kubernetes or Openshift."))
         return value
@@ -5518,7 +5573,19 @@ class InstanceSerializer(BaseSerializer):
 class InstanceHealthCheckSerializer(BaseSerializer):
     class Meta:
         model = Instance
-        read_only_fields = ('uuid', 'hostname', 'version', 'last_health_check', 'errors', 'cpu', 'memory', 'cpu_capacity', 'mem_capacity', 'capacity')
+        read_only_fields = (
+            'uuid',
+            'hostname',
+            'ip_address',
+            'version',
+            'last_health_check',
+            'errors',
+            'cpu',
+            'memory',
+            'cpu_capacity',
+            'mem_capacity',
+            'capacity',
+        )
         fields = read_only_fields
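
To make the serializer changes above concrete, here is a hypothetical request that exercises the new peer-related fields; the URL, credentials, and hostnames are placeholders, and the endpoint is the standard instances list API.

import requests

# Hypothetical creation of a hop node using the new fields (all values are placeholders):
resp = requests.post(
    "https://awx.example.org/api/v2/instances/",
    auth=("admin", "password"),
    json={
        "hostname": "hop1.example.org",
        "node_type": "hop",                 # only execution or hop nodes may be created
        "listener_port": 27199,             # required when peers_from_control_nodes is enabled
        "peers_from_control_nodes": True,
        "peers": [],                        # peers are referenced by hostname; each peer needs a listener_port
    },
)
resp.raise_for_status()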


@@ -3,21 +3,35 @@ receptor_group: awx
 receptor_verify: true
 receptor_tls: true
 receptor_mintls13: false
+{% if instance.node_type == "execution" %}
 receptor_work_commands:
   ansible-runner:
     command: ansible-runner
     params: worker
     allowruntimeparams: true
     verifysignature: true
+additional_python_packages:
+  - ansible-runner
+{% endif %}
 custom_worksign_public_keyfile: receptor/work_public_key.pem
 custom_tls_certfile: receptor/tls/receptor.crt
 custom_tls_keyfile: receptor/tls/receptor.key
 custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
 receptor_protocol: 'tcp'
+{% if instance.listener_port %}
 receptor_listener: true
 receptor_port: {{ instance.listener_port }}
-receptor_dependencies:
-  - python39-pip
+{% else %}
+receptor_listener: false
+{% endif %}
+{% if peers %}
+receptor_peers:
+{% for peer in peers %}
+  - host: {{ peer.host }}
+    port: {{ peer.port }}
+    protocol: tcp
+{% endfor %}
+{% endif %}
 {% verbatim %}
 podman_user: "{{ receptor_user }}"
 podman_group: "{{ receptor_group }}"


@@ -1,20 +1,16 @@
-{% verbatim %}
 ---
 - hosts: all
   become: yes
   tasks:
     - name: Create the receptor user
       user:
+{% verbatim %}
         name: "{{ receptor_user }}"
+{% endverbatim %}
         shell: /bin/bash
-    - name: Enable Copr repo for Receptor
-      command: dnf copr enable ansible-awx/receptor -y
+{% if instance.node_type == "execution" %}
     - import_role:
         name: ansible.receptor.podman
+{% endif %}
     - import_role:
         name: ansible.receptor.setup
-    - name: Install ansible-runner
-      pip:
-        name: ansible-runner
-        executable: pip3.9
-{% endverbatim %}


@@ -1,4 +1,4 @@
 ---
 collections:
   - name: ansible.receptor
-    version: 1.1.0
+    version: 2.0.2


@@ -128,6 +128,10 @@ logger = logging.getLogger('awx.api.views')
 def unpartitioned_event_horizon(cls):
+    with connection.cursor() as cursor:
+        cursor.execute(f"SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE table_name = '_unpartitioned_{cls._meta.db_table}';")
+        if not cursor.fetchone():
+            return 0
     with connection.cursor() as cursor:
         try:
             cursor.execute(f'SELECT MAX(id) FROM _unpartitioned_{cls._meta.db_table}')
@@ -341,17 +345,18 @@ class InstanceDetail(RetrieveUpdateAPIView):
     def update_raw_data(self, data):
         # these fields are only valid on creation of an instance, so they unwanted on detail view
-        data.pop('listener_port', None)
         data.pop('node_type', None)
         data.pop('hostname', None)
+        data.pop('ip_address', None)
         return super(InstanceDetail, self).update_raw_data(data)
     def update(self, request, *args, **kwargs):
         r = super(InstanceDetail, self).update(request, *args, **kwargs)
         if status.is_success(r.status_code):
             obj = self.get_object()
-            obj.set_capacity_value()
-            obj.save(update_fields=['capacity'])
+            capacity_changed = obj.set_capacity_value()
+            if capacity_changed:
+                obj.save(update_fields=['capacity'])
             r.data = serializers.InstanceSerializer(obj, context=self.get_serializer_context()).to_representation(obj)
         return r


@@ -6,6 +6,8 @@ import io
 import ipaddress
 import os
 import tarfile
+import time
+import re
 import asn1
 from awx.api import serializers
@@ -40,6 +42,8 @@ RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
 # │   │   └── receptor.key
 # │   └── work-public-key.pem
 # └── requirements.yml
 class InstanceInstallBundle(GenericAPIView):
     name = _('Install Bundle')
     model = models.Instance
@@ -49,9 +53,9 @@ class InstanceInstallBundle(GenericAPIView):
     def get(self, request, *args, **kwargs):
         instance_obj = self.get_object()
-        if instance_obj.node_type not in ('execution',):
+        if instance_obj.node_type not in ('execution', 'hop'):
             return Response(
-                data=dict(msg=_('Install bundle can only be generated for execution nodes.')),
+                data=dict(msg=_('Install bundle can only be generated for execution or hop nodes.')),
                 status=status.HTTP_400_BAD_REQUEST,
             )
@@ -66,37 +70,37 @@ class InstanceInstallBundle(GenericAPIView):
             # generate and write the receptor key to receptor/tls/receptor.key in the tar file
             key, cert = generate_receptor_tls(instance_obj)
+            def tar_addfile(tarinfo, filecontent):
+                tarinfo.mtime = time.time()
+                tarinfo.size = len(filecontent)
+                tar.addfile(tarinfo, io.BytesIO(filecontent))
             key_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.key")
-            key_tarinfo.size = len(key)
-            tar.addfile(key_tarinfo, io.BytesIO(key))
+            tar_addfile(key_tarinfo, key)
             cert_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.crt")
             cert_tarinfo.size = len(cert)
-            tar.addfile(cert_tarinfo, io.BytesIO(cert))
+            tar_addfile(cert_tarinfo, cert)
             # generate and write install_receptor.yml to the tar file
-            playbook = generate_playbook().encode('utf-8')
+            playbook = generate_playbook(instance_obj).encode('utf-8')
             playbook_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/install_receptor.yml")
-            playbook_tarinfo.size = len(playbook)
-            tar.addfile(playbook_tarinfo, io.BytesIO(playbook))
+            tar_addfile(playbook_tarinfo, playbook)
             # generate and write inventory.yml to the tar file
             inventory_yml = generate_inventory_yml(instance_obj).encode('utf-8')
             inventory_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/inventory.yml")
-            inventory_yml_tarinfo.size = len(inventory_yml)
-            tar.addfile(inventory_yml_tarinfo, io.BytesIO(inventory_yml))
+            tar_addfile(inventory_yml_tarinfo, inventory_yml)
             # generate and write group_vars/all.yml to the tar file
             group_vars = generate_group_vars_all_yml(instance_obj).encode('utf-8')
             group_vars_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/group_vars/all.yml")
-            group_vars_tarinfo.size = len(group_vars)
-            tar.addfile(group_vars_tarinfo, io.BytesIO(group_vars))
+            tar_addfile(group_vars_tarinfo, group_vars)
             # generate and write requirements.yml to the tar file
             requirements_yml = generate_requirements_yml().encode('utf-8')
             requirements_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/requirements.yml")
-            requirements_yml_tarinfo.size = len(requirements_yml)
-            tar.addfile(requirements_yml_tarinfo, io.BytesIO(requirements_yml))
+            tar_addfile(requirements_yml_tarinfo, requirements_yml)
             # respond with the tarfile
             f.seek(0)
@@ -105,8 +109,10 @@ class InstanceInstallBundle(GenericAPIView):
     return response
-def generate_playbook():
-    return render_to_string("instance_install_bundle/install_receptor.yml")
+def generate_playbook(instance_obj):
+    playbook_yaml = render_to_string("instance_install_bundle/install_receptor.yml", context=dict(instance=instance_obj))
+    # convert consecutive newlines with a single newline
+    return re.sub(r'\n+', '\n', playbook_yaml)
 def generate_requirements_yml():
@@ -118,7 +124,12 @@ def generate_inventory_yml(instance_obj):
 def generate_group_vars_all_yml(instance_obj):
-    return render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj))
+    peers = []
+    for instance in instance_obj.peers.all():
+        peers.append(dict(host=instance.hostname, port=instance.listener_port))
+    all_yaml = render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj, peers=peers))
+    # convert consecutive newlines with a single newline
+    return re.sub(r'\n+', '\n', all_yaml)
 def generate_receptor_tls(instance_obj):


@@ -418,6 +418,10 @@ class SettingsWrapper(UserSettingsHolder):
         """Get value while accepting the in-memory cache if key is available"""
         with _ctit_db_wrapper(trans_safe=True):
             return self._get_local(name)
+        # If the last line did not return, that means we hit a database error
+        # in that case, we should not have a local cache value
+        # thus, return empty as a signal to use the default
+        return empty
     def __getattr__(self, name):
         value = empty


@@ -13,6 +13,7 @@ from unittest import mock
 from django.conf import LazySettings
 from django.core.cache.backends.locmem import LocMemCache
 from django.core.exceptions import ImproperlyConfigured
+from django.db.utils import Error as DBError
 from django.utils.translation import gettext_lazy as _
 import pytest
@@ -331,3 +332,18 @@ def test_in_memory_cache_works(settings):
     with mock.patch.object(settings, '_get_local') as mock_get:
         assert settings.AWX_VAR == 'DEFAULT'
         mock_get.assert_not_called()
+@pytest.mark.defined_in_file(AWX_VAR=[])
+def test_getattr_with_database_error(settings):
+    """
+    If a setting is defined via the registry and has a null-ish default which is not None
+    then referencing that setting during a database outage should give that default
+    this is regression testing for a bug where it would return None
+    """
+    settings.registry.register('AWX_VAR', field_class=fields.StringListField, default=[], category=_('System'), category_slug='system')
+    settings._awx_conf_memoizedcache.clear()
+    with mock.patch('django.db.backends.base.base.BaseDatabaseWrapper.ensure_connection') as mock_ensure:
+        mock_ensure.side_effect = DBError('for test')
+        assert settings.AWX_VAR == []


@@ -79,7 +79,6 @@ __all__ = [
     'get_user_queryset',
     'check_user_access',
     'check_user_access_with_errors',
-    'user_accessible_objects',
     'consumer_access',
 ]
@@ -136,10 +135,6 @@ def register_access(model_class, access_class):
     access_registry[model_class] = access_class
-def user_accessible_objects(user, role_name):
-    return ResourceMixin._accessible_objects(User, user, role_name)
 def get_user_queryset(user, model_class):
     """
     Return a queryset for the given model_class containing only the instances


@@ -694,16 +694,18 @@ register(
     category_slug='logging',
 )
 register(
-    'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
+    'LOG_AGGREGATOR_ACTION_QUEUE_SIZE',
     field_class=fields.IntegerField,
-    default=1,
+    default=131072,
     min_value=1,
-    label=_('Maximum disk persistence for external log aggregation (in GB)'),
+    label=_('Maximum number of messages that can be stored in the log action queue'),
     help_text=_(
-        'Amount of data to store (in gigabytes) during an outage of '
-        'the external log aggregator (defaults to 1). '
-        'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. '
-        'Notably, this is used for the rsyslogd main queue (for input messages).'
+        'Defines how large the rsyslog action queue can grow in number of messages '
+        'stored. This can have an impact on memory utilization. When the queue '
+        'reaches 75% of this number, the queue will start writing to disk '
+        '(queue.highWatermark in rsyslog). When it reaches 90%, NOTICE, INFO, and '
+        'DEBUG messages will start to be discarded (queue.discardMark with '
+        'queue.discardSeverity=5).'
     ),
     category=_('Logging'),
     category_slug='logging',
@@ -718,8 +720,7 @@ register(
         'Amount of data to store (in gigabytes) if an rsyslog action takes time '
         'to process an incoming message (defaults to 1). '
         'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
-        'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
-        'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
+        'It stores files in the directory specified by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
     ),
     category=_('Logging'),
     category_slug='logging',


@@ -4,6 +4,8 @@ from urllib.parse import urljoin, quote
 from django.utils.translation import gettext_lazy as _
 import requests
+import base64
+import binascii
 conjur_inputs = {
@@ -50,6 +52,13 @@ conjur_inputs = {
 }
+def _is_base64(s: str) -> bool:
+    try:
+        return base64.b64encode(base64.b64decode(s.encode("utf-8"))) == s.encode("utf-8")
+    except binascii.Error:
+        return False
 def conjur_backend(**kwargs):
     url = kwargs['url']
     api_key = kwargs['api_key']
@@ -77,7 +86,7 @@ def conjur_backend(**kwargs):
     token = resp.content.decode('utf-8')
     lookup_kwargs = {
-        'headers': {'Authorization': 'Token token="{}"'.format(token)},
+        'headers': {'Authorization': 'Token token="{}"'.format(token if _is_base64(token) else base64.b64encode(token.encode('utf-8')).decode('utf-8'))},
         'allow_redirects': False,
     }
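
A hypothetical illustration of how the helper above decides whether the Conjur token needs to be base64-encoded before it is placed in the Authorization header; the sample strings are placeholders.

# Hypothetical examples (placeholder values): an already-base64 token is used as-is,
# anything else is base64-encoded first.
assert _is_base64("dGhpcyBpcyBhIHRva2Vu") is True
assert _is_base64('{"protected": "...", "payload": "..."}') is False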


@@ -2,25 +2,29 @@ from .plugin import CredentialPlugin
 from django.conf import settings
 from django.utils.translation import gettext_lazy as _
-from thycotic.secrets.vault import SecretsVault
+from delinea.secrets.vault import PasswordGrantAuthorizer, SecretsVault
+from base64 import b64decode
 dsv_inputs = {
     'fields': [
         {
             'id': 'tenant',
             'label': _('Tenant'),
-            'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretservercloud.com'),
+            'help_text': _('The tenant e.g. "ex" when the URL is https://ex.secretsvaultcloud.com'),
             'type': 'string',
         },
         {
             'id': 'tld',
             'label': _('Top-level Domain (TLD)'),
-            'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretservercloud.com'),
+            'help_text': _('The TLD of the tenant e.g. "com" when the URL is https://ex.secretsvaultcloud.com'),
-            'choices': ['ca', 'com', 'com.au', 'com.sg', 'eu'],
+            'choices': ['ca', 'com', 'com.au', 'eu'],
             'default': 'com',
         },
-        {'id': 'client_id', 'label': _('Client ID'), 'type': 'string'},
+        {
+            'id': 'client_id',
+            'label': _('Client ID'),
+            'type': 'string',
+        },
         {
             'id': 'client_secret',
             'label': _('Client Secret'),
@@ -41,8 +45,16 @@ dsv_inputs = {
             'help_text': _('The field to extract from the secret'),
             'type': 'string',
         },
+        {
+            'id': 'secret_decoding',
+            'label': _('Should the secret be base64 decoded?'),
+            'help_text': _('Specify whether the secret should be base64 decoded, typically used for storing files, such as SSH keys'),
+            'choices': ['No Decoding', 'Decode Base64'],
+            'type': 'string',
+            'default': 'No Decoding',
+        },
     ],
-    'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'],
+    'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field', 'secret_decoding'],
 }
 if settings.DEBUG:
@@ -51,12 +63,32 @@ if settings.DEBUG:
             'id': 'url_template',
             'label': _('URL template'),
             'type': 'string',
-            'default': 'https://{}.secretsvaultcloud.{}/v1',
+            'default': 'https://{}.secretsvaultcloud.{}',
         }
     )
-dsv_plugin = CredentialPlugin(
-    'Thycotic DevOps Secrets Vault',
-    dsv_inputs,
-    lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']],  # fmt: skip
-)
+def dsv_backend(**kwargs):
+    tenant_name = kwargs['tenant']
+    tenant_tld = kwargs.get('tld', 'com')
+    tenant_url_template = kwargs.get('url_template', 'https://{}.secretsvaultcloud.{}')
+    client_id = kwargs['client_id']
+    client_secret = kwargs['client_secret']
+    secret_path = kwargs['path']
+    secret_field = kwargs['secret_field']
+    # providing a default value to remain backward compatible for secrets that have not specified this option
+    secret_decoding = kwargs.get('secret_decoding', 'No Decoding')
+    tenant_url = tenant_url_template.format(tenant_name, tenant_tld.strip("."))
+    authorizer = PasswordGrantAuthorizer(tenant_url, client_id, client_secret)
+    dsv_secret = SecretsVault(tenant_url, authorizer).get_secret(secret_path)
+    # files can be uploaded base64 decoded to DSV and thus decoding it only, when asked for
+    if secret_decoding == 'Decode Base64':
+        return b64decode(dsv_secret['data'][secret_field]).decode()
+    return dsv_secret['data'][secret_field]
+dsv_plugin = CredentialPlugin(name='Thycotic DevOps Secrets Vault', inputs=dsv_inputs, backend=dsv_backend)
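
A hypothetical direct call to the new dsv_backend, showing the secret_decoding option; every value below is a placeholder.

# Hypothetical invocation of dsv_backend (placeholder values):
ssh_key = dsv_backend(
    tenant="ex",
    tld="com",
    client_id="xxxxxxxx",
    client_secret="xxxxxxxx",
    path="/automation/ssh-keys/deploy",
    secret_field="private_key",
    secret_decoding="Decode Base64",  # base64-decode file-type secrets such as SSH keys
)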


@@ -54,7 +54,9 @@ tss_inputs = {
 def tss_backend(**kwargs):
     if kwargs.get("domain"):
-        authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
+        authorizer = DomainPasswordGrantAuthorizer(
+            base_url=kwargs['server_url'], username=kwargs['username'], domain=kwargs['domain'], password=kwargs['password']
+        )
     else:
         authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
     secret_server = SecretServer(kwargs['server_url'], authorizer)


@@ -37,8 +37,11 @@ class Control(object):
     def running(self, *args, **kwargs):
         return self.control_with_reply('running', *args, **kwargs)
-    def cancel(self, task_ids, *args, **kwargs):
-        return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
+    def cancel(self, task_ids, with_reply=True):
+        if with_reply:
+            return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
+        else:
+            self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
     def schedule(self, *args, **kwargs):
         return self.control_with_reply('schedule', *args, **kwargs)
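
A hypothetical caller of the changed cancel() signature; the service name, import path, and task id are assumptions based on the surrounding code.

# Hypothetical usage of the new with_reply flag (placeholder task id):
from awx.main.dispatch.control import Control

Control('dispatcher').cancel(['task-uuid'], with_reply=False)  # fire-and-forget, no reply expected
replies = Control('dispatcher').cancel(['task-uuid'])          # default behavior still waits for a reply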


@@ -89,8 +89,9 @@ class AWXConsumerBase(object):
             if task_ids and not msg:
                 logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
-            with pg_bus_conn() as conn:
-                conn.notify(reply_queue, json.dumps(msg))
+            if reply_queue is not None:
+                with pg_bus_conn() as conn:
+                    conn.notify(reply_queue, json.dumps(msg))
         elif control == 'reload':
             for worker in self.pool.workers:
                 worker.quit()


@@ -24,6 +24,9 @@ class Command(BaseCommand):
     def add_arguments(self, parser):
         parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
         parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
+        parser.add_argument(
+            '--batch-size', dest='batch_size', type=int, default=500, metavar='X', help='Remove activity stream events in batch of X events. Defaults to 500.'
+        )
     def init_logging(self):
         log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))
@@ -48,7 +51,7 @@ class Command(BaseCommand):
             else:
                 pks_to_delete.add(asobj.pk)
             # Cleanup objects in batches instead of deleting each one individually.
-            if len(pks_to_delete) >= 500:
+            if len(pks_to_delete) >= self.batch_size:
                 ActivityStream.objects.filter(pk__in=pks_to_delete).delete()
                 n_deleted_items += len(pks_to_delete)
                 pks_to_delete.clear()
@@ -63,4 +66,5 @@ class Command(BaseCommand):
         self.days = int(options.get('days', 30))
         self.cutoff = now() - datetime.timedelta(days=self.days)
         self.dry_run = bool(options.get('dry_run', False))
+        self.batch_size = int(options.get('batch_size', 500))
         self.cleanup_activitystream()
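
A hypothetical invocation of the new --batch-size option through Django's call_command; the command name is an assumption based on the cleanup method above.

from django.core.management import call_command

# Hypothetical invocation of the new option (command name assumed to be cleanup_activitystream):
call_command('cleanup_activitystream', days=30, batch_size=1000, dry_run=True)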


@@ -1,22 +1,22 @@
-from awx.main.models import HostMetric
 from django.core.management.base import BaseCommand
 from django.conf import settings
+from awx.main.tasks.host_metrics import HostMetricTask
 class Command(BaseCommand):
     """
-    Run soft-deleting of HostMetrics
+    This command provides cleanup task for HostMetric model.
+    There are two modes, which run in following order:
+    - soft cleanup
+    - - Perform soft-deletion of all host metrics last automated 12 months ago or before.
+        This is the same as issuing a DELETE request to /api/v2/host_metrics/N/ for all host metrics that match the criteria.
+    - - updates columns delete, deleted_counter and last_deleted
+    - hard cleanup
+    - - Permanently erase from the database all host metrics last automated 36 months ago or before.
+        This operation happens after the soft deletion has finished.
     """
-    help = 'Run soft-deleting of HostMetrics'
+    help = 'Run soft and hard-deletion of HostMetrics'
-    def add_arguments(self, parser):
-        parser.add_argument('--months-ago', type=int, dest='months-ago', action='store', help='Threshold in months for soft-deleting')
     def handle(self, *args, **options):
-        months_ago = options.get('months-ago') or None
-        if not months_ago:
-            months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
-        HostMetric.cleanup_task(months_ago)
+        HostMetricTask().cleanup(soft_threshold=settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, hard_threshold=settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD)


@@ -9,6 +9,7 @@ import re
 # Django
+from django.apps import apps
 from django.core.management.base import BaseCommand, CommandError
 from django.db import transaction, connection
 from django.db.models import Min, Max
@@ -150,6 +151,9 @@ class Command(BaseCommand):
     def add_arguments(self, parser):
         parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
         parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)')
+        parser.add_argument(
+            '--batch-size', dest='batch_size', type=int, default=100000, metavar='X', help='Remove jobs in batch of X jobs. Defaults to 100000.'
+        )
         parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
         parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
         parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')
@@ -195,18 +199,58 @@ class Command(BaseCommand):
             delete_meta.delete_jobs()
         return (delete_meta.jobs_no_delete_count, delete_meta.jobs_to_delete_count)
-    def _cascade_delete_job_events(self, model, pk_list):
+    def has_unpartitioned_table(self, model):
+        tblname = unified_job_class_to_event_table_name(model)
+        with connection.cursor() as cursor:
+            cursor.execute(f"SELECT 1 FROM pg_tables WHERE tablename = '_unpartitioned_{tblname}';")
+            row = cursor.fetchone()
+            if row is None:
+                return False
+        return True
+    def _delete_unpartitioned_table(self, model):
+        "If the unpartitioned table is no longer necessary, it will drop the table"
+        tblname = unified_job_class_to_event_table_name(model)
+        if not self.has_unpartitioned_table(model):
+            self.logger.debug(f'Table _unpartitioned_{tblname} does not exist, you are fully migrated.')
+            return
+        with connection.cursor() as cursor:
+            # same as UnpartitionedJobEvent.objects.aggregate(Max('created'))
+            cursor.execute(f'SELECT MAX("_unpartitioned_{tblname}"."created") FROM "_unpartitioned_{tblname}";')
+            row = cursor.fetchone()
+            last_created = row[0]
+        if last_created:
+            self.logger.info(f'Last event created in _unpartitioned_{tblname} was {last_created.isoformat()}')
+        else:
+            self.logger.info(f'Table _unpartitioned_{tblname} has no events in it')
+        if (last_created is None) or (last_created < self.cutoff):
+            self.logger.warning(
+                f'Dropping table _unpartitioned_{tblname} since no records are newer than {self.cutoff}\n'
+                'WARNING - this will happen in a separate transaction so a failure will not roll back prior cleanup'
+            )
+            with connection.cursor() as cursor:
+                cursor.execute(f'DROP TABLE _unpartitioned_{tblname};')
+    def _delete_unpartitioned_events(self, model, pk_list):
+        "If unpartitioned job events remain, it will cascade those from jobs in pk_list"
+        tblname = unified_job_class_to_event_table_name(model)
+        rel_name = model().event_parent_key
+        # Bail if the unpartitioned table does not exist anymore
+        if not self.has_unpartitioned_table(model):
+            return
+        # Table still exists, delete individual unpartitioned events
         if pk_list:
             with connection.cursor() as cursor:
-                tblname = unified_job_class_to_event_table_name(model)
+                self.logger.debug(f'Deleting {len(pk_list)} events from _unpartitioned_{tblname}, use a longer cleanup window to delete the table.')
                 pk_list_csv = ','.join(map(str, pk_list))
-                rel_name = model().event_parent_key
-                cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv})")
+                cursor.execute(f"DELETE FROM _unpartitioned_{tblname} WHERE {rel_name} IN ({pk_list_csv});")
     def cleanup_jobs(self):
-        batch_size = 100000
         # Hack to avoid doing N+1 queries as each item in the Job query set does
         # an individual query to get the underlying UnifiedJob.
         Job.polymorphic_super_sub_accessors_replaced = True
@@ -221,13 +265,14 @@ class Command(BaseCommand):
         deleted = 0
         info = qs.aggregate(min=Min('id'), max=Max('id'))
         if info['min'] is not None:
-            for start in range(info['min'], info['max'] + 1, batch_size):
-                qs_batch = qs.filter(id__gte=start, id__lte=start + batch_size)
+            for start in range(info['min'], info['max'] + 1, self.batch_size):
+                qs_batch = qs.filter(id__gte=start, id__lte=start + self.batch_size)
                 pk_list = qs_batch.values_list('id', flat=True)
                 _, results = qs_batch.delete()
                 deleted += results['main.Job']
-                self._cascade_delete_job_events(Job, pk_list)
+                # Avoid dropping the job event table in case we have interacted with it already
+                self._delete_unpartitioned_events(Job, pk_list)
         return skipped, deleted
@@ -250,7 +295,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(AdHocCommand, pk_list)
+            self._delete_unpartitioned_events(AdHocCommand, pk_list)
         skipped += AdHocCommand.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -278,7 +323,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(ProjectUpdate, pk_list)
+            self._delete_unpartitioned_events(ProjectUpdate, pk_list)
         skipped += ProjectUpdate.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -306,7 +351,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(InventoryUpdate, pk_list)
+            self._delete_unpartitioned_events(InventoryUpdate, pk_list)
         skipped += InventoryUpdate.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -330,7 +375,7 @@ class Command(BaseCommand):
                 deleted += 1
         if not self.dry_run:
-            self._cascade_delete_job_events(SystemJob, pk_list)
+            self._delete_unpartitioned_events(SystemJob, pk_list)
         skipped += SystemJob.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
@@ -375,12 +420,12 @@ class Command(BaseCommand):
         skipped += Notification.objects.filter(created__gte=self.cutoff).count()
         return skipped, deleted
-    @transaction.atomic
     def handle(self, *args, **options):
         self.verbosity = int(options.get('verbosity', 1))
         self.init_logging()
         self.days = int(options.get('days', 90))
         self.dry_run = bool(options.get('dry_run', False))
+        self.batch_size = int(options.get('batch_size', 100000))
         try:
             self.cutoff = now() - datetime.timedelta(days=self.days)
         except OverflowError:
@@ -402,19 +447,29 @@ class Command(BaseCommand):
            del s.receivers[:]
            s.sender_receivers_cache.clear()
-        for m in model_names:
-            if m not in models_to_cleanup:
-                continue
-            skipped, deleted = getattr(self, 'cleanup_%s' % m)()
-            func = getattr(self, 'cleanup_%s_partition' % m, None)
-            if func:
-                skipped_partition, deleted_partition = func()
-                skipped += skipped_partition
-                deleted += deleted_partition
-            if self.dry_run:
-                self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
-            else:
-                self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
+        with transaction.atomic():
+            for m in models_to_cleanup:
+                skipped, deleted = getattr(self, 'cleanup_%s' % m)()
+                func = getattr(self, 'cleanup_%s_partition' % m, None)
+                if func:
+                    skipped_partition, deleted_partition = func()
+                    skipped += skipped_partition
+                    deleted += deleted_partition
+                if self.dry_run:
+                    self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
+                else:
+                    self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)
+
+        # Deleting unpartitioned tables cannot be done in same transaction as updates to related tables
+        if not self.dry_run:
+            with transaction.atomic():
+                for m in models_to_cleanup:
+                    unified_job_class_name = m[:-1].title().replace('Management', 'System').replace('_', '')
+                    unified_job_class = apps.get_model('main', unified_job_class_name)
+                    try:
+                        unified_job_class().event_class
+                    except (NotImplementedError, AttributeError):
+                        continue  # no need to run this for models without events
+                    self._delete_unpartitioned_table(unified_job_class)
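
For reference, a small illustrative sketch (not part of the diff) of the name mapping the new handle() loop relies on, where a cleanup key such as 'project_updates' is turned into the unified job class name looked up via apps.get_model; the 'management_jobs' example is an assumption based on the Management-to-System replacement above:

# Illustration only: mirrors the m[:-1].title().replace(...) expression in the loop above.
def unified_job_class_name(cleanup_key: str) -> str:
    return cleanup_key[:-1].title().replace('Management', 'System').replace('_', '')

assert unified_job_class_name('project_updates') == 'ProjectUpdate'
assert unified_job_class_name('management_jobs') == 'SystemJob'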

View File

@@ -25,17 +25,20 @@ class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument('--hostname', dest='hostname', type=str, help="Hostname used during provisioning")
+        parser.add_argument('--listener_port', dest='listener_port', type=int, help="Receptor listener port")
        parser.add_argument('--node_type', type=str, default='hybrid', choices=['control', 'execution', 'hop', 'hybrid'], help="Instance Node type")
        parser.add_argument('--uuid', type=str, help="Instance UUID")

-    def _register_hostname(self, hostname, node_type, uuid):
+    def _register_hostname(self, hostname, node_type, uuid, listener_port):
        if not hostname:
            if not settings.AWX_AUTO_DEPROVISION_INSTANCES:
                raise CommandError('Registering with values from settings only intended for use in K8s installs')

            from awx.main.management.commands.register_queue import RegisterQueue

-            (changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', node_uuid=settings.SYSTEM_UUID)
+            (changed, instance) = Instance.objects.register(
+                ip_address=os.environ.get('MY_POD_IP'), listener_port=listener_port, node_type='control', node_uuid=settings.SYSTEM_UUID
+            )
            RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
            RegisterQueue(
                settings.DEFAULT_EXECUTION_QUEUE_NAME,
@@ -48,7 +51,7 @@ class Command(BaseCommand):
                max_concurrent_jobs=settings.DEFAULT_EXECUTION_QUEUE_MAX_CONCURRENT_JOBS,
            ).register()
        else:
-            (changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid)
+            (changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid, listener_port=listener_port)
        if changed:
            print("Successfully registered instance {}".format(hostname))
        else:
@@ -58,6 +61,6 @@ class Command(BaseCommand):
    @transaction.atomic
    def handle(self, **options):
        self.changed = False
-        self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'))
+        self._register_hostname(options.get('hostname'), options.get('node_type'), options.get('uuid'), options.get('listener_port'))
        if self.changed:
            print("(changed: True)")

View File

@@ -115,21 +115,25 @@ class InstanceManager(models.Manager):
                return node[0]
        raise RuntimeError("No instance found with the current cluster host id")

-    def register(self, node_uuid=None, hostname=None, ip_address=None, node_type='hybrid', defaults=None):
+    def register(self, node_uuid=None, hostname=None, ip_address="", listener_port=None, node_type='hybrid', defaults=None):
        if not hostname:
            hostname = settings.CLUSTER_HOST_ID
+        if not ip_address:
+            ip_address = ""
        with advisory_lock('instance_registration_%s' % hostname):
            if settings.AWX_AUTO_DEPROVISION_INSTANCES:
                # detect any instances with the same IP address.
-                # if one exists, set it to None
-                inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
-                if inst_conflicting_ip.exists():
-                    for other_inst in inst_conflicting_ip:
-                        other_hostname = other_inst.hostname
-                        other_inst.ip_address = None
-                        other_inst.save(update_fields=['ip_address'])
-                        logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
+                # if one exists, set it to ""
+                if ip_address:
+                    inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
+                    if inst_conflicting_ip.exists():
+                        for other_inst in inst_conflicting_ip:
+                            other_hostname = other_inst.hostname
+                            other_inst.ip_address = ""
+                            other_inst.save(update_fields=['ip_address'])
+                            logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))

            # Return existing instance that matches hostname or UUID (default to UUID)
            if node_uuid is not None and node_uuid != UUID_DEFAULT and self.filter(uuid=node_uuid).exists():
@@ -157,6 +161,9 @@ class InstanceManager(models.Manager):
if instance.node_type != node_type: if instance.node_type != node_type:
instance.node_type = node_type instance.node_type = node_type
update_fields.append('node_type') update_fields.append('node_type')
if instance.listener_port != listener_port:
instance.listener_port = listener_port
update_fields.append('listener_port')
if update_fields: if update_fields:
instance.save(update_fields=update_fields) instance.save(update_fields=update_fields)
return (True, instance) return (True, instance)
@@ -167,12 +174,11 @@ class InstanceManager(models.Manager):
        create_defaults = {
            'node_state': Instance.States.INSTALLED,
            'capacity': 0,
-            'listener_port': 27199,
        }
        if defaults is not None:
            create_defaults.update(defaults)
        uuid_option = {'uuid': node_uuid if node_uuid is not None else uuid.uuid4()}
        if node_type == 'execution' and 'version' not in create_defaults:
            create_defaults['version'] = RECEPTOR_PENDING
-        instance = self.create(hostname=hostname, ip_address=ip_address, node_type=node_type, **create_defaults, **uuid_option)
+        instance = self.create(hostname=hostname, ip_address=ip_address, listener_port=listener_port, node_type=node_type, **create_defaults, **uuid_option)
        return (True, instance)
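
As a rough usage sketch (the hostname and port values here are hypothetical), the updated manager method is invoked the same way the provisioning command above does, now passing the listener port through explicitly:

# Hypothetical call; mirrors the provisioning command shown earlier.
changed, instance = Instance.objects.register(
    hostname='exec-node-1.example.org',
    node_type='execution',
    node_uuid=None,          # a UUID is generated when none is supplied
    listener_port=27199,     # no longer a model default, so callers pass it explicitly
)
if changed:
    print(f"registered {instance.hostname} on listener port {instance.listener_port}")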

View File

@@ -9,6 +9,7 @@ from django.conf import settings
# AWX # AWX
import awx.main.fields import awx.main.fields
from awx.main.models import Host from awx.main.models import Host
from ._sqlite_helper import dbawaremigrations
def replaces(): def replaces():
@@ -131,9 +132,11 @@ class Migration(migrations.Migration):
help_text='If enabled, Tower will act as an Ansible Fact Cache Plugin; persisting facts at the end of a playbook run to the database and caching facts for use by Ansible.', help_text='If enabled, Tower will act as an Ansible Fact Cache Plugin; persisting facts at the end of a playbook run to the database and caching facts for use by Ansible.',
), ),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
sql="CREATE INDEX host_ansible_facts_default_gin ON {} USING gin(ansible_facts jsonb_path_ops);".format(Host._meta.db_table), sql="CREATE INDEX host_ansible_facts_default_gin ON {} USING gin(ansible_facts jsonb_path_ops);".format(Host._meta.db_table),
reverse_sql='DROP INDEX host_ansible_facts_default_gin;', reverse_sql='DROP INDEX host_ansible_facts_default_gin;',
sqlite_sql=dbawaremigrations.RunSQL.noop,
sqlite_reverse_sql=dbawaremigrations.RunSQL.noop,
), ),
# SCM file-based inventories # SCM file-based inventories
migrations.AddField( migrations.AddField(

View File

@@ -3,24 +3,27 @@ from __future__ import unicode_literals
from django.db import migrations from django.db import migrations
from ._sqlite_helper import dbawaremigrations
tables_to_drop = [
'celery_taskmeta',
'celery_tasksetmeta',
'djcelery_crontabschedule',
'djcelery_intervalschedule',
'djcelery_periodictask',
'djcelery_periodictasks',
'djcelery_taskstate',
'djcelery_workerstate',
'djkombu_message',
'djkombu_queue',
]
postgres_sql = ([("DROP TABLE IF EXISTS {} CASCADE;".format(table))] for table in tables_to_drop)
sqlite_sql = ([("DROP TABLE IF EXISTS {};".format(table))] for table in tables_to_drop)
class Migration(migrations.Migration):

    dependencies = [
        ('main', '0049_v330_validate_instance_capacity_adjustment'),
    ]

-    operations = [
-        migrations.RunSQL([("DROP TABLE IF EXISTS {} CASCADE;".format(table))])
-        for table in (
-            'celery_taskmeta',
-            'celery_tasksetmeta',
-            'djcelery_crontabschedule',
-            'djcelery_intervalschedule',
-            'djcelery_periodictask',
-            'djcelery_periodictasks',
-            'djcelery_taskstate',
-            'djcelery_workerstate',
-            'djkombu_message',
-            'djkombu_queue',
-        )
-    ]
+    operations = [dbawaremigrations.RunSQL(p, sqlite_sql=s) for p, s in zip(postgres_sql, sqlite_sql)]
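
Spelled out, the first operation produced by the comprehension above is roughly equivalent to the following (illustration only, not part of the diff):

# Equivalent expansion of the first generated operation.
dbawaremigrations.RunSQL(
    ["DROP TABLE IF EXISTS celery_taskmeta CASCADE;"],    # postgres_sql entry
    sqlite_sql=["DROP TABLE IF EXISTS celery_taskmeta;"],  # sqlite_sql entry
)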

View File

@@ -2,6 +2,8 @@
from django.db import migrations, models, connection from django.db import migrations, models, connection
from ._sqlite_helper import dbawaremigrations
def migrate_event_data(apps, schema_editor): def migrate_event_data(apps, schema_editor):
# see: https://github.com/ansible/awx/issues/6010 # see: https://github.com/ansible/awx/issues/6010
@@ -24,6 +26,11 @@ def migrate_event_data(apps, schema_editor):
cursor.execute(f'ALTER TABLE {tblname} ALTER COLUMN id TYPE bigint USING id::bigint;') cursor.execute(f'ALTER TABLE {tblname} ALTER COLUMN id TYPE bigint USING id::bigint;')
def migrate_event_data_sqlite(apps, schema_editor):
# TODO: cmeyers fill this in
return
class FakeAlterField(migrations.AlterField): class FakeAlterField(migrations.AlterField):
def database_forwards(self, *args): def database_forwards(self, *args):
# this is intentionally left blank, because we're # this is intentionally left blank, because we're
@@ -37,7 +44,7 @@ class Migration(migrations.Migration):
] ]
operations = [ operations = [
migrations.RunPython(migrate_event_data), dbawaremigrations.RunPython(migrate_event_data, sqlite_code=migrate_event_data_sqlite),
FakeAlterField( FakeAlterField(
model_name='adhoccommandevent', model_name='adhoccommandevent',
name='id', name='id',

View File

@@ -1,5 +1,7 @@
from django.db import migrations, models, connection from django.db import migrations, models, connection
from ._sqlite_helper import dbawaremigrations
def migrate_event_data(apps, schema_editor): def migrate_event_data(apps, schema_editor):
# see: https://github.com/ansible/awx/issues/9039 # see: https://github.com/ansible/awx/issues/9039
@@ -59,6 +61,10 @@ def migrate_event_data(apps, schema_editor):
cursor.execute('DROP INDEX IF EXISTS main_jobevent_job_id_idx') cursor.execute('DROP INDEX IF EXISTS main_jobevent_job_id_idx')
def migrate_event_data_sqlite(apps, schema_editor):
return None
class FakeAddField(migrations.AddField): class FakeAddField(migrations.AddField):
def database_forwards(self, *args): def database_forwards(self, *args):
# this is intentionally left blank, because we're # this is intentionally left blank, because we're
@@ -72,7 +78,7 @@ class Migration(migrations.Migration):
] ]
operations = [ operations = [
migrations.RunPython(migrate_event_data), dbawaremigrations.RunPython(migrate_event_data, sqlite_code=migrate_event_data_sqlite),
FakeAddField( FakeAddField(
model_name='jobevent', model_name='jobevent',
name='job_created', name='job_created',

View File

@@ -3,6 +3,8 @@
import awx.main.models.notifications import awx.main.models.notifications
from django.db import migrations, models from django.db import migrations, models
from ._sqlite_helper import dbawaremigrations
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [ dependencies = [
@@ -104,11 +106,12 @@ class Migration(migrations.Migration):
name='deleted_actor', name='deleted_actor',
field=models.JSONField(null=True), field=models.JSONField(null=True),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_activitystream RENAME setting TO setting_old; ALTER TABLE main_activitystream RENAME setting TO setting_old;
ALTER TABLE main_activitystream ALTER COLUMN setting_old DROP NOT NULL; ALTER TABLE main_activitystream ALTER COLUMN setting_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_activitystream RENAME setting TO setting_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='activitystream', model_name='activitystream',
@@ -121,11 +124,12 @@ class Migration(migrations.Migration):
name='setting', name='setting',
field=models.JSONField(blank=True, default=dict), field=models.JSONField(blank=True, default=dict),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old; ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_job ALTER COLUMN survey_passwords_old DROP NOT NULL; ALTER TABLE main_job ALTER COLUMN survey_passwords_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='job', model_name='job',
@@ -138,11 +142,12 @@ class Migration(migrations.Migration):
name='survey_passwords', name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False), field=models.JSONField(blank=True, default=dict, editable=False),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old; ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_joblaunchconfig ALTER COLUMN char_prompts_old DROP NOT NULL; ALTER TABLE main_joblaunchconfig ALTER COLUMN char_prompts_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='joblaunchconfig', model_name='joblaunchconfig',
@@ -155,11 +160,12 @@ class Migration(migrations.Migration):
name='char_prompts', name='char_prompts',
field=models.JSONField(blank=True, default=dict), field=models.JSONField(blank=True, default=dict),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old; ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_joblaunchconfig ALTER COLUMN survey_passwords_old DROP NOT NULL; ALTER TABLE main_joblaunchconfig ALTER COLUMN survey_passwords_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='joblaunchconfig', model_name='joblaunchconfig',
@@ -172,11 +178,12 @@ class Migration(migrations.Migration):
name='survey_passwords', name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False), field=models.JSONField(blank=True, default=dict, editable=False),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_notification RENAME body TO body_old; ALTER TABLE main_notification RENAME body TO body_old;
ALTER TABLE main_notification ALTER COLUMN body_old DROP NOT NULL; ALTER TABLE main_notification ALTER COLUMN body_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_notification RENAME body TO body_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='notification', model_name='notification',
@@ -189,11 +196,12 @@ class Migration(migrations.Migration):
name='body', name='body',
field=models.JSONField(blank=True, default=dict), field=models.JSONField(blank=True, default=dict),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old; ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old;
ALTER TABLE main_unifiedjob ALTER COLUMN job_env_old DROP NOT NULL; ALTER TABLE main_unifiedjob ALTER COLUMN job_env_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='unifiedjob', model_name='unifiedjob',
@@ -206,11 +214,12 @@ class Migration(migrations.Migration):
name='job_env', name='job_env',
field=models.JSONField(blank=True, default=dict, editable=False), field=models.JSONField(blank=True, default=dict, editable=False),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old; ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_workflowjob ALTER COLUMN char_prompts_old DROP NOT NULL; ALTER TABLE main_workflowjob ALTER COLUMN char_prompts_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='workflowjob', model_name='workflowjob',
@@ -223,11 +232,12 @@ class Migration(migrations.Migration):
name='char_prompts', name='char_prompts',
field=models.JSONField(blank=True, default=dict), field=models.JSONField(blank=True, default=dict),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old; ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_workflowjob ALTER COLUMN survey_passwords_old DROP NOT NULL; ALTER TABLE main_workflowjob ALTER COLUMN survey_passwords_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='workflowjob', model_name='workflowjob',
@@ -240,11 +250,12 @@ class Migration(migrations.Migration):
name='survey_passwords', name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False), field=models.JSONField(blank=True, default=dict, editable=False),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old; ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_workflowjobnode ALTER COLUMN char_prompts_old DROP NOT NULL; ALTER TABLE main_workflowjobnode ALTER COLUMN char_prompts_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='workflowjobnode', model_name='workflowjobnode',
@@ -257,11 +268,12 @@ class Migration(migrations.Migration):
name='char_prompts', name='char_prompts',
field=models.JSONField(blank=True, default=dict), field=models.JSONField(blank=True, default=dict),
), ),
migrations.RunSQL( dbawaremigrations.RunSQL(
""" """
ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old; ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_workflowjobnode ALTER COLUMN survey_passwords_old DROP NOT NULL; ALTER TABLE main_workflowjobnode ALTER COLUMN survey_passwords_old DROP NOT NULL;
""", """,
sqlite_sql="ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old",
state_operations=[ state_operations=[
migrations.RemoveField( migrations.RemoveField(
model_name='workflowjobnode', model_name='workflowjobnode',

View File

@@ -3,6 +3,8 @@ from __future__ import unicode_literals
from django.db import migrations from django.db import migrations
from ._sqlite_helper import dbawaremigrations
def delete_taggit_contenttypes(apps, schema_editor): def delete_taggit_contenttypes(apps, schema_editor):
ContentType = apps.get_model('contenttypes', 'ContentType') ContentType = apps.get_model('contenttypes', 'ContentType')
@@ -20,8 +22,8 @@ class Migration(migrations.Migration):
] ]
operations = [ operations = [
migrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;"), dbawaremigrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;", sqlite_sql="DROP TABLE IF EXISTS taggit_tag;"),
migrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;"), dbawaremigrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;", sqlite_sql="DROP TABLE IF EXISTS taggit_taggeditem;"),
migrations.RunPython(delete_taggit_contenttypes), migrations.RunPython(delete_taggit_contenttypes),
migrations.RunPython(delete_taggit_migration_records), migrations.RunPython(delete_taggit_migration_records),
] ]

View File

@@ -0,0 +1,75 @@
# Generated by Django 4.2.3 on 2023-08-04 20:50
import django.core.validators
from django.db import migrations, models
from django.conf import settings
def automatically_peer_from_control_plane(apps, schema_editor):
if settings.IS_K8S:
Instance = apps.get_model('main', 'Instance')
Instance.objects.filter(node_type='execution').update(peers_from_control_nodes=True)
Instance.objects.filter(node_type='control').update(listener_port=None)
class Migration(migrations.Migration):
dependencies = [
('main', '0186_drop_django_taggit'),
]
operations = [
migrations.AlterModelOptions(
name='instancelink',
options={'ordering': ('id',)},
),
migrations.AddField(
model_name='instance',
name='peers_from_control_nodes',
field=models.BooleanField(default=False, help_text='If True, control plane cluster nodes should automatically peer to it.'),
),
migrations.AlterField(
model_name='instance',
name='ip_address',
field=models.CharField(blank=True, default='', max_length=50),
),
migrations.AlterField(
model_name='instance',
name='listener_port',
field=models.PositiveIntegerField(
blank=True,
default=None,
help_text='Port that Receptor will listen for incoming connections on.',
null=True,
validators=[django.core.validators.MinValueValidator(1024), django.core.validators.MaxValueValidator(65535)],
),
),
migrations.AlterField(
model_name='instance',
name='peers',
field=models.ManyToManyField(related_name='peers_from', through='main.InstanceLink', to='main.instance'),
),
migrations.AlterField(
model_name='instancelink',
name='link_state',
field=models.CharField(
choices=[('adding', 'Adding'), ('established', 'Established'), ('removing', 'Removing')],
default='adding',
help_text='Indicates the current life cycle stage of this peer link.',
max_length=16,
),
),
migrations.AddConstraint(
model_name='instance',
constraint=models.UniqueConstraint(
condition=models.Q(('ip_address', ''), _negated=True),
fields=('ip_address',),
name='unique_ip_address_not_empty',
violation_error_message='Field ip_address must be unique.',
),
),
migrations.AddConstraint(
model_name='instancelink',
constraint=models.CheckConstraint(check=models.Q(('source', models.F('target')), _negated=True), name='source_and_target_can_not_be_equal'),
),
migrations.RunPython(automatically_peer_from_control_plane),
]

View File

@@ -0,0 +1,61 @@
from django.db import migrations
class RunSQL(migrations.operations.special.RunSQL):
"""
Bit of a hack here. Django actually wants this decision made in the router
and we can pass **hints.
"""
def __init__(self, *args, **kwargs):
if 'sqlite_sql' not in kwargs:
raise ValueError("sqlite_sql parameter required")
sqlite_sql = kwargs.pop('sqlite_sql')
self.sqlite_sql = sqlite_sql
self.sqlite_reverse_sql = kwargs.pop('sqlite_reverse_sql', None)
super().__init__(*args, **kwargs)
def database_forwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.sql = self.sqlite_sql or migrations.RunSQL.noop
super().database_forwards(app_label, schema_editor, from_state, to_state)
def database_backwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.reverse_sql = self.sqlite_reverse_sql or migrations.RunSQL.noop
super().database_backwards(app_label, schema_editor, from_state, to_state)
class RunPython(migrations.operations.special.RunPython):
"""
Bit of a hack here. Django actually wants this decision made in the router
and we can pass **hints.
"""
def __init__(self, *args, **kwargs):
if 'sqlite_code' not in kwargs:
raise ValueError("sqlite_code parameter required")
sqlite_code = kwargs.pop('sqlite_code')
self.sqlite_code = sqlite_code
self.sqlite_reverse_code = kwargs.pop('sqlite_reverse_code', None)
super().__init__(*args, **kwargs)
def database_forwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.code = self.sqlite_code or migrations.RunPython.noop
super().database_forwards(app_label, schema_editor, from_state, to_state)
def database_backwards(self, app_label, schema_editor, from_state, to_state):
if not schema_editor.connection.vendor.startswith('postgres'):
self.reverse_code = self.sqlite_reverse_code or migrations.RunPython.noop
super().database_backwards(app_label, schema_editor, from_state, to_state)
class _sqlitemigrations:
RunPython = RunPython
RunSQL = RunSQL
dbawaremigrations = _sqlitemigrations()
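
A minimal sketch (a hypothetical migration, not part of this changeset) of how these db-aware operations are meant to be used, following the pattern of the migration hunks earlier in this diff; the migration name, dependency, and SQL are placeholders:

# Hypothetical migration using the helpers above.
from django.db import migrations

from ._sqlite_helper import dbawaremigrations


class Migration(migrations.Migration):
    dependencies = [('main', '0001_initial')]  # placeholder dependency

    operations = [
        # Postgres runs the real SQL; sqlite (used by the test suite) gets a no-op.
        dbawaremigrations.RunSQL(
            "CREATE INDEX example_idx ON main_host (name);",
            reverse_sql="DROP INDEX example_idx;",
            sqlite_sql=dbawaremigrations.RunSQL.noop,
            sqlite_reverse_sql=dbawaremigrations.RunSQL.noop,
        ),
    ]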

View File

@@ -57,7 +57,6 @@ from awx.main.models.ha import ( # noqa
from awx.main.models.rbac import ( # noqa from awx.main.models.rbac import ( # noqa
Role, Role,
batch_role_ancestor_rebuilding, batch_role_ancestor_rebuilding,
get_roles_on_resource,
role_summary_fields_generator, role_summary_fields_generator,
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR, ROLE_SINGLETON_SYSTEM_AUDITOR,
@@ -91,13 +90,12 @@ from oauth2_provider.models import Grant, RefreshToken # noqa -- needed django-
# Add custom methods to User model for permissions checks. # Add custom methods to User model for permissions checks.
from django.contrib.auth.models import User # noqa from django.contrib.auth.models import User # noqa
from awx.main.access import get_user_queryset, check_user_access, check_user_access_with_errors, user_accessible_objects # noqa from awx.main.access import get_user_queryset, check_user_access, check_user_access_with_errors # noqa
User.add_to_class('get_queryset', get_user_queryset) User.add_to_class('get_queryset', get_user_queryset)
User.add_to_class('can_access', check_user_access) User.add_to_class('can_access', check_user_access)
User.add_to_class('can_access_with_errors', check_user_access_with_errors) User.add_to_class('can_access_with_errors', check_user_access_with_errors)
User.add_to_class('accessible_objects', user_accessible_objects)
def convert_jsonfields(): def convert_jsonfields():

View File

@@ -17,6 +17,7 @@ from jinja2 import sandbox
from django.db import models from django.db import models
from django.utils.translation import gettext_lazy as _, gettext_noop from django.utils.translation import gettext_lazy as _, gettext_noop
from django.core.exceptions import ValidationError from django.core.exceptions import ValidationError
from django.conf import settings
from django.utils.encoding import force_str from django.utils.encoding import force_str
from django.utils.functional import cached_property from django.utils.functional import cached_property
from django.utils.timezone import now from django.utils.timezone import now
@@ -30,7 +31,7 @@ from awx.main.fields import (
CredentialTypeInjectorField, CredentialTypeInjectorField,
DynamicCredentialInputField, DynamicCredentialInputField,
) )
from awx.main.utils import decrypt_field, classproperty from awx.main.utils import decrypt_field, classproperty, set_environ
from awx.main.utils.safe_yaml import safe_dump from awx.main.utils.safe_yaml import safe_dump
from awx.main.utils.execution_environments import to_container_path from awx.main.utils.execution_environments import to_container_path
from awx.main.validators import validate_ssh_private_key from awx.main.validators import validate_ssh_private_key
@@ -1252,7 +1253,9 @@ class CredentialInputSource(PrimordialModel):
backend_kwargs[field_name] = value backend_kwargs[field_name] = value
backend_kwargs.update(self.metadata) backend_kwargs.update(self.metadata)
return backend(**backend_kwargs)
with set_environ(**settings.AWX_TASK_ENV):
return backend(**backend_kwargs)
def get_absolute_url(self, request=None): def get_absolute_url(self, request=None):
view_name = 'api:credential_input_source_detail' view_name = 'api:credential_input_source_detail'

View File

@@ -12,13 +12,14 @@ from django.dispatch import receiver
from django.utils.translation import gettext_lazy as _ from django.utils.translation import gettext_lazy as _
from django.conf import settings from django.conf import settings
from django.utils.timezone import now, timedelta from django.utils.timezone import now, timedelta
from django.db.models import Sum from django.db.models import Sum, Q
import redis import redis
from solo.models import SingletonModel from solo.models import SingletonModel
# AWX # AWX
from awx import __version__ as awx_application_version from awx import __version__ as awx_application_version
from awx.main.utils import is_testing
from awx.api.versioning import reverse from awx.api.versioning import reverse
from awx.main.fields import ImplicitRoleField from awx.main.fields import ImplicitRoleField
from awx.main.managers import InstanceManager, UUID_DEFAULT from awx.main.managers import InstanceManager, UUID_DEFAULT
@@ -70,16 +71,33 @@ class InstanceLink(BaseModel):
REMOVING = 'removing', _('Removing') REMOVING = 'removing', _('Removing')
    link_state = models.CharField(
-        choices=States.choices, default=States.ESTABLISHED, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
+        choices=States.choices, default=States.ADDING, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
    )

    class Meta:
        unique_together = ('source', 'target')
+        ordering = ("id",)
+        constraints = [models.CheckConstraint(check=~models.Q(source=models.F('target')), name='source_and_target_can_not_be_equal')]
class Instance(HasPolicyEditsMixin, BaseModel): class Instance(HasPolicyEditsMixin, BaseModel):
"""A model representing an AWX instance running against this database.""" """A model representing an AWX instance running against this database."""
class Meta:
app_label = 'main'
ordering = ("hostname",)
constraints = [
models.UniqueConstraint(
fields=["ip_address"],
condition=~Q(ip_address=""), # don't apply to constraint to empty entries
name="unique_ip_address_not_empty",
violation_error_message=_("Field ip_address must be unique."),
)
]
def __str__(self):
return self.hostname
objects = InstanceManager() objects = InstanceManager()
# Fields set in instance registration # Fields set in instance registration
@@ -87,10 +105,8 @@ class Instance(HasPolicyEditsMixin, BaseModel):
    hostname = models.CharField(max_length=250, unique=True)
    ip_address = models.CharField(
        blank=True,
-        null=True,
-        default=None,
+        default="",
        max_length=50,
-        unique=True,
    )
# Auto-fields, implementation is different from BaseModel # Auto-fields, implementation is different from BaseModel
created = models.DateTimeField(auto_now_add=True) created = models.DateTimeField(auto_now_add=True)
@@ -169,16 +185,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
    )
    listener_port = models.PositiveIntegerField(
        blank=True,
-        default=27199,
-        validators=[MinValueValidator(1), MaxValueValidator(65535)],
+        null=True,
+        default=None,
+        validators=[MinValueValidator(1024), MaxValueValidator(65535)],
        help_text=_("Port that Receptor will listen for incoming connections on."),
    )
-    peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'))
+    peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'), related_name='peers_from')
+    peers_from_control_nodes = models.BooleanField(default=False, help_text=_("If True, control plane cluster nodes should automatically peer to it."))

-    class Meta:
-        app_label = 'main'
-        ordering = ("hostname",)

    POLICY_FIELDS = frozenset(('managed_by_policy', 'hostname', 'capacity_adjustment'))
@@ -275,10 +289,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
if update_last_seen: if update_last_seen:
update_fields += ['last_seen'] update_fields += ['last_seen']
if perform_save: if perform_save:
self.save(update_fields=update_fields) from awx.main.signals import disable_activity_stream
with disable_activity_stream():
self.save(update_fields=update_fields)
return update_fields return update_fields
def set_capacity_value(self): def set_capacity_value(self):
old_val = self.capacity
"""Sets capacity according to capacity adjustment rule (no save)""" """Sets capacity according to capacity adjustment rule (no save)"""
if self.enabled and self.node_type != 'hop': if self.enabled and self.node_type != 'hop':
lower_cap = min(self.mem_capacity, self.cpu_capacity) lower_cap = min(self.mem_capacity, self.cpu_capacity)
@@ -286,6 +304,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.capacity = lower_cap + (higher_cap - lower_cap) * self.capacity_adjustment self.capacity = lower_cap + (higher_cap - lower_cap) * self.capacity_adjustment
else: else:
self.capacity = 0 self.capacity = 0
return int(self.capacity) != int(old_val) # return True if value changed
def refresh_capacity_fields(self): def refresh_capacity_fields(self):
"""Update derived capacity fields from cpu and memory (no save)""" """Update derived capacity fields from cpu and memory (no save)"""
@@ -293,8 +312,8 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.cpu_capacity = 0 self.cpu_capacity = 0
self.mem_capacity = 0 # formula has a non-zero offset, so we make sure it is 0 for hop nodes self.mem_capacity = 0 # formula has a non-zero offset, so we make sure it is 0 for hop nodes
else: else:
self.cpu_capacity = get_cpu_effective_capacity(self.cpu) self.cpu_capacity = get_cpu_effective_capacity(self.cpu, is_control_node=bool(self.node_type in (Instance.Types.CONTROL, Instance.Types.HYBRID)))
self.mem_capacity = get_mem_effective_capacity(self.memory) self.mem_capacity = get_mem_effective_capacity(self.memory, is_control_node=bool(self.node_type in (Instance.Types.CONTROL, Instance.Types.HYBRID)))
self.set_capacity_value() self.set_capacity_value()
def save_health_data(self, version=None, cpu=0, memory=0, uuid=None, update_last_seen=False, errors=''): def save_health_data(self, version=None, cpu=0, memory=0, uuid=None, update_last_seen=False, errors=''):
@@ -317,12 +336,17 @@ class Instance(HasPolicyEditsMixin, BaseModel):
            self.version = version
            update_fields.append('version')

-        new_cpu = get_corrected_cpu(cpu)
+        if self.node_type == Instance.Types.EXECUTION:
+            new_cpu = cpu
+            new_memory = memory
+        else:
+            new_cpu = get_corrected_cpu(cpu)
+            new_memory = get_corrected_memory(memory)
+
        if new_cpu != self.cpu:
            self.cpu = new_cpu
            update_fields.append('cpu')
-        new_memory = get_corrected_memory(memory)
        if new_memory != self.memory:
            self.memory = new_memory
            update_fields.append('memory')
@@ -464,21 +488,50 @@ def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs
instance.set_default_policy_fields() instance.set_default_policy_fields()
def schedule_write_receptor_config(broadcast=True):
from awx.main.tasks.receptor import write_receptor_config # prevents circular import
# broadcast to all control instances to update their receptor configs
if broadcast:
connection.on_commit(lambda: write_receptor_config.apply_async(queue='tower_broadcast_all'))
else:
if not is_testing():
write_receptor_config() # just run locally
@receiver(post_save, sender=Instance)
def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
-    if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION,):
+    '''
+    Here we link control nodes to hop or execution nodes based on the
+    peers_from_control_nodes field.
+    write_receptor_config should be called on each control node when:
+    1. new node is created with peers_from_control_nodes enabled
+    2. a node changes its value of peers_from_control_nodes
+    3. a new control node comes online and has instances to peer to
+    '''
+    if created and settings.IS_K8S and instance.node_type in [Instance.Types.CONTROL, Instance.Types.HYBRID]:
+        inst = Instance.objects.filter(peers_from_control_nodes=True)
+        if set(instance.peers.all()) != set(inst):
+            instance.peers.set(inst)
+            schedule_write_receptor_config(broadcast=False)
+
+    if settings.IS_K8S and instance.node_type in [Instance.Types.HOP, Instance.Types.EXECUTION]:
        if instance.node_state == Instance.States.DEPROVISIONING:
            from awx.main.tasks.receptor import remove_deprovisioned_node  # prevents circular import

            # wait for jobs on the node to complete, then delete the
            # node and kick off write_receptor_config
            connection.on_commit(lambda: remove_deprovisioned_node.apply_async([instance.hostname]))
        else:
-            if instance.node_state == Instance.States.INSTALLED:
-                from awx.main.tasks.receptor import write_receptor_config  # prevents circular import
-                # broadcast to all control instances to update their receptor configs
-                connection.on_commit(lambda: write_receptor_config.apply_async(queue='tower_broadcast_all'))
+            control_instances = set(Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID]))
+            if instance.peers_from_control_nodes:
+                if (control_instances & set(instance.peers_from.all())) != set(control_instances):
+                    instance.peers_from.add(*control_instances)
+                    schedule_write_receptor_config()  # keep method separate to make pytest mocking easier
+            else:
+                if set(control_instances) & set(instance.peers_from.all()):
+                    instance.peers_from.remove(*control_instances)
+                    schedule_write_receptor_config()

    if created or instance.has_policy_changes():
        schedule_policy_task()
@@ -493,6 +546,8 @@ def on_instance_group_deleted(sender, instance, using, **kwargs):
@receiver(post_delete, sender=Instance) @receiver(post_delete, sender=Instance)
def on_instance_deleted(sender, instance, using, **kwargs): def on_instance_deleted(sender, instance, using, **kwargs):
schedule_policy_task() schedule_policy_task()
if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION, Instance.Types.HOP) and instance.peers_from_control_nodes:
schedule_write_receptor_config()
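
A hedged sketch of the flow the new signal logic enables on a K8s install; the hostname and port are hypothetical, and the final assertion simply restates the add()/remove() behaviour shown above:

# Hypothetical flow under the peering signals above (IS_K8S assumed True).
hop = Instance.objects.create(
    hostname='hop-1.example.org',
    node_type='hop',
    node_state=Instance.States.INSTALLED,
    listener_port=27199,
    peers_from_control_nodes=True,
)
# post_save fires on_instance_saved: every control/hybrid node is added to
# hop.peers_from and write_receptor_config is broadcast to the control plane.
control_plane = Instance.objects.filter(node_type__in=[Instance.Types.CONTROL, Instance.Types.HYBRID])
assert set(hop.peers_from.all()) == set(control_plane)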
class UnifiedJobTemplateInstanceGroupMembership(models.Model): class UnifiedJobTemplateInstanceGroupMembership(models.Model):

View File

@@ -10,7 +10,6 @@ import copy
import os.path import os.path
from urllib.parse import urljoin from urllib.parse import urljoin
import dateutil.relativedelta
import yaml import yaml
# Django # Django
@@ -890,23 +889,6 @@ class HostMetric(models.Model):
self.deleted = False self.deleted = False
self.save(update_fields=['deleted']) self.save(update_fields=['deleted'])
@classmethod
def cleanup_task(cls, months_ago):
try:
months_ago = int(months_ago)
if months_ago <= 0:
raise ValueError()
last_automation_before = now() - dateutil.relativedelta.relativedelta(months=months_ago)
logger.info(f'cleanup_host_metrics: soft-deleting records last automated before {last_automation_before}')
HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=models.F('deleted_counter') + 1, last_deleted=now()
)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
except (TypeError, ValueError):
logger.error(f"cleanup_host_metrics: months_ago({months_ago}) has to be a positive integer value")
class HostMetricSummaryMonthly(models.Model): class HostMetricSummaryMonthly(models.Model):
""" """

View File

@@ -19,7 +19,7 @@ from django.utils.translation import gettext_lazy as _
# AWX # AWX
from awx.main.models.base import prevent_search from awx.main.models.base import prevent_search
from awx.main.models.rbac import Role, RoleAncestorEntry, get_roles_on_resource from awx.main.models.rbac import Role, RoleAncestorEntry
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic
from awx.main.utils.execution_environments import get_default_execution_environment from awx.main.utils.execution_environments import get_default_execution_environment
from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted
@@ -54,10 +54,7 @@ class ResourceMixin(models.Model):
Use instead of `MyModel.objects` when you want to only consider Use instead of `MyModel.objects` when you want to only consider
resources that a user has specific permissions for. For example: resources that a user has specific permissions for. For example:
MyModel.accessible_objects(user, 'read_role').filter(name__istartswith='bar'); MyModel.accessible_objects(user, 'read_role').filter(name__istartswith='bar');
NOTE: This should only be used for list type things. If you have a NOTE: This should only be used for list type things.
specific resource you want to check permissions on, it is more
performant to resolve the resource in question then call
`myresource.get_permissions(user)`.
""" """
return ResourceMixin._accessible_objects(cls, accessor, role_field) return ResourceMixin._accessible_objects(cls, accessor, role_field)
@@ -86,15 +83,6 @@ class ResourceMixin(models.Model):
def _accessible_objects(cls, accessor, role_field): def _accessible_objects(cls, accessor, role_field):
return cls.objects.filter(pk__in=ResourceMixin._accessible_pk_qs(cls, accessor, role_field)) return cls.objects.filter(pk__in=ResourceMixin._accessible_pk_qs(cls, accessor, role_field))
def get_permissions(self, accessor):
"""
Returns a string list of the roles a accessor has for a given resource.
An accessor can be either a User, Role, or an arbitrary resource that
contains one or more Roles associated with it.
"""
return get_roles_on_resource(self, accessor)
class SurveyJobTemplateMixin(models.Model): class SurveyJobTemplateMixin(models.Model):
class Meta: class Meta:

View File

@@ -1439,6 +1439,11 @@ class UnifiedJob(
if not self.celery_task_id: if not self.celery_task_id:
return return
canceled = [] canceled = []
if not connection.get_autocommit():
# this condition is purpose-written for the task manager, when it cancels jobs in workflows
ControlDispatcher('dispatcher', self.controller_node).cancel([self.celery_task_id], with_reply=False)
return True # task manager itself needs to act under assumption that cancel was received
try: try:
# Use control and reply mechanism to cancel and obtain confirmation # Use control and reply mechanism to cancel and obtain confirmation
timeout = 5 timeout = 5

View File

@@ -124,6 +124,13 @@ class TaskBase:
self.record_aggregate_metrics() self.record_aggregate_metrics()
sys.exit(1) sys.exit(1)
def get_local_metrics(self):
data = {}
for k, metric in self.subsystem_metrics.METRICS.items():
if k.startswith(self.prefix) and metric.metric_has_changed:
data[k[len(self.prefix) + 1 :]] = metric.current_value
return data
def schedule(self): def schedule(self):
# Always be able to restore the original signal handler if we finish # Always be able to restore the original signal handler if we finish
original_sigusr1 = signal.getsignal(signal.SIGUSR1) original_sigusr1 = signal.getsignal(signal.SIGUSR1)
@@ -146,10 +153,14 @@ class TaskBase:
signal.signal(signal.SIGUSR1, original_sigusr1) signal.signal(signal.SIGUSR1, original_sigusr1)
commit_start = time.time() commit_start = time.time()
logger.debug(f"Commiting {self.prefix} Scheduler changes")
if self.prefix == "task_manager": if self.prefix == "task_manager":
self.subsystem_metrics.set(f"{self.prefix}_commit_seconds", time.time() - commit_start) self.subsystem_metrics.set(f"{self.prefix}_commit_seconds", time.time() - commit_start)
local_metrics = self.get_local_metrics()
self.record_aggregate_metrics() self.record_aggregate_metrics()
logger.debug(f"Finishing {self.prefix} Scheduler")
logger.debug(f"Finished {self.prefix} Scheduler, timing data:\n{local_metrics}")
class WorkflowManager(TaskBase): class WorkflowManager(TaskBase):
@@ -259,6 +270,9 @@ class WorkflowManager(TaskBase):
job.status = 'failed' job.status = 'failed'
job.save(update_fields=['status', 'job_explanation']) job.save(update_fields=['status', 'job_explanation'])
job.websocket_emit_status('failed') job.websocket_emit_status('failed')
# NOTE: sending notification templates here is slightly worse performance
# this is not yet optimized in the same way as for the TaskManager
job.send_notification_templates('failed')
ScheduleWorkflowManager().schedule() ScheduleWorkflowManager().schedule()
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ? # TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
@@ -419,6 +433,25 @@ class TaskManager(TaskBase):
self.tm_models = TaskManagerModels() self.tm_models = TaskManagerModels()
self.controlplane_ig = self.tm_models.instance_groups.controlplane_ig self.controlplane_ig = self.tm_models.instance_groups.controlplane_ig
def process_job_dep_failures(self, task):
"""If job depends on a job that has failed, mark as failed and handle misc stuff."""
for dep in task.dependent_jobs.all():
# if we detect a failed or error dependency, go ahead and fail this task.
if dep.status in ("error", "failed"):
task.status = 'failed'
logger.warning(f'Previous task failed task: {task.id} dep: {dep.id} task manager')
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
self.pre_start_failed.append(task.id)
return True
return False
def job_blocked_by(self, task): def job_blocked_by(self, task):
# TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph # TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
# in the old task manager this was handled as a method on each task object outside of the graph and # in the old task manager this was handled as a method on each task object outside of the graph and
@@ -430,20 +463,6 @@ class TaskManager(TaskBase):
for dep in task.dependent_jobs.all(): for dep in task.dependent_jobs.all():
if dep.status in ACTIVE_STATES: if dep.status in ACTIVE_STATES:
return dep return dep
# if we detect a failed or error dependency, go ahead and fail this
# task. The errback on the dependency takes some time to trigger,
# and we don't want the task to enter running state if its
# dependency has failed or errored.
elif dep.status in ("error", "failed"):
task.status = 'failed'
task.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(dep)),
dep.name,
dep.id,
)
task.save(update_fields=['status', 'job_explanation'])
task.websocket_emit_status('failed')
return dep
return None return None
@@ -463,7 +482,6 @@ class TaskManager(TaskBase):
if self.start_task_limit == 0: if self.start_task_limit == 0:
# schedule another run immediately after this task manager # schedule another run immediately after this task manager
ScheduleTaskManager().schedule() ScheduleTaskManager().schedule()
from awx.main.tasks.system import handle_work_error, handle_work_success
task.status = 'waiting' task.status = 'waiting'
@@ -474,7 +492,7 @@ class TaskManager(TaskBase):
task.job_explanation += ' ' task.job_explanation += ' '
task.job_explanation += 'Task failed pre-start check.' task.job_explanation += 'Task failed pre-start check.'
task.save() task.save()
# TODO: run error handler to fail sub-tasks and send notifications self.pre_start_failed.append(task.id)
else: else:
if type(task) is WorkflowJob: if type(task) is WorkflowJob:
task.status = 'running' task.status = 'running'
@@ -496,19 +514,16 @@ class TaskManager(TaskBase):
# apply_async does a NOTIFY to the channel dispatcher is listening to # apply_async does a NOTIFY to the channel dispatcher is listening to
# postgres will treat this as part of the transaction, which is what we want # postgres will treat this as part of the transaction, which is what we want
if task.status != 'failed' and type(task) is not WorkflowJob: if task.status != 'failed' and type(task) is not WorkflowJob:
task_actual = {'type': get_type_for_model(type(task)), 'id': task.id}
task_cls = task._get_task_class() task_cls = task._get_task_class()
task_cls.apply_async( task_cls.apply_async(
[task.pk], [task.pk],
opts, opts,
queue=task.get_queue_name(), queue=task.get_queue_name(),
uuid=task.celery_task_id, uuid=task.celery_task_id,
callbacks=[{'task': handle_work_success.name, 'kwargs': {'task_actual': task_actual}}],
errbacks=[{'task': handle_work_error.name, 'kwargs': {'task_actual': task_actual}}],
) )
# In exception cases, like a job failing pre-start checks, we send the websocket status message # In exception cases, like a job failing pre-start checks, we send the websocket status message.
# for jobs going into waiting, we omit this because of performance issues, as it should go to running quickly # For jobs going into waiting, we omit this because of performance issues, as it should go to running quickly
if task.status != 'waiting': if task.status != 'waiting':
task.websocket_emit_status(task.status) # adds to on_commit task.websocket_emit_status(task.status) # adds to on_commit
@@ -529,6 +544,11 @@ class TaskManager(TaskBase):
if self.timed_out(): if self.timed_out():
logger.warning("Task manager has reached time out while processing pending jobs, exiting loop early") logger.warning("Task manager has reached time out while processing pending jobs, exiting loop early")
break break
has_failed = self.process_job_dep_failures(task)
if has_failed:
continue
blocked_by = self.job_blocked_by(task) blocked_by = self.job_blocked_by(task)
if blocked_by: if blocked_by:
self.subsystem_metrics.inc(f"{self.prefix}_tasks_blocked", 1) self.subsystem_metrics.inc(f"{self.prefix}_tasks_blocked", 1)
@@ -642,6 +662,11 @@ class TaskManager(TaskBase):
reap_job(j, 'failed') reap_job(j, 'failed')
def process_tasks(self): def process_tasks(self):
# maintain a list of jobs that went to an early failure state,
# meaning the dispatcher never got these jobs,
# that means we have to handle notifications for those
self.pre_start_failed = []
running_tasks = [t for t in self.all_tasks if t.status in ['waiting', 'running']] running_tasks = [t for t in self.all_tasks if t.status in ['waiting', 'running']]
self.process_running_tasks(running_tasks) self.process_running_tasks(running_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_running_processed", len(running_tasks)) self.subsystem_metrics.inc(f"{self.prefix}_running_processed", len(running_tasks))
@@ -651,6 +676,11 @@ class TaskManager(TaskBase):
self.process_pending_tasks(pending_tasks) self.process_pending_tasks(pending_tasks)
self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(pending_tasks)) self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(pending_tasks))
if self.pre_start_failed:
from awx.main.tasks.system import handle_failure_notifications
handle_failure_notifications.delay(self.pre_start_failed)
def timeout_approval_node(self, task): def timeout_approval_node(self, task):
if self.timed_out(): if self.timed_out():
logger.warning("Task manager has reached time out while processing approval nodes, exiting loop early") logger.warning("Task manager has reached time out while processing approval nodes, exiting loop early")

View File

@@ -208,9 +208,10 @@ class RunnerCallback:
# We opened a connection just for that save, close it here now # We opened a connection just for that save, close it here now
connections.close_all() connections.close_all()
        elif status_data['status'] == 'error':
-            result_traceback = status_data.get('result_traceback', None)
-            if result_traceback:
-                self.delay_update(result_traceback=result_traceback)
+            for field_name in ('result_traceback', 'job_explanation'):
+                field_value = status_data.get(field_name, None)
+                if field_value:
+                    self.delay_update(**{field_name: field_value})
def artifacts_handler(self, artifact_dir): def artifacts_handler(self, artifact_dir):
self.artifacts_processed = True self.artifacts_processed = True

awx/main/tasks/helpers.py (new file)
View File

@@ -0,0 +1,10 @@
from django.utils.timezone import now
from rest_framework.fields import DateTimeField
def is_run_threshold_reached(setting, threshold_seconds):
last_time = DateTimeField().to_internal_value(setting) if setting else None
if not last_time:
return True
else:
return (now() - last_time).total_seconds() > threshold_seconds
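
A brief usage sketch of this helper; the setting names are taken from the callers further down in this diff, and the interval arithmetic is the same days-to-seconds conversion they use:

# Illustrative call, mirroring cleanup_host_metrics below.
from django.conf import settings

last_ts = getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', None)
interval_days = getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30)

if is_run_threshold_reached(last_ts, interval_days * 86400):
    ...  # the threshold has passed (or the task never ran), so do the work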

View File

@@ -3,33 +3,90 @@ from dateutil.relativedelta import relativedelta
import logging import logging
from django.conf import settings from django.conf import settings
from django.db.models import Count from django.db.models import Count, F
from django.db.models.functions import TruncMonth from django.db.models.functions import TruncMonth
from django.utils.timezone import now from django.utils.timezone import now
from rest_framework.fields import DateTimeField
from awx.main.dispatch import get_task_queuename from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task from awx.main.dispatch.publish import task
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.main.tasks.helpers import is_run_threshold_reached
from awx.conf.license import get_license from awx.conf.license import get_license
logger = logging.getLogger('awx.main.tasks.host_metric_summary_monthly') logger = logging.getLogger('awx.main.tasks.host_metrics')
@task(queue=get_task_queuename)
def cleanup_host_metrics():
if is_run_threshold_reached(getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', None), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400):
logger.info(f"Executing cleanup_host_metrics, last ran at {getattr(settings, 'CLEANUP_HOST_METRICS_LAST_TS', '---')}")
HostMetricTask().cleanup(
soft_threshold=getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12),
hard_threshold=getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36),
)
logger.info("Finished cleanup_host_metrics")
@task(queue=get_task_queuename) @task(queue=get_task_queuename)
def host_metric_summary_monthly(): def host_metric_summary_monthly():
"""Run cleanup host metrics summary monthly task each week""" """Run cleanup host metrics summary monthly task each week"""
if _is_run_threshold_reached( if is_run_threshold_reached(getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400):
getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400
):
logger.info(f"Executing host_metric_summary_monthly, last ran at {getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', '---')}") logger.info(f"Executing host_metric_summary_monthly, last ran at {getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', '---')}")
HostMetricSummaryMonthlyTask().execute() HostMetricSummaryMonthlyTask().execute()
logger.info("Finished host_metric_summary_monthly") logger.info("Finished host_metric_summary_monthly")
def _is_run_threshold_reached(setting, threshold_seconds): class HostMetricTask:
last_time = DateTimeField().to_internal_value(setting) if setting else DateTimeField().to_internal_value('1970-01-01') """
This class provides the cleanup task for the HostMetric model.
There are two modes:
- soft cleanup (updates the deleted, deleted_counter and last_deleted columns)
- hard cleanup (deletes the rows from the db)
"""
return (now() - last_time).total_seconds() > threshold_seconds def cleanup(self, soft_threshold=None, hard_threshold=None):
"""
Main entrypoint, runs either soft cleanup, hard cleanup or both
:param soft_threshold: (int)
:param hard_threshold: (int)
"""
if hard_threshold is not None:
self.hard_cleanup(hard_threshold)
if soft_threshold is not None:
self.soft_cleanup(soft_threshold)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
@staticmethod
def soft_cleanup(threshold=None):
if threshold is None:
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
try:
threshold = int(threshold)
except (ValueError, TypeError) as e:
raise type(e)("soft_threshold has to be convertible to number") from e
last_automation_before = now() - relativedelta(months=threshold)
rows = HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=F('deleted_counter') + 1, last_deleted=now()
)
logger.info(f'cleanup_host_metrics: soft-deleted records last automated before {last_automation_before}, affected rows: {rows}')
@staticmethod
def hard_cleanup(threshold=None):
if threshold is None:
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36)
try:
threshold = int(threshold)
except (ValueError, TypeError) as e:
raise type(e)("hard_threshold has to be convertible to number") from e
last_deleted_before = now() - relativedelta(months=threshold)
queryset = HostMetric.objects.filter(deleted=True, last_deleted__lt=last_deleted_before)
rows = queryset.delete()
logger.info(f'cleanup_host_metrics: hard-deleted records which were soft deleted before {last_deleted_before}, affected rows: {rows[0]}')
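A minimal sketch of invoking the cleanup directly, mirroring how the cleanup_host_metrics task above calls it (thresholds are in months and mirror the settings defaults):

from awx.main.tasks.host_metrics import HostMetricTask

# hard cleanup runs first (purges rows soft-deleted more than 36 months ago),
# then soft cleanup flags rows not automated for 12 months
HostMetricTask().cleanup(soft_threshold=12, hard_threshold=36)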
class HostMetricSummaryMonthlyTask: class HostMetricSummaryMonthlyTask:


@@ -74,6 +74,8 @@ from awx.main.utils.common import (
extract_ansible_vars, extract_ansible_vars,
get_awx_version, get_awx_version,
create_partition, create_partition,
ScheduleWorkflowManager,
ScheduleTaskManager,
) )
from awx.conf.license import get_license from awx.conf.license import get_license
from awx.main.utils.handlers import SpecialInventoryHandler from awx.main.utils.handlers import SpecialInventoryHandler
@@ -450,6 +452,12 @@ class BaseTask(object):
instance.ansible_version = ansible_version_info instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version']) instance.save(update_fields=['ansible_version'])
# Run task manager appropriately for speculative dependencies
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
def should_use_fact_cache(self): def should_use_fact_cache(self):
return False return False
@@ -1873,6 +1881,8 @@ class RunSystemJob(BaseTask):
if system_job.job_type in ('cleanup_jobs', 'cleanup_activitystream'): if system_job.job_type in ('cleanup_jobs', 'cleanup_activitystream'):
if 'days' in json_vars: if 'days' in json_vars:
args.extend(['--days', str(json_vars.get('days', 60))]) args.extend(['--days', str(json_vars.get('days', 60))])
if 'batch_size' in json_vars:
args.extend(['--batch-size', str(json_vars['batch_size'])])
if 'dry_run' in json_vars and json_vars['dry_run']: if 'dry_run' in json_vars and json_vars['dry_run']:
args.extend(['--dry-run']) args.extend(['--dry-run'])
if system_job.job_type == 'cleanup_jobs': if system_job.job_type == 'cleanup_jobs':
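As a worked example (values are illustrative), a cleanup_jobs system job whose extra data is {"days": 30, "batch_size": 500, "dry_run": true} adds the following to the management command arguments built above:

# resulting additions to args for json_vars = {"days": 30, "batch_size": 500, "dry_run": True}
['--days', '30', '--batch-size', '500', '--dry-run']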


@@ -30,6 +30,7 @@ from awx.main.tasks.signals import signal_state, signal_callback, SignalExit
from awx.main.models import Instance, InstanceLink, UnifiedJob from awx.main.models import Instance, InstanceLink, UnifiedJob
from awx.main.dispatch import get_task_queuename from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task from awx.main.dispatch.publish import task
from awx.main.utils.pglock import advisory_lock
# Receptorctl # Receptorctl
from receptorctl.socket_interface import ReceptorControl from receptorctl.socket_interface import ReceptorControl
@@ -431,16 +432,16 @@ class AWXReceptorJob:
# massive, only ask for last 1000 bytes # massive, only ask for last 1000 bytes
startpos = max(stdout_size - 1000, 0) startpos = max(stdout_size - 1000, 0)
resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id, startpos=startpos, return_socket=True, return_sockfile=True) resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id, startpos=startpos, return_socket=True, return_sockfile=True)
resultsock.setblocking(False) # this makes resultfile reads non blocking
lines = resultfile.readlines() lines = resultfile.readlines()
receptor_output = b"".join(lines).decode() receptor_output = b"".join(lines).decode()
if receptor_output: if receptor_output:
self.task.runner_callback.delay_update(result_traceback=receptor_output) self.task.runner_callback.delay_update(result_traceback=f'Worker output:\n{receptor_output}')
elif detail: elif detail:
self.task.runner_callback.delay_update(result_traceback=detail) self.task.runner_callback.delay_update(result_traceback=f'Receptor detail:\n{detail}')
else: else:
logger.warning(f'No result details or output from {self.task.instance.log_format}, status:\n{state_name}') logger.warning(f'No result details or output from {self.task.instance.log_format}, status:\n{state_name}')
except Exception: except Exception:
logger.exception(f'Work results error from job id={self.task.instance.id} work_unit={self.task.instance.work_unit_id}')
raise RuntimeError(detail) raise RuntimeError(detail)
return res return res
@@ -675,26 +676,41 @@ RECEPTOR_CONFIG_STARTER = (
) )
@task() def should_update_config(instances):
def write_receptor_config(): '''
lock = FileLock(__RECEPTOR_CONF_LOCKFILE) checks that the list of instances matches the list of
with lock: tcp-peers in the config
receptor_config = list(RECEPTOR_CONFIG_STARTER) '''
current_config = read_receptor_config() # this gets receptor conf lock
current_peers = []
for config_entry in current_config:
for key, value in config_entry.items():
if key.endswith('-peer'):
current_peers.append(value['address'])
intended_peers = [f"{i.hostname}:{i.listener_port}" for i in instances]
logger.debug(f"Peers current {current_peers} intended {intended_peers}")
if set(current_peers) == set(intended_peers):
return False # config file is already up to date
this_inst = Instance.objects.me() return True
instances = Instance.objects.filter(node_type=Instance.Types.EXECUTION)
existing_peers = {link.target_id for link in InstanceLink.objects.filter(source=this_inst)}
new_links = []
for instance in instances:
peer = {'tcp-peer': {'address': f'{instance.hostname}:{instance.listener_port}', 'tls': 'tlsclient'}}
receptor_config.append(peer)
if instance.id not in existing_peers:
new_links.append(InstanceLink(source=this_inst, target=instance, link_state=InstanceLink.States.ADDING))
InstanceLink.objects.bulk_create(new_links)
with open(__RECEPTOR_CONF, 'w') as file: def generate_config_data():
yaml.dump(receptor_config, file, default_flow_style=False) # returns two values
# receptor config - based on current database peers
# should_update - If True, receptor_config differs from the receptor conf file on disk
instances = Instance.objects.filter(node_type__in=(Instance.Types.EXECUTION, Instance.Types.HOP), peers_from_control_nodes=True)
receptor_config = list(RECEPTOR_CONFIG_STARTER)
for instance in instances:
peer = {'tcp-peer': {'address': f'{instance.hostname}:{instance.listener_port}', 'tls': 'tlsclient'}}
receptor_config.append(peer)
should_update = should_update_config(instances)
return receptor_config, should_update
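A hypothetical illustration of the first return value for two instances with peers_from_control_nodes=True and listener_port 6789 (the RECEPTOR_CONFIG_STARTER entries are elided):

receptor_config = [
    # ... RECEPTOR_CONFIG_STARTER entries ...
    {'tcp-peer': {'address': 'hop1:6789', 'tls': 'tlsclient'}},
    {'tcp-peer': {'address': 'execution1:6789', 'tls': 'tlsclient'}},
]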
def reload_receptor():
logger.warning("Receptor config changed, reloading receptor")
# This needs to be outside of the lock because this function itself will acquire the lock. # This needs to be outside of the lock because this function itself will acquire the lock.
receptor_ctl = get_receptor_ctl() receptor_ctl = get_receptor_ctl()
@@ -710,8 +726,29 @@ def write_receptor_config():
else: else:
raise RuntimeError("Receptor reload failed") raise RuntimeError("Receptor reload failed")
links = InstanceLink.objects.filter(source=this_inst, target__in=instances, link_state=InstanceLink.States.ADDING)
links.update(link_state=InstanceLink.States.ESTABLISHED) @task()
def write_receptor_config():
"""
This task runs async on each control node, K8S only.
It is triggered whenever a remote node is added or removed, or when peers_from_control_nodes
is flipped.
write_receptor_config can be called multiple times in quick succession, for example
when several new instances are added back to back.
To serialize those runs, each control node first grabs a DB advisory lock specific
to just that control node (i.e. multiple control nodes can run this function
at the same time, since each only writes its local receptor config file).
"""
with advisory_lock(f"{settings.CLUSTER_HOST_ID}_write_receptor_config", wait=True):
# Determine whether the config file needs to be updated
receptor_config, should_update = generate_config_data()
if should_update:
lock = FileLock(__RECEPTOR_CONF_LOCKFILE)
with lock:
with open(__RECEPTOR_CONF, 'w') as file:
yaml.dump(receptor_config, file, default_flow_style=False)
reload_receptor()
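Since the task only rewrites the local file, callers fan it out to every control node; a minimal sketch of such a broadcast, mirroring the apply_async call removed from remove_deprovisioned_node below (per the tests later in this diff, the model layer now schedules it via schedule_write_receptor_config):

write_receptor_config.apply_async(queue='tower_broadcast_all')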
@task(queue=get_task_queuename) @task(queue=get_task_queuename)
@@ -731,6 +768,3 @@ def remove_deprovisioned_node(hostname):
# This will as a side effect also delete the InstanceLinks that are tied to it. # This will as a side effect also delete the InstanceLinks that are tied to it.
Instance.objects.filter(hostname=hostname).delete() Instance.objects.filter(hostname=hostname).delete()
# Update the receptor configs for all of the control-plane.
write_receptor_config.apply_async(queue='tower_broadcast_all')


@@ -16,7 +16,9 @@ class SignalExit(Exception):
class SignalState: class SignalState:
def reset(self): def reset(self):
self.sigterm_flag = False self.sigterm_flag = False
self.is_active = False self.sigint_flag = False
self.is_active = False # for nested context managers
self.original_sigterm = None self.original_sigterm = None
self.original_sigint = None self.original_sigint = None
self.raise_exception = False self.raise_exception = False
@@ -24,23 +26,36 @@ class SignalState:
def __init__(self): def __init__(self):
self.reset() self.reset()
def set_flag(self, *args): def raise_if_needed(self):
"""Method to pass into the python signal.signal method to receive signals"""
self.sigterm_flag = True
if self.raise_exception: if self.raise_exception:
self.raise_exception = False # so it is not raised a second time in error handling self.raise_exception = False # so it is not raised a second time in error handling
raise SignalExit() raise SignalExit()
def set_sigterm_flag(self, *args):
self.sigterm_flag = True
self.raise_if_needed()
def set_sigint_flag(self, *args):
self.sigint_flag = True
self.raise_if_needed()
def connect_signals(self): def connect_signals(self):
self.original_sigterm = signal.getsignal(signal.SIGTERM) self.original_sigterm = signal.getsignal(signal.SIGTERM)
self.original_sigint = signal.getsignal(signal.SIGINT) self.original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, self.set_flag) signal.signal(signal.SIGTERM, self.set_sigterm_flag)
signal.signal(signal.SIGINT, self.set_flag) signal.signal(signal.SIGINT, self.set_sigint_flag)
self.is_active = True self.is_active = True
def restore_signals(self): def restore_signals(self):
signal.signal(signal.SIGTERM, self.original_sigterm) signal.signal(signal.SIGTERM, self.original_sigterm)
signal.signal(signal.SIGINT, self.original_sigint) signal.signal(signal.SIGINT, self.original_sigint)
# if we got a signal while the context manager was active, call the original handlers.
if self.sigterm_flag:
if callable(self.original_sigterm):
self.original_sigterm()
if self.sigint_flag:
if callable(self.original_sigint):
self.original_sigint()
self.reset() self.reset()
@@ -48,7 +63,7 @@ signal_state = SignalState()
 def signal_callback():
-    return signal_state.sigterm_flag
+    return bool(signal_state.sigterm_flag or signal_state.sigint_flag)

 def with_signal_handling(f):
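A minimal sketch of how these pieces are meant to be used together in a long-running task (the loop body and event source are hypothetical):

from awx.main.tasks.signals import with_signal_handling, signal_callback

@with_signal_handling
def process_events(events):
    for event in events:
        if signal_callback():  # True once SIGTERM or SIGINT has been received
            break              # stop early; the original handler fires when the outermost context exits
        handle(event)          # hypothetical per-event work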


@@ -48,22 +48,16 @@ from awx.main.models import (
Inventory, Inventory,
SmartInventoryMembership, SmartInventoryMembership,
Job, Job,
HostMetric,
convert_jsonfields, convert_jsonfields,
) )
from awx.main.constants import ACTIVE_STATES from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename, reaper from awx.main.dispatch import get_task_queuename, reaper
from awx.main.utils.common import ( from awx.main.utils.common import ignore_inventory_computed_fields, ignore_inventory_group_removal
get_type_for_model,
ignore_inventory_computed_fields,
ignore_inventory_group_removal,
ScheduleWorkflowManager,
ScheduleTaskManager,
)
from awx.main.utils.reload import stop_local_services from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock from awx.main.utils.pglock import advisory_lock
from awx.main.tasks.helpers import is_run_threshold_reached
from awx.main.tasks.receptor import get_receptor_ctl, worker_info, worker_cleanup, administrative_workunit_reaper, write_receptor_config from awx.main.tasks.receptor import get_receptor_ctl, worker_info, worker_cleanup, administrative_workunit_reaper, write_receptor_config
from awx.main.consumers import emit_channel_notification from awx.main.consumers import emit_channel_notification
from awx.main import analytics from awx.main import analytics
@@ -368,9 +362,7 @@ def send_notifications(notification_list, job_id=None):
@task(queue=get_task_queuename) @task(queue=get_task_queuename)
def gather_analytics(): def gather_analytics():
from awx.conf.models import Setting if is_run_threshold_reached(getattr(settings, 'AUTOMATION_ANALYTICS_LAST_GATHER', None), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
if is_run_threshold_reached(Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_GATHER').first(), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
analytics.gather() analytics.gather()
@@ -427,29 +419,6 @@ def cleanup_images_and_files():
_cleanup_images_and_files() _cleanup_images_and_files()
@task(queue=get_task_queuename)
def cleanup_host_metrics():
"""Run cleanup host metrics ~each month"""
# TODO: move whole method to host_metrics in follow-up PR
from awx.conf.models import Setting
if is_run_threshold_reached(
Setting.objects.filter(key='CLEANUP_HOST_METRICS_LAST_TS').first(), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400
):
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
logger.info("Executing cleanup_host_metrics")
HostMetric.cleanup_task(months_ago)
logger.info("Finished cleanup_host_metrics")
def is_run_threshold_reached(setting, threshold_seconds):
from rest_framework.fields import DateTimeField
last_time = DateTimeField().to_internal_value(setting.value) if setting and setting.value else DateTimeField().to_internal_value('1970-01-01')
return (now() - last_time).total_seconds() > threshold_seconds
@task(queue=get_task_queuename) @task(queue=get_task_queuename)
def cluster_node_health_check(node): def cluster_node_health_check(node):
""" """
@@ -491,7 +460,6 @@ def execution_node_health_check(node):
data = worker_info(node) data = worker_info(node)
prior_capacity = instance.capacity prior_capacity = instance.capacity
instance.save_health_data( instance.save_health_data(
version='ansible-runner-' + data.get('runner_version', '???'), version='ansible-runner-' + data.get('runner_version', '???'),
cpu=data.get('cpu_count', 0), cpu=data.get('cpu_count', 0),
@@ -512,13 +480,37 @@ def execution_node_health_check(node):
return data return data
def inspect_execution_nodes(instance_list): def inspect_established_receptor_connections(mesh_status):
with advisory_lock('inspect_execution_nodes_lock', wait=False): '''
node_lookup = {inst.hostname: inst for inst in instance_list} Flips link state from ADDING to ESTABLISHED
If the InstanceLink source and target match the entries
in Known Connection Costs, flip to Established.
'''
from awx.main.models import InstanceLink
all_links = InstanceLink.objects.filter(link_state=InstanceLink.States.ADDING)
if not all_links.exists():
return
active_receptor_conns = mesh_status['KnownConnectionCosts']
update_links = []
for link in all_links:
if link.link_state != InstanceLink.States.REMOVING:
if link.target.hostname in active_receptor_conns.get(link.source.hostname, {}):
if link.link_state is not InstanceLink.States.ESTABLISHED:
link.link_state = InstanceLink.States.ESTABLISHED
update_links.append(link)
InstanceLink.objects.bulk_update(update_links, ['link_state'])
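For reference, the receptor status output this relies on maps each node to its directly connected peers and their costs; a sketch of the shape this code reads (hostnames and costs are illustrative):

mesh_status = {
    'KnownConnectionCosts': {
        'awx-1': {'hop1': 1},
        'hop1': {'awx-1': 1, 'execution1': 1},
    },
}
# a link awx-1 -> hop1 in the ADDING state flips to ESTABLISHED because
# 'hop1' is present in mesh_status['KnownConnectionCosts']['awx-1']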
def inspect_execution_and_hop_nodes(instance_list):
with advisory_lock('inspect_execution_and_hop_nodes_lock', wait=False):
node_lookup = {inst.hostname: inst for inst in instance_list}
ctl = get_receptor_ctl() ctl = get_receptor_ctl()
mesh_status = ctl.simple_command('status') mesh_status = ctl.simple_command('status')
inspect_established_receptor_connections(mesh_status)
nowtime = now() nowtime = now()
workers = mesh_status['Advertisements'] workers = mesh_status['Advertisements']
@@ -576,7 +568,7 @@ def cluster_node_heartbeat(dispatch_time=None, worker_tasks=None):
this_inst = inst this_inst = inst
break break
inspect_execution_nodes(instance_list) inspect_execution_and_hop_nodes(instance_list)
for inst in list(instance_list): for inst in list(instance_list):
if inst == this_inst: if inst == this_inst:
@@ -765,66 +757,21 @@ def awx_periodic_scheduler():
new_unified_job.save(update_fields=['status', 'job_explanation']) new_unified_job.save(update_fields=['status', 'job_explanation'])
new_unified_job.websocket_emit_status("failed") new_unified_job.websocket_emit_status("failed")
emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules")) emit_channel_notification('schedules-changed', dict(id=schedule.id, group_name="schedules"))
state.save()
def schedule_manager_success_or_error(instance):
if instance.unifiedjob_blocked_jobs.exists():
ScheduleTaskManager().schedule()
if instance.spawned_by_workflow:
ScheduleWorkflowManager().schedule()
@task(queue=get_task_queuename) @task(queue=get_task_queuename)
def handle_work_success(task_actual): def handle_failure_notifications(task_ids):
try: """A task-ified version of the method that sends notifications."""
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id']) found_task_ids = set()
except ObjectDoesNotExist: for instance in UnifiedJob.objects.filter(id__in=task_ids):
logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id'])) found_task_ids.add(instance.id)
return try:
if not instance: instance.send_notification_templates('failed')
return except Exception:
schedule_manager_success_or_error(instance) logger.exception(f'Error preparing notifications for task {instance.id}')
deleted_tasks = set(task_ids) - found_task_ids
if deleted_tasks:
@task(queue=get_task_queuename) logger.warning(f'Could not send notifications for {deleted_tasks} because they were not found in the database')
def handle_work_error(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
subtasks = instance.get_jobs_fail_chain() # reverse of dependent_jobs mostly
logger.debug(f'Executing error task id {task_actual["id"]}, subtasks: {[subtask.id for subtask in subtasks]}')
deps_of_deps = {}
for subtask in subtasks:
if subtask.celery_task_id != instance.celery_task_id and not subtask.cancel_flag and not subtask.status in ('successful', 'failed'):
# If there are multiple in the dependency chain, A->B->C, and this was called for A, blame B for clarity
blame_job = deps_of_deps.get(subtask.id, instance)
subtask.status = 'failed'
subtask.failed = True
if not subtask.job_explanation:
subtask.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(blame_job)),
blame_job.name,
blame_job.id,
)
subtask.save()
subtask.websocket_emit_status("failed")
for sub_subtask in subtask.get_jobs_fail_chain():
deps_of_deps[sub_subtask.id] = subtask
# We only send 1 job complete message since all the job completion message
# handling does is trigger the scheduler. If we extend the functionality of
# what the job complete message handler does then we may want to send a
# completion event for each job here.
schedule_manager_success_or_error(instance)
@task(queue=get_task_queuename) @task(queue=get_task_queuename)

View File

@@ -84,5 +84,6 @@ def test_custom_hostname_regex(post, admin_user):
"hostname": value[0], "hostname": value[0],
"node_type": "execution", "node_type": "execution",
"node_state": "installed", "node_state": "installed",
"peers": [],
} }
post(url=url, user=admin_user, data=data, expect=value[1]) post(url=url, user=admin_user, data=data, expect=value[1])


@@ -0,0 +1,342 @@
import pytest
import yaml
import itertools
from unittest import mock
from django.db.utils import IntegrityError
from awx.api.versioning import reverse
from awx.main.models import Instance
from awx.api.views.instance_install_bundle import generate_group_vars_all_yml
def has_peer(group_vars, peer):
peers = group_vars.get('receptor_peers', [])
for p in peers:
if f"{p['host']}:{p['port']}" == peer:
return True
return False
@pytest.mark.django_db
class TestPeers:
@pytest.fixture(autouse=True)
def configure_settings(self, settings):
settings.IS_K8S = True
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_prevent_peering_to_self(self, node_type):
"""
cannot peer to self
"""
control_instance = Instance.objects.create(hostname='abc', node_type=node_type)
with pytest.raises(IntegrityError):
control_instance.peers.add(control_instance)
@pytest.mark.parametrize('node_type', ['control', 'hybrid', 'hop', 'execution'])
def test_creating_node(self, node_type, admin_user, post):
"""
can only add hop and execution nodes via API
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type},
user=admin_user,
expect=400 if node_type in ['control', 'hybrid'] else 201,
)
def test_changing_node_type(self, admin_user, patch):
"""
cannot change node type
"""
hop = Instance.objects.create(hostname='abc', node_type="hop")
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_type": "execution"},
user=admin_user,
expect=400,
)
@pytest.mark.parametrize('node_type', ['hop', 'execution'])
def test_listener_port_null(self, node_type, admin_user, post):
"""
listener_port can be None
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "node_type": node_type, "listener_port": None},
user=admin_user,
expect=201,
)
@pytest.mark.parametrize('node_type, allowed', [('control', False), ('hybrid', False), ('hop', True), ('execution', True)])
def test_peers_from_control_nodes_allowed(self, node_type, allowed, post, admin_user):
"""
only hop and execution nodes can have peers_from_control_nodes set to True
"""
post(
url=reverse('api:instance_list'),
data={"hostname": "abc", "peers_from_control_nodes": True, "node_type": node_type, "listener_port": 6789},
user=admin_user,
expect=201 if allowed else 400,
)
def test_listener_port_is_required(self, admin_user, post):
"""
if adding an instance to the peers list, that instance must have listener_port set
"""
Instance.objects.create(hostname='abc', node_type="hop", listener_port=None)
post(
url=reverse('api:instance_list'),
data={"hostname": "ex", "peers_from_control_nodes": False, "node_type": "execution", "listener_port": None, "peers": ["abc"]},
user=admin_user,
expect=400,
)
def test_peers_from_control_nodes_listener_port_enabled(self, admin_user, post):
"""
if peers_from_control_nodes is True, listener_port must be an integer
Assert that all other combinations are allowed
"""
for index, item in enumerate(itertools.product(['hop', 'execution'], [True, False], [None, 6789])):
node_type, peers_from, listener_port = item
# only disallowed case is when peers_from is True and listener port is None
disallowed = peers_from and not listener_port
post(
url=reverse('api:instance_list'),
data={"hostname": f"abc{index}", "peers_from_control_nodes": peers_from, "node_type": node_type, "listener_port": listener_port},
user=admin_user,
expect=400 if disallowed else 201,
)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_disallow_modifying_peers_control_nodes(self, node_type, admin_user, patch):
"""
for control nodes, peers field should not be
modified directly via patch.
"""
control = Instance.objects.create(hostname='abc', node_type=node_type)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', peers_from_control_nodes=False, listener_port=6789)
assert [hop1] == list(control.peers.all()) # only hop1 should be peered
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop2"]},
user=admin_user,
expect=400, # cannot add peers directly
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": ["hop1"]},
user=admin_user,
expect=200, # patching with current peers list should be okay
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={"peers": []},
user=admin_user,
expect=400, # cannot remove peers directly
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': control.pk}),
data={},
user=admin_user,
expect=200, # patching without data should be fine too
)
# patch hop2
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop2.pk}),
data={"peers_from_control_nodes": True},
user=admin_user,
expect=200, # flipping peers_from_control_nodes on hop2 is allowed
)
assert {hop1, hop2} == set(control.peers.all()) # hop1 and hop2 should now be peered from control node
def test_disallow_changing_hostname(self, admin_user, patch):
"""
cannot change hostname
"""
hop = Instance.objects.create(hostname='hop', node_type='hop')
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"hostname": "hop2"},
user=admin_user,
expect=400,
)
def test_disallow_changing_node_state(self, admin_user, patch):
"""
only allow setting to deprovisioning
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', node_state='installed')
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "deprovisioning"},
user=admin_user,
expect=200,
)
patch(
url=reverse('api:instance_detail', kwargs={'pk': hop.pk}),
data={"node_state": "ready"},
user=admin_user,
expect=400,
)
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_control_node_automatically_peers(self, node_type):
"""
a new control node should automatically peer to the hop node;
the peer relationship should be removed when the hop node is deleted
"""
hop = Instance.objects.create(hostname='hop', node_type='hop', peers_from_control_nodes=True, listener_port=6789)
control = Instance.objects.create(hostname='abc', node_type=node_type)
assert hop in control.peers.all()
hop.delete()
assert not control.peers.exists()
@pytest.mark.parametrize('node_type', ['control', 'hybrid'])
def test_control_node_retains_other_peers(self, node_type):
"""
if a new node comes online, other peer relationships should
remain intact
"""
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop1.peers.add(hop2)
# a control node is added
Instance.objects.create(hostname='control', node_type=node_type, listener_port=None)
assert hop1.peers.exists()
def test_group_vars(self, get, admin_user):
"""
control > hop1 > hop2 < execution
"""
control = Instance.objects.create(hostname='control', node_type='control', listener_port=None)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
execution = Instance.objects.create(hostname='execution', node_type='execution', listener_port=6789)
execution.peers.add(hop2)
hop1.peers.add(hop2)
control_vars = yaml.safe_load(generate_group_vars_all_yml(control))
hop1_vars = yaml.safe_load(generate_group_vars_all_yml(hop1))
hop2_vars = yaml.safe_load(generate_group_vars_all_yml(hop2))
execution_vars = yaml.safe_load(generate_group_vars_all_yml(execution))
# control group vars assertions
assert has_peer(control_vars, 'hop1:6789')
assert not has_peer(control_vars, 'hop2:6789')
assert not has_peer(control_vars, 'execution:6789')
assert not control_vars.get('receptor_listener', False)
# hop1 group vars assertions
assert has_peer(hop1_vars, 'hop2:6789')
assert not has_peer(hop1_vars, 'execution:6789')
assert hop1_vars.get('receptor_listener', False)
# hop2 group vars assertions
assert not has_peer(hop2_vars, 'hop1:6789')
assert not has_peer(hop2_vars, 'execution:6789')
assert hop2_vars.get('receptor_listener', False)
assert hop2_vars.get('receptor_peers', []) == []
# execution group vars assertions
assert has_peer(execution_vars, 'hop2:6789')
assert not has_peer(execution_vars, 'hop1:6789')
assert execution_vars.get('receptor_listener', False)
def test_write_receptor_config_called(self):
"""
Assert that write_receptor_config is called
when certain instances are created, or if
peers_from_control_nodes changes.
In general, write_receptor_config should only
be called when necessary, as it will reload
receptor backend connections which is not trivial.
"""
with mock.patch('awx.main.models.ha.schedule_write_receptor_config') as write_method:
# new control instance but nothing to peer to (no)
control = Instance.objects.create(hostname='control1', node_type='control')
write_method.assert_not_called()
# new hop node with peers_from_control_nodes False (no)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop1.delete()
write_method.assert_not_called()
# new hop node with peers_from_control_nodes True (yes)
hop1 = Instance.objects.create(hostname='hop1', node_type='hop', listener_port=6789, peers_from_control_nodes=True)
write_method.assert_called()
write_method.reset_mock()
# new control instance but with something to peer to (yes)
Instance.objects.create(hostname='control2', node_type='control')
write_method.assert_called()
write_method.reset_mock()
# new hop node with peers_from_control_nodes False and peered to another hop node (no)
hop2 = Instance.objects.create(hostname='hop2', node_type='hop', listener_port=6789, peers_from_control_nodes=False)
hop2.peers.add(hop1)
hop2.delete()
write_method.assert_not_called()
# changing peers_from_control_nodes to False (yes)
hop1.peers_from_control_nodes = False
hop1.save()
write_method.assert_called()
write_method.reset_mock()
# deleting hop node that has peers_from_control_nodes to False (no)
hop1.delete()
write_method.assert_not_called()
# deleting control nodes (no)
control.delete()
write_method.assert_not_called()
def test_write_receptor_config_data(self):
"""
Assert the correct peers are included in data that will
be written to receptor.conf
"""
from awx.main.tasks.receptor import RECEPTOR_CONFIG_STARTER
with mock.patch('awx.main.tasks.receptor.read_receptor_config', return_value=list(RECEPTOR_CONFIG_STARTER)):
from awx.main.tasks.receptor import generate_config_data
_, should_update = generate_config_data()
assert not should_update
# not peered, so config file should not be updated
for i in range(3):
Instance.objects.create(hostname=f"exNo-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=False)
_, should_update = generate_config_data()
assert not should_update
# peered, so config file should be updated
expected_peers = []
for i in range(3):
expected_peers.append(f"hop-{i}:6789")
Instance.objects.create(hostname=f"hop-{i}", node_type='hop', listener_port=6789, peers_from_control_nodes=True)
for i in range(3):
expected_peers.append(f"exYes-{i}:6789")
Instance.objects.create(hostname=f"exYes-{i}", node_type='execution', listener_port=6789, peers_from_control_nodes=True)
new_config, should_update = generate_config_data()
assert should_update
peers = []
for entry in new_config:
for key, value in entry.items():
if key == "tcp-peer":
peers.append(value['address'])
assert set(expected_peers) == set(peers)


@@ -0,0 +1,78 @@
import pytest
from awx.main.tasks.host_metrics import HostMetricTask
from awx.main.models.inventory import HostMetric
from awx.main.tests.factories.fixtures import mk_host_metric
from dateutil.relativedelta import relativedelta
from django.conf import settings
from django.utils import timezone
@pytest.mark.django_db
def test_no_host_metrics():
"""No-crash test"""
assert HostMetric.objects.count() == 0
HostMetricTask().cleanup(soft_threshold=0, hard_threshold=0)
HostMetricTask().cleanup(soft_threshold=24, hard_threshold=42)
assert HostMetric.objects.count() == 0
@pytest.mark.django_db
def test_delete_exception():
"""Crash test"""
with pytest.raises(ValueError):
HostMetricTask().soft_cleanup("")
with pytest.raises(TypeError):
HostMetricTask().hard_cleanup(set())
@pytest.mark.django_db
@pytest.mark.parametrize('threshold', [settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD, 20])
def test_soft_delete(threshold):
"""Metrics with last_automation < threshold are updated to deleted=True"""
mk_host_metric('host_1', first_automation=ago(months=1), last_automation=ago(months=1), deleted=False)
mk_host_metric('host_2', first_automation=ago(months=1), last_automation=ago(months=1), deleted=True)
mk_host_metric('host_3', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=-1), deleted=False)
mk_host_metric('host_4', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=-1), deleted=True)
mk_host_metric('host_5', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=1), deleted=False)
mk_host_metric('host_6', first_automation=ago(months=1), last_automation=ago(months=threshold, hours=1), deleted=True)
mk_host_metric('host_7', first_automation=ago(months=1), last_automation=ago(months=42), deleted=False)
mk_host_metric('host_8', first_automation=ago(months=1), last_automation=ago(months=42), deleted=True)
assert HostMetric.objects.count() == 8
assert HostMetric.active_objects.count() == 4
for i in range(2):
HostMetricTask().cleanup(soft_threshold=threshold)
assert HostMetric.objects.count() == 8
hostnames = set(HostMetric.objects.filter(deleted=False).order_by('hostname').values_list('hostname', flat=True))
assert hostnames == {'host_1', 'host_3'}
@pytest.mark.django_db
@pytest.mark.parametrize('threshold', [settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD, 20])
def test_hard_delete(threshold):
"""Metrics with last_deleted < threshold and deleted=True are deleted from the db"""
mk_host_metric('host_1', first_automation=ago(months=1), last_deleted=ago(months=1), deleted=False)
mk_host_metric('host_2', first_automation=ago(months=1), last_deleted=ago(months=1), deleted=True)
mk_host_metric('host_3', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=-1), deleted=False)
mk_host_metric('host_4', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=-1), deleted=True)
mk_host_metric('host_5', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=1), deleted=False)
mk_host_metric('host_6', first_automation=ago(months=1), last_deleted=ago(months=threshold, hours=1), deleted=True)
mk_host_metric('host_7', first_automation=ago(months=1), last_deleted=ago(months=42), deleted=False)
mk_host_metric('host_8', first_automation=ago(months=1), last_deleted=ago(months=42), deleted=True)
assert HostMetric.objects.count() == 8
assert HostMetric.active_objects.count() == 4
for i in range(2):
HostMetricTask().cleanup(hard_threshold=threshold)
assert HostMetric.objects.count() == 6
hostnames = set(HostMetric.objects.order_by('hostname').values_list('hostname', flat=True))
assert hostnames == {'host_1', 'host_2', 'host_3', 'host_4', 'host_5', 'host_7'}
def ago(months=0, hours=0):
return timezone.now() - relativedelta(months=months, hours=hours)
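For clarity, the helper mixes months and hours so fixtures can land just inside or just past a threshold; a couple of illustrative values:

ago(months=12, hours=1)   # 12 months and 1 hour ago -> older than a 12-month threshold (eligible for cleanup)
ago(months=12, hours=-1)  # one hour short of 12 months -> newer than the threshold (kept)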


@@ -76,3 +76,24 @@ def test_hashivault_handle_auth_kubernetes():
def test_hashivault_handle_auth_not_enough_args(): def test_hashivault_handle_auth_not_enough_args():
with pytest.raises(Exception): with pytest.raises(Exception):
hashivault.handle_auth() hashivault.handle_auth()
class TestDelineaImports:
"""
These modules have a try-except for ImportError which allows falling back to the older library,
but we do not want the awx_devel image to have the older library,
so these tests are designed to fail if the imports wind up using the fallback
"""
def test_dsv_import(self):
from awx.main.credential_plugins.dsv import SecretsVault # noqa
# assert this module as opposed to older thycotic.secrets.vault
assert SecretsVault.__module__ == 'delinea.secrets.vault'
def test_tss_import(self):
from awx.main.credential_plugins.tss import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret # noqa
for cls in (DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret):
# assert this module as opposed to older thycotic.secrets.server
assert cls.__module__ == 'delinea.secrets.server'


@@ -37,9 +37,9 @@ def test_orphan_unified_job_creation(instance, inventory):
@pytest.mark.django_db @pytest.mark.django_db
-@mock.patch('awx.main.tasks.system.inspect_execution_nodes', lambda *args, **kwargs: None)
+@mock.patch('awx.main.tasks.system.inspect_execution_and_hop_nodes', lambda *args, **kwargs: None)
-@mock.patch('awx.main.models.ha.get_cpu_effective_capacity', lambda cpu: 8)
+@mock.patch('awx.main.models.ha.get_cpu_effective_capacity', lambda cpu, is_control_node: 8)
-@mock.patch('awx.main.models.ha.get_mem_effective_capacity', lambda mem: 62)
+@mock.patch('awx.main.models.ha.get_mem_effective_capacity', lambda mem, is_control_node: 62)
def test_job_capacity_and_with_inactive_node(): def test_job_capacity_and_with_inactive_node():
i = Instance.objects.create(hostname='test-1') i = Instance.objects.create(hostname='test-1')
i.save_health_data('18.0.1', 2, 8000) i.save_health_data('18.0.1', 2, 8000)


@@ -0,0 +1,44 @@
import pytest
from django_test_migrations.plan import all_migrations, nodes_to_tuples
"""
Most tests that live in here can probably be deleted at some point. They are mainly
for developers. Migration tests can be deleted once the AWX versions that users upgrade
from fall out of support; that is also a good time to squash migrations. Squashing
will likely break the tests that live here.
The smoke test should be kept in here. The smoke test ensures that our migrations
continue to work when sqlite is the backing database (vs. the default DB of postgres).
"""
@pytest.mark.django_db
class TestMigrationSmoke:
def test_happy_path(self, migrator):
"""
This smoke test runs all the migrations.
Example of how to use django-test-migration to invoke particular migration(s)
while weaving in object creation and assertions.
Note that this is more than just an example. It is a smoke test because it runs ALL
the migrations. Our "normal" unit tests skip running migrations because doing so is slow.
"""
migration_nodes = all_migrations('default')
migration_tuples = nodes_to_tuples(migration_nodes)
final_migration = migration_tuples[-1]
migrator.apply_initial_migration(('main', None))
# I just picked a newish migration at the time of writing this.
# If someone from the future finds themselves here because they are squashing migrations
# it is fine to change the 0180_... below to some other newish migration
intermediate_state = migrator.apply_tested_migration(('main', '0180_add_hostmetric_fields'))
Instance = intermediate_state.apps.get_model('main', 'Instance')
# Create any old object in the database
Instance.objects.create(hostname='foobar', node_type='control')
final_state = migrator.apply_tested_migration(final_migration)
Instance = final_state.apps.get_model('main', 'Instance')
assert Instance.objects.filter(hostname='foobar').count() == 1


@@ -122,25 +122,6 @@ def test_team_org_resource_role(ext_auth, organization, rando, org_admin, team):
] == [True for i in range(2)] ] == [True for i in range(2)]
@pytest.mark.django_db
def test_user_accessible_objects(user, organization):
"""
We cannot directly use accessible_objects for User model because
both editing and read permissions are obligated to complex business logic
"""
admin = user('admin', False)
u = user('john', False)
access = UserAccess(admin)
assert access.get_queryset().count() == 1 # can only see himself
organization.member_role.members.add(u)
organization.member_role.members.add(admin)
assert access.get_queryset().count() == 2
organization.member_role.members.remove(u)
assert access.get_queryset().count() == 1
@pytest.mark.django_db @pytest.mark.django_db
def test_org_admin_create_sys_auditor(org_admin): def test_org_admin_create_sys_auditor(org_admin):
access = UserAccess(org_admin) access = UserAccess(org_admin)


@@ -5,8 +5,8 @@ import tempfile
import shutil import shutil
from awx.main.tasks.jobs import RunJob from awx.main.tasks.jobs import RunJob
-from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files, handle_work_error
+from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files
-from awx.main.models import Instance, Job, InventoryUpdate, ProjectUpdate
+from awx.main.models import Instance, Job
@pytest.fixture @pytest.fixture
@@ -73,17 +73,3 @@ def test_does_not_run_reaped_job(mocker, mock_me):
job.refresh_from_db() job.refresh_from_db()
assert job.status == 'failed' assert job.status == 'failed'
mock_run.assert_not_called() mock_run.assert_not_called()
@pytest.mark.django_db
def test_handle_work_error_nested(project, inventory_source):
pu = ProjectUpdate.objects.create(status='failed', project=project, celery_task_id='1234')
iu = InventoryUpdate.objects.create(status='pending', inventory_source=inventory_source, source='scm')
job = Job.objects.create(status='pending')
iu.dependent_jobs.add(pu)
job.dependent_jobs.add(pu, iu)
handle_work_error({'type': 'project_update', 'id': pu.id})
iu.refresh_from_db()
job.refresh_from_db()
assert iu.job_explanation == f'Previous Task Failed: {{"job_type": "project_update", "job_name": "", "job_id": "{pu.id}"}}'
assert job.job_explanation == f'Previous Task Failed: {{"job_type": "inventory_update", "job_name": "", "job_id": "{iu.id}"}}'


@@ -47,7 +47,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa 'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
] ]
), ),
), ),
@@ -61,7 +61,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa 'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
] ]
), ),
), ),
@@ -75,7 +75,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa 'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5")', # noqa
] ]
), ),
), ),
@@ -89,7 +89,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa 'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
] ]
), ),
), ),
@@ -103,7 +103,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa 'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
] ]
), ),
), ),
@@ -117,7 +117,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa 'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
] ]
), ),
), ),
@@ -131,7 +131,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa 'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
] ]
), ),
), ),
@@ -145,7 +145,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa 'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
] ]
), ),
), ),
@@ -159,7 +159,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa 'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
] ]
), ),
), ),
@@ -173,7 +173,7 @@ data_loggly = {
'\n'.join( '\n'.join(
[ [
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxdiskspace="1g" queue.type="LinkedList" queue.saveOnShutdown="on" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa 'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" queue.spoolDirectory="/var/lib/awx" queue.filename="awx-external-logger-action-queue" queue.maxDiskSpace="1g" queue.maxFileSize="100m" queue.type="LinkedList" queue.saveOnShutdown="on" queue.syncqueuefiles="on" queue.checkpointInterval="1000" queue.size="131072" queue.highwaterMark="98304" queue.discardMark="117964" queue.discardSeverity="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
] ]
), ),
), ),


@@ -36,7 +36,9 @@ def test_SYSTEM_TASK_ABS_MEM_conversion(value, converted_value, mem_capacity):
mock_settings.IS_K8S = True mock_settings.IS_K8S = True
assert convert_mem_str_to_bytes(value) == converted_value assert convert_mem_str_to_bytes(value) == converted_value
assert get_corrected_memory(-1) == converted_value assert get_corrected_memory(-1) == converted_value
assert get_mem_effective_capacity(-1) == mem_capacity assert get_mem_effective_capacity(1, is_control_node=True) == mem_capacity
# SYSTEM_TASK_ABS_MEM should not affect memory and capacity for execution nodes
assert get_mem_effective_capacity(2147483648, is_control_node=False) == 20
@pytest.mark.parametrize( @pytest.mark.parametrize(
@@ -58,4 +60,6 @@ def test_SYSTEM_TASK_ABS_CPU_conversion(value, converted_value, cpu_capacity):
mock_settings.SYSTEM_TASK_FORKS_CPU = 4 mock_settings.SYSTEM_TASK_FORKS_CPU = 4
assert convert_cpu_str_to_decimal_cpu(value) == converted_value assert convert_cpu_str_to_decimal_cpu(value) == converted_value
assert get_corrected_cpu(-1) == converted_value assert get_corrected_cpu(-1) == converted_value
assert get_cpu_effective_capacity(-1) == cpu_capacity assert get_cpu_effective_capacity(-1, is_control_node=True) == cpu_capacity
# SYSTEM_TASK_ABS_CPU should not affect cpu count and capacity for execution nodes
assert get_cpu_effective_capacity(2.0, is_control_node=False) == 8
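
The new assertions encode the capacity math for execution nodes, which now skip the SYSTEM_TASK_ABS_* correction entirely. A minimal standalone sketch of that arithmetic, mirroring the expected values in the tests rather than the actual AWX helpers, and assuming the defaults of 100 MB per fork and 4 forks per CPU used above:

def mem_effective_capacity(mem_bytes, mem_mb_per_fork=100):
    # Execution nodes use the reported memory as-is (no SYSTEM_TASK_ABS_MEM correction).
    return max(1, int(mem_bytes / (mem_mb_per_fork * 1024 * 1024)))

def cpu_effective_capacity(cpu_count, forks_per_cpu=4):
    # Execution nodes use the reported CPU count as-is (no SYSTEM_TASK_ABS_CPU correction).
    return max(1, int(cpu_count * forks_per_cpu))

assert mem_effective_capacity(2147483648) == 20  # 2 GiB / 100 MB per fork -> 20 forks
assert cpu_effective_capacity(2.0) == 8          # 2 CPUs * 4 forks per CPU -> 8 forks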

View File

@@ -1,8 +1,43 @@
import signal import signal
import functools
from awx.main.tasks.signals import signal_state, signal_callback, with_signal_handling from awx.main.tasks.signals import signal_state, signal_callback, with_signal_handling
def pytest_sigint():
pytest_sigint.called_count += 1
def pytest_sigterm():
pytest_sigterm.called_count += 1
def tmp_signals_for_test(func):
"""
When we run our internal signal handlers, it will call the original signal
handlers when its own work is finished.
This would crash the test runners normally, because those methods will
shut down the process.
So this is a decorator to safely replace existing signal handlers
with new signal handlers that do nothing so that tests do not crash.
"""
@functools.wraps(func)
def wrapper():
original_sigterm = signal.getsignal(signal.SIGTERM)
original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGTERM, pytest_sigterm)
signal.signal(signal.SIGINT, pytest_sigint)
pytest_sigterm.called_count = 0
pytest_sigint.called_count = 0
func()
signal.signal(signal.SIGTERM, original_sigterm)
signal.signal(signal.SIGINT, original_sigint)
return wrapper
@tmp_signals_for_test
def test_outer_inner_signal_handling(): def test_outer_inner_signal_handling():
""" """
Even if the flag is set in the outer context, its value should persist in the inner context Even if the flag is set in the outer context, its value should persist in the inner context
@@ -15,17 +50,22 @@ def test_outer_inner_signal_handling():
@with_signal_handling @with_signal_handling
def f1(): def f1():
assert signal_callback() is False assert signal_callback() is False
signal_state.set_flag() signal_state.set_sigterm_flag()
assert signal_callback() assert signal_callback()
f2() f2()
original_sigterm = signal.getsignal(signal.SIGTERM) original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1() f1()
assert signal_callback() is False assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 1
assert pytest_sigint.called_count == 0
@tmp_signals_for_test
def test_inner_outer_signal_handling(): def test_inner_outer_signal_handling():
""" """
Even if the flag is set in the inner context, its value should persist in the outer context Even if the flag is set in the inner context, its value should persist in the outer context
@@ -34,7 +74,7 @@ def test_inner_outer_signal_handling():
@with_signal_handling @with_signal_handling
def f2(): def f2():
assert signal_callback() is False assert signal_callback() is False
signal_state.set_flag() signal_state.set_sigint_flag()
assert signal_callback() assert signal_callback()
@with_signal_handling @with_signal_handling
@@ -45,6 +85,10 @@ def test_inner_outer_signal_handling():
original_sigterm = signal.getsignal(signal.SIGTERM) original_sigterm = signal.getsignal(signal.SIGTERM)
assert signal_callback() is False assert signal_callback() is False
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 0
f1() f1()
assert signal_callback() is False assert signal_callback() is False
assert signal.getsignal(signal.SIGTERM) is original_sigterm assert signal.getsignal(signal.SIGTERM) is original_sigterm
assert pytest_sigterm.called_count == 0
assert pytest_sigint.called_count == 1
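
The tmp_signals_for_test decorator above swaps in counting no-ops for SIGTERM/SIGINT so that, when AWX's signal_state chains to the previously registered handlers, the test process is not shut down. A hedged variant of the same idea as a context manager, using try/finally so the original handlers are restored even if the test body raises (illustrative only, not the code used in these tests):

import signal
from contextlib import contextmanager

@contextmanager
def swapped_signal_handlers():
    """Temporarily replace SIGTERM/SIGINT handlers with counting no-ops."""
    calls = {"sigterm": 0, "sigint": 0}

    def fake_sigterm(signum, frame):
        calls["sigterm"] += 1

    def fake_sigint(signum, frame):
        calls["sigint"] += 1

    original_sigterm = signal.getsignal(signal.SIGTERM)
    original_sigint = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGTERM, fake_sigterm)
    signal.signal(signal.SIGINT, fake_sigint)
    try:
        yield calls
    finally:
        # Restore the real handlers even if the wrapped test fails.
        signal.signal(signal.SIGTERM, original_sigterm)
        signal.signal(signal.SIGINT, original_sigint)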

View File

@@ -143,13 +143,6 @@ def test_send_notifications_job_id(mocker):
assert UnifiedJob.objects.get.called_with(id=1) assert UnifiedJob.objects.get.called_with(id=1)
def test_work_success_callback_missing_job():
task_data = {'type': 'project_update', 'id': 9999}
with mock.patch('django.db.models.query.QuerySet.get') as get_mock:
get_mock.side_effect = ProjectUpdate.DoesNotExist()
assert system.handle_work_success(task_data) is None
@mock.patch('awx.main.models.UnifiedJob.objects.get') @mock.patch('awx.main.models.UnifiedJob.objects.get')
@mock.patch('awx.main.models.Notification.objects.filter') @mock.patch('awx.main.models.Notification.objects.filter')
def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker): def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker):

View File

@@ -23,7 +23,7 @@ from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
from django.utils.dateparse import parse_datetime from django.utils.dateparse import parse_datetime
from django.utils.translation import gettext_lazy as _ from django.utils.translation import gettext_lazy as _
from django.utils.functional import cached_property from django.utils.functional import cached_property
from django.db import connection, transaction, ProgrammingError from django.db import connection, transaction, ProgrammingError, IntegrityError
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField from django.db.models.fields.related import ForeignObjectRel, ManyToManyField
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor, ManyToManyDescriptor from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor, ManyToManyDescriptor
from django.db.models.query import QuerySet from django.db.models.query import QuerySet
@@ -768,14 +768,13 @@ def get_corrected_cpu(cpu_count): # formerlly get_cpu_capacity
return cpu_count # no correction return cpu_count # no correction
def get_cpu_effective_capacity(cpu_count): def get_cpu_effective_capacity(cpu_count, is_control_node=False):
from django.conf import settings from django.conf import settings
cpu_count = get_corrected_cpu(cpu_count)
settings_forkcpu = getattr(settings, 'SYSTEM_TASK_FORKS_CPU', None) settings_forkcpu = getattr(settings, 'SYSTEM_TASK_FORKS_CPU', None)
env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None) env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
if is_control_node:
cpu_count = get_corrected_cpu(cpu_count)
if env_forkcpu: if env_forkcpu:
forkcpu = int(env_forkcpu) forkcpu = int(env_forkcpu)
elif settings_forkcpu: elif settings_forkcpu:
@@ -834,6 +833,7 @@ def get_corrected_memory(memory):
# Runner returns memory in bytes # Runner returns memory in bytes
# so we convert memory from settings to bytes as well. # so we convert memory from settings to bytes as well.
if env_absmem is not None: if env_absmem is not None:
return convert_mem_str_to_bytes(env_absmem) return convert_mem_str_to_bytes(env_absmem)
elif settings_absmem is not None: elif settings_absmem is not None:
@@ -842,14 +842,13 @@ def get_corrected_memory(memory):
return memory return memory
def get_mem_effective_capacity(mem_bytes): def get_mem_effective_capacity(mem_bytes, is_control_node=False):
from django.conf import settings from django.conf import settings
mem_bytes = get_corrected_memory(mem_bytes)
settings_mem_mb_per_fork = getattr(settings, 'SYSTEM_TASK_FORKS_MEM', None) settings_mem_mb_per_fork = getattr(settings, 'SYSTEM_TASK_FORKS_MEM', None)
env_mem_mb_per_fork = os.getenv('SYSTEM_TASK_FORKS_MEM', None) env_mem_mb_per_fork = os.getenv('SYSTEM_TASK_FORKS_MEM', None)
if is_control_node:
mem_bytes = get_corrected_memory(mem_bytes)
if env_mem_mb_per_fork: if env_mem_mb_per_fork:
mem_mb_per_fork = int(env_mem_mb_per_fork) mem_mb_per_fork = int(env_mem_mb_per_fork)
elif settings_mem_mb_per_fork: elif settings_mem_mb_per_fork:
@@ -1165,13 +1164,24 @@ def create_partition(tblname, start=None):
try: try:
with transaction.atomic(): with transaction.atomic():
with connection.cursor() as cursor: with connection.cursor() as cursor:
cursor.execute(f"SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = '{tblname}_{partition_label}');")
row = cursor.fetchone()
if row is not None:
for val in row: # should only have 1
if val is True:
logger.debug(f'Event partition table {tblname}_{partition_label} already exists')
return
cursor.execute( cursor.execute(
f'CREATE TABLE IF NOT EXISTS {tblname}_{partition_label} ' f'CREATE TABLE {tblname}_{partition_label} (LIKE {tblname} INCLUDING DEFAULTS INCLUDING CONSTRAINTS); '
f'PARTITION OF {tblname} ' f'ALTER TABLE {tblname} ATTACH PARTITION {tblname}_{partition_label} '
f'FOR VALUES FROM (\'{start_timestamp}\') to (\'{end_timestamp}\');' f'FOR VALUES FROM (\'{start_timestamp}\') TO (\'{end_timestamp}\');'
) )
except ProgrammingError as e: except (ProgrammingError, IntegrityError) as e:
logger.debug(f'Caught known error due to existing partition: {e}') if 'already exists' in str(e):
logger.info(f'Caught known error due to partition creation race: {e}')
else:
raise
def cleanup_new_process(func): def cleanup_new_process(func):
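
The reworked create_partition replaces CREATE TABLE ... PARTITION OF with an existence check followed by CREATE TABLE ... LIKE plus ATTACH PARTITION, and treats "already exists" errors from concurrent workers as benign. A condensed sketch of that flow; the function and argument names are illustrative, and it assumes a configured Django connection as in the original:

from django.db import connection, transaction, ProgrammingError, IntegrityError

def ensure_partition(tblname, label, start_ts, end_ts, logger):
    """Create and attach {tblname}_{label} for [start_ts, end_ts), tolerating creation races."""
    try:
        with transaction.atomic():
            with connection.cursor() as cursor:
                cursor.execute(
                    "SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = %s);",
                    (f"{tblname}_{label}",),
                )
                if cursor.fetchone()[0]:
                    logger.debug(f"Event partition table {tblname}_{label} already exists")
                    return
                cursor.execute(
                    f"CREATE TABLE {tblname}_{label} (LIKE {tblname} INCLUDING DEFAULTS INCLUDING CONSTRAINTS); "
                    f"ALTER TABLE {tblname} ATTACH PARTITION {tblname}_{label} "
                    f"FOR VALUES FROM ('{start_ts}') TO ('{end_ts}');"
                )
    except (ProgrammingError, IntegrityError) as e:
        # Another worker can still win the race between the check and the create/attach.
        if 'already exists' in str(e):
            logger.info(f"Caught known error due to partition creation race: {e}")
        else:
            raise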

View File

@@ -17,11 +17,26 @@ def construct_rsyslog_conf_template(settings=settings):
port = getattr(settings, 'LOG_AGGREGATOR_PORT', '') port = getattr(settings, 'LOG_AGGREGATOR_PORT', '')
protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', '') protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', '')
timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5) timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
max_disk_space_main_queue = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1) action_queue_size = getattr(settings, 'LOG_AGGREGATOR_ACTION_QUEUE_SIZE', 131072)
max_disk_space_action_queue = getattr(settings, 'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB', 1) max_disk_space_action_queue = getattr(settings, 'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB', 1)
spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/') spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '') error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')
queue_options = [
f'queue.spoolDirectory="{spool_directory}"',
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxDiskSpace="{max_disk_space_action_queue}g"', # overall disk space for all queue files
'queue.maxFileSize="100m"', # individual file size
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
'queue.syncqueuefiles="on"', # (f)sync when checkpoint occurs
'queue.checkpointInterval="1000"', # Update disk queue every 1000 messages
f'queue.size="{action_queue_size}"', # max number of messages in queue
f'queue.highwaterMark="{int(action_queue_size * 0.75)}"', # 75% of queue.size
f'queue.discardMark="{int(action_queue_size * 0.9)}"', # 90% of queue.size
'queue.discardSeverity="5"', # Only discard notice, info, debug if we must discard anything
]
if not os.access(spool_directory, os.W_OK): if not os.access(spool_directory, os.W_OK):
spool_directory = '/var/lib/awx' spool_directory = '/var/lib/awx'
@@ -33,7 +48,6 @@ def construct_rsyslog_conf_template(settings=settings):
'$WorkDirectory /var/lib/awx/rsyslog', '$WorkDirectory /var/lib/awx/rsyslog',
f'$MaxMessageSize {max_bytes}', f'$MaxMessageSize {max_bytes}',
'$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf', '$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space_main_queue}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
'module(load="imuxsock" SysSock.Use="off")', 'module(load="imuxsock" SysSock.Use="off")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")', 'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
'template(name="awx" type="string" string="%rawmsg-after-pri%")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")',
@@ -79,12 +93,7 @@ def construct_rsyslog_conf_template(settings=settings):
'action.resumeRetryCount="-1"', 'action.resumeRetryCount="-1"',
'template="awx"', 'template="awx"',
f'action.resumeInterval="{timeout}"', f'action.resumeInterval="{timeout}"',
f'queue.spoolDirectory="{spool_directory}"', ] + queue_options
'queue.filename="awx-external-logger-action-queue"',
f'queue.maxdiskspace="{max_disk_space_action_queue}g"',
'queue.type="LinkedList"',
'queue.saveOnShutdown="on"',
]
if error_log_file: if error_log_file:
params.append(f'errorfile="{error_log_file}"') params.append(f'errorfile="{error_log_file}"')
if parsed.path: if parsed.path:
@@ -112,9 +121,18 @@ def construct_rsyslog_conf_template(settings=settings):
params = ' '.join(params) params = ' '.join(params)
parts.extend(['module(load="omhttp")', f'action({params})']) parts.extend(['module(load="omhttp")', f'action({params})'])
elif protocol and host and port: elif protocol and host and port:
parts.append( params = [
f'action(type="omfwd" target="{host}" port="{port}" protocol="{protocol}" action.resumeRetryCount="-1" action.resumeInterval="{timeout}" template="awx")' # noqa 'type="omfwd"',
) f'target="{host}"',
f'port="{port}"',
f'protocol="{protocol}"',
'action.resumeRetryCount="-1"',
f'action.resumeInterval="{timeout}"',
'template="awx"',
] + queue_options
params = ' '.join(params)
parts.append(f'action({params})')
else: else:
parts.append('action(type="omfile" file="/dev/null")') # rsyslog needs *at least* one valid action to start parts.append('action(type="omfile" file="/dev/null")') # rsyslog needs *at least* one valid action to start
tmpl = '\n'.join(parts) tmpl = '\n'.join(parts)
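
Both delivery paths (omhttp and omfwd) now share a disk-assisted action queue whose watermarks are derived from LOG_AGGREGATOR_ACTION_QUEUE_SIZE. A small standalone sketch of how those values compose into an rsyslog action() stanza; the default values and the target host are placeholders mirroring the settings above:

def build_queue_options(action_queue_size=131072, max_disk_gb=1, spool_dir="/var/lib/awx"):
    """Derive the rsyslog action-queue parameters the way the template code above does."""
    return [
        f'queue.spoolDirectory="{spool_dir}"',
        'queue.filename="awx-external-logger-action-queue"',
        f'queue.maxDiskSpace="{max_disk_gb}g"',                     # cap for all queue files combined
        'queue.maxFileSize="100m"',                                 # cap per queue file
        'queue.type="LinkedList"',
        'queue.saveOnShutdown="on"',
        'queue.syncqueuefiles="on"',                                # fsync when a checkpoint occurs
        'queue.checkpointInterval="1000"',                          # checkpoint every 1000 messages
        f'queue.size="{action_queue_size}"',                        # max messages held in the queue
        f'queue.highwaterMark="{int(action_queue_size * 0.75)}"',   # 75% of queue.size
        f'queue.discardMark="{int(action_queue_size * 0.9)}"',      # 90% of queue.size
        'queue.discardSeverity="5"',                                # discard notice/info/debug first
    ]

params = ['type="omfwd"', 'target="logs.example.org"', 'port="514"', 'protocol="tcp"',
          'action.resumeRetryCount="-1"', 'action.resumeInterval="5"', 'template="awx"'] + build_queue_options()
print(f'action({" ".join(params)})')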

View File

@@ -199,6 +199,8 @@ class Licenser(object):
license['support_level'] = attr.get('value') license['support_level'] = attr.get('value')
elif attr.get('name') == 'usage': elif attr.get('name') == 'usage':
license['usage'] = attr.get('value') license['usage'] = attr.get('value')
elif attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
license['license_type'] = 'developer'
if not license: if not license:
logger.error("No valid subscriptions found in manifest") logger.error("No valid subscriptions found in manifest")
@@ -322,7 +324,9 @@ class Licenser(object):
def generate_license_options_from_entitlements(self, json): def generate_license_options_from_entitlements(self, json):
from dateutil.parser import parse from dateutil.parser import parse
ValidSub = collections.namedtuple('ValidSub', 'sku name support_level end_date trial quantity pool_id satellite subscription_id account_number usage') ValidSub = collections.namedtuple(
'ValidSub', 'sku name support_level end_date trial developer_license quantity pool_id satellite subscription_id account_number usage'
)
valid_subs = [] valid_subs = []
for sub in json: for sub in json:
satellite = sub.get('satellite') satellite = sub.get('satellite')
@@ -350,6 +354,7 @@ class Licenser(object):
sku = sub['productId'] sku = sub['productId']
trial = sku.startswith('S') # i.e.,, SER/SVC trial = sku.startswith('S') # i.e.,, SER/SVC
developer_license = False
support_level = '' support_level = ''
usage = '' usage = ''
pool_id = sub['id'] pool_id = sub['id']
@@ -364,9 +369,24 @@ class Licenser(object):
support_level = attr.get('value') support_level = attr.get('value')
elif attr.get('name') == 'usage': elif attr.get('name') == 'usage':
usage = attr.get('value') usage = attr.get('value')
elif attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
developer_license = True
valid_subs.append( valid_subs.append(
ValidSub(sku, sub['productName'], support_level, end_date, trial, quantity, pool_id, satellite, subscription_id, account_number, usage) ValidSub(
sku,
sub['productName'],
support_level,
end_date,
trial,
developer_license,
quantity,
pool_id,
satellite,
subscription_id,
account_number,
usage,
)
) )
if valid_subs: if valid_subs:
@@ -381,6 +401,8 @@ class Licenser(object):
if sub.trial: if sub.trial:
license._attrs['trial'] = True license._attrs['trial'] = True
license._attrs['license_type'] = 'trial' license._attrs['license_type'] = 'trial'
if sub.developer_license:
license._attrs['license_type'] = 'developer'
license._attrs['instance_count'] = min(MAX_INSTANCES, license._attrs['instance_count']) license._attrs['instance_count'] = min(MAX_INSTANCES, license._attrs['instance_count'])
human_instances = license._attrs['instance_count'] human_instances = license._attrs['instance_count']
if human_instances == MAX_INSTANCES: if human_instances == MAX_INSTANCES:
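
The change recognizes "RHEL Developer" subscriptions by their ph_product_name attribute and records them as a developer license, overriding the trial marking derived from the SKU. A condensed sketch of that classification; the function name and the 'enterprise' fallback are illustrative, not part of the Licenser API:

def classify_license_type(sku, attributes, fallback='enterprise'):
    license_type = fallback
    if sku.startswith('S'):  # SER/SVC SKUs are treated as trials
        license_type = 'trial'
    for attr in attributes:
        if attr.get('name') == 'ph_product_name' and attr.get('value') == 'RHEL Developer':
            license_type = 'developer'  # developer overrides trial, as in the change above
    return license_type

assert classify_license_type('MCT1234', [{'name': 'ph_product_name', 'value': 'RHEL Developer'}]) == 'developer'
assert classify_license_type('SER0123', [{'name': 'support_level', 'value': 'Standard'}]) == 'trial'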

View File

@@ -3,6 +3,8 @@ import logging
import asyncio import asyncio
from typing import Dict from typing import Dict
import ipaddress
import aiohttp import aiohttp
from aiohttp import client_exceptions from aiohttp import client_exceptions
import aioredis import aioredis
@@ -71,7 +73,16 @@ class WebsocketRelayConnection:
if not self.channel_layer: if not self.channel_layer:
self.channel_layer = get_channel_layer() self.channel_layer = get_channel_layer()
uri = f"{self.protocol}://{self.remote_host}:{self.remote_port}/websocket/relay/" # figure out if what we have is an ipaddress, IPv6 Addresses must have brackets added for uri
uri_hostname = self.remote_host
try:
# Throws ValueError if self.remote_host is a hostname like example.com, not an IPv4 or IPv6 ip address
if isinstance(ipaddress.ip_address(uri_hostname), ipaddress.IPv6Address):
uri_hostname = f"[{uri_hostname}]"
except ValueError:
pass
uri = f"{self.protocol}://{uri_hostname}:{self.remote_port}/websocket/relay/"
timeout = aiohttp.ClientTimeout(total=10) timeout = aiohttp.ClientTimeout(total=10)
secret_val = WebsocketSecretAuthHelper.construct_secret() secret_val = WebsocketSecretAuthHelper.construct_secret()
@@ -216,7 +227,8 @@ class WebSocketRelayManager(object):
continue continue
try: try:
if not notif.payload or notif.channel != "web_ws_heartbeat": if not notif.payload or notif.channel != "web_ws_heartbeat":
return logger.warning(f"Unexpected channel or missing payload. {notif.channel}, {notif.payload}")
continue
try: try:
payload = json.loads(notif.payload) payload = json.loads(notif.payload)
@@ -224,13 +236,15 @@ class WebSocketRelayManager(object):
logmsg = "Failed to decode message from pg_notify channel `web_ws_heartbeat`" logmsg = "Failed to decode message from pg_notify channel `web_ws_heartbeat`"
if logger.isEnabledFor(logging.DEBUG): if logger.isEnabledFor(logging.DEBUG):
logmsg = "{} {}".format(logmsg, payload) logmsg = "{} {}".format(logmsg, payload)
logger.warning(logmsg) logger.warning(logmsg)
return continue
# Skip if the message comes from the same host we are running on # Skip if the message comes from the same host we are running on
# In this case, we'll be sharing a redis, no need to relay. # In this case, we'll be sharing a redis, no need to relay.
if payload.get("hostname") == self.local_hostname: if payload.get("hostname") == self.local_hostname:
return hostname = payload.get("hostname")
logger.debug("Received a heartbeat request for {hostname}. Skipping as we use redis for local host.")
continue
action = payload.get("action") action = payload.get("action")
@@ -239,7 +253,7 @@ class WebSocketRelayManager(object):
ip = payload.get("ip") or hostname # try back to hostname if ip isn't supplied ip = payload.get("ip") or hostname # try back to hostname if ip isn't supplied
if ip is None: if ip is None:
logger.warning(f"Received invalid {action} ws_heartbeat, missing hostname and ip: {payload}") logger.warning(f"Received invalid {action} ws_heartbeat, missing hostname and ip: {payload}")
return continue
logger.debug(f"Web host {hostname} ({ip}) {action} heartbeat received.") logger.debug(f"Web host {hostname} ({ip}) {action} heartbeat received.")
if action == "online": if action == "online":

View File

@@ -336,6 +336,7 @@ INSTALLED_APPS = [
'awx.ui', 'awx.ui',
'awx.sso', 'awx.sso',
'solo', 'solo',
'ansible_base',
] ]
INTERNAL_IPS = ('127.0.0.1',) INTERNAL_IPS = ('127.0.0.1',)
@@ -470,13 +471,13 @@ CELERYBEAT_SCHEDULE = {
'receptor_reaper': {'task': 'awx.main.tasks.system.awx_receptor_workunit_reaper', 'schedule': timedelta(seconds=60)}, 'receptor_reaper': {'task': 'awx.main.tasks.system.awx_receptor_workunit_reaper', 'schedule': timedelta(seconds=60)},
'send_subsystem_metrics': {'task': 'awx.main.analytics.analytics_tasks.send_subsystem_metrics', 'schedule': timedelta(seconds=20)}, 'send_subsystem_metrics': {'task': 'awx.main.analytics.analytics_tasks.send_subsystem_metrics', 'schedule': timedelta(seconds=20)},
'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)}, 'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.system.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)}, 'cleanup_host_metrics': {'task': 'awx.main.tasks.host_metrics.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)}, 'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)},
} }
# Django Caching Configuration # Django Caching Configuration
DJANGO_REDIS_IGNORE_EXCEPTIONS = True DJANGO_REDIS_IGNORE_EXCEPTIONS = True
CACHES = {'default': {'BACKEND': 'awx.main.cache.AWXRedisCache', 'LOCATION': 'unix:/var/run/redis/redis.sock?db=1'}} CACHES = {'default': {'BACKEND': 'awx.main.cache.AWXRedisCache', 'LOCATION': 'unix:///var/run/redis/redis.sock?db=1'}}
# Social Auth configuration. # Social Auth configuration.
SOCIAL_AUTH_STRATEGY = 'social_django.strategy.DjangoStrategy' SOCIAL_AUTH_STRATEGY = 'social_django.strategy.DjangoStrategy'
@@ -796,7 +797,7 @@ LOG_AGGREGATOR_ENABLED = False
LOG_AGGREGATOR_TCP_TIMEOUT = 5 LOG_AGGREGATOR_TCP_TIMEOUT = 5
LOG_AGGREGATOR_VERIFY_CERT = True LOG_AGGREGATOR_VERIFY_CERT = True
LOG_AGGREGATOR_LEVEL = 'INFO' LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1 # Main queue LOG_AGGREGATOR_ACTION_QUEUE_SIZE = 131072
LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB = 1 # Action queue LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB = 1 # Action queue
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx' LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False LOG_AGGREGATOR_RSYSLOGD_DEBUG = False
@@ -1049,7 +1050,7 @@ UI_NEXT = True
# - 'unique_managed_hosts': Compliant = automated - deleted hosts (using /api/v2/host_metrics/) # - 'unique_managed_hosts': Compliant = automated - deleted hosts (using /api/v2/host_metrics/)
SUBSCRIPTION_USAGE_MODEL = '' SUBSCRIPTION_USAGE_MODEL = ''
# Host metrics cleanup - last time of the cleanup run (soft-deleting records) # Host metrics cleanup - last time of the task/command run
CLEANUP_HOST_METRICS_LAST_TS = None CLEANUP_HOST_METRICS_LAST_TS = None
# Host metrics cleanup - minimal interval between two cleanups in days # Host metrics cleanup - minimal interval between two cleanups in days
CLEANUP_HOST_METRICS_INTERVAL = 30 # days CLEANUP_HOST_METRICS_INTERVAL = 30 # days

View File

@@ -87,7 +87,7 @@ def _update_user_orgs(backend, desired_org_state, orgs_to_create, user=None):
is_member_expression = org_opts.get(user_type, None) is_member_expression = org_opts.get(user_type, None)
remove_members = bool(org_opts.get('remove_{}'.format(user_type), remove)) remove_members = bool(org_opts.get('remove_{}'.format(user_type), remove))
has_role = _update_m2m_from_expression(user, is_member_expression, remove_members) has_role = _update_m2m_from_expression(user, is_member_expression, remove_members)
desired_org_state[organization_name][role_name] = has_role desired_org_state[organization_name][role_name] = desired_org_state[organization_name].get(role_name, False) or has_role
def _update_user_teams(backend, desired_team_state, teams_to_create, user=None): def _update_user_teams(backend, desired_team_state, teams_to_create, user=None):
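
The one-line change in _update_user_orgs ORs the organization-map result into whatever role state was already recorded, so a role granted earlier in the pipeline (for example by the SAML attribute mapping) is no longer silently revoked. A tiny illustration with a hypothetical state dict:

desired_org_state = {'o1_alias': {'admin_role': True}}  # granted earlier by the SAML attr mapping
organization_name, role_name, has_role = 'o1_alias', 'admin_role', False

# Old behaviour: plain assignment clobbers the earlier grant.
old_state = dict(desired_org_state[organization_name])
old_state[role_name] = has_role

# New behaviour: OR with the recorded value keeps the grant.
new_state = dict(desired_org_state[organization_name])
new_state[role_name] = new_state.get(role_name, False) or has_role

assert old_state[role_name] is False
assert new_state[role_name] is True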

View File

@@ -637,3 +637,75 @@ class TestSAMLUserFlags:
} }
assert expected == _check_flag(user, 'superuser', attributes, user_flags_settings) assert expected == _check_flag(user, 'superuser', attributes, user_flags_settings)
@pytest.mark.django_db
def test__update_user_orgs_org_map_and_saml_attr():
"""
This combines the action of two other tests where an org membership is defined both by
the ORGANIZATION_MAP and the SOCIAL_AUTH_SAML_ORGANIZATION_ATTR at the same time
"""
# This data will make the user a member
class BackendClass:
s = {
'ORGANIZATION_MAP': {
'Default1': {
'remove': True,
'remove_admins': True,
'users': 'foobar',
'remove_users': True,
'organization_alias': 'o1_alias',
}
}
}
def setting(self, key):
return self.s[key]
backend = BackendClass()
setting = {
'saml_attr': 'memberOf',
'saml_admin_attr': 'admins',
'saml_auditor_attr': 'auditors',
'remove': True,
'remove_admins': True,
}
# This data from the server will make the user an admin of the organization
kwargs = {
'username': 'foobar',
'uid': 'idp:cmeyers@redhat.com',
'request': {u'SAMLResponse': [], u'RelayState': [u'idp']},
'is_new': False,
'response': {
'session_index': '_0728f0e0-b766-0135-75fa-02842b07c044',
'idp_name': u'idp',
'attributes': {
'admins': ['Default1'],
},
},
'social': None,
'strategy': None,
'new_association': False,
}
this_user = User.objects.create(username='foobar')
with override_settings(SOCIAL_AUTH_SAML_ORGANIZATION_ATTR=setting):
desired_org_state = {}
orgs_to_create = []
# this should add user as an admin of the org
_update_user_orgs_by_saml_attr(backend, desired_org_state, orgs_to_create, **kwargs)
assert desired_org_state['o1_alias']['admin_role'] is True
assert set(orgs_to_create) == set(['o1_alias'])
# this should add user as a member of the org without reverting the admin status
_update_user_orgs(backend, desired_org_state, orgs_to_create, this_user)
assert desired_org_state['o1_alias']['member_role'] is True
assert desired_org_state['o1_alias']['admin_role'] is True
assert set(orgs_to_create) == set(['o1_alias'])

awx/ui/package-lock.json (generated, 1771 changed lines): diff suppressed because it is too large.

View File

@@ -33,12 +33,12 @@
"styled-components": "5.3.6" "styled-components": "5.3.6"
}, },
"devDependencies": { "devDependencies": {
"@babel/core": "^7.16.10", "@babel/core": "^7.22.9",
"@babel/eslint-parser": "^7.16.5", "@babel/eslint-parser": "^7.22.9",
"@babel/eslint-plugin": "^7.16.5", "@babel/eslint-plugin": "^7.22.10",
"@babel/plugin-syntax-jsx": "7.16.7", "@babel/plugin-syntax-jsx": "^7.22.5",
"@babel/polyfill": "^7.8.7", "@babel/polyfill": "^7.12.1",
"@babel/preset-react": "7.16.7", "@babel/preset-react": "^7.22.5",
"@cypress/instrument-cra": "^1.4.0", "@cypress/instrument-cra": "^1.4.0",
"@lingui/cli": "^3.7.1", "@lingui/cli": "^3.7.1",
"@lingui/loader": "3.15.0", "@lingui/loader": "3.15.0",

View File

@@ -5,7 +5,11 @@
<title data-cy="migration-title">{{ title }}</title> <title data-cy="migration-title">{{ title }}</title>
<meta <meta
http-equiv="Content-Security-Policy" http-equiv="Content-Security-Policy"
content="default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'nonce-{{ csp_nonce }}' *.pendo.io; img-src 'self' *.pendo.io data:;" content="default-src 'self';
connect-src 'self' ws: wss:;
style-src 'self' 'unsafe-inline';
script-src 'self' 'nonce-{{ csp_nonce }}' *.pendo.io;
img-src 'self' *.pendo.io data:;"
/> />
<meta charset="utf-8"> <meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" />

View File

@@ -33,6 +33,7 @@ import Roles from './models/Roles';
import Root from './models/Root'; import Root from './models/Root';
import Schedules from './models/Schedules'; import Schedules from './models/Schedules';
import Settings from './models/Settings'; import Settings from './models/Settings';
import SubscriptionUsage from './models/SubscriptionUsage';
import SystemJobs from './models/SystemJobs'; import SystemJobs from './models/SystemJobs';
import SystemJobTemplates from './models/SystemJobTemplates'; import SystemJobTemplates from './models/SystemJobTemplates';
import Teams from './models/Teams'; import Teams from './models/Teams';
@@ -82,6 +83,7 @@ const RolesAPI = new Roles();
const RootAPI = new Root(); const RootAPI = new Root();
const SchedulesAPI = new Schedules(); const SchedulesAPI = new Schedules();
const SettingsAPI = new Settings(); const SettingsAPI = new Settings();
const SubscriptionUsageAPI = new SubscriptionUsage();
const SystemJobsAPI = new SystemJobs(); const SystemJobsAPI = new SystemJobs();
const SystemJobTemplatesAPI = new SystemJobTemplates(); const SystemJobTemplatesAPI = new SystemJobTemplates();
const TeamsAPI = new Teams(); const TeamsAPI = new Teams();
@@ -132,6 +134,7 @@ export {
RootAPI, RootAPI,
SchedulesAPI, SchedulesAPI,
SettingsAPI, SettingsAPI,
SubscriptionUsageAPI,
SystemJobsAPI, SystemJobsAPI,
SystemJobTemplatesAPI, SystemJobTemplatesAPI,
TeamsAPI, TeamsAPI,

View File

@@ -0,0 +1,16 @@
import Base from '../Base';
class SubscriptionUsage extends Base {
constructor(http) {
super(http);
this.baseUrl = 'api/v2/host_metric_summary_monthly/';
}
readSubscriptionUsageChart(dateRange) {
return this.http.get(
`${this.baseUrl}?date__gte=${dateRange}&order_by=date&page_size=100`
);
}
}
export default SubscriptionUsage;
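
The new model issues a single read against api/v2/host_metric_summary_monthly/, filtered by date and ordered for charting. A rough equivalent of that request in plain Python; the host, credentials, and date value are placeholders, and requests is used only for illustration:

import requests

AWX_HOST = "https://awx.example.org"      # placeholder
DATE_RANGE = "2023-01-01"                 # maps to the dateRange argument above

resp = requests.get(
    f"{AWX_HOST}/api/v2/host_metric_summary_monthly/",
    params={"date__gte": DATE_RANGE, "order_by": "date", "page_size": 100},
    auth=("admin", "password"),           # or a token in the Authorization header
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]:        # standard paginated AWX API response
    print(row)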

View File

@@ -11,6 +11,7 @@ import {
WorkflowJobsAPI, WorkflowJobsAPI,
WorkflowJobTemplatesAPI, WorkflowJobTemplatesAPI,
} from 'api'; } from 'api';
import useToast, { AlertVariant } from 'hooks/useToast';
import AlertModal from '../AlertModal'; import AlertModal from '../AlertModal';
import ErrorDetail from '../ErrorDetail'; import ErrorDetail from '../ErrorDetail';
import LaunchPrompt from '../LaunchPrompt'; import LaunchPrompt from '../LaunchPrompt';
@@ -45,8 +46,22 @@ function LaunchButton({ resource, children }) {
const [isLaunching, setIsLaunching] = useState(false); const [isLaunching, setIsLaunching] = useState(false);
const [resourceCredentials, setResourceCredentials] = useState([]); const [resourceCredentials, setResourceCredentials] = useState([]);
const [error, setError] = useState(null); const [error, setError] = useState(null);
const { addToast, Toast, toastProps } = useToast();
const showToast = () => {
addToast({
id: resource.id,
title: t`A job has already been launched`,
variant: AlertVariant.info,
hasTimeout: true,
});
};
const handleLaunch = async () => { const handleLaunch = async () => {
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true); setIsLaunching(true);
const readLaunch = const readLaunch =
resource.type === 'workflow_job_template' resource.type === 'workflow_job_template'
@@ -104,6 +119,11 @@ function LaunchButton({ resource, children }) {
}; };
const launchWithParams = async (params) => { const launchWithParams = async (params) => {
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true);
try { try {
let jobPromise; let jobPromise;
@@ -141,6 +161,10 @@ function LaunchButton({ resource, children }) {
let readRelaunch; let readRelaunch;
let relaunch; let relaunch;
if (isLaunching) {
showToast();
return;
}
setIsLaunching(true); setIsLaunching(true);
if (resource.type === 'inventory_update') { if (resource.type === 'inventory_update') {
// We'll need to handle the scenario where the src no longer exists // We'll need to handle the scenario where the src no longer exists
@@ -197,6 +221,7 @@ function LaunchButton({ resource, children }) {
handleRelaunch, handleRelaunch,
isLaunching, isLaunching,
})} })}
<Toast {...toastProps} />
{error && ( {error && (
<AlertModal <AlertModal
isOpen={error} isOpen={error}
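
Each launch path now checks isLaunching first and shows a toast instead of firing a duplicate API call while a previous launch is still in flight. Reduced to a synchronous Python sketch of the same re-entrancy guard (the real component relies on React state and async requests; names here are illustrative):

import time

class Launcher:
    """Ignore, and notify about, launch attempts while a previous one is still in flight."""

    def __init__(self, notify):
        self._launching = False
        self._notify = notify

    def launch(self):
        if self._launching:
            # Mirrors the new showToast() path: inform the user rather than double-launching.
            self._notify("A job has already been launched")
            return None
        self._launching = True
        try:
            time.sleep(0.1)  # stand-in for the POST to the launch endpoint
            return {"job": 42}
        finally:
            self._launching = False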

View File

@@ -223,6 +223,10 @@ function Lookup(props) {
const Item = shape({ const Item = shape({
id: number.isRequired, id: number.isRequired,
}); });
const InstanceItem = shape({
id: number.isRequired,
hostname: string.isRequired,
});
Lookup.propTypes = { Lookup.propTypes = {
id: string, id: string,
@@ -230,7 +234,13 @@ Lookup.propTypes = {
modalDescription: oneOfType([string, node]), modalDescription: oneOfType([string, node]),
onChange: func.isRequired, onChange: func.isRequired,
onUpdate: func, onUpdate: func,
value: oneOfType([Item, arrayOf(Item), object]), value: oneOfType([
Item,
arrayOf(Item),
object,
InstanceItem,
arrayOf(InstanceItem),
]),
multiple: bool, multiple: bool,
required: bool, required: bool,
onBlur: func, onBlur: func,

View File

@@ -0,0 +1,212 @@
import React, { useCallback, useEffect } from 'react';
import { arrayOf, string, func, bool, shape } from 'prop-types';
import { withRouter } from 'react-router-dom';
import { t } from '@lingui/macro';
import { FormGroup, Chip } from '@patternfly/react-core';
import { InstancesAPI } from 'api';
import { Instance } from 'types';
import { getSearchableKeys } from 'components/PaginatedTable';
import { getQSConfig, parseQueryString, mergeParams } from 'util/qs';
import useRequest from 'hooks/useRequest';
import Popover from '../Popover';
import OptionsList from '../OptionsList';
import Lookup from './Lookup';
import LookupErrorMessage from './shared/LookupErrorMessage';
import FieldWithPrompt from '../FieldWithPrompt';
const QS_CONFIG = getQSConfig('instances', {
page: 1,
page_size: 5,
order_by: 'hostname',
});
function PeersLookup({
id,
value,
onChange,
tooltip,
className,
required,
history,
fieldName,
multiple,
validate,
columns,
isPromptableField,
promptId,
promptName,
formLabel,
typePeers,
instance_details,
}) {
const {
result: { instances, count, relatedSearchableKeys, searchableKeys },
request: fetchInstances,
error,
isLoading,
} = useRequest(
useCallback(async () => {
const params = parseQueryString(QS_CONFIG, history.location.search);
const peersFilter = {};
if (typePeers) {
peersFilter.not__node_type = ['control', 'hybrid'];
if (instance_details) {
if (instance_details.id) {
peersFilter.not__id = instance_details.id;
peersFilter.not__hostname = instance_details.peers;
}
}
}
const [{ data }, actionsResponse] = await Promise.all([
InstancesAPI.read(
mergeParams(params, {
...peersFilter,
})
),
InstancesAPI.readOptions(),
]);
return {
instances: data.results,
count: data.count,
relatedSearchableKeys: (
actionsResponse?.data?.related_search_fields || []
).map((val) => val.slice(0, -8)),
searchableKeys: getSearchableKeys(actionsResponse.data.actions?.GET),
};
}, [history.location, typePeers, instance_details]),
{
instances: [],
count: 0,
relatedSearchableKeys: [],
searchableKeys: [],
}
);
useEffect(() => {
fetchInstances();
}, [fetchInstances]);
const renderLookup = () => (
<>
<Lookup
id={fieldName}
header={formLabel}
value={value}
onChange={onChange}
onUpdate={fetchInstances}
fieldName={fieldName}
validate={validate}
qsConfig={QS_CONFIG}
multiple={multiple}
required={required}
isLoading={isLoading}
label={formLabel}
renderItemChip={({ item, removeItem, canDelete }) => (
<Chip
key={item.id}
onClick={() => removeItem(item)}
isReadOnly={!canDelete}
>
{item.hostname}
</Chip>
)}
renderOptionsList={({ state, dispatch, canDelete }) => (
<OptionsList
value={state.selectedItems}
options={instances}
optionCount={count}
columns={columns}
header={formLabel}
displayKey="hostname"
searchColumns={[
{
name: t`Hostname`,
key: 'hostname__icontains',
isDefault: true,
},
]}
sortColumns={[
{
name: t`Hostname`,
key: 'hostname',
},
]}
searchableKeys={searchableKeys}
relatedSearchableKeys={relatedSearchableKeys}
multiple={multiple}
label={formLabel}
name={fieldName}
qsConfig={QS_CONFIG}
readOnly={!canDelete}
selectItem={(item) => dispatch({ type: 'SELECT_ITEM', item })}
deselectItem={(item) => dispatch({ type: 'DESELECT_ITEM', item })}
/>
)}
/>
<LookupErrorMessage error={error} />
</>
);
return isPromptableField ? (
<FieldWithPrompt
fieldId={id}
label={formLabel}
promptId={promptId}
promptName={promptName}
tooltip={tooltip}
>
{renderLookup()}
</FieldWithPrompt>
) : (
<FormGroup
className={className}
label={formLabel}
labelIcon={tooltip && <Popover content={tooltip} />}
fieldId={id}
>
{renderLookup()}
</FormGroup>
);
}
PeersLookup.propTypes = {
id: string,
value: arrayOf(Instance).isRequired,
tooltip: string,
onChange: func.isRequired,
className: string,
required: bool,
validate: func,
multiple: bool,
fieldName: string,
columns: arrayOf(Object),
formLabel: string,
instance_details: (Instance, shape({})),
typePeers: bool,
};
PeersLookup.defaultProps = {
id: 'instances',
tooltip: '',
className: '',
required: false,
validate: () => undefined,
fieldName: 'instances',
columns: [
{
key: 'hostname',
name: t`Hostname`,
},
{
key: 'node_type',
name: t`Node Type`,
},
],
formLabel: t`Instances`,
instance_details: {},
multiple: true,
typePeers: false,
};
export default withRouter(PeersLookup);
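
PeersLookup builds its instance query by excluding control and hybrid nodes and, when editing an existing instance, the instance itself and the hostnames already listed as its peers. The filter logic reduced to a plain Python sketch (function name and sample data are illustrative):

def build_peers_filter(type_peers, instance=None):
    params = {}
    if type_peers:
        params["not__node_type"] = ["control", "hybrid"]         # control/hybrid nodes are filtered out
        if instance and instance.get("id"):
            params["not__id"] = instance["id"]                   # exclude the instance being edited
            params["not__hostname"] = instance.get("peers", [])  # exclude its existing peers
    return params

assert build_peers_filter(True, {"id": 1, "peers": ["awx_2"]}) == {
    "not__node_type": ["control", "hybrid"],
    "not__id": 1,
    "not__hostname": ["awx_2"],
}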

View File

@@ -0,0 +1,137 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { Formik } from 'formik';
import { InstancesAPI } from 'api';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import PeersLookup from './PeersLookup';
jest.mock('../../api');
const mockedInstances = {
count: 1,
results: [
{
id: 2,
name: 'Foo',
image: 'quay.io/ansible/awx-ee',
pull: 'missing',
},
],
};
const instances = [
{
id: 1,
hostname: 'awx_1',
type: 'instance',
url: '/api/v2/instances/1/',
related: {
named_url: '/api/v2/instances/awx_1/',
jobs: '/api/v2/instances/1/jobs/',
instance_groups: '/api/v2/instances/1/instance_groups/',
peers: '/api/v2/instances/1/peers/',
},
summary_fields: {
user_capabilities: {
edit: false,
},
links: [],
},
uuid: '00000000-0000-0000-0000-000000000000',
created: '2023-04-26T22:06:46.766198Z',
modified: '2023-04-26T22:06:46.766217Z',
last_seen: '2023-04-26T23:12:02.857732Z',
health_check_started: null,
health_check_pending: false,
last_health_check: '2023-04-26T23:01:13.941693Z',
errors: 'Instance received normal shutdown signal',
capacity_adjustment: '1.00',
version: '0.1.dev33237+g1fdef52',
capacity: 0,
consumed_capacity: 0,
percent_capacity_remaining: 0,
jobs_running: 0,
jobs_total: 0,
cpu: '8.0',
memory: 8011055104,
cpu_capacity: 0,
mem_capacity: 0,
enabled: true,
managed_by_policy: true,
node_type: 'hybrid',
node_state: 'installed',
ip_address: null,
listener_port: 27199,
peers: [],
peers_from_control_nodes: false,
},
];
describe('PeersLookup', () => {
let wrapper;
beforeEach(() => {
InstancesAPI.read.mockResolvedValue({
data: mockedInstances,
});
});
afterEach(() => {
jest.clearAllMocks();
});
test('should render successfully without instance_details (for new added instance)', async () => {
InstancesAPI.readOptions.mockReturnValue({
data: {
actions: {
GET: {},
POST: {},
},
related_search_fields: [],
},
});
await act(async () => {
wrapper = mountWithContexts(
<Formik>
<PeersLookup value={instances} onChange={() => {}} />
</Formik>
);
});
wrapper.update();
expect(InstancesAPI.read).toHaveBeenCalledTimes(1);
expect(wrapper.find('PeersLookup')).toHaveLength(1);
expect(wrapper.find('FormGroup[label="Instances"]').length).toBe(1);
expect(wrapper.find('Checkbox[aria-label="Prompt on launch"]').length).toBe(
0
);
});
test('should render successfully with instance_details for edit instance', async () => {
InstancesAPI.readOptions.mockReturnValue({
data: {
actions: {
GET: {},
POST: {},
},
related_search_fields: [],
},
});
await act(async () => {
wrapper = mountWithContexts(
<Formik>
<PeersLookup
value={instances}
instance_details={instances[0]}
onChange={() => {}}
/>
</Formik>
);
});
wrapper.update();
expect(InstancesAPI.read).toHaveBeenCalledTimes(1);
expect(wrapper.find('PeersLookup')).toHaveLength(1);
expect(wrapper.find('FormGroup[label="Instances"]').length).toBe(1);
expect(wrapper.find('Checkbox[aria-label="Prompt on launch"]').length).toBe(
0
);
});
});

View File

@@ -8,3 +8,4 @@ export { default as ApplicationLookup } from './ApplicationLookup';
export { default as HostFilterLookup } from './HostFilterLookup'; export { default as HostFilterLookup } from './HostFilterLookup';
export { default as OrganizationLookup } from './OrganizationLookup'; export { default as OrganizationLookup } from './OrganizationLookup';
export { default as ExecutionEnvironmentLookup } from './ExecutionEnvironmentLookup'; export { default as ExecutionEnvironmentLookup } from './ExecutionEnvironmentLookup';
export { default as PeersLookup } from './PeersLookup';

View File

@@ -125,19 +125,24 @@ const Item = shape({
name: string.isRequired, name: string.isRequired,
url: string, url: string,
}); });
const InstanceItem = shape({
id: oneOfType([number, string]).isRequired,
hostname: string.isRequired,
url: string,
});
OptionsList.propTypes = { OptionsList.propTypes = {
deselectItem: func.isRequired, deselectItem: func.isRequired,
displayKey: string, displayKey: string,
isSelectedDraggable: bool, isSelectedDraggable: bool,
multiple: bool, multiple: bool,
optionCount: number.isRequired, optionCount: number.isRequired,
options: arrayOf(Item).isRequired, options: oneOfType([arrayOf(Item), arrayOf(InstanceItem)]).isRequired,
qsConfig: QSConfig.isRequired, qsConfig: QSConfig.isRequired,
renderItemChip: func, renderItemChip: func,
searchColumns: SearchColumns, searchColumns: SearchColumns,
selectItem: func.isRequired, selectItem: func.isRequired,
sortColumns: SortColumns, sortColumns: SortColumns,
value: arrayOf(Item).isRequired, value: oneOfType([arrayOf(Item), arrayOf(InstanceItem)]).isRequired,
}; };
OptionsList.defaultProps = { OptionsList.defaultProps = {
isSelectedDraggable: false, isSelectedDraggable: false,

View File

@@ -75,6 +75,7 @@ function SessionProvider({ children }) {
const [sessionCountdown, setSessionCountdown] = useState(0); const [sessionCountdown, setSessionCountdown] = useState(0);
const [authRedirectTo, setAuthRedirectTo] = useState('/'); const [authRedirectTo, setAuthRedirectTo] = useState('/');
const [isUserBeingLoggedOut, setIsUserBeingLoggedOut] = useState(false); const [isUserBeingLoggedOut, setIsUserBeingLoggedOut] = useState(false);
const [isRedirectLinkReceived, setIsRedirectLinkReceived] = useState(false);
const { const {
request: fetchLoginRedirectOverride, request: fetchLoginRedirectOverride,
@@ -99,6 +100,7 @@ function SessionProvider({ children }) {
const logout = useCallback(async () => { const logout = useCallback(async () => {
setIsUserBeingLoggedOut(true); setIsUserBeingLoggedOut(true);
setIsRedirectLinkReceived(false);
if (!isSessionExpired.current) { if (!isSessionExpired.current) {
setAuthRedirectTo('/logout'); setAuthRedirectTo('/logout');
window.localStorage.setItem(SESSION_USER_ID, null); window.localStorage.setItem(SESSION_USER_ID, null);
@@ -112,6 +114,18 @@ function SessionProvider({ children }) {
return <Redirect to="/login" />; return <Redirect to="/login" />;
}, [setSessionTimeout, setSessionCountdown]); }, [setSessionTimeout, setSessionCountdown]);
useEffect(() => {
const unlisten = history.listen((location, action) => {
if (action === 'POP') {
setIsRedirectLinkReceived(true);
}
});
return () => {
unlisten(); // ensure that the listener is removed when the component unmounts
};
}, [history]);
useEffect(() => { useEffect(() => {
if (!isAuthenticated(document.cookie)) { if (!isAuthenticated(document.cookie)) {
return () => {}; return () => {};
@@ -176,6 +190,8 @@ function SessionProvider({ children }) {
logout, logout,
sessionCountdown, sessionCountdown,
setAuthRedirectTo, setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
}), }),
[ [
authRedirectTo, authRedirectTo,
@@ -186,6 +202,8 @@ function SessionProvider({ children }) {
logout, logout,
sessionCountdown, sessionCountdown,
setAuthRedirectTo, setAuthRedirectTo,
isRedirectLinkReceived,
setIsRedirectLinkReceived,
] ]
); );

View File

@@ -17,6 +17,7 @@ import Organizations from 'screens/Organization';
import Projects from 'screens/Project'; import Projects from 'screens/Project';
import Schedules from 'screens/Schedule'; import Schedules from 'screens/Schedule';
import Settings from 'screens/Setting'; import Settings from 'screens/Setting';
import SubscriptionUsage from 'screens/SubscriptionUsage/SubscriptionUsage';
import Teams from 'screens/Team'; import Teams from 'screens/Team';
import Templates from 'screens/Template'; import Templates from 'screens/Template';
import TopologyView from 'screens/TopologyView'; import TopologyView from 'screens/TopologyView';
@@ -61,6 +62,11 @@ function getRouteConfig(userProfile = {}) {
path: '/host_metrics', path: '/host_metrics',
screen: HostMetrics, screen: HostMetrics,
}, },
{
title: <Trans>Subscription Usage</Trans>,
path: '/subscription_usage',
screen: SubscriptionUsage,
},
], ],
}, },
{ {
@@ -189,6 +195,7 @@ function getRouteConfig(userProfile = {}) {
'unique_managed_hosts' 'unique_managed_hosts'
) { ) {
deleteRoute('host_metrics'); deleteRoute('host_metrics');
deleteRoute('subscription_usage');
} }
if (userProfile?.isSuperUser || userProfile?.isSystemAuditor) if (userProfile?.isSuperUser || userProfile?.isSystemAuditor)
return routeConfig; return routeConfig;
@@ -197,6 +204,7 @@ function getRouteConfig(userProfile = {}) {
deleteRoute('management_jobs'); deleteRoute('management_jobs');
deleteRoute('topology_view'); deleteRoute('topology_view');
deleteRoute('instances'); deleteRoute('instances');
deleteRoute('subscription_usage');
if (userProfile?.isOrgAdmin) return routeConfig; if (userProfile?.isOrgAdmin) return routeConfig;
if (!userProfile?.isNotificationAdmin) deleteRoute('notification_templates'); if (!userProfile?.isNotificationAdmin) deleteRoute('notification_templates');

View File

@@ -31,6 +31,7 @@ describe('getRouteConfig', () => {
'/activity_stream', '/activity_stream',
'/workflow_approvals', '/workflow_approvals',
'/host_metrics', '/host_metrics',
'/subscription_usage',
'/templates', '/templates',
'/credentials', '/credentials',
'/projects', '/projects',
@@ -61,6 +62,7 @@ describe('getRouteConfig', () => {
'/activity_stream', '/activity_stream',
'/workflow_approvals', '/workflow_approvals',
'/host_metrics', '/host_metrics',
'/subscription_usage',
'/templates', '/templates',
'/credentials', '/credentials',
'/projects', '/projects',

Some files were not shown because too many files have changed in this diff.