Compare commits


331 Commits

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
827adbce76 Merge pull request #6463 from ryanpetrello/release-10.0.0
bump version 10.0.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 16:42:49 +00:00
softwarefactory-project-zuul[bot]
849a64f20a Merge pull request #6481 from ryanpetrello/cli-docs-up
promote AWX CLI installation instructions to the global INSTALL.md

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 16:31:04 +00:00
softwarefactory-project-zuul[bot]
3bbd03732b Merge pull request #6461 from jakemcdermott/6433-fix-org-team-rbac-save
Don't show user-only roles for teams

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 16:05:20 +00:00
Ryan Petrello
32627ce51a promote AWX CLI installation instructions to the global INSTALL.md
a few users have had trouble finding these instructions, so let's move
them into the top level installation docs
2020-03-30 11:46:10 -04:00
softwarefactory-project-zuul[bot]
4a8f1d41fa Merge pull request #6422 from john-westcott-iv/tower_job_wait_update
Initial cut at tower_job_wait conversion

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 13:27:42 +00:00
softwarefactory-project-zuul[bot]
508c9b3102 Merge pull request #6465 from ryanpetrello/ws-bad-group-syntax
prevent ws group subscription if not specified in the valid format

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 22:02:47 +00:00
softwarefactory-project-zuul[bot]
f8be1f4110 Merge pull request #6469 from ryanpetrello/cli-config-wooops
fix a bug that broke `awx config`

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 21:59:16 +00:00
softwarefactory-project-zuul[bot]
d727e69a00 Merge pull request #6459 from chrismeyersfsu/fix-register_queue_race2
fix register_queue race condition

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 21:36:41 +00:00
Ryan Petrello
04dd1352c9 prevent ws group subscription if not specified in the valid format 2020-03-27 17:13:21 -04:00
Ryan Petrello
ea54815e6b fix a bug that broke awx config
see: https://github.com/ansible/tower/issues/4206
2020-03-27 17:07:48 -04:00
softwarefactory-project-zuul[bot]
78db965797 Merge pull request #6467 from dsesami/survey-list-ids
Add IDs to survey list items

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 20:17:45 +00:00
chris meyers
3326979806 fix register_queue race condition
* Avoid race condition with `apply_cluster_membership_policies`
2020-03-27 16:15:10 -04:00
beeankha
230949c43c Fix timeout error 2020-03-27 15:44:23 -04:00
Daniel Sami
a862a00d24 added id to survey list items 2020-03-27 15:27:08 -04:00
beeankha
2e8f9185ab Fix default value errors for cases of None for min/max_interval 2020-03-27 15:05:23 -04:00
beeankha
6d6322ae4d Update integration tests, update tower_wait module 2020-03-27 15:05:23 -04:00
John Westcott IV
914ea54925 Make module prefer interval (if set) over min/max
Fix linting issues for True vs true

Fix up unit test related errors
2020-03-27 15:05:23 -04:00
John Westcott IV
b9b62e3771 Removing assert that job is pending on job launch 2020-03-27 15:05:23 -04:00
John Westcott IV
e03911d378 Deprecate min and max interval in favor of interval 2020-03-27 15:05:23 -04:00
John Westcott IV
61287f6b36 Removing old unneeded output and fixing comments 2020-03-27 15:05:23 -04:00
John Westcott IV
f6bfdef34d Removed old secho comment from Tower-CLI
Fixed job name for tests
2020-03-27 15:05:23 -04:00
John Westcott IV
7494ba7b9c Initial cut at tower_job_wait conversion 2020-03-27 15:05:23 -04:00
softwarefactory-project-zuul[bot]
5f62426684 Merge pull request #6458 from jakemcdermott/6435-fix-notification-toggle-disable
Limit disable-on-load to single notifications

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 17:59:46 +00:00
Ryan Petrello
6914213aa0 bump version 10.0.0 2020-03-27 12:51:18 -04:00
softwarefactory-project-zuul[bot]
83721ff9a8 Merge pull request #6438 from jakemcdermott/6433-fix-org-team-rbac-save-api
Identify user-only object roles

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 16:35:56 +00:00
softwarefactory-project-zuul[bot]
4998c7bf21 Merge pull request #6315 from john-westcott-iv/collections_tools_associations
Collections tools associations

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 16:16:08 +00:00
softwarefactory-project-zuul[bot]
155a1d9a32 Merge pull request #6032 from ryanpetrello/bigint
migrate event table primary keys from integer to bigint

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2020-03-27 16:01:58 +00:00
Jake McDermott
6f582b5688 Don't show user-only roles for teams 2020-03-27 11:18:15 -04:00
Jake McDermott
579648a017 Identify user-only organization roles
Organization participation roles (admin, member) can't be assigned to a
team. Add a field to the object roles so the ui can know not to display
them for team role selection.
2020-03-27 11:12:39 -04:00
softwarefactory-project-zuul[bot]
c4ed9a14ef Merge pull request #6451 from jbradberry/related-webhook-credential-link
Optionally add the webhook_credential link to related

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 14:38:25 +00:00
softwarefactory-project-zuul[bot]
21872e7101 Merge pull request #6444 from mabashian/ui-next-accessibility-low-hanging-fruitz
Adds aria-label to some buttons without text

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 14:38:05 +00:00
Jake McDermott
f2e9a8d1b2 Limit disable-on-load to single notifications 2020-03-27 10:32:11 -04:00
Ryan Petrello
301d6ff616 make the job event bigint migration chunk size configurable 2020-03-27 09:28:10 -04:00
softwarefactory-project-zuul[bot]
d24271849d Merge pull request #6454 from ansible/jakemcdermott-update-issue-template
Add advice for creating bug reports

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 00:57:45 +00:00
beeankha
a50b03da17 Remove unnecessary fields in associations file 2020-03-26 20:04:11 -04:00
Jake McDermott
27b5b534a1 Add advice for creating bug reports 2020-03-26 19:49:38 -04:00
softwarefactory-project-zuul[bot]
6bc97158fe Merge pull request #6443 from ryanpetrello/summary-fields-perf-note
clarify some API documentation on summary_fields

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 21:17:48 +00:00
softwarefactory-project-zuul[bot]
9ce2a9240a Merge pull request #6447 from ryanpetrello/runner-1.4.6
update to the latest version of ansible-runner

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 21:03:16 +00:00
Jeff Bradberry
6b3befcb94 Optionally add the webhook_credential link to related
on JTs and WFJTs.
2020-03-26 16:07:14 -04:00
Ryan Petrello
c8044b4755 migrate event table primary keys from integer to bigint
see: https://github.com/ansible/awx/issues/6010
2020-03-26 15:54:38 -04:00
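A minimal sketch of the kind of schema change this involves, assuming a plain Django migration over a hypothetical `jobevent` model; the real AWX migration also has to copy existing rows, which is why the chunk size was made configurable elsewhere in this range:

    from django.db import migrations, models

    class Migration(migrations.Migration):

        dependencies = [
            ('main', '0001_initial'),  # placeholder dependency
        ]

        operations = [
            # widen the primary key from 32-bit to 64-bit integers
            migrations.AlterField(
                model_name='jobevent',
                name='id',
                field=models.BigAutoField(primary_key=True),
            ),
        ]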
softwarefactory-project-zuul[bot]
3045511401 Merge pull request #6441 from ryanpetrello/eye-python
fix busted shell_plus in the development environment

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 19:16:25 +00:00
softwarefactory-project-zuul[bot]
24f334085e Merge pull request #6417 from jakemcdermott/no-blank-adhoc-command-module-names
Don't allow blank adhoc command module names

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 19:14:06 +00:00
Ryan Petrello
90d35f07f3 clarify some documentation on summary_fields 2020-03-26 14:54:28 -04:00
softwarefactory-project-zuul[bot]
e334f33d13 Merge pull request #6388 from chrismeyersfsu/feature-websocket_secret2
set broadcast websockets secret

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 18:50:52 +00:00
Ryan Petrello
464db28be5 update to the latest version of ansible-runner 2020-03-26 14:49:45 -04:00
Ryan Petrello
61a0d1f77b fix busted shell_plus in the development environment
for some reason (unsure why), django-extensions has begun noticing
ipython importability and treating `shell_plus` as "start an IPython
notebook" by default

it could be that this is a bug in django-extensions that will be fixed
soon, but for now, this fixes the issue
2020-03-26 13:37:13 -04:00
mabashian
77e99ad355 Adds aria-label to some buttons without text 2020-03-26 12:58:45 -04:00
beeankha
9f4afe6972 Fix misc. linter errors 2020-03-26 12:01:48 -04:00
John Westcott IV
b99a04dd8d Adding associations to generator 2020-03-26 12:01:48 -04:00
John Westcott IV
357e22eb51 Compensating for default of '' for a JSON typed field 2020-03-26 12:01:48 -04:00
softwarefactory-project-zuul[bot]
9dbf75f2a9 Merge pull request #6434 from marshmalien/6432-launch-btn-bug
Disable launch button when there are zero nodes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 15:27:45 +00:00
chris meyers
eab74cac07 autogenerate websocket secret 2020-03-26 10:32:37 -04:00
Marliana Lara
979f549d90 Disable launch button when there are zero nodes 2020-03-26 10:16:33 -04:00
softwarefactory-project-zuul[bot]
ca82f48c18 Merge pull request #6429 from marshmalien/5992-wf-doc-btn
Hookup WF documentation button to visualizer toolbar

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 14:00:50 +00:00
Marliana Lara
5a45a62cb3 Hookup WF documentation button to visualizer toolbar 2020-03-25 17:44:34 -04:00
softwarefactory-project-zuul[bot]
090349a49b Merge pull request #6416 from jakemcdermott/6413-remove-unnecessary-project-template-add-option
Remove unnecessary project template add option

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 19:41:23 +00:00
softwarefactory-project-zuul[bot]
c38d13c5ab Merge pull request #6399 from jakemcdermott/6398-fix-confirm-password-reset
Don't delete confirmed password from formik object

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 19:29:05 +00:00
softwarefactory-project-zuul[bot]
f490a940cf Merge pull request #6410 from ryanpetrello/new-social-auth
update social-auth-core to address a GitHub API deprecation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 19:28:59 +00:00
softwarefactory-project-zuul[bot]
42c24419d4 Merge pull request #6409 from AlanCoding/group_children
Rename group-to-group field to align with API

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 17:34:25 +00:00
Jake McDermott
e17c93ecd7 Don't allow blank adhoc command module names 2020-03-25 13:11:30 -04:00
softwarefactory-project-zuul[bot]
67d48a87f8 Merge pull request #6408 from ryanpetrello/rabbitmq-cleanup
remove a bunch of RabbitMQ references

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 17:02:54 +00:00
Ryan Petrello
b755fa6777 update social-auth-core to address a GitHub API deprecation 2020-03-25 12:17:36 -04:00
softwarefactory-project-zuul[bot]
ee4dcd2055 Merge pull request #6403 from jbradberry/awxkit-from-json-connection
Add a connection kwarg to Page.from_json

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 15:33:37 +00:00
softwarefactory-project-zuul[bot]
0f7a4b384b Merge pull request #6386 from jbradberry/awxkit-api-endpoints
Awxkit api endpoints

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 15:33:32 +00:00
softwarefactory-project-zuul[bot]
02415db881 Merge pull request #6407 from marshmalien/5991-wf-launch-btn
Hookup launch button to workflow visualizer

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 15:33:27 +00:00
AlanCoding
703345e9d8 Add alias for group children of groups 2020-03-25 11:33:13 -04:00
AlanCoding
d102b06474 Rename group-to-group field to align with API 2020-03-25 11:33:09 -04:00
Jake McDermott
55c18fa76c Remove unnecessary project template add option 2020-03-25 11:26:30 -04:00
softwarefactory-project-zuul[bot]
d37039a18a Merge pull request #6334 from rooftopcellist/ncat
Add netcat to the dev container

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 12:51:16 +00:00
Christian Adams
6335004c94 Add common debugging tools to the dev container
- nmap-ncat
- sdb
- tcpdump
- strace
- vim
2020-03-25 08:03:32 -04:00
softwarefactory-project-zuul[bot]
177867de5a Merge pull request #6369 from AlanCoding/create_associate
Fix bug with association on creation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 00:28:51 +00:00
softwarefactory-project-zuul[bot]
08bd445caf Merge pull request #6404 from ryanpetrello/pyyaml-upgrade
pin a minimum pyyaml version to address (CVE-2017-18342)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 23:48:01 +00:00
softwarefactory-project-zuul[bot]
b5776c8eb3 Merge pull request #6405 from ryanpetrello/upgrade-django
update Django to address CVE-2020-9402

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 22:48:09 +00:00
Ryan Petrello
8f1db173c1 remove a bunch of RabbitMQ references 2020-03-24 18:46:58 -04:00
softwarefactory-project-zuul[bot]
62e93d5c57 Merge pull request #6271 from AlexSCorey/6260-UsersOrgList
Adds User Organization List

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 22:38:56 +00:00
Marliana Lara
abfeb735f0 Add launch button to workflow visualizer 2020-03-24 17:42:22 -04:00
Ryan Petrello
68b0b40e91 update Django to address CVE-2020-9402
we don't use Oracle GIS, so this isn't really applicable, but it'll make
security scanners happy <shrug>

see: https://docs.djangoproject.com/en/3.0/releases/2.2.11/
2020-03-24 16:41:53 -04:00
Alex Corey
910d926ac3 Fixes file structure, adds tests 2020-03-24 16:27:56 -04:00
Alex Corey
c84ab9f1dc Adds User Organization List 2020-03-24 16:25:11 -04:00
Ryan Petrello
65cafa37c7 pin a minimum pyyaml version to address (CVE-2017-18342)
see: https://github.com/ansible/awx/issues/6393
2020-03-24 15:59:31 -04:00
AlanCoding
551fd088f5 Remove test workarounds 2020-03-24 15:42:35 -04:00
AlanCoding
a72e885274 Fix bug with association on creation 2020-03-24 15:34:52 -04:00
softwarefactory-project-zuul[bot]
bd7c048113 Merge pull request #6291 from AlanCoding/node_identifier
Add Workflow Node Identifier

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 19:32:52 +00:00
Jeff Bradberry
91135f638f Add a connection kwarg to Page.from_json
if you don't reuse the connection when doing this, you lose your
authentication.
2020-03-24 15:27:51 -04:00
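A hedged usage sketch of the new keyword argument (the names `page` and `data` are illustrative, not from awxkit's docs): passing the existing connection lets the rebuilt page reuse the authenticated session rather than starting a fresh, unauthenticated one.

    # page: an existing, authenticated awxkit Page object
    # data: the serialized JSON previously produced from a page
    rebuilt = page.from_json(data, connection=page.connection)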
softwarefactory-project-zuul[bot]
cbc02dd607 Merge pull request #6394 from ryanpetrello/runner-145
update to the latest version of ansible-runner

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 19:09:21 +00:00
softwarefactory-project-zuul[bot]
de09deff66 Merge pull request #6348 from AlexSCorey/5895-SurveyList
Adds word wrap functionality

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 19:07:05 +00:00
softwarefactory-project-zuul[bot]
5272d088ed Merge pull request #6390 from marshmalien/fix-select-behavior
Fix bugs related to Job Template labels and tags

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 17:53:01 +00:00
softwarefactory-project-zuul[bot]
22a593f30f Merge pull request #6389 from jlmitch5/fixEmailOptionNotif
update email option notification to select

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 16:54:12 +00:00
Jake McDermott
b56812018a Don't delete confirmed password from formik object
If the save fails, the form will attempt to reload the
deleted value.
2020-03-24 12:21:07 -04:00
Alex Corey
ab755134b3 Adds new DataListCell components to all necessary lists 2020-03-24 10:30:41 -04:00
Alex Corey
ebb0f31b0b Fixes word wrap issue 2020-03-24 10:30:41 -04:00
Ryan Petrello
51ef57188c update to the latest version of ansible-runner 2020-03-24 10:01:17 -04:00
AlanCoding
653850fa6d Remove duplicated index 2020-03-23 22:54:04 -04:00
AlanCoding
8ba4388014 Rewrite tests to use the new modules 2020-03-23 22:47:30 -04:00
AlanCoding
f3e8623a21 Move workflow test target 2020-03-23 22:34:11 -04:00
AlanCoding
077461a3ef Docs touchups 2020-03-23 22:00:02 -04:00
AlanCoding
795c989a49 fix bug processing survey spec 2020-03-23 22:00:02 -04:00
AlanCoding
5e595caf5e Add workflow node identifier
Generate new modules WFJT and WFJT node
Touch up generated syntax, test new modules

Add utility method in awxkit

Fix some issues with non-name identifier in
  AWX collection module_utils

Update workflow docs for workflow node identifier

Test and fix WFJT modules survey_spec
Plug in survey spec for the new module
Handle survey spec idempotency and test

add associations for node connections
Handle node credential prompts as well

Add indexes for new identifier field

Test with unicode dragon in name
2020-03-23 22:00:00 -04:00
softwarefactory-project-zuul[bot]
d941f11ccd Merge pull request #5582 from jakemcdermott/fix-5265
Don't refresh settings on websocket event

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 23:10:21 +00:00
softwarefactory-project-zuul[bot]
c4e50cbf7d Merge pull request #6381 from jakemcdermott/6380-fix-host-event-errors
Fix host event type and reference errors

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 22:34:36 +00:00
Jake McDermott
6f3527ed15 Don't refresh settings on websocket event 2020-03-23 18:29:01 -04:00
Jake McDermott
fe2ebeb872 Fix host event type and reference errors 2020-03-23 17:46:42 -04:00
Marliana Lara
ad7498c3fc Fix bugs related to Job Template labels and tags
* Use default PF select toggle behavior
* Fix label submit when no inventory provided
2020-03-23 17:06:38 -04:00
John Mitchell
cb7257f9e6 update email option notification to select
- delete radio_group option from form generator
2020-03-23 17:04:07 -04:00
Jeff Bradberry
e3ea4e2398 Register the resource copy endpoints as awxkit page types 2020-03-23 15:19:48 -04:00
Jeff Bradberry
e4e2d48f53 Register some missing related endpoints in awxkit
- the newer varieties of notification templates
- organization workflow job templates
- credential owner users and owner teams

this allows the endpoints to get wrapped in appropriate Page types,
not just the Base page type.
2020-03-23 15:18:47 -04:00
softwarefactory-project-zuul[bot]
4b497b8cdc Merge pull request #6364 from wenottingham/dont-make-a-tree-that-never-ends-and-just-goes-on-and-on
Preserve symlinks when copying a tree.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 18:57:05 +00:00
softwarefactory-project-zuul[bot]
31fabad3e5 Merge pull request #6370 from AlanCoding/convert_tower_role
Initial conversion of tower_role

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 18:34:32 +00:00
Jeff Bradberry
1e48d773ae Add job_templates to the set of related endpoints for organizations 2020-03-23 14:33:40 -04:00
softwarefactory-project-zuul[bot]
4529429e99 Merge pull request #6368 from marshmalien/fix-jt-bugs
Fix job template form bugs r/t saving without an inventory

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 14:08:32 +00:00
softwarefactory-project-zuul[bot]
ec4a471e7a Merge pull request #6377 from chrismeyersfsu/fix-register_queue_race
serialize register_queue

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 12:32:34 +00:00
softwarefactory-project-zuul[bot]
77915544d2 Merge pull request #6378 from chrismeyersfsu/fix-launch_awx_cluster
fixup dev cluster bringup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 12:32:29 +00:00
chris meyers
5ba90b629e fixup dev cluster bringup
* change from bootstrap script to launch_awx.sh script
2020-03-23 07:33:35 -04:00
chris meyers
e9021bd173 serialize register_queue
* also remove unneeded query
2020-03-23 07:21:17 -04:00
AlanCoding
49356236ac Add coverage from issue resolved with tower_role conversion 2020-03-22 13:43:39 -04:00
softwarefactory-project-zuul[bot]
c9015fc0c8 Merge pull request #6361 from john-westcott-iv/tower_label_update
Tower label update

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 11:15:06 +00:00
AlanCoding
4ea1101477 update test assertion 2020-03-20 23:49:15 -04:00
AlanCoding
27948aa4e1 Convert tower_role to no longer use tower-cli 2020-03-20 23:28:48 -04:00
softwarefactory-project-zuul[bot]
5263d5aced Merge pull request #6358 from AlanCoding/fix_settings
Fix regression in tower_settings module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 01:42:19 +00:00
softwarefactory-project-zuul[bot]
8832f667e4 Merge pull request #6336 from AlanCoding/local_collection_errors
Fix test errors running locally with Ansible devel

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 00:51:04 +00:00
softwarefactory-project-zuul[bot]
f4e56b219d Merge pull request #6326 from AlanCoding/docs_patches
Copy edit of backward incompatible changes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 00:42:51 +00:00
AlanCoding
abdcdbca76 Add label tests and flake8 fixes 2020-03-20 20:09:08 -04:00
AlanCoding
362016c91b Fix test errors running locally with Ansible devel 2020-03-20 19:52:13 -04:00
AlanCoding
f1634f092d Copy edit of backward incompatible changes 2020-03-20 19:51:24 -04:00
John Westcott IV
8cd4e9b488 Adding state back in 2020-03-20 19:14:00 -04:00
softwarefactory-project-zuul[bot]
1fce77054a Merge pull request #6329 from marshmalien/6143-inv-group-associate-hosts
Add associate modal to nested inventory host list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 21:02:57 +00:00
Marliana Lara
53c8c80f7f Fix JT form edit save bug when inventory has no value 2020-03-20 16:37:33 -04:00
softwarefactory-project-zuul[bot]
3bf7d41bf3 Merge pull request #6286 from jlmitch5/hostFacts
add facts views to host and inv host detail views

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 20:08:52 +00:00
softwarefactory-project-zuul[bot]
34259e24c0 Merge pull request #6350 from jlmitch5/formErrorCredForm
correct form submission errors for credential form

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 19:52:33 +00:00
John Westcott IV
1daeec356f Initial conversion of tower_label 2020-03-20 15:01:41 -04:00
softwarefactory-project-zuul[bot]
5573e1c7ce Merge pull request #6356 from keithjgrant/5899-survey-add-form
Survey add/edit forms

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 18:56:47 +00:00
softwarefactory-project-zuul[bot]
1cba98e4a7 Merge pull request #6363 from dsesami/translation-fix
Fix typo in japanese string

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 18:20:50 +00:00
Keith Grant
56d31fec77 handle any errors thrown in survey handleSubmit 2020-03-20 10:55:07 -07:00
Keith Grant
564012b2c8 fix errors when adding first question to new survey 2020-03-20 10:55:07 -07:00
Keith Grant
cfe0607b6a add survey form tests 2020-03-20 10:55:07 -07:00
Keith Grant
7f24d0c0c2 add SurveyQuestionForm tests 2020-03-20 10:55:07 -07:00
Keith Grant
3f4e7465a7 move template survey files to Survey subdirectory 2020-03-20 10:55:07 -07:00
Keith Grant
9c32cb30d4 add survey question editing, breadcrumbs 2020-03-20 10:55:07 -07:00
Keith Grant
782d465c78 wire in SurveyQuestionAdd form to post to API 2020-03-20 10:55:07 -07:00
Keith Grant
1412bf6232 add survey form 2020-03-20 10:55:07 -07:00
Alex Corey
e92acce4eb Adds toolbar 2020-03-20 10:55:07 -07:00
Bill Nottingham
ac68e8c4fe Preserve symlinks when copying a tree.
This avoids creating a recursive symlink tree.
2020-03-20 13:41:16 -04:00
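Assuming the copy is done with Python's shutil, a minimal sketch of the fix: symlinks=True copies each link as a link instead of following it, so a link that points back into the tree cannot expand into an endless copy.

    import shutil

    # paths are illustrative
    shutil.copytree('/tmp/project-src', '/tmp/project-copy', symlinks=True)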
Daniel Sami
97a4bb39b6 fix typo in japanese string 2020-03-20 13:24:28 -04:00
Marliana Lara
9e00337bc1 Rename useSelected hook and update error modal condition 2020-03-20 10:54:59 -04:00
Marliana Lara
72672d6bbe Move useSelect to shared util directory 2020-03-20 10:54:59 -04:00
Marliana Lara
51f52f6332 Translate aria labels 2020-03-20 10:54:58 -04:00
Marliana Lara
11b2b17d08 Add associate modal to nested inventory host list 2020-03-20 10:54:55 -04:00
softwarefactory-project-zuul[bot]
e17ff3e03a Merge pull request #6335 from AlexSCorey/6316-TemplateUsesFunction
Moves template.jsx over to a functional component.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 14:54:49 +00:00
softwarefactory-project-zuul[bot]
b998d93bfb Merge pull request #6360 from chrismeyersfsu/log_notification_failures
log when notifications fail to send

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 14:54:30 +00:00
softwarefactory-project-zuul[bot]
b8ec94a0ae Merge pull request #6345 from chrismeyersfsu/redis-cleanup2
fix redis requirements mess

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 13:43:54 +00:00
Alex Corey
a02b332b10 fixes linting and spelling errors 2020-03-20 09:16:10 -04:00
Alex Corey
56919c5d32 Moves template.jsx over to a functional component. 2020-03-20 09:16:10 -04:00
chris meyers
47f5c17b56 log when notifications fail to send
* If a job does not finish within the 5-second timeout, let the user know
that we failed to even try to send the notification.
2020-03-20 09:11:01 -04:00
softwarefactory-project-zuul[bot]
0fb800f5d0 Merge pull request #6344 from chrismeyersfsu/redis-cleanup1
Redis cleanup1

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 13:07:40 +00:00
AlanCoding
d6e94f9c6f Fix regression in tower_settings module 2020-03-19 23:03:58 -04:00
softwarefactory-project-zuul[bot]
d5bdfa908a Merge pull request #6354 from chrismeyersfsu/redis-cleanup3
remove BROKER_URL special password handling

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 20:45:38 +00:00
softwarefactory-project-zuul[bot]
0a5acb6520 Merge pull request #6166 from fosterseth/feature-cleanup_jobs-perf
Improve performance of cleanup_jobs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 20:09:39 +00:00
softwarefactory-project-zuul[bot]
debc339f75 Merge pull request #6295 from beeankha/module_utils_updates
Update module_utils Functionality

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 19:49:33 +00:00
chris meyers
06f065766f remove BROKER_URL special password handling
* BROKER_URL now describes how to connect to redis. We use a unix socket
to connect to redis; therefore, we no longer need to support fancy URIs
that contain special characters in the password.
2020-03-19 15:12:45 -04:00
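For illustration only (the actual socket path in an AWX install may differ), a unix-socket broker URL of the kind described above; with no host, port, or password component, there is nothing that needs escaping:

    BROKER_URL = 'unix:///var/run/redis/redis.sock'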
John Mitchell
16e672dd38 correct form submission errors for credential form 2020-03-19 15:10:10 -04:00
softwarefactory-project-zuul[bot]
3d7420959e Merge pull request #6347 from squidboylan/fix_collection_test
Collection: Fix some tests that broke during the random name update

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 19:01:03 +00:00
John Mitchell
4bec46a910 add facts views to host and inv host detail views and update enable fact storage checkbox option and detail language 2020-03-19 15:01:02 -04:00
chris meyers
0e70543d54 licenses for new python deps 2020-03-19 14:44:29 -04:00
Seth Foster
88fb30e0da Delete jobs without loading objects first
This commit is intended to speed up the cleanup_jobs command in awx. The
old method takes 7+ hours to delete 1 million old jobs; the new method
takes around 6 minutes.

It leverages a sub-classed Collector, called AWXCollector, that does not
load objects before deleting them. Instead, querysets, which are
lazily evaluated, are used in places where Collector normally keeps a
list of objects.

Finally, a couple of tests ensure parity between the old Collector and
AWXCollector: any object that is updated/removed from the
database using Collector should have identical operations using
AWXCollector.

tower issue 1103
2020-03-19 14:14:02 -04:00
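A conceptual sketch of the idea, not AWXCollector's actual code: subclass Django's deletion Collector and keep the lazily evaluated queryset where the stock Collector would keep a list of loaded instances, so rows can be removed with bulk DELETEs instead of being fetched first.

    from django.db.models.deletion import Collector

    class QuerySetCollector(Collector):  # illustrative name
        def add(self, objs, source=None, nullable=False, reverse_dependency=False):
            if hasattr(objs, 'model'):
                # store the queryset itself; nothing is fetched until a
                # bulk DELETE can be issued for the whole set
                self.data.setdefault(objs.model, []).append(objs)
                return objs
            return super().add(objs, source, nullable, reverse_dependency)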
AlanCoding
558814ef3b tower_group relationships
rollback some module_utils changes
add runtime error for 404 type things
2020-03-19 13:53:08 -04:00
beeankha
ace5a0a2b3 Update module utils, part of collections conversion work 2020-03-19 13:53:08 -04:00
softwarefactory-project-zuul[bot]
8a917a5b70 Merge pull request #6343 from AlanCoding/fix_sanity
Fix sanity error

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 17:28:12 +00:00
Caleb Boylan
1bd74a96d6 Collection: Fix some tests that broke during the random name update 2020-03-19 09:40:48 -07:00
softwarefactory-project-zuul[bot]
74ebb0ae59 Merge pull request #6290 from ryanpetrello/notification-host-summary-race
change when we send job notifications to avoid a race condition

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 15:36:24 +00:00
chris meyers
fd56b7c590 pin pexpect to 4.7.0 2020-03-19 11:25:43 -04:00
chris meyers
7f02e64a0d fix redis requirements mess
* At the end of the redis PR we rebased devel a bunch; requirements
snuck into requirements.txt this way. This PR removes those requirements
(i.e. kombu) and bumps other requirements.
2020-03-19 10:19:07 -04:00
Ryan Petrello
d40a5dec8f change when we send job notifications to avoid a race condition
success/failure notifications for *playbooks* include summary data about
the hosts involved, based on the contents of the playbook_on_stats event

the current implementation suffers from a number of race conditions that
sometimes can cause that data to be missing or incomplete; this change
makes it so that for *playbooks* we build (and send) the notification in
response to the playbook_on_stats event, not the EOF event
2020-03-19 10:01:52 -04:00
chris meyers
5e481341bc flake8 2020-03-19 10:01:20 -04:00
chris meyers
0a1070834d only update the ip address field on the instance
* The heartbeat of an instance is determined to be the last modified
time of the Instance object. Therefore, we want to be careful to only
update very specific fields of the Instance object.
2020-03-19 10:01:20 -04:00
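A sketch of the narrowed write described above, assuming Django's update_fields (variable names are illustrative): listing only ip_address means only that column is written, so the auto-updating modified timestamp that serves as the heartbeat is not bumped as a side effect.

    instance.ip_address = detected_ip
    instance.save(update_fields=['ip_address'])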
chris meyers
c7de3b0528 fix spelling 2020-03-19 10:01:20 -04:00
softwarefactory-project-zuul[bot]
a725778b17 Merge pull request #6327 from ryanpetrello/py2-minus-minus-cli
remove python2 support from awxkit

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 13:58:02 +00:00
softwarefactory-project-zuul[bot]
3b520a8ee8 Merge pull request #6341 from egmar/fix-pgsql-connect-options
Jobs not running with external PostgreSQL database after PR #6034

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 13:55:54 +00:00
Ryan Petrello
06b3e54fb1 remove python2 support from awxkit 2020-03-19 09:02:39 -04:00
chris meyers
7f2e1d46bc replace janky unique channel name w/ uuid
* postgres notify/listen channel names have size limitations as well as
character limitations. Respect those limitations while at the same time
generating a unique channel name.
2020-03-19 08:59:15 -04:00
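An illustrative way to satisfy both constraints: PostgreSQL identifiers are truncated at 63 bytes and are safest when purely alphanumeric, and a UUID's hex form is 32 alphanumeric characters.

    import uuid

    channel_name = 'awx_{}'.format(uuid.uuid4().hex)  # e.g. awx_3f2d...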
chris meyers
12158bdcba remove dead code 2020-03-19 08:57:05 -04:00
Egor Margineanu
f858eda6b1 Made OPTIONS optional 2020-03-19 13:43:06 +01:00
AlanCoding
c5297b0b86 Fix sanity error 2020-03-19 08:42:07 -04:00
Egor Margineanu
e0633c9122 Merge branch 'fix-pgsql-connect-options' of github.com:egmar/awx into fix-pgsql-connect-options 2020-03-19 13:29:39 +01:00
Egor Margineanu
3a208a0be2 Added support for PG port and options. related #6340 2020-03-19 13:29:06 +01:00
Egor Margineanu
cfdfd96793 Added support for PG port and options 2020-03-19 13:26:59 +01:00
Ryan Petrello
db7f0f9421 Merge pull request #6034 from chrismeyersfsu/pg2_no_pubsub
Replace rabbitmq with redis
2020-03-18 17:19:51 -04:00
Ryan Petrello
f1ee963bd0 fix up rebased migrations 2020-03-18 16:19:04 -04:00
Ryan Petrello
7c3cbe6e58 add a license for redis-cli 2020-03-18 16:10:20 -04:00
chris meyers
87de0cf0b3 flake8, pytest, license fixes 2020-03-18 16:10:20 -04:00
chris meyers
18f5dd6e04 add websocket backplane documentation 2020-03-18 16:10:20 -04:00
chris meyers
89163f2915 remove redis broker url test
* We use sockets everywhere. Thus, special characters in passwords are
no longer an issue.
2020-03-18 16:10:20 -04:00
chris meyers
59c9de2761 awxkit python2.7 compatible print
* awxkit still supports python2.7, so do not use fancy f"" strings yet;
instead, use .format()
2020-03-18 16:10:20 -04:00
chris meyers
b58c71bb74 remove broadcast websockets view 2020-03-18 16:10:20 -04:00
Ryan Petrello
1caa2e0287 work around a limitation in postgres notify to properly support copying
postgres has a limitation on its notify message size (8k), and the
messages we generate for deep copying functionality easily go over this
limit; instead of passing a giant nested data structure across the
message bus, this change makes it so that we temporarily store the JSON
structure in memcached, and look it up from *within* the task

see: https://github.com/ansible/tower/issues/4162
2020-03-18 16:10:20 -04:00
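A hedged sketch of the pattern described above (function and key names are hypothetical): only a short cache key travels through pg_notify, and the worker re-hydrates the structure from the cache.

    import json
    import uuid

    from django.core.cache import cache

    def stash_copy_payload(obj_tree):
        key = 'deep-copy-{}'.format(uuid.uuid4().hex)
        cache.set(key, json.dumps(obj_tree), timeout=3600)
        return key  # small enough for an 8k NOTIFY payload

    def load_copy_payload(key):
        # called from *within* the task, on whichever node runs it
        return json.loads(cache.get(key))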
chris meyers
770b457430 redis socket support 2020-03-18 16:10:19 -04:00
chris meyers
d58df0f34a fix sliding window calculation 2020-03-18 16:10:19 -04:00
chris meyers
3f5e2a3cd3 try to make openshift build happy 2020-03-18 16:10:19 -04:00
chris meyers
2b59af3808 safely operate in async or sync context 2020-03-18 16:10:19 -04:00
chris meyers
9e5fe7f5c6 translate Instance hostname to safe analytics name
* More robust translation of Instance hostname to an analytics-safe name
by replacing all non-alphanumeric characters with _
2020-03-18 16:10:19 -04:00
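A one-function sketch of that translation:

    import re

    def analytics_safe_name(hostname):
        # every non-alphanumeric character becomes '_'
        return re.sub(r'[^A-Za-z0-9]', '_', hostname)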
chris meyers
093d204d19 fix flake8 2020-03-18 16:10:19 -04:00
chris meyers
e25bd931a1 change dispatcher test to make required queue
* No fallback-default queue anymore. Queue must be explicitly provided.
2020-03-18 16:10:19 -04:00
chris meyers
8350bb3371 robust broadcast websocket error handling 2020-03-18 16:10:18 -04:00
chris meyers
d6594ab602 add broadcast websocket metrics
* Gather broadcast websocket metrics and push them into redis at a
configurable interval.
* Pop metrics from redis in web view layer to display via the api on
demand
2020-03-18 16:10:18 -04:00
chris meyers
b6b9802f9e increase per-channel capacity
* 100 is the default capacity for a channel. If the client doesn't read
the socket fast enough, websocket messages can and will be lost. This
increases the default to 10,000
2020-03-18 16:10:18 -04:00
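A sketch of the corresponding Django Channels setting, assuming the channels_redis backend (the host value is illustrative):

    CHANNEL_LAYERS = {
        'default': {
            'BACKEND': 'channels_redis.core.RedisChannelLayer',
            'CONFIG': {
                'hosts': ['redis://localhost:6379/0'],
                'capacity': 10000,  # default is 100
            },
        },
    }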
chris meyers
0da94ada2b add missing service name to dev env
* Dev env was bringing the wsbroadcast service up but not under the
tower-process dependency. This is cleaner.
2020-03-18 16:10:18 -04:00
chris meyers
3b9e67ed1b remove channel group model
* Websocket user session <-> group subscription membership now resides
in Redis rather than the database.
2020-03-18 16:10:18 -04:00
chris meyers
14320bc8e6 handle websocket unsubscribe
* Do not return from blocking unsubscribe until _after_ putting the
received unsubscribe message on the queue so that it can be read by the
thread of execution that was unblocked.
2020-03-18 16:10:18 -04:00
chris meyers
3c5c9c6fde move broadcast websocket out into its own process 2020-03-18 16:10:18 -04:00
chris meyers
f5193e5ea5 resolve rebase errors 2020-03-18 16:10:17 -04:00
chris meyers
03b73027e8 websockets aware of Instance changes
* New tower nodes that are (de)registered in the Instance table are seen
by the websocket layer and connected to or disconnected from by the
websocket broadcast backplane using a polling mechanism.
* This is especially useful for openshift and kubernetes. This will be
useful for standalone Tower in the future when the restarting of Tower
services is not required.
2020-03-18 16:10:17 -04:00
chris meyers
c06b6306ab remove health info
* Sending health about websockets over websockets is not a great idea.
* I tried sending health data via prometheus and encountered problems
that will need PR's to prometheus_client library to solve. Circle back
to this later.
2020-03-18 16:10:17 -04:00
Shane McDonald
45ce6d794e Initial migration of rabbitmq -> redis for k8s installs 2020-03-18 16:10:17 -04:00
chris meyers
e94bb44082 replace rabbitmq with redis
* local awx docker-compose and image build only.
2020-03-18 16:10:17 -04:00
chris meyers
be58906aed remove kombu 2020-03-18 16:10:17 -04:00
chris meyers
403e9bbfb5 add websocket health information 2020-03-18 16:10:16 -04:00
chris meyers
ea29f4b91f account for isolated job status
* We cannot query the dispatcher running on isolated nodes to see if
the playbook is still running, because that is the nature of isolated
nodes: they run neither the dispatcher nor the message broker.
Therefore, we should query the control node that is arbitrating the
isolated work. If the control node's process in the dispatcher is dead,
consider the iso job dead.
2020-03-18 16:10:16 -04:00
chris meyers
3f2d757f4e update awxkit to use new unsubscribe event
* Instead of waiting an arbitrary number of seconds, we can now wait the
exact amount of time needed to KNOW that we are unsubscribed. This
changeset takes advantage of the new subscribe reply semantics.
2020-03-18 16:10:16 -04:00
chris meyers
feac93fd24 add websocket group unsubscribe reply
* This change adds more than just an unsubscribe reply.
* Websockets can request to join/leave groups. They do so using a single
idempotent request. This change replies to group requests over the
websockets with the diff of the group subscription, i.e. what groups the
user currently is in, what groups were left, and what groups were
joined.
2020-03-18 16:10:16 -04:00
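The reply might look something like the following; every field name here is invented for illustration, not taken from the actual protocol.

    reply = {
        'groups_current': ['jobs', 'job_events-42'],
        'groups_joined': ['job_events-42'],
        'groups_left': ['schedules'],
    }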
chris meyers
088373963b satisfy generic Role code
* The user in a channels session is a lazy user class. This does not
conform to what the generic Role ancestry code expects: the Role ancestry
code expects a User object. This change converts the lazy object into a
proper User object before calling the permission code path.
2020-03-18 16:10:16 -04:00
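A minimal sketch of that conversion (the helper name is hypothetical): resolve the lazy wrapper to a concrete User row before invoking the permission checks.

    from django.contrib.auth import get_user_model

    def materialize_user(scope_user):
        # forces evaluation of the lazy object via a primary-key lookup
        return get_user_model().objects.get(pk=scope_user.pk)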
chris meyers
5818dcc980 prefer simple async -> sync
* asgiref async_to_sync was causing a Redis connection _for each_ call
to emit_channel_notification, i.e. every event that the callback receiver
processes. This is a "known" issue
https://github.com/django/channels_redis/pull/130#issuecomment-424274470
and the advice is to slow down the rate at which you call
async_to_sync. That is not an option for us. Instead, we put the async
group_send call onto the event loop for the current thread and wait for
it to be processed immediately.

The known issue has to do with the event loop + socket relationship. Each
connection to redis is achieved via a socket. That connection can only be
waited on by the event loop that corresponds to the calling thread.
async_to_sync creates a _new thread_ for each invocation. Thus, a new
connection to redis is required, hence the excess redis connections that
can be observed via netstat | grep redis | wc -l.
2020-03-18 16:10:16 -04:00
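A conceptual sketch of the workaround, not AWX's exact code: schedule group_send on the calling thread's own event loop (reusing its Redis connection) and block until it completes, rather than letting async_to_sync create a new loop and connection per call.

    import asyncio

    def emit_group_send(channel_layer, group, message):
        loop = asyncio.get_event_loop()
        loop.run_until_complete(channel_layer.group_send(group, message))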
chris meyers
dc6c353ecd remove support for multi-reader dispatch queue
* Under the new postgres-backed notify/listen message queue, this never
actually worked. Without using the database to store state, we cannot
provide an at-most-once delivery mechanism with multiple readers.
* With this change, work is done ONLY on the node that requested the
work to be done. Under rabbitmq, the node that was first to get the
message off the queue would do the work; presumably the least busy node.
2020-03-18 16:10:16 -04:00
chris meyers
50b56aa8cb autobahn 20.1.2 released an hour ago
* 20.1.1 no longer available on pypi
2020-03-18 16:10:15 -04:00
chris meyers
3fec69799c fix websocket job subscription access control 2020-03-18 16:10:15 -04:00
chris meyers
2a2c34f567 combine all the broker replacement pieces
* local redis for event processing
* postgres for message broker
* redis for websockets
2020-03-18 16:10:15 -04:00
chris meyers
558e92806b POC postgres broker 2020-03-18 16:10:15 -04:00
chris meyers
355fb125cb redis events 2020-03-18 16:10:15 -04:00
chris meyers
c8eeacacca POC channels 2 2020-03-18 16:10:12 -04:00
softwarefactory-project-zuul[bot]
d0a3c5a42b Merge pull request #6323 from AlanCoding/rm_verify_ssl_test
Replace verify_ssl test that did not work right

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 19:33:24 +00:00
softwarefactory-project-zuul[bot]
64139f960f Merge pull request #6331 from marshmalien/fix-project-schedule-breadcrumb
Add nested project schedule detail breadcrumb

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 19:33:19 +00:00
softwarefactory-project-zuul[bot]
eda494be63 Merge pull request #6330 from rooftopcellist/fix_flakey_workflow_functest
Fix flaky workflow test & set junit family

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2020-03-18 19:27:26 +00:00
Christian Adams
4a0c371014 Fix flaky workflow test & set junit family 2020-03-18 14:02:33 -04:00
softwarefactory-project-zuul[bot]
6b43da35e1 Merge pull request #5745 from wenottingham/no-license-and-registration-please
Clean up a few more cases where we checked the license for features.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 17:59:16 +00:00
softwarefactory-project-zuul[bot]
afa3b500d3 Merge pull request #6273 from AlanCoding/failure_verbosity
Print module standard out in test failure scenarios

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 17:50:16 +00:00
softwarefactory-project-zuul[bot]
c3efb13020 Merge pull request #6325 from AlanCoding/autohack
Automatically hack sys.path to make running tests easier

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 17:46:59 +00:00
Marliana Lara
eb28800082 Fix nested project schedule breadcrumb 2020-03-18 13:17:42 -04:00
softwarefactory-project-zuul[bot]
3219b9b4ac Merge pull request #6318 from AlexSCorey/6100-ConvertWFJTandJTtoHooks
Moves JT Form to using react hooks and custom hooks

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 15:58:13 +00:00
softwarefactory-project-zuul[bot]
e9a48cceba Merge pull request #6319 from squidboylan/collection_test_refactor
Collection test refactor

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 14:53:04 +00:00
softwarefactory-project-zuul[bot]
9a7fa1f3a6 Merge pull request #6313 from mabashian/jest-24-25-upgrade
Upgrade jest and babel-jest to latest (v25)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 14:33:44 +00:00
Caleb Boylan
a03d74d828 Collection: Use random names when we create objects in our tests 2020-03-18 07:02:14 -07:00
mabashian
2274b4b4e4 Upgrade jest and babel-jest to latest (v25) 2020-03-18 09:44:25 -04:00
AlanCoding
c054d7c3d7 Automatically hack sys.path to make running tests easier 2020-03-18 09:40:11 -04:00
softwarefactory-project-zuul[bot]
26d5d7afdc Merge pull request #6304 from AlanCoding/workflow_role
Add workflow to tower_role module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 03:13:37 +00:00
softwarefactory-project-zuul[bot]
6b51b41897 Merge pull request #6322 from AlanCoding/user_no_log
Mark user password as no_log to silence warning

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 01:58:27 +00:00
AlanCoding
ca3cf244fd Replace verify_ssl test that did not work right 2020-03-17 21:43:30 -04:00
softwarefactory-project-zuul[bot]
88d7b24f55 Merge pull request #6311 from jlmitch5/fixDupID
change duplicate IDs where possible

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 01:03:49 +00:00
AlanCoding
ecdb353f6f Mark user password as no_log to silence warning 2020-03-17 19:49:27 -04:00
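For context, a hedged sketch of what such a change typically looks like in an Ansible module's argument spec (the parameter layout is illustrative, not AWX's exact code):

    from ansible.module_utils.basic import AnsibleModule

    module = AnsibleModule(
        argument_spec=dict(
            username=dict(type='str', required=True),
            # no_log=True keeps the value out of logs and silences the
            # "module does not declare no_log" warning for password fields
            password=dict(type='str', no_log=True),
        ),
    )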
AlanCoding
d9932eaf6a Add integration test 2020-03-17 19:37:30 -04:00
softwarefactory-project-zuul[bot]
cbc52fa19f Merge pull request #6278 from AlanCoding/wfjt_tests
Add more tests for recent WFJT module issues

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 23:02:14 +00:00
softwarefactory-project-zuul[bot]
cc77b31d4e Merge pull request #6314 from mabashian/6293-toggle-error
Adds error div to toggle fields built using form generator

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 21:34:18 +00:00
Bill Nottingham
b875c03f4a Clean up a few more cases where we checked the license for features. 2020-03-17 17:19:33 -04:00
Alex Corey
e87f804c92 Moves JT Form to using react hooks and custom hooks 2020-03-17 16:40:09 -04:00
softwarefactory-project-zuul[bot]
f86cbf33aa Merge pull request #6307 from mabashian/eslint-5-6-upgrade
Bumps eslint from 5.6 to 6.8

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 20:24:13 +00:00
mabashian
6db6fe90fd Adds error div to toggle fields built using form generator so that errors can be shown underneath the toggle 2020-03-17 15:27:16 -04:00
softwarefactory-project-zuul[bot]
bcbe9691e5 Merge pull request #6312 from beeankha/collections_toolbox_sanity_fix
Fix Shebang Error in Collections Tools

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 19:21:54 +00:00
John Mitchell
c850148ee3 change duplicate IDs where possible 2020-03-17 14:08:13 -04:00
softwarefactory-project-zuul[bot]
b260a88810 Merge pull request #6308 from wenottingham/branch-for-your-branch
Allow scm_branch in notifications.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 17:51:10 +00:00
mabashian
a0937a8b7c Bumps eslint from 5.6 to 6.8 2020-03-17 13:44:33 -04:00
beeankha
c4c0cace88 Fix ansible shebang error 2020-03-17 12:55:32 -04:00
softwarefactory-project-zuul[bot]
a55bcafa3a Merge pull request #6310 from jakemcdermott/update-lockfile
Auto-update dependencies in lock file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 16:22:48 +00:00
Bill Nottingham
d0c510563f Allow scm_branch in notifications. 2020-03-17 12:09:35 -04:00
Jake McDermott
d23fb17cd9 Auto-update dependencies in lock file 2020-03-17 11:33:55 -04:00
AlanCoding
8668f2ad46 Add workflow to tower_role 2020-03-16 22:36:27 -04:00
softwarefactory-project-zuul[bot]
e210ee4077 Merge pull request #6301 from gamuniz/catch_analytics_failure
rework the gather() to always delete the leftover directories

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 20:45:36 +00:00
softwarefactory-project-zuul[bot]
47ff56c411 Merge pull request #6297 from squidboylan/collection_tests
Collection tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 20:07:13 +00:00
softwarefactory-project-zuul[bot]
1e780aad38 Merge pull request #5726 from AlanCoding/jt_org_2020
Add read-only organization field to job templates

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 20:03:47 +00:00
Gabe Muniz
80234c5600 rework the tar to always delete the leftover directories 2020-03-16 19:54:15 +00:00
softwarefactory-project-zuul[bot]
c8510f7d75 Merge pull request #6256 from beeankha/collections_toolbox
Module Generation Tools for the AWX Collection

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:55:52 +00:00
softwarefactory-project-zuul[bot]
6431050b36 Merge pull request #6296 from ryanpetrello/fix-iso-node-bug
fix a bug in isolated event handling

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:45:05 +00:00
softwarefactory-project-zuul[bot]
5c360aeff3 Merge pull request #6287 from squidboylan/add_collection_test
Collection: add a test for multiple credentials on a jt

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:38:52 +00:00
softwarefactory-project-zuul[bot]
44e043d75f Merge pull request #6294 from mabashian/4070-access-5
Adds aria-labels to links without discernible inner text

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:27:01 +00:00
Caleb Boylan
ef223b0afb Collection: Add test for workflow_template extra_vars 2020-03-16 11:12:57 -07:00
Caleb Boylan
e9e8283f16 Collection: Add integration test for smart inventory 2020-03-16 11:08:01 -07:00
Ryan Petrello
b73e8d8a56 fix a bug in isolated event handling
see: https://github.com/ansible/awx/issues/6280
2020-03-16 13:15:10 -04:00
beeankha
6db6c6c5ba Revert module util changes, reorder params in group module 2020-03-16 11:18:08 -04:00
AlanCoding
2b5ff9a6f9 Patches to generator to better align with modules 2020-03-16 11:10:07 -04:00
beeankha
97c169780d Delete config file 2020-03-16 11:10:07 -04:00
beeankha
88c46b4573 Add updated tower_api module util file, update generator and template 2020-03-16 11:10:07 -04:00
beeankha
53d27c933e Fix linter issues 2020-03-16 11:10:07 -04:00
beeankha
c340fff643 Add generator playbook for the AWX Collection modules, along with other module generation tools 2020-03-16 11:10:07 -04:00
mabashian
61600a8252 Adds aria-labels to links without discernible inner text 2020-03-16 10:39:21 -04:00
AlanCoding
521cda878e Add named URL docs for uniqueness functionality 2020-03-16 10:04:18 -04:00
softwarefactory-project-zuul[bot]
9ecd6ad0fb Merge pull request #6245 from mabashian/4070-access-3
Adds lang attr to html tag to specify default language for the application

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 22:36:26 +00:00
softwarefactory-project-zuul[bot]
349af22d0f Merge pull request #6261 from jakemcdermott/5386-dont-block-blockquotes
Don't block the blockquotes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 22:12:28 +00:00
softwarefactory-project-zuul[bot]
ad316fc2a3 Merge pull request #6284 from mabashian/4070-access-1
Adds aria-label attrs to buttons without inner text

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 22:00:16 +00:00
softwarefactory-project-zuul[bot]
e4abf634f0 Merge pull request #6268 from keithjgrant/survey-list-sort
Add survey list sorting

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 21:59:34 +00:00
softwarefactory-project-zuul[bot]
bb144acee3 Merge pull request #6249 from jakemcdermott/5771-fix-read-only-display-of-playbooks
Show playbook field on JT when read-only

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 21:11:59 +00:00
Caleb Boylan
16d5456d2b Collection: add a test for multiple credentials on a jt 2020-03-13 13:41:38 -07:00
mabashian
abe8153358 Remove rogue console 2020-03-13 16:38:46 -04:00
Keith Grant
86aabb297e add documentation for useDismissableError, useDeleteItems 2020-03-13 13:21:59 -07:00
softwarefactory-project-zuul[bot]
65a7613c26 Merge pull request #6257 from jakemcdermott/3774-fix-settings-ldap-error-handling
Add parser error handling for settings json

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 19:30:34 +00:00
softwarefactory-project-zuul[bot]
4d1790290e Merge pull request #6259 from jakemcdermott/3956-translate-access-strings
Mark access removal prompts and tech preview message for translation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 19:30:29 +00:00
softwarefactory-project-zuul[bot]
dca335d17c Merge pull request #6251 from jakemcdermott/5987-fix-sometimes-missing-verbosity-and-job-type-values
Fix sometimes missing job_type and verbosity field values

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 19:30:24 +00:00
softwarefactory-project-zuul[bot]
da48cffa12 Merge pull request #6285 from ryanpetrello/user-readonly-last-login
make User.last_login read_only=True in its serializer

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2020-03-13 19:24:58 +00:00
Keith Grant
a2eeb6e7b5 delete unused file 2020-03-13 11:40:24 -07:00
Jake McDermott
f8f6fff21e Show playbook field on JT when read-only 2020-03-13 13:35:49 -04:00
Keith Grant
3e616f2770 update SurveyList tests, add TemplateSurvey tests 2020-03-13 10:17:39 -07:00
softwarefactory-project-zuul[bot]
7c6bef15ba Merge pull request #6246 from mabashian/4070-access-4
Adds aria-label attrs to img elements

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 16:54:35 +00:00
Ryan Petrello
27b48fe55b make User.last_login read_only=True in its serializer 2020-03-13 12:53:40 -04:00
softwarefactory-project-zuul[bot]
6b20ffbfdd Merge pull request #6275 from ryanpetrello/fix-isolated-hostname-in-events
consolidate isolated event handling code into one function

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 16:34:46 +00:00
mabashian
43abeec3d7 Mark img aria-labels for translation 2020-03-13 11:25:47 -04:00
mabashian
bd8886a867 Adds lang attr to html tag in ui next app 2020-03-13 11:21:25 -04:00
mabashian
bd68317cfd Adds aria-label attrs to buttons without inner text 2020-03-13 11:13:28 -04:00
Ryan Petrello
f8818730d4 consolidate isolated event handling code into one function
make the non-isolated *and* isolated event handling share the same
function so we don't regress on behavior between the two
2020-03-13 10:05:48 -04:00
Jake McDermott
b41c9e5ba3 Normalize initial value of select2 fields 2020-03-13 09:43:12 -04:00
Jake McDermott
401be0c265 Add parser error handling for settings json 2020-03-13 09:23:11 -04:00
humzam96
35be571eed Don't block the blockquotes 2020-03-13 09:22:20 -04:00
Hideki Saito
8e7faa853e Mark tech preview message for translation 2020-03-13 09:21:02 -04:00
Jake McDermott
1ee46ab98a Mark access removal prompts for translation 2020-03-13 09:20:59 -04:00
Keith Grant
ac9f526cf0 fix useRequest error bug 2020-03-12 17:08:21 -07:00
softwarefactory-project-zuul[bot]
7120e92078 Merge pull request #6262 from rooftopcellist/mv_bootstrap_script
Update dev container to be consistent with other installation methods

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-12 19:51:18 +00:00
AlanCoding
7e6def8acc Grant org JT admin role in migration as well 2020-03-12 15:45:47 -04:00
AlanCoding
aa4842aea5 Various JT.organization cleanup items
cleanup from PR review suggestions

bump migration number

fix test

revert change to old-app JT form no longer needed
2020-03-12 15:45:47 -04:00
John Mitchell
7547793792 add organization to template views in old and new ui 2020-03-12 15:45:47 -04:00
AlanCoding
7d0b207571 Organization on JT as read-only field
Set JT.organization with value from its project

Remove validation requiring JT.organization

Undo some of the additional org definitions in tests

Revert some tests no longer needed for feature

exclude workflow approvals from unified organization field

revert awxkit changes for providing organization

Roll back additional JT creation permission requirement

Fix up more issues by persisting organization field when project is removed

Restrict project org editing, logging, and testing

Grant removed inventory org admin permissions in migration

Add special validate_unique for job templates
  this deals with enforcing name-organization uniqueness

Add back in special message where config is unknown
  when receiving 403 on job relaunch

Fix logical and performance bugs with data migration

within JT.inventory.organization make-permission-explicit migration

remove nested loops so we do .iterator() on JT queryset

in reverse migration, carefully remove execute role on JT
  held by org admins of inventory organization,
  as well as the execute_role holders

Use current state of Role model in logic, with 1 notable exception
  that is used to filter on ancestors
  the ancestor and descendant relationship in the migration model
    is not reliable
  output of this is saved as an integer list to avoid future
    compatibility errors

make the parents rebuilding logic skip over irrelevant models
  this is the largest performance gain for small resource numbers
2020-03-12 15:45:46 -04:00
AlanCoding
daa9282790 Initial (editable) pass of adding JT.organization
This is the old version of this feature from 2019
  this allows setting the organization in the data sent
  to the API when creating a JT, and exposes the field
  in the UI as well

Subsequent commit changes the field from editable
  to read-only, but as of this commit, the machinery
  is not hooked up to infer it from project
2020-03-12 15:45:46 -04:00
AlanCoding
bdd0b9e4d9 Add more tests for recent WFJT module issues 2020-03-12 15:45:25 -04:00
softwarefactory-project-zuul[bot]
1876849d89 Merge pull request #6186 from AlanCoding/wfjt_vars
Modernize types of WFJT module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-12 19:43:04 +00:00
softwarefactory-project-zuul[bot]
e4dd2728ef Merge pull request #6276 from ryanpetrello/approval-start-date-in-notifications
save approval node start time *before* sending "started" notifications

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-12 18:58:03 +00:00
Ryan Petrello
88571f6dcb save approval node start time *before* sending "started" notifications
see: https://github.com/ansible/awx/issues/6267
2020-03-12 14:14:56 -04:00
AlanCoding
a1cdc07944 Print module standard out in test failure scenarios 2020-03-12 12:26:54 -04:00
Keith Grant
eea80c45d1 fix keys, clean up surveylist 2020-03-12 08:42:59 -07:00
Keith Grant
07565b5efc add error handling to TemplateSurvey 2020-03-11 15:06:13 -07:00
Christian Adams
3755151cc5 Update dev container to be consistent with other installation methods
- rename start_development.sh script to `launch_awx.sh`, like it is in k8s installations
2020-03-11 16:23:31 -04:00
Keith Grant
2584f7359e restructure TemplateSurvey & list components 2020-03-11 09:33:04 -07:00
Alex Corey
286cec8bc3 WIP 2020-03-11 09:33:04 -07:00
Alex Corey
64b1aa43b1 Adds SurveyList tool bar 2020-03-11 09:33:04 -07:00
mabashian
6c7ab97159 Adds aria-label attrs to img elements 2020-03-10 16:12:29 -04:00
mabashian
8077c910b0 Adds lang attr to installing template 2020-03-10 16:10:51 -04:00
AlanCoding
feef39c5cc Change test to use native list for schemas 2020-03-10 16:07:04 -04:00
AlanCoding
e80843846e Modernize types of WFJT module 2020-03-10 16:07:03 -04:00
mabashian
ecc68c1003 Adds lang attr to html tag to specify default language for the application 2020-03-10 15:33:31 -04:00
486 changed files with 21087 additions and 7735 deletions

View File

@@ -30,8 +30,9 @@ https://www.ansible.com/security
##### STEPS TO REPRODUCE
-<!-- For bugs, please show exactly how to reproduce the problem. For new
-features, show how the feature would be used. -->
+<!-- For new features, show how the feature would be used. For bugs, please show
+exactly how to reproduce the problem. Ideally, provide all steps and data needed
+to recreate the bug from a new awx install. -->
##### EXPECTED RESULTS

View File

@@ -2,6 +2,22 @@
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
## 10.0.0 (Mar 30, 2020)
- As of AWX 10.0.0, the official AWX CLI no longer supports Python 2 (it requires at least Python 3.6) (https://github.com/ansible/awx/pull/6327)
- AWX no longer relies on RabbitMQ; Redis is added as a new dependency (https://github.com/ansible/awx/issues/5443)
- Altered AWX's event tables to allow more than ~2 billion total events (https://github.com/ansible/awx/issues/6010)
- Improved the performance (time to execute, and memory consumption) of the periodic job cleanup system job (https://github.com/ansible/awx/pull/6166)
- Updated Job Templates so they now have an explicit Organization field (it is no longer inferred from the associated Project) (https://github.com/ansible/awx/issues/3903)
- Updated social-auth-core to address an upcoming GitHub API deprecation (https://github.com/ansible/awx/issues/5970)
- Updated to ansible-runner 1.4.6 to address various bugs.
- Updated Django to address CVE-2020-9402
- Updated pyyaml version to address CVE-2017-18342
- Fixed a bug which prevented the new `scm_branch` field from being used in custom notification templates (https://github.com/ansible/awx/issues/6258)
- Fixed a race condition that sometimes causes success/failure notifications to include an incomplete list of hosts (https://github.com/ansible/awx/pull/6290)
- Fixed a bug that can cause certain setting pages to lose unsaved form edits when a playbook is launched (https://github.com/ansible/awx/issues/5265)
- Fixed a bug that can prevent the "Use TLS/SSL" field from properly saving when editing email notification templates (https://github.com/ansible/awx/issues/6383)
- Fixed a race condition that sometimes broke event/stdout processing for jobs launched in container groups (https://github.com/ansible/awx/issues/6280)
## 9.3.0 (Mar 12, 2020)
- Added the ability to specify an OAuth2 token description in the AWX CLI (https://github.com/ansible/awx/issues/6122)
- Added support for K8S service account annotations to the installer (https://github.com/ansible/awx/pull/6007)

View File

@@ -155,12 +155,11 @@ If you start a second terminal session, you can take a look at the running conta
-(host)$ docker ps
+$ docker ps
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-aa4a75d6d77b gcr.io/ansible-tower-engineering/awx_devel:devel "/tini -- /bin/sh ..." 23 seconds ago Up 15 seconds 0.0.0.0:5555->5555/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 22/tcp, 0.0.0.0:8080->8080/tcp tools_awx_1
-e4c0afeb548c postgres:10 "docker-entrypoint..." 26 seconds ago Up 23 seconds 5432/tcp tools_postgres_1
-0089699d5afd tools_logstash "/docker-entrypoin..." 26 seconds ago Up 25 seconds tools_logstash_1
-4d4ff0ced266 memcached:alpine "docker-entrypoint..." 26 seconds ago Up 25 seconds 0.0.0.0:11211->11211/tcp tools_memcached_1
-92842acd64cd rabbitmq:3-management "docker-entrypoint..." 26 seconds ago Up 24 seconds 4369/tcp, 5671-5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp tools_rabbitmq_1
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+44251b476f98 gcr.io/ansible-tower-engineering/awx_devel:devel "/entrypoint.sh /bin…" 27 seconds ago Up 23 seconds 0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp tools_awx_run_9e820694d57e
+b049a43817b4 memcached:alpine "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:11211->11211/tcp tools_memcached_1
+40de380e3c2e redis:latest "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:6379->6379/tcp tools_redis_1
+b66a506d3007 postgres:10 "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:5432->5432/tcp tools_postgres_1
```
**NOTE**

View File

@@ -41,6 +41,8 @@ This document provides a guide for installing AWX.
+ [Run the installer](#run-the-installer-2)
+ [Post-install](#post-install-2)
+ [Accessing AWX](#accessing-awx-2)
- [Installing the AWX CLI](#installing-the-awx-cli)
* [Building the CLI Documentation](#building-the-cli-documentation)
## Getting started
@@ -128,7 +130,6 @@ For convenience, you can create a file called `vars.yml`:
```
admin_password: 'adminpass'
pg_password: 'pgpass'
rabbitmq_password: 'rabbitpass'
secret_key: 'mysupersecret'
```
@@ -555,16 +556,7 @@ $ ansible-playbook -i inventory -e docker_registry_password=password install.yml
### Post-install
-After the playbook run completes, Docker will report up to 5 running containers. If you chose to use an existing PostgresSQL database, then it will report 4. You can view the running containers using the `docker ps` command, as follows:
-```bash
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-e240ed8209cd awx_task:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 8052/tcp awx_task
-1cfd02601690 awx_web:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 0.0.0.0:443->8052/tcp awx_web
-55a552142bcd memcached:alpine "docker-entrypoint..." 2 minutes ago Up 2 minutes 11211/tcp memcached
-84011c072aad rabbitmq:3 "docker-entrypoint..." 2 minutes ago Up 2 minutes 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
-97e196120ab3 postgres:9.6 "docker-entrypoint..." 2 minutes ago Up 2 minutes 5432/tcp postgres
-```
+After the playbook run completes, Docker starts a series of containers that provide the services that make up AWX. You can view the running containers using the `docker ps` command.
+If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the docker-compose.yml file is created (by default, `awx`).
@@ -630,3 +622,34 @@ Added instance awx to tower
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
# Installing the AWX CLI
`awx` is the official command-line client for AWX. It:
* Uses naming and structure consistent with the AWX HTTP API
* Provides consistent output formats with optional machine-parsable formats
* To the extent possible, auto-detects API versions, available endpoints, and
feature support across multiple versions of AWX.
Potential uses include:
* Configuring and launching jobs/playbooks
* Checking on the status and output of job runs
* Managing objects like organizations, users, teams, etc...
The preferred way to install the AWX CLI is through pip directly from GitHub:
pip install "https://github.com/ansible/awx/archive/$VERSION.tar.gz#egg=awxkit&subdirectory=awxkit"
awx --help
...where ``$VERSION`` is the version of AWX you're running. To see a list of all available releases, visit: https://github.com/ansible/awx/releases
## Building the CLI Documentation
To build the docs, spin up a real AWX server, `pip install sphinx sphinxcontrib-autoprogram`, and run:
~ TOWER_HOST=https://awx.example.org TOWER_USERNAME=example TOWER_PASSWORD=secret make clean html
~ cd build/html/ && python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...

View File

@@ -265,28 +265,6 @@ migrate:
dbchange:
$(MANAGEMENT_COMMAND) makemigrations
server_noattach:
tmux new-session -d -s awx 'exec make uwsgi'
tmux rename-window 'AWX'
tmux select-window -t awx:0
tmux split-window -v 'exec make dispatcher'
tmux new-window 'exec make daphne'
tmux select-window -t awx:1
tmux rename-window 'WebSockets'
tmux split-window -h 'exec make runworker'
tmux split-window -v 'exec make nginx'
tmux new-window 'exec make receiver'
tmux select-window -t awx:2
tmux rename-window 'Extra Services'
tmux select-window -t awx:0
server: server_noattach
tmux -2 attach-session -t awx
# Use with iterm2's native tmux protocol support
servercc: server_noattach
tmux -2 -CC attach-session -t awx
supervisor:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -311,18 +289,11 @@ daphne:
fi; \
daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer
runworker:
wsbroadcast:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py runworker --only-channels websocket.*
# Run the built-in development webserver (by default on http://localhost:8013).
runserver:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py runserver
$(PYTHON) manage.py run_wsbroadcast
# Run to start the background task dispatcher for development.
dispatcher:
@@ -400,6 +371,7 @@ prepare_collection_venv:
$(VENV_BASE)/awx/bin/pip install --target=$(COLLECTION_VENV) git+https://github.com/ansible/tower-cli.git
COLLECTION_TEST_DIRS ?= awx_collection/test/awx
COLLECTION_TEST_TARGET ?=
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
@@ -408,7 +380,7 @@ test_collection:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONPATH=$(COLLECTION_VENV):/awx_devel/awx_collection:$PYTHONPATH:/usr/lib/python3.6/site-packages py.test $(COLLECTION_TEST_DIRS)
PYTHONPATH=$(COLLECTION_VENV):$PYTHONPATH:/usr/lib/python3.6/site-packages py.test $(COLLECTION_TEST_DIRS)
flake8_collection:
flake8 awx_collection/ # Different settings, in main exclude list
@@ -434,7 +406,7 @@ test_collection_sanity: install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
test_unit:
@if [ "$(VENV_BASE)" ]; then \

View File

@@ -1 +1 @@
-9.3.0
+10.0.0

View File

@@ -5,10 +5,12 @@
import inspect
import logging
import time
import uuid
import urllib.parse
# Django
from django.conf import settings
from django.core.cache import cache
from django.db import connection
from django.db.models.fields import FieldDoesNotExist
from django.db.models.fields.related import OneToOneRel
@@ -548,6 +550,15 @@ class SubListCreateAPIView(SubListAPIView, ListCreateAPIView):
})
return d
def get_queryset(self):
if hasattr(self, 'parent_key'):
# Prefer this filtering because ForeignKey allows us more assumptions
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(**{self.parent_key: parent})
return super(SubListCreateAPIView, self).get_queryset()
def create(self, request, *args, **kwargs):
# If the object ID was not specified, it probably doesn't exist in the
# DB yet. We want to see if we can create it. The URL may choose to
@@ -964,6 +975,11 @@ class CopyAPIView(GenericAPIView):
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
new_obj.admin_role.members.add(request.user)
if sub_objs:
# store the copied object dict into memcached, because it's
# often too large for postgres' notification bus
# (which has a default maximum message size of 8k)
key = 'deep-copy-{}'.format(str(uuid.uuid4()))
cache.set(key, sub_objs, timeout=3600)
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):
permission_check_func = (
@@ -971,7 +987,7 @@ class CopyAPIView(GenericAPIView):
)
trigger_delayed_deep_copy(
self.model.__module__, self.model.__name__,
obj.pk, new_obj.pk, request.user.pk, sub_objs,
obj.pk, new_obj.pk, request.user.pk, key,
permission_check_func=permission_check_func
)
serializer = self._get_copy_return_serializer(new_obj)
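The cached-payload pattern above sidesteps the ~8k pg_notify message limit by sending only a key over the bus. A minimal standalone sketch of the same idea, assuming a configured Django cache backend (the payload and key names here are illustrative, not the view's actual data):

```python
# Sketch: stash a large object graph in the cache and pass only the key
# over the size-limited notification bus.
import uuid
from django.core.cache import cache

sub_objs = {'labels': [1, 2, 3], 'nodes': list(range(1000))}  # hypothetical payload
key = 'deep-copy-{}'.format(uuid.uuid4())
cache.set(key, sub_objs, timeout=3600)   # one hour, matching the view above

# ...the delayed task on the other side rehydrates it from the key it was handed:
payload = cache.get(key)
```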

View File

@@ -60,7 +60,8 @@ class Metadata(metadata.SimpleMetadata):
'type': _('Data type for this {}.'),
'url': _('URL for this {}.'),
'related': _('Data structure with URLs of related resources.'),
'summary_fields': _('Data structure with name/description for related resources.'),
'summary_fields': _('Data structure with name/description for related resources. '
'The output for some objects may be limited for performance reasons.'),
'created': _('Timestamp when this {} was created.'),
'modified': _('Timestamp when this {} was last modified.'),
}

View File

@@ -72,6 +72,7 @@ from awx.main.utils import (
prefetch_page_capabilities, get_external_account, truncate_stdout,
)
from awx.main.utils.filters import SmartFilter
from awx.main.utils.named_url_graph import reset_counters
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.validators import vars_validate_or_raise
@@ -347,6 +348,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
def _generate_named_url(self, url_path, obj, node):
url_units = url_path.split('/')
reset_counters()
named_url = node.generate_named_url(obj)
url_units[4] = named_url
return '/'.join(url_units)
@@ -642,7 +644,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
_capabilities_prefetch = [
'admin', 'execute',
{'copy': ['jobtemplate.project.use', 'jobtemplate.inventory.use',
'workflowjobtemplate.organization.workflow_admin']}
'organization.workflow_admin']}
]
class Meta:
@@ -884,6 +886,9 @@ class UserSerializer(BaseSerializer):
fields = ('*', '-name', '-description', '-modified',
'username', 'first_name', 'last_name',
'email', 'is_superuser', 'is_system_auditor', 'password', 'ldap_dn', 'last_login', 'external_account')
extra_kwargs = {
'last_login': {'read_only': True}
}
def to_representation(self, obj):
ret = super(UserSerializer, self).to_representation(obj)
@@ -1246,6 +1251,7 @@ class OrganizationSerializer(BaseSerializer):
res.update(dict(
projects = self.reverse('api:organization_projects_list', kwargs={'pk': obj.pk}),
inventories = self.reverse('api:organization_inventories_list', kwargs={'pk': obj.pk}),
job_templates = self.reverse('api:organization_job_templates_list', kwargs={'pk': obj.pk}),
workflow_job_templates = self.reverse('api:organization_workflow_job_templates_list', kwargs={'pk': obj.pk}),
users = self.reverse('api:organization_users_list', kwargs={'pk': obj.pk}),
admins = self.reverse('api:organization_admins_list', kwargs={'pk': obj.pk}),
@@ -1274,6 +1280,14 @@ class OrganizationSerializer(BaseSerializer):
'job_templates': 0, 'admins': 0, 'projects': 0}
else:
summary_dict['related_field_counts'] = counts_dict[obj.id]
# Organization participation roles (admin, member) can't be assigned
# to a team. This provides a hint to the ui so it can know to not
# display these roles for team role selection.
for key in ('admin_role', 'member_role',):
if key in summary_dict.get('object_roles', {}):
summary_dict['object_roles'][key]['user_only'] = True
return summary_dict
def validate(self, attrs):
@@ -1387,12 +1401,6 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
organization = None
if 'organization' in attrs:
organization = attrs['organization']
elif self.instance:
organization = self.instance.organization
if 'allow_override' in attrs and self.instance:
# case where user is turning off this project setting
if self.instance.allow_override and not attrs['allow_override']:
@@ -1408,11 +1416,7 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
' '.join([str(pk) for pk in used_by])
)})
view = self.context.get('view', None)
if not organization and not view.request.user.is_superuser:
# Only allow super users to create orgless projects
raise serializers.ValidationError(_('Organization is missing'))
elif get_field_from_model_or_attrs('scm_type') == '':
if get_field_from_model_or_attrs('scm_type') == '':
for fd in ('scm_update_on_launch', 'scm_delete_on_update', 'scm_clean'):
if get_field_from_model_or_attrs(fd):
raise serializers.ValidationError({fd: _('Update options must be set to false for manual projects.')})
@@ -2738,7 +2742,8 @@ class JobOptionsSerializer(LabelsListMixin, BaseSerializer):
fields = ('*', 'job_type', 'inventory', 'project', 'playbook', 'scm_branch',
'forks', 'limit', 'verbosity', 'extra_vars', 'job_tags',
'force_handlers', 'skip_tags', 'start_at_task', 'timeout',
'use_fact_cache',)
'use_fact_cache', 'organization',)
read_only_fields = ('organization',)
def get_related(self, obj):
res = super(JobOptionsSerializer, self).get_related(obj)
@@ -2753,6 +2758,8 @@ class JobOptionsSerializer(LabelsListMixin, BaseSerializer):
res['project'] = self.reverse('api:project_detail', kwargs={'pk': obj.project.pk})
except ObjectDoesNotExist:
setattr(obj, 'project', None)
if obj.organization_id:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization_id})
if isinstance(obj, UnifiedJobTemplate):
res['extra_credentials'] = self.reverse(
'api:job_template_extra_credentials_list',
@@ -2899,6 +2906,10 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
)
if obj.host_config_key:
res['callback'] = self.reverse('api:job_template_callback', kwargs={'pk': obj.pk})
if obj.organization_id:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization_id})
if obj.webhook_credential_id:
res['webhook_credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.webhook_credential_id})
return res
def validate(self, attrs):
@@ -3204,7 +3215,7 @@ class AdHocCommandSerializer(UnifiedJobSerializer):
field_kwargs['choices'] = module_name_choices
field_kwargs['required'] = bool(not module_name_default)
field_kwargs['default'] = module_name_default or serializers.empty
field_kwargs['allow_blank'] = bool(module_name_default)
field_kwargs['allow_blank'] = False
field_kwargs.pop('max_length', None)
return field_class, field_kwargs
@@ -3389,6 +3400,8 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
)
if obj.organization:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization.pk})
if obj.webhook_credential_id:
res['webhook_credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.webhook_credential_id})
return res
def validate_extra_vars(self, value):
@@ -3683,7 +3696,8 @@ class WorkflowJobTemplateNodeSerializer(LaunchConfigurationBaseSerializer):
class Meta:
model = WorkflowJobTemplateNode
fields = ('*', 'workflow_job_template', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes', 'all_parents_must_converge',)
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes', 'all_parents_must_converge',
'identifier',)
def get_related(self, obj):
res = super(WorkflowJobTemplateNodeSerializer, self).get_related(obj)
@@ -3723,7 +3737,7 @@ class WorkflowJobNodeSerializer(LaunchConfigurationBaseSerializer):
model = WorkflowJobNode
fields = ('*', 'job', 'workflow_job', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',
'all_parents_must_converge', 'do_not_run',)
'all_parents_must_converge', 'do_not_run', 'identifier')
def get_related(self, obj):
res = super(WorkflowJobNodeSerializer, self).get_related(obj)

View File

@@ -10,6 +10,7 @@ from awx.api.views import (
OrganizationAdminsList,
OrganizationInventoriesList,
OrganizationProjectsList,
OrganizationJobTemplatesList,
OrganizationWorkflowJobTemplatesList,
OrganizationTeamsList,
OrganizationCredentialList,
@@ -33,6 +34,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/admins/$', OrganizationAdminsList.as_view(), name='organization_admins_list'),
url(r'^(?P<pk>[0-9]+)/inventories/$', OrganizationInventoriesList.as_view(), name='organization_inventories_list'),
url(r'^(?P<pk>[0-9]+)/projects/$', OrganizationProjectsList.as_view(), name='organization_projects_list'),
url(r'^(?P<pk>[0-9]+)/job_templates/$', OrganizationJobTemplatesList.as_view(), name='organization_job_templates_list'),
url(r'^(?P<pk>[0-9]+)/workflow_job_templates/$', OrganizationWorkflowJobTemplatesList.as_view(), name='organization_workflow_job_templates_list'),
url(r'^(?P<pk>[0-9]+)/teams/$', OrganizationTeamsList.as_view(), name='organization_teams_list'),
url(r'^(?P<pk>[0-9]+)/credentials/$', OrganizationCredentialList.as_view(), name='organization_credential_list'),
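A quick way to exercise the new organization-scoped job templates route wired up above; the host, credentials, and organization id are placeholders for a throwaway dev install:

```python
# Hypothetical smoke test for /api/v2/organizations/<pk>/job_templates/.
import requests

resp = requests.get(
    'https://awx.example.org/api/v2/organizations/1/job_templates/',
    auth=('admin', 'password'),   # placeholder credentials
    verify=False,                 # only acceptable against a local dev install
)
resp.raise_for_status()
print(resp.json()['count'])       # number of JTs owned by organization 1
```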

View File

@@ -34,7 +34,9 @@ from awx.api.views import (
OAuth2ApplicationDetail,
)
from awx.api.views.metrics import MetricsView
from awx.api.views.metrics import (
MetricsView,
)
from .organization import urls as organization_urls
from .user import urls as user_urls

View File

@@ -111,6 +111,7 @@ from awx.api.views.organization import ( # noqa
OrganizationUsersList,
OrganizationAdminsList,
OrganizationProjectsList,
OrganizationJobTemplatesList,
OrganizationWorkflowJobTemplatesList,
OrganizationTeamsList,
OrganizationActivityStreamList,

View File

@@ -39,3 +39,4 @@ class MetricsView(APIView):
if (request.user.is_superuser or request.user.is_system_auditor):
return Response(metrics().decode('UTF-8'))
raise PermissionDenied()

View File

@@ -4,10 +4,7 @@
import dateutil
import logging
from django.db.models import (
Count,
F,
)
from django.db.models import Count
from django.db import transaction
from django.shortcuts import get_object_or_404
from django.utils.timezone import now
@@ -175,28 +172,18 @@ class OrganizationCountsMixin(object):
inv_qs = Inventory.accessible_objects(self.request.user, 'read_role')
project_qs = Project.accessible_objects(self.request.user, 'read_role')
jt_qs = JobTemplate.accessible_objects(self.request.user, 'read_role')
# Produce counts of Foreign Key relationships
db_results['inventories'] = inv_qs\
.values('organization').annotate(Count('organization')).order_by('organization')
db_results['inventories'] = inv_qs.values('organization').annotate(Count('organization')).order_by('organization')
db_results['teams'] = Team.accessible_objects(
self.request.user, 'read_role').values('organization').annotate(
Count('organization')).order_by('organization')
JT_project_reference = 'project__organization'
JT_inventory_reference = 'inventory__organization'
db_results['job_templates_project'] = JobTemplate.accessible_objects(
self.request.user, 'read_role').exclude(
project__organization=F(JT_inventory_reference)).values(JT_project_reference).annotate(
Count(JT_project_reference)).order_by(JT_project_reference)
db_results['job_templates'] = jt_qs.values('organization').annotate(Count('organization')).order_by('organization')
db_results['job_templates_inventory'] = JobTemplate.accessible_objects(
self.request.user, 'read_role').values(JT_inventory_reference).annotate(
Count(JT_inventory_reference)).order_by(JT_inventory_reference)
db_results['projects'] = project_qs\
.values('organization').annotate(Count('organization')).order_by('organization')
db_results['projects'] = project_qs.values('organization').annotate(Count('organization')).order_by('organization')
# Other members and admins of organization are always viewable
db_results['users'] = org_qs.annotate(
@@ -212,11 +199,7 @@ class OrganizationCountsMixin(object):
'admins': 0, 'projects': 0}
for res, count_qs in db_results.items():
if res == 'job_templates_project':
org_reference = JT_project_reference
elif res == 'job_templates_inventory':
org_reference = JT_inventory_reference
elif res == 'users':
if res == 'users':
org_reference = 'id'
else:
org_reference = 'organization'
@@ -229,14 +212,6 @@ class OrganizationCountsMixin(object):
continue
count_context[org_id][res] = entry['%s__count' % org_reference]
# Combine the counts for job templates by project and inventory
for org in org_id_list:
org_id = org['id']
count_context[org_id]['job_templates'] = 0
for related_path in ['job_templates_project', 'job_templates_inventory']:
if related_path in count_context[org_id]:
count_context[org_id]['job_templates'] += count_context[org_id].pop(related_path)
full_context['related_field_counts'] = count_context
return full_context
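With JT.organization now a real foreign key, the per-organization count collapses from two reference paths (project, inventory) into a single annotate() call. A standalone sketch of roughly what `db_results['job_templates']` evaluates to, using the model import path seen elsewhere in this diff:

```python
# Roughly the queryset built above, in isolation.
from django.db.models import Count
from awx.main.models import JobTemplate

jt_counts = (
    JobTemplate.objects
    .values('organization')
    .annotate(Count('organization'))
    .order_by('organization')
)
# -> [{'organization': 1, 'organization__count': 42}, ...]
```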

View File

@@ -20,7 +20,7 @@ from awx.main.models import (
Role,
User,
Team,
InstanceGroup,
InstanceGroup
)
from awx.api.generics import (
ListCreateAPIView,
@@ -28,6 +28,7 @@ from awx.api.generics import (
SubListAPIView,
SubListCreateAttachDetachAPIView,
SubListAttachDetachAPIView,
SubListCreateAPIView,
ResourceAccessList,
BaseUsersList,
)
@@ -35,14 +36,13 @@ from awx.api.generics import (
from awx.api.serializers import (
OrganizationSerializer,
InventorySerializer,
ProjectSerializer,
UserSerializer,
TeamSerializer,
ActivityStreamSerializer,
RoleSerializer,
NotificationTemplateSerializer,
WorkflowJobTemplateSerializer,
InstanceGroupSerializer,
ProjectSerializer, JobTemplateSerializer, WorkflowJobTemplateSerializer
)
from awx.api.views.mixin import (
RelatedJobsPreventDeleteMixin,
@@ -94,7 +94,7 @@ class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPI
org_counts['projects'] = Project.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
project__organization__id=org_id).count()
organization__id=org_id).count()
full_context['related_field_counts'] = {}
full_context['related_field_counts'][org_id] = org_counts
@@ -128,21 +128,27 @@ class OrganizationAdminsList(BaseUsersList):
ordering = ('username',)
class OrganizationProjectsList(SubListCreateAttachDetachAPIView):
class OrganizationProjectsList(SubListCreateAPIView):
model = Project
serializer_class = ProjectSerializer
parent_model = Organization
relationship = 'projects'
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):
class OrganizationJobTemplatesList(SubListCreateAPIView):
model = JobTemplate
serializer_class = JobTemplateSerializer
parent_model = Organization
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
relationship = 'workflows'
parent_key = 'organization'

View File

@@ -2,14 +2,15 @@
# All Rights Reserved.
import os
import logging
import django
from awx import __version__ as tower_version
# Prepare the AWX environment.
from awx import prepare_env, MODE
prepare_env() # NOQA
from django.core.wsgi import get_wsgi_application # NOQA
from channels.asgi import get_channel_layer
from channels.routing import get_default_application
"""
ASGI config for AWX project.
@@ -32,6 +33,5 @@ if MODE == 'production':
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "awx.settings")
channel_layer = get_channel_layer()
django.setup()
channel_layer = get_default_application()

View File

@@ -1,11 +1,9 @@
# Python
import contextlib
import logging
import re
import sys
import threading
import time
import urllib.parse
# Django
from django.conf import LazySettings
@@ -57,15 +55,6 @@ SETTING_CACHE_DEFAULTS = True
__all__ = ['SettingsWrapper', 'get_settings_to_cache', 'SETTING_CACHE_NOTSET']
def normalize_broker_url(value):
parts = value.rsplit('@', 1)
match = re.search('(amqp://[^:]+:)(.*)', parts[0])
if match:
prefix, password = match.group(1), match.group(2)
parts[0] = prefix + urllib.parse.quote(password)
return '@'.join(parts)
@contextlib.contextmanager
def _ctit_db_wrapper(trans_safe=False):
'''
@@ -415,16 +404,7 @@ class SettingsWrapper(UserSettingsHolder):
value = self._get_local(name)
if value is not empty:
return value
value = self._get_default(name)
# sometimes users specify RabbitMQ passwords that contain
# unescaped : and @ characters that confused urlparse, e.g.,
# amqp://guest:a@ns:ibl3#@localhost:5672//
#
# detect these scenarios, and automatically escape the user's
# password so it just works
if name == 'BROKER_URL':
value = normalize_broker_url(value)
return value
return self._get_default(name)
def _set_local(self, name, value):
field = self.registry.get_setting_field(name)

View File

@@ -789,7 +789,6 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
return self.user in obj.admin_role
def can_delete(self, obj):
self.check_license(check_expiration=False)
is_change_possible = self.can_change(obj, None)
if not is_change_possible:
return False
@@ -1411,7 +1410,7 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
'''
model = JobTemplate
select_related = ('created_by', 'modified_by', 'inventory', 'project',
select_related = ('created_by', 'modified_by', 'inventory', 'project', 'organization',
'next_schedule',)
prefetch_related = (
'instance_groups',
@@ -1435,16 +1434,11 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
Users who are able to create deploy jobs can also run normal and check (dry run) jobs.
'''
if not data: # So the browseable API will work
return (
Project.accessible_objects(self.user, 'use_role').exists() or
Inventory.accessible_objects(self.user, 'use_role').exists())
return Organization.accessible_objects(self.user, 'job_template_admin_role').exists()
# if reference_obj is provided, determine if it can be copied
reference_obj = data.get('reference_obj', None)
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
if self.user.is_superuser:
return True
@@ -1504,22 +1498,28 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
return self.user in obj.execute_role
def can_change(self, obj, data):
data_for_change = data
if self.user not in obj.admin_role and not self.user.is_superuser:
return False
if data is not None:
data = dict(data)
if data is None:
return True
if self.changes_are_non_sensitive(obj, data):
if 'survey_enabled' in data and obj.survey_enabled != data['survey_enabled'] and data['survey_enabled']:
self.check_license(feature='surveys')
return True
# standard type of check for organization - cannot change the value
# unless possessing the respective job_template_admin_role, otherwise non-blocking
if not self.check_related('organization', Organization, data, obj=obj, role_field='job_template_admin_role'):
return False
for required_field in ('inventory', 'project'):
required_obj = getattr(obj, required_field, None)
if required_field not in data_for_change and required_obj is not None:
data_for_change[required_field] = required_obj.pk
return self.can_read(obj) and (self.can_add(data_for_change) if data is not None else True)
data = dict(data)
if self.changes_are_non_sensitive(obj, data):
return True
for required_field, cls in (('inventory', Inventory), ('project', Project)):
is_mandatory = True
if not getattr(obj, '{}_id'.format(required_field)):
is_mandatory = False
if not self.check_related(required_field, cls, data, obj=obj, role_field='use_role', mandatory=is_mandatory):
return False
return True
def changes_are_non_sensitive(self, obj, data):
'''
@@ -1554,9 +1554,9 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
@check_superuser
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == "instance_groups":
if not obj.project.organization:
if not obj.organization:
return False
return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.project.organization.admin_role
return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return self.user in obj.admin_role and self.user in sub_obj.use_role
return super(JobTemplateAccess, self).can_attach(
@@ -1587,6 +1587,7 @@ class JobAccess(BaseAccess):
select_related = ('created_by', 'modified_by', 'job_template', 'inventory',
'project', 'project_update',)
prefetch_related = (
'organization',
'unified_job_template',
'instance_group',
'credentials__credential_type',
@@ -1607,42 +1608,19 @@ class JobAccess(BaseAccess):
return qs.filter(
Q(job_template__in=JobTemplate.accessible_objects(self.user, 'read_role')) |
Q(inventory__organization__in=org_access_qs) |
Q(project__organization__in=org_access_qs)).distinct()
def related_orgs(self, obj):
orgs = []
if obj.inventory and obj.inventory.organization:
orgs.append(obj.inventory.organization)
if obj.project and obj.project.organization and obj.project.organization not in orgs:
orgs.append(obj.project.organization)
return orgs
def org_access(self, obj, role_types=['admin_role']):
orgs = self.related_orgs(obj)
for org in orgs:
for role_type in role_types:
role = getattr(org, role_type)
if self.user in role:
return True
return False
Q(organization__in=org_access_qs)).distinct()
def can_add(self, data, validate_license=True):
if validate_license:
self.check_license()
if not data: # So the browseable API will work
return True
return self.user.is_superuser
raise NotImplementedError('Direct job creation not possible in v2 API')
def can_change(self, obj, data):
return (obj.status == 'new' and
self.can_read(obj) and
self.can_add(data, validate_license=False))
raise NotImplementedError('Direct job editing not supported in v2 API')
@check_superuser
def can_delete(self, obj):
return self.org_access(obj)
if not obj.organization:
return False
return self.user in obj.organization.admin_role
def can_start(self, obj, validate_license=True):
if validate_license:
@@ -1662,6 +1640,7 @@ class JobAccess(BaseAccess):
except JobLaunchConfig.DoesNotExist:
config = None
# Standard permissions model
if obj.job_template and (self.user not in obj.job_template.execute_role):
return False
@@ -1676,24 +1655,17 @@ class JobAccess(BaseAccess):
if JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}):
return True
org_access = bool(obj.inventory) and self.user in obj.inventory.organization.inventory_admin_role
project_access = obj.project is None or self.user in obj.project.admin_role
credential_access = all([self.user in cred.use_role for cred in obj.credentials.all()])
# Standard permissions model without job template involved
if obj.organization and self.user in obj.organization.execute_role:
return True
elif not (obj.job_template or obj.organization):
raise PermissionDenied(_('Job has been orphaned from its job template and organization.'))
elif obj.job_template and config is not None:
raise PermissionDenied(_('Job was launched with prompted fields you do not have access to.'))
elif obj.job_template and config is None:
raise PermissionDenied(_('Job was launched with unknown prompted fields. Organization admin permissions required.'))
# job can be relaunched if user could make an equivalent JT
ret = org_access and credential_access and project_access
if not ret and self.save_messages and not self.messages:
if not obj.job_template:
pretext = _('Job has been orphaned from its job template.')
elif config is None:
pretext = _('Job was launched with unknown prompted fields.')
else:
pretext = _('Job was launched with prompted fields.')
if credential_access:
self.messages['detail'] = '{} {}'.format(pretext, _(' Organization level permissions required.'))
else:
self.messages['detail'] = '{} {}'.format(pretext, _(' You do not have permission to related resources.'))
return ret
return False
def get_method_capability(self, method, obj, parent_obj):
if method == 'start':
@@ -1706,10 +1678,16 @@ class JobAccess(BaseAccess):
def can_cancel(self, obj):
if not obj.can_cancel:
return False
# Delete access allows org admins to stop running jobs
if self.user == obj.created_by or self.can_delete(obj):
# Users may always cancel their own jobs
if self.user == obj.created_by:
return True
return obj.job_template is not None and self.user in obj.job_template.admin_role
# Users with direct admin to JT may cancel jobs started by anyone
if obj.job_template and self.user in obj.job_template.admin_role:
return True
# If orphaned, allow org JT admins to stop running jobs
if not obj.job_template and obj.organization and self.user in obj.organization.job_template_admin_role:
return True
return False
class SystemJobTemplateAccess(BaseAccess):
@@ -1944,11 +1922,11 @@ class WorkflowJobNodeAccess(BaseAccess):
# TODO: notification attachments?
class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
'''
I can only see/manage Workflow Job Templates if I'm a super user
I can see/manage Workflow Job Templates based on object roles
'''
model = WorkflowJobTemplate
select_related = ('created_by', 'modified_by', 'next_schedule',
select_related = ('created_by', 'modified_by', 'organization', 'next_schedule',
'admin_role', 'execute_role', 'read_role',)
def filtered_queryset(self):
@@ -1966,10 +1944,6 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
# will check this if surveys are added to WFJT
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and
self.check_related('inventory', Inventory, data, role_field='use_role')
@@ -2038,7 +2012,7 @@ class WorkflowJobAccess(BaseAccess):
I can also cancel it if I started it
'''
model = WorkflowJob
select_related = ('created_by', 'modified_by',)
select_related = ('created_by', 'modified_by', 'organization',)
def filtered_queryset(self):
return WorkflowJob.objects.filter(
@@ -2332,6 +2306,7 @@ class UnifiedJobTemplateAccess(BaseAccess):
prefetch_related = (
'last_job',
'current_job',
'organization',
'credentials__credential_type',
Prefetch('labels', queryset=Label.objects.all().order_by('name')),
)
@@ -2371,6 +2346,7 @@ class UnifiedJobAccess(BaseAccess):
prefetch_related = (
'created_by',
'modified_by',
'organization',
'unified_job_node__workflow_job',
'unified_job_template',
'instance_group',
@@ -2401,8 +2377,7 @@ class UnifiedJobAccess(BaseAccess):
Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role')) |
Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs) |
Q(adhoccommand__inventory__id__in=inv_pk_qs) |
Q(job__inventory__organization__in=org_auditor_qs) |
Q(job__project__organization__in=org_auditor_qs)
Q(organization__in=org_auditor_qs)
)
return qs

View File

@@ -0,0 +1,166 @@
import datetime
import asyncio
import logging
import aioredis
import redis
import re
from prometheus_client import (
generate_latest,
Gauge,
Counter,
Enum,
CollectorRegistry,
)
from django.conf import settings
BROADCAST_WEBSOCKET_REDIS_KEY_NAME = 'broadcast_websocket_stats'
logger = logging.getLogger('awx.main.analytics.broadcast_websocket')
def dt_to_seconds(dt):
return int((dt - datetime.datetime(1970,1,1)).total_seconds())
def now_seconds():
return dt_to_seconds(datetime.datetime.now())
# Second granularity; Per-minute
class FixedSlidingWindow():
def __init__(self, start_time=None):
self.buckets = dict()
self.start_time = start_time or now_seconds()
def cleanup(self, now_bucket=None):
now_bucket = now_bucket or now_seconds()
if self.start_time + 60 <= now_bucket:
self.start_time = now_bucket - 60 + 1
# Delete old entries
for k in list(self.buckets.keys()):
if k < self.start_time:
del self.buckets[k]
def record(self, ts=None):
ts = ts or datetime.datetime.now()
now_bucket = int((ts - datetime.datetime(1970,1,1)).total_seconds())
val = self.buckets.get(now_bucket, 0)
self.buckets[now_bucket] = val + 1
self.cleanup(now_bucket)
def render(self):
self.cleanup()
return sum(self.buckets.values()) or 0
class BroadcastWebsocketStatsManager():
def __init__(self, event_loop, local_hostname):
self._local_hostname = local_hostname
self._event_loop = event_loop
self._stats = dict()
self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME
def new_remote_host_stats(self, remote_hostname):
self._stats[remote_hostname] = BroadcastWebsocketStats(self._local_hostname,
remote_hostname)
return self._stats[remote_hostname]
def delete_remote_host_stats(self, remote_hostname):
del self._stats[remote_hostname]
async def run_loop(self):
try:
redis_conn = await aioredis.create_redis_pool(settings.BROKER_URL)
while True:
stats_data_str = ''.join(stat.serialize() for stat in self._stats.values())
await redis_conn.set(self._redis_key, stats_data_str)
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_STATS_POLL_RATE_SECONDS)
except Exception as e:
logger.warn(e)
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_STATS_POLL_RATE_SECONDS)
self.start()
def start(self):
self.async_task = self._event_loop.create_task(self.run_loop())
return self.async_task
@classmethod
def get_stats_sync(cls):
'''
Stringified version of all the stats
'''
redis_conn = redis.Redis.from_url(settings.BROKER_URL)
return redis_conn.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME)
class BroadcastWebsocketStats():
def __init__(self, local_hostname, remote_hostname):
self._local_hostname = local_hostname
self._remote_hostname = remote_hostname
self._registry = CollectorRegistry()
# TODO: More robust replacement
self.name = self.safe_name(self._local_hostname)
self.remote_name = self.safe_name(self._remote_hostname)
self._messages_received_total = Counter(f'awx_{self.remote_name}_messages_received_total',
'Number of messages received, to be forwarded, by the broadcast websocket system',
registry=self._registry)
self._messages_received = Gauge(f'awx_{self.remote_name}_messages_received',
'Number of forwarded messages received by the broadcast websocket system, for the duration of the current connection',
registry=self._registry)
self._connection = Enum(f'awx_{self.remote_name}_connection',
'Websocket broadcast connection',
states=['disconnected', 'connected'],
registry=self._registry)
self._connection_start = Gauge(f'awx_{self.remote_name}_connection_start',
'Time the connection was established',
registry=self._registry)
self._messages_received_per_minute = Gauge(f'awx_{self.remote_name}_messages_received_per_minute',
'Messages received per minute',
registry=self._registry)
self._internal_messages_received_per_minute = FixedSlidingWindow()
def safe_name(self, s):
# Replace all non alpha-numeric characters with _
return re.sub('[^0-9a-zA-Z]+', '_', s)
def unregister(self):
self._registry.unregister(f'awx_{self.remote_name}_messages_received')
self._registry.unregister(f'awx_{self.remote_name}_connection')
def record_message_received(self):
self._internal_messages_received_per_minute.record()
self._messages_received.inc()
self._messages_received_total.inc()
def record_connection_established(self):
self._connection.state('connected')
self._connection_start.set_to_current_time()
self._messages_received.set(0)
def record_connection_lost(self):
self._connection.state('disconnected')
def get_connection_duration(self):
return (datetime.datetime.now() - self._connection_established_ts).total_seconds()
def render(self):
msgs_per_min = self._internal_messages_received_per_minute.render()
self._messages_received_per_minute.set(msgs_per_min)
def serialize(self):
self.render()
registry_data = generate_latest(self._registry).decode('UTF-8')
return registry_data
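The FixedSlidingWindow above is pure Python with no external services, so it can be sanity-checked in isolation. A short sketch; the module path is inferred from the logger name in this file and may differ in the tree:

```python
# Count events over a rolling one-minute window.
from awx.main.analytics.broadcast_websocket import FixedSlidingWindow

window = FixedSlidingWindow()
for _ in range(5):
    window.record()        # buckets each event by its epoch second
print(window.render())     # -> 5, events observed within the last minute
```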

View File

@@ -257,7 +257,7 @@ def copy_tables(since, full_path):
unified_job_query = '''COPY (SELECT main_unifiedjob.id,
main_unifiedjob.polymorphic_ctype_id,
django_content_type.model,
main_project.organization_id,
main_unifiedjob.organization_id,
main_organization.name as organization_name,
main_unifiedjob.created,
main_unifiedjob.name,
@@ -275,10 +275,8 @@ def copy_tables(since, full_path):
main_unifiedjob.job_explanation,
main_unifiedjob.instance_group_id
FROM main_unifiedjob
JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id
JOIN django_content_type ON main_unifiedjob.polymorphic_ctype_id = django_content_type.id
JOIN main_project ON main_project.unifiedjobtemplate_ptr_id = main_job.project_id
JOIN main_organization ON main_organization.id = main_project.organization_id
JOIN main_organization ON main_organization.id = main_unifiedjob.organization_id
WHERE main_unifiedjob.created > {}
AND main_unifiedjob.launch_type != 'sync'
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))

View File

@@ -134,13 +134,17 @@ def gather(dest=None, module=None, collection_type='scheduled'):
settings.SYSTEM_UUID,
run_now.strftime('%Y-%m-%d-%H%M%S%z')
])
tgz = shutil.make_archive(
os.path.join(os.path.dirname(dest), tarname),
'gztar',
dest
)
shutil.rmtree(dest)
return tgz
try:
tgz = shutil.make_archive(
os.path.join(os.path.dirname(dest), tarname),
'gztar',
dest
)
return tgz
except Exception:
logger.exception("Failed to write analytics archive file")
finally:
shutil.rmtree(dest)
def ship(path):

View File

@@ -1,97 +1,264 @@
import json
import logging
import datetime
import hmac
import asyncio
from channels import Group
from channels.auth import channel_session_user_from_http, channel_session_user
from django.utils.encoding import smart_str
from django.http.cookie import parse_cookie
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
from django.utils.encoding import force_bytes
from django.contrib.auth.models import User
from channels.generic.websocket import AsyncJsonWebsocketConsumer
from channels.layers import get_channel_layer
from channels.db import database_sync_to_async
logger = logging.getLogger('awx.main.consumers')
XRF_KEY = '_auth_user_xrf'
def discard_groups(message):
if 'groups' in message.channel_session:
for group in message.channel_session['groups']:
Group(group).discard(message.reply_channel)
class WebsocketSecretAuthHelper:
"""
Middlewareish for websockets to verify node websocket broadcast interconnect.
Note: The "ish" is due to the channels routing interface. Routing occurs
_after_ authentication; making it hard to apply this auth to _only_ a subset of
websocket endpoints.
"""
@classmethod
def construct_secret(cls):
nonce_serialized = "{}".format(int((datetime.datetime.utcnow() - datetime.datetime.fromtimestamp(0)).total_seconds()))
payload_dict = {
'secret': settings.BROADCAST_WEBSOCKET_SECRET,
'nonce': nonce_serialized
}
payload_serialized = json.dumps(payload_dict)
secret_serialized = hmac.new(force_bytes(settings.BROADCAST_WEBSOCKET_SECRET),
msg=force_bytes(payload_serialized),
digestmod='sha256').hexdigest()
return 'HMAC-SHA256 {}:{}'.format(nonce_serialized, secret_serialized)
@channel_session_user_from_http
def ws_connect(message):
headers = dict(message.content.get('headers', ''))
message.reply_channel.send({"accept": True})
message.content['method'] = 'FAKE'
if message.user.is_authenticated:
message.reply_channel.send(
{"text": json.dumps({"accept": True, "user": message.user.id})}
)
# store the valid CSRF token from the cookie so we can compare it later
# on ws_receive
cookie_token = parse_cookie(
smart_str(headers.get(b'cookie'))
).get('csrftoken')
if cookie_token:
message.channel_session[XRF_KEY] = cookie_token
else:
logger.error("Request user is not authenticated to use websocket.")
message.reply_channel.send({"close": True})
return None
@classmethod
def verify_secret(cls, s, nonce_tolerance=300):
try:
(prefix, payload) = s.split(' ')
if prefix != 'HMAC-SHA256':
raise ValueError('Unsupported encryption algorithm')
(nonce_parsed, secret_parsed) = payload.split(':')
except Exception:
raise ValueError("Failed to parse secret")
try:
payload_expected = {
'secret': settings.BROADCAST_WEBSOCKET_SECRET,
'nonce': nonce_parsed,
}
payload_serialized = json.dumps(payload_expected)
except Exception:
raise ValueError("Failed to create hash to compare to secret.")
secret_serialized = hmac.new(force_bytes(settings.BROADCAST_WEBSOCKET_SECRET),
msg=force_bytes(payload_serialized),
digestmod='sha256').hexdigest()
if secret_serialized != secret_parsed:
raise ValueError("Invalid secret")
# Avoid timing attack and check the nonce after all the heavy lifting
now = datetime.datetime.utcnow()
nonce_parsed = datetime.datetime.fromtimestamp(int(nonce_parsed))
if (now - nonce_parsed).total_seconds() > nonce_tolerance:
raise ValueError("Potential replay attack or machine(s) time out of sync.")
return True
@classmethod
def is_authorized(cls, scope):
secret = ''
for k, v in scope['headers']:
if k.decode("utf-8") == 'secret':
secret = v.decode("utf-8")
break
WebsocketSecretAuthHelper.verify_secret(secret)
@channel_session_user
def ws_disconnect(message):
discard_groups(message)
class BroadcastConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
try:
WebsocketSecretAuthHelper.is_authorized(self.scope)
except Exception:
# TODO: log ip of connected client
logger.warn("Broadcast client failed to authorize.")
await self.close()
return
# TODO: log ip of connected client
logger.info(f"Broadcast client connected.")
await self.accept()
await self.channel_layer.group_add(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
async def disconnect(self, code):
# TODO: log ip of disconnected client
logger.info("Client disconnected")
async def internal_message(self, event):
await self.send(event['text'])
@channel_session_user
def ws_receive(message):
from awx.main.access import consumer_access
user = message.user
raw_data = message.content['text']
data = json.loads(raw_data)
class EventConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
user = self.scope['user']
if user and not user.is_anonymous:
await self.accept()
await self.send_json({"accept": True, "user": user.id})
# store the valid CSRF token from the cookie so we can compare it later
# on ws_receive
cookie_token = self.scope['cookies'].get('csrftoken')
if cookie_token:
self.scope['session'][XRF_KEY] = cookie_token
else:
logger.error("Request user is not authenticated to use websocket.")
# TODO: Carry over from channels 1 implementation
# We should never .accept() the client and close without sending a close message
await self.accept()
await self.send_json({"close": True})
await self.close()
xrftoken = data.get('xrftoken')
if (
not xrftoken or
XRF_KEY not in message.channel_session or
xrftoken != message.channel_session[XRF_KEY]
):
logger.error(
"access denied to channel, XRF mismatch for {}".format(user.username)
)
message.reply_channel.send({
"text": json.dumps({"error": "access denied to channel"})
})
@database_sync_to_async
def user_can_see_object_id(self, user_access, oid):
# At this point user is a channels.auth.UserLazyObject object
# This causes problems with our generic role permissions checking.
# Specifically, type(user) != User
# Therefore, get the "real" User objects from the database before
# calling the access permission methods
user_access.user = User.objects.get(id=user_access.user.id)
res = user_access.get_queryset().filter(pk=oid).exists()
return res
async def receive_json(self, data):
from awx.main.access import consumer_access
user = self.scope['user']
xrftoken = data.get('xrftoken')
if (
not xrftoken or
XRF_KEY not in self.scope["session"] or
xrftoken != self.scope["session"][XRF_KEY]
):
logger.error(f"access denied to channel, XRF mismatch for {user.username}")
await self.send_json({"error": "access denied to channel"})
return
if 'groups' in data:
groups = data['groups']
new_groups = set()
current_groups = set(self.scope['session'].pop('groups') if 'groups' in self.scope['session'] else [])
for group_name,v in groups.items():
if type(v) is list:
for oid in v:
name = '{}-{}'.format(group_name, oid)
access_cls = consumer_access(group_name)
if access_cls is not None:
user_access = access_cls(user)
if not await self.user_can_see_object_id(user_access, oid):
await self.send_json({"error": "access denied to channel {0} for resource id {1}".format(group_name, oid)})
continue
new_groups.add(name)
else:
await self.send_json({"error": "access denied to channel"})
logger.error(f"groups must be a list, not {groups}")
return
old_groups = current_groups - new_groups
for group_name in old_groups:
await self.channel_layer.group_discard(
group_name,
self.channel_name,
)
new_groups_exclusive = new_groups - current_groups
for group_name in new_groups_exclusive:
await self.channel_layer.group_add(
group_name,
self.channel_name
)
logger.debug(f"Channel {self.channel_name} left groups {old_groups} and joined {new_groups_exclusive}")
self.scope['session']['groups'] = new_groups
await self.send_json({
"groups_current": list(new_groups),
"groups_left": list(old_groups),
"groups_joined": list(new_groups_exclusive)
})
async def internal_message(self, event):
await self.send(event['text'])
def run_sync(func):
event_loop = asyncio.new_event_loop()
event_loop.run_until_complete(func)
event_loop.close()
def _dump_payload(payload):
try:
return json.dumps(payload, cls=DjangoJSONEncoder)
except ValueError:
logger.error("Invalid payload to emit")
return None
async def emit_channel_notification_async(group, payload):
from awx.main.wsbroadcast import wrap_broadcast_msg # noqa
payload_dumped = _dump_payload(payload)
if payload_dumped is None:
return
if 'groups' in data:
discard_groups(message)
groups = data['groups']
current_groups = set(message.channel_session.pop('groups') if 'groups' in message.channel_session else [])
for group_name,v in groups.items():
if type(v) is list:
for oid in v:
name = '{}-{}'.format(group_name, oid)
access_cls = consumer_access(group_name)
if access_cls is not None:
user_access = access_cls(user)
if not user_access.get_queryset().filter(pk=oid).exists():
message.reply_channel.send({"text": json.dumps(
{"error": "access denied to channel {0} for resource id {1}".format(group_name, oid)})})
continue
current_groups.add(name)
Group(name).add(message.reply_channel)
else:
current_groups.add(group_name)
Group(group_name).add(message.reply_channel)
message.channel_session['groups'] = list(current_groups)
channel_layer = get_channel_layer()
await channel_layer.group_send(
group,
{
"type": "internal.message",
"text": payload_dumped
},
)
await channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{
"type": "internal.message",
"text": wrap_broadcast_msg(group, payload_dumped),
},
)
def emit_channel_notification(group, payload):
try:
Group(group).send({"text": json.dumps(payload, cls=DjangoJSONEncoder)})
except ValueError:
logger.error("Invalid payload emitting channel {} on topic: {}".format(group, payload))
from awx.main.wsbroadcast import wrap_broadcast_msg # noqa
payload_dumped = _dump_payload(payload)
if payload_dumped is None:
return
channel_layer = get_channel_layer()
run_sync(channel_layer.group_send(
group,
{
"type": "internal.message",
"text": payload_dumped
},
))
run_sync(channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{
"type": "internal.message",
"text": wrap_broadcast_msg(group, payload_dumped),
},
))
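The broadcast interconnect handshake can be checked end to end with the helper's own methods. A sketch, assuming `BROADCAST_WEBSOCKET_SECRET` is set in Django settings:

```python
# Round-trip the HMAC-SHA256 interconnect secret defined above.
from awx.main.consumers import WebsocketSecretAuthHelper

token = WebsocketSecretAuthHelper.construct_secret()
# token format: 'HMAC-SHA256 <epoch-seconds-nonce>:<hexdigest>'
WebsocketSecretAuthHelper.verify_secret(token)   # raises ValueError if stale or tampered
```

Note the nonce check runs after the HMAC comparison, which the code calls out as a deliberate ordering to avoid leaking timing information.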

View File

@@ -64,7 +64,7 @@ class RecordedQueryLog(object):
if not os.path.isdir(self.dest):
os.makedirs(self.dest)
progname = ' '.join(sys.argv)
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'runworker'):
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsbroadcast'):
if match in progname:
progname = match
break

View File

@@ -1,5 +1,62 @@
import psycopg2
import select
from contextlib import contextmanager
from django.conf import settings
NOT_READY = ([], [], [])
def get_local_queuename():
return settings.CLUSTER_HOST_ID
class PubSub(object):
def __init__(self, conn):
assert conn.autocommit, "Connection must be in autocommit mode."
self.conn = conn
def listen(self, channel):
with self.conn.cursor() as cur:
cur.execute('LISTEN "%s";' % channel)
def unlisten(self, channel):
with self.conn.cursor() as cur:
cur.execute('UNLISTEN "%s";' % channel)
def notify(self, channel, payload):
with self.conn.cursor() as cur:
cur.execute('SELECT pg_notify(%s, %s);', (channel, payload))
def events(self, select_timeout=5, yield_timeouts=False):
while True:
if select.select([self.conn], [], [], select_timeout) == NOT_READY:
if yield_timeouts:
yield None
else:
self.conn.poll()
while self.conn.notifies:
yield self.conn.notifies.pop(0)
def close(self):
self.conn.close()
@contextmanager
def pg_bus_conn():
conf = settings.DATABASES['default']
conn = psycopg2.connect(dbname=conf['NAME'],
host=conf['HOST'],
user=conf['USER'],
password=conf['PASSWORD'],
port=conf['PORT'],
**conf.get("OPTIONS", {}))
# Django's connection.cursor().connection doesn't have autocommit=True set
conn.set_session(autocommit=True)
pubsub = PubSub(conn)
yield pubsub
conn.close()
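A minimal usage sketch of the helpers above; the channel name and payload are hypothetical.

# Minimal sketch of the LISTEN/NOTIFY round trip (channel name hypothetical):
from awx.main.dispatch import pg_bus_conn

with pg_bus_conn() as pubsub:
    pubsub.listen('demo_channel')
    pubsub.notify('demo_channel', '{"msg": "hello"}')
    for event in pubsub.events(select_timeout=5, yield_timeouts=True):
        if event is None:
            break  # select() timed out with nothing to read
        print(event.channel, event.payload)
        break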

View File

@@ -1,11 +1,10 @@
import logging
import socket
from django.conf import settings
import uuid
import json
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch.kombu import Connection
from kombu import Queue, Exchange, Producer, Consumer
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
@@ -20,15 +19,6 @@ class Control(object):
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_local_queuename()
self.queue = Queue(self.queuename, Exchange(self.queuename), routing_key=self.queuename)
def publish(self, msg, conn, **kwargs):
producer = Producer(
exchange=self.queue.exchange,
channel=conn,
routing_key=self.queuename
)
producer.publish(msg, expiration=5, **kwargs)
def status(self, *args, **kwargs):
return self.control_with_reply('status', *args, **kwargs)
@@ -36,24 +26,28 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
def control_with_reply(self, command, timeout=5):
logger.warn('checking {} {} for {}'.format(self.service, command, self.queuename))
reply_queue = Queue(name="amq.rabbitmq.reply-to")
reply_queue = Control.generate_reply_queue_name()
self.result = None
with Connection(settings.BROKER_URL) as conn:
with Consumer(conn, reply_queue, callbacks=[self.process_message], no_ack=True):
self.publish({'control': command}, conn, reply_to='amq.rabbitmq.reply-to')
try:
conn.drain_events(timeout=timeout)
except socket.timeout:
logger.error('{} did not reply within {}s'.format(self.service, timeout))
raise
return self.result
with pg_bus_conn() as conn:
conn.listen(reply_queue)
conn.notify(self.queuename,
json.dumps({'control': command, 'reply_to': reply_queue}))
for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError("{self.service} did not reply within {timeout}s")
break
return json.loads(reply.payload)
def control(self, msg, **kwargs):
with Connection(settings.BROKER_URL) as conn:
self.publish(msg, conn)
def process_message(self, body, message):
self.result = body
message.ack()
with pg_bus_conn() as conn:
conn.notify(self.queuename, json.dumps(msg))
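As a usage sketch, the reworked control channel can be exercised like this (the service name must be one of self.services):

# Hypothetical use of the pg_notify-backed Control class above:
from awx.main.dispatch.control import Control

control = Control('dispatcher')            # queuename defaults to the local queue
print(control.status())                    # LISTENs on a reply_to_* channel for the answer
control.control({'control': 'reload'})     # fire-and-forget notify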

View File

@@ -1,42 +0,0 @@
from amqp.exceptions import PreconditionFailed
from django.conf import settings
from kombu.connection import Connection as KombuConnection
from kombu.transport import pyamqp
import logging
logger = logging.getLogger('awx.main.dispatch')
__all__ = ['Connection']
class Connection(KombuConnection):
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
class _Channel(pyamqp.Channel):
def queue_declare(self, queue, *args, **kwargs):
kwargs['durable'] = settings.BROKER_DURABILITY
try:
return super(_Channel, self).queue_declare(queue, *args, **kwargs)
except PreconditionFailed as e:
if "inequivalent arg 'durable'" in getattr(e, 'reply_text', None):
logger.error(
'queue {} durability is not {}, deleting and recreating'.format(
queue,
kwargs['durable']
)
)
self.queue_delete(queue)
return super(_Channel, self).queue_declare(queue, *args, **kwargs)
class _Connection(pyamqp.Connection):
Channel = _Channel
class _Transport(pyamqp.Transport):
Connection = _Connection
self.transport_cls = _Transport

View File

@@ -1,12 +1,12 @@
import inspect
import logging
import sys
import json
from uuid import uuid4
from django.conf import settings
from kombu import Exchange, Producer
from awx.main.dispatch.kombu import Connection
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
@@ -39,24 +39,22 @@ class task:
add.apply_async([1, 1])
Adder.apply_async([1, 1])
# Tasks can also define a specific target queue or exchange type:
# Tasks can also define a specific target queue or use the special fan-out queue tower_broadcast:
@task(queue='slow-tasks')
def snooze():
time.sleep(10)
@task(queue='tower_broadcast', exchange_type='fanout')
@task(queue='tower_broadcast')
def announce():
print("Run this everywhere!")
"""
def __init__(self, queue=None, exchange_type=None):
def __init__(self, queue=None):
self.queue = queue
self.exchange_type = exchange_type
def __call__(self, fn=None):
queue = self.queue
exchange_type = self.exchange_type
class PublisherMixin(object):
@@ -73,9 +71,12 @@ class task:
kwargs = kwargs or {}
queue = (
queue or
getattr(cls.queue, 'im_func', cls.queue) or
settings.CELERY_DEFAULT_QUEUE
getattr(cls.queue, 'im_func', cls.queue)
)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = {
'uuid': task_id,
'args': args,
@@ -86,21 +87,8 @@ class task:
if callable(queue):
queue = queue()
if not settings.IS_TESTING(sys.argv):
with Connection(settings.BROKER_URL) as conn:
exchange = Exchange(queue, type=exchange_type or 'direct')
producer = Producer(conn)
logger.debug('publish {}({}, queue={})'.format(
cls.name,
task_id,
queue
))
producer.publish(obj,
serializer='json',
compression='bzip2',
exchange=exchange,
declare=[exchange],
delivery_mode="persistent",
routing_key=queue)
with pg_bus_conn() as conn:
conn.notify(queue, json.dumps(obj))
return (obj, queue)
# If the object we're wrapping *is* a class (e.g., RunJob), return

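Put together, apply_async now simply pg_notify()s the JSON envelope to the resolved queue and refuses to guess a default; a sketch under those rules (task bodies illustrative):

# Sketch of the queue-resolution rules above (task bodies illustrative):
import time

@task(queue='slow-tasks')
def snooze():
    time.sleep(10)

snooze.apply_async()      # notifies the 'slow-tasks' channel with the JSON envelope

@task()
def orphan():
    pass

orphan.apply_async()      # raises ValueError: queue value required and may not be None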
View File

@@ -1,3 +1,3 @@
from .base import AWXConsumer, BaseWorker # noqa
from .base import AWXConsumerRedis, AWXConsumerPG, BaseWorker # noqa
from .callback import CallbackBrokerWorker # noqa
from .task import TaskWorker # noqa

View File

@@ -5,14 +5,17 @@ import os
import logging
import signal
import sys
import redis
import json
import psycopg2
from uuid import UUID
from queue import Empty as QueueEmpty
from django import db
from kombu import Producer
from kombu.mixins import ConsumerMixin
from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch import pg_bus_conn
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -37,10 +40,11 @@ class WorkerSignalHandler:
self.kill_now = True
class AWXConsumer(ConsumerMixin):
class AWXConsumerBase(object):
def __init__(self, name, worker, queues=[], pool=None):
self.should_stop = False
def __init__(self, name, connection, worker, queues=[], pool=None):
self.connection = connection
self.name = name
self.total_messages = 0
self.queues = queues
self.worker = worker
@@ -49,25 +53,15 @@ class AWXConsumer(ConsumerMixin):
self.pool = WorkerPool()
self.pool.init_workers(self.worker.work_loop)
def get_consumers(self, Consumer, channel):
logger.debug(self.listening_on)
return [Consumer(queues=self.queues, accept=['json'],
callbacks=[self.process_task])]
@property
def listening_on(self):
return 'listening on {}'.format([
'{} [{}]'.format(q.name, q.exchange.type) for q in self.queues
])
return f'listening on {self.queues}'
def control(self, body, message):
logger.warn('Consumer received control message {}'.format(body))
def control(self, body):
logger.warn(body)
control = body.get('control')
if control in ('status', 'running'):
producer = Producer(
channel=self.connection,
routing_key=message.properties['reply_to']
)
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
elif control == 'running':
@@ -75,20 +69,21 @@ class AWXConsumer(ConsumerMixin):
for worker in self.pool.workers:
worker.calculate_managed_tasks()
msg.extend(worker.managed_tasks.keys())
producer.publish(msg)
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()
else:
logger.error('unrecognized control message: {}'.format(control))
message.ack()
def process_task(self, body, message):
def process_task(self, body):
if 'control' in body:
try:
return self.control(body, message)
return self.control(body)
except Exception:
logger.exception("Exception handling control message:")
logger.exception(f"Exception handling control message: {body}")
return
if len(self.pool):
if "uuid" in body and body['uuid']:
@@ -102,21 +97,58 @@ class AWXConsumer(ConsumerMixin):
queue = 0
self.pool.write(queue, body)
self.total_messages += 1
message.ack()
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
signal.signal(signal.SIGTERM, self.stop)
self.worker.on_start()
super(AWXConsumer, self).run(*args, **kwargs)
# Child should implement other things here
def stop(self, signum, frame):
self.should_stop = True # this makes the kombu mixin stop consuming
self.should_stop = True
logger.warn('received {}, stopping'.format(signame(signum)))
self.worker.on_stop()
raise SystemExit()
class AWXConsumerRedis(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start()
queue = redis.Redis.from_url(settings.BROKER_URL)
while True:
res = queue.blpop(self.queues)
res = json.loads(res[1])
self.process_task(res)
if self.should_stop:
return
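For context, the producer side this consumer pairs with; a rough sketch, assuming CALLBACK_QUEUE names a Redis list and with an illustrative event body.

# Hypothetical producer for AWXConsumerRedis above: events are RPUSHed onto
# the Redis list named by settings.CALLBACK_QUEUE and BLPOPped here.
import json
import redis
from django.conf import settings

conn = redis.Redis.from_url(settings.BROKER_URL)
conn.rpush(settings.CALLBACK_QUEUE, json.dumps({'event': 'EOF', 'final_counter': 0}))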
class AWXConsumerPG(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
logger.warn(f"Running worker {self.name} listening to queues {self.queues}")
init = False
while True:
try:
with pg_bus_conn() as conn:
for queue in self.queues:
conn.listen(queue)
if init is False:
self.worker.on_start()
init = True
for e in conn.events():
self.process_task(json.loads(e.payload))
if self.should_stop:
return
except psycopg2.InterfaceError:
logger.warn("Stale Postgres message bus connection, reconnecting")
continue
class BaseWorker(object):
def read(self, queue):

View File

@@ -15,7 +15,9 @@ from django.db.utils import InterfaceError, InternalError, IntegrityError
from awx.main.consumers import emit_channel_notification
from awx.main.models import (JobEvent, AdHocCommandEvent, ProjectUpdateEvent,
InventoryUpdateEvent, SystemJobEvent, UnifiedJob)
InventoryUpdateEvent, SystemJobEvent, UnifiedJob,
Job)
from awx.main.tasks import handle_success_and_failure_notifications
from awx.main.models.events import emit_event_detail
from .base import BaseWorker
@@ -137,19 +139,14 @@ class CallbackBrokerWorker(BaseWorker):
# have all the data we need to send out success/failure
# notification templates
uj = UnifiedJob.objects.get(pk=job_identifier)
if hasattr(uj, 'send_notification_templates'):
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
break
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_identifier)
if isinstance(uj, Job):
# *actual playbooks* send their success/failure
# notifications in response to the playbook_on_stats
# event handling code in main.models.events
pass
elif hasattr(uj, 'send_notification_templates'):
handle_success_and_failure_notifications.apply_async([uj.id])
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
return

View File

@@ -56,7 +56,8 @@ from awx.main import utils
__all__ = ['AutoOneToOneField', 'ImplicitRoleField', 'JSONField',
'SmartFilterField', 'OrderedManyToManyField',
'update_role_parentage_for_instance', 'is_implicit_parent']
'update_role_parentage_for_instance',
'is_implicit_parent']
# Provide a (better) custom error message for enum jsonschema validation
@@ -140,8 +141,9 @@ def resolve_role_field(obj, field):
return []
if len(field_components) == 1:
role_cls = str(utils.get_current_apps().get_model('main', 'Role'))
if not str(type(obj)) == role_cls:
# use extremely generous duck typing to accommodate all possible forms
# of the model that may be used during various migrations
if obj._meta.model_name != 'role' or obj._meta.app_label != 'main':
raise Exception(smart_text('{} refers to a {}, not a Role'.format(field, type(obj))))
ret.append(obj.id)
else:
@@ -197,18 +199,27 @@ def update_role_parentage_for_instance(instance):
updates the parents listing for all the roles
of a given instance if they have changed
'''
parents_removed = set()
parents_added = set()
for implicit_role_field in getattr(instance.__class__, '__implicit_role_fields'):
cur_role = getattr(instance, implicit_role_field.name)
original_parents = set(json.loads(cur_role.implicit_parents))
new_parents = implicit_role_field._resolve_parent_roles(instance)
cur_role.parents.remove(*list(original_parents - new_parents))
cur_role.parents.add(*list(new_parents - original_parents))
removals = original_parents - new_parents
if removals:
cur_role.parents.remove(*list(removals))
parents_removed.add(cur_role.pk)
additions = new_parents - original_parents
if additions:
cur_role.parents.add(*list(additions))
parents_added.add(cur_role.pk)
new_parents_list = list(new_parents)
new_parents_list.sort()
new_parents_json = json.dumps(new_parents_list)
if cur_role.implicit_parents != new_parents_json:
cur_role.implicit_parents = new_parents_json
cur_role.save()
cur_role.save(update_fields=['implicit_parents'])
return (parents_added, parents_removed)
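A sketch of the intended call pattern for the reworked helper, echoing its use in the RBAC migrations; the instance name is illustrative.

# Hypothetical call site for update_role_parentage_for_instance above:
added, removed = update_role_parentage_for_instance(job_template)
if added or removed:
    # re-derive the cached ancestor list for any roles whose parents moved
    Role.rebuild_role_ancestor_list(list(added), list(removed))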
class ImplicitRoleDescriptor(ForwardManyToOneDescriptor):
@@ -256,20 +267,18 @@ class ImplicitRoleField(models.ForeignKey):
field_names = [field_names]
for field_name in field_names:
# Handle the OR syntax for role parents
if type(field_name) == tuple:
continue
if type(field_name) == bytes:
field_name = field_name.decode('utf-8')
if field_name.startswith('singleton:'):
continue
field_name, sep, field_attr = field_name.partition('.')
field = getattr(cls, field_name)
# Non-existent fields can occur if a parent model is ever moved
# inside a migration; this is needed for the job_template_organization_field
# migration in particular.
# Consistency is assured by unit tests in awx.main.tests.functional.
field = getattr(cls, field_name, None)
if type(field) is ReverseManyToOneDescriptor or \
if field and type(field) is ReverseManyToOneDescriptor or \
type(field) is ManyToManyDescriptor:
if '.' in field_attr:

View File

@@ -15,7 +15,6 @@ import awx
from awx.main.utils import (
get_system_task_capacity
)
from awx.main.queue import CallbackQueueDispatcher
logger = logging.getLogger('awx.isolated.manager')
playbook_logger = logging.getLogger('awx.isolated.manager.playbooks')
@@ -32,12 +31,14 @@ def set_pythonpath(venv_libdir, env):
class IsolatedManager(object):
def __init__(self, canceled_callback=None, check_callback=None, pod_manager=None):
def __init__(self, event_handler, canceled_callback=None, check_callback=None, pod_manager=None):
"""
:param event_handler: a callable used to persist event data from isolated nodes
:param canceled_callback: a callable - which returns `True` or `False`
- signifying if the job has been prematurely
canceled
"""
self.event_handler = event_handler
self.canceled_callback = canceled_callback
self.check_callback = check_callback
self.started_at = None
@@ -208,7 +209,6 @@ class IsolatedManager(object):
status = 'failed'
rc = None
last_check = time.time()
dispatcher = CallbackQueueDispatcher()
while status == 'failed':
canceled = self.canceled_callback() if self.canceled_callback else False
@@ -238,7 +238,7 @@ class IsolatedManager(object):
except json.decoder.JSONDecodeError: # Just in case it's not fully here yet.
pass
self.consume_events(dispatcher)
self.consume_events()
last_check = time.time()
@@ -266,19 +266,18 @@ class IsolatedManager(object):
# consume events one last time just to be sure we didn't miss anything
# in the final sync
self.consume_events(dispatcher)
self.consume_events()
# emit an EOF event
event_data = {
'event': 'EOF',
'final_counter': len(self.handled_events)
}
event_data.setdefault(self.event_data_key, self.instance.id)
dispatcher.dispatch(event_data)
self.event_handler(event_data)
return status, rc
def consume_events(self, dispatcher):
def consume_events(self):
# discover new events and ingest them
events_path = self.path_to('artifacts', self.ident, 'job_events')
@@ -302,16 +301,10 @@ class IsolatedManager(object):
# practice
# in this scenario, just ignore this event and try it
# again on the next sync
pass
event_data.setdefault(self.event_data_key, self.instance.id)
dispatcher.dispatch(event_data)
continue
self.event_handler(event_data)
self.handled_events.add(event)
# handle artifacts
if event_data.get('event_data', {}).get('artifact_data', {}):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
def cleanup(self):
extravars = {
@@ -400,8 +393,7 @@ class IsolatedManager(object):
if os.path.exists(private_data_dir):
shutil.rmtree(private_data_dir)
def run(self, instance, private_data_dir, playbook, module, module_args,
event_data_key, ident=None):
def run(self, instance, private_data_dir, playbook, module, module_args, ident=None):
"""
Run a job on an isolated host.
@@ -412,14 +404,12 @@ class IsolatedManager(object):
:param playbook: the playbook to run
:param module: the module to run
:param module_args: the module args to use
:param event_data_key: e.g., job_id, inventory_id, ...
For a completed job run, this function returns (status, rc),
representing the status and return code of the isolated
`ansible-playbook` run.
"""
self.ident = ident
self.event_data_key = event_data_key
self.instance = instance
self.private_data_dir = private_data_dir
self.runner_params = self.build_runner_params(
@@ -433,6 +423,5 @@ class IsolatedManager(object):
else:
# emit an EOF event
event_data = {'event': 'EOF', 'final_counter': 0}
event_data.setdefault(self.event_data_key, self.instance.id)
CallbackQueueDispatcher().dispatch(event_data)
self.event_handler(event_data)
return status, rc
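With CallbackQueueDispatcher removed, callers now inject persistence themselves; a minimal sketch of the new wiring (the handler body is illustrative):

# Hypothetical wiring for the new event_handler parameter:
def persist_event(event_data):
    # any callable that stores or forwards isolated-node event data
    print(event_data.get('event'))

mgr = IsolatedManager(persist_event, canceled_callback=lambda: False)
# status, rc = mgr.run(instance, private_data_dir, playbook, module, module_args)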

View File

@@ -21,6 +21,8 @@ from awx.main.signals import (
disable_computed_fields
)
from awx.main.management.commands.deletion import AWXCollector, pre_delete
class Command(BaseCommand):
'''
@@ -57,27 +59,37 @@ class Command(BaseCommand):
action='store_true', dest='only_workflow_jobs',
help='Remove workflow jobs')
def cleanup_jobs(self):
#jobs_qs = Job.objects.exclude(status__in=('pending', 'running'))
#jobs_qs = jobs_qs.filter(created__lte=self.cutoff)
skipped, deleted = 0, 0
jobs = Job.objects.filter(created__lt=self.cutoff)
for job in jobs.iterator():
job_display = '"%s" (%d host summaries, %d events)' % \
(str(job),
job.job_host_summaries.count(), job.job_events.count())
if job.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s job %s', action_text, job.status, job_display)
skipped += 1
else:
action_text = 'would delete' if self.dry_run else 'deleting'
self.logger.info('%s %s', action_text, job_display)
if not self.dry_run:
job.delete()
deleted += 1
skipped += Job.objects.filter(created__gte=self.cutoff).count()
def cleanup_jobs(self):
skipped, deleted = 0, 0
batch_size = 1000000
while True:
# get queryset for available jobs to remove
qs = Job.objects.filter(created__lt=self.cutoff).exclude(status__in=['pending', 'waiting', 'running'])
# get pk list for the first N (batch_size) objects
pk_list = qs[0:batch_size].values_list('pk')
# You cannot delete queries with sql LIMIT set, so we must
# create a new query from this pk_list
qs_batch = Job.objects.filter(pk__in=pk_list)
just_deleted = 0
if not self.dry_run:
del_query = pre_delete(qs_batch)
collector = AWXCollector(del_query.db)
collector.collect(del_query)
_, models_deleted = collector.delete()
if models_deleted:
just_deleted = models_deleted['main.Job']
deleted += just_deleted
else:
just_deleted = 0 # break from loop, this is dry run
deleted = qs.count()
if just_deleted == 0:
break
skipped += (Job.objects.filter(created__gte=self.cutoff) | Job.objects.filter(status__in=['pending', 'waiting', 'running'])).count()
return skipped, deleted
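The batching above works around the fact that Django cannot .delete() a sliced queryset; stripped to its core, the pattern is (model, cutoff, and batch size illustrative, with the collector standing in for plain .delete() in the real code):

# Core of the batched-deletion pattern used above (sizes illustrative):
while True:
    pk_list = Job.objects.filter(created__lt=cutoff)[:1000].values_list('pk')
    deleted, _ = Job.objects.filter(pk__in=pk_list).delete()
    if not deleted:
        break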
def cleanup_ad_hoc_commands(self):

View File

@@ -0,0 +1,177 @@
from django.contrib.contenttypes.models import ContentType
from django.db.models.deletion import (
DO_NOTHING, Collector, get_candidate_relations_to_delete,
)
from collections import Counter, OrderedDict
from django.db import transaction
from django.db.models import sql
def bulk_related_objects(field, objs, using):
# This overrides the method in django.contrib.contenttypes.fields.py
"""
Return all objects related to ``objs`` via this ``GenericRelation``.
"""
return field.remote_field.model._base_manager.db_manager(using).filter(**{
"%s__pk" % field.content_type_field_name: ContentType.objects.db_manager(using).get_for_model(
field.model, for_concrete_model=field.for_concrete_model).pk,
"%s__in" % field.object_id_field_name: list(objs.values_list('pk', flat=True))
})
def pre_delete(qs):
# taken from .delete method in django.db.models.query.py
assert qs.query.can_filter(), \
"Cannot use 'limit' or 'offset' with delete."
if qs._fields is not None:
raise TypeError("Cannot call delete() after .values() or .values_list()")
del_query = qs._chain()
# The delete is actually 2 queries - one to find related objects,
# and one to delete. Make sure that the discovery of related
# objects is performed on the same database as the deletion.
del_query._for_write = True
# Disable non-supported fields.
del_query.query.select_for_update = False
del_query.query.select_related = False
del_query.query.clear_ordering(force_empty=True)
return del_query
class AWXCollector(Collector):
def add(self, objs, source=None, nullable=False, reverse_dependency=False):
"""
Add 'objs' to the collection of objects to be deleted. If the call is
the result of a cascade, 'source' should be the model that caused it,
and 'nullable' should be set to True if the relation can be null.
Return a list of all objects that were not already collected.
"""
if not objs.exists():
return objs
model = objs.model
self.data.setdefault(model, [])
self.data[model].append(objs)
# Nullable relationships can be ignored -- they are nulled out before
# deleting, and therefore do not affect the order in which objects have
# to be deleted.
if source is not None and not nullable:
if reverse_dependency:
source, model = model, source
self.dependencies.setdefault(
source._meta.concrete_model, set()).add(model._meta.concrete_model)
return objs
def add_field_update(self, field, value, objs):
"""
Schedule a field update. 'objs' must be a homogeneous iterable
collection of model instances (e.g. a QuerySet).
"""
if not objs.exists():
return
model = objs.model
self.field_updates.setdefault(model, {})
self.field_updates[model].setdefault((field, value), [])
self.field_updates[model][(field, value)].append(objs)
def collect(self, objs, source=None, nullable=False, collect_related=True,
source_attr=None, reverse_dependency=False, keep_parents=False):
"""
Add 'objs' to the collection of objects to be deleted as well as all
parent instances. 'objs' must be a homogeneous iterable collection of
model instances (e.g. a QuerySet). If 'collect_related' is True,
related objects will be handled by their respective on_delete handler.
If the call is the result of a cascade, 'source' should be the model
that caused it and 'nullable' should be set to True, if the relation
can be null.
If 'reverse_dependency' is True, 'source' will be deleted before the
current model, rather than after. (Needed for cascading to parent
models, the one case in which the cascade follows the forwards
direction of an FK rather than the reverse direction.)
If 'keep_parents' is True, data of parent model's will be not deleted.
"""
if hasattr(objs, 'polymorphic_disabled'):
objs.polymorphic_disabled = True
if self.can_fast_delete(objs):
self.fast_deletes.append(objs)
return
new_objs = self.add(objs, source, nullable,
reverse_dependency=reverse_dependency)
if not new_objs.exists():
return
model = new_objs.model
if not keep_parents:
# Recursively collect concrete model's parent models, but not their
# related objects. These will be found by meta.get_fields()
concrete_model = model._meta.concrete_model
for ptr in concrete_model._meta.parents.keys():
if ptr:
parent_objs = ptr.objects.filter(pk__in=new_objs.values_list('pk', flat=True))
self.collect(parent_objs, source=model,
collect_related=False,
reverse_dependency=True)
if collect_related:
parents = model._meta.parents
for related in get_candidate_relations_to_delete(model._meta):
# Preserve parent reverse relationships if keep_parents=True.
if keep_parents and related.model in parents:
continue
field = related.field
if field.remote_field.on_delete == DO_NOTHING:
continue
related_qs = self.related_objects(related, new_objs)
if self.can_fast_delete(related_qs, from_field=field):
self.fast_deletes.append(related_qs)
elif related_qs:
field.remote_field.on_delete(self, field, related_qs, self.using)
for field in model._meta.private_fields:
if hasattr(field, 'bulk_related_objects'):
# It's something like generic foreign key.
sub_objs = bulk_related_objects(field, new_objs, self.using)
self.collect(sub_objs, source=model, nullable=True)
def delete(self):
self.sort()
# collect pk_list before deletion (once things start to delete
# queries might not be able to retrieve the pk list)
del_dict = OrderedDict()
for model, instances in self.data.items():
del_dict.setdefault(model, [])
for inst in instances:
del_dict[model] += list(inst.values_list('pk', flat=True))
deleted_counter = Counter()
with transaction.atomic(using=self.using, savepoint=False):
# update fields
for model, instances_for_fieldvalues in self.field_updates.items():
for (field, value), instances in instances_for_fieldvalues.items():
for inst in instances:
query = sql.UpdateQuery(model)
query.update_batch(inst.values_list('pk', flat=True),
{field.name: value}, self.using)
# fast deletes
for qs in self.fast_deletes:
count = qs._raw_delete(using=self.using)
deleted_counter[qs.model._meta.label] += count
# delete instances
for model, pk_list in del_dict.items():
query = sql.DeleteQuery(model)
count = query.delete_batch(pk_list, self.using)
deleted_counter[model._meta.label] += count
return sum(deleted_counter.values()), dict(deleted_counter)
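A sketch of how the collector above is meant to be driven, mirroring its use in cleanup_jobs (the queryset is illustrative):

# Hypothetical driver for AWXCollector, as used by cleanup_jobs above:
qs = Job.objects.filter(created__lt=cutoff).exclude(status__in=['pending', 'waiting', 'running'])
del_query = pre_delete(qs)              # strip ordering/select_related, mark for write
collector = AWXCollector(del_query.db)
collector.collect(del_query)            # cascades are collected as querysets
total, per_model = collector.delete()   # e.g. per_model.get('main.Job')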

View File

@@ -1,8 +1,6 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved
import subprocess
from django.db import transaction
from django.core.management.base import BaseCommand, CommandError
@@ -33,18 +31,9 @@ class Command(BaseCommand):
with advisory_lock('instance_registration_%s' % hostname):
instance = Instance.objects.filter(hostname=hostname)
if instance.exists():
isolated = instance.first().is_isolated()
instance.delete()
print("Instance Removed")
if isolated:
print('Successfully deprovisioned {}'.format(hostname))
else:
result = subprocess.Popen("rabbitmqctl forget_cluster_node rabbitmq@{}".format(hostname), shell=True).wait()
if result != 0:
print("Node deprovisioning may have failed when attempting to "
"remove the RabbitMQ instance {} from the cluster".format(hostname))
else:
print('Successfully deprovisioned {}'.format(hostname))
print('Successfully deprovisioned {}'.format(hostname))
print('(changed: True)')
else:
print('No instance found matching name {}'.format(hostname))

View File

@@ -6,6 +6,7 @@ from awx.main.utils.pglock import advisory_lock
from awx.main.models import Instance, InstanceGroup
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
class InstanceNotFound(Exception):
@@ -31,7 +32,6 @@ class Command(BaseCommand):
def get_create_update_instance_group(self, queuename, instance_percent, instance_min):
ig = InstanceGroup.objects.filter(name=queuename)
created = False
changed = False
@@ -98,26 +98,27 @@ class Command(BaseCommand):
if options.get('hostnames'):
hostname_list = options.get('hostnames').split(",")
with advisory_lock('instance_group_registration_{}'.format(queuename)):
changed2 = False
changed3 = False
(ig, created, changed1) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
if created:
print("Creating instance group {}".format(ig.name))
elif not created:
print("Instance Group already registered {}".format(ig.name))
with advisory_lock('cluster_policy_lock'):
with transaction.atomic():
changed2 = False
changed3 = False
(ig, created, changed1) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
if created:
print("Creating instance group {}".format(ig.name))
elif not created:
print("Instance Group already registered {}".format(ig.name))
if ctrl:
(ig_ctrl, changed2) = self.update_instance_group_controller(ig, ctrl)
if changed2:
print("Set controller group {} on {}.".format(ctrl, queuename))
if ctrl:
(ig_ctrl, changed2) = self.update_instance_group_controller(ig, ctrl)
if changed2:
print("Set controller group {} on {}.".format(ctrl, queuename))
try:
(instances, changed3) = self.add_instances_to_group(ig, hostname_list)
for i in instances:
print("Added instance {} to {}".format(i.hostname, ig.name))
except InstanceNotFound as e:
instance_not_found_err = e
try:
(instances, changed3) = self.add_instances_to_group(ig, hostname_list)
for i in instances:
print("Added instance {} to {}".format(i.hostname, ig.name))
except InstanceNotFound as e:
instance_not_found_err = e
if any([changed1, changed2, changed3]):
print('(changed: True)')

View File

@@ -3,10 +3,8 @@
from django.conf import settings
from django.core.management.base import BaseCommand
from kombu import Exchange, Queue
from awx.main.dispatch.kombu import Connection
from awx.main.dispatch.worker import AWXConsumer, CallbackBrokerWorker
from awx.main.dispatch.worker import AWXConsumerRedis, CallbackBrokerWorker
class Command(BaseCommand):
@@ -18,23 +16,15 @@ class Command(BaseCommand):
help = 'Launch the job callback receiver'
def handle(self, *arg, **options):
with Connection(settings.BROKER_URL) as conn:
consumer = None
try:
consumer = AWXConsumer(
'callback_receiver',
conn,
CallbackBrokerWorker(),
[
Queue(
settings.CALLBACK_QUEUE,
Exchange(settings.CALLBACK_QUEUE, type='direct'),
routing_key=settings.CALLBACK_QUEUE
)
]
)
consumer.run()
except KeyboardInterrupt:
print('Terminating Callback Receiver')
if consumer:
consumer.stop()
consumer = None
try:
consumer = AWXConsumerRedis(
'callback_receiver',
CallbackBrokerWorker(),
queues=[getattr(settings, 'CALLBACK_QUEUE', '')],
)
consumer.run()
except KeyboardInterrupt:
print('Terminating Callback Receiver')
if consumer:
consumer.stop()

View File

@@ -6,14 +6,12 @@ from django.conf import settings
from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from kombu import Exchange, Queue
from awx.main.utils.handlers import AWXProxyHandler
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.dispatch.control import Control
from awx.main.dispatch.kombu import Connection
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumer, TaskWorker
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
from awx.main.dispatch import periodic
logger = logging.getLogger('awx.main.dispatch')
@@ -63,30 +61,16 @@ class Command(BaseCommand):
# in cpython itself:
# https://bugs.python.org/issue37429
AWXProxyHandler.disable()
with Connection(settings.BROKER_URL) as conn:
try:
bcast = 'tower_broadcast_all'
queues = [
Queue(q, Exchange(q), routing_key=q)
for q in (settings.AWX_CELERY_QUEUES_STATIC + [get_local_queuename()])
]
queues.append(
Queue(
construct_bcast_queue_name(bcast),
exchange=Exchange(bcast, type='fanout'),
routing_key=bcast,
reply=True
)
)
consumer = AWXConsumer(
'dispatcher',
conn,
TaskWorker(),
queues,
AutoscalePool(min_workers=4)
)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')
if consumer:
consumer.stop()
try:
queues = ['tower_broadcast_all', get_local_queuename()]
consumer = AWXConsumerPG(
'dispatcher',
TaskWorker(),
queues,
AutoscalePool(min_workers=4)
)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')
if consumer:
consumer.stop()

View File

@@ -0,0 +1,25 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import logging
import asyncio
from django.core.management.base import BaseCommand
from awx.main.wsbroadcast import BroadcastWebsocketManager
logger = logging.getLogger('awx.main.wsbroadcast')
class Command(BaseCommand):
help = 'Launch the websocket broadcaster'
def handle(self, *arg, **options):
try:
broadcast_websocket_mgr = BroadcastWebsocketManager()
task = broadcast_websocket_mgr.start()
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
except KeyboardInterrupt:
logger.debug('Terminating Websocket Broadcaster')

View File

@@ -3,6 +3,7 @@
import sys
import logging
import os
from django.db import models
from django.conf import settings
@@ -114,7 +115,7 @@ class InstanceManager(models.Manager):
return node[0]
raise RuntimeError("No instance found with the current cluster host id")
def register(self, uuid=None, hostname=None):
def register(self, uuid=None, hostname=None, ip_address=None):
if not uuid:
uuid = settings.SYSTEM_UUID
if not hostname:
@@ -122,13 +123,23 @@ class InstanceManager(models.Manager):
with advisory_lock('instance_registration_%s' % hostname):
instance = self.filter(hostname=hostname)
if instance.exists():
return (False, instance[0])
instance = self.create(uuid=uuid, hostname=hostname, capacity=0)
instance = instance.get()
if instance.ip_address != ip_address:
instance.ip_address = ip_address
instance.save(update_fields=['ip_address'])
return (True, instance)
else:
return (False, instance)
instance = self.create(uuid=uuid,
hostname=hostname,
ip_address=ip_address,
capacity=0)
return (True, instance)
def get_or_register(self):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
return self.register()
pod_ip = os.environ.get('MY_POD_IP')
return self.register(ip_address=pod_ip)
else:
return (False, self.me())
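Illustrative flows for the new ip_address handling (hostnames and addresses hypothetical):

# Hypothetical registration flows for InstanceManager.register above:
changed, inst = Instance.objects.register(hostname='awx-1', ip_address='10.0.0.5')
# re-registering the same host with a new address updates it in place:
changed, inst = Instance.objects.register(hostname='awx-1', ip_address='10.0.0.6')
assert inst.ip_address == '10.0.0.6'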

View File

@@ -192,21 +192,41 @@ class URLModificationMiddleware(MiddlewareMixin):
)
super().__init__(get_response)
def _named_url_to_pk(self, node, named_url):
kwargs = {}
if not node.populate_named_url_query_kwargs(kwargs, named_url):
return named_url
return str(get_object_or_404(node.model, **kwargs).pk)
@staticmethod
def _hijack_for_old_jt_name(node, kwargs, named_url):
try:
int(named_url)
return False
except ValueError:
pass
JobTemplate = node.model
name = urllib.parse.unquote(named_url)
return JobTemplate.objects.filter(name=name).order_by('organization__created').first()
def _convert_named_url(self, url_path):
@classmethod
def _named_url_to_pk(cls, node, resource, named_url):
kwargs = {}
if node.populate_named_url_query_kwargs(kwargs, named_url):
return str(get_object_or_404(node.model, **kwargs).pk)
if resource == 'job_templates' and '++' not in named_url:
# special case for deprecated job template case
# will not raise a 404 on its own
jt = cls._hijack_for_old_jt_name(node, kwargs, named_url)
if jt:
return str(jt.pk)
return named_url
@classmethod
def _convert_named_url(cls, url_path):
url_units = url_path.split('/')
# If the identifier is an empty string, it is always invalid.
if len(url_units) < 6 or url_units[1] != 'api' or url_units[2] not in ['v2'] or not url_units[4]:
return url_path
resource = url_units[3]
if resource in settings.NAMED_URL_MAPPINGS:
url_units[4] = self._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]],
url_units[4])
url_units[4] = cls._named_url_to_pk(
settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]],
resource, url_units[4])
return '/'.join(url_units)
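Concretely, the conversions performed above look roughly like this (paths and pks hypothetical):

# Illustrative behavior of _convert_named_url (paths/pks hypothetical):
convert = URLModificationMiddleware._convert_named_url
convert('/api/v2/job_templates/42/')             # unchanged, already a pk
convert('/api/v2/job_templates/demo++Default/')  # named-URL form -> '/api/v2/job_templates/7/'
convert('/api/v2/job_templates/demo/')           # deprecated bare name -> pk of oldest org's JT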
def process_request(self, request):

View File

@@ -0,0 +1,81 @@
# Generated by Django 2.2.4 on 2019-08-07 19:56
import awx.main.utils.polymorphic
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
from awx.main.migrations._rbac import (
rebuild_role_parentage, rebuild_role_hierarchy,
migrate_ujt_organization, migrate_ujt_organization_backward,
restore_inventory_admins, restore_inventory_admins_backward
)
def rebuild_jt_parents(apps, schema_editor):
rebuild_role_parentage(apps, schema_editor, models=('jobtemplate',))
class Migration(migrations.Migration):
dependencies = [
('main', '0108_v370_unifiedjob_dependencies_processed'),
]
operations = [
# backwards parents and ancestors caching
migrations.RunPython(migrations.RunPython.noop, rebuild_jt_parents),
# add new organization field for JT and all other unified jobs
migrations.AddField(
model_name='unifiedjob',
name='tmp_organization',
field=models.ForeignKey(blank=True, help_text='The organization used to determine access to this unified job.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjobs', to='main.Organization'),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='tmp_organization',
field=models.ForeignKey(blank=True, help_text='The organization used to determine access to this template.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjobtemplates', to='main.Organization'),
),
# while new and old fields exist, copy the organization fields
migrations.RunPython(migrate_ujt_organization, migrate_ujt_organization_backward),
# with data saved, remove old fields
migrations.RemoveField(
model_name='project',
name='organization',
),
migrations.RemoveField(
model_name='workflowjobtemplate',
name='organization',
),
# now, safely rename the new field without conflicts from the old field
migrations.RenameField(
model_name='unifiedjobtemplate',
old_name='tmp_organization',
new_name='organization',
),
migrations.RenameField(
model_name='unifiedjob',
old_name='tmp_organization',
new_name='organization',
),
# parentage of job template roles has genuinely changed at this point
migrations.AlterField(
model_name='jobtemplate',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['organization.job_template_admin_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='jobtemplate',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role', 'organization.execute_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='jobtemplate',
name='read_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'], related_name='+', to='main.Role'),
),
# Re-compute the role parents and ancestors caching
migrations.RunPython(rebuild_jt_parents, migrations.RunPython.noop),
# for all permissions that will be removed, make them explicit
migrations.RunPython(restore_inventory_admins, restore_inventory_admins_backward),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.8 on 2020-02-12 17:55
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0109_v370_job_template_organization_field'),
]
operations = [
migrations.AddField(
model_name='instance',
name='ip_address',
field=models.CharField(blank=True, default=None, max_length=50, null=True, unique=True),
),
]

View File

@@ -0,0 +1,16 @@
# Generated by Django 2.2.8 on 2020-02-17 14:50
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0110_v370_instance_ip_address'),
]
operations = [
migrations.DeleteModel(
name='ChannelGroup',
),
]

View File

@@ -0,0 +1,61 @@
# Generated by Django 2.2.8 on 2020-03-14 02:29
from django.db import migrations, models
import uuid
import logging
logger = logging.getLogger('awx.main.migrations')
def create_uuid(apps, schema_editor):
WorkflowJobTemplateNode = apps.get_model('main', 'WorkflowJobTemplateNode')
ct = 0
for node in WorkflowJobTemplateNode.objects.iterator():
node.identifier = uuid.uuid4()
node.save(update_fields=['identifier'])
ct += 1
if ct:
logger.info(f'Automatically created uuid4 identifier for {ct} workflow nodes')
class Migration(migrations.Migration):
dependencies = [
('main', '0111_v370_delete_channelgroup'),
]
operations = [
migrations.AddField(
model_name='workflowjobnode',
name='identifier',
field=models.CharField(blank=True, help_text='An identifier corresponding to the workflow job template node that this node was created from.', max_length=512),
),
migrations.AddField(
model_name='workflowjobtemplatenode',
name='identifier',
field=models.CharField(blank=True, null=True, help_text='An identifier for this node that is unique within its workflow. It is copied to workflow job nodes corresponding to this node.', max_length=512),
),
migrations.RunPython(create_uuid, migrations.RunPython.noop), # this fixes the uuid4 issue
migrations.AlterField(
model_name='workflowjobtemplatenode',
name='identifier',
field=models.CharField(default=uuid.uuid4, help_text='An identifier for this node that is unique within its workflow. It is copied to workflow job nodes corresponding to this node.', max_length=512),
),
migrations.AlterUniqueTogether(
name='workflowjobtemplatenode',
unique_together={('identifier', 'workflow_job_template')},
),
migrations.AddIndex(
model_name='workflowjobnode',
index=models.Index(fields=['identifier', 'workflow_job'], name='main_workfl_identif_87b752_idx'),
),
migrations.AddIndex(
model_name='workflowjobnode',
index=models.Index(fields=['identifier'], name='main_workfl_identif_efdfe8_idx'),
),
migrations.AddIndex(
model_name='workflowjobtemplatenode',
index=models.Index(fields=['identifier'], name='main_workfl_identif_0cc025_idx'),
),
]

View File

@@ -0,0 +1,118 @@
# Generated by Django 2.2.8 on 2020-02-21 16:31
from django.db import migrations, models, connection
def migrate_event_data(apps, schema_editor):
# see: https://github.com/ansible/awx/issues/6010
#
# the goal of this function is to end with event tables (e.g., main_jobevent)
# that have a bigint primary key (because the old integer primary key
# isn't big enough, as its range tops out around 2.1B, see:
# https://www.postgresql.org/docs/9.1/datatype-numeric.html)
# unfortunately, we can't do this with a simple ALTER TABLE, because
# for tables with hundreds of millions or billions of rows, the ALTER TABLE
# can take *hours* on modest hardware.
#
# the approach in this migration means that post-migration, event data will
# *not* immediately show up, but will be repopulated over time progressively
# the trade-off here is not having to wait hours for the full data migration
# before you can start and run AWX again (including new playbook runs)
for tblname in (
'main_jobevent', 'main_inventoryupdateevent',
'main_projectupdateevent', 'main_adhoccommandevent',
'main_systemjobevent'
):
with connection.cursor() as cursor:
# rename the current event table
cursor.execute(
f'ALTER TABLE {tblname} RENAME TO _old_{tblname};'
)
# create a *new* table with the same schema
cursor.execute(
f'CREATE TABLE {tblname} (LIKE _old_{tblname} INCLUDING ALL);'
)
# alter the *new* table so that the primary key is a big int
cursor.execute(
f'ALTER TABLE {tblname} ALTER COLUMN id TYPE bigint USING id::bigint;'
)
# recreate counter for the new table's primary key to
# start where the *old* table left off (we have to do this because the
# counter changed from an int to a bigint)
cursor.execute(f'DROP SEQUENCE IF EXISTS "{tblname}_id_seq" CASCADE;')
cursor.execute(f'CREATE SEQUENCE "{tblname}_id_seq";')
cursor.execute(
f'ALTER TABLE "{tblname}" ALTER COLUMN "id" '
f"SET DEFAULT nextval('{tblname}_id_seq');"
)
cursor.execute(
f"SELECT setval('{tblname}_id_seq', (SELECT MAX(id) FROM _old_{tblname}), true);"
)
# replace the BTREE index on main_jobevent.job_id with
# a BRIN index to drastically improve per-UJ lookup performance
# see: https://info.crunchydata.com/blog/postgresql-brin-indexes-big-data-performance-with-minimal-storage
if tblname == 'main_jobevent':
cursor.execute("SELECT indexname FROM pg_indexes WHERE tablename='main_jobevent' AND indexdef LIKE '%USING btree (job_id)';")
old_index = cursor.fetchone()[0]
cursor.execute(f'DROP INDEX {old_index}')
cursor.execute('CREATE INDEX main_jobevent_job_id_brin_idx ON main_jobevent USING brin (job_id);')
# remove all of the indexes and constraints from the old table
# (they just slow down the data migration)
cursor.execute(f"SELECT indexname, indexdef FROM pg_indexes WHERE tablename='_old_{tblname}' AND indexname != '{tblname}_pkey';")
indexes = cursor.fetchall()
cursor.execute(f"SELECT conname, contype, pg_catalog.pg_get_constraintdef(r.oid, true) as condef FROM pg_catalog.pg_constraint r WHERE r.conrelid = '_old_{tblname}'::regclass AND conname != '{tblname}_pkey';")
constraints = cursor.fetchall()
for indexname, indexdef in indexes:
cursor.execute(f'DROP INDEX IF EXISTS {indexname}')
for conname, contype, condef in constraints:
cursor.execute(f'ALTER TABLE _old_{tblname} DROP CONSTRAINT IF EXISTS {conname}')
class FakeAlterField(migrations.AlterField):
def database_forwards(self, *args):
# this is intentionally left blank, because we're
# going to accomplish the migration with some custom raw SQL
pass
class Migration(migrations.Migration):
dependencies = [
('main', '0112_v370_workflow_node_identifier'),
]
operations = [
migrations.RunPython(migrate_event_data),
FakeAlterField(
model_name='adhoccommandevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='inventoryupdateevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='jobevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='projectupdateevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='systemjobevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
]
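One way to check whether the progressive backfill has finished is to look for the leftover _old_* tables; a sketch (to_regclass requires PostgreSQL 9.4+, table name illustrative):

# Hypothetical check for a leftover _old_* event table after this migration:
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT to_regclass('_old_main_jobevent');")
    still_migrating = cursor.fetchone()[0] is not None  # None once the table is dropped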

View File

@@ -1,6 +1,9 @@
import logging
from time import time
from django.db.models import Subquery, OuterRef, F
from awx.main.fields import update_role_parentage_for_instance
from awx.main.models.rbac import Role, batch_role_ancestor_rebuilding
logger = logging.getLogger('rbac_migrations')
@@ -10,11 +13,11 @@ def create_roles(apps, schema_editor):
'''
Implicit role creation happens in our post_save hook for all of our
resources. Here we iterate through all of our resource types and call
.save() to ensure all that happens for every object in the system before we
get busy with the actual migration work.
.save() to ensure all that happens for every object in the system.
This gets run after migrate_users, which does role creation for users a
little differently.
This can be used whenever new roles are introduced in a migration to
create those roles for pre-existing objects that did not previously
have them created via signals.
'''
models = [
@@ -35,7 +38,189 @@ def create_roles(apps, schema_editor):
obj.save()
def delete_all_user_roles(apps, schema_editor):
ContentType = apps.get_model('contenttypes', "ContentType")
Role = apps.get_model('main', "Role")
User = apps.get_model('auth', "User")
user_content_type = ContentType.objects.get_for_model(User)
for role in Role.objects.filter(content_type=user_content_type).iterator():
role.delete()
UNIFIED_ORG_LOOKUPS = {
# Job Templates had an implicit organization via their project
'jobtemplate': 'project',
# Inventory Sources had an implicit organization via their inventory
'inventorysource': 'inventory',
# Projects had an explicit organization in their subclass table
'project': None,
# Workflow JTs also had an explicit organization in their subclass table
'workflowjobtemplate': None,
# Jobs inherited project from job templates as a convenience field
'job': 'project',
# Inventory Updates had a convenience field of inventory
'inventoryupdate': 'inventory',
# Project Updates did not have a direct organization field, obtained it from project
'projectupdate': 'project',
# Workflow Jobs are handled same as project updates
# Sliced jobs are a special case, but old data is not given special treatment for simplicity
'workflowjob': 'workflow_job_template',
# AdHocCommands do not have a template, but still migrate them
'adhoccommand': 'inventory'
}
def implicit_org_subquery(UnifiedClass, cls, backward=False):
"""Returns a subquery that returns the so-called organization for objects
in the class in question, before migration to the explicit unified org field.
In some cases, this can still be applied post-migration.
"""
if cls._meta.model_name not in UNIFIED_ORG_LOOKUPS:
return None
cls_name = cls._meta.model_name
source_field = UNIFIED_ORG_LOOKUPS[cls_name]
unified_field = UnifiedClass._meta.get_field(cls_name)
unified_ptr = unified_field.remote_field.name
if backward:
qs = UnifiedClass.objects.filter(**{cls_name: OuterRef('id')}).order_by().values_list('tmp_organization')[:1]
elif source_field is None:
qs = cls.objects.filter(**{unified_ptr: OuterRef('id')}).order_by().values_list('organization')[:1]
else:
intermediary_field = cls._meta.get_field(source_field)
intermediary_model = intermediary_field.related_model
intermediary_reverse_rel = intermediary_field.remote_field.name
qs = intermediary_model.objects.filter(**{
# this filter leverages the fact that the Unified models share pks with their subclasses.
# For instance, it filters projects used in a job template, where that job template
# has the same id as the UJT from the outer reference (which it does)
intermediary_reverse_rel: OuterRef('id')}
).order_by().values_list('organization')[:1]
return Subquery(qs)
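For the common case, the subquery produced above has this general shape; the model and reverse-relation names below are illustrative, and the real update also filters on polymorphic_ctype.

# Illustrative shape of the correlated-subquery update built above,
# for JobTemplate, whose implicit org comes from its project:
from django.db.models import OuterRef, Subquery

org = Project.objects.filter(jobtemplates=OuterRef('id')).order_by().values_list('organization')[:1]
UnifiedJobTemplate.objects.order_by().update(tmp_organization=Subquery(org))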
def _migrate_unified_organization(apps, unified_cls_name, backward=False):
"""Given a unified base model (either UJT or UJ)
and a dict org_field_mapping which gives related model to get org from
saves organization for those objects to the temporary migration
variable tmp_organization on the unified model
(optimized method)
"""
start = time()
UnifiedClass = apps.get_model('main', unified_cls_name)
ContentType = apps.get_model('contenttypes', 'ContentType')
for cls in UnifiedClass.__subclasses__():
cls_name = cls._meta.model_name
if backward and UNIFIED_ORG_LOOKUPS.get(cls_name, 'not-found') is not None:
logger.debug('Not reverse migrating {}, existing data should remain valid'.format(cls_name))
continue
logger.debug('{}Migrating {} to new organization field'.format('Reverse ' if backward else '', cls_name))
sub_qs = implicit_org_subquery(UnifiedClass, cls, backward=backward)
if sub_qs is None:
logger.debug('Class {} has no organization migration'.format(cls_name))
continue
this_ct = ContentType.objects.get_for_model(cls)
if backward:
r = cls.objects.order_by().update(organization=sub_qs)
else:
r = UnifiedClass.objects.order_by().filter(polymorphic_ctype=this_ct).update(tmp_organization=sub_qs)
if r:
logger.info('Organization migration on {} affected {} rows.'.format(cls_name, r))
logger.info('Unified organization migration completed in {:.4f} seconds'.format(time() - start))
def migrate_ujt_organization(apps, schema_editor):
'''Move organization field to UJT and UJ models'''
_migrate_unified_organization(apps, 'UnifiedJobTemplate')
_migrate_unified_organization(apps, 'UnifiedJob')
def migrate_ujt_organization_backward(apps, schema_editor):
'''Move organization field from UJT and UJ models back to their original places'''
_migrate_unified_organization(apps, 'UnifiedJobTemplate', backward=True)
_migrate_unified_organization(apps, 'UnifiedJob', backward=True)
def _restore_inventory_admins(apps, schema_editor, backward=False):
"""With the JT.organization changes, admins of organizations connected to
job templates via inventory will have their permissions demoted.
This maintains current permissions over the migration by granting the
permissions they used to have explicitly on the JT itself.
"""
start = time()
JobTemplate = apps.get_model('main', 'JobTemplate')
User = apps.get_model('auth', 'User')
changed_ct = 0
jt_qs = JobTemplate.objects.filter(inventory__isnull=False)
jt_qs = jt_qs.exclude(inventory__organization=F('project__organization'))
jt_qs = jt_qs.only('id', 'admin_role_id', 'execute_role_id', 'inventory_id')
for jt in jt_qs.iterator():
org = jt.inventory.organization
for jt_role, org_roles in (
('admin_role', ('admin_role', 'job_template_admin_role',)),
('execute_role', ('execute_role',))
):
role_id = getattr(jt, '{}_id'.format(jt_role))
user_qs = User.objects
if not backward:
# In this specific case, the name for the org role and JT roles were the same
org_role_ids = [getattr(org, '{}_id'.format(role_name)) for role_name in org_roles]
user_qs = user_qs.filter(roles__in=org_role_ids)
# bizarre migration behavior - ancestors / descendents of the
# migration version of the Role model are reversed, so use the current model briefly
ancestor_ids = list(
Role.objects.filter(descendents=role_id).values_list('id', flat=True)
)
# same as Role.__contains__, filter for "user in jt.admin_role"
user_qs = user_qs.exclude(roles__in=ancestor_ids)
else:
# use the database to filter intersection of users without access
# to the JT role and either organization role
user_qs = user_qs.filter(roles__in=[org.admin_role_id, org.execute_role_id])
# in reverse, intersection of users who have both
user_qs = user_qs.filter(roles=role_id)
user_ids = list(user_qs.values_list('id', flat=True))
if not user_ids:
continue
role = getattr(jt, jt_role)
logger.debug('{} {} on jt {} for users {} via inventory.organization {}'.format(
'Removing' if backward else 'Setting',
jt_role, jt.pk, user_ids, org.pk
))
if not backward:
# in reverse, the explicit role becomes redundant
role.members.add(*user_ids)
else:
role.members.remove(*user_ids)
changed_ct += len(user_ids)
if changed_ct:
logger.info('{} explicit JT permission for {} users in {:.4f} seconds'.format(
'Removed' if backward else 'Added',
changed_ct, time() - start
))
def restore_inventory_admins(apps, schema_editor):
_restore_inventory_admins(apps, schema_editor)
def restore_inventory_admins_backward(apps, schema_editor):
_restore_inventory_admins(apps, schema_editor, backward=True)
def rebuild_role_hierarchy(apps, schema_editor):
'''
This should be called in any migration when ownerships are changed.
Ex. I remove a user from the admin_role of a credential.
Ancestors are cached from parents for performance, this re-computes ancestors.
'''
logger.info('Computing role roots..')
start = time()
roots = Role.objects \
@@ -46,14 +231,74 @@ def rebuild_role_hierarchy(apps, schema_editor):
start = time()
Role.rebuild_role_ancestor_list(roots, [])
stop = time()
logger.info('Rebuild completed in %f seconds' % (stop - start))
logger.info('Rebuild ancestors completed in %f seconds' % (stop - start))
logger.info('Done.')
def delete_all_user_roles(apps, schema_editor):
def rebuild_role_parentage(apps, schema_editor, models=None):
'''
This should be called in any migration when any parent_role entry
is modified so that the cached parent fields will be updated. Ex:
foo_role = ImplicitRoleField(
parent_role=['bar_role'] # change to parent_role=['admin_role']
)
This is like rebuild_role_hierarchy, but that method updates ancestors,
whereas this method updates parents.
'''
start = time()
seen_models = set()
model_ct = 0
noop_ct = 0
ContentType = apps.get_model('contenttypes', "ContentType")
Role = apps.get_model('main', "Role")
User = apps.get_model('auth', "User")
user_content_type = ContentType.objects.get_for_model(User)
for role in Role.objects.filter(content_type=user_content_type).iterator():
role.delete()
additions = set()
removals = set()
role_qs = Role.objects
if models:
# update_role_parentage_for_instance is expensive
# if the models have been downselected, ignore those which are not in the list
ct_ids = list(ContentType.objects.filter(
model__in=[name.lower() for name in models]
).values_list('id', flat=True))
role_qs = role_qs.filter(content_type__in=ct_ids)
for role in role_qs.iterator():
if not role.object_id:
continue
model_tuple = (role.content_type_id, role.object_id)
if model_tuple in seen_models:
continue
seen_models.add(model_tuple)
# The GenericForeignKey does not work correctly in migrations
# when used as role.content_object,
# so we do the lookup ourselves with the current migration models
ct = role.content_type
app = ct.app_label
ct_model = apps.get_model(app, ct.model)
content_object = ct_model.objects.get(pk=role.object_id)
parents_added, parents_removed = update_role_parentage_for_instance(content_object)
additions.update(parents_added)
removals.update(parents_removed)
if parents_added:
model_ct += 1
logger.debug('Added to parents of roles {} of {}'.format(parents_added, content_object))
if parents_removed:
model_ct += 1
logger.debug('Removed from parents of roles {} of {}'.format(parents_removed, content_object))
else:
noop_ct += 1
logger.debug('No changes to role parents for {} resources'.format(noop_ct))
logger.debug('Added parents to {} roles'.format(len(additions)))
logger.debug('Removed parents from {} roles'.format(len(removals)))
if model_ct:
logger.info('Updated implicit parents of {} resources'.format(model_ct))
logger.info('Rebuild parentage completed in %f seconds' % (time() - start))
# this is run because the ordinary signals for
# Role.parents.add and Role.parents.remove are not called in migrations
Role.rebuild_role_ancestor_list(list(additions), list(removals))
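# A hypothetical migration (names illustrative) could wire the helpers above
# together with RunPython, using the *_backward variant as the reverse step:
#
#     from functools import partial
#     from django.db import migrations
#
#     operations = [
#         migrations.RunPython(restore_inventory_admins,
#                              restore_inventory_admins_backward),
#         migrations.RunPython(partial(rebuild_role_parentage, models=('jobtemplate',)),
#                              migrations.RunPython.noop),
#     ]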

View File

@@ -3,6 +3,7 @@
# Django
from django.conf import settings # noqa
from django.db import connection, ProgrammingError
from django.db.models.signals import pre_delete # noqa
# AWX
@@ -58,7 +59,6 @@ from awx.main.models.workflow import ( # noqa
WorkflowJob, WorkflowJobNode, WorkflowJobOptions, WorkflowJobTemplate,
WorkflowJobTemplateNode, WorkflowApproval, WorkflowApprovalTemplate,
)
from awx.main.models.channels import ChannelGroup # noqa
from awx.api.versioning import reverse
from awx.main.models.oauth import ( # noqa
OAuth2AccessToken, OAuth2Application
@@ -80,6 +80,27 @@ User.add_to_class('can_access_with_errors', check_user_access_with_errors)
User.add_to_class('accessible_objects', user_accessible_objects)
def enforce_bigint_pk_migration():
# see: https://github.com/ansible/awx/issues/6010
# look at all the event tables and verify that they have been fully migrated
# from the *old* int primary key table to the replacement bigint table
# if not, attempt to migrate them in the background
for tblname in (
'main_jobevent', 'main_inventoryupdateevent',
'main_projectupdateevent', 'main_adhoccommandevent',
'main_systemjobevent'
):
with connection.cursor() as cursor:
try:
cursor.execute(f'SELECT MAX(id) FROM _old_{tblname}')
if cursor.fetchone():
from awx.main.tasks import migrate_legacy_event_data
migrate_legacy_event_data.apply_async([tblname])
except ProgrammingError:
# the table is gone (migration is unnecessary)
pass
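# startup wiring: dispatch_startup() in tasks.py (further down in this diff)
# calls enforce_bigint_pk_migration(), which hands each not-yet-migrated table
# to the migrate_legacy_event_data background task, one table at a time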
def cleanup_created_modified_by(sender, **kwargs):
# work around a bug in django-polymorphic that doesn't properly
# handle cascades for reverse foreign keys on the polymorphic base model

View File

@@ -1,6 +0,0 @@
from django.db import models
class ChannelGroup(models.Model):
group = models.CharField(max_length=200, unique=True)
channels = models.TextField()

View File

@@ -4,7 +4,7 @@ import datetime
import logging
from collections import defaultdict
from django.db import models, DatabaseError
from django.db import models, DatabaseError, connection
from django.utils.dateparse import parse_datetime
from django.utils.text import Truncator
from django.utils.timezone import utc
@@ -356,6 +356,14 @@ class BasePlaybookEvent(CreatedModifiedModel):
job_id=self.job_id, uuid__in=failed
).update(failed=True)
# send success/failure notifications when we've finished handling the playbook_on_stats event
from awx.main.tasks import handle_success_and_failure_notifications # circular import
def _send_notifications():
handle_success_and_failure_notifications.apply_async([self.job.id])
connection.on_commit(_send_notifications)
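# on_commit defers the dispatch until the surrounding transaction commits,
# so the notification task is never queued for event data that rolls back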
for field in ('playbook', 'play', 'task', 'role'):
value = force_text(event_data.get(field, '')).strip()
if value != getattr(self, field):
@@ -430,6 +438,7 @@ class JobEvent(BasePlaybookEvent):
('job', 'parent_uuid'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
job = models.ForeignKey(
'Job',
related_name='job_events',
@@ -518,6 +527,7 @@ class ProjectUpdateEvent(BasePlaybookEvent):
('project_update', 'end_line'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
project_update = models.ForeignKey(
'ProjectUpdate',
related_name='project_update_events',
@@ -669,6 +679,7 @@ class AdHocCommandEvent(BaseCommandEvent):
FAILED_EVENTS = [x[0] for x in EVENT_TYPES if x[2]]
EVENT_CHOICES = [(x[0], x[1]) for x in EVENT_TYPES]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
event = models.CharField(
max_length=100,
choices=EVENT_CHOICES,
@@ -731,6 +742,7 @@ class InventoryUpdateEvent(BaseCommandEvent):
('inventory_update', 'end_line'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
inventory_update = models.ForeignKey(
'InventoryUpdate',
related_name='inventory_update_events',
@@ -764,6 +776,7 @@ class SystemJobEvent(BaseCommandEvent):
('system_job', 'end_line'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
system_job = models.ForeignKey(
'SystemJob',
related_name='system_job_events',

View File

@@ -53,6 +53,13 @@ class Instance(HasPolicyEditsMixin, BaseModel):
uuid = models.CharField(max_length=40)
hostname = models.CharField(max_length=250, unique=True)
ip_address = models.CharField(
blank=True,
null=True,
default=None,
max_length=50,
unique=True,
)
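# note: unique=True together with null=True means PostgreSQL enforces
# uniqueness only among non-NULL values, so any number of instances may
# leave ip_address unset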
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
last_isolated_check = models.DateTimeField(

View File

@@ -426,9 +426,9 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
'''
def _get_related_jobs(self):
return UnifiedJob.objects.non_polymorphic().filter(
Q(Job___inventory=self) |
Q(InventoryUpdate___inventory_source__inventory=self) |
Q(AdHocCommand___inventory=self)
Q(job__inventory=self) |
Q(inventoryupdate__inventory=self) |
Q(adhoccommand__inventory=self)
)
@@ -808,8 +808,8 @@ class Group(CommonModelNameNotUnique, RelatedJobsMixin):
'''
def _get_related_jobs(self):
return UnifiedJob.objects.non_polymorphic().filter(
Q(Job___inventory=self.inventory) |
Q(InventoryUpdate___inventory_source__groups=self)
Q(job__inventory=self.inventory) |
Q(inventoryupdate__inventory_source__groups=self)
)
@@ -1277,10 +1277,14 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in InventorySourceOptions._meta.fields) | set(
['name', 'description', 'credentials', 'inventory']
['name', 'description', 'organization', 'credentials', 'inventory']
)
def save(self, *args, **kwargs):
# if this is a new object, inherit organization from its inventory
if not self.pk and self.inventory and self.inventory.organization_id and not self.organization_id:
self.organization_id = self.inventory.organization_id
# If update_fields has been specified, add our field names to it,
# if it hasn't been specified, then we're just doing a normal save.
update_fields = kwargs.get('update_fields', [])

View File

@@ -199,7 +199,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
'labels', 'instance_groups', 'credentials', 'survey_spec'
]
FIELDS_TO_DISCARD_AT_COPY = ['vault_credential', 'credential']
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name')]
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name', 'organization')]
class Meta:
app_label = 'main'
@@ -262,13 +262,17 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
)
admin_role = ImplicitRoleField(
parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role']
parent_role=['organization.job_template_admin_role']
)
execute_role = ImplicitRoleField(
parent_role=['admin_role', 'project.organization.execute_role', 'inventory.organization.execute_role'],
parent_role=['admin_role', 'organization.execute_role'],
)
read_role = ImplicitRoleField(
parent_role=['project.organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'],
parent_role=[
'organization.auditor_role',
'inventory.organization.auditor_role', # partial support for old inheritance via inventory
'execute_role', 'admin_role'
],
)
@@ -279,7 +283,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in JobOptions._meta.fields) | set(
['name', 'description', 'survey_passwords', 'labels', 'credentials',
['name', 'description', 'organization', 'survey_passwords', 'labels', 'credentials',
'job_slice_number', 'job_slice_count']
)
@@ -319,6 +323,41 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
else:
return self.job_slice_count
def save(self, *args, **kwargs):
update_fields = kwargs.get('update_fields', [])
# the organization follows the project; if the project is deleted for some
# reason, the old organization is kept so organization admins retain ownership
if self.project and self.project.organization_id != self.organization_id:
self.organization_id = self.project.organization_id
if 'organization' not in update_fields and 'organization_id' not in update_fields:
update_fields.append('organization_id')
return super(JobTemplate, self).save(*args, **kwargs)
def validate_unique(self, exclude=None):
"""Custom over-ride for JT specifically
because organization is inferred from project after full_clean is finished
thus the organization field is not yet set when validation happens
"""
errors = []
for ut in JobTemplate.SOFT_UNIQUE_TOGETHER:
kwargs = {'name': self.name}
if self.project:
kwargs['organization'] = self.project.organization_id
else:
kwargs['organization'] = None
qs = JobTemplate.objects.filter(**kwargs)
if self.pk:
qs = qs.exclude(pk=self.pk)
if qs.exists():
errors.append(
'%s with this (%s) combination already exists.' % (
JobTemplate.__name__,
', '.join(set(ut) - {'polymorphic_ctype'})
)
)
if errors:
raise ValidationError(errors)
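# Illustrative (hypothetical data): with proj_a1 and proj_a2 in the same
# organization and proj_b in another, the soft-unique check behaves like:
#
#     JobTemplate(name='deploy', project=proj_a1).full_clean()  # passes
#     # ...after saving the first JT:
#     JobTemplate(name='deploy', project=proj_a2).full_clean()  # ValidationError
#     JobTemplate(name='deploy', project=proj_b).full_clean()   # passes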
def create_unified_job(self, **kwargs):
prevent_slicing = kwargs.pop('_prevent_slicing', False)
slice_ct = self.get_effective_slice_ct(kwargs)
@@ -479,13 +518,13 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
success_notification_templates = list(base_notification_templates.filter(
unifiedjobtemplate_notification_templates_for_success__in=[self, self.project]))
# Get Organization NotificationTemplates
if self.project is not None and self.project.organization is not None:
if self.organization is not None:
error_notification_templates = set(error_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_errors=self.project.organization)))
organization_notification_templates_for_errors=self.organization)))
started_notification_templates = set(started_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_started=self.project.organization)))
organization_notification_templates_for_started=self.organization)))
success_notification_templates = set(success_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_success=self.project.organization)))
organization_notification_templates_for_success=self.organization)))
return dict(error=list(error_notification_templates),
started=list(started_notification_templates),
success=list(success_notification_templates))
@@ -588,7 +627,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
for virtualenv in (
self.job_template.custom_virtualenv if self.job_template else None,
self.project.custom_virtualenv,
self.project.organization.custom_virtualenv if self.project.organization else None
self.organization.custom_virtualenv if self.organization else None
):
if virtualenv:
return virtualenv
@@ -741,8 +780,8 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
@property
def preferred_instance_groups(self):
if self.project is not None and self.project.organization is not None:
organization_groups = [x for x in self.project.organization.instance_groups.all()]
if self.organization is not None:
organization_groups = [x for x in self.organization.instance_groups.all()]
else:
organization_groups = []
if self.inventory is not None:
@@ -1144,7 +1183,7 @@ class SystemJobTemplate(UnifiedJobTemplate, SystemJobOptions):
@classmethod
def _get_unified_job_field_names(cls):
return ['name', 'description', 'job_type', 'extra_vars']
return ['name', 'description', 'organization', 'job_type', 'extra_vars']
def get_absolute_url(self, request=None):
return reverse('api:system_job_template_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -269,7 +269,7 @@ class JobNotificationMixin(object):
'timeout', 'use_fact_cache', 'launch_type', 'status', 'failed', 'started', 'finished',
'elapsed', 'job_explanation', 'execution_node', 'controller_node', 'allow_simultaneous',
'scm_revision', 'diff_mode', 'job_slice_number', 'job_slice_count', 'custom_virtualenv',
'approval_status', 'approval_node_name', 'workflow_url',
'approval_status', 'approval_node_name', 'workflow_url', 'scm_branch',
{'host_status_counts': ['skipped', 'ok', 'changed', 'failed', 'failures', 'dark',
'processed', 'rescued', 'ignored']},
{'summary_fields': [{'inventory': ['id', 'name', 'description', 'has_active_failures',
@@ -313,6 +313,7 @@ class JobNotificationMixin(object):
'modified': datetime.datetime(2018, 12, 13, 6, 4, 0, 0, tzinfo=datetime.timezone.utc),
'name': 'Stub JobTemplate',
'playbook': 'ping.yml',
'scm_branch': '',
'scm_revision': '',
'skip_tags': '',
'start_at_task': '',

View File

@@ -6,7 +6,6 @@
# Django
from django.conf import settings
from django.db import models
from django.db.models import Q
from django.contrib.auth.models import User
from django.contrib.sessions.models import Session
from django.utils.timezone import now as tz_now
@@ -106,12 +105,7 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
RelatedJobsMixin
'''
def _get_related_jobs(self):
project_ids = self.projects.all().values_list('id')
return UnifiedJob.objects.non_polymorphic().filter(
Q(Job___project__in=project_ids) |
Q(ProjectUpdate___project__in=project_ids) |
Q(InventoryUpdate___inventory_source__inventory__organization=self)
)
return UnifiedJob.objects.non_polymorphic().filter(organization=self)
class Team(CommonModelNameNotUnique, ResourceMixin):

View File

@@ -254,13 +254,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
app_label = 'main'
ordering = ('id',)
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=models.CASCADE,
related_name='projects',
)
scm_update_on_launch = models.BooleanField(
default=False,
help_text=_('Update the project when a job is launched that uses the project.'),
@@ -329,9 +322,16 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in ProjectOptions._meta.fields) | set(
['name', 'description']
['name', 'description', 'organization']
)
def clean_organization(self):
if self.pk:
old_org_id = getattr(self, '_prior_values_store', {}).get('organization_id', None)
if self.organization_id != old_org_id and self.jobtemplates.exists():
raise ValidationError({'organization': _('Organization cannot be changed when in use by job templates.')})
return self.organization
def save(self, *args, **kwargs):
new_instance = not bool(self.pk)
pre_save_vals = getattr(self, '_prior_values_store', {})
@@ -450,8 +450,8 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
'''
def _get_related_jobs(self):
return UnifiedJob.objects.non_polymorphic().filter(
models.Q(Job___project=self) |
models.Q(ProjectUpdate___project=self)
models.Q(job__project=self) |
models.Q(projectupdate__project=self)
)
def delete(self, *args, **kwargs):
@@ -584,8 +584,8 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
@property
def preferred_instance_groups(self):
if self.project is not None and self.project.organization is not None:
organization_groups = [x for x in self.project.organization.instance_groups.all()]
if self.organization is not None:
organization_groups = [x for x in self.organization.instance_groups.all()]
else:
organization_groups = []
template_groups = [x for x in super(ProjectUpdate, self).preferred_instance_groups]

View File

@@ -36,6 +36,7 @@ from awx.main.models.base import (
NotificationFieldsModel,
prevent_search
)
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
@@ -102,7 +103,7 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
ordering = ('name',)
# unique_together here is intentionally commented out. Please make sure sub-classes of this model
# contain at least this uniqueness restriction: SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name')]
#unique_together = [('polymorphic_ctype', 'name')]
#unique_together = [('polymorphic_ctype', 'name', 'organization')]
old_pk = models.PositiveIntegerField(
null=True,
@@ -157,6 +158,14 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
default='ok',
editable=False,
)
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=polymorphic.SET_NULL,
related_name='%(class)ss',
help_text=_('The organization used to determine access to this template.'),
)
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss',
@@ -700,6 +709,14 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
on_delete=polymorphic.SET_NULL,
help_text=_('The Rampart/Instance group the job was run under'),
)
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=polymorphic.SET_NULL,
related_name='%(class)ss',
help_text=_('The organization used to determine access to this unified job.'),
)
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss',
@@ -1344,7 +1361,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
timeout = 5
try:
running = self.celery_task_id in ControlDispatcher(
'dispatcher', self.execution_node
'dispatcher', self.controller_node or self.execution_node
).running(timeout=timeout)
except socket.timeout:
logger.error('could not reach dispatcher on {} within {}s'.format(
@@ -1450,7 +1467,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
return r
def get_queue_name(self):
return self.controller_node or self.execution_node or settings.CELERY_DEFAULT_QUEUE
return self.controller_node or self.execution_node or get_local_queuename()
def is_isolated(self):
return bool(self.controller_node)

View File

@@ -4,6 +4,7 @@
# Python
import json
import logging
from uuid import uuid4
from copy import copy
from urllib.parse import urljoin
@@ -121,6 +122,7 @@ class WorkflowNodeBase(CreatedModifiedModel, LaunchTimeConfig):
create_kwargs[field_name] = kwargs[field_name]
elif hasattr(self, field_name):
create_kwargs[field_name] = getattr(self, field_name)
create_kwargs['identifier'] = self.identifier
new_node = WorkflowJobNode.objects.create(**create_kwargs)
if self.pk:
allowed_creds = self.credentials.all()
@@ -135,7 +137,7 @@ class WorkflowJobTemplateNode(WorkflowNodeBase):
FIELDS_TO_PRESERVE_AT_COPY = [
'unified_job_template', 'workflow_job_template', 'success_nodes', 'failure_nodes',
'always_nodes', 'credentials', 'inventory', 'extra_data', 'survey_passwords',
'char_prompts', 'all_parents_must_converge'
'char_prompts', 'all_parents_must_converge', 'identifier'
]
REENCRYPTION_BLACKLIST_AT_COPY = ['extra_data', 'survey_passwords']
@@ -144,6 +146,21 @@ class WorkflowJobTemplateNode(WorkflowNodeBase):
related_name='workflow_job_template_nodes',
on_delete=models.CASCADE,
)
identifier = models.CharField(
max_length=512,
default=uuid4,
blank=False,
help_text=_(
'An identifier for this node that is unique within its workflow. '
'It is copied to workflow job nodes corresponding to this node.'),
)
class Meta:
app_label = 'main'
unique_together = (("identifier", "workflow_job_template"),)
indexes = [
models.Index(fields=['identifier']),
]
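# the uuid4 default means nodes created without an explicit identifier get a
# unique one automatically; callers wanting stable, human-readable names can
# pass their own, e.g. (hypothetical):
#
#     wfjt.workflow_job_template_nodes.create(identifier='provision-step')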
def get_absolute_url(self, request=None):
return reverse('api:workflow_job_template_node_detail', kwargs={'pk': self.pk}, request=request)
@@ -213,6 +230,18 @@ class WorkflowJobNode(WorkflowNodeBase):
"semantics will mark this True if the node is in a path that will "
"decidedly not be ran. A value of False means the node may not run."),
)
identifier = models.CharField(
max_length=512,
blank=True, # blank denotes pre-migration job nodes
help_text=_('An identifier corresponding to the workflow job template node that this node was created from.'),
)
class Meta:
app_label = 'main'
indexes = [
models.Index(fields=["identifier", "workflow_job"]),
models.Index(fields=['identifier']),
]
def get_absolute_url(self, request=None):
return reverse('api:workflow_job_node_detail', kwargs={'pk': self.pk}, request=request)
@@ -335,7 +364,7 @@ class WorkflowJobOptions(LaunchTimeConfigBase):
@classmethod
def _get_unified_job_field_names(cls):
r = set(f.name for f in WorkflowJobOptions._meta.fields) | set(
['name', 'description', 'survey_passwords', 'labels', 'limit', 'scm_branch']
['name', 'description', 'organization', 'survey_passwords', 'labels', 'limit', 'scm_branch']
)
r.remove('char_prompts') # needed due to copying launch config to launch config
return r
@@ -376,19 +405,12 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name', 'organization')]
FIELDS_TO_PRESERVE_AT_COPY = [
'labels', 'instance_groups', 'workflow_job_template_nodes', 'credentials', 'survey_spec'
'labels', 'organization', 'instance_groups', 'workflow_job_template_nodes', 'credentials', 'survey_spec'
]
class Meta:
app_label = 'main'
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=models.SET_NULL,
related_name='workflows',
)
ask_inventory_on_launch = AskForField(
blank=True,
default=False,
@@ -749,9 +771,9 @@ class WorkflowApproval(UnifiedJob, JobNotificationMixin):
def signal_start(self, **kwargs):
can_start = super(WorkflowApproval, self).signal_start(**kwargs)
self.send_approval_notification('running')
self.started = self.created
self.save(update_fields=['started'])
self.send_approval_notification('running')
return can_start
def send_approval_notification(self, approval_status):

View File

@@ -4,15 +4,11 @@
# Python
import json
import logging
import os
import redis
# Django
from django.conf import settings
# Kombu
from awx.main.dispatch.kombu import Connection
from kombu import Exchange, Producer
from kombu.serialization import registry
__all__ = ['CallbackQueueDispatcher']
@@ -28,47 +24,12 @@ class AnsibleJSONEncoder(json.JSONEncoder):
return super(AnsibleJSONEncoder, self).default(o)
registry.register(
'json-ansible',
lambda obj: json.dumps(obj, cls=AnsibleJSONEncoder),
lambda obj: json.loads(obj),
content_type='application/json',
content_encoding='utf-8'
)
class CallbackQueueDispatcher(object):
def __init__(self):
self.callback_connection = getattr(settings, 'BROKER_URL', None)
self.connection_queue = getattr(settings, 'CALLBACK_QUEUE', '')
self.connection = None
self.exchange = None
self.queue = getattr(settings, 'CALLBACK_QUEUE', '')
self.logger = logging.getLogger('awx.main.queue.CallbackQueueDispatcher')
self.connection = redis.Redis.from_url(settings.BROKER_URL)
def dispatch(self, obj):
if not self.callback_connection or not self.connection_queue:
return
active_pid = os.getpid()
for retry_count in range(4):
try:
if not hasattr(self, 'connection_pid'):
self.connection_pid = active_pid
if self.connection_pid != active_pid:
self.connection = None
if self.connection is None:
self.connection = Connection(self.callback_connection)
self.exchange = Exchange(self.connection_queue, type='direct')
producer = Producer(self.connection)
producer.publish(obj,
serializer='json-ansible',
compression='bzip2',
exchange=self.exchange,
declare=[self.exchange],
delivery_mode="persistent" if settings.PERSISTENT_CALLBACK_MESSAGES else "transient",
routing_key=self.connection_queue)
return
except Exception as e:
self.logger.info('Publish Job Event Exception: %r, retry=%d', e,
retry_count, exc_info=True)
self.connection.rpush(self.queue, json.dumps(obj, cls=AnsibleJSONEncoder))
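# a sketch of the consuming side, assuming the same settings: pop events off
# the other end of the Redis list and decode them
#
#     conn = redis.Redis.from_url(settings.BROKER_URL)
#     queue = getattr(settings, 'CALLBACK_QUEUE', '')
#     _, payload = conn.blpop(queue)
#     event = json.loads(payload)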

View File

@@ -1,8 +1,15 @@
from channels.routing import route
from django.conf.urls import url
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from . import consumers
channel_routing = [
route("websocket.connect", "awx.main.consumers.ws_connect", path=r'^/websocket/$'),
route("websocket.disconnect", "awx.main.consumers.ws_disconnect", path=r'^/websocket/$'),
route("websocket.receive", "awx.main.consumers.ws_receive", path=r'^/websocket/$'),
websocket_urlpatterns = [
url(r'websocket/$', consumers.EventConsumer),
url(r'websocket/broadcast/$', consumers.BroadcastConsumer),
]
application = ProtocolTypeRouter({
'websocket': AuthMiddlewareStack(
URLRouter(websocket_urlpatterns)
),
})
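# EventConsumer and BroadcastConsumer live in awx.main.consumers; a minimal
# channels-2-style consumer, as a sketch only, looks roughly like:
#
#     from channels.generic.websocket import AsyncJsonWebsocketConsumer
#
#     class EventConsumer(AsyncJsonWebsocketConsumer):
#         async def connect(self):
#             await self.accept()
#
#         async def receive_json(self, data):
#             ...  # validate and subscribe to the requested groups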

View File

@@ -5,11 +5,12 @@ import logging
# AWX
from awx.main.scheduler import TaskManager
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
logger = logging.getLogger('awx.main.scheduler')
@task()
@task(queue=get_local_queuename)
def run_task_manager():
logger.debug("Running Tower task manager.")
TaskManager().schedule()
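# note: the callable itself is passed as the queue, not its result; the
# dispatch layer can then resolve the queue name when the task is published,
# so each node submits to its own local queue rather than a name frozen at
# import time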

View File

@@ -6,7 +6,6 @@ import contextlib
import logging
import threading
import json
import pkg_resources
import sys
# Django
@@ -157,17 +156,26 @@ def cleanup_detached_labels_on_deleted_parent(sender, instance, **kwargs):
def save_related_job_templates(sender, instance, **kwargs):
'''save_related_job_templates loops through all of the
job templates that use an Inventory or Project that have had their
job templates that use an Inventory that has had its
Organization updated. This triggers the rebuilding of the RBAC hierarchy
and ensures the proper access restrictions.
'''
if sender not in (Project, Inventory):
if sender is not Inventory:
raise ValueError('This signal callback is only intended for use with Inventory')
update_fields = kwargs.get('update_fields', None)
if ((update_fields and not ('organization' in update_fields or 'organization_id' in update_fields)) or
kwargs.get('created', False)):
return
if instance._prior_values_store.get('organization_id') != instance.organization_id:
jtq = JobTemplate.objects.filter(**{sender.__name__.lower(): instance})
for jt in jtq:
update_role_parentage_for_instance(jt)
parents_added, parents_removed = update_role_parentage_for_instance(jt)
if parents_added or parents_removed:
logger.info('Permissions on JT {} changed due to inventory {} organization change from {} to {}.'.format(
jt.pk, instance.pk, instance._prior_values_store.get('organization_id'), instance.organization_id
))
def connect_computed_field_signals():
@@ -183,7 +191,6 @@ def connect_computed_field_signals():
connect_computed_field_signals()
post_save.connect(save_related_job_templates, sender=Project)
post_save.connect(save_related_job_templates, sender=Inventory)
m2m_changed.connect(rebuild_role_ancestor_list, Role.parents.through)
m2m_changed.connect(rbac_activity_stream, Role.members.through)
@@ -585,16 +592,6 @@ def deny_orphaned_approvals(sender, instance, **kwargs):
@receiver(post_save, sender=Session)
def save_user_session_membership(sender, **kwargs):
session = kwargs.get('instance', None)
if pkg_resources.get_distribution('channels').version >= '2':
# If you get into this code block, it means we upgraded channels, but
# didn't make the settings.SESSIONS_PER_USER feature work
raise RuntimeError(
'save_user_session_membership must be updated for channels>=2: '
'http://channels.readthedocs.io/en/latest/one-to-two.html#requirements'
)
if 'runworker' in sys.argv:
# don't track user session membership for websocket per-channel sessions
return
if not session:
return
user_id = session.get_decoded().get(SESSION_KEY, None)

View File

@@ -26,7 +26,7 @@ import urllib.parse as urlparse
# Django
from django.conf import settings
from django.db import transaction, DatabaseError, IntegrityError
from django.db import transaction, DatabaseError, IntegrityError, ProgrammingError, connection
from django.db.models.fields.related import ForeignKey
from django.utils.timezone import now, timedelta
from django.utils.encoding import smart_str
@@ -59,7 +59,7 @@ from awx.main.models import (
Inventory, InventorySource, SmartInventoryMembership,
Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob,
JobEvent, ProjectUpdateEvent, InventoryUpdateEvent, AdHocCommandEvent, SystemJobEvent,
build_safe_env
build_safe_env, enforce_bigint_pk_migration
)
from awx.main.constants import ACTIVE_STATES
from awx.main.exceptions import AwxTaskError
@@ -135,6 +135,12 @@ def dispatch_startup():
if Instance.objects.me().is_controller():
awx_isolated_heartbeat()
# at process startup, detect the need to migrate old event records from int
# to bigint; at *some point* in the future, once certain versions of AWX
# and Tower fall out of use/support, we can probably just _assume_ that
# everybody has moved to bigint, and remove this code entirely
enforce_bigint_pk_migration()
def inform_cluster_of_shutdown():
try:
@@ -151,7 +157,7 @@ def inform_cluster_of_shutdown():
logger.exception('Encountered problem with normal shutdown signal.')
@task()
@task(queue=get_local_queuename)
def apply_cluster_membership_policies():
started_waiting = time.time()
with advisory_lock('cluster_policy_lock', wait=True):
@@ -264,7 +270,7 @@ def apply_cluster_membership_policies():
logger.debug('Cluster policy computation finished in {} seconds'.format(time.time() - started_compute))
@task(queue='tower_broadcast_all', exchange_type='fanout')
@task(queue='tower_broadcast_all')
def handle_setting_changes(setting_keys):
orig_len = len(setting_keys)
for i in range(orig_len):
@@ -275,7 +281,7 @@ def handle_setting_changes(setting_keys):
cache.delete_many(cache_keys)
@task(queue='tower_broadcast_all', exchange_type='fanout')
@task(queue='tower_broadcast_all')
def delete_project_files(project_path):
# TODO: possibly implement some retry logic
lock_file = project_path + '.lock'
@@ -293,7 +299,7 @@ def delete_project_files(project_path):
logger.exception('Could not remove lock file {}'.format(lock_file))
@task(queue='tower_broadcast_all', exchange_type='fanout')
@task(queue='tower_broadcast_all')
def profile_sql(threshold=1, minutes=1):
if threshold == 0:
cache.delete('awx-profile-sql-threshold')
@@ -307,7 +313,7 @@ def profile_sql(threshold=1, minutes=1):
logger.error('SQL QUERIES >={}s ENABLED FOR {} MINUTE(S)'.format(threshold, minutes))
@task()
@task(queue=get_local_queuename)
def send_notifications(notification_list, job_id=None):
if not isinstance(notification_list, list):
raise TypeError("notification_list should be of type list")
@@ -336,7 +342,7 @@ def send_notifications(notification_list, job_id=None):
logger.exception('Error saving notification {} result.'.format(notification.id))
@task()
@task(queue=get_local_queuename)
def gather_analytics():
from awx.conf.models import Setting
from rest_framework.fields import DateTimeField
@@ -489,10 +495,10 @@ def awx_isolated_heartbeat():
# Slow pass looping over isolated IGs and their isolated instances
if len(isolated_instance_qs) > 0:
logger.debug("Managing isolated instances {}.".format(','.join([inst.hostname for inst in isolated_instance_qs])))
isolated_manager.IsolatedManager().health_check(isolated_instance_qs)
isolated_manager.IsolatedManager(CallbackQueueDispatcher.dispatch).health_check(isolated_instance_qs)
@task()
@task(queue=get_local_queuename)
def awx_periodic_scheduler():
with advisory_lock('awx_periodic_scheduler_lock', wait=False) as acquired:
if acquired is False:
@@ -549,7 +555,7 @@ def awx_periodic_scheduler():
state.save()
@task()
@task(queue=get_local_queuename)
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
@@ -562,7 +568,7 @@ def handle_work_success(task_actual):
schedule_task_manager()
@task()
@task(queue=get_local_queuename)
def handle_work_error(task_id, *args, **kwargs):
subtasks = kwargs.get('subtasks', None)
logger.debug('Executing error task id %s, subtasks: %s' % (task_id, str(subtasks)))
@@ -602,7 +608,26 @@ def handle_work_error(task_id, *args, **kwargs):
pass
@task()
@task(queue=get_local_queuename)
def handle_success_and_failure_notifications(job_id):
uj = UnifiedJob.objects.get(pk=job_id)
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
return
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_id)
logger.warn(f"Failed to even try to send notifications for job '{uj}' due to job not being in finished state.")
@task(queue=get_local_queuename)
def update_inventory_computed_fields(inventory_id):
'''
Signal handler and wrapper around inventory.update_computed_fields to
@@ -644,7 +669,7 @@ def update_smart_memberships_for_inventory(smart_inventory):
return False
@task()
@task(queue=get_local_queuename)
def update_host_smart_inventory_memberships():
smart_inventories = Inventory.objects.filter(kind='smart', host_filter__isnull=False, pending_deletion=False)
changed_inventories = set([])
@@ -660,7 +685,49 @@ def update_host_smart_inventory_memberships():
smart_inventory.update_computed_fields()
@task()
@task(queue=get_local_queuename)
def migrate_legacy_event_data(tblname):
if 'event' not in tblname:
return
with advisory_lock(f'bigint_migration_{tblname}', wait=False) as acquired:
if acquired is False:
return
chunk = settings.JOB_EVENT_MIGRATION_CHUNK_SIZE
def _remaining():
try:
cursor.execute(f'SELECT MAX(id) FROM _old_{tblname};')
return cursor.fetchone()[0]
except ProgrammingError:
# the table is gone (migration is unnecessary)
return None
with connection.cursor() as cursor:
total_rows = _remaining()
while total_rows:
with transaction.atomic():
cursor.execute(
f'INSERT INTO {tblname} SELECT * FROM _old_{tblname} ORDER BY id DESC LIMIT {chunk} RETURNING id;'
)
last_insert_pk = cursor.fetchone()
if last_insert_pk is None:
# this means that the SELECT from the old table was
# empty, and there was nothing to insert (so we're done)
break
last_insert_pk = last_insert_pk[0]
cursor.execute(
f'DELETE FROM _old_{tblname} WHERE id IN (SELECT id FROM _old_{tblname} ORDER BY id DESC LIMIT {chunk});'
)
logger.warn(
f'migrated int -> bigint rows to {tblname} from _old_{tblname}; # ({last_insert_pk} rows remaining)'
)
if _remaining() is None:
cursor.execute(f'DROP TABLE IF EXISTS _old_{tblname}')
logger.warn(f'{tblname} primary key migration to bigint has finished')
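# each table migrates independently, so a one-off run for a single table can
# be kicked off the same way dispatch_startup() does it:
#
#     migrate_legacy_event_data.apply_async(['main_jobevent'])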
@task(queue=get_local_queuename)
def delete_inventory(inventory_id, user_id, retries=5):
# Delete inventory as user
if user_id is None:
@@ -1162,7 +1229,6 @@ class BaseTask(object):
except json.JSONDecodeError:
pass
should_write_event = False
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
self.event_ct += 1
@@ -1174,7 +1240,7 @@ class BaseTask(object):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
return should_write_event
return False
def cancel_callback(self):
'''
@@ -1374,6 +1440,7 @@ class BaseTask(object):
if not params[v]:
del params[v]
self.dispatcher = CallbackQueueDispatcher()
if self.instance.is_isolated() or containerized:
module_args = None
if 'module_args' in params:
@@ -1388,6 +1455,7 @@ class BaseTask(object):
ansible_runner.utils.dump_artifacts(params)
isolated_manager_instance = isolated_manager.IsolatedManager(
self.event_handler,
canceled_callback=lambda: self.update_model(self.instance.pk).cancel_flag,
check_callback=self.check_handler,
pod_manager=pod_manager
@@ -1397,11 +1465,9 @@ class BaseTask(object):
params.get('playbook'),
params.get('module'),
module_args,
event_data_key=self.event_data_key,
ident=str(self.instance.pk))
self.event_ct = len(isolated_manager_instance.handled_events)
else:
self.dispatcher = CallbackQueueDispatcher()
res = ansible_runner.interface.run(**params)
status = res.status
rc = res.rc
@@ -1479,7 +1545,7 @@ class BaseTask(object):
@task()
@task(queue=get_local_queuename)
class RunJob(BaseTask):
'''
Run a job using ansible-playbook.
@@ -1912,7 +1978,7 @@ class RunJob(BaseTask):
update_inventory_computed_fields.delay(inventory.id)
@task()
@task(queue=get_local_queuename)
class RunProjectUpdate(BaseTask):
model = ProjectUpdate
@@ -2273,7 +2339,7 @@ class RunProjectUpdate(BaseTask):
# force option is necessary because remote refs are not counted, although no information is lost
git_repo.delete_head(tmp_branch_name, force=True)
else:
copy_tree(project_path, destination_folder)
copy_tree(project_path, destination_folder, preserve_symlinks=1)
def post_run_hook(self, instance, status):
# To avoid hangs, it is very important to release the lock even if errors happen here
@@ -2322,7 +2388,7 @@ class RunProjectUpdate(BaseTask):
return getattr(settings, 'AWX_PROOT_ENABLED', False)
@task()
@task(queue=get_local_queuename)
class RunInventoryUpdate(BaseTask):
model = InventoryUpdate
@@ -2590,7 +2656,7 @@ class RunInventoryUpdate(BaseTask):
)
@task()
@task(queue=get_local_queuename)
class RunAdHocCommand(BaseTask):
'''
Run an ad hoc command using ansible.
@@ -2780,7 +2846,7 @@ class RunAdHocCommand(BaseTask):
isolated_manager_instance.cleanup()
@task()
@task(queue=get_local_queuename)
class RunSystemJob(BaseTask):
model = SystemJob
@@ -2854,11 +2920,16 @@ def _reconstruct_relationships(copy_mapping):
new_obj.save()
@task()
@task(queue=get_local_queuename)
def deep_copy_model_obj(
model_module, model_name, obj_pk, new_obj_pk,
user_pk, sub_obj_list, permission_check_func=None
user_pk, uuid, permission_check_func=None
):
sub_obj_list = cache.get(uuid)
if sub_obj_list is None:
logger.error('Deep copy {} from {} to {} failed unexpectedly.'.format(model_name, obj_pk, new_obj_pk))
return
logger.debug('Deep copy {} from {} to {}.'.format(model_name, obj_pk, new_obj_pk))
from awx.api.generics import CopyAPIView
from awx.main.signals import disable_activity_stream

View File

@@ -39,6 +39,26 @@ def test_extra_credentials(get, organization_factory, job_template_factory, cred
@pytest.mark.django_db
def test_job_relaunch_permission_denied_response(
post, get, inventory, project, credential, net_credential, machine_credential):
jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project, ask_credential_on_launch=True)
jt.credentials.add(machine_credential)
jt_user = User.objects.create(username='jobtemplateuser')
jt.execute_role.members.add(jt_user)
with impersonate(jt_user):
job = jt.create_unified_job()
# User capability is shown for this
r = get(job.get_absolute_url(), jt_user, expect=200)
assert r.data['summary_fields']['user_capabilities']['start']
# Job has prompted extra_credential, launch denied w/ message
job.launch_config.credentials.add(net_credential)
r = post(reverse('api:job_relaunch', kwargs={'pk':job.pk}), {}, jt_user, expect=403)
assert 'launched with prompted fields you do not have access to' in r.data['detail']
@pytest.mark.django_db
def test_job_relaunch_prompts_not_accepted_response(
post, get, inventory, project, credential, net_credential, machine_credential):
jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project)
jt.credentials.add(machine_credential)
jt_user = User.objects.create(username='jobtemplateuser')
@@ -53,8 +73,6 @@ def test_job_relaunch_permission_denied_response(
# Job has prompted extra_credential, launch denied w/ message
job.launch_config.credentials.add(net_credential)
r = post(reverse('api:job_relaunch', kwargs={'pk':job.pk}), {}, jt_user, expect=403)
assert 'launched with prompted fields' in r.data['detail']
assert 'do not have permission' in r.data['detail']
@pytest.mark.django_db
@@ -209,7 +227,8 @@ def test_block_related_unprocessed_events(mocker, organization, project, delete,
status='finished',
finished=time_of_finish,
job_template=job_template,
project=project
project=project,
organization=project.organization
)
view = RelatedJobsPreventDeleteMixin()
time_of_request = time_of_finish + relativedelta(seconds=2)

View File

@@ -6,7 +6,7 @@ import pytest
# AWX
from awx.api.serializers import JobTemplateSerializer
from awx.api.versioning import reverse
from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplate
from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplate, Organization, Project
from awx.main.migrations import _save_password_keys as save_password_keys
# Django
@@ -30,14 +30,19 @@ def test_create(post, project, machine_credential, inventory, alice, grant_proje
project.use_role.members.add(alice)
if grant_inventory:
inventory.use_role.members.add(alice)
project.organization.job_template_admin_role.members.add(alice)
r = post(reverse('api:job_template_list'), {
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml',
}, alice)
assert r.status_code == expect
post(
url=reverse('api:job_template_list'),
data={
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml'
},
user=alice,
expect=expect
)
@pytest.mark.django_db
@@ -123,14 +128,18 @@ def test_create_with_forks_exceeding_maximum_xfail(alice, post, project, invento
project.use_role.members.add(alice)
inventory.use_role.members.add(alice)
settings.MAX_FORKS = 10
response = post(reverse('api:job_template_list'), {
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml',
'forks': 11,
}, alice)
assert response.status_code == 400
response = post(
url=reverse('api:job_template_list'),
data={
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml',
'forks': 11,
},
user=alice,
expect=400
)
assert 'Maximum number of forks (10) exceeded' in str(response.data)
@@ -510,6 +519,72 @@ def test_job_template_unset_custom_virtualenv(get, patch, organization_factory,
assert resp.data['custom_virtualenv'] is None
@pytest.mark.django_db
def test_jt_organization_follows_project(post, patch, admin_user):
org1 = Organization.objects.create(name='foo1')
org2 = Organization.objects.create(name='foo2')
project_common = dict(scm_type='git', playbook_files=['helloworld.yml'])
project1 = Project.objects.create(name='proj1', organization=org1, **project_common)
project2 = Project.objects.create(name='proj2', organization=org2, **project_common)
r = post(
url=reverse('api:job_template_list'),
data={
"name": "fooo",
"ask_inventory_on_launch": True,
"project": project1.pk,
"playbook": "helloworld.yml"
},
user=admin_user,
expect=201
)
data = r.data
assert data['organization'] == project1.organization_id
data['project'] = project2.id
jt = JobTemplate.objects.get(pk=data['id'])
r = patch(
url=jt.get_absolute_url(),
data=data,
user=admin_user,
expect=200
)
assert r.data['organization'] == project2.organization_id
@pytest.mark.django_db
def test_jt_organization_field_is_read_only(patch, post, project, admin_user):
org = project.organization
jt = JobTemplate.objects.create(
name='foo_jt',
ask_inventory_on_launch=True,
project=project, playbook='helloworld.yml'
)
org2 = Organization.objects.create(name='foo2')
r = patch(
url=jt.get_absolute_url(),
data={'organization': org2.id},
user=admin_user,
expect=200
)
assert r.data['organization'] == org.id
assert JobTemplate.objects.get(pk=jt.pk).organization == org
# similar test, but on creation
r = post(
url=reverse('api:job_template_list'),
data={
'name': 'foobar',
'project': project.id,
'organization': org2.id,
'ask_inventory_on_launch': True,
'playbook': 'helloworld.yml'
},
user=admin_user,
expect=201
)
assert r.data['organization'] == org.id
assert JobTemplate.objects.get(pk=r.data['id']).organization == org
@pytest.mark.django_db
def test_callback_disallowed_null_inventory(project):
jt = JobTemplate.objects.create(

View File

@@ -2,6 +2,8 @@ import pytest
from awx.api.versioning import reverse
from awx.main.models import Project
@pytest.fixture
def organization_resource_creator(organization, user):
@@ -19,21 +21,26 @@ def organization_resource_creator(organization, user):
for i in range(inventories):
inventory = organization.inventories.create(name="associated-inv %s" % i)
for i in range(projects):
organization.projects.create(name="test-proj %s" % i,
description="test-proj-desc")
Project.objects.create(
name="test-proj %s" % i,
description="test-proj-desc",
organization=organization
)
# Mix up the inventories and projects used by the job templates
i_proj = 0
i_inv = 0
for i in range(job_templates):
project = organization.projects.all()[i_proj]
project = Project.objects.filter(organization=organization)[i_proj]
inventory = organization.inventories.all()[i_inv]
project.jobtemplates.create(name="test-jt %s" % i,
description="test-job-template-desc",
inventory=inventory,
playbook="test_playbook.yml")
playbook="test_playbook.yml",
organization=organization)
i_proj += 1
i_inv += 1
if i_proj >= organization.projects.count():
if i_proj >= Project.objects.filter(organization=organization).count():
i_proj = 0
if i_inv >= organization.inventories.count():
i_inv = 0
@@ -179,12 +186,14 @@ def test_scan_JT_counted(resourced_organization, user, get):
@pytest.mark.django_db
def test_JT_not_double_counted(resourced_organization, user, get):
admin_user = user('admin', True)
proj = Project.objects.filter(organization=resourced_organization).all()[0]
# Add a run job template to the org
resourced_organization.projects.all()[0].jobtemplates.create(
proj.jobtemplates.create(
job_type='run',
inventory=resourced_organization.inventories.all()[0],
project=resourced_organization.projects.all()[0],
name='double-linked-job-template')
project=proj,
name='double-linked-job-template',
organization=resourced_organization)
counts_dict = COUNTS_PRIMES
counts_dict['job_templates'] += 1
@@ -197,38 +206,3 @@ def test_JT_not_double_counted(resourced_organization, user, get):
detail_response = get(reverse('api:organization_detail', kwargs={'pk': resourced_organization.pk}), admin_user)
assert detail_response.status_code == 200
assert detail_response.data['summary_fields']['related_field_counts'] == counts_dict
@pytest.mark.django_db
def test_JT_associated_with_project(organizations, project, user, get):
# Check that adding a project to an organization gets the project's JT
# included in the organization's JT count
external_admin = user('admin', True)
two_orgs = organizations(2)
organization = two_orgs[0]
other_org = two_orgs[1]
unrelated_inv = other_org.inventories.create(name='not-in-organization')
organization.projects.add(project)
project.jobtemplates.create(name="test-jt",
description="test-job-template-desc",
inventory=unrelated_inv,
playbook="test_playbook.yml")
response = get(reverse('api:organization_list'), external_admin)
assert response.status_code == 200
org_id = organization.id
counts = {}
for org_json in response.data['results']:
working_id = org_json['id']
counts[working_id] = org_json['summary_fields']['related_field_counts']
assert counts[org_id] == {
'users': 0,
'admins': 0,
'job_templates': 1,
'projects': 1,
'inventories': 0,
'teams': 0
}

View File

@@ -8,7 +8,6 @@ import os
import time
from django.conf import settings
from kombu.utils.url import parse_url
# Mock
from unittest import mock
@@ -386,15 +385,3 @@ def test_saml_x509cert_validation(patch, get, admin, headers):
}
})
assert resp.status_code == 200
@pytest.mark.django_db
def test_broker_url_with_special_characters():
settings.BROKER_URL = 'amqp://guest:a@ns:ibl3#@rabbitmq:5672//'
url = parse_url(settings.BROKER_URL)
assert url['transport'] == 'amqp'
assert url['hostname'] == 'rabbitmq'
assert url['port'] == 5672
assert url['userid'] == 'guest'
assert url['password'] == 'a@ns:ibl3#'
assert url['virtual_host'] == '/'

View File

@@ -1,6 +1,7 @@
import pytest
from awx.api.versioning import reverse
from awx.main import models
@pytest.mark.django_db
@@ -9,3 +10,76 @@ def test_aliased_forward_reverse_field_searches(instance, options, get, admin):
response = options(url, None, admin)
assert 'job_template__search' in response.data['related_search_fields']
get(reverse("api:unified_job_template_list") + "?job_template__search=anything", user=admin, expect=200)
@pytest.mark.django_db
@pytest.mark.parametrize('model', (
'Project',
'JobTemplate',
'WorkflowJobTemplate'
))
class TestUnifiedOrganization:
def data_for_model(self, model, orm_style=False):
data = {
'name': 'foo',
'organization': None
}
if model == 'JobTemplate':
proj = models.Project.objects.create(
name="test-proj",
playbook_files=['helloworld.yml']
)
if orm_style:
data['project_id'] = proj.id
else:
data['project'] = proj.id
data['playbook'] = 'helloworld.yml'
data['ask_inventory_on_launch'] = True
return data
def test_organization_blank_on_edit_of_orphan(self, model, admin_user, patch):
cls = getattr(models, model)
data = self.data_for_model(model, orm_style=True)
obj = cls.objects.create(**data)
patch(
url=obj.get_absolute_url(),
data={'name': 'foooooo'},
user=admin_user,
expect=200
)
obj.refresh_from_db()
assert obj.name == 'foooooo'
def test_organization_blank_on_edit_of_orphan_as_nonsuperuser(self, model, rando, patch):
"""Test case reflects historical bug where ordinary users got weird error
message when editing an orphaned project
"""
cls = getattr(models, model)
data = self.data_for_model(model, orm_style=True)
obj = cls.objects.create(**data)
if model == 'JobTemplate':
obj.project.admin_role.members.add(rando)
obj.admin_role.members.add(rando)
patch(
url=obj.get_absolute_url(),
data={'name': 'foooooo'},
user=rando,
expect=200
)
obj.refresh_from_db()
assert obj.name == 'foooooo'
def test_organization_blank_on_edit_of_normal(self, model, admin_user, patch, organization):
cls = getattr(models, model)
data = self.data_for_model(model, orm_style=True)
data['organization'] = organization
obj = cls.objects.create(**data)
patch(
url=obj.get_absolute_url(),
data={'name': 'foooooo'},
user=admin_user,
expect=200
)
obj.refresh_from_db()
assert obj.name == 'foooooo'

View File

@@ -2,6 +2,7 @@ import pytest
from django.contrib.sessions.middleware import SessionMiddleware
from awx.main.models import User
from awx.api.versioning import reverse
@@ -48,3 +49,15 @@ def test_create_delete_create_user(post, delete, admin):
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin, middleware=SessionMiddleware())
print(response.data)
assert response.status_code == 201
@pytest.mark.django_db
def test_user_cannot_update_last_login(patch, admin):
assert admin.last_login is None
patch(
reverse('api:user_detail', kwargs={'pk': admin.pk}),
{'last_login': '2020-03-13T16:39:47.303016Z'},
admin,
middleware=SessionMiddleware()
)
assert User.objects.get(pk=admin.pk).last_login is None

View File

@@ -0,0 +1,179 @@
import pytest
from datetime import datetime, timedelta
from pytz import timezone
from collections import OrderedDict
from django.db.models.deletion import Collector, SET_NULL, CASCADE
from django.core.management import call_command
from awx.main.management.commands.deletion import AWXCollector
from awx.main.models import (
JobTemplate, User, Job, JobEvent, Notification,
WorkflowJobNode, JobHostSummary
)
@pytest.fixture
def setup_environment(inventory, project, machine_credential, host, notification_template, label):
'''
Create old jobs and new jobs, with various other objects to hit the
related fields of Jobs. This makes sure on_delete() effects are tested
properly.
'''
old_jobs = []
new_jobs = []
days = 10
days_str = str(days)
jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project)
jt.credentials.add(machine_credential)
jt_user = User.objects.create(username='jobtemplateuser')
jt.execute_role.members.add(jt_user)
notification = Notification()
notification.notification_template = notification_template
notification.save()
for i in range(3):
job1 = jt.create_job()
job1.created = datetime.now(tz=timezone('UTC'))
job1.save()
# create jobs with current time
JobEvent.create_from_data(job_id=job1.pk, uuid='abc123', event='runner_on_start',
stdout='a' * 1025).save()
new_jobs.append(job1)
job2 = jt.create_job()
# create jobs 10 days ago
job2.created = datetime.now(tz=timezone('UTC')) - timedelta(days=days)
job2.save()
job2.dependent_jobs.add(job1)
JobEvent.create_from_data(job_id=job2.pk, uuid='abc123', event='runner_on_start',
stdout='a' * 1025).save()
old_jobs.append(job2)
jt.last_job = job2
jt.current_job = job2
jt.save()
host.last_job = job2
host.save()
notification.unifiedjob_notifications.add(job2)
label.unifiedjob_labels.add(job2)
jn = WorkflowJobNode.objects.create(job=job2)
jn.save()
jh = JobHostSummary.objects.create(job=job2)
jh.save()
return (old_jobs, new_jobs, days_str)
@pytest.mark.django_db
def test_cleanup_jobs(setup_environment):
(old_jobs, new_jobs, days_str) = setup_environment
# related_fields
related = [f for f in Job._meta.get_fields(include_hidden=True)
if f.auto_created and not
f.concrete and
(f.one_to_one or f.one_to_many)]
job = old_jobs[-1] # last job
# gather related objects for job
related_should_be_removed = {}
related_should_be_null = {}
for r in related:
qs = r.related_model._base_manager.using('default').filter(
**{"%s__in" % r.field.name: [job.pk]}
)
if qs.exists():
if r.field.remote_field.on_delete == CASCADE:
related_should_be_removed[qs.model] = set(qs.values_list('pk', flat=True))
if r.field.remote_field.on_delete == SET_NULL:
related_should_be_null[(qs.model,r.field.name)] = set(qs.values_list('pk', flat=True))
assert related_should_be_removed
assert related_should_be_null
call_command('cleanup_jobs', '--days', days_str)
# make sure old jobs are removed
assert not Job.objects.filter(pk__in=[obj.pk for obj in old_jobs]).exists()
# make sure new jobs are untouched
assert len(new_jobs) == Job.objects.filter(pk__in=[obj.pk for obj in new_jobs]).count()
# make sure related objects are destroyed or set to NULL (none)
for model, values in related_should_be_removed.items():
assert not model.objects.filter(pk__in=values).exists()
for (model,fieldname), values in related_should_be_null.items():
for v in values:
assert not getattr(model.objects.get(pk=v), fieldname)
@pytest.mark.django_db
def test_awxcollector(setup_environment):
'''
Efforts to improve the performance of cleanup_jobs involved
sub-classing the django Collector class. This unit test will
check for parity between the django Collector and the modified
AWXCollector class. AWXCollector is used in cleanup_jobs to
bulk-delete old jobs from the database.
Specifically, Collector has four dictionaries to check:
.dependencies, .data, .fast_deletes, and .field_updates
These tests will convert each dictionary from AWXCollector
(after running .collect on jobs), from querysets to sets of
objects. The final result should be a dictionary that is
equivalent to django's Collector.
'''
(old_jobs, new_jobs, days_str) = setup_environment
collector = Collector('default')
collector.collect(old_jobs)
awx_col = AWXCollector('default')
# awx_col accepts a queryset as input
awx_col.collect(Job.objects.filter(pk__in=[obj.pk for obj in old_jobs]))
# check that dependencies are the same
assert awx_col.dependencies == collector.dependencies
# check that objects to delete are the same
awx_del_dict = OrderedDict()
for model, instances in awx_col.data.items():
awx_del_dict.setdefault(model, set())
for inst in instances:
# .update() will put each object in a queryset into the set
awx_del_dict[model].update(inst)
assert awx_del_dict == collector.data
# check that field updates are the same
awx_del_dict = OrderedDict()
for model, instances_for_fieldvalues in awx_col.field_updates.items():
awx_del_dict.setdefault(model, {})
for (field, value), instances in instances_for_fieldvalues.items():
awx_del_dict[model].setdefault((field,value), set())
for inst in instances:
awx_del_dict[model][(field,value)].update(inst)
# Collector's field_updates don't use the base (polymorphic parent) model,
# e.g. it uses JobTemplate instead of UnifiedJobTemplate. Therefore,
# we need to rebuild the dictionary and grab the model from the field
collector_del_dict = OrderedDict()
for model, instances_for_fieldvalues in collector.field_updates.items():
for (field,value), instances in instances_for_fieldvalues.items():
collector_del_dict.setdefault(field.model, {})
collector_del_dict[field.model][(field, value)] = collector.field_updates[model][(field,value)]
assert awx_del_dict == collector_del_dict
# check that fast deletes are the same
collector_fast_deletes = set()
for q in collector.fast_deletes:
collector_fast_deletes.update(q)
awx_col_fast_deletes = set()
for q in awx_col.fast_deletes:
awx_col_fast_deletes.update(q)
assert collector_fast_deletes == awx_col_fast_deletes
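Both data structures above collapse to the same shape; a small helper (our name, not part of this diff) that tolerates either a set of instances (django's Collector) or a set of querysets (AWXCollector):

from collections import OrderedDict

def normalize_collector_data(data):
    # {model: instances-or-querysets} -> {model: set of instances}
    normalized = OrderedDict()
    for model, instances in data.items():
        normalized.setdefault(model, set())
        for inst in instances:
            try:
                normalized[model].update(inst)  # a queryset: unpack it
            except TypeError:
                normalized[model].add(inst)     # a lone model instance
    return normalized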

View File

@@ -180,8 +180,8 @@ def project_factory(organization):
@pytest.fixture
def job_factory(job_template, admin):
def factory(job_template=job_template, initial_state='new', created_by=admin):
def job_factory(jt_linked, admin):
def factory(job_template=jt_linked, initial_state='new', created_by=admin):
return job_template.create_unified_job(_eager_fields={
'status': initial_state, 'created_by': created_by})
return factory
@@ -701,11 +701,8 @@ def ad_hoc_command_factory(inventory, machine_credential, admin):
@pytest.fixture
def job_template(organization):
jt = JobTemplate(name='test-job_template')
jt.save()
return jt
def job_template():
return JobTemplate.objects.create(name='test-job_template')
@pytest.fixture
@@ -717,20 +714,16 @@ def job_template_labels(organization, job_template):
@pytest.fixture
def jt_linked(job_template_factory, credential, net_credential, vault_credential):
def jt_linked(organization, project, inventory, machine_credential, credential, net_credential, vault_credential):
'''
A job template with a reasonably complete set of related objects to
test RBAC and other functionality affected by related objects
'''
objects = job_template_factory(
'testJT', organization='org1', project='proj1', inventory='inventory1',
credential='cred1')
jt = objects.job_template
jt.credentials.add(vault_credential)
jt.save()
# Add AWS cloud credential and network credential
jt.credentials.add(credential)
jt.credentials.add(net_credential)
jt = JobTemplate.objects.create(
project=project, inventory=inventory, playbook='helloworld.yml',
organization=organization
)
jt.credentials.add(machine_credential, vault_credential, credential, net_credential)
return jt

View File

@@ -12,6 +12,7 @@ from awx.main.models import (
CredentialType,
Inventory,
InventorySource,
Project,
User
)
@@ -99,8 +100,8 @@ class TestRolesAssociationEntries:
).count() == 1, 'In loop %s' % i
def test_model_associations_are_recorded(self, organization):
proj1 = organization.projects.create(name='proj1')
proj2 = organization.projects.create(name='proj2')
proj1 = Project.objects.create(name='proj1', organization=organization)
proj2 = Project.objects.create(name='proj2', organization=organization)
proj2.use_role.parents.add(proj1.admin_role)
assert ActivityStream.objects.filter(role=proj1.admin_role, project=proj2).count() == 1

View File

@@ -1,6 +1,9 @@
import pytest
from awx.main.models import JobTemplate, Job, JobHostSummary, WorkflowJob, Inventory
from awx.main.models import (
JobTemplate, Job, JobHostSummary,
WorkflowJob, Inventory, Project, Organization
)
@pytest.mark.django_db
@@ -29,18 +32,19 @@ def test_prevent_slicing():
@pytest.mark.django_db
def test_awx_custom_virtualenv(inventory, project, machine_credential):
def test_awx_custom_virtualenv(inventory, project, machine_credential, organization):
jt = JobTemplate.objects.create(
name='my-jt',
inventory=inventory,
project=project,
playbook='helloworld.yml'
playbook='helloworld.yml',
organization=organization
)
jt.credentials.add(machine_credential)
job = jt.create_unified_job()
job.project.organization.custom_virtualenv = '/venv/fancy-org'
job.project.organization.save()
job.organization.custom_virtualenv = '/venv/fancy-org'
job.organization.save()
assert job.ansible_virtualenv_path == '/venv/fancy-org'
job.project.custom_virtualenv = '/venv/fancy-proj'
@@ -78,6 +82,22 @@ def test_job_host_summary_representation(host):
assert 'N/A changed=1 dark=2 failures=3 ignored=4 ok=5 processed=6 rescued=7 skipped=8' == str(jhs)
@pytest.mark.django_db
def test_jt_organization_follows_project():
org1 = Organization.objects.create(name='foo1')
org2 = Organization.objects.create(name='foo2')
project1 = Project.objects.create(name='proj1', organization=org1)
project2 = Project.objects.create(name='proj2', organization=org2)
jt = JobTemplate.objects.create(
name='foo', playbook='helloworld.yml',
project=project1
)
assert jt.organization == org1
jt.project = project2
jt.save()
assert JobTemplate.objects.get(pk=jt.id).organization == org2
@pytest.mark.django_db
class TestSlicingModels:

View File

@@ -39,6 +39,7 @@ class TestJobNotificationMixin(object):
'modified': datetime.datetime,
'name': str,
'playbook': str,
'scm_branch': str,
'scm_revision': str,
'skip_tags': str,
'start_at_task': str,

View File

@@ -39,3 +39,9 @@ def test_foreign_key_change_changes_modified_by(project, organization):
assert project._get_fields_snapshot()['organization_id'] == organization.id
project.organization = Organization(name='foo', pk=41)
assert project._get_fields_snapshot()['organization_id'] == 41
@pytest.mark.django_db
def test_project_related_jobs(project):
update = project.create_unified_job()
assert update.id in [u.id for u in project._get_related_jobs()]

View File

@@ -103,7 +103,8 @@ class TestComputedFields:
Schedule.objects.filter(pk=s.pk).update(next_run=old_next_run)
s.next_run = old_next_run
prior_modified = s.modified
s.update_computed_fields()
with mock.patch('awx.main.models.schedules.emit_channel_notification'):
s.update_computed_fields()
assert s.next_run != old_next_run
assert s.modified == prior_modified
@@ -133,7 +134,8 @@ class TestComputedFields:
assert s.next_run is None
assert job_template.next_schedule is None
s.rrule = self.distant_rrule
s.update_computed_fields()
with mock.patch('awx.main.models.schedules.emit_channel_notification'):
s.update_computed_fields()
assert s.next_run is not None
assert job_template.next_schedule == s

View File

@@ -13,6 +13,7 @@ from awx.main.models import (
WorkflowApprovalTemplate, Project, WorkflowJob, Schedule,
Credential
)
from awx.api.versioning import reverse
@pytest.mark.django_db
@@ -26,6 +27,29 @@ def test_subclass_types(rando):
])
@pytest.mark.django_db
def test_soft_unique_together(post, project, admin_user):
"""This tests that SOFT_UNIQUE_TOGETHER restrictions are applied correctly.
"""
jt1 = JobTemplate.objects.create(
name='foo_jt',
project=project
)
assert jt1.organization == project.organization
r = post(
url=reverse('api:job_template_list'),
data=dict(
name='foo_jt', # same as first
project=project.id,
ask_inventory_on_launch=True,
playbook='helloworld.yml'
),
user=admin_user,
expect=400
)
assert 'combination already exists' in str(r.data)
@pytest.mark.django_db
class TestCreateUnifiedJob:
'''

View File

@@ -15,6 +15,15 @@ from awx.main.dispatch.publish import task
from awx.main.dispatch.worker import BaseWorker, TaskWorker
'''
Prevent logger.<warn, debug, error> calls from triggering database operations
'''
@pytest.fixture(autouse=True)
def _disable_database_settings(mocker):
m = mocker.patch('awx.conf.settings.SettingsWrapper.all_supported_settings', new_callable=mock.PropertyMock)
m.return_value = []
def restricted(a, b):
raise AssertionError("This code should not run because it isn't decorated with @task")
@@ -324,22 +333,23 @@ class TestTaskPublisher:
assert Adder().run(2, 2) == 4
def test_function_apply_async(self):
message, queue = add.apply_async([2, 2])
message, queue = add.apply_async([2, 2], queue='foobar')
assert message['args'] == [2, 2]
assert message['kwargs'] == {}
assert message['task'] == 'awx.main.tests.functional.test_dispatch.add'
assert queue == 'awx_private_queue'
assert queue == 'foobar'
def test_method_apply_async(self):
message, queue = Adder.apply_async([2, 2])
message, queue = Adder.apply_async([2, 2], queue='foobar')
assert message['args'] == [2, 2]
assert message['kwargs'] == {}
assert message['task'] == 'awx.main.tests.functional.test_dispatch.Adder'
assert queue == 'awx_private_queue'
assert queue == 'foobar'
def test_apply_with_queue(self):
message, queue = add.apply_async([2, 2], queue='abc123')
assert queue == 'abc123'
def test_apply_async_queue_required(self):
with pytest.raises(ValueError) as e:
message, queue = add.apply_async([2, 2])
assert "awx.main.tests.functional.test_dispatch.add: Queue value required and may not be None" == e.value.args[0]
def test_queue_defined_in_task_decorator(self):
message, queue = multiply.apply_async([2, 2])

View File

@@ -1,7 +1,7 @@
import pytest
from unittest import mock
from awx.main.models import AdHocCommand, InventoryUpdate, Job, JobTemplate, ProjectUpdate
from awx.main.models import AdHocCommand, InventoryUpdate, JobTemplate, ProjectUpdate
from awx.main.models.ha import Instance, InstanceGroup
from awx.main.tasks import apply_cluster_membership_policies
from awx.api.versioning import reverse
@@ -310,7 +310,7 @@ class TestInstanceGroupOrdering:
assert iu.preferred_instance_groups == [ig_inv, ig_org]
def test_project_update_instance_groups(self, instance_group_factory, project, default_instance_group):
pu = ProjectUpdate.objects.create(project=project)
pu = ProjectUpdate.objects.create(project=project, organization=project.organization)
assert pu.preferred_instance_groups == [default_instance_group]
ig_org = instance_group_factory("OrgIstGrp", [default_instance_group.instances.first()])
ig_tmp = instance_group_factory("TmpIstGrp", [default_instance_group.instances.first()])
@@ -321,7 +321,7 @@ class TestInstanceGroupOrdering:
def test_job_instance_groups(self, instance_group_factory, inventory, project, default_instance_group):
jt = JobTemplate.objects.create(inventory=inventory, project=project)
job = Job.objects.create(inventory=inventory, job_template=jt, project=project)
job = jt.create_unified_job()
assert job.preferred_instance_groups == [default_instance_group]
ig_org = instance_group_factory("OrgIstGrp", [default_instance_group.instances.first()])
ig_inv = instance_group_factory("InvIstGrp", [default_instance_group.instances.first()])

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
import pytest
from django.core.exceptions import ImproperlyConfigured
@@ -26,7 +27,7 @@ def setup_module(module):
def teardown_module(module):
# settings_registry state persists unless we explicitly clean it up.
settings_registry.unregister('NAMED_URL_FORMATS')
settings_registry.unregister('NAMED_URL_GRAPH_NODES')
@@ -58,10 +59,25 @@ def test_organization(get, admin_user):
@pytest.mark.django_db
def test_job_template(get, admin_user):
test_jt = JobTemplate.objects.create(name='test_jt')
test_org = Organization.objects.create(name='test_org')
test_jt = JobTemplate.objects.create(name='test_jt', organization=test_org)
url = reverse('api:job_template_detail', kwargs={'pk': test_jt.pk})
response = get(url, user=admin_user, expect=200)
assert response.data['related']['named_url'].endswith('/test_jt/')
assert response.data['related']['named_url'].endswith('/test_jt++test_org/')
@pytest.mark.django_db
def test_job_template_old_way(get, admin_user, mocker):
test_org = Organization.objects.create(name='test_org')
test_jt = JobTemplate.objects.create(name='test_jt ♥', organization=test_org)
url = reverse('api:job_template_detail', kwargs={'pk': test_jt.pk})
response = get(url, user=admin_user, expect=200)
new_url = response.data['related']['named_url']
old_url = '/'.join([url.rsplit('/', 2)[0], test_jt.name, ''])
assert URLModificationMiddleware._convert_named_url(new_url) == url
assert URLModificationMiddleware._convert_named_url(old_url) == url
@pytest.mark.django_db

View File

@@ -213,34 +213,14 @@ def test_project_credential_protection(post, put, project, organization, scm_cre
}, org_admin, expect=403
)
post(
reverse('api:project_list'), {
'name': 'should not create',
'organization':organization.id,
'credential': scm_credential.id
}, org_admin, expect=403
)
@pytest.mark.django_db()
def test_create_project_null_organization(post, organization, admin):
post(reverse('api:project_list'), { 'name': 't', 'organization': None}, admin, expect=201)
@pytest.mark.django_db()
def test_create_project_null_organization_xfail(post, organization, org_admin):
post(reverse('api:project_list'), { 'name': 't', 'organization': None}, org_admin, expect=403)
@pytest.mark.django_db()
def test_patch_project_null_organization(patch, organization, project, admin):
patch(reverse('api:project_detail', kwargs={'pk':project.id,}), { 'name': 't', 'organization': organization.id}, admin, expect=200)
@pytest.mark.django_db()
def test_patch_project_null_organization_xfail(patch, project, org_admin):
patch(reverse('api:project_detail', kwargs={'pk':project.id,}), { 'name': 't', 'organization': None}, org_admin, expect=400)
@pytest.mark.django_db
def test_cannot_schedule_manual_project(manual_project, admin_user, post):
response = post(

View File

@@ -1,5 +1,7 @@
import pytest
from rest_framework.exceptions import PermissionDenied
from awx.main.access import (
JobAccess,
JobLaunchConfigAccess,
@@ -19,8 +21,6 @@ from awx.main.models import (
Credential
)
from rest_framework.exceptions import PermissionDenied
from crum import impersonate
@@ -29,7 +29,8 @@ def normal_job(deploy_jobtemplate):
return Job.objects.create(
job_template=deploy_jobtemplate,
project=deploy_jobtemplate.project,
inventory=deploy_jobtemplate.inventory
inventory=deploy_jobtemplate.inventory,
organization=deploy_jobtemplate.organization
)
@@ -170,9 +171,11 @@ class TestJobRelaunchAccess:
machine_credential.use_role.members.add(u)
access = JobAccess(u)
assert access.can_start(job_with_links, validate_license=False) == can_start, (
"Inventory access: {}\nCredential access: {}\n Expected access: {}".format(inv_access, cred_access, can_start)
)
if can_start:
assert access.can_start(job_with_links, validate_license=False)
else:
with pytest.raises(PermissionDenied):
access.can_start(job_with_links, validate_license=False)
def test_job_relaunch_credential_access(
self, inventory, project, credential, net_credential):
@@ -187,7 +190,8 @@ class TestJobRelaunchAccess:
# Job has prompted net credential, launch denied w/ message
job = jt.create_unified_job(credentials=[net_credential])
assert not jt_user.can_access(Job, 'start', job, validate_license=False)
with pytest.raises(PermissionDenied):
jt_user.can_access(Job, 'start', job, validate_license=False)
def test_prompted_credential_relaunch_denied(
self, inventory, project, net_credential, rando):
@@ -200,7 +204,8 @@ class TestJobRelaunchAccess:
# Job has prompted net credential, rando lacks permission to use it
job = jt.create_unified_job(credentials=[net_credential])
assert not rando.can_access(Job, 'start', job, validate_license=False)
with pytest.raises(PermissionDenied):
rando.can_access(Job, 'start', job, validate_license=False)
def test_prompted_credential_relaunch_allowed(
self, inventory, project, net_credential, rando):

View File

@@ -1,5 +1,7 @@
import pytest
from rest_framework.exceptions import PermissionDenied
from awx.main.models.inventory import Inventory
from awx.main.models.credential import Credential
from awx.main.models.jobs import JobTemplate, Job
@@ -89,8 +91,8 @@ def test_slice_job(slice_job_factory, rando):
@pytest.mark.django_db
class TestJobRelaunchAccess:
@pytest.fixture
def job_no_prompts(self, machine_credential, inventory):
jt = JobTemplate.objects.create(name='test-job_template', inventory=inventory)
def job_no_prompts(self, machine_credential, inventory, organization):
jt = JobTemplate.objects.create(name='test-job_template', inventory=inventory, organization=organization)
jt.credentials.add(machine_credential)
return jt.create_unified_job()
@@ -119,10 +121,20 @@ class TestJobRelaunchAccess:
job_no_prompts.job_template.execute_role.members.add(rando)
assert rando.can_access(Job, 'start', job_no_prompts)
def test_orphan_relaunch_via_organization(self, job_no_prompts, rando, organization):
"JT for job has been deleted, relevant organization roles will allow management"
assert job_no_prompts.organization == organization
organization.execute_role.members.add(rando)
job_no_prompts.job_template.delete()
job_no_prompts.job_template = None # Django should do this for us, but it does not
assert rando.can_access(Job, 'start', job_no_prompts)
def test_no_relaunch_without_prompted_fields_access(self, job_with_prompts, rando):
"Has JT execute_role but no use_role on inventory & credential - deny relaunch"
job_with_prompts.job_template.execute_role.members.add(rando)
assert not rando.can_access(Job, 'start', job_with_prompts)
with pytest.raises(PermissionDenied) as exc:
rando.can_access(Job, 'start', job_with_prompts)
assert 'Job was launched with prompted fields you do not have access to' in str(exc)
def test_can_relaunch_with_prompted_fields_access(self, job_with_prompts, rando):
"Has use_role on the prompted inventory & credential - allow relaunch"
@@ -141,11 +153,15 @@ class TestJobRelaunchAccess:
jt.ask_limit_on_launch = False
jt.save()
jt.execute_role.members.add(rando)
assert not rando.can_access(Job, 'start', job_with_prompts)
with pytest.raises(PermissionDenied):
rando.can_access(Job, 'start', job_with_prompts)
def test_can_relaunch_if_limit_was_prompt(self, job_with_prompts, rando):
"Job state differs from JT, but only on prompted fields - allow relaunch"
job_with_prompts.job_template.execute_role.members.add(rando)
job_with_prompts.limit = 'webservers'
job_with_prompts.save()
assert not rando.can_access(Job, 'start', job_with_prompts)
job_with_prompts.inventory.use_role.members.add(rando)
for cred in job_with_prompts.credentials.all():
cred.use_role.members.add(rando)
assert rando.can_access(Job, 'start', job_with_prompts)

View File

@@ -8,8 +8,7 @@ from awx.main.access import (
ScheduleAccess
)
from awx.main.models.jobs import JobTemplate
from awx.main.models.organization import Organization
from awx.main.models.schedules import Schedule
from awx.main.models import Project, Organization, Inventory, Schedule, User
@mock.patch.object(BaseAccess, 'check_license', return_value=None)
@@ -24,6 +23,29 @@ def test_job_template_access_superuser(check_license, user, deploy_jobtemplate):
assert access.can_add({})
@pytest.mark.django_db
class TestImplicitAccess:
def test_org_execute(self, jt_linked, rando):
assert rando not in jt_linked.execute_role
jt_linked.organization.execute_role.members.add(rando)
assert rando in jt_linked.execute_role
def test_org_admin(self, jt_linked, rando):
assert rando not in jt_linked.execute_role
jt_linked.organization.job_template_admin_role.members.add(rando)
assert rando in jt_linked.execute_role
def test_org_auditor(self, jt_linked, rando):
assert rando not in jt_linked.read_role
jt_linked.organization.auditor_role.members.add(rando)
assert rando in jt_linked.read_role
def test_deprecated_inventory_read(self, jt_linked, rando):
assert rando not in jt_linked.read_role
jt_linked.inventory.organization.execute_role.members.add(rando)
assert rando in jt_linked.read_role
@pytest.mark.django_db
def test_job_template_access_read_level(jt_linked, rando):
ssh_cred = jt_linked.machine_credential
@@ -45,22 +67,21 @@ def test_job_template_access_read_level(jt_linked, rando):
@pytest.mark.django_db
def test_job_template_access_use_level(jt_linked, rando):
ssh_cred = jt_linked.machine_credential
vault_cred = jt_linked.vault_credentials[0]
access = JobTemplateAccess(rando)
jt_linked.project.use_role.members.add(rando)
jt_linked.inventory.use_role.members.add(rando)
ssh_cred.use_role.members.add(rando)
vault_cred.use_role.members.add(rando)
jt_linked.organization.job_template_admin_role.members.add(rando)
proj_pk = jt_linked.project.pk
assert access.can_add(dict(inventory=jt_linked.inventory.pk, project=proj_pk))
assert access.can_add(dict(credential=ssh_cred.pk, project=proj_pk))
assert access.can_add(dict(vault_credential=vault_cred.pk, project=proj_pk))
org_pk = jt_linked.organization_id
assert access.can_change(jt_linked, {'job_type': 'check', 'project': proj_pk})
assert access.can_change(jt_linked, {'job_type': 'check', 'inventory': None})
for cred in jt_linked.credentials.all():
assert not access.can_unattach(jt_linked, cred, 'credentials', {})
assert access.can_unattach(jt_linked, cred, 'credentials', {})
assert access.can_add(dict(inventory=jt_linked.inventory.pk, project=proj_pk, organization=org_pk))
assert access.can_add(dict(project=proj_pk, organization=org_pk))
@pytest.mark.django_db
@@ -69,22 +90,21 @@ def test_job_template_access_admin(role_names, jt_linked, rando):
ssh_cred = jt_linked.machine_credential
access = JobTemplateAccess(rando)
# Appoint this user as admin of the organization
#jt_linked.inventory.organization.admin_role.members.add(rando)
assert not access.can_read(jt_linked)
assert not access.can_delete(jt_linked)
for role_name in role_names:
role = getattr(jt_linked.inventory.organization, role_name)
role.members.add(rando)
# Appoint this user as admin of the organization
jt_linked.organization.admin_role.members.add(rando)
org_pk = jt_linked.organization.id
# Assign organization permission in the same way the create view does
organization = jt_linked.inventory.organization
ssh_cred.admin_role.parents.add(organization.admin_role)
proj_pk = jt_linked.project.pk
assert access.can_add(dict(inventory=jt_linked.inventory.pk, project=proj_pk))
assert access.can_add(dict(credential=ssh_cred.pk, project=proj_pk))
assert access.can_add(dict(inventory=jt_linked.inventory.pk, project=proj_pk, organization=org_pk))
assert access.can_add(dict(credential=ssh_cred.pk, project=proj_pk, organization=org_pk))
for cred in jt_linked.credentials.all():
assert access.can_unattach(jt_linked, cred, 'credentials', {})
@@ -105,11 +125,11 @@ def test_job_template_extra_credentials_prompts_access(
)
jt.credentials.add(machine_credential)
jt.execute_role.members.add(rando)
r = post(
post(
reverse('api:job_template_launch', kwargs={'pk': jt.id}),
{'credentials': [machine_credential.pk, vault_credential.pk]}, rando
{'credentials': [machine_credential.pk, vault_credential.pk]}, rando,
expect=403
)
assert r.status_code == 403
@pytest.mark.django_db
@@ -148,26 +168,41 @@ class TestOrphanJobTemplate:
@pytest.mark.django_db
@pytest.mark.job_permissions
def test_job_template_creator_access(project, rando, post):
def test_job_template_creator_access(project, organization, rando, post):
project.use_role.members.add(rando)
organization.job_template_admin_role.members.add(rando)
response = post(url=reverse('api:job_template_list'), data=dict(
name='newly-created-jt',
ask_inventory_on_launch=True,
project=project.pk,
organization=organization.id,
playbook='helloworld.yml'
), user=rando, expect=201)
project.admin_role.members.add(rando)
with mock.patch(
'awx.main.models.projects.ProjectOptions.playbooks',
new_callable=mock.PropertyMock(return_value=['helloworld.yml'])):
response = post(reverse('api:job_template_list'), dict(
name='newly-created-jt',
job_type='run',
ask_inventory_on_launch=True,
ask_credential_on_launch=True,
project=project.pk,
playbook='helloworld.yml'
), rando)
assert response.status_code == 201
jt_pk = response.data['id']
jt_obj = JobTemplate.objects.get(pk=jt_pk)
# Creating a JT should place the creator in the admin role
assert rando in jt_obj.admin_role
assert rando in jt_obj.admin_role.members.all()
@pytest.mark.django_db
@pytest.mark.job_permissions
@pytest.mark.parametrize('lacking', ['project', 'inventory'])
def test_job_template_insufficient_creator_permissions(lacking, project, inventory, organization, rando, post):
if lacking != 'project':
project.use_role.members.add(rando)
else:
project.read_role.members.add(rando)
if lacking != 'inventory':
inventory.use_role.members.add(rando)
else:
inventory.read_role.members.add(rando)
post(url=reverse('api:job_template_list'), data=dict(
name='newly-created-jt',
inventory=inventory.id,
project=project.pk,
playbook='helloworld.yml'
), user=rando, expect=403)
@pytest.mark.django_db
@@ -237,27 +272,104 @@ class TestJobTemplateSchedules:
@pytest.mark.django_db
def test_jt_org_ownership_change(user, jt_linked):
admin1 = user('admin1')
org1 = jt_linked.project.organization
org1.admin_role.members.add(admin1)
a1_access = JobTemplateAccess(admin1)
class TestProjectOrganization:
"""Tests stories related to management of JT organization via its project
which have some bearing on RBAC integrity
"""
assert a1_access.can_read(jt_linked)
def test_new_project_org_change(self, project, patch, admin_user):
org2 = Organization.objects.create(name='bar')
patch(
url=project.get_absolute_url(),
data={'organization': org2.id},
user=admin_user,
expect=200
)
assert Project.objects.get(pk=project.id).organization_id == org2.id
def test_jt_org_cannot_change(self, project, post, patch, admin_user):
post(
url=reverse('api:job_template_list'),
data={
'name': 'foo_template',
'project': project.id,
'playbook': 'helloworld.yml',
'ask_inventory_on_launch': True
},
user=admin_user,
expect=201
)
org2 = Organization.objects.create(name='bar')
r = patch(
url=project.get_absolute_url(),
data={'organization': org2.id},
user=admin_user,
expect=400
)
assert 'Organization cannot be changed' in str(r.data)
admin2 = user('admin2')
org2 = Organization.objects.create(name='mrroboto', description='domo')
org2.admin_role.members.add(admin2)
a2_access = JobTemplateAccess(admin2)
def test_orphan_JT_adoption(self, project, patch, admin_user, org_admin):
jt = JobTemplate.objects.create(
name='bar',
ask_inventory_on_launch=True,
playbook='helloworld.yml'
)
assert org_admin not in jt.admin_role
patch(
url=jt.get_absolute_url(),
data={'project': project.id},
user=admin_user,
expect=200
)
assert org_admin in jt.admin_role
assert not a2_access.can_read(jt_linked)
def test_inventory_read_transfer_direct(self, patch):
orgs = []
invs = []
admins = []
for i in range(2):
org = Organization.objects.create(name='org{}'.format(i))
org_admin = User.objects.create(username='user{}'.format(i))
inv = Inventory.objects.create(
organization=org,
name='inv{}'.format(i)
)
org.auditor_role.members.add(org_admin)
orgs.append(org)
admins.append(org_admin)
invs.append(inv)
jt_linked.project.organization = org2
jt_linked.project.save()
jt_linked.inventory.organization = org2
jt_linked.inventory.save()
jt = JobTemplate.objects.create(name='foo', inventory=invs[0])
assert admins[0] in jt.read_role
assert admins[1] not in jt.read_role
assert a2_access.can_read(jt_linked)
assert not a1_access.can_read(jt_linked)
jt.inventory = invs[1]
jt.save(update_fields=['inventory'])
assert admins[0] not in jt.read_role
assert admins[1] in jt.read_role
def test_inventory_read_transfer_indirect(self, patch):
orgs = []
admins = []
for i in range(2):
org = Organization.objects.create(name='org{}'.format(i))
org_admin = User.objects.create(username='user{}'.format(i))
org.auditor_role.members.add(org_admin)
orgs.append(org)
admins.append(org_admin)
inv = Inventory.objects.create(
organization=orgs[0],
name='inv{}'.format(i)
)
jt = JobTemplate.objects.create(name='foo', inventory=inv)
assert admins[0] in jt.read_role
assert admins[1] not in jt.read_role
inv.organization = orgs[1]
inv.save(update_fields=['organization'])
assert admins[0] not in jt.read_role
assert admins[1] in jt.read_role

View File

@@ -0,0 +1,101 @@
import pytest
from django.apps import apps
from awx.main.migrations import _rbac as rbac
from awx.main.models import (
UnifiedJobTemplate,
InventorySource, Inventory,
JobTemplate, Project,
Organization,
User
)
@pytest.mark.django_db
def test_implied_organization_subquery_inventory():
orgs = []
for i in range(3):
orgs.append(Organization.objects.create(name='foo{}'.format(i)))
orgs.append(orgs[0])
for i in range(4):
org = orgs[i]
if i == 2:
inventory = Inventory.objects.create(name='foo{}'.format(i))
else:
inventory = Inventory.objects.create(name='foo{}'.format(i), organization=org)
inv_src = InventorySource.objects.create(name='foo{}'.format(i), inventory=inventory)
sources = UnifiedJobTemplate.objects.annotate(
test_field=rbac.implicit_org_subquery(UnifiedJobTemplate, InventorySource)
)
for inv_src in sources:
assert inv_src.test_field == inv_src.inventory.organization_id
@pytest.mark.django_db
def test_implied_organization_subquery_job_template():
jts = []
for i in range(5):
if i <= 3:
org = Organization.objects.create(name='foo{}'.format(i))
else:
org = None
if i <= 4:
proj = Project.objects.create(
name='foo{}'.format(i),
organization=org
)
else:
proj = None
jts.append(JobTemplate.objects.create(
name='foo{}'.format(i),
project=proj
))
# test case of sharing same org
jts[2].project.organization = jts[3].project.organization
jts[2].save()
ujts = UnifiedJobTemplate.objects.annotate(
test_field=rbac.implicit_org_subquery(UnifiedJobTemplate, JobTemplate)
)
for jt in ujts:
if not isinstance(jt, JobTemplate): # some are projects
assert jt.test_field is None
else:
if jt.project is None:
assert jt.test_field is None
else:
assert jt.test_field == jt.project.organization_id
@pytest.mark.django_db
def test_give_explicit_inventory_permission():
dual_admin = User.objects.create(username='alice')
inv_admin = User.objects.create(username='bob')
inv_org = Organization.objects.create(name='inv-org')
proj_org = Organization.objects.create(name='proj-org')
inv_org.admin_role.members.add(inv_admin, dual_admin)
proj_org.admin_role.members.add(dual_admin)
proj = Project.objects.create(
name="test-proj",
organization=proj_org
)
inv = Inventory.objects.create(
name='test-inv',
organization=inv_org
)
jt = JobTemplate.objects.create(
name='foo',
project=proj,
inventory=inv
)
assert dual_admin in jt.admin_role
rbac.restore_inventory_admins(apps, None)
assert inv_admin in jt.admin_role.members.all()
assert dual_admin not in jt.admin_role.members.all()
assert dual_admin in jt.admin_role

View File

@@ -62,10 +62,11 @@ class TestWorkflowJobTemplateAccess:
@pytest.mark.django_db
class TestWorkflowJobTemplateNodeAccess:
def test_no_jt_access_to_edit(self, wfjt_node, org_admin):
def test_no_jt_access_to_edit(self, wfjt_node, rando):
# without access to the related job template, admin to the WFJT can
# not change the prompted parameters
access = WorkflowJobTemplateNodeAccess(org_admin)
wfjt_node.workflow_job_template.admin_role.members.add(rando)
access = WorkflowJobTemplateNodeAccess(rando)
assert not access.can_change(wfjt_node, {'job_type': 'check'})
def test_node_edit_allowed(self, wfjt_node, org_admin):

View File

@@ -30,6 +30,8 @@ def job_template(mocker):
mock_jt.host_config_key = '9283920492'
mock_jt.validation_errors = mock_JT_resource_data
mock_jt.webhook_service = ''
mock_jt.organization_id = None
mock_jt.webhook_credential_id = None
return mock_jt

View File

@@ -6,6 +6,7 @@ from awx.main.models import (
UnifiedJobTemplate,
WorkflowJob,
WorkflowJobNode,
WorkflowApprovalTemplate,
Job,
User,
Project,
@@ -65,6 +66,16 @@ def test_cancel_job_explanation(unified_job):
unified_job.save.assert_called_with(update_fields=['cancel_flag', 'start_args', 'status', 'job_explanation'])
def test_organization_copy_to_jobs():
'''
All unified job types should infer their organization from their template organization
'''
for cls in UnifiedJobTemplate.__subclasses__():
if cls is WorkflowApprovalTemplate:
continue # these do not track organization
assert 'organization' in cls._get_unified_job_field_names(), cls
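In practice this means a job created from a template reports the template's organization directly, consistent with the job.organization assertions elsewhere in this diff; a minimal sketch (names are illustrative):

from awx.main.models import JobTemplate, Organization

org = Organization.objects.create(name='demo-org')
jt = JobTemplate.objects.create(name='demo-jt', organization=org)
job = jt.create_unified_job()
assert job.organization == org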
def test_log_representation():
'''
Common representation used inside of log messages

View File

@@ -177,7 +177,8 @@ class TestWorkflowJobCreate:
char_prompts=wfjt_node_no_prompts.char_prompts,
inventory=None,
unified_job_template=wfjt_node_no_prompts.unified_job_template,
workflow_job=workflow_job_unit)
workflow_job=workflow_job_unit,
identifier=mocker.ANY)
def test_create_with_prompts(self, wfjt_node_with_prompts, workflow_job_unit, credential, mocker):
mock_create = mocker.MagicMock()
@@ -192,7 +193,8 @@ class TestWorkflowJobCreate:
char_prompts=wfjt_node_with_prompts.char_prompts,
inventory=wfjt_node_with_prompts.inventory,
unified_job_template=wfjt_node_with_prompts.unified_job_template,
workflow_job=workflow_job_unit)
workflow_job=workflow_job_unit,
identifier=mocker.ANY)
@mock.patch('awx.main.models.workflow.WorkflowNodeBase.get_parent_nodes', lambda self: [])

View File

@@ -148,7 +148,9 @@ def job_template_with_ids(job_template_factory):
'testJT', project=proj, inventory=inv, credential=credential,
cloud_credential=cloud_cred, network_credential=net_cred,
persisted=False)
return jt_objects.job_template
jt = jt_objects.job_template
jt.organization = Organization(id=1, pk=1, name='fooOrg')
return jt
def test_superuser(mocker):
@@ -180,21 +182,27 @@ def test_jt_existing_values_are_nonsensitive(job_template_with_ids, user_unit):
def test_change_jt_sensitive_data(job_template_with_ids, mocker, user_unit):
"""Assure that can_add is called with all ForeignKeys."""
job_template_with_ids.admin_role = Role()
class RoleReturnsTrue(Role):
class Meta:
proxy = True
def __contains__(self, accessor):
return True
job_template_with_ids.admin_role = RoleReturnsTrue()
job_template_with_ids.organization.job_template_admin_role = RoleReturnsTrue()
inv2 = Inventory()
inv2.use_role = RoleReturnsTrue()
data = {'inventory': inv2}
data = {'inventory': job_template_with_ids.inventory.id + 1}
access = JobTemplateAccess(user_unit)
mock_add = mock.MagicMock(return_value=False)
with mock.patch('awx.main.models.rbac.Role.__contains__', return_value=True):
with mocker.patch('awx.main.access.JobTemplateAccess.can_add', mock_add):
with mocker.patch('awx.main.access.JobTemplateAccess.can_read', return_value=True):
assert not access.can_change(job_template_with_ids, data)
assert not access.changes_are_non_sensitive(job_template_with_ids, data)
mock_add.assert_called_once_with({
'inventory': data['inventory'],
'project': job_template_with_ids.project.id
})
job_template_with_ids.inventory.use_role = RoleReturnsTrue()
job_template_with_ids.project.use_role = RoleReturnsTrue()
assert access.can_change(job_template_with_ids, data)
def mock_raise_none(self, add_host=False, feature=None, check_expiration=True):

View File

@@ -2,10 +2,17 @@
import pytest
from django.core.exceptions import ValidationError
from django.apps import apps
from django.db.models.fields.related import ForeignKey
from django.db.models.fields.related_descriptors import (
ReverseManyToOneDescriptor,
ForwardManyToOneDescriptor
)
from rest_framework.serializers import ValidationError as DRFValidationError
from awx.main.models import Credential, CredentialType, BaseModel
from awx.main.fields import JSONSchemaField
from awx.main.fields import JSONSchemaField, ImplicitRoleField, ImplicitRoleDescriptor
@pytest.mark.parametrize('schema, given, message', [
@@ -194,3 +201,57 @@ def test_credential_creation_validation_failure(inputs):
with pytest.raises(Exception) as e:
field.validate(inputs, cred)
assert e.type in (ValidationError, DRFValidationError)
def test_implicit_role_field_parents():
"""This assures that every ImplicitRoleField only references parents
which are relationships that actually exist
"""
app_models = apps.get_app_config('main').get_models()
for cls in app_models:
for field in cls._meta.get_fields():
if not isinstance(field, ImplicitRoleField):
continue
if not field.parent_role:
continue
field_names = field.parent_role
if type(field_names) is not list:
field_names = [field_names]
for field_name in field_names:
# this type of specification appears to have been considered
# at some point, but does not exist in the app and would
# need support and tests built out for it
assert not isinstance(field_name, tuple)
# also used to be a thing before py3 upgrade
assert not isinstance(field_name, bytes)
# this is always coherent
if field_name.startswith('singleton:'):
continue
# separate out parent role syntax
field_name, sep, field_attr = field_name.partition('.')
# now make primary assertion, that specified paths exist
assert hasattr(cls, field_name)
# inspect in greater depth
second_field = cls._meta.get_field(field_name)
second_field_descriptor = getattr(cls, field_name)
# all supported linkage types
assert isinstance(second_field_descriptor, (
ReverseManyToOneDescriptor, # not currently used
ImplicitRoleDescriptor,
ForwardManyToOneDescriptor
))
# only these links are supported
if field_attr:
if isinstance(second_field_descriptor, ReverseManyToOneDescriptor):
assert type(second_field) is ForeignKey
rel_model = cls._meta.get_field(field_name).related_model
third_field = getattr(rel_model, field_attr)
# expecting related_model.foo_role; check the role field type
assert isinstance(third_field, ImplicitRoleDescriptor)
else:
# expecting simple format of foo_role
assert type(second_field) is ImplicitRoleField
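An illustrative ImplicitRoleField declaration exercising each parent_role form the test accepts (the model and the singleton role name are hypothetical, not taken from the AWX models):

from django.db import models
from awx.main.fields import ImplicitRoleField

class Widget(models.Model):  # hypothetical model, for illustration only
    organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
    parent_role = ImplicitRoleField(parent_role='singleton:system_administrator')
    admin_role = ImplicitRoleField(parent_role=[
        'singleton:system_administrator',  # singleton: always coherent, skipped early
        'parent_role',                     # simple form: a role field on this model
        'organization.admin_role',         # FK link, then a role attribute on the target
    ])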

View File

@@ -126,10 +126,8 @@ def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker
@pytest.mark.parametrize("key,value", [
('REST_API_TOKEN', 'SECRET'),
('SECRET_KEY', 'SECRET'),
('RABBITMQ_PASS', 'SECRET'),
('VMWARE_PASSWORD', 'SECRET'),
('API_SECRET', 'SECRET'),
('CALLBACK_CONNECTION', 'amqp://tower:password@localhost:5672/tower'),
('ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_PASSWORD', 'SECRET'),
('ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_TOKEN', 'SECRET'),
])
@@ -2381,3 +2379,23 @@ def test_managed_injector_redaction(injector_cls):
if secret_field_name in template:
env[env_name] = 'very_secret_value'
assert 'very_secret_value' not in str(build_safe_env(env))
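A minimal illustration of the redaction contract these tests pin down (assuming build_safe_env from awx.main.tasks, as exercised above; the key name is made up):

from awx.main.tasks import build_safe_env

env = {'MY_API_TOKEN': 'very_secret_value', 'PATH': '/usr/bin'}
safe = build_safe_env(env)
assert 'very_secret_value' not in str(safe)  # secret-looking keys are masked
assert safe['PATH'] == '/usr/bin'            # ordinary keys pass through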
@mock.patch('logging.getLogger')
def test_notification_job_not_finished(logging_getLogger, mocker):
uj = mocker.MagicMock()
uj.finished = False
logger = mocker.Mock()
logging_getLogger.return_value = logger
with mocker.patch('awx.main.models.UnifiedJob.objects.get', uj):
tasks.handle_success_and_failure_notifications(1)
logger.warn.assert_called_with(f"Failed to even try to send notifications for job '{uj}' due to job not being in finished state.")
def test_notification_job_finished(mocker):
uj = mocker.MagicMock(send_notification_templates=mocker.MagicMock(), finished=True)
with mocker.patch('awx.main.models.UnifiedJob.objects.get', mocker.MagicMock(return_value=uj)):
tasks.handle_success_and_failure_notifications(1)
uj.send_notification_templates.assert_called()

View File

@@ -77,6 +77,8 @@ class GraphNode(object):
Performance assured: http://stackoverflow.com/a/27086669
'''
for c in URL_PATH_RESERVED_CHARSET:
if not isinstance(text, str):
text = str(text) # needed for WFJT node creation, identifier temporarily UUID4 type
if c in text:
text = text.replace(c, URL_PATH_RESERVED_CHARSET[c])
text = text.replace(NAMED_URL_RES_INNER_DILIMITER,
@@ -200,14 +202,14 @@ def _get_all_unique_togethers(model):
def _check_unique_together_fields(model, ut):
has_name = False
name_field = None
fk_names = []
fields = []
is_valid = True
for field_name in ut:
field = model._meta.get_field(field_name)
if field_name == 'name':
has_name = True
if field_name in ('name', 'identifier'):
name_field = field_name
elif type(field) == models.ForeignKey and field.related_model != model:
fk_names.append(field_name)
elif issubclass(type(field), models.CharField) and field.choices:
@@ -219,8 +221,8 @@ def _check_unique_together_fields(model, ut):
return (), (), is_valid
fk_names.sort()
fields.sort(reverse=True)
if has_name:
fields.append('name')
if name_field:
fields.append(name_field)
fields.reverse()
return tuple(fk_names), tuple(fields), is_valid
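A worked sketch of the return value, assuming JobTemplate's soft unique_together is ('name', 'organization') (consistent with the test_jt++test_org named URL asserted earlier in this diff):

fk_names, fields, is_valid = _check_unique_together_fields(
    JobTemplate, ('name', 'organization'))
assert fk_names == ('organization',)  # sorted FK components of the URL
assert fields == ('name',)            # the name/identifier field leads the list
assert is_valid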
@@ -315,3 +317,8 @@ def generate_graph(models):
settings.NAMED_URL_GRAPH = largest_graph
for node in settings.NAMED_URL_GRAPH.values():
node.add_bindings()
def reset_counters():
for node in settings.NAMED_URL_GRAPH.values():
node.counter = 0

awx/main/wsbroadcast.py (new file, 190 lines)
View File

@@ -0,0 +1,190 @@
import json
import logging
import asyncio
import aiohttp
from aiohttp import client_exceptions
from channels.layers import get_channel_layer
from django.conf import settings
from django.apps import apps
from django.core.serializers.json import DjangoJSONEncoder
from awx.main.analytics.broadcast_websocket import (
BroadcastWebsocketStats,
BroadcastWebsocketStatsManager,
)
logger = logging.getLogger('awx.main.wsbroadcast')
def wrap_broadcast_msg(group, message: str):
# TODO: Maybe wrap as "group","message" so that we don't need to
# encode/decode as json.
return json.dumps(dict(group=group, message=message), cls=DjangoJSONEncoder)
def unwrap_broadcast_msg(payload: dict):
return (payload['group'], payload['message'])
def get_broadcast_hosts():
Instance = apps.get_model('main', 'Instance')
instances = Instance.objects.filter(rampart_groups__controller__isnull=True) \
.exclude(hostname=Instance.objects.me().hostname) \
.order_by('hostname') \
.values('hostname', 'ip_address') \
.distinct()
return [i['ip_address'] or i['hostname'] for i in instances]
def get_local_host():
Instance = apps.get_model('main', 'Instance')
return Instance.objects.me().hostname
class WebsocketTask():
def __init__(self,
name,
event_loop,
stats: BroadcastWebsocketStats,
remote_host: str,
remote_port: int = settings.BROADCAST_WEBSOCKET_PORT,
protocol: str = settings.BROADCAST_WEBSOCKET_PROTOCOL,
verify_ssl: bool = settings.BROADCAST_WEBSOCKET_VERIFY_CERT,
endpoint: str = 'broadcast'):
self.name = name
self.event_loop = event_loop
self.stats = stats
self.remote_host = remote_host
self.remote_port = remote_port
self.endpoint = endpoint
self.protocol = protocol
self.verify_ssl = verify_ssl
self.channel_layer = None
async def run_loop(self, websocket: aiohttp.ClientWebSocketResponse):
raise RuntimeError("Implement me")
async def connect(self, attempt):
from awx.main.consumers import WebsocketSecretAuthHelper # noqa
logger.debug(f"{self.name} connect attempt {attempt} to {self.remote_host}")
'''
Cannot put get_channel_layer() in the init code because it is in the init
path of channel layers, i.e. RedisChannelLayer() calls our init code.
'''
if not self.channel_layer:
self.channel_layer = get_channel_layer()
try:
if attempt > 0:
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_RECONNECT_RETRY_RATE_SECONDS)
except asyncio.CancelledError:
logger.warn(f"{self.name} connection to {self.remote_host} cancelled")
raise
uri = f"{self.protocol}://{self.remote_host}:{self.remote_port}/websocket/{self.endpoint}/"
timeout = aiohttp.ClientTimeout(total=10)
secret_val = WebsocketSecretAuthHelper.construct_secret()
try:
async with aiohttp.ClientSession(headers={'secret': secret_val},
timeout=timeout) as session:
async with session.ws_connect(uri, ssl=self.verify_ssl) as websocket:
self.stats.record_connection_established()
attempt = 0
await self.run_loop(websocket)
except asyncio.CancelledError:
# TODO: Check if connected and disconnect
# Possibly use run_until_complete() if disconnect is async
logger.warn(f"{self.name} connection to {self.remote_host} cancelled")
self.stats.record_connection_lost()
raise
except client_exceptions.ClientConnectorError as e:
logger.warn(f"Failed to connect to {self.remote_host}: '{e}'. Reconnecting ...")
self.stats.record_connection_lost()
self.start(attempt=attempt + 1)
except asyncio.TimeoutError:
logger.warn(f"Timeout while trying to connect to {self.remote_host}. Reconnecting ...")
self.stats.record_connection_lost()
self.start(attempt=attempt + 1)
except Exception as e:
# Early on, this is our canary. I'm not sure what exceptions we can really encounter.
logger.warn(f"Websocket broadcast client exception {type(e)} {e}")
self.stats.record_connection_lost()
self.start(attempt=attempt + 1)
def start(self, attempt=0):
self.async_task = self.event_loop.create_task(self.connect(attempt=attempt))
def cancel(self):
self.async_task.cancel()
class BroadcastWebsocketTask(WebsocketTask):
async def run_loop(self, websocket: aiohttp.ClientWebSocketResponse):
async for msg in websocket:
self.stats.record_message_received()
if msg.type == aiohttp.WSMsgType.ERROR:
break
elif msg.type == aiohttp.WSMsgType.TEXT:
try:
payload = json.loads(msg.data)
except json.JSONDecodeError:
logmsg = "Failed to decode broadcast message"
if logger.isEnabledFor(logging.DEBUG):
logmsg = "{} {}".format(logmsg, payload)
logger.warn(logmsg)
continue
(group, message) = unwrap_broadcast_msg(payload)
await self.channel_layer.group_send(group, {"type": "internal.message", "text": message})
class BroadcastWebsocketManager(object):
def __init__(self):
self.event_loop = asyncio.get_event_loop()
self.broadcast_tasks = dict()
# parallel dict to broadcast_tasks that tracks stats
self.local_hostname = get_local_host()
self.stats_mgr = BroadcastWebsocketStatsManager(self.event_loop, self.local_hostname)
async def run_per_host_websocket(self):
while True:
future_remote_hosts = get_broadcast_hosts()
current_remote_hosts = self.broadcast_tasks.keys()
deleted_remote_hosts = set(current_remote_hosts) - set(future_remote_hosts)
new_remote_hosts = set(future_remote_hosts) - set(current_remote_hosts)
if deleted_remote_hosts:
logger.warn(f"{self.local_hostname} going to remove {deleted_remote_hosts} from the websocket broadcast list")
if new_remote_hosts:
logger.warn(f"{self.local_hostname} going to add {new_remote_hosts} to the websocket broadcast list")
for h in deleted_remote_hosts:
self.broadcast_tasks[h].cancel()
del self.broadcast_tasks[h]
self.stats_mgr.delete_remote_host_stats(h)
for h in new_remote_hosts:
stats = self.stats_mgr.new_remote_host_stats(h)
broadcast_task = BroadcastWebsocketTask(name=self.local_hostname,
event_loop=self.event_loop,
stats=stats,
remote_host=h)
broadcast_task.start()
self.broadcast_tasks[h] = broadcast_task
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS)
def start(self):
self.stats_mgr.start()
self.async_task = self.event_loop.create_task(self.run_per_host_websocket())
return self.async_task

View File

@@ -197,6 +197,9 @@ JOB_EVENT_WORKERS = 4
# The maximum size of the job event worker queue before requests are blocked
JOB_EVENT_MAX_QUEUE_SIZE = 10000
# The number of job events to migrate per-transaction when moving from int -> bigint
JOB_EVENT_MIGRATION_CHUNK_SIZE = 1000000
# Disallow sending session cookies over insecure connections
SESSION_COOKIE_SECURE = True
@@ -421,8 +424,8 @@ os.environ.setdefault('DJANGO_LIVE_TEST_SERVER_ADDRESS', 'localhost:9013-9199')
BROKER_DURABILITY = True
BROKER_POOL_LIMIT = None
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
CELERY_DEFAULT_QUEUE = 'awx_private_queue'
BROKER_URL = 'unix:///var/run/redis/redis.sock'
BROKER_TRANSPORT_OPTIONS = {}
CELERYBEAT_SCHEDULE = {
'tower_scheduler': {
'task': 'awx.main.tasks.awx_periodic_scheduler',
@@ -451,19 +454,6 @@ CELERYBEAT_SCHEDULE = {
# 'isolated_heartbeat': set up at the end of production.py and development.py
}
AWX_CELERY_QUEUES_STATIC = [
CELERY_DEFAULT_QUEUE,
]
AWX_CELERY_BCAST_QUEUES_STATIC = [
'tower_broadcast_all',
]
ASGI_AMQP = {
'INIT_FUNC': 'awx.prepare_env',
'MODEL': 'awx.main.models.channels.ChannelGroup',
}
# Django Caching Configuration
CACHES = {
'default': {
@@ -929,8 +919,6 @@ ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC = False
# Internal API URL for use by inventory scripts and callback plugin.
INTERNAL_API_URL = 'http://127.0.0.1:%s' % DEVSERVER_DEFAULT_PORT
PERSISTENT_CALLBACK_MESSAGES = True
USE_CALLBACK_QUEUE = True
CALLBACK_QUEUE = "callback_tasks"
SCHEDULER_QUEUE = "scheduler"
@@ -965,6 +953,18 @@ LOG_AGGREGATOR_LEVEL = 'INFO'
# raising this value can help
CHANNEL_LAYER_RECEIVE_MAX_RETRY = 10
ASGI_APPLICATION = "awx.main.routing.application"
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [BROKER_URL],
"capacity": 10000,
},
},
}
# Logging configuration.
LOGGING = {
'version': 1,
@@ -1111,10 +1111,6 @@ LOGGING = {
'handlers': ['console', 'file', 'tower_warnings'],
'level': 'WARNING',
},
'kombu': {
'handlers': ['console', 'file', 'tower_warnings'],
'level': 'WARNING',
},
'rest_framework.request': {
'handlers': ['console', 'file', 'tower_warnings'],
'level': 'WARNING',
@@ -1239,3 +1235,29 @@ MIDDLEWARE = [
'awx.main.middleware.URLModificationMiddleware',
'awx.main.middleware.SessionTimeoutMiddleware',
]
# Secret header value to exchange for websockets responsible for distributing websocket messages.
# This needs to be kept secret and randomly generated
BROADCAST_WEBSOCKET_SECRET = ''
# Port for broadcast websockets to connect to
# Note that the clients will follow redirect responses
BROADCAST_WEBSOCKET_PORT = 443
# Whether or not broadcast websockets should check nginx certs when interconnecting
BROADCAST_WEBSOCKET_VERIFY_CERT = False
# Connect to other AWX nodes using http or https
BROADCAST_WEBSOCKET_PROTOCOL = 'https'
# All websockets that connect to the broadcast websocket endpoint will be put into this group
BROADCAST_WEBSOCKET_GROUP_NAME = 'broadcast-group_send'
# Time to wait before retrying a connection to a websocket broadcast tower node
BROADCAST_WEBSOCKET_RECONNECT_RETRY_RATE_SECONDS = 5
# How often websocket process will look for changes in the Instance table
BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS = 10
# How often websocket process will generate stats
BROADCAST_WEBSOCKET_STATS_POLL_RATE_SECONDS = 5
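These knobs combine into the client URI built by WebsocketTask.connect above; a sketch of the equivalent computation (helper name is ours):

from django.conf import settings

def broadcast_uri(remote_host, endpoint='broadcast'):
    # mirrors: "{protocol}://{host}:{port}/websocket/{endpoint}/"
    return "{}://{}:{}/websocket/{}/".format(
        settings.BROADCAST_WEBSOCKET_PROTOCOL,
        remote_host,
        settings.BROADCAST_WEBSOCKET_PORT,
        endpoint,
    )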

View File

@@ -40,6 +40,7 @@ NOTEBOOK_ARGUMENTS = [
]
# print SQL queries in shell_plus
SHELL_PLUS = 'ipython'
SHELL_PLUS_PRINT_SQL = False
# show colored logs in the dev environment

View File

@@ -12,7 +12,6 @@
# MISC PROJECT SETTINGS
###############################################################################
import os
import urllib.parse
import sys
# Enable the following lines and install the browser extension to use Django debug toolbar
@@ -49,18 +48,6 @@ if "pytest" in sys.modules:
}
}
# AMQP configuration.
BROKER_URL = "amqp://{}:{}@{}/{}".format(os.environ.get("RABBITMQ_USER"),
os.environ.get("RABBITMQ_PASS"),
os.environ.get("RABBITMQ_HOST"),
urllib.parse.quote(os.environ.get("RABBITMQ_VHOST", "/"), safe=''))
CHANNEL_LAYERS = {
'default': {'BACKEND': 'asgi_amqp.AMQPChannelLayer',
'ROUTING': 'awx.main.routing.channel_routing',
'CONFIG': {'url': BROKER_URL}}
}
# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should NOT be web-accessible.
PROJECTS_ROOT = '/var/lib/awx/projects/'
@@ -238,3 +225,8 @@ TEST_OPENSTACK_PROJECT = ''
# Azure credentials.
TEST_AZURE_USERNAME = ''
TEST_AZURE_KEY_DATA = ''
BROADCAST_WEBSOCKET_SECRET = '🤖starscream🤖'
BROADCAST_WEBSOCKET_PORT = 8013
BROADCAST_WEBSOCKET_VERIFY_CERT = False
BROADCAST_WEBSOCKET_PROTOCOL = 'http'

View File

@@ -16,10 +16,6 @@ function ApplicationsStrings (BaseString) {
USERS: t.s('Tokens')
};
ns.tooltips = {
ADD: t.s('Create a new Application')
};
ns.add = {
PANEL_TITLE: t.s('NEW APPLICATION'),
CLIENT_ID_LABEL: t.s('CLIENT ID'),
@@ -32,7 +28,8 @@ function ApplicationsStrings (BaseString) {
PANEL_TITLE: t.s('APPLICATIONS'),
ROW_ITEM_LABEL_EXPIRED: t.s('EXPIRATION'),
ROW_ITEM_LABEL_ORGANIZATION: t.s('ORG'),
ROW_ITEM_LABEL_MODIFIED: t.s('LAST MODIFIED')
ROW_ITEM_LABEL_MODIFIED: t.s('LAST MODIFIED'),
ADD: t.s('Create a new Application')
};
ns.inputs = {

View File

@@ -43,10 +43,6 @@ function ListApplicationsController (
paginateQuerySet = queryset;
});
vm.tooltips = {
add: strings.get('tooltips.ADD')
};
const toolbarSortDefault = {
label: `${strings.get('sort.NAME_ASCENDING')}`,
value: 'name'

View File

@@ -19,11 +19,12 @@
</smart-search>
<div class="at-List-toolbarAction" ng-show="canAdd">
<button
aria-label="{{:: vm.strings.get('list.ADD') }}"
type="button"
ui-sref="applications.add"
class="at-Button--add"
id="button-add"
aw-tool-tip="{{vm.tooltips.add}}"
aw-tool-tip="{{:: vm.strings.get('list.ADD') }}"
data-placement="top"
aria-haspopup="true"
aria-expanded="false">

Some files were not shown because too many files have changed in this diff.