Compare commits

...

768 Commits

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
37ee95314a Merge pull request #6802 from ryanpetrello/version-11-1-0
bump version to 11.1.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 19:18:24 +00:00
softwarefactory-project-zuul[bot]
28c3fa517e Merge pull request #6773 from ryanpetrello/playbook-scan-symlinks
follow symlinks while discovering valid playbooks

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 18:35:53 +00:00
Ryan Petrello
3dd21d720e follow symlinks while discovering valid playbooks
related: https://github.com/ansible/awx/pull/6769

Co-authored-by: Francois Herbert <francois@herbert.org.nz>
2020-04-22 13:38:29 -04:00
softwarefactory-project-zuul[bot]
9cfecb5590 Merge pull request #6788 from ryanpetrello/version-header
include the AWX version as a header in all responses

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 17:11:56 +00:00
Ryan Petrello
2742612be9 bump version to 11.1.0 2020-04-22 13:00:41 -04:00
softwarefactory-project-zuul[bot]
4f4a4e2394 Merge pull request #6204 from Ladas/send_job_and_template_nodes_to_analytics
Send job and template nodes to analytics

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 16:08:31 +00:00
Ryan Petrello
edd9972435 include the AWX version as a header in all responses 2020-04-22 12:07:31 -04:00
softwarefactory-project-zuul[bot]
9fdec9b31b Merge pull request #6785 from shanemcd/really-clean-that-volume
Dev env: stop and remove containers before removing volume

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 15:39:12 +00:00
softwarefactory-project-zuul[bot]
a93ee86581 Merge pull request #6787 from squidboylan/remove_module_tests
Remove tower_receive and tower_send tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 15:01:58 +00:00
softwarefactory-project-zuul[bot]
020246736c Merge pull request #6796 from rooftopcellist/fix_awx_rsyslog
rsyslogd is only needed in the web container

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 15:01:50 +00:00
Christian Adams
8d3ce206cd rsyslogd is only needed in the web container 2020-04-22 10:17:04 -04:00
softwarefactory-project-zuul[bot]
28e27c5196 Merge pull request #6768 from keithjgrant/5909-jt-launch-3b
JT Launch Prompting (phase 3) [rebuilt branch]

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-21 23:43:29 +00:00
softwarefactory-project-zuul[bot]
c56352daa4 Merge pull request #6765 from rooftopcellist/fix_flake_zuul
revert back to the old way of calling flake8 linter

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-21 21:37:36 +00:00
Caleb Boylan
5eea4e8881 Remove tower_receive and tower_send tests 2020-04-21 13:46:13 -07:00
Bill Nottingham
58c821f3e1 De-flake the collector test. 2020-04-21 16:32:33 -04:00
Shane McDonald
5cad0d243a Dev env: stop and remove containers before removing volume 2020-04-21 15:47:59 -04:00
softwarefactory-project-zuul[bot]
0aaa2d8c8d Merge pull request #6783 from ryanpetrello/inv-links
update (dead) links to example inv source vars

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-21 19:07:55 +00:00
chris meyers
921feb561d add test case for wfj nodes analytics 2020-04-21 20:21:38 +02:00
Bill Nottingham
5b0bb4939f Allow subsets of table gathering for unit tests.
sqlite does not like some of our PG-isms.
2020-04-21 20:21:20 +02:00
Ladislav Smola
144cffe009 Send job and template nodes to analytics
Sending tables main_workflowjobnode and main_workflowjobtemplatenode
containing arrays of success/failure/always_nodes, which is compatible
with what the API call for nodes returns.
2020-04-21 20:02:30 +02:00
Ryan Petrello
af11055e5c update (dead) links to example inv source vars
see: https://github.com/ansible/awx/issues/6538

some of these are subject to change (in particular, the azure one), but
this at least fixes the dead links for now in ansible devel
2020-04-21 14:00:54 -04:00
softwarefactory-project-zuul[bot]
c0cb546c3c Merge pull request #6779 from squidboylan/fix_project_allow_override
Collection: Fix the tower_project scm_allow_override

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-21 17:52:52 +00:00
softwarefactory-project-zuul[bot]
a800c8cd00 Merge pull request #6781 from ryanpetrello/pg10-doc
update postgres minimum version

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-21 17:32:58 +00:00
Caleb Boylan
f8a23f20aa Collection: Assert tower_project job is successful 2020-04-21 10:14:08 -07:00
softwarefactory-project-zuul[bot]
46edd151e0 Merge pull request #6764 from ryanpetrello/redis-sos
record redis config in the sosreport

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-21 17:13:58 +00:00
Caleb Boylan
ba4b6bdbb7 Collection: tower_project alias allow_override to scm_allow_override 2020-04-21 10:08:06 -07:00
Caleb Boylan
1e24d8b5fa Collection: Add integration tests for project scm_allow_override 2020-04-21 09:58:39 -07:00
Ryan Petrello
41586ea3a6 update postgres minimum version 2020-04-21 12:49:33 -04:00
Caleb Boylan
ded5577832 Collection: Fix the tower_project scm_allow_override 2020-04-21 09:39:16 -07:00
softwarefactory-project-zuul[bot]
cce5f26e34 Merge pull request #6763 from ryanpetrello/rsyslogd-spool-config
let users configure the destination and max disk size of rsyslogd spool

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 23:49:20 +00:00
Keith Grant
1940c834cb fix empty CodeMirror bug in modals 2020-04-20 16:21:59 -07:00
Keith Grant
08381577f5 Merge prompt extra_vars before POSTing
* Merge the extra_vars field with survey question responses before sending
to the API
* Clean up select and multi-select survey fields
2020-04-20 16:21:48 -07:00
Keith Grant
669d67b8fb flesh out validators, survey questions 2020-04-20 16:21:39 -07:00
Keith Grant
8a0be5b111 add survey questions 2020-04-20 16:21:31 -07:00
Ryan Petrello
9e30f004d3 let users configure the destination and max disk size of rsyslogd spool 2020-04-20 19:12:28 -04:00
softwarefactory-project-zuul[bot]
62bf61b2a2 Merge pull request #6766 from ryanpetrello/fixup-6760
escape certain log aggregator settings when generating rsyslog config

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 22:52:20 +00:00
Ryan Petrello
f62dfe85cc escape certain log aggregator settings when generating rsyslog config
see: https://github.com/ansible/awx/issues/6760
2020-04-20 18:05:01 -04:00
Christian Adams
97acba8fe9 revert back to the old way of calling flake8 linter 2020-04-20 17:27:52 -04:00
Ryan Petrello
cec7cb393d record redis config in the sosreport 2020-04-20 17:03:50 -04:00
softwarefactory-project-zuul[bot]
e9b254b9d2 Merge pull request #6654 from AlexSCorey/4962-EnableWebhooksForJT
Adds webhooks to Job template form

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 20:11:29 +00:00
Alex Corey
222fecc5f6 adds test for new webhook component 2020-04-20 15:33:46 -04:00
softwarefactory-project-zuul[bot]
c833676863 Merge pull request #6752 from fherbert/job_template_notification
Support adding/removing notifications to job_templates

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 17:20:25 +00:00
softwarefactory-project-zuul[bot]
7e9835f6ee Merge pull request #6730 from rooftopcellist/pyflake
Fix new flake8 from pyflakes 2.2.0 release

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 14:31:52 +00:00
softwarefactory-project-zuul[bot]
5940f6de2c Merge pull request #6737 from ryanpetrello/da-queues
rsyslogd: set some reasonable limits for disk queues

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 14:29:05 +00:00
Christian Adams
a899a147e1 Fix new flake8 from pyflakes 2.2.0 release 2020-04-20 09:50:50 -04:00
softwarefactory-project-zuul[bot]
e0c8f3e541 Merge pull request #6747 from chrismeyersfsu/fix-redis_logs
fix redis logs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-20 13:06:40 +00:00
Francois Herbert
68a0bbe125 Support adding/removing notifications to job_templates 2020-04-20 13:02:41 +12:00
chris meyers
8592bf3e39 better broadcast websocket logging
* Make the daphne logs quieter by raising the level to INFO instead of
DEBUG
* Output the django channels name of broadcast clients. This way, if the
queue gets backed up, we can find it in redis.
2020-04-17 17:19:08 -04:00
chris meyers
4787e69afb consistent wsbroadcast log messages 2020-04-17 17:18:21 -04:00
softwarefactory-project-zuul[bot]
8f5afc83ce Merge pull request #6745 from ryanpetrello/redis-tcp-port--
don't expose redis port

Reviewed-by: Elyézer Rezende
             https://github.com/elyezer
2020-04-17 20:43:27 +00:00
softwarefactory-project-zuul[bot]
b1a90d445b Merge pull request #6739 from chrismeyersfsu/fix-redis_group_cleanup
cleanup group membership on disconnect

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-17 20:28:00 +00:00
softwarefactory-project-zuul[bot]
8954e6e556 Merge pull request #6687 from nixocio/ui_convert_user_to_be_function
Update User component to be function based

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-17 19:55:50 +00:00
Ryan Petrello
7bfc99a615 don't expose redis port 2020-04-17 15:34:11 -04:00
Ryan Petrello
f159a6508e rsyslogd: set some higher limits for disk-assisted queues 2020-04-17 14:34:07 -04:00
nixocio
4d7b5adf12 Update User component to be function based
Update User component to be function based. Also update related
unit-tests.
2020-04-17 14:29:31 -04:00
Alex Corey
6e648cf72f Adds webhooks to jt form 2020-04-17 14:18:32 -04:00
softwarefactory-project-zuul[bot]
24a50ea076 Merge pull request #6738 from squidboylan/fix_collection_sanity_ansible2.9
Collection: Ignore some sanity errors in ansible 2.9

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2020-04-17 18:06:41 +00:00
softwarefactory-project-zuul[bot]
2d2add009b Merge pull request #6728 from chrismeyersfsu/fix-noisy_debug
confidence in websocket group logic is high

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-17 18:06:36 +00:00
chris meyers
fd068695ef cleanup group membership on disconnect
* zcard asgi::group:jobs-status_changed <-- to see a group set that
continues to grow. Issue this command in a loop while refreshing the
browser page on the jobs list. Before this change the set size would
continue to grow as daphne channel names are added to the group. After
this change the set size stays stable at the expected size, 1.
2020-04-17 13:16:11 -04:00
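The growth-then-cleanup behavior this commit describes can be sketched with a small in-memory model (a hypothetical simplification, not the real channels/redis layer): each page refresh registers a new daphne channel name in the group, and only an explicit discard on disconnect keeps the set from growing.

```python
class GroupRegistry:
    """In-memory stand-in for the redis-backed channel-layer group sets."""

    def __init__(self):
        self.groups = {}  # group name -> set of channel names

    def add(self, group, channel):
        self.groups.setdefault(group, set()).add(channel)

    def disconnect(self, channel):
        # The fix: drop the channel from every group when it disconnects
        for members in self.groups.values():
            members.discard(channel)

    def zcard(self, group):
        # Mirrors `zcard asgi::group:<name>` from the commit message
        return len(self.groups.get(group, set()))


reg = GroupRegistry()
for i in range(5):  # five page refreshes, five daphne channel names
    reg.add("jobs-status_changed", "daphne.%d" % i)
    if i:  # the previous connection closes when the page refreshes
        reg.disconnect("daphne.%d" % (i - 1))
print(reg.zcard("jobs-status_changed"))  # prints 1: stable at the expected size
```

Without the `disconnect` call the set would report 5 here, matching the unbounded growth the commit observed in redis.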
Caleb Boylan
b19360ac9b Collection: Ignore some sanity errors in ansible 2.9 2020-04-17 09:32:54 -07:00
softwarefactory-project-zuul[bot]
7c3c1f5a29 Merge pull request #6678 from nixocio/ui_issue_5983
Fix List Navigation Pagination

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-17 16:08:55 +00:00
nixocio
a902afcf73 Fix List Navigation Pagination
Fix List Navigation Pagination. Add missing variable `page` to
`handleSetPageSize`. Also update unit tests impacted by this change.

closes: https://github.com/ansible/awx/issues/5983
2020-04-17 11:16:12 -04:00
softwarefactory-project-zuul[bot]
501568340b Merge pull request #6736 from beeankha/fix_collection_readme_format
Fix Collection README to Display Properly

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-17 14:18:57 +00:00
softwarefactory-project-zuul[bot]
1d32917ceb Merge pull request #6732 from domq/fix/rsync-EAGAIN-hazard
[fix] Use rsync --blocking-io to work around EAGAIN hazard

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-17 13:08:10 +00:00
beeankha
2d455800c4 More bulleted list formatting changes 2020-04-16 20:18:22 -04:00
softwarefactory-project-zuul[bot]
37491fa4b9 Merge pull request #6735 from wenottingham/true-is-relative
Flip CSRF_COOKIE_SECURE docs.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 21:09:20 +00:00
softwarefactory-project-zuul[bot]
f41852c3ee Merge pull request #6709 from marshmalien/6530-wf-node-wf
Add workflow details to node view modal

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 20:54:25 +00:00
softwarefactory-project-zuul[bot]
b565ed2077 Merge pull request #6723 from nixocio/ui_issue_6244
Fix Page Size toggle does not persist after a search

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 20:25:41 +00:00
beeankha
86bafb52f6 Fix collection README display 2020-04-16 16:13:12 -04:00
Bill Nottingham
11b1d0e84c Flip CSRF_COOKIE_SECURE docs.
I think this was backwards.
2020-04-16 15:34:38 -04:00
softwarefactory-project-zuul[bot]
f47325a532 Merge pull request #6681 from chrismeyersfsu/fix-cluster_stupid_bash
fix copy paste error in docker compose cluster

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 19:14:16 +00:00
nixocio
1a261782c7 Fix Page Size toggle does not persist after a search
Fix Page Size toggle does not persist after a search.
Also, add unit-tests related to `onSearch`,`clearAllFilters` and `onRemove`.

closes: https://github.com/ansible/awx/issues/6244
2020-04-16 15:06:50 -04:00
Dominique Quatravaux
5a1599b440 [fix] Use rsync --blocking-io to work around EAGAIN hazard
Fixes #6692
2020-04-16 20:20:21 +02:00
chris meyers
72248db76d fix copy paste error in docker compose cluster 2020-04-16 14:12:30 -04:00
softwarefactory-project-zuul[bot]
21268b779f Merge pull request #6713 from beeankha/awx_collection_deprecations
Deprecate Send, Receive, and Workflow Template Collections Modules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 16:23:15 +00:00
beeankha
8926f635df Mark send, receive, and workflow_job_template modules as deprecated
Add routing.yml file to mark modules for deprecation and pass sanity tests

Ignore sanity tests for deprecated modules
2020-04-16 11:25:38 -04:00
softwarefactory-project-zuul[bot]
e19194b883 Merge pull request #6721 from shanemcd/dockerfile-cleanup
Dockerfile organization

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 14:48:58 +00:00
softwarefactory-project-zuul[bot]
fa1c33da7e Merge pull request #6704 from ryanpetrello/11-0-0-release-version
bump version for 11.0.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-16 13:41:56 +00:00
chris meyers
d30ecb6fb3 confidence in websocket group logic is high
* Replying to websocket group membership with the previous state, delta,
and new state has proven to be quite stable. This debug message is not
very helpful and is noisy in the dev env. This change removes the debug
message.
2020-04-16 08:48:12 -04:00
Ryan Petrello
8ed5964871 bump version for 11.0.0 2020-04-15 22:10:12 -04:00
softwarefactory-project-zuul[bot]
a989c624c7 Merge pull request #6724 from chrismeyersfsu/fix-redis_not_registering_disconnect
reconnect when a vanilla server disconnect happens

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 23:38:47 +00:00
chris meyers
7f01de26a1 reconnect when a vanilla server disconnect happens 2020-04-15 19:02:33 -04:00
softwarefactory-project-zuul[bot]
e3b5d64aa7 Merge pull request #6722 from wenottingham/over-the-ramparts-we-no-longer-watch
Remove 'rampart' from a user-facing string.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 21:52:57 +00:00
softwarefactory-project-zuul[bot]
eba0e4fd77 Merge pull request #6710 from rooftopcellist/rsyslog_rename_dir
Rename awx rsyslog socket and PID dir

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 20:43:40 +00:00
softwarefactory-project-zuul[bot]
d3c80eef4d Merge pull request #6560 from mabashian/5865-schedule-edit
Add support for editing proj/jt/wfjt schedule

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 20:21:56 +00:00
softwarefactory-project-zuul[bot]
3683dfab37 Merge pull request #6720 from chrismeyersfsu/feature-wsbroadcast_better_logging
wsbroadcast better logging and behavior

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 19:11:26 +00:00
Bill Nottingham
8e3931de37 Remove 'rampart' from a user-facing string. 2020-04-15 15:00:11 -04:00
Shane McDonald
29a582f869 Dockerfile organization 2020-04-15 14:43:59 -04:00
mabashian
be0a7a2aa9 Pass route contents as child instead of using render prop 2020-04-15 14:33:35 -04:00
mabashian
d0d8d1c66c Fix schedule edit prop types error thrown on schedule prop 2020-04-15 14:33:35 -04:00
mabashian
8a8a48a4ff Fix prop types on schedule edit 2020-04-15 14:33:35 -04:00
mabashian
b0aa795b10 Remove rogue console.logs 2020-04-15 14:33:35 -04:00
mabashian
017064aecf Adds support for editing proj/jt/wfjt schedule 2020-04-15 14:33:35 -04:00
softwarefactory-project-zuul[bot]
7311ddf722 Merge pull request #6562 from AlexSCorey/6333-SurveyCleanUp
Fixes several things about Survey List

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 18:33:07 +00:00
Christian Adams
69835e9895 Write logs to /dev/null if logging is not enabled 2020-04-15 14:17:21 -04:00
Christian Adams
85960d9035 Volume mount supervisor dir to both containers 2020-04-15 14:11:15 -04:00
Christian Adams
c8ceb62269 Rename awx rsyslog socket and PID dir 2020-04-15 14:11:15 -04:00
chris meyers
1acca459ef nice error message when redis is down
* `awx-manage run_wsbroadcast --status` prints a nice error message if someone
failed to start awx services (i.e. redis)
2020-04-15 13:28:13 -04:00
Alex Corey
ee6fda9f8a moves validator function 2020-04-15 13:06:30 -04:00
Alex Corey
a95632c349 Adds error handling and validation.
Also addresses small PR issues
2020-04-15 13:06:30 -04:00
Alex Corey
ed3b6385f1 Fixes several things about Survey List
Aligns Select All with other select buttons
Add required asterisk to those items that are required
Adds label for the Default and Question Type column
Adds chips for multiselect items.
Adds RBAC to add and edit survey.
Also fixes a bug where the survey was not reloading properly after edit
2020-04-15 13:06:30 -04:00
softwarefactory-project-zuul[bot]
3518fb0c17 Merge pull request #6717 from ryanpetrello/custom-cred-plugin-instructions
update custom credential plugin docs to point at an example project

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 15:39:21 +00:00
softwarefactory-project-zuul[bot]
1289f141d6 Merge pull request #6716 from beeankha/remove_check_mode_text
Remove 'supports_check_mode' Text from Converted Collection Modules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 14:29:15 +00:00
Ryan Petrello
8464ec5c49 update custom credential plugin docs to point at an example project 2020-04-15 09:59:09 -04:00
beeankha
3bc5975b90 Remove 'supports_check_mode' text from converted Collection modules 2020-04-15 09:37:54 -04:00
softwarefactory-project-zuul[bot]
af7e9cb533 Merge pull request #6712 from ryanpetrello/tcp-timeout
properly implement TCP timeouts for external log aggregation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 03:51:22 +00:00
softwarefactory-project-zuul[bot]
af2a8f9831 Merge pull request #6665 from wenottingham/moar-data-plz
Collect information on inventory sources

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-15 00:51:59 +00:00
Bill Nottingham
f99a43ffa6 Collect information on inventory sources
Also remove one minor query from smart inventory collection that will
never return anything.
2020-04-14 19:15:19 -04:00
Ryan Petrello
262d99fde6 properly implement TCP timeouts for external log aggregation
see: https://github.com/ansible/awx/issues/6683
2020-04-14 17:06:30 -04:00
chris meyers
63f56d33aa show user unsafe name
* We log stats using a safe hostname because of prometheus requirements.
However, when we display the hostname to users, we should use the Instance
hostname. This change outputs Instance.hostname instead of the safe
prometheus name.
2020-04-14 16:59:34 -04:00
chris meyers
9cabf3ef4d do not include iso nodes in wsbroadcast status 2020-04-14 16:55:56 -04:00
softwarefactory-project-zuul[bot]
2855be9d26 Merge pull request #6689 from john-westcott-iv/collections_oauth_respect
Make the module util respect oauth token over username/password

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 20:33:53 +00:00
Marliana Lara
2524e8af47 Separate prompted modal section with divider and fix user word-wrap 2020-04-14 15:08:37 -04:00
Marliana Lara
f957ef7249 Add webhook fields to wf node job template detail 2020-04-14 15:07:32 -04:00
Marliana Lara
4551859248 Add WF details to workflow node view 2020-04-14 15:04:21 -04:00
softwarefactory-project-zuul[bot]
2a4912df3e Merge pull request #6706 from ryanpetrello/rsyslog-restart-warn
make rsyslog service restarts a bit less noisy

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 19:03:28 +00:00
chris meyers
daa312d7ee log file for wsbroadcast 2020-04-14 14:21:23 -04:00
Ryan Petrello
e95938715a make rsyslog service restarts a bit less noisy 2020-04-14 14:18:30 -04:00
softwarefactory-project-zuul[bot]
f5d4f7858a Merge pull request #6684 from nixocio/update_ui_docs_naming
Add note about code style for ui_next

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 18:05:22 +00:00
softwarefactory-project-zuul[bot]
25e0efd0b7 Merge pull request #6698 from wenottingham/the-time-zone-is-for-loading-and-saving-only
Cast the start/end times with timezone.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 17:53:35 +00:00
nixocio
47a007caee Add note about code style for ui_next
Add note about code style for `ui_next`.
2020-04-14 13:16:37 -04:00
Bill Nottingham
cd6d2ed53a Move the comma so unit test can filter things properly. 2020-04-14 13:12:03 -04:00
softwarefactory-project-zuul[bot]
4de61204c4 Merge pull request #6700 from AlanCoding/more_readme
Update AWX collection docs for release 11.0.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:55:14 +00:00
John Westcott IV
6b21f2042b Make the module util respect oauth token over username/password 2020-04-14 12:51:45 -04:00
softwarefactory-project-zuul[bot]
7820517734 Merge pull request #6664 from marshmalien/6530-wf-node-jt
Add JT wf node modal prompt details

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:46:20 +00:00
softwarefactory-project-zuul[bot]
2ba1288284 Merge pull request #6695 from ryanpetrello/memcached-cleanup
don't wait on memcached TCP

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:40:52 +00:00
softwarefactory-project-zuul[bot]
149f8a21a6 Merge pull request #6696 from ryanpetrello/rsyslog-splunk-extras
add a few minor logging changes to accommodate Splunk's API

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:40:19 +00:00
softwarefactory-project-zuul[bot]
602f2951b9 Merge pull request #6702 from ryanpetrello/rsyslogd-no-dev-log
rsyslogd: ignore /dev/log when we load imuxsock

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:33:50 +00:00
softwarefactory-project-zuul[bot]
b003f42e22 Merge pull request #6547 from AlexSCorey/6384-ConvertWFJTToHooks
Converts WFJTForm to Formik hooks

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:33:45 +00:00
softwarefactory-project-zuul[bot]
2ee2cd0bd9 Merge pull request #6688 from nixocio/ui_remove_console
Remove console.log

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-14 16:11:03 +00:00
AlanCoding
a79f2ff07a Update AWX collection docs for release 11.0.0 2020-04-14 12:06:26 -04:00
Ryan Petrello
75bb7cce22 don't wait on memcached TCP 2020-04-14 11:45:27 -04:00
Ryan Petrello
52a253ad18 add a few minor logging changes to accommodate Splunk's API
see: https://docs.splunk.com/Documentation/Splunk/8.0.3/Data/UsetheHTTPEventCollector
2020-04-14 11:45:04 -04:00
Ryan Petrello
0f74a05fea rsyslogd: ignore /dev/log when we load imuxsock 2020-04-14 11:34:58 -04:00
Alex Corey
440691387b Puts webhook key on the template object in WFJTEdit
Also adds aria-label to Label Select Options to improve test matchers
Improves the name of the template payload in WFJTAdd and WFJTEdit
Updates tests including a failing snapshot DeleteConfirmationModal
test that was failing in devel
2020-04-14 11:11:50 -04:00
Alex Corey
27e6c2d47d Adds tests 2020-04-14 11:11:50 -04:00
Alex Corey
8b69b08991 Adds formik hook functionality to wfjt form 2020-04-14 11:11:50 -04:00
Marliana Lara
8714bde1b4 Wrap entire date/time string in <Trans> tag 2020-04-14 11:08:12 -04:00
Marliana Lara
28b84d0d71 Use delete operator instead of destructuring 2020-04-14 11:08:12 -04:00
Marliana Lara
c6111fface Partition base resource into defaults and overrides 2020-04-14 11:08:12 -04:00
Marliana Lara
98e8a09ad3 Add JT details to wf node modal 2020-04-14 11:08:11 -04:00
nixocio
3f9af8fe69 Remove console.log
Remove console.log
2020-04-14 11:07:52 -04:00
Ryan Petrello
dbe949a2c2 Merge pull request #6697 from chrismeyersfsu/fix-collection_tests
ensure last comma removed in select
2020-04-14 11:05:29 -04:00
Bill Nottingham
a296f64696 Cast the start/end times with timezone. 2020-04-14 10:53:57 -04:00
chris meyers
ee18400a33 ensure last comma removed in select
* We strip out the JSON select fields in our tests since it is a SQLite
database underneath. Ideally, we would do something fancier, but we
aren't. In doing this stripping, we could strip the last element in the
projection list. This would result in an extra dangling comma. This
commit removes the dangling comma in the projection list after the
removal of JSON projections.
2020-04-14 10:44:02 -04:00
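The stripping this commit fixes can be sketched as follows (a hypothetical helper, not AWX's actual test code): removing JSON fields from a SQL projection list can leave a comma dangling before `FROM`, which a second pass has to clean up.

```python
import re

def strip_json_projections(select_clause, json_fields):
    """Drop JSON-typed fields from a SELECT projection list (sketch)."""
    for field in json_fields:
        # Remove the field name and any comma that immediately follows it
        select_clause = re.sub(r'\b%s\s*,?' % re.escape(field), '', select_clause)
    # If the *last* projection was stripped, a dangling comma is left
    # right before FROM; remove it so the SQL stays valid
    select_clause = re.sub(r',\s*(FROM)\b', r' \1', select_clause)
    return re.sub(r'\s+', ' ', select_clause).strip()

sql = "SELECT id, name, survey_spec FROM main_jobtemplate"
print(strip_json_projections(sql, ["survey_spec"]))
# SELECT id, name FROM main_jobtemplate
```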
softwarefactory-project-zuul[bot]
98a4e85db4 Merge pull request #6108 from rooftopcellist/rsyslog
Replace our external logging feature with Rsyslog

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2020-04-14 13:40:41 +00:00
Ryan Petrello
f7f1bdf9c9 properly configure supervisorctl to point at the web volume mount 2020-04-13 21:56:52 -04:00
Ryan Petrello
69cf915a20 add rsyslogd block to the k8s supervisord config file 2020-04-13 20:25:53 -04:00
Ryan Petrello
9440785bdd properly set the group on the rsyslog config 2020-04-13 19:46:34 -04:00
Christian Adams
ca7c840d8c Fix permissions on rsyslog.conf for k8s 2020-04-13 19:33:23 -04:00
softwarefactory-project-zuul[bot]
f85bcae89f Merge pull request #6685 from marshmalien/fix-user-loading
Fix route bug in User view

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 20:00:25 +00:00
Christian Adams
a0e31b9c01 Map logging timeout value to healthchecktimeout for http in rsyslog config 2020-04-13 15:22:16 -04:00
softwarefactory-project-zuul[bot]
c414fd68a0 Merge pull request #6176 from Ladas/send_also_workflows_as_part_of_unified_jobs
Send also workflows as part of unified jobs and send all changes to jobs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 18:41:36 +00:00
softwarefactory-project-zuul[bot]
2830cdfdeb Merge pull request #6668 from nixocio/ui_refactor_users_functional
Modify Users component to be function based

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 18:35:57 +00:00
softwarefactory-project-zuul[bot]
07e9b46643 Merge pull request #6656 from jlmitch5/withoutWithRouter
excise withRouter from function components

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 18:35:53 +00:00
softwarefactory-project-zuul[bot]
1f01521213 Merge pull request #6651 from nixocio/ui_issue_5820
Rename SCM to Source Control

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 18:35:46 +00:00
Marliana Lara
8587461ac9 Fix loading bug in User view 2020-04-13 14:19:16 -04:00
nixocio
e54e5280f2 Modify Users component to be function based
Modify Users component to be function based.
2020-04-13 13:43:22 -04:00
softwarefactory-project-zuul[bot]
516a44ce73 Merge pull request #6662 from keithjgrant/5909-jt-launch-prompt-2
JT Launch Prompting (phase 2)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 17:04:41 +00:00
Ryan Petrello
e52cebc28e rsyslogd: use %rawmsg-after-pri% instead of %msg%
after some prolonged RFC reading and tinkering w/ rsyslogd...

cpython's SysLogHandler doesn't emit RFC3164 formatted messages
in the format you'd expect; it's missing the ISO date, hostname, etc...
along with other header values; the handler implementation relies on you
to specify a syslog-like formatter (we've replaced all of this with our
own *custom* logstash-esque formatter that effectively outputs valid JSON
- without dates and other syslog header values prepended)

because of this unanticipated format, rsyslogd chokes when trying to
parse the message's parts;  AWX is emitting:

<priority>RAWJSON

...so the usage of `%msg%` isn't going to work for us, because rsyslog
tries to parse *all* of the possible headers (and yells, because it
can't find a date to parse):

see: https://www.rsyslog.com/files/temp/doc-indent/configuration/properties.html#message-properties

this is fine, because we don't *need* any of that message parsing
anyways; in the end, we're *just* interested in forwarding the raw
JSON/text content to the third party log handler
2020-04-13 11:44:00 -04:00
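The `<priority>RAWJSON` framing the commit describes can be reproduced with the stdlib (a minimal sketch; the JSON payload shown is illustrative, not AWX's actual formatter output): syslog priority is `facility * 8 + severity`, so a `user.info` message carries PRI 14, and nothing after the PRI part resembles an RFC 3164 header.

```python
import json
from logging.handlers import SysLogHandler

# PRI = (facility << 3) | severity; user.info -> (1 << 3) | 6 = 14
pri = (SysLogHandler.LOG_USER << 3) | SysLogHandler.LOG_INFO
payload = json.dumps({"message": "job finished", "logger_name": "awx"})
frame = "<%d>%s" % (pri, payload)
# No timestamp or hostname follows the PRI part, so rsyslog's header
# parsing for %msg% has nothing to match; %rawmsg-after-pri% yields
# exactly the JSON payload.
print(frame)  # <14>{"message": "job finished", "logger_name": "awx"}
```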
Ryan Petrello
bb5136cdae properly escape URL paths and querystrings for paths in logging settings 2020-04-13 11:44:00 -04:00
Ryan Petrello
b0db2b7bec add some exception handling for dealing with logging connection resets
when rsyslogd restarts due to config changes, there's a brief moment
where the socket will refuse connections on teardown; exception handling
is needed here to deal with that
2020-04-13 11:44:00 -04:00
Ryan Petrello
1000dc10fb add an rsyslogd config check to the logging test endpoint 2020-04-13 11:44:00 -04:00
Ryan Petrello
2a4b009f04 rsyslogd: use %rawmsg-after-pri% instead of %msg%
after some prolonged RFC reading and tinkering w/ rsyslogd...

cpython's SysLogHandler doesn't emit RFC3164 formatted messages
in the format you'd expect; it's missing the ISO date, hostname, etc...
along with other header values; the handler implementation relies on you
to specify a syslog-like formatter (we've replaced all of this with our
own *custom* logstash-esque formatter that effectively outputs valid JSON
- without dates and other syslog header values prepended)

because of this unanticipated format, rsyslogd chokes when trying to
parse the message's parts;  AWX is emitting:

<priority>RAWJSON

...so the usage of `%msg%` isn't going to work for us, because rsyslog
tries to parse *all* of the possible headers (and yells, because it
can't find a date to parse):

see: https://www.rsyslog.com/files/temp/doc-indent/configuration/properties.html#message-properties

this is fine, because we don't *need* any of that message parsing
anyways; in the end, we're *just* interested in forwarding the raw
JSON/text content to the third party log handler
2020-04-13 11:44:00 -04:00
Ryan Petrello
8cdd42307c clarify that logging username/password is only valid for HTTP/s 2020-04-13 11:44:00 -04:00
Ryan Petrello
269558876e only use a basic auth password for external logging if username is set 2020-04-13 11:44:00 -04:00
Ryan Petrello
bba680671b when writing the rsyslog config, do it post-commit
there's a race condition if we do this pre-commit where the correct
value isn't actually *persisted* to the database yet, and we end up
saving the *prior* setting values
2020-04-13 11:44:00 -04:00
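The race this commit describes can be modeled with a toy transaction (hypothetical names; the real code would use Django's `transaction.on_commit`): a config write that runs before commit reads the prior setting value, while one deferred to post-commit sees the persisted one.

```python
class FakeTransaction:
    """Toy stand-in for a DB transaction with post-commit hooks."""

    def __init__(self, db):
        self.db, self.pending, self.hooks = db, {}, []

    def set(self, key, value):
        self.pending[key] = value  # not visible in db until commit

    def on_commit(self, hook):  # mirrors django.db.transaction.on_commit
        self.hooks.append(hook)

    def commit(self):
        self.db.update(self.pending)
        for hook in self.hooks:
            hook()


db = {"LOG_AGGREGATOR_HOST": "old-host"}
txn = FakeTransaction(db)
txn.set("LOG_AGGREGATOR_HOST", "new-host")

written = []
# Writing the rsyslog config here, pre-commit, would render "old-host";
# deferring to on_commit guarantees the persisted value is used.
txn.on_commit(lambda: written.append(db["LOG_AGGREGATOR_HOST"]))
txn.commit()
print(written)  # ['new-host']
```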
Ryan Petrello
f70a76109c make rsyslog fall back to no-op if logging is disabled 2020-04-13 11:44:00 -04:00
Christian Adams
5d54877183 Add action to default rsyslog.conf so supervisor starts correctly the first time 2020-04-13 11:44:00 -04:00
Ryan Petrello
f7dac8e68d more external logging unit test fixups 2020-04-13 11:44:00 -04:00
Ryan Petrello
39648b4f0b fix up a few test and lint errors related to external logging 2020-04-13 11:44:00 -04:00
Christian Adams
b942fde59a Ensure log messages have valid JSON
- Fix messages getting concatenated at 8k
 - Fix rsyslog cutting off the opening brace of log messages
 - Make valid default conf and emit logs based on presence of .sock and
 settings
2020-04-13 11:44:00 -04:00
Ryan Petrello
ce82b87d9f rsyslog hardening (fixing a few weird things we noticed) 2020-04-13 11:44:00 -04:00
Christian Adams
70391f96ae Revert rsyslog valid config to one that fails intentionally 2020-04-13 11:43:59 -04:00
Christian Adams
2329c1b797 Add rsyslog config to container from file for consistency 2020-04-13 11:43:59 -04:00
Christian Adams
470159b4d7 Enable innocuous but valid config for rsyslog if disabled 2020-04-13 11:43:59 -04:00
Christian Adams
e740340793 ConfigMap rsyslog conf files for k8 2020-04-13 11:43:59 -04:00
Christian Adams
4d5507d344 Add default rsyslog.conf without including /etc/rsyslog.conf 2020-04-13 11:43:59 -04:00
Christian Adams
d350551547 Tweaks to Test Button logic and cleans up flake8 and test failures 2020-04-13 11:43:59 -04:00
Christian Adams
7fd79b8e54 Remove unneeded logging sock variable 2020-04-13 11:43:59 -04:00
John Mitchell
eb12f45e8e add ngToast disable on timeout for log agg notifications, and disable test button until active test completes. 2020-04-13 11:43:59 -04:00
Christian Adams
fb047b1267 Add unit tests for reconfiguring rsyslog & for test endpoint 2020-04-13 11:43:59 -04:00
Christian Adams
d31c528257 Fix Logging settings "Test" button functionality 2020-04-13 11:43:59 -04:00
Christian Adams
996d7ce054 Move supervisor and rsyslog sock files to their own dirs under /var/run 2020-04-13 11:43:59 -04:00
Christian Adams
7040fcfd88 Fix container rsyslog dir permissions 2020-04-13 11:43:59 -04:00
John Mitchell
88ca4b63e6 update configure tower in tower test ui for log aggregator form 2020-04-13 11:43:59 -04:00
Shane McDonald
c0af3c537b Configure rsyslog to listen over a unix domain socket instead of a port
- Add a placeholder rsyslog.conf so it doesn't fail on start
 - Create access restricted directory for unix socket to be created in
 - Create RSyslogHandler to exit early when logging socket doesn't exist
 - Write updated logging settings when dispatcher comes up and restart rsyslog so they take effect
 - Move rsyslogd to the web container and create rpc supervisor.sock
 - Add env var for supervisor.conf path
2020-04-13 11:43:59 -04:00
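A minimal sketch of the early-exit handler mentioned above (the class body and socket-path handling are assumptions for illustration, not AWX's actual implementation):

```python
import logging
import os
from logging.handlers import SysLogHandler

# Sketch (assumed details): when the rsyslog unix socket hasn't been
# created yet, silently drop the record instead of raising, so logging
# calls are safe before rsyslog comes up.
class RSyslogHandler(SysLogHandler):
    def emit(self, record):
        if isinstance(self.address, str) and not os.path.exists(self.address):
            return  # socket not there yet; exit early
        super().emit(record)
```

With this in place, callers can log unconditionally; records are only forwarded once the socket exists.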
Christian Adams
f8afae308a Add rsyslog to supervisor for the task container
- Add proper paths for rsyslog's supervisor logs
 - Do not enable debug mode for rsyslogd
 - Include system rsyslog.conf, and specify tower logging conf when
   starting rsyslog.
2020-04-13 11:43:59 -04:00
Christian Adams
4cd0d60711 Properly handle logger paths and https/http configuration
- log aggregator url paths were not being passed to rsyslog
 - http log services like loggly will now truly use http and port 80
 - add rsyslog.pid to .gitignore
2020-04-13 11:43:59 -04:00
Christian Adams
955d57bce6 Upstream rsyslog packaging changes
- add rsyslog repo to Dockerfile for AWX installation
 - Update Library Notes for requests-futures removal
2020-04-13 11:43:59 -04:00
Ryan Petrello
589d27c88c POC: replace our external log aggregation feature with rsyslog
- this change adds rsyslog (https://github.com/rsyslog/rsyslog) as
  a new service that runs on every AWX node (managed by supervisord)
  in particular, this feature requires a recent version (v8.38+) of
  rsyslog that supports the omhttp module
  (https://github.com/rsyslog/rsyslog-doc/pull/750)
- the "external_logger" handler in AWX is now a SysLogHandler that ships
  logs to the local UDP port where rsyslog is configured to listen (by
  default, 51414)
- every time a LOG_AGGREGATOR_* setting is changed, every AWX node
  reconfigures and restarts its local instance of rsyslog so that its
  forwarding settings match what has been configured in AWX
- unlike the prior implementation, if the external logging aggregator
  (splunk/logstash) goes temporarily offline, rsyslog will retain the
  messages and ship them when the log aggregator is back online
- 4xx or 5xx level errors are recorded at /var/log/tower/external.err
2020-04-13 11:43:59 -04:00
softwarefactory-project-zuul[bot]
eafb751ecc Merge pull request #6679 from ryanpetrello/fix-6675
skip non-files when consuming events synced from isolated hosts

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 15:36:42 +00:00
softwarefactory-project-zuul[bot]
30ea66023f Merge pull request #6671 from wenottingham/even-moar-data-plz
Collect task timing, warnings, and deprecations from job events

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 15:06:46 +00:00
Ryan Petrello
9843e21632 skip non-files when consuming events synced from isolated hosts
see: https://github.com/ansible/awx/issues/6675
2020-04-13 10:14:10 -04:00
softwarefactory-project-zuul[bot]
6002beb231 Merge pull request #6677 from chrismeyersfsu/fix-spelling
fix spelling mistake in wsbroadcast status output

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-13 14:13:24 +00:00
chris meyers
9c6e42fd1b fix spelling mistake in wsbroadcast status output 2020-04-13 09:37:32 -04:00
softwarefactory-project-zuul[bot]
eeab4b90a5 Merge pull request #6568 from AlanCoding/whoops_not_changed
Do not set changed=True if the object did not change

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-10 00:16:02 +00:00
Keith Grant
7827a2aedd fix double-fetch of cred types in launch prompts 2020-04-09 16:07:06 -07:00
softwarefactory-project-zuul[bot]
a7f1a36ed8 Merge pull request #6670 from ryanpetrello/redis-fixup
work around redis connection failures in the callback receiver

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-09 21:41:08 +00:00
Bill Nottingham
d651786206 Collect task timing, warnings, and deprecations from job events
Timing information requires ansible-runner >= 1.4.6.
2020-04-09 17:27:19 -04:00
softwarefactory-project-zuul[bot]
19e4758be1 Merge pull request #6637 from john-westcott-iv/tower_workflow_job_lanch_update
Initial commit of tests for tower_workflow_launch

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-09 19:53:35 +00:00
softwarefactory-project-zuul[bot]
fe9de0d4cc Merge pull request #6658 from mabashian/6655-job-redirect
Fixes issue where job type redirects weren't firing correctly

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-09 19:47:20 +00:00
Ryan Petrello
80147acc1c work around redis connection failures in the callback receiver
if redis stops/starts, sometimes the callback receiver doesn't recover
without a restart; this fixes that
2020-04-09 15:38:03 -04:00
beeankha
4acdf8584b Update workflow_launch module and test playbook 2020-04-09 15:12:49 -04:00
beeankha
cf607691ac Pass data and errors more clearly, change extra_vars to be a dict, update test playbook to have a task utilizing extra_vars 2020-04-09 12:40:13 -04:00
beeankha
d7adcfb119 Revert unnecessary changes made to test playbook during rebase 2020-04-09 12:38:06 -04:00
beeankha
97d26728e4 Fix one more linter issue 2020-04-09 12:38:06 -04:00
John Westcott IV
6403895eae Putting tasks back to natural order 2020-04-09 12:38:06 -04:00
beeankha
8b26ff1fe6 Fix linter errors 2020-04-09 12:38:06 -04:00
beeankha
9ddd020348 Fix sanity tests and edit test playbook 2020-04-09 12:38:06 -04:00
John Westcott IV
a2d1c32da3 Initial commit of tests for tower_workflow_launch 2020-04-09 12:38:06 -04:00
Keith Grant
af18aa8456 restructure 'if's in LaunchPrompt 2020-04-09 08:58:12 -07:00
mabashian
188b23e88f No need to pass undefined explicitly. view will be undefined if it's not passed 2020-04-09 10:28:25 -04:00
mabashian
63bed7a30d Fixes issue where job type redirects weren't firing correctly 2020-04-09 10:28:25 -04:00
AlanCoding
fd93964953 Changed status tweaks for API validation and encryption
API validation topic:
 - do not set changed=True if the object did not actually change
 - deals with cases where API manipulates data before saving

Warn if encrypted data prevent accurate changed status

Handle false changed case of tower_user password
  password field not present in data

Test changed=True warning with JT/WFJT survey spec defaults
  case for list data in JSON
2020-04-09 09:58:12 -04:00
chris meyers
1f9f86974a test analytics table output
* unified_jobs output should include derived jobs i.e. project update,
inventory update, job
* This PR adds a test to ensure that.
2020-04-09 15:20:27 +02:00
Ladislav Smola
6a86af5b43 Use indexed timestamps
Use created and finished, which are indexed, to try to fetch all
states of jobs. If job is not finished, we might not get the
right terminal status, but that should be ok for now.
2020-04-09 15:20:27 +02:00
Ladislav Smola
6a503e152a Also send workflows as part of unified jobs
Workflows do not have a record in main_job, so the JOIN
was ignoring them. We need a LEFT JOIN to also include
workflows.

It also seems like we are not able to get a link to organizations
from workflows? When looking at:
<tower_url>#/organizations?organization_search=page_size:20;order_by:name

We don't seem to list a relation to workflows. Is it possible to get it from
somewhere?
2020-04-09 15:20:27 +02:00
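A toy illustration of the JOIN-vs-LEFT-JOIN behavior described above, using sqlite3 with table names only loosely modeled on AWX's schema:

```python
import sqlite3

# Toy tables (not AWX's real schema): workflow jobs have no main_job row,
# so an inner JOIN silently drops them while a LEFT JOIN keeps them.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE main_unifiedjob (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE main_job (unifiedjob_id INTEGER);
    INSERT INTO main_unifiedjob VALUES (1, 'playbook run'), (2, 'workflow job');
    INSERT INTO main_job VALUES (1);  -- workflows never get a row here
""")
inner = con.execute(
    "SELECT u.name FROM main_unifiedjob u "
    "JOIN main_job j ON j.unifiedjob_id = u.id").fetchall()
left = con.execute(
    "SELECT u.name FROM main_unifiedjob u "
    "LEFT JOIN main_job j ON j.unifiedjob_id = u.id").fetchall()
```

Here `inner` contains only the playbook run, while `left` also keeps the workflow job.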
Ladislav Smola
b7227113be Use modified to check if job should be sent to analytics
It can take several hours for a job to go from pending to
successful/failed state and we need to also send the job with
a changed state, otherwise the analytics will be incorrect.
2020-04-09 15:20:27 +02:00
softwarefactory-project-zuul[bot]
907da2ae61 Merge pull request #6660 from mabashian/6606-jt-launch
Pass empty params to launch endpoint rather than null

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 23:46:48 +00:00
Keith Grant
6f76b15d92 fix LaunchButton tests 2020-04-08 15:36:45 -07:00
mabashian
9d6fbd6c78 Updates launch button tests to reflect passing empty object rather than null for launch payload without prompts 2020-04-08 16:10:02 -04:00
mabashian
edb4dac652 Pass empty params to launch endpoint rather than null to alleviate 400 error when launching a JT with default creds. 2020-04-08 16:10:02 -04:00
Keith Grant
42898b94e2 add more prompt tests 2020-04-08 11:48:11 -07:00
softwarefactory-project-zuul[bot]
943543354a Merge pull request #6643 from mabashian/upgrade-pf-2.39.15
Upgrades pf deps to latest

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 18:11:04 +00:00
softwarefactory-project-zuul[bot]
2da22ccd8a Merge pull request #6659 from shanemcd/pre-tty
Enable tty in dev container

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 16:35:37 +00:00
Keith Grant
9cab5a5046 add 'other prompt' fields to launch API call 2020-04-08 08:58:14 -07:00
softwarefactory-project-zuul[bot]
e270a692b7 Merge pull request #6644 from jakemcdermott/6638-fix-initial-playbook-value
Only clear playbook when different project is selected

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 15:45:27 +00:00
Shane McDonald
677a8dae7b Enable tty in dev container
Pretty colors and real-time migration logs
2020-04-08 11:43:30 -04:00
John Mitchell
6eeb32a447 excise withRouter from function components 2020-04-08 10:59:57 -04:00
softwarefactory-project-zuul[bot]
e57991d498 Merge pull request #6652 from matburt/update_zome_docz
Update some contributing docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 14:58:40 +00:00
softwarefactory-project-zuul[bot]
4242bd55c2 Merge pull request #6639 from mabashian/route-render-prop
Converts most of our route render prop usage to children

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 14:49:42 +00:00
softwarefactory-project-zuul[bot]
e8fb466f0f Merge pull request #6646 from beeankha/credential_module_no_log
Activate no_log for Values in input Parameter of tower_credential Module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 14:18:08 +00:00
nixocio
869fcbf483 Rename SCM to Source Control
Rename `SCM` references to `Source Control`.
Also update tests to reflect this change.

closes: https://github.com/ansible/awx/issues/5820
2020-04-08 10:10:07 -04:00
Matthew Jones
6abeaf2c55 Update some contributing docs
* Update the tools called in the dev environment
* More RMQ purges from architecture docs
* Remove the old clusterdev target
2020-04-08 10:03:22 -04:00
mabashian
f734918d3e Removes withRouter from breadcrumbs in favor of hooks 2020-04-08 09:48:16 -04:00
softwarefactory-project-zuul[bot]
91f2e0c32b Merge pull request #6605 from ansible/firehose_pkey
update firehose script for bigint migration

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 13:33:19 +00:00
softwarefactory-project-zuul[bot]
88d6dd96fa Merge pull request #6645 from ryanpetrello/some-more-iso-cleanup
more ansible runner isolated cleanup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 13:19:06 +00:00
mabashian
7feac5ecd6 Updates routes in breadcrumbs so they no longer use the render prop 2020-04-08 09:17:46 -04:00
mabashian
193ec21149 Converts most of our route render prop usage to children 2020-04-08 09:17:46 -04:00
mabashian
14e62057da Fix linting error by not using index in key 2020-04-08 09:12:32 -04:00
softwarefactory-project-zuul[bot]
a26c0dfb8a Merge pull request #6629 from AlanCoding/one_token_to_rule_them_all
Document and align the env var for OAuth token

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-08 06:09:29 +00:00
Ryan Petrello
6b4219badb more ansible runner isolated cleanup
follow-up to https://github.com/ansible/awx/pull/6296
2020-04-08 01:18:05 -04:00
beeankha
1f598e1b12 Activate no_log for values in input parameter 2020-04-07 20:34:54 -04:00
softwarefactory-project-zuul[bot]
7ddd4d74c0 Merge pull request #6625 from marshmalien/jt-form-bugs
Fix JT form playbook select error message and more 

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 22:55:30 +00:00
softwarefactory-project-zuul[bot]
6ad6f48ff0 Merge pull request #6642 from jakemcdermott/update-contrib-doc
Add note to docs about async form behavior

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 22:32:48 +00:00
Jake McDermott
d736adbedc Only clear playbook when different project is selected 2020-04-07 18:12:48 -04:00
mabashian
c881762c97 Upgrades pf deps to latest 2020-04-07 18:07:47 -04:00
Jake McDermott
be5d067148 Add note to docs about async form behavior 2020-04-07 17:30:03 -04:00
Marliana Lara
189a10e35a Fix playbook error message and JT save bug 2020-04-07 17:01:53 -04:00
softwarefactory-project-zuul[bot]
285e9c2f62 Merge pull request #6635 from AlanCoding/no_tower_cli
Remove tower-cli from Zuul CI for AWX collection

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 20:46:45 +00:00
softwarefactory-project-zuul[bot]
054de87f8e Merge pull request #6601 from shanemcd/dont-delete-my-db
Use a docker volume for the dev env db

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 20:00:57 +00:00
softwarefactory-project-zuul[bot]
7de8a8700c Merge pull request #6487 from lj020326/devel
fix for CSRF issue in traefik configuration 

Reviewed-by: Shane McDonald <me@shanemcd.com>
             https://github.com/shanemcd
2020-04-07 20:00:51 +00:00
softwarefactory-project-zuul[bot]
4f7669dec1 Merge pull request #6634 from AlanCoding/silence
Silence deprecation warnings from tower_credential

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 19:30:45 +00:00
softwarefactory-project-zuul[bot]
25a1bc7a33 Merge pull request #6631 from ryanpetrello/refresh-token-expiry
properly respect REFRESH_TOKEN_EXPIRE_SECONDS when generating new tokens

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 18:28:26 +00:00
softwarefactory-project-zuul[bot]
955ef3e9cb Merge pull request #6541 from AlanCoding/jt_org_left_behind
Fix RBAC loose items from reversed decision on JT org permissions

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 17:41:31 +00:00
AlanCoding
0e8f2307fc Remove tower-cli from Zuul CI for AWX collection 2020-04-07 13:31:06 -04:00
AlanCoding
bcfd2d6aa4 Silence deprecation warnings from tower_credential 2020-04-07 13:24:34 -04:00
Shane McDonald
7e52f4682c Use a docker volume for the dev env db 2020-04-07 13:14:19 -04:00
Keith Grant
9c218fa5f5 flush out prompt misc fields 2020-04-07 09:41:45 -07:00
softwarefactory-project-zuul[bot]
508aed67de Merge pull request #6624 from marshmalien/6608-project-lookup-bug
Prevent project lookup from firing requests on every render

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 15:53:07 +00:00
Ryan Petrello
0bf1116ef8 properly respect REFRESH_TOKEN_EXPIRE_SECONDS when generating new tokens
see: https://github.com/ansible/awx/issues/6630
see: https://github.com/jazzband/django-oauth-toolkit/issues/746
2020-04-07 11:34:01 -04:00
AlanCoding
45df5ba9c4 Manually document tower host default 2020-04-07 10:18:55 -04:00
AlanCoding
b90a296d41 Document and align the env var for OAuth token 2020-04-07 10:00:02 -04:00
softwarefactory-project-zuul[bot]
d40143a63d Merge pull request #6607 from ryanpetrello/graphite-no-tags
don't send tags to the Grafana annotations API if none are specified

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 06:07:58 +00:00
softwarefactory-project-zuul[bot]
db40d550be Merge pull request #6472 from AlanCoding/no_required
Remove field properties which are default values anyway

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 02:57:19 +00:00
AlanCoding
da661e45ae Remove unnecessary module parameters
remove cases of required=False, the default
remove str type specifier, which is the default
remove supports_check_mode, which is not changeable
2020-04-06 22:08:41 -04:00
softwarefactory-project-zuul[bot]
58160b9eb4 Merge pull request #6623 from nixocio/ui_issue_6133
Update "Enable Webhooks" option in WFJT

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-07 01:05:03 +00:00
softwarefactory-project-zuul[bot]
05b28efd9c Merge pull request #6617 from chrismeyersfsu/fix-memcached
fix memcached in dev env

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 23:49:04 +00:00
softwarefactory-project-zuul[bot]
0b433ebb1c Merge pull request #6609 from beeankha/wfjt_module_inventory_fix
Resolve Name to ID Properly in WFJT Module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 23:42:34 +00:00
softwarefactory-project-zuul[bot]
5b3f5bf37d Merge pull request #6559 from jlmitch5/newNewAssocDisassocHostGroupsList
association and disassociation of host groups and inventory host groups list.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 23:07:19 +00:00
softwarefactory-project-zuul[bot]
397c0092a0 Merge pull request #6569 from ryanpetrello/log-decimal
properly serialize external logs that contain decimal.Decimal objects

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 23:07:15 +00:00
softwarefactory-project-zuul[bot]
362fdaeecc Merge pull request #6604 from jakemcdermott/remove-state-checks-from-user-test
Remove state checks from user list test

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 23:07:11 +00:00
softwarefactory-project-zuul[bot]
606c3c3595 Merge pull request #6338 from rooftopcellist/update_logstash_docs
Update logstash docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 21:10:44 +00:00
softwarefactory-project-zuul[bot]
42705c9eb0 Merge pull request #6545 from fosterseth/fix-4198-readd-user-to-org
Fix adding orphaned user to org

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 21:10:38 +00:00
Marliana Lara
c2ba495824 Prevent project lookup from firing requests on every render 2020-04-06 16:50:10 -04:00
nixocio
85a1c88653 Update "Enable Webhooks" option in WFJT
closes: https://github.com/ansible/awx/issues/6163
2020-04-06 16:48:18 -04:00
chris meyers
c4d704bee1 fix memcached in dev env
* create memcached dir via git so that the current user owns it.
Otherwise, docker will create the dir as root at runtime
2020-04-06 16:35:52 -04:00
softwarefactory-project-zuul[bot]
60d499e11c Merge pull request #6495 from john-westcott-iv/tower_credential_update_new
Initial conversion of tower_credential

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 20:29:44 +00:00
softwarefactory-project-zuul[bot]
bb48ef40be Merge pull request #6595 from nixocio/ui_docs_minor_update
Minor update UI docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 20:26:09 +00:00
Ryan Petrello
771ca2400a don't send tags to the Grafana annotations API if none are specified
see: https://github.com/ansible/awx/issues/6580
2020-04-06 15:47:48 -04:00
softwarefactory-project-zuul[bot]
735d44816b Merge pull request #6592 from kdelee/awxkit_wfjtn_identifier
make awxkit pass through identifier for wfjtn

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 19:42:10 +00:00
beeankha
e346493921 Add inventory param to the wfjt module test playbook 2020-04-06 15:21:57 -04:00
beeankha
bd39fab17a Resolve name to ID correctly in workflow jt module 2020-04-06 15:08:01 -04:00
John Mitchell
ce30594b30 update inventory and host routes to being child-based instead of render prop based 2020-04-06 15:05:11 -04:00
John Mitchell
2021c2a596 remove unnecessary eslint ignore comments, replace react router use with hooks where possible in inventories 2020-04-06 14:38:33 -04:00
John Mitchell
ecd1d09c9a add breadcrumb config for inv host facts and groups 2020-04-06 14:38:33 -04:00
John Mitchell
7dbde8d82c fix linting errors and add note to host groups disassocation modal 2020-04-06 14:38:33 -04:00
John Mitchell
4e64b17712 update hosts groups api GET to all_groups 2020-04-06 14:38:33 -04:00
John Mitchell
cc4c514103 add association and disassociation of groups on invhostgroups/hostgroups lists 2020-04-06 14:38:33 -04:00
John Mitchell
ab8726dafa move associate modal and disassociate button up to components for use across screens 2020-04-06 14:38:33 -04:00
Ryan Petrello
2cefba6f96 properly serialize external logs that contain decimal.Decimal objects 2020-04-06 14:24:24 -04:00
softwarefactory-project-zuul[bot]
592043fa70 Merge pull request #6588 from ryanpetrello/400-error-creds
fix a typo in the credentials UI

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 18:00:43 +00:00
Mat Wilson
59477aa221 update firehose script for bigint migration 2020-04-06 10:54:08 -07:00
Jake McDermott
279fe53837 Remove state checks from user list test
Don't check for loading in UserList test
2020-04-06 13:40:31 -04:00
Shane McDonald
bb319136e4 Merge pull request #6585 from shanemcd/cleanup-cleanup
Tidy up the dev environment a bit
2020-04-06 13:09:39 -04:00
beeankha
b0f68d97da Update comment in test playbook 2020-04-06 12:38:46 -04:00
softwarefactory-project-zuul[bot]
a46462eede Merge pull request #6526 from chrismeyersfsu/feature-memcached_socket_devel
Feature memcached socket devel

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 16:24:42 +00:00
softwarefactory-project-zuul[bot]
646e403fbd Merge pull request #6570 from marshmalien/6530-wf-node-inv-src-details
Add Inventory Source workflow node prompt details

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 16:10:33 +00:00
nixocio
64c846cfc1 Minor update UI docs
Fix typos
Highlight code sections
2020-04-06 11:36:41 -04:00
Elijah DeLee
8e07269738 make awxkit pass through identifier for wfjtn
We need this to be able to create workflow job template nodes with identifier
2020-04-06 11:26:56 -04:00
Shane McDonald
6fc815937b Tidy up the dev environment a bit 2020-04-06 11:13:51 -04:00
Ryan Petrello
014c995a8f fix a typo in the credentials UI
this is causing 400 level errors for some users
2020-04-06 10:45:33 -04:00
John Westcott IV
c1bb62cc36 Removing recursive check, allowing old pattern to commence 2020-04-06 10:11:18 -04:00
beeankha
f5cf7c204f Update unit test, edit credential module to pass sanity tests 2020-04-06 10:11:18 -04:00
John Westcott IV
6d08e21511 Resolving comment and updating tests 2020-04-06 10:11:18 -04:00
John Westcott IV
8b881d195d Change lookup to include organization 2020-04-06 10:11:18 -04:00
John Westcott IV
5c9ff51248 Change compare_fields to static method 2020-04-06 10:11:18 -04:00
AlanCoding
3f64768ba8 loosen some credential test assertions 2020-04-06 10:11:18 -04:00
John Westcott IV
fd24918ba8 Initial conversion of tower_credential 2020-04-06 10:11:18 -04:00
softwarefactory-project-zuul[bot]
f04e7067e8 Merge pull request #6582 from chrismeyersfsu/fix-redis_startup
align with openshift

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-06 13:52:08 +00:00
softwarefactory-project-zuul[bot]
9a91c0bfb2 Merge pull request #6572 from AlanCoding/approval_identifier
Allow setting identifier for approval nodes

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2020-04-06 13:39:39 +00:00
chris meyers
c06188da56 align with openshift 2020-04-06 09:16:46 -04:00
chris meyers
7433aab258 switch memcached from tcp to unix domain socket 2020-04-06 08:35:12 -04:00
chris meyers
37a715c680 use memcached unix domain socket rather than tcp 2020-04-06 08:35:12 -04:00
chris meyers
3d9eb3b600 align with openshift 2020-04-05 20:07:15 -04:00
softwarefactory-project-zuul[bot]
99511de728 Merge pull request #6554 from wenottingham/this-may-be-what-alan-suggested
Allow disassociating orphaned users from credentials

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 21:26:49 +00:00
softwarefactory-project-zuul[bot]
82b1b85fa4 Merge pull request #6421 from AlexSCorey/6183-SurveyPreview
Adds Survey Preview Functionality

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 21:20:41 +00:00
softwarefactory-project-zuul[bot]
2aa29420ee Merge pull request #6565 from chrismeyersfsu/fix-schema_workflow_identifier
static identifier in OPTIONS response for workflow job template node

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 21:20:37 +00:00
softwarefactory-project-zuul[bot]
9e331fe029 Merge pull request #6567 from mabashian/6531-approval-drawer-item-id
Adds workflow job id to header of approval drawer items

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 21:14:53 +00:00
softwarefactory-project-zuul[bot]
591cdb6015 Merge pull request #6566 from mabashian/4227-wf-template-row-rbac
Fix bug where JT is disabled in workflow node form for user with execute permissions on said JT

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 20:56:25 +00:00
softwarefactory-project-zuul[bot]
bc244b3600 Merge pull request #6564 from dsesami/column-type-name-change
Changed column label for plain jobs to "Playbook Run" to align with search

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 20:38:20 +00:00
AlanCoding
dbe3863b04 Allow setting identifier for approval nodes 2020-04-03 15:33:57 -04:00
Marliana Lara
ae021c37e3 Add inventory source prompt details 2020-04-03 14:56:20 -04:00
Keith Grant
8baa9d8458 clean up launch prompt credentials, display errors 2020-04-03 11:47:06 -07:00
Daniel Sami
3c888475a5 Changed displayed type name of plain jobs
updated and added i18n

removed import

prettier
2020-04-03 14:35:09 -04:00
softwarefactory-project-zuul[bot]
29b567d6e1 Merge pull request #6550 from ryanpetrello/fix-minutely-hourly
remove the limitation on (very) old DTSTART values for schedules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 18:32:15 +00:00
softwarefactory-project-zuul[bot]
00aa1ad295 Merge pull request #6553 from ryanpetrello/remove-manual-inv-source-for-good
remove deprecated manual inventory source support

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 18:09:36 +00:00
Bill Nottingham
4f3213715e Allow disassociating any user from a credential role.
The previous restriction prevented removing roles from users no longer in the organization.
2020-04-03 13:39:28 -04:00
mabashian
0389e72197 Adds workflow job id to approval header link to match up with what's displayed on the jobs list 2020-04-03 13:39:06 -04:00
mabashian
0732795ecc Rows in the wfjt node form templates list should only be disabled if the user cannot start the job. 2020-04-03 13:27:28 -04:00
chris meyers
a26df3135b static identifier in docs
* The OPTIONS response description for the workflow job template node identifier
value was an ever-changing uuid4(). This tells the user the wrong
thing: we cannot know what uuid4() is going to be in the docs. Instead,
for the OPTIONS response description, tell the user the form that the
uuid4() takes, i.e. xxx-xxxx...
* Note that the API browser still populates a uuid4 for the user when it
generates the sample POST data. This is nice.
2020-04-03 13:12:49 -04:00
softwarefactory-project-zuul[bot]
a904aea519 Merge pull request #6551 from chrismeyersfsu/fix-nonce_replay_timestamp
simplify nonce creation and extraction

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 16:32:09 +00:00
Ryan Petrello
6bd5053ae8 remove the limitation on (very) old DTSTART values for schedules 2020-04-03 10:59:35 -04:00
Ryan Petrello
8b00b8c9c2 remove deprecated legacy manual inventory source support
see: https://github.com/ansible/awx/issues/6309
2020-04-03 10:54:43 -04:00
softwarefactory-project-zuul[bot]
2b9acd78c8 Merge pull request #6522 from chrismeyersfsu/feature-wsbroadcast_status
add broadcast websocket status command

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 02:25:25 +00:00
chris meyers
d7f0642f48 add ws broadcast status to sos report 2020-04-02 21:46:12 -04:00
chris meyers
8bbae0cc3a color output of ws broadcast connection status 2020-04-02 21:46:12 -04:00
chris meyers
c00f1505d7 add broadcast websocket status command 2020-04-02 21:46:12 -04:00
softwarefactory-project-zuul[bot]
a08e6691fb Merge pull request #6266 from rooftopcellist/configmap_container_files
ConfigMap supervisor configs and launch scripts for k8s

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 01:07:27 +00:00
softwarefactory-project-zuul[bot]
98bc499498 Merge pull request #6468 from jlmitch5/hostGroupsList
add inventory host groups list and host groups lists

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-03 00:46:33 +00:00
chris meyers
6d0c42a91a align with configmap changes 2020-04-02 20:05:26 -04:00
chris meyers
79c5a62279 simplify nonce creation and extraction
* the time() library also supports leap seconds
2020-04-02 19:57:50 -04:00
softwarefactory-project-zuul[bot]
3bb671f3f2 Merge pull request #6497 from john-westcott-iv/tower_notification_update
Initial conversion of tower_notification

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 23:29:57 +00:00
Keith Grant
0b9c5c410a add credential select list to launch CredentialsStep 2020-04-02 16:29:40 -07:00
softwarefactory-project-zuul[bot]
d77d5a7734 Merge pull request #6548 from marshmalien/5636-translate-login
Mark login button for translation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 23:20:37 +00:00
Keith Grant
0a00a3104a add CredentialTypesAPI.loadAllTypes helper function 2020-04-02 13:55:30 -07:00
John Mitchell
ab36129395 add inventory host groups list and host groups lists 2020-04-02 15:02:41 -04:00
AlanCoding
e99500cf16 Mark test as xfail, move to unit testing 2020-04-02 14:48:33 -04:00
softwarefactory-project-zuul[bot]
299497ea12 Merge pull request #6490 from marshmalien/5997-wf-view-node
Hook up view node button in workflow visualizer

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 18:27:09 +00:00
Seth Foster
843c22c6b1 Allow orphaned user to be added to org
Fixed bug where an org admin was not able to add
an orphaned user to the org, in the case where the
orphan had an ancestor role that matched one of the
roles of the org admin.

scenario to fix -- sue is member of cred1, where cred1 is
part of org1. org1 admin cannot add sue to org1, because
the cred1 role for sue has an ancestor to org1 role. The org1
admin cannot change or attach sue to org1.

tower issue #4198 and #4197
2020-04-02 14:24:55 -04:00
Marliana Lara
86b49b6fe2 Mark login button for translation 2020-04-02 14:19:13 -04:00
Christian Adams
9489f00ca4 Align k8s and ocp supervisor scripts
- Handle scl enable calls for python processes that use postgresql
- Handle ocp-specific vars better
chris meyers
6d60e7dadc align with openshift 2020-04-02 13:56:33 -04:00
Christian Adams
346b9b9e3e ConfigMap supervisor configs and launch scripts for k8s 2020-04-02 13:56:33 -04:00
softwarefactory-project-zuul[bot]
99384b1db9 Merge pull request #6506 from shanemcd/stateless-set
Switch from StatefulSet to Deployment

Reviewed-by: Matthew Jones <mat@matburt.net>
             https://github.com/matburt
2020-04-02 17:51:25 +00:00
Marliana Lara
d1b5a60bb9 Add project node details 2020-04-02 13:09:24 -04:00
Shane McDonald
d57258878d Update more references to statefulset 2020-04-02 12:44:26 -04:00
softwarefactory-project-zuul[bot]
48414f6dab Merge pull request #6542 from chrismeyersfsu/fix-align_redis
Fix align redis

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 16:22:31 +00:00
Shane McDonald
ff0186f72b Delete k8s StatefulSet if it exists (for upgrades) 2020-04-02 12:21:35 -04:00
softwarefactory-project-zuul[bot]
a682565758 Merge pull request #6385 from AlexSCorey/6317-ConvertJTFormstoFormikHooks
Uses formik hooks for JT Form

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 15:52:35 +00:00
softwarefactory-project-zuul[bot]
0dee2e5973 Merge pull request #6482 from AlexSCorey/5901-SupportForWFJTSurvey
Adds Survey Functionality to WFJT

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 15:19:21 +00:00
chris meyers
929f4bfb81 start redis container with conf file 2020-04-02 11:13:35 -04:00
AlanCoding
ac474e2108 Fix RBAC loose items from reversed decision on JT org permissions 2020-04-02 10:17:04 -04:00
Alex Corey
d6722c2106 Adds tests for Survey Preview functionality 2020-04-02 10:02:35 -04:00
Alex Corey
6eef0b82bd Adds Survey Preview Functionality 2020-04-02 10:02:35 -04:00
Alex Corey
fb4343d75e Removes unnecessary formikContext items in favor of useField.
Removes the OrgId value from formik and gets that value from the project field.
Updates tests and type.js to reflect those changes.
2020-04-02 09:31:35 -04:00
Alex Corey
a867a32b4e Uses formik hooks for JT Form 2020-04-02 09:30:12 -04:00
Shane McDonald
3060505110 Switch from StatefulSet to Deployment
We can do this now that we dropped RabbitMQ.
2020-04-02 09:24:49 -04:00
beeankha
5d68f796aa Rebase + fix typos 2020-04-02 09:21:33 -04:00
AlanCoding
15036ff970 Add unit tests for notification module 2020-04-02 09:14:50 -04:00
John Westcott IV
32783f7aaf Fixing linting errors 2020-04-02 09:14:50 -04:00
John Westcott IV
8699a8fbc2 Resolving comments on PR
Made notification type optional

Fixed examples to use notification_configuration

Fixed defaults for headers to prevent deprecation warning

Removed default on messages
2020-04-02 09:14:49 -04:00
John Westcott IV
b4cde80fa9 Updating example to match test 2020-04-02 09:14:49 -04:00
John Westcott IV
eb4db4ed43 Adding field change to readme and example and test of custom messages 2020-04-02 09:14:49 -04:00
John Westcott IV
649aafb454 Initial conversion of tower_notification 2020-04-02 09:14:49 -04:00
softwarefactory-project-zuul[bot]
b6c272e946 Merge pull request #6525 from ryanpetrello/bye-bye-activity-stream-middleware
get rid of the activity stream middleware

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-02 05:25:24 +00:00
Ryan Petrello
9fe2211f82 get rid of the activity stream middleware
it has bugs and is very confusing

see: https://github.com/ansible/tower/issues/4037
2020-04-01 16:02:42 -04:00
Marliana Lara
4704e24c24 Fetch full resource object and replace the matching node 2020-04-01 15:21:42 -04:00
softwarefactory-project-zuul[bot]
e5f293ce52 Merge pull request #6486 from keithjgrant/5909-jt-launch-prompt
JT Launch Prompting (phase 1)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-01 18:29:27 +00:00
softwarefactory-project-zuul[bot]
d64b898390 Merge pull request #6491 from john-westcott-iv/second_tower_job_template_update
Second attempt at converting tower_job_template

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-01 14:09:22 +00:00
softwarefactory-project-zuul[bot]
498c525b34 Merge pull request #6513 from SebastianThorn/devel
[DOC] Adds comment about needing to be a pem-file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-01 13:22:03 +00:00
beeankha
bb184f8ffb Update booleans to pass linter 2020-04-01 08:58:28 -04:00
softwarefactory-project-zuul[bot]
7f537dbedf Merge pull request #6515 from ryanpetrello/cleanup-some-more-redis
remove some unused code from the redis rewrite

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-01 12:38:48 +00:00
Ryan Petrello
f9b8a69f7b remove some unused code from the redis rewrite 2020-04-01 08:03:59 -04:00
Sebastian Thörn
bc228b8d77 Adds comment about needing to be a pem-file
This needs to be a .pem-file
2020-04-01 11:54:07 +02:00
Keith Grant
7710ad2e57 move OptionsList to components; add launch prompt tests 2020-03-31 13:59:14 -07:00
beeankha
9f2c9b13d7 Update unit test, extra_vars handling, and edit README 2020-03-31 16:16:11 -04:00
softwarefactory-project-zuul[bot]
6940704deb Merge pull request #6509 from ryanpetrello/twisted-cves
update to the latest twisted to address two open CVEs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 19:59:11 +00:00
softwarefactory-project-zuul[bot]
6b9cacb85f Merge pull request #6508 from ryanpetrello/django-extensions-bump
bump django-extensions version to address a bug in shell_plus

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 19:16:49 +00:00
softwarefactory-project-zuul[bot]
cfa0fdaa12 Merge pull request #6337 from rebeccahhh/activity-stream-grab-bag
add in summary fields to activity stream logging output

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 19:15:51 +00:00
Ryan Petrello
4423e6edae update to the latest twisted to address two open CVEs 2020-03-31 13:47:56 -04:00
softwarefactory-project-zuul[bot]
13faa0ed2e Merge pull request #6489 from wenottingham/that-ain't-right
Don't return different fields for smart vs non-smart inventories

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 17:43:03 +00:00
Ryan Petrello
42336355bb bump django-extensions version to address a bug in shell_plus
see: https://github.com/ansible/awx/pull/6441
see: e8d5daa06e
2020-03-31 13:39:13 -04:00
Marliana Lara
c18aa90534 Add timeout detail to node view modal 2020-03-31 13:39:05 -04:00
softwarefactory-project-zuul[bot]
39460fb3d3 Merge pull request #6505 from squidboylan/tower_group_integration_tests
Collection: add tower_group child group tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 17:31:00 +00:00
Keith Grant
4f51c1d2c9 fix LaunchButton tests 2020-03-31 10:09:33 -07:00
Caleb Boylan
04ccff0e3f Collection: add tower_group child group tests 2020-03-31 09:43:53 -07:00
softwarefactory-project-zuul[bot]
2242119182 Merge pull request #6419 from mabashian/5864-schedule-add-2
Implement schedule add form on JT/WFJT/Proj

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 16:06:46 +00:00
Marliana Lara
5cba34c34d Use styles to add prompt header spacing 2020-03-31 12:05:13 -04:00
mabashian
33a699b8ae Display form errors on new lines if there are multiple 2020-03-31 10:57:30 -04:00
softwarefactory-project-zuul[bot]
344a4bb238 Merge pull request #6494 from ryanpetrello/quiter-pg-migration
detect event migration tables in a less noisy way

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 14:30:15 +00:00
softwarefactory-project-zuul[bot]
0beda08cf9 Merge pull request #6471 from megabreit/jinja2-installer-fix
support for older jinja2 in installer #5501

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2020-03-31 14:10:21 +00:00
softwarefactory-project-zuul[bot]
2264a98c04 Merge pull request #6455 from AlanCoding/auth_errors
Improve error handling related to authentication

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-31 11:28:37 +00:00
Ryan Petrello
d19a9db523 detect event migration tables in a less noisy way
see: https://github.com/ansible/awx/issues/6493
2020-03-31 00:05:30 -04:00
John Westcott IV
4b76332daf Added notification of removal of extra_vars_path 2020-03-30 23:35:11 -04:00
John Westcott IV
db38339179 Second attempt at converting tower_job_template 2020-03-30 23:35:11 -04:00
softwarefactory-project-zuul[bot]
5eddcdd5f5 Merge pull request #6484 from ryanpetrello/inv-source-required
prevent manual updates at POST /api/v2/inventory_sources/N/update/

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 23:46:13 +00:00
softwarefactory-project-zuul[bot]
3480d2da59 Merge pull request #6488 from ryanpetrello/galaxy-role-host-key-checking
disable host key checking when installing galaxy roles/collections

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 22:06:52 +00:00
Keith Grant
e60e6c7d08 pass prompted params through to launch API request 2020-03-30 14:39:16 -07:00
Keith Grant
55356ebb51 set default values on prompts 2020-03-30 14:37:07 -07:00
Keith Grant
7f4bbbe5c5 add launch prompt inventory step 2020-03-30 14:37:07 -07:00
Keith Grant
49b1ce6e8c add skeleton of launch prompt wizard 2020-03-30 14:37:07 -07:00
Marliana Lara
caaefef900 Add modal to show a preview of node prompt values 2020-03-30 17:31:50 -04:00
Bill Nottingham
96576b0e3d Don't return different fields for smart vs non-smart inventories 2020-03-30 17:15:55 -04:00
mabashian
288ce123ca Adds resources_needed_to_start to the list of keys for error message handling 2020-03-30 17:04:17 -04:00
Ryan Petrello
140dbbaa7d disable host key checking when installing galaxy roles/collections
see: https://github.com/ansible/awx/issues/5947
2020-03-30 17:03:14 -04:00
softwarefactory-project-zuul[bot]
e9d11be680 Merge pull request #6104 from mabashian/6086-proj-form-org
Fixes issues with organization when saving the project form.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 20:47:12 +00:00
softwarefactory-project-zuul[bot]
d7f117e83f Merge pull request #6448 from AlexSCorey/6446-HostAdd
Fixes HostAdd Form layout issue

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 20:47:08 +00:00
lj020326
eef1246e0b Merge pull request #1 from lj020326/lj020326-patch-1
Update settings.py to resolve CSRF issue in traefik configuration
2020-03-30 16:29:06 -04:00
lj020326
65e38aa37d Update settings.py
This is needed when a load balancer (e.g. traefik) proxies into nginx;
otherwise requests fail with a CSRF error
ref: https://stackoverflow.com/questions/27533011/django-csrf-error-casused-by-nginx-x-forwarded-host

resolved by adding USE_X_FORWARDED_HOST using the following similar issue as a reference:
https://github.com/catmaid/CATMAID/issues/1781
2020-03-30 16:27:40 -04:00
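The settings.py change described above can be sketched as a minimal Django settings fragment. Only `USE_X_FORWARDED_HOST` is named in the commit message; the `SECURE_PROXY_SSL_HEADER` line is an assumption about the common companion setting when TLS terminates at the proxy, not part of the actual change.

```python
# Minimal sketch of the proxy-related Django settings the commit describes.
# When nginx sits behind a load balancer such as traefik, Django must build
# absolute URLs (and validate CSRF origins) from the X-Forwarded-Host header
# set by the proxy, rather than from the internal Host header.
USE_X_FORWARDED_HOST = True

# Assumption (not named in the commit): commonly paired with the above when
# TLS terminates at the proxy, so Django treats the original request as HTTPS.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```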
mabashian
c7b23aac9b Removes static run on string options and opts for the more dynamic ux pattern already adopted in the old UI 2020-03-30 15:37:56 -04:00
mabashian
b4ea60eb79 Fixes issue where repeat frequency was not displaying correctly for schedules that only run once 2020-03-30 15:37:56 -04:00
mabashian
24c738c6d8 Moves generation of today and tomorrow strings out of the return of the ScheduleForm 2020-03-30 15:37:56 -04:00
mabashian
0c26734d7d Move the construction of the rule object out to its own function 2020-03-30 15:37:56 -04:00
mabashian
d9b613ccb3 Implement schedule add form on JT/WFJT/Proj 2020-03-30 15:37:56 -04:00
Ryan Petrello
831bf9124f prevent manual updates at POST /api/v2/inventory_sources/N/update/
see: https://github.com/ansible/awx/issues/6309
2020-03-30 15:35:04 -04:00
Alex Corey
0b31cad2db Adds Survey Functionality to WFJT 2020-03-30 14:20:44 -04:00
AlanCoding
059e744774 Address errors with login and logout in python2
Addresses scenarios where a username and password
  were used and the collection obtained a token

Fix error sendall() arg 1 must be string or buffer

Improve error handling related to authentication
  clear the query after request and before logout
  put response data in error in both cases
2020-03-30 13:48:14 -04:00
softwarefactory-project-zuul[bot]
827adbce76 Merge pull request #6463 from ryanpetrello/release-10.0.0
bump version 10.0.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 16:42:49 +00:00
softwarefactory-project-zuul[bot]
849a64f20a Merge pull request #6481 from ryanpetrello/cli-docs-up
promote AWX CLI installation instructions to the global INSTALL.md

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 16:31:04 +00:00
softwarefactory-project-zuul[bot]
3bbd03732b Merge pull request #6461 from jakemcdermott/6433-fix-org-team-rbac-save
Don't show user-only roles for teams

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 16:05:20 +00:00
Ryan Petrello
32627ce51a promote AWX CLI installation instructions to the global INSTALL.md
a few users have had trouble finding these instructions, so let's move
them into the top level installation docs
2020-03-30 11:46:10 -04:00
softwarefactory-project-zuul[bot]
4a8f1d41fa Merge pull request #6422 from john-westcott-iv/tower_job_wait_update
Initial cut at tower_job_wait conversion

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-30 13:27:42 +00:00
Armin Kunaschik
2b3c57755c support for older jinja2 in installer 2020-03-28 02:59:40 +01:00
softwarefactory-project-zuul[bot]
508c9b3102 Merge pull request #6465 from ryanpetrello/ws-bad-group-syntax
prevent ws group subscription if not specified in the valid format

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 22:02:47 +00:00
softwarefactory-project-zuul[bot]
f8be1f4110 Merge pull request #6469 from ryanpetrello/cli-config-wooops
fix a bug that broke `awx config`

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 21:59:16 +00:00
softwarefactory-project-zuul[bot]
d727e69a00 Merge pull request #6459 from chrismeyersfsu/fix-register_queue_race2
fix register_queue race condition

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 21:36:41 +00:00
Ryan Petrello
04dd1352c9 prevent ws group subscription if not specified in the valid format 2020-03-27 17:13:21 -04:00
Ryan Petrello
ea54815e6b fix a bug that broke awx config
see: https://github.com/ansible/tower/issues/4206
2020-03-27 17:07:48 -04:00
softwarefactory-project-zuul[bot]
78db965797 Merge pull request #6467 from dsesami/survey-list-ids
Add IDs to survey list items

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 20:17:45 +00:00
chris meyers
3326979806 fix register_queue race condition
* Avoid race condition with `apply_cluster_membership_policies`
2020-03-27 16:15:10 -04:00
beeankha
230949c43c Fix timeout error 2020-03-27 15:44:23 -04:00
Daniel Sami
a862a00d24 added id to survey list items 2020-03-27 15:27:08 -04:00
beeankha
2e8f9185ab Fix default value errors for cases of None for min/max_interval 2020-03-27 15:05:23 -04:00
beeankha
6d6322ae4d Update integration tests, update tower_wait module 2020-03-27 15:05:23 -04:00
John Westcott IV
914ea54925 Make module prefer interval (if set) over min/max
Fix linting issues for True vs true

Fix up unit test related errors
2020-03-27 15:05:23 -04:00
John Westcott IV
b9b62e3771 Removing assert that job is pending on job launch 2020-03-27 15:05:23 -04:00
John Westcott IV
e03911d378 Deprecate min and max interval in favor of interval 2020-03-27 15:05:23 -04:00
John Westcott IV
61287f6b36 Removing old unneeded output and fixing comments 2020-03-27 15:05:23 -04:00
John Westcott IV
f6bfdef34d Removed old secho comment from Tower-CLI
Fixed job name for tests
2020-03-27 15:05:23 -04:00
John Westcott IV
7494ba7b9c Initial cut at tower_job_wait conversion 2020-03-27 15:05:23 -04:00
softwarefactory-project-zuul[bot]
5f62426684 Merge pull request #6458 from jakemcdermott/6435-fix-notification-toggle-disable
Limit disable-on-load to single notifications

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 17:59:46 +00:00
Ryan Petrello
6914213aa0 bump version 10.0.0 2020-03-27 12:51:18 -04:00
softwarefactory-project-zuul[bot]
83721ff9a8 Merge pull request #6438 from jakemcdermott/6433-fix-org-team-rbac-save-api
Identify user-only object roles

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 16:35:56 +00:00
softwarefactory-project-zuul[bot]
4998c7bf21 Merge pull request #6315 from john-westcott-iv/collections_tools_associations
Collections tools associations

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 16:16:08 +00:00
softwarefactory-project-zuul[bot]
155a1d9a32 Merge pull request #6032 from ryanpetrello/bigint
migrate event table primary keys from integer to bigint

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2020-03-27 16:01:58 +00:00
Jake McDermott
6f582b5688 Don't show user-only roles for teams 2020-03-27 11:18:15 -04:00
Jake McDermott
579648a017 Identify user-only organization roles
Organization participation roles (admin, member) can't be assigned to a
team. Add a field to the object roles so the ui can know not to display
them for team role selection.
2020-03-27 11:12:39 -04:00
softwarefactory-project-zuul[bot]
c4ed9a14ef Merge pull request #6451 from jbradberry/related-webhook-credential-link
Optionally add the webhook_credential link to related

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 14:38:25 +00:00
softwarefactory-project-zuul[bot]
21872e7101 Merge pull request #6444 from mabashian/ui-next-accessibility-low-hanging-fruitz
Adds aria-label to some buttons without text

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 14:38:05 +00:00
Jake McDermott
f2e9a8d1b2 Limit disable-on-load to single notifications 2020-03-27 10:32:11 -04:00
Ryan Petrello
301d6ff616 make the job event bigint migration chunk size configurable 2020-03-27 09:28:10 -04:00
softwarefactory-project-zuul[bot]
d24271849d Merge pull request #6454 from ansible/jakemcdermott-update-issue-template
Add advice for creating bug reports

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-27 00:57:45 +00:00
beeankha
a50b03da17 Remove unnecessary fields in associations file 2020-03-26 20:04:11 -04:00
Jake McDermott
27b5b534a1 Add advice for creating bug reports 2020-03-26 19:49:38 -04:00
softwarefactory-project-zuul[bot]
6bc97158fe Merge pull request #6443 from ryanpetrello/summary-fields-perf-note
clarify some API documentation on summary_fields

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 21:17:48 +00:00
softwarefactory-project-zuul[bot]
9ce2a9240a Merge pull request #6447 from ryanpetrello/runner-1.4.6
update to the latest version of ansible-runner

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 21:03:16 +00:00
Jeff Bradberry
6b3befcb94 Optionally add the webhook_credential link to related
on JTs and WFJTs.
2020-03-26 16:07:14 -04:00
Ryan Petrello
c8044b4755 migrate event table primary keys from integer to bigint
see: https://github.com/ansible/awx/issues/6010
2020-03-26 15:54:38 -04:00
Alex Corey
0eb526919f Fixes HostAdd Form layout issue 2020-03-26 15:23:31 -04:00
softwarefactory-project-zuul[bot]
3045511401 Merge pull request #6441 from ryanpetrello/eye-python
fix busted shell_plus in the development environment

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 19:16:25 +00:00
softwarefactory-project-zuul[bot]
24f334085e Merge pull request #6417 from jakemcdermott/no-blank-adhoc-command-module-names
Don't allow blank adhoc command module names

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 19:14:06 +00:00
Ryan Petrello
90d35f07f3 clarify some documentation on summary_fields 2020-03-26 14:54:28 -04:00
softwarefactory-project-zuul[bot]
e334f33d13 Merge pull request #6388 from chrismeyersfsu/feature-websocket_secret2
set broadcast websockets secret

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 18:50:52 +00:00
Ryan Petrello
464db28be5 update to the latest version of ansible-runner 2020-03-26 14:49:45 -04:00
mabashian
a8f56f78e9 Remove unused touched and error props 2020-03-26 14:11:22 -04:00
mabashian
f7ad3d78eb Fixes issues with organization when saving the project form.
Changes helperTextInvalid prop type to node which matches more closely with what the upstream PF component supports.
2020-03-26 13:57:11 -04:00
Ryan Petrello
61a0d1f77b fix busted shell_plus in the development environment
for some reason (unsure why), django-extensions has begun noticing
IPython importability and treating shell_plus as "start an IPython
notebook" by default

it could be that this is a bug in django-extensions that will be fixed
soon, but for now, this fixes the issue
2020-03-26 13:37:13 -04:00
mabashian
77e99ad355 Adds aria-label to some buttons without text 2020-03-26 12:58:45 -04:00
beeankha
9f4afe6972 Fix misc. linter errors 2020-03-26 12:01:48 -04:00
John Westcott IV
b99a04dd8d Adding associations to generator 2020-03-26 12:01:48 -04:00
John Westcott IV
357e22eb51 Compensating for default of '' for a JSON typed field 2020-03-26 12:01:48 -04:00
softwarefactory-project-zuul[bot]
9dbf75f2a9 Merge pull request #6434 from marshmalien/6432-launch-btn-bug
Disable launch button when there are zero nodes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 15:27:45 +00:00
chris meyers
eab74cac07 autogenerate websocket secret 2020-03-26 10:32:37 -04:00
Marliana Lara
979f549d90 Disable launch button when there are zero nodes 2020-03-26 10:16:33 -04:00
softwarefactory-project-zuul[bot]
ca82f48c18 Merge pull request #6429 from marshmalien/5992-wf-doc-btn
Hookup WF documentation button to visualizer toolbar

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-26 14:00:50 +00:00
Marliana Lara
5a45a62cb3 Hookup WF documentation button to visualizer toolbar 2020-03-25 17:44:34 -04:00
softwarefactory-project-zuul[bot]
090349a49b Merge pull request #6416 from jakemcdermott/6413-remove-unnecessary-project-template-add-option
Remove unnecessary project template add option

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 19:41:23 +00:00
softwarefactory-project-zuul[bot]
c38d13c5ab Merge pull request #6399 from jakemcdermott/6398-fix-confirm-password-reset
Don't delete confirmed password from formik object

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 19:29:05 +00:00
softwarefactory-project-zuul[bot]
f490a940cf Merge pull request #6410 from ryanpetrello/new-social-auth
update social-auth-core to address a GitHub API deprecation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 19:28:59 +00:00
softwarefactory-project-zuul[bot]
42c24419d4 Merge pull request #6409 from AlanCoding/group_children
Rename group-to-group field to align with API

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 17:34:25 +00:00
Jake McDermott
e17c93ecd7 Don't allow blank adhoc command module names 2020-03-25 13:11:30 -04:00
softwarefactory-project-zuul[bot]
67d48a87f8 Merge pull request #6408 from ryanpetrello/rabbitmq-cleanup
remove a bunch of RabbitMQ references

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 17:02:54 +00:00
Ryan Petrello
b755fa6777 update social-auth-core to address a GitHub API deprecation 2020-03-25 12:17:36 -04:00
softwarefactory-project-zuul[bot]
ee4dcd2055 Merge pull request #6403 from jbradberry/awxkit-from-json-connection
Add a connection kwarg to Page.from_json

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 15:33:37 +00:00
softwarefactory-project-zuul[bot]
0f7a4b384b Merge pull request #6386 from jbradberry/awxkit-api-endpoints
Awxkit api endpoints

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 15:33:32 +00:00
softwarefactory-project-zuul[bot]
02415db881 Merge pull request #6407 from marshmalien/5991-wf-launch-btn
Hookup launch button to workflow visualizer

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 15:33:27 +00:00
AlanCoding
703345e9d8 Add alias for group children of groups 2020-03-25 11:33:13 -04:00
AlanCoding
d102b06474 Rename group-to-group field to align with API 2020-03-25 11:33:09 -04:00
Jake McDermott
55c18fa76c Remove unnecessary project template add option 2020-03-25 11:26:30 -04:00
softwarefactory-project-zuul[bot]
d37039a18a Merge pull request #6334 from rooftopcellist/ncat
Add netcat to the dev container

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 12:51:16 +00:00
Christian Adams
6335004c94 Add common debugging tools to the dev container
- nmap-ncat
- sdb
- tcpdump
- strace
- vim
2020-03-25 08:03:32 -04:00
softwarefactory-project-zuul[bot]
177867de5a Merge pull request #6369 from AlanCoding/create_associate
Fix bug with association on creation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-25 00:28:51 +00:00
softwarefactory-project-zuul[bot]
08bd445caf Merge pull request #6404 from ryanpetrello/pyyaml-upgrade
pin a minimum pyyaml version to address (CVE-2017-18342)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 23:48:01 +00:00
softwarefactory-project-zuul[bot]
b5776c8eb3 Merge pull request #6405 from ryanpetrello/upgrade-django
update Django to address CVE-2020-9402

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 22:48:09 +00:00
Ryan Petrello
8f1db173c1 remove a bunch of RabbitMQ references 2020-03-24 18:46:58 -04:00
softwarefactory-project-zuul[bot]
62e93d5c57 Merge pull request #6271 from AlexSCorey/6260-UsersOrgList
Adds User Organization List

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 22:38:56 +00:00
Marliana Lara
abfeb735f0 Add launch button to workflow visualizer 2020-03-24 17:42:22 -04:00
Ryan Petrello
68b0b40e91 update Django to address CVE-2020-9402
we don't use Oracle GIS, so this isn't really applicable, but it'll make
security scanners happy <shrug>

see: https://docs.djangoproject.com/en/3.0/releases/2.2.11/
2020-03-24 16:41:53 -04:00
Alex Corey
910d926ac3 Fixes file structure, adds tests 2020-03-24 16:27:56 -04:00
Alex Corey
c84ab9f1dc Adds User Organization List 2020-03-24 16:25:11 -04:00
Ryan Petrello
65cafa37c7 pin a minimum pyyaml version to address (CVE-2017-18342)
see: https://github.com/ansible/awx/issues/6393
2020-03-24 15:59:31 -04:00
AlanCoding
551fd088f5 Remove test workarounds 2020-03-24 15:42:35 -04:00
AlanCoding
a72e885274 Fix bug with association on creation 2020-03-24 15:34:52 -04:00
softwarefactory-project-zuul[bot]
bd7c048113 Merge pull request #6291 from AlanCoding/node_identifier
Add Workflow Node Identifier

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 19:32:52 +00:00
Jeff Bradberry
91135f638f Add a connection kwarg to Page.from_json
if you don't reuse the connection when doing this, you lose your
authentication.
2020-03-24 15:27:51 -04:00
softwarefactory-project-zuul[bot]
cbc02dd607 Merge pull request #6394 from ryanpetrello/runner-145
update to the latest version of ansible-runner

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 19:09:21 +00:00
softwarefactory-project-zuul[bot]
de09deff66 Merge pull request #6348 from AlexSCorey/5895-SurveyList
Adds word wrap functionality

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 19:07:05 +00:00
softwarefactory-project-zuul[bot]
5272d088ed Merge pull request #6390 from marshmalien/fix-select-behavior
Fix bugs related to Job Template labels and tags

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 17:53:01 +00:00
softwarefactory-project-zuul[bot]
22a593f30f Merge pull request #6389 from jlmitch5/fixEmailOptionNotif
update email option notification to select

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-24 16:54:12 +00:00
Jake McDermott
b56812018a Don't delete confirmed password from formik object
If the save fails, the form will attempt to reload the
deleted value.
2020-03-24 12:21:07 -04:00
Alex Corey
ab755134b3 Adds new DataListCell components to all necessary lists 2020-03-24 10:30:41 -04:00
Alex Corey
ebb0f31b0b Fixes word wrap issue 2020-03-24 10:30:41 -04:00
Ryan Petrello
51ef57188c update to the latest version of ansible-runner 2020-03-24 10:01:17 -04:00
AlanCoding
653850fa6d Remove duplicated index 2020-03-23 22:54:04 -04:00
AlanCoding
8ba4388014 Rewrite tests to use the new modules 2020-03-23 22:47:30 -04:00
AlanCoding
f3e8623a21 Move workflow test target 2020-03-23 22:34:11 -04:00
AlanCoding
077461a3ef Docs touchups 2020-03-23 22:00:02 -04:00
AlanCoding
795c989a49 fix bug processing survey spec 2020-03-23 22:00:02 -04:00
AlanCoding
5e595caf5e Add workflow node identifier
Generate new modules WFJT and WFJT node
Touch up generated syntax, test new modules

Add utility method in awxkit

Fix some issues with non-name identifier in
  AWX collection module_utils

Update workflow docs for workflow node identifier

Test and fix WFJT modules survey_spec
Plug in survey spec for the new module
Handle survey spec idempotency and test

add associations for node connections
Handle node credential prompts as well

Add indexes for new identifier field

Test with unicode dragon in name
2020-03-23 22:00:00 -04:00
softwarefactory-project-zuul[bot]
d941f11ccd Merge pull request #5582 from jakemcdermott/fix-5265
Don't refresh settings on websocket event

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 23:10:21 +00:00
softwarefactory-project-zuul[bot]
c4e50cbf7d Merge pull request #6381 from jakemcdermott/6380-fix-host-event-errors
Fix host event type and reference errors

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 22:34:36 +00:00
Jake McDermott
6f3527ed15 Don't refresh settings on websocket event 2020-03-23 18:29:01 -04:00
Jake McDermott
fe2ebeb872 Fix host event type and reference errors 2020-03-23 17:46:42 -04:00
Marliana Lara
ad7498c3fc Fix bugs related to Job Template labels and tags
* Use default PF select toggle behavior
* Fix label submit when no inventory provided
2020-03-23 17:06:38 -04:00
John Mitchell
cb7257f9e6 update email option notification to select
- delete radio_group option from form generator
2020-03-23 17:04:07 -04:00
Jeff Bradberry
e3ea4e2398 Register the resource copy endpoints as awxkit page types 2020-03-23 15:19:48 -04:00
Jeff Bradberry
e4e2d48f53 Register some missing related endpoints in awxkit
- the newer varieties of notification templates
- organization workflow job templates
- credential owner users and owner teams

this allows the endpoints to get wrapped in appropriate Page types,
not just the Base page type.
2020-03-23 15:18:47 -04:00
Rebeccah
5bfe89be6e removed the to_representation and replaced with get_summary_fields per suggestion in PR comments 2020-03-23 14:57:07 -04:00
Rebeccah
47661fad51 added in summary fields into logging which will solve several issues related to needing more data in logging outputs 2020-03-23 14:57:07 -04:00
softwarefactory-project-zuul[bot]
4b497b8cdc Merge pull request #6364 from wenottingham/dont-make-a-tree-that-never-ends-and-just-goes-on-and-on
Preserve symlinks when copying a tree.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 18:57:05 +00:00
softwarefactory-project-zuul[bot]
31fabad3e5 Merge pull request #6370 from AlanCoding/convert_tower_role
Initial conversion of tower_role

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 18:34:32 +00:00
Jeff Bradberry
1e48d773ae Add job_templates to the set of related endpoints for organizations 2020-03-23 14:33:40 -04:00
softwarefactory-project-zuul[bot]
4529429e99 Merge pull request #6368 from marshmalien/fix-jt-bugs
Fix job template form bugs r/t saving without an inventory

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 14:08:32 +00:00
softwarefactory-project-zuul[bot]
ec4a471e7a Merge pull request #6377 from chrismeyersfsu/fix-register_queue_race
serialize register_queue

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 12:32:34 +00:00
softwarefactory-project-zuul[bot]
77915544d2 Merge pull request #6378 from chrismeyersfsu/fix-launch_awx_cluster
fixup dev cluster bringup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-23 12:32:29 +00:00
chris meyers
5ba90b629e fixup dev cluster bringup
* change from bootstrap script to launch_awx.sh script
2020-03-23 07:33:35 -04:00
chris meyers
e9021bd173 serialize register_queue
* also remove unneeded query
2020-03-23 07:21:17 -04:00
AlanCoding
49356236ac Add coverage from issue resolved with tower_role conversion 2020-03-22 13:43:39 -04:00
softwarefactory-project-zuul[bot]
c9015fc0c8 Merge pull request #6361 from john-westcott-iv/tower_label_update
Tower label update

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 11:15:06 +00:00
AlanCoding
4ea1101477 update test assertion 2020-03-20 23:49:15 -04:00
AlanCoding
27948aa4e1 Convert tower_role to no longer use tower-cli 2020-03-20 23:28:48 -04:00
softwarefactory-project-zuul[bot]
5263d5aced Merge pull request #6358 from AlanCoding/fix_settings
Fix regression in tower_settings module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 01:42:19 +00:00
softwarefactory-project-zuul[bot]
8832f667e4 Merge pull request #6336 from AlanCoding/local_collection_errors
Fix test errors running locally with Ansible devel

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 00:51:04 +00:00
softwarefactory-project-zuul[bot]
f4e56b219d Merge pull request #6326 from AlanCoding/docs_patches
Copy edit of backward incompatible changes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-21 00:42:51 +00:00
AlanCoding
abdcdbca76 Add label tests and flake8 fixes 2020-03-20 20:09:08 -04:00
AlanCoding
362016c91b Fix test errors running locally with Ansible devel 2020-03-20 19:52:13 -04:00
AlanCoding
f1634f092d Copy edit of backward incompatible changes 2020-03-20 19:51:24 -04:00
John Westcott IV
8cd4e9b488 Adding state back in 2020-03-20 19:14:00 -04:00
softwarefactory-project-zuul[bot]
1fce77054a Merge pull request #6329 from marshmalien/6143-inv-group-associate-hosts
Add associate modal to nested inventory host list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 21:02:57 +00:00
Marliana Lara
53c8c80f7f Fix JT form edit save bug when inventory has no value 2020-03-20 16:37:33 -04:00
softwarefactory-project-zuul[bot]
3bf7d41bf3 Merge pull request #6286 from jlmitch5/hostFacts
add facts views to host and inv host detail views

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 20:08:52 +00:00
softwarefactory-project-zuul[bot]
34259e24c0 Merge pull request #6350 from jlmitch5/formErrorCredForm
correct form submission errors for credential form

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 19:52:33 +00:00
John Westcott IV
1daeec356f Initial conversion of tower_label 2020-03-20 15:01:41 -04:00
softwarefactory-project-zuul[bot]
5573e1c7ce Merge pull request #6356 from keithjgrant/5899-survey-add-form
Survey add/edit forms

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 18:56:47 +00:00
softwarefactory-project-zuul[bot]
1cba98e4a7 Merge pull request #6363 from dsesami/translation-fix
Fix typo in Japanese string

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 18:20:50 +00:00
Keith Grant
56d31fec77 handle any errors thrown in survey handleSubmit 2020-03-20 10:55:07 -07:00
Keith Grant
564012b2c8 fix errors when adding first question to new survey 2020-03-20 10:55:07 -07:00
Keith Grant
cfe0607b6a add survey form tests 2020-03-20 10:55:07 -07:00
Keith Grant
7f24d0c0c2 add SurveyQuestionForm tests 2020-03-20 10:55:07 -07:00
Keith Grant
3f4e7465a7 move template survey files to Survey subdirectory 2020-03-20 10:55:07 -07:00
Keith Grant
9c32cb30d4 add survey question editing, breadcrumbs 2020-03-20 10:55:07 -07:00
Keith Grant
782d465c78 wire in SurveyQuestionAdd form to post to API 2020-03-20 10:55:07 -07:00
Keith Grant
1412bf6232 add survey form 2020-03-20 10:55:07 -07:00
Alex Corey
e92acce4eb Adds toolbar 2020-03-20 10:55:07 -07:00
Bill Nottingham
ac68e8c4fe Preserve symlinks when copying a tree.
This avoids creating a recursive symlink tree.
2020-03-20 13:41:16 -04:00
Daniel Sami
97a4bb39b6 fix typo in Japanese string 2020-03-20 13:24:28 -04:00
Marliana Lara
9e00337bc1 Rename useSelected hook and update error modal condition 2020-03-20 10:54:59 -04:00
Marliana Lara
72672d6bbe Move useSelect to shared util directory 2020-03-20 10:54:59 -04:00
Marliana Lara
51f52f6332 Translate aria labels 2020-03-20 10:54:58 -04:00
Marliana Lara
11b2b17d08 Add associate modal to nested inventory host list 2020-03-20 10:54:55 -04:00
softwarefactory-project-zuul[bot]
e17ff3e03a Merge pull request #6335 from AlexSCorey/6316-TemplateUsesFunction
Moves template.jsx over to a functional component.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 14:54:49 +00:00
softwarefactory-project-zuul[bot]
b998d93bfb Merge pull request #6360 from chrismeyersfsu/log_notification_failures
log when notifications fail to send

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 14:54:30 +00:00
softwarefactory-project-zuul[bot]
b8ec94a0ae Merge pull request #6345 from chrismeyersfsu/redis-cleanup2
fix redis requirements mess

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 13:43:54 +00:00
Alex Corey
a02b332b10 fixes linting and spelling errors 2020-03-20 09:16:10 -04:00
Alex Corey
56919c5d32 Moves template.jsx over to a functional component. 2020-03-20 09:16:10 -04:00
chris meyers
47f5c17b56 log when notifications fail to send
* If a job does not finish within the 5-second timeout, let the user know
that we failed to even try to send the notification.
2020-03-20 09:11:01 -04:00
softwarefactory-project-zuul[bot]
0fb800f5d0 Merge pull request #6344 from chrismeyersfsu/redis-cleanup1
Redis cleanup1

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-20 13:07:40 +00:00
AlanCoding
d6e94f9c6f Fix regression in tower_settings module 2020-03-19 23:03:58 -04:00
softwarefactory-project-zuul[bot]
d5bdfa908a Merge pull request #6354 from chrismeyersfsu/redis-cleanup3
remove BROKER_URL special password handling

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 20:45:38 +00:00
softwarefactory-project-zuul[bot]
0a5acb6520 Merge pull request #6166 from fosterseth/feature-cleanup_jobs-perf
Improve performance of cleanup_jobs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 20:09:39 +00:00
softwarefactory-project-zuul[bot]
debc339f75 Merge pull request #6295 from beeankha/module_utils_updates
Update module_utils Functionality

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 19:49:33 +00:00
chris meyers
06f065766f remove BROKER_URL special password handling
* BROKER_URL now describes how to connect to redis. We use a unix socket
to connect to redis, so we no longer need to support URIs
that contain special characters in the password.
2020-03-19 15:12:45 -04:00
John Mitchell
16e672dd38 correct form submission errors for credential form 2020-03-19 15:10:10 -04:00
softwarefactory-project-zuul[bot]
3d7420959e Merge pull request #6347 from squidboylan/fix_collection_test
Collection: Fix some tests that broke during the random name update

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 19:01:03 +00:00
John Mitchell
4bec46a910 add facts views to host and inv host detail views and update enable fact storage checkbox option and detail language 2020-03-19 15:01:02 -04:00
chris meyers
0e70543d54 licenses for new python deps 2020-03-19 14:44:29 -04:00
Seth Foster
88fb30e0da Delete jobs without loading objects first
This commit is intended to speed up the cleanup_jobs command in awx. The old
method takes 7+ hours to delete 1 million old jobs; the new method takes
around 6 minutes.

Leverages a sub-classed Collector, called AWXCollector, that does not
load in objects before deleting them. Instead querysets, which are
lazily evaluated, are used in places where Collector normally keeps a
list of objects.

Finally, a couple of tests ensure parity between the old Collector and
AWXCollector. That is, any object that is updated/removed from the
database using Collector should have identical operations using
AWXCollector.

tower issue 1103
2020-03-19 14:14:02 -04:00
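The optimization described above — deleting via lazily evaluated querysets instead of materializing every object — can be illustrated with a minimal stand-in (no Django here; `FakeQuerySet` and its counters are hypothetical, purely to show why the bulk path is cheaper):

```python
class FakeQuerySet:
    """Stand-in for a Django queryset: evaluation is deferred until needed."""
    def __init__(self, rows):
        self._rows = rows
        self.loads = 0  # how many objects were materialized in memory

    def __iter__(self):
        # Old-style Collector path: every row becomes an in-memory object.
        for row in self._rows:
            self.loads += 1
            yield row

    def delete(self):
        # AWXCollector-style path: one bulk operation, nothing materialized.
        deleted = len(self._rows)
        self._rows = []
        return deleted

# Old approach: load each object, then delete one by one.
qs = FakeQuerySet(list(range(1000)))
old_deleted = sum(1 for _ in qs)      # 1000 objects pulled into memory

# New approach: hand the queryset itself to delete().
qs2 = FakeQuerySet(list(range(1000)))
new_deleted = qs2.delete()            # bulk delete, zero objects loaded
print(old_deleted, qs.loads, new_deleted, qs2.loads)
```

The real AWXCollector does this against Django's deletion machinery; the sketch only captures the load-vs-bulk distinction.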
AlanCoding
558814ef3b tower_group relationships
rollback some module_utils changes
add runtime error for 404 type things
2020-03-19 13:53:08 -04:00
beeankha
ace5a0a2b3 Update module utils, part of collections conversion work 2020-03-19 13:53:08 -04:00
softwarefactory-project-zuul[bot]
8a917a5b70 Merge pull request #6343 from AlanCoding/fix_sanity
Fix sanity error

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 17:28:12 +00:00
Caleb Boylan
1bd74a96d6 Collection: Fix some tests that broke during the random name update 2020-03-19 09:40:48 -07:00
softwarefactory-project-zuul[bot]
74ebb0ae59 Merge pull request #6290 from ryanpetrello/notification-host-summary-race
change when we send job notifications to avoid a race condition

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 15:36:24 +00:00
chris meyers
fd56b7c590 pin pexpect to 4.7.0 2020-03-19 11:25:43 -04:00
chris meyers
7f02e64a0d fix redis requirements mess
* At the end of the redis PR we rebased devel a bunch, and requirements
snuck into requirements.txt this way. This PR removes those requirements
(i.e. kombu) and bumps other requirements.
2020-03-19 10:19:07 -04:00
Ryan Petrello
d40a5dec8f change when we send job notifications to avoid a race condition
success/failure notifications for *playbooks* include summary data about
the hosts involved, based on the contents of the playbook_on_stats event

the current implementation suffers from a number of race conditions that
sometimes can cause that data to be missing or incomplete; this change
makes it so that for *playbooks* we build (and send) the notification in
response to the playbook_on_stats event, not the EOF event
2020-03-19 10:01:52 -04:00
chris meyers
5e481341bc flake8 2020-03-19 10:01:20 -04:00
chris meyers
0a1070834d only update the ip address field on the instance
* The heartbeat of an instance is determined to be the last modified
time of the Instance object. Therefore, we want to be careful to only
update very specific fields of the Instance object.
2020-03-19 10:01:20 -04:00
chris meyers
c7de3b0528 fix spelling 2020-03-19 10:01:20 -04:00
softwarefactory-project-zuul[bot]
a725778b17 Merge pull request #6327 from ryanpetrello/py2-minus-minus-cli
remove python2 support from awxkit

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 13:58:02 +00:00
softwarefactory-project-zuul[bot]
3b520a8ee8 Merge pull request #6341 from egmar/fix-pgsql-connect-options
Jobs not running with external PostgreSQL database after PR #6034

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-19 13:55:54 +00:00
Christian Adams
9a38971d47 Update ELK Stack container files 2020-03-19 09:35:08 -04:00
Ryan Petrello
06b3e54fb1 remove python2 support from awxkit 2020-03-19 09:02:39 -04:00
chris meyers
7f2e1d46bc replace janky unique channel name w/ uuid
* postgres notify/listen channel names have size limitations as well as
character limitations. Respect those limitations while at the same time
generating a unique channel name.
2020-03-19 08:59:15 -04:00
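Postgres `LISTEN`/`NOTIFY` channel names follow ordinary identifier rules (letters, digits, underscores; truncated at 63 bytes), so generating a safe, unique name might look roughly like this (a sketch; `safe_channel_name` is a hypothetical helper, not the actual AWX function):

```python
import re
import uuid

def safe_channel_name(hostname: str) -> str:
    # Replace anything that is not alphanumeric with an underscore ...
    base = re.sub(r'[^a-zA-Z0-9]', '_', hostname)
    # ... append a uuid hex suffix for uniqueness, and stay within
    # Postgres's 63-byte identifier limit.
    name = f"{base}_{uuid.uuid4().hex}"
    return name[:63]

name = safe_channel_name("awx-web-1.example.com")
print(name)
```

Each call yields a distinct name, so two processes on the same host never collide on a channel.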
chris meyers
12158bdcba remove dead code 2020-03-19 08:57:05 -04:00
Egor Margineanu
f858eda6b1 Made OPTIONS optional 2020-03-19 13:43:06 +01:00
AlanCoding
c5297b0b86 Fix sanity error 2020-03-19 08:42:07 -04:00
Egor Margineanu
e0633c9122 Merge branch 'fix-pgsql-connect-options' of github.com:egmar/awx into fix-pgsql-connect-options 2020-03-19 13:29:39 +01:00
Egor Margineanu
3a208a0be2 Added support for PG port and options. related #6340 2020-03-19 13:29:06 +01:00
Egor Margineanu
cfdfd96793 Added support for PG port and options 2020-03-19 13:26:59 +01:00
Christian Adams
c4e697879d Improve docs for using the logstash container 2020-03-18 18:32:45 -04:00
Ryan Petrello
db7f0f9421 Merge pull request #6034 from chrismeyersfsu/pg2_no_pubsub
Replace rabbitmq with redis
2020-03-18 17:19:51 -04:00
Ryan Petrello
f1ee963bd0 fix up rebased migrations 2020-03-18 16:19:04 -04:00
Ryan Petrello
7c3cbe6e58 add a license for redis-cli 2020-03-18 16:10:20 -04:00
chris meyers
87de0cf0b3 flake8, pytest, license fixes 2020-03-18 16:10:20 -04:00
chris meyers
18f5dd6e04 add websocket backplane documentation 2020-03-18 16:10:20 -04:00
chris meyers
89163f2915 remove redis broker url test
* We use sockets everywhere, so special characters in passwords are no
longer an issue.
2020-03-18 16:10:20 -04:00
chris meyers
59c9de2761 awxkit python2.7 compatible print
* awxkit still supports python2.7 so do not use fancy f"" yet; instead,
use .format()
2020-03-18 16:10:20 -04:00
chris meyers
b58c71bb74 remove broadcast websockets view 2020-03-18 16:10:20 -04:00
Ryan Petrello
1caa2e0287 work around a limitation in postgres notify to properly support copying
postgres has a limitation on its notify message size (8k), and the
messages we generate for deep copying functionality easily go over this
limit; instead of passing a giant nested data structure across the
message bus, this change makes it so that we temporarily store the JSON
structure in memcached, and look it up from *within* the task

see: https://github.com/ansible/tower/issues/4162
2020-03-18 16:10:20 -04:00
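The pattern described — passing only a small key across the size-limited `NOTIFY` bus and fetching the real data from a cache inside the task — can be sketched like this (a plain dict stands in for memcached; `submit_copy_task`/`run_copy_task` are hypothetical names, not the AWX API):

```python
import json
import uuid

cache = {}           # stand-in for memcached
NOTIFY_LIMIT = 8000  # postgres NOTIFY payloads are capped at ~8 kB

def submit_copy_task(big_structure):
    key = f"copy_task_{uuid.uuid4().hex}"
    cache[key] = json.dumps(big_structure)   # park the data in the cache
    assert len(key) < NOTIFY_LIMIT           # only the small key crosses the bus
    return key                               # this is what NOTIFY would carry

def run_copy_task(key):
    # Inside the task: look the structure up by key and remove it.
    return json.loads(cache.pop(key))

payload = {"objects": list(range(5000))}     # far bigger than 8 kB as JSON
key = submit_copy_task(payload)
assert run_copy_task(key) == payload
```

The message bus only ever sees the fixed-size key, so the payload can grow arbitrarily large without hitting the notify limit.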
chris meyers
770b457430 redis socket support 2020-03-18 16:10:19 -04:00
chris meyers
d58df0f34a fix sliding window calculation 2020-03-18 16:10:19 -04:00
chris meyers
3f5e2a3cd3 try to make openshift build happy 2020-03-18 16:10:19 -04:00
chris meyers
2b59af3808 safely operate in async or sync context 2020-03-18 16:10:19 -04:00
chris meyers
9e5fe7f5c6 translate Instance hostname to safe analytics name
* More robust translation of Instance hostname to an analytics-safe name by
replacing all non-alphanumeric characters with _
2020-03-18 16:10:19 -04:00
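The translation described above amounts to a one-line substitution; a minimal sketch (`analytics_safe_name` is a hypothetical name for illustration, not the actual AWX helper):

```python
import re

def analytics_safe_name(hostname: str) -> str:
    # Replace every non-alphanumeric character with an underscore.
    return re.sub(r'[^a-zA-Z0-9]', '_', hostname)

print(analytics_safe_name("node-1.tower.example.com"))
# -> node_1_tower_example_com
```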
chris meyers
093d204d19 fix flake8 2020-03-18 16:10:19 -04:00
chris meyers
e25bd931a1 change dispatcher test to make required queue
* No fallback-default queue anymore. Queue must be explicitly provided.
2020-03-18 16:10:19 -04:00
chris meyers
8350bb3371 robust broadcast websocket error handling 2020-03-18 16:10:18 -04:00
chris meyers
d6594ab602 add broadcast websocket metrics
* Gather broadcast websocket metrics and push them into redis at a
configurable interval.
* Pop metrics from redis in web view layer to display via the api on
demand
2020-03-18 16:10:18 -04:00
chris meyers
b6b9802f9e increase per-channel capacity
* 100 is the default capacity for a channel. If the client doesn't read
the socket fast enough, websocket messages can and will be lost. This
increases the default to 10,000
2020-03-18 16:10:18 -04:00
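With Django Channels, per-channel capacity is set in the channel layer config; raising it from the default of 100 to 10,000 looks roughly like this in `settings.py` (a sketch assuming the `channels_redis` backend; the exact AWX settings are not shown here):

```python
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
            # Default is 100; slow readers drop messages once a channel
            # buffer fills, so raise the ceiling substantially.
            "capacity": 10000,
        },
    },
}
```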
chris meyers
0da94ada2b add missing service name to dev env
* Dev env was bringing the wsbroadcast service up but not under the
tower-process dependency. This is cleaner.
2020-03-18 16:10:18 -04:00
chris meyers
3b9e67ed1b remove channel group model
* Websocket user session <-> group subscription membership now resides
in Redis rather than the database.
2020-03-18 16:10:18 -04:00
chris meyers
14320bc8e6 handle websocket unsubscribe
* Do not return from blocking unsubscribe until _after_ putting the
received unsubscribe message on the queue, so that it can be read by the
thread of execution that was unblocked.
2020-03-18 16:10:18 -04:00
chris meyers
3c5c9c6fde move broadcast websocket out into its own process 2020-03-18 16:10:18 -04:00
chris meyers
f5193e5ea5 resolve rebase errors 2020-03-18 16:10:17 -04:00
chris meyers
03b73027e8 websockets aware of Instance changes
* New tower nodes that are (de)registered in the Instance table are seen
by the websocket layer and connected to or disconnected from by the
websocket broadcast backplane using a polling mechanism.
* This is especially useful for openshift and kubernetes. This will be
useful for standalone Tower in the future when the restarting of Tower
services is not required.
2020-03-18 16:10:17 -04:00
chris meyers
c06b6306ab remove health info
* Sending health about websockets over websockets is not a great idea.
* I tried sending health data via prometheus and encountered problems
that will need PR's to prometheus_client library to solve. Circle back
to this later.
2020-03-18 16:10:17 -04:00
Shane McDonald
45ce6d794e Initial migration of rabbitmq -> redis for k8s installs 2020-03-18 16:10:17 -04:00
chris meyers
e94bb44082 replace rabbitmq with redis
* local awx docker-compose and image build only.
2020-03-18 16:10:17 -04:00
chris meyers
be58906aed remove kombu 2020-03-18 16:10:17 -04:00
chris meyers
403e9bbfb5 add websocket health information 2020-03-18 16:10:16 -04:00
chris meyers
ea29f4b91f account for isolated job status
* We cannot query the dispatcher running on isolated nodes to see if
the playbook is still running, because that is the nature of isolated
nodes: they don't run the dispatcher, nor do they run the message broker.
Therefore, we should query the control node that is arbitrating the
isolated work. If the control node process in the dispatcher is dead,
consider the iso job dead.
2020-03-18 16:10:16 -04:00
chris meyers
3f2d757f4e update awxkit to use new unsubscribe event
* Instead of waiting an arbitrary number of seconds, we can now wait the
exact amount of time needed to KNOW that we are unsubscribed. This
changeset takes advantage of the new subscribe reply semantic.
2020-03-18 16:10:16 -04:00
chris meyers
feac93fd24 add websocket group unsubscribe reply
* This change adds more than just an unsubscribe reply.
* Websockets can request to join/leave groups. They do so using a single
idempotent request. This change replies to group requests over the
websockets with the diff of the group subscription, i.e. what groups the
user currently is in, what groups were left, and what groups were
joined.
2020-03-18 16:10:16 -04:00
chris meyers
088373963b satisfy generic Role code
* The user in a channels session is a lazy user class. This does not conform
to what the generic Role ancestry code expects. The Role ancestry code
expects a User object. This change converts the lazy object into a
proper User object before calling the permission code path.
2020-03-18 16:10:16 -04:00
chris meyers
5818dcc980 prefer simple async -> sync
* asgiref async_to_sync was causing a Redis connection _for each_ call
to emit_channel_notification, i.e. every event that the callback receiver
processes. This is a "known" issue
https://github.com/django/channels_redis/pull/130#issuecomment-424274470
and the advice is to slow down the rate at which you call
async_to_sync. That is not an option for us. Instead, we put the async
group_send call onto the event loop for the current thread and wait for
it to be processed immediately.

The known issue has to do with the event loop + socket relationship. Each
connection to redis is achieved via a socket. That connection can only be
waited on by the event loop that corresponds to the calling thread.
async_to_sync creates a _new thread_ for each invocation. Thus, a new
connection to redis is required. Hence the excess redis connections that
can be observed via netstat | grep redis | wc -l.
2020-03-18 16:10:16 -04:00
chris meyers
dc6c353ecd remove support for multi-reader dispatch queue
* Under the new postgres-backed notify/listen message queue, this never
actually worked. Without using the database to store state, we cannot
provide an at-most-once delivery mechanism with multiple readers.
* With this change, work is done ONLY on the node that requested the
work to be done. Under rabbitmq, the node that was first to get the
message off the queue would do the work; presumably the least busy node.
2020-03-18 16:10:16 -04:00
chris meyers
50b56aa8cb autobahn 20.1.2 released an hour ago
* 20.1.1 no longer available on pypi
2020-03-18 16:10:15 -04:00
chris meyers
3fec69799c fix websocket job subscription access control 2020-03-18 16:10:15 -04:00
chris meyers
2a2c34f567 combine all the broker replacement pieces
* local redis for event processing
* postgres for message broker
* redis for websockets
2020-03-18 16:10:15 -04:00
chris meyers
558e92806b POC postgres broker 2020-03-18 16:10:15 -04:00
chris meyers
355fb125cb redis events 2020-03-18 16:10:15 -04:00
chris meyers
c8eeacacca POC channels 2 2020-03-18 16:10:12 -04:00
softwarefactory-project-zuul[bot]
d0a3c5a42b Merge pull request #6323 from AlanCoding/rm_verify_ssl_test
Replace verify_ssl test that did not work right

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 19:33:24 +00:00
softwarefactory-project-zuul[bot]
64139f960f Merge pull request #6331 from marshmalien/fix-project-schedule-breadcrumb
Add nested project schedule detail breadcrumb

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 19:33:19 +00:00
softwarefactory-project-zuul[bot]
eda494be63 Merge pull request #6330 from rooftopcellist/fix_flakey_workflow_functest
Fix flaky workflow test & set junit family

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2020-03-18 19:27:26 +00:00
Christian Adams
4a0c371014 Fix flaky workflow test & set junit family 2020-03-18 14:02:33 -04:00
softwarefactory-project-zuul[bot]
6b43da35e1 Merge pull request #5745 from wenottingham/no-license-and-registration-please
Clean up a few more cases where we checked the license for features.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 17:59:16 +00:00
softwarefactory-project-zuul[bot]
afa3b500d3 Merge pull request #6273 from AlanCoding/failure_verbosity
Print module standard out in test failure scenarios

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 17:50:16 +00:00
softwarefactory-project-zuul[bot]
c3efb13020 Merge pull request #6325 from AlanCoding/autohack
Automatically hack sys.path to make running tests easier

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 17:46:59 +00:00
Marliana Lara
eb28800082 Fix nested project schedule breadcrumb 2020-03-18 13:17:42 -04:00
softwarefactory-project-zuul[bot]
3219b9b4ac Merge pull request #6318 from AlexSCorey/6100-ConvertWFJTandJTtoHooks
Moves JT Form to using react hooks and custom hooks

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 15:58:13 +00:00
softwarefactory-project-zuul[bot]
e9a48cceba Merge pull request #6319 from squidboylan/collection_test_refactor
Collection test refactor

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 14:53:04 +00:00
softwarefactory-project-zuul[bot]
9a7fa1f3a6 Merge pull request #6313 from mabashian/jest-24-25-upgrade
Upgrade jest and babel-jest to latest (v25)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 14:33:44 +00:00
Caleb Boylan
a03d74d828 Collection: Use random names when we create objects in our tests 2020-03-18 07:02:14 -07:00
mabashian
2274b4b4e4 Upgrade jest and babel-jest to latest (v25) 2020-03-18 09:44:25 -04:00
AlanCoding
c054d7c3d7 Automatically hack sys.path to make running tests easier 2020-03-18 09:40:11 -04:00
softwarefactory-project-zuul[bot]
26d5d7afdc Merge pull request #6304 from AlanCoding/workflow_role
Add workflow to tower_role module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 03:13:37 +00:00
softwarefactory-project-zuul[bot]
6b51b41897 Merge pull request #6322 from AlanCoding/user_no_log
Mark user password as no_log to silence warning

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 01:58:27 +00:00
AlanCoding
ca3cf244fd Replace verify_ssl test that did not work right 2020-03-17 21:43:30 -04:00
softwarefactory-project-zuul[bot]
88d7b24f55 Merge pull request #6311 from jlmitch5/fixDupID
change duplicate IDs where possible

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-18 01:03:49 +00:00
AlanCoding
ecdb353f6f Mark user password as no_log to silence warning 2020-03-17 19:49:27 -04:00
AlanCoding
d9932eaf6a Add integration test 2020-03-17 19:37:30 -04:00
softwarefactory-project-zuul[bot]
cbc52fa19f Merge pull request #6278 from AlanCoding/wfjt_tests
Add more tests for recent WFJT module issues

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 23:02:14 +00:00
softwarefactory-project-zuul[bot]
cc77b31d4e Merge pull request #6314 from mabashian/6293-toggle-error
Adds error div to toggle fields built using form generator

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 21:34:18 +00:00
Bill Nottingham
b875c03f4a Clean up a few more cases where we checked the license for features. 2020-03-17 17:19:33 -04:00
Alex Corey
e87f804c92 Moves JT Form to using react hooks and custom hooks 2020-03-17 16:40:09 -04:00
softwarefactory-project-zuul[bot]
f86cbf33aa Merge pull request #6307 from mabashian/eslint-5-6-upgrade
Bumps eslint from 5.6 to 6.8

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 20:24:13 +00:00
mabashian
6db6fe90fd Adds error div to toggle fields built using form generator so that errors can be shown underneath the toggle 2020-03-17 15:27:16 -04:00
softwarefactory-project-zuul[bot]
bcbe9691e5 Merge pull request #6312 from beeankha/collections_toolbox_sanity_fix
Fix Shebang Error in Collections Tools

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 19:21:54 +00:00
John Mitchell
c850148ee3 change duplicate IDs where possible 2020-03-17 14:08:13 -04:00
softwarefactory-project-zuul[bot]
b260a88810 Merge pull request #6308 from wenottingham/branch-for-your-branch
Allow scm_branch in notifications.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 17:51:10 +00:00
mabashian
a0937a8b7c Bumps eslint from 5.6 to 6.8 2020-03-17 13:44:33 -04:00
beeankha
c4c0cace88 Fix ansible shebang error 2020-03-17 12:55:32 -04:00
softwarefactory-project-zuul[bot]
a55bcafa3a Merge pull request #6310 from jakemcdermott/update-lockfile
Auto-update dependencies in lock file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-17 16:22:48 +00:00
Bill Nottingham
d0c510563f Allow scm_branch in notifications. 2020-03-17 12:09:35 -04:00
Jake McDermott
d23fb17cd9 Auto-update dependencies in lock file 2020-03-17 11:33:55 -04:00
AlanCoding
8668f2ad46 Add workflow to tower_role 2020-03-16 22:36:27 -04:00
softwarefactory-project-zuul[bot]
e210ee4077 Merge pull request #6301 from gamuniz/catch_analytics_failure
rework the gather() to always delete the leftover directories

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 20:45:36 +00:00
softwarefactory-project-zuul[bot]
47ff56c411 Merge pull request #6297 from squidboylan/collection_tests
Collection tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 20:07:13 +00:00
softwarefactory-project-zuul[bot]
1e780aad38 Merge pull request #5726 from AlanCoding/jt_org_2020
Add read-only organization field to job templates

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 20:03:47 +00:00
Gabe Muniz
80234c5600 rework the tar to always delete the leftover directories 2020-03-16 19:54:15 +00:00
softwarefactory-project-zuul[bot]
c8510f7d75 Merge pull request #6256 from beeankha/collections_toolbox
Module Generation Tools for the AWX Collection

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:55:52 +00:00
softwarefactory-project-zuul[bot]
6431050b36 Merge pull request #6296 from ryanpetrello/fix-iso-node-bug
fix a bug in isolated event handling

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:45:05 +00:00
softwarefactory-project-zuul[bot]
5c360aeff3 Merge pull request #6287 from squidboylan/add_collection_test
Collection: add a test for multiple credentials on a jt

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:38:52 +00:00
softwarefactory-project-zuul[bot]
44e043d75f Merge pull request #6294 from mabashian/4070-access-5
Adds aria-labels to links without discernible inner text

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-16 18:27:01 +00:00
Caleb Boylan
ef223b0afb Collection: Add test for workflow_template extra_vars 2020-03-16 11:12:57 -07:00
Caleb Boylan
e9e8283f16 Collection: Add integration test for smart inventory 2020-03-16 11:08:01 -07:00
Ryan Petrello
b73e8d8a56 fix a bug in isolated event handling
see: https://github.com/ansible/awx/issues/6280
2020-03-16 13:15:10 -04:00
beeankha
6db6c6c5ba Revert module util changes, reorder params in group module 2020-03-16 11:18:08 -04:00
AlanCoding
2b5ff9a6f9 Patches to generator to better align with modules 2020-03-16 11:10:07 -04:00
beeankha
97c169780d Delete config file 2020-03-16 11:10:07 -04:00
beeankha
88c46b4573 Add updated tower_api module util file, update generator and template 2020-03-16 11:10:07 -04:00
beeankha
53d27c933e Fix linter issues 2020-03-16 11:10:07 -04:00
beeankha
c340fff643 Add generator playbook for the AWX Collection modules, along with other module generation tools 2020-03-16 11:10:07 -04:00
mabashian
61600a8252 Adds aria-labels to links without discernible inner text 2020-03-16 10:39:21 -04:00
AlanCoding
521cda878e Add named URL docs for uniqueness functionality 2020-03-16 10:04:18 -04:00
softwarefactory-project-zuul[bot]
9ecd6ad0fb Merge pull request #6245 from mabashian/4070-access-3
Adds lang attr to html tag to specify default language for the application

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 22:36:26 +00:00
softwarefactory-project-zuul[bot]
349af22d0f Merge pull request #6261 from jakemcdermott/5386-dont-block-blockquotes
Don't block the blockquotes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 22:12:28 +00:00
softwarefactory-project-zuul[bot]
ad316fc2a3 Merge pull request #6284 from mabashian/4070-access-1
Adds aria-label attrs to buttons without inner text

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 22:00:16 +00:00
softwarefactory-project-zuul[bot]
e4abf634f0 Merge pull request #6268 from keithjgrant/survey-list-sort
Add survey list sorting

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 21:59:34 +00:00
softwarefactory-project-zuul[bot]
bb144acee3 Merge pull request #6249 from jakemcdermott/5771-fix-read-only-display-of-playbooks
Show playbook field on JT when read-only

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 21:11:59 +00:00
Caleb Boylan
16d5456d2b Collection: add a test for multiple credentials on a jt 2020-03-13 13:41:38 -07:00
mabashian
abe8153358 Remove rogue console 2020-03-13 16:38:46 -04:00
Keith Grant
86aabb297e add documentation for useDismissableError, useDeleteItems 2020-03-13 13:21:59 -07:00
softwarefactory-project-zuul[bot]
65a7613c26 Merge pull request #6257 from jakemcdermott/3774-fix-settings-ldap-error-handling
Add parser error handling for settings json

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 19:30:34 +00:00
softwarefactory-project-zuul[bot]
4d1790290e Merge pull request #6259 from jakemcdermott/3956-translate-access-strings
Mark access removal prompts and tech preview message for translation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 19:30:29 +00:00
softwarefactory-project-zuul[bot]
dca335d17c Merge pull request #6251 from jakemcdermott/5987-fix-sometimes-missing-verbosity-and-job-type-values
Fix sometimes missing job_type and verbosity field values

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 19:30:24 +00:00
softwarefactory-project-zuul[bot]
da48cffa12 Merge pull request #6285 from ryanpetrello/user-readonly-last-login
make User.last_login read_only=True in its serializer

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2020-03-13 19:24:58 +00:00
Keith Grant
a2eeb6e7b5 delete unused file 2020-03-13 11:40:24 -07:00
Jake McDermott
f8f6fff21e Show playbook field on JT when read-only 2020-03-13 13:35:49 -04:00
Keith Grant
3e616f2770 update SurveyList tests, add TemplateSurvey tests 2020-03-13 10:17:39 -07:00
softwarefactory-project-zuul[bot]
7c6bef15ba Merge pull request #6246 from mabashian/4070-access-4
Adds aria-label attrs to img elements

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 16:54:35 +00:00
Ryan Petrello
27b48fe55b make User.last_login read_only=True in its serializer 2020-03-13 12:53:40 -04:00
softwarefactory-project-zuul[bot]
6b20ffbfdd Merge pull request #6275 from ryanpetrello/fix-isolated-hostname-in-events
consolidate isolated event handling code into one function

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-13 16:34:46 +00:00
mabashian
43abeec3d7 Mark img aria-labels for translation 2020-03-13 11:25:47 -04:00
mabashian
bd8886a867 Adds lang attr to html tag in ui next app 2020-03-13 11:21:25 -04:00
mabashian
bd68317cfd Adds aria-label attrs to buttons without inner text 2020-03-13 11:13:28 -04:00
Ryan Petrello
f8818730d4 consolidate isolated event handling code into one function
make the non-isolated *and* isolated event handling share the same
function so we don't regress on behavior between the two
2020-03-13 10:05:48 -04:00
Jake McDermott
b41c9e5ba3 Normalize initial value of select2 fields 2020-03-13 09:43:12 -04:00
Jake McDermott
401be0c265 Add parser error handling for settings json 2020-03-13 09:23:11 -04:00
humzam96
35be571eed Don't block the blockquotes 2020-03-13 09:22:20 -04:00
Hideki Saito
8e7faa853e Mark tech preview message for translation 2020-03-13 09:21:02 -04:00
Jake McDermott
1ee46ab98a Mark access removal prompts for translation 2020-03-13 09:20:59 -04:00
Keith Grant
ac9f526cf0 fix useRequest error bug 2020-03-12 17:08:21 -07:00
softwarefactory-project-zuul[bot]
7120e92078 Merge pull request #6262 from rooftopcellist/mv_bootstrap_script
Update dev container to be consistent with other installation methods

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-12 19:51:18 +00:00
AlanCoding
7e6def8acc Grant org JT admin role in migration as well 2020-03-12 15:45:47 -04:00
AlanCoding
aa4842aea5 Various JT.organization cleanup items
cleanup from PR review suggestions

bump migration number

fix test

revert change to old-app JT form no longer needed
2020-03-12 15:45:47 -04:00
John Mitchell
7547793792 add organization to template views in old and new ui 2020-03-12 15:45:47 -04:00
AlanCoding
7d0b207571 Organization on JT as read-only field
Set JT.organization with value from its project

Remove validation requiring JT.organization

Undo some of the additional org definitions in tests

Revert some tests no longer needed for feature

exclude workflow approvals from unified organization field

revert awxkit changes for providing organization

Roll back additional JT creation permission requirement

Fix up more issues by persisting organization field when project is removed

Restrict project org editing, logging, and testing

Grant removed inventory org admin permissions in migration

Add special validate_unique for job templates
  this deals with enforcing name-organization uniqueness

Add back in special message where config is unknown
  when receiving 403 on job relaunch

Fix logical and performance bugs with data migration

within JT.inventory.organization make-permission-explicit migration

remove nested loops so we do .iterator() on JT queryset

in reverse migration, carefully remove execute role on JT
  held by org admins of inventory organization,
  as well as the execute_role holders

Use current state of Role model in logic, with 1 notable exception
  that is used to filter on ancestors
  the ancestor and descendant relationship in the migration model
    is not reliable
  output of this is saved as an integer list to avoid future
    compatibility errors

make the parents rebuilding logic skip over irrelevant models
  this is the largest performance gain for small resource numbers
2020-03-12 15:45:46 -04:00
AlanCoding
daa9282790 Initial (editable) pass of adding JT.organization
This is the old version of this feature from 2019
  this allows setting the organization in the data sent
  to the API when creating a JT, and exposes the field
  in the UI as well

Subsequent commit changes the field from editable
  to read-only, but as of this commit, the machinery
  is not hooked up to infer it from project
2020-03-12 15:45:46 -04:00
AlanCoding
bdd0b9e4d9 Add more tests for recent WFJT module issues 2020-03-12 15:45:25 -04:00
softwarefactory-project-zuul[bot]
1876849d89 Merge pull request #6186 from AlanCoding/wfjt_vars
Modernize types of WFJT module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-12 19:43:04 +00:00
softwarefactory-project-zuul[bot]
e4dd2728ef Merge pull request #6276 from ryanpetrello/approval-start-date-in-notifications
save approval node start time *before* sending "started" notifications

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-03-12 18:58:03 +00:00
Ryan Petrello
88571f6dcb save approval node start time *before* sending "started" notifications
see: https://github.com/ansible/awx/issues/6267
2020-03-12 14:14:56 -04:00
AlanCoding
a1cdc07944 Print module standard out in test failure scenarios 2020-03-12 12:26:54 -04:00
Keith Grant
eea80c45d1 fix keys, clean up surveylist 2020-03-12 08:42:59 -07:00
Keith Grant
07565b5efc add error handling to TemplateSurvey 2020-03-11 15:06:13 -07:00
Christian Adams
3755151cc5 Update dev container to be consistent with other installation methods
- rename start_development.sh script to `launch_awx.sh`, like it is in k8s installations
2020-03-11 16:23:31 -04:00
Keith Grant
2584f7359e restructure TemplateSurvey & list components 2020-03-11 09:33:04 -07:00
Alex Corey
286cec8bc3 WIP 2020-03-11 09:33:04 -07:00
Alex Corey
64b1aa43b1 Adds SurveyList tool bar 2020-03-11 09:33:04 -07:00
mabashian
6c7ab97159 Adds aria-label attrs to img elements 2020-03-10 16:12:29 -04:00
mabashian
8077c910b0 Adds lang attr to installing template 2020-03-10 16:10:51 -04:00
AlanCoding
feef39c5cc Change test to use native list for schemas 2020-03-10 16:07:04 -04:00
AlanCoding
e80843846e Modernize types of WFJT module 2020-03-10 16:07:03 -04:00
mabashian
ecc68c1003 Adds lang attr to html tag to specify default language for the application 2020-03-10 15:33:31 -04:00
772 changed files with 36602 additions and 12735 deletions

View File

@@ -30,8 +30,9 @@ https://www.ansible.com/security
##### STEPS TO REPRODUCE
<!-- For bugs, please show exactly how to reproduce the problem. For new
features, show how the feature would be used. -->
<!-- For new features, show how the feature would be used. For bugs, please show
exactly how to reproduce the problem. Ideally, provide all steps and data needed
to recreate the bug from a new awx install. -->
##### EXPECTED RESULTS

1
.gitignore vendored
View File

@@ -31,6 +31,7 @@ awx/ui/templates/ui/installing.html
awx/ui_next/node_modules/
awx/ui_next/coverage/
awx/ui_next/build/locales/_build
rsyslog.pid
/tower-license
/tower-license/**
tools/prometheus/data

View File

@@ -2,6 +2,45 @@
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
## 11.1.0 (Apr 22, 2020)
- Changed rsyslogd to persist queued events to disk (to prevent a risk of out-of-memory errors) (https://github.com/ansible/awx/issues/6746)
- Added the ability to configure the destination and maximum disk size of rsyslogd spool (in the event of a log aggregator outage) (https://github.com/ansible/awx/pull/6763)
- Added the ability to discover playbooks in project clones from symlinked directories (https://github.com/ansible/awx/pull/6773)
- Fixed a bug that caused certain log aggregator settings to break logging integration (https://github.com/ansible/awx/issues/6760)
- Fixed a bug that caused playbook execution in container groups to sometimes unexpectedly deadlock (https://github.com/ansible/awx/issues/6692)
- Improved stability of the new redis clustering implementation (https://github.com/ansible/awx/pull/6739 https://github.com/ansible/awx/pull/6720)
- Improved stability of the new rsyslogd-based logging implementation (https://github.com/ansible/awx/pull/6796)
## 11.0.0 (Apr 16, 2020)
- As of AWX 11.0.0, Kubernetes-based deployments use a Deployment rather than a StatefulSet.
- Reimplemented external logging support using rsyslogd to improve reliability and address a number of issues (https://github.com/ansible/awx/issues/5155)
- Changed activity stream logs to include summary fields for related objects (https://github.com/ansible/awx/issues/1761)
- Added code to more gracefully attempt to reconnect to redis if it restarts/becomes unavailable (https://github.com/ansible/awx/pull/6670)
- Fixed a bug that caused REFRESH_TOKEN_EXPIRE_SECONDS to not properly be respected for OAuth2.0 refresh tokens generated by AWX (https://github.com/ansible/awx/issues/6630)
- Fixed a bug that broke schedules containing RRULES with very old DTSTART dates (https://github.com/ansible/awx/pull/6550)
- Fixed a bug that broke installs on older versions of Ansible packaged with certain Linux distributions (https://github.com/ansible/awx/issues/5501)
- Fixed a bug that caused the activity stream to sometimes report the incorrect actor when associating user membership on SAML login (https://github.com/ansible/awx/pull/6525)
- Fixed a bug in AWX's Grafana notification support when annotation tags are omitted (https://github.com/ansible/awx/issues/6580)
- Fixed a bug that prevented some users from searching for Source Control credentials in the AWX user interface (https://github.com/ansible/awx/issues/6600)
- Fixed a bug that prevented disassociating orphaned users from credentials (https://github.com/ansible/awx/pull/6554)
- Updated Twisted to address CVE-2020-10108 and CVE-2020-10109.
## 10.0.0 (Mar 30, 2020)
- As of AWX 10.0.0, the official AWX CLI no longer supports Python 2 (it requires at least Python 3.6) (https://github.com/ansible/awx/pull/6327)
- AWX no longer relies on RabbitMQ; Redis is added as a new dependency (https://github.com/ansible/awx/issues/5443)
- Altered AWX's event tables to allow more than ~2 billion total events (https://github.com/ansible/awx/issues/6010)
- Improved the performance (time to execute, and memory consumption) of the periodic job cleanup system job (https://github.com/ansible/awx/pull/6166)
- Updated Job Templates so they now have an explicit Organization field (it is no longer inferred from the associated Project) (https://github.com/ansible/awx/issues/3903)
- Updated social-auth-core to address an upcoming GitHub API deprecation (https://github.com/ansible/awx/issues/5970)
- Updated to ansible-runner 1.4.6 to address various bugs.
- Updated Django to address CVE-2020-9402
- Updated pyyaml version to address CVE-2017-18342
- Fixed a bug which prevented the new `scm_branch` field from being used in custom notification templates (https://github.com/ansible/awx/issues/6258)
- Fixed a race condition that sometimes causes success/failure notifications to include an incomplete list of hosts (https://github.com/ansible/awx/pull/6290)
- Fixed a bug that can cause certain setting pages to lose unsaved form edits when a playbook is launched (https://github.com/ansible/awx/issues/5265)
- Fixed a bug that can prevent the "Use TLS/SSL" field from properly saving when editing email notification templates (https://github.com/ansible/awx/issues/6383)
- Fixed a race condition that sometimes broke event/stdout processing for jobs launched in container groups (https://github.com/ansible/awx/issues/6280)
## 9.3.0 (Mar 12, 2020)
- Added the ability to specify an OAuth2 token description in the AWX CLI (https://github.com/ansible/awx/issues/6122)
- Added support for K8S service account annotations to the installer (https://github.com/ansible/awx/pull/6007)
@@ -79,7 +118,7 @@ This is a list of high-level changes for each release of AWX. A full list of com
- Fixed a bug in the CLI which incorrectly parsed launch time arguments for `awx job_templates launch` and `awx workflow_job_templates launch` (https://github.com/ansible/awx/issues/5093).
- Fixed a bug that caused inventory updates using "sourced from a project" to stop working (https://github.com/ansible/awx/issues/4750).
- Fixed a bug that caused Slack notifications to sometimes show the wrong bot avatar (https://github.com/ansible/awx/pull/5125).
- Fixed a bug that prevented the use of digits in Tower's URL settings (https://github.com/ansible/awx/issues/5081).
- Fixed a bug that prevented the use of digits in AWX's URL settings (https://github.com/ansible/awx/issues/5081).
## 8.0.0 (Oct 21, 2019)

View File

@@ -155,12 +155,11 @@ If you start a second terminal session, you can take a look at the running conta
(host)$ docker ps
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aa4a75d6d77b gcr.io/ansible-tower-engineering/awx_devel:devel "/tini -- /bin/sh ..." 23 seconds ago Up 15 seconds 0.0.0.0:5555->5555/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 22/tcp, 0.0.0.0:8080->8080/tcp tools_awx_1
e4c0afeb548c postgres:10 "docker-entrypoint..." 26 seconds ago Up 23 seconds 5432/tcp tools_postgres_1
0089699d5afd tools_logstash "/docker-entrypoin..." 26 seconds ago Up 25 seconds tools_logstash_1
4d4ff0ced266 memcached:alpine "docker-entrypoint..." 26 seconds ago Up 25 seconds 0.0.0.0:11211->11211/tcp tools_memcached_1
92842acd64cd rabbitmq:3-management "docker-entrypoint..." 26 seconds ago Up 24 seconds 4369/tcp, 5671-5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp tools_rabbitmq_1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44251b476f98 gcr.io/ansible-tower-engineering/awx_devel:devel "/entrypoint.sh /bin…" 27 seconds ago Up 23 seconds 0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp tools_awx_run_9e820694d57e
b049a43817b4 memcached:alpine "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:11211->11211/tcp tools_memcached_1
40de380e3c2e redis:latest "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:6379->6379/tcp tools_redis_1
b66a506d3007 postgres:10 "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:5432->5432/tcp tools_postgres_1
```
**NOTE**
@@ -216,18 +215,23 @@ Using `docker exec`, this will create a session in the running *awx* container,
If you want to start and use the development environment, you'll first need to bootstrap it by running the following command:
```bash
(container)# /bootstrap_development.sh
(container)# /usr/bin/bootstrap_development.sh
```
The above will do all the setup tasks, including running database migrations, so it may take a couple minutes.
The above will do all the setup tasks, including running database migrations, so it may take a couple minutes. Once it's done it
will drop you back to the shell.
Now you can start each service individually, or start all services in a pre-configured tmux session like so:
In order to launch all developer services:
```bash
(container)# cd /awx_devel
(container)# make server
(container)# /usr/bin/launch_awx.sh
```
`launch_awx.sh` also calls `bootstrap_development.sh` so if all you are doing is launching the supervisor to start all services, you don't
need to call `bootstrap_development.sh` first.
### Post Build Steps
Before you can log in and use the system, you will need to create an admin user. Optionally, you may also want to load some demo data.

View File

@@ -41,6 +41,8 @@ This document provides a guide for installing AWX.
+ [Run the installer](#run-the-installer-2)
+ [Post-install](#post-install-2)
+ [Accessing AWX](#accessing-awx-2)
- [Installing the AWX CLI](#installing-the-awx-cli)
* [Building the CLI Documentation](#building-the-cli-documentation)
## Getting started
@@ -80,7 +82,7 @@ The system that runs the AWX service will need to satisfy the following requirem
- At least 2 cpu cores
- At least 20GB of space
- Running Docker, Openshift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 9.6+.
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10+.
### AWX Tunables
@@ -128,7 +130,6 @@ For convenience, you can create a file called `vars.yml`:
```
admin_password: 'adminpass'
pg_password: 'pgpass'
rabbitmq_password: 'rabbitpass'
secret_key: 'mysupersecret'
```
@@ -476,7 +477,7 @@ Before starting the install process, review the [inventory](./installer/inventor
*ssl_certificate*
> Optionally, provide the path to a file that contains a certificate and its private key.
> Optionally, provide the path to a file that contains a certificate and its private key. This needs to be a .pem-file
*docker_compose_dir*
@@ -555,16 +556,7 @@ $ ansible-playbook -i inventory -e docker_registry_password=password install.yml
### Post-install
After the playbook run completes, Docker will report up to 5 running containers. If you chose to use an existing PostgreSQL database, then it will report 4. You can view the running containers using the `docker ps` command, as follows:
```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e240ed8209cd awx_task:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 8052/tcp awx_task
1cfd02601690 awx_web:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 0.0.0.0:443->8052/tcp awx_web
55a552142bcd memcached:alpine "docker-entrypoint..." 2 minutes ago Up 2 minutes 11211/tcp memcached
84011c072aad rabbitmq:3 "docker-entrypoint..." 2 minutes ago Up 2 minutes 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
97e196120ab3 postgres:9.6 "docker-entrypoint..." 2 minutes ago Up 2 minutes 5432/tcp postgres
```
After the playbook run completes, Docker starts a series of containers that provide the services that make up AWX. You can view the running containers using the `docker ps` command.
If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the docker-compose.yml file is created (by default, `awx`).
@@ -630,3 +622,34 @@ Added instance awx to tower
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
# Installing the AWX CLI
`awx` is the official command-line client for AWX. It:
* Uses naming and structure consistent with the AWX HTTP API
* Provides consistent output formats with optional machine-parsable formats
* To the extent possible, auto-detects API versions, available endpoints, and
feature support across multiple versions of AWX.
Potential uses include:
* Configuring and launching jobs/playbooks
* Checking on the status and output of job runs
* Managing objects like organizations, users, teams, etc...
The preferred way to install the AWX CLI is through pip directly from GitHub:
pip install "https://github.com/ansible/awx/archive/$VERSION.tar.gz#egg=awxkit&subdirectory=awxkit"
awx --help
...where ``$VERSION`` is the version of AWX you're running. To see a list of all available releases, visit: https://github.com/ansible/awx/releases
## Building the CLI Documentation
To build the docs, spin up a real AWX server, `pip install sphinx sphinxcontrib-autoprogram`, and run:
~ TOWER_HOST=https://awx.example.org TOWER_USERNAME=example TOWER_PASSWORD=secret make clean html
~ cd build/html/ && python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ..

View File

@@ -18,7 +18,6 @@ COMPOSE_TAG ?= $(GIT_BRANCH)
COMPOSE_HOST ?= $(shell hostname)
VENV_BASE ?= /venv
COLLECTION_VENV ?= /awx_devel/awx_collection_test_venv
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db
@@ -265,28 +264,6 @@ migrate:
dbchange:
$(MANAGEMENT_COMMAND) makemigrations
server_noattach:
tmux new-session -d -s awx 'exec make uwsgi'
tmux rename-window 'AWX'
tmux select-window -t awx:0
tmux split-window -v 'exec make dispatcher'
tmux new-window 'exec make daphne'
tmux select-window -t awx:1
tmux rename-window 'WebSockets'
tmux split-window -h 'exec make runworker'
tmux split-window -v 'exec make nginx'
tmux new-window 'exec make receiver'
tmux select-window -t awx:2
tmux rename-window 'Extra Services'
tmux select-window -t awx:0
server: server_noattach
tmux -2 attach-session -t awx
# Use with iterm2's native tmux protocol support
servercc: server_noattach
tmux -2 -CC attach-session -t awx
supervisor:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -311,18 +288,11 @@ daphne:
fi; \
daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer
runworker:
wsbroadcast:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py runworker --only-channels websocket.*
# Run the built-in development webserver (by default on http://localhost:8013).
runserver:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py runserver
$(PYTHON) manage.py run_wsbroadcast
# Run to start the background task dispatcher for development.
dispatcher:
@@ -394,12 +364,8 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py2,py3
awx-manage check_migrations --dry-run --check -n 'vNNN_missing_migration_file'
prepare_collection_venv:
rm -rf $(COLLECTION_VENV)
mkdir $(COLLECTION_VENV)
$(VENV_BASE)/awx/bin/pip install --target=$(COLLECTION_VENV) git+https://github.com/ansible/tower-cli.git
COLLECTION_TEST_DIRS ?= awx_collection/test/awx
COLLECTION_TEST_TARGET ?=
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
@@ -408,12 +374,12 @@ test_collection:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONPATH=$(COLLECTION_VENV):/awx_devel/awx_collection:$PYTHONPATH:/usr/lib/python3.6/site-packages py.test $(COLLECTION_TEST_DIRS)
PYTHONPATH=$PYTHONPATH:/usr/lib/python3.6/site-packages py.test $(COLLECTION_TEST_DIRS)
flake8_collection:
flake8 awx_collection/ # Different settings, in main exclude list
test_collection_all: prepare_collection_venv test_collection flake8_collection
test_collection_all: test_collection flake8_collection
# WARNING: symlinking a collection is fundamentally unstable
# this is for rapid development iteration with playbooks, do not use with other test targets
@@ -434,7 +400,7 @@ test_collection_sanity: install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
test_unit:
@if [ "$(VENV_BASE)" ]; then \
@@ -678,7 +644,6 @@ detect-schema-change: genschema
diff -u -b reference-schema.json schema.json
docker-compose-clean: awx/projects
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm -w /awx_devel --service-ports awx make clean
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose rm -sf
docker-compose-build: awx-devel-build
@@ -696,11 +661,12 @@ docker-compose-isolated-build: awx-devel-build
docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
MACHINE?=default
docker-clean:
eval $$(docker-machine env $(MACHINE))
$(foreach container_id,$(shell docker ps -f name=tools_awx -aq),docker stop $(container_id); docker rm -f $(container_id);)
-docker images | grep "awx_devel" | awk '{print $$1 ":" $$2}' | xargs docker rmi
docker images | grep "awx_devel" | awk '{print $$1 ":" $$2}' | xargs docker rmi
docker-clean-volumes: docker-compose-clean
docker volume rm tools_awx_db
docker-refresh: docker-clean docker-compose
@@ -714,9 +680,6 @@ docker-compose-cluster-elk: docker-auth awx/projects
prometheus:
docker run -u0 --net=tools_default --link=`docker ps | egrep -o "tools_awx(_run)?_([^ ]+)?"`:awxweb --volume `pwd`/tools/prometheus:/prometheus --name prometheus -d -p 0.0.0.0:9090:9090 prom/prometheus --web.enable-lifecycle --config.file=/prometheus/prometheus.yml
minishift-dev:
ansible-playbook -i localhost, -e devtree_directory=$(CURDIR) tools/clusterdevel/start_minishift_dev.yml
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1

View File

@@ -1 +1 @@
9.3.0
11.1.0

View File

@@ -5,10 +5,12 @@
import inspect
import logging
import time
import uuid
import urllib.parse
# Django
from django.conf import settings
from django.core.cache import cache
from django.db import connection
from django.db.models.fields import FieldDoesNotExist
from django.db.models.fields.related import OneToOneRel
@@ -43,7 +45,10 @@ from awx.main.utils import (
get_search_fields,
getattrd,
get_object_or_400,
decrypt_field
decrypt_field,
get_awx_version,
get_licenser,
StubLicense
)
from awx.main.utils.db import get_all_field_names
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer, UserSerializer
@@ -195,6 +200,8 @@ class APIView(views.APIView):
logger.warning(status_msg)
response = super(APIView, self).finalize_response(request, response, *args, **kwargs)
time_started = getattr(self, 'time_started', None)
response['X-API-Product-Version'] = get_awx_version()
response['X-API-Product-Name'] = 'AWX' if isinstance(get_licenser(), StubLicense) else 'Red Hat Ansible Tower'
response['X-API-Node'] = settings.CLUSTER_HOST_ID
if time_started:
time_elapsed = time.time() - self.time_started
@@ -548,6 +555,15 @@ class SubListCreateAPIView(SubListAPIView, ListCreateAPIView):
})
return d
def get_queryset(self):
if hasattr(self, 'parent_key'):
# Prefer this filtering because ForeignKey allows us more assumptions
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(**{self.parent_key: parent})
return super(SubListCreateAPIView, self).get_queryset()
def create(self, request, *args, **kwargs):
# If the object ID was not specified, it probably doesn't exist in the
# DB yet. We want to see if we can create it. The URL may choose to
@@ -964,6 +980,11 @@ class CopyAPIView(GenericAPIView):
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
new_obj.admin_role.members.add(request.user)
if sub_objs:
# store the copied object dict into memcached, because it's
# often too large for postgres' notification bus
# (which has a default maximum message size of 8k)
key = 'deep-copy-{}'.format(str(uuid.uuid4()))
cache.set(key, sub_objs, timeout=3600)
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):
permission_check_func = (
@@ -971,7 +992,7 @@ class CopyAPIView(GenericAPIView):
)
trigger_delayed_deep_copy(
self.model.__module__, self.model.__name__,
obj.pk, new_obj.pk, request.user.pk, sub_objs,
obj.pk, new_obj.pk, request.user.pk, key,
permission_check_func=permission_check_func
)
serializer = self._get_copy_return_serializer(new_obj)
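The `X-API-*` response headers added in this diff can be sketched as a small standalone function. This is a simplified illustration only, not AWX's actual code path: `AWX_VERSION` and the bare `StubLicense` class below are stand-ins for `get_awx_version()` and the real `StubLicense` from `awx.main.utils`.

```python
AWX_VERSION = "11.1.0"  # stand-in for get_awx_version()

class StubLicense:
    """Stand-in for awx.main.utils.StubLicense (the open-source, unlicensed case)."""

def add_version_headers(headers, license_obj, cluster_host_id):
    # Mirrors the logic added to APIView.finalize_response: every API
    # response now advertises the product version, product name, and
    # the cluster node that served the request.
    headers['X-API-Product-Version'] = AWX_VERSION
    headers['X-API-Product-Name'] = (
        'AWX' if isinstance(license_obj, StubLicense) else 'Red Hat Ansible Tower'
    )
    headers['X-API-Node'] = cluster_host_id
    return headers

headers = add_version_headers({}, StubLicense(), 'awx-node-1')
```

Under this sketch, an unlicensed (stub) install reports `X-API-Product-Name: AWX`, while a licensed install reports `Red Hat Ansible Tower`.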

View File

@@ -2,6 +2,7 @@
# All Rights Reserved.
from collections import OrderedDict
from uuid import UUID
# Django
from django.core.exceptions import PermissionDenied
@@ -60,7 +61,8 @@ class Metadata(metadata.SimpleMetadata):
'type': _('Data type for this {}.'),
'url': _('URL for this {}.'),
'related': _('Data structure with URLs of related resources.'),
'summary_fields': _('Data structure with name/description for related resources.'),
'summary_fields': _('Data structure with name/description for related resources. '
'The output for some objects may be limited for performance reasons.'),
'created': _('Timestamp when this {} was created.'),
'modified': _('Timestamp when this {} was last modified.'),
}
@@ -85,6 +87,8 @@ class Metadata(metadata.SimpleMetadata):
# FIXME: Still isn't showing all default values?
try:
default = field.get_default()
if type(default) is UUID:
default = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if field.field_name == 'TOWER_URL_BASE' and default == 'https://towerhost':
default = '{}://{}'.format(self.request.scheme, self.request.get_host())
field_info['default'] = default
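The UUID-masking change above can be shown in isolation. A minimal sketch; the helper name `mask_uuid_default` is invented here for illustration:

```python
import uuid

UUID_PLACEHOLDER = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

def mask_uuid_default(default):
    # Mirrors the Metadata change: a field whose default is a freshly
    # generated UUID should advertise a placeholder pattern in OPTIONS
    # output rather than leaking a concrete, meaningless value.
    if type(default) is uuid.UUID:
        return UUID_PLACEHOLDER
    return default
```

Any non-UUID default passes through unchanged.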

View File

@@ -72,6 +72,7 @@ from awx.main.utils import (
prefetch_page_capabilities, get_external_account, truncate_stdout,
)
from awx.main.utils.filters import SmartFilter
from awx.main.utils.named_url_graph import reset_counters
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.validators import vars_validate_or_raise
@@ -347,6 +348,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
def _generate_named_url(self, url_path, obj, node):
url_units = url_path.split('/')
reset_counters()
named_url = node.generate_named_url(obj)
url_units[4] = named_url
return '/'.join(url_units)
@@ -642,7 +644,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
_capabilities_prefetch = [
'admin', 'execute',
{'copy': ['jobtemplate.project.use', 'jobtemplate.inventory.use',
'workflowjobtemplate.organization.workflow_admin']}
'organization.workflow_admin']}
]
class Meta:
@@ -884,6 +886,9 @@ class UserSerializer(BaseSerializer):
fields = ('*', '-name', '-description', '-modified',
'username', 'first_name', 'last_name',
'email', 'is_superuser', 'is_system_auditor', 'password', 'ldap_dn', 'last_login', 'external_account')
extra_kwargs = {
'last_login': {'read_only': True}
}
def to_representation(self, obj):
ret = super(UserSerializer, self).to_representation(obj)
@@ -1246,6 +1251,7 @@ class OrganizationSerializer(BaseSerializer):
res.update(dict(
projects = self.reverse('api:organization_projects_list', kwargs={'pk': obj.pk}),
inventories = self.reverse('api:organization_inventories_list', kwargs={'pk': obj.pk}),
job_templates = self.reverse('api:organization_job_templates_list', kwargs={'pk': obj.pk}),
workflow_job_templates = self.reverse('api:organization_workflow_job_templates_list', kwargs={'pk': obj.pk}),
users = self.reverse('api:organization_users_list', kwargs={'pk': obj.pk}),
admins = self.reverse('api:organization_admins_list', kwargs={'pk': obj.pk}),
@@ -1274,6 +1280,14 @@ class OrganizationSerializer(BaseSerializer):
'job_templates': 0, 'admins': 0, 'projects': 0}
else:
summary_dict['related_field_counts'] = counts_dict[obj.id]
# Organization participation roles (admin, member) can't be assigned
# to a team. This provides a hint to the ui so it can know to not
# display these roles for team role selection.
for key in ('admin_role', 'member_role',):
if key in summary_dict.get('object_roles', {}):
summary_dict['object_roles'][key]['user_only'] = True
return summary_dict
def validate(self, attrs):
@@ -1387,12 +1401,6 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
organization = None
if 'organization' in attrs:
organization = attrs['organization']
elif self.instance:
organization = self.instance.organization
if 'allow_override' in attrs and self.instance:
# case where user is turning off this project setting
if self.instance.allow_override and not attrs['allow_override']:
@@ -1408,11 +1416,7 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
' '.join([str(pk) for pk in used_by])
)})
view = self.context.get('view', None)
if not organization and not view.request.user.is_superuser:
# Only allow super users to create orgless projects
raise serializers.ValidationError(_('Organization is missing'))
elif get_field_from_model_or_attrs('scm_type') == '':
if get_field_from_model_or_attrs('scm_type') == '':
for fd in ('scm_update_on_launch', 'scm_delete_on_update', 'scm_clean'):
if get_field_from_model_or_attrs(fd):
raise serializers.ValidationError({fd: _('Update options must be set to false for manual projects.')})
@@ -2030,11 +2034,6 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
res['credentials'] = self.reverse('api:inventory_source_credentials_list', kwargs={'pk': obj.pk})
return res
def get_group(self, obj): # TODO: remove in 3.3
if obj.deprecated_group:
return obj.deprecated_group.id
return None
def build_relational_field(self, field_name, relation_info):
field_class, field_kwargs = super(InventorySourceSerializer, self).build_relational_field(field_name, relation_info)
# SCM Project and inventory are read-only unless creating a new inventory.
@@ -2738,7 +2737,8 @@ class JobOptionsSerializer(LabelsListMixin, BaseSerializer):
fields = ('*', 'job_type', 'inventory', 'project', 'playbook', 'scm_branch',
'forks', 'limit', 'verbosity', 'extra_vars', 'job_tags',
'force_handlers', 'skip_tags', 'start_at_task', 'timeout',
'use_fact_cache',)
'use_fact_cache', 'organization',)
read_only_fields = ('organization',)
def get_related(self, obj):
res = super(JobOptionsSerializer, self).get_related(obj)
@@ -2753,6 +2753,8 @@ class JobOptionsSerializer(LabelsListMixin, BaseSerializer):
res['project'] = self.reverse('api:project_detail', kwargs={'pk': obj.project.pk})
except ObjectDoesNotExist:
setattr(obj, 'project', None)
if obj.organization_id:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization_id})
if isinstance(obj, UnifiedJobTemplate):
res['extra_credentials'] = self.reverse(
'api:job_template_extra_credentials_list',
@@ -2899,6 +2901,10 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
)
if obj.host_config_key:
res['callback'] = self.reverse('api:job_template_callback', kwargs={'pk': obj.pk})
if obj.organization_id:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization_id})
if obj.webhook_credential_id:
res['webhook_credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.webhook_credential_id})
return res
def validate(self, attrs):
@@ -3204,7 +3210,7 @@ class AdHocCommandSerializer(UnifiedJobSerializer):
field_kwargs['choices'] = module_name_choices
field_kwargs['required'] = bool(not module_name_default)
field_kwargs['default'] = module_name_default or serializers.empty
field_kwargs['allow_blank'] = bool(module_name_default)
field_kwargs['allow_blank'] = False
field_kwargs.pop('max_length', None)
return field_class, field_kwargs
@@ -3389,6 +3395,8 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
)
if obj.organization:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization.pk})
if obj.webhook_credential_id:
res['webhook_credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.webhook_credential_id})
return res
def validate_extra_vars(self, value):
@@ -3603,9 +3611,11 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
elif self.instance:
ujt = self.instance.unified_job_template
if ujt is None:
if 'workflow_job_template' in attrs:
return {'workflow_job_template': attrs['workflow_job_template']}
return {}
ret = {}
for fd in ('workflow_job_template', 'identifier'):
if fd in attrs:
ret[fd] = attrs[fd]
return ret
# build additional field survey_passwords to track redacted variables
password_dict = {}
@@ -3658,7 +3668,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
attrs.get('survey_passwords', {}).pop(key, None)
else:
errors.setdefault('extra_vars', []).append(
_('"$encrypted$ is a reserved keyword, may not be used for {var_name}."'.format(key))
_('"$encrypted$ is a reserved keyword, may not be used for {}."'.format(key))
)
# Launch configs call extra_vars extra_data for historical reasons
@@ -3683,7 +3693,8 @@ class WorkflowJobTemplateNodeSerializer(LaunchConfigurationBaseSerializer):
class Meta:
model = WorkflowJobTemplateNode
fields = ('*', 'workflow_job_template', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes', 'all_parents_must_converge',)
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes', 'all_parents_must_converge',
'identifier',)
def get_related(self, obj):
res = super(WorkflowJobTemplateNodeSerializer, self).get_related(obj)
@@ -3723,7 +3734,7 @@ class WorkflowJobNodeSerializer(LaunchConfigurationBaseSerializer):
model = WorkflowJobNode
fields = ('*', 'job', 'workflow_job', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',
'all_parents_must_converge', 'do_not_run',)
'all_parents_must_converge', 'do_not_run', 'identifier')
def get_related(self, obj):
res = super(WorkflowJobNodeSerializer, self).get_related(obj)
@@ -4525,6 +4536,8 @@ class SchedulePreviewSerializer(BaseSerializer):
try:
Schedule.rrulestr(rrule_value)
except Exception as e:
import traceback
logger.error(traceback.format_exc())
raise serializers.ValidationError(_("rrule parsing failed validation: {}").format(e))
return value

View File

@@ -1 +1,2 @@
# Test Logging Configuration

View File

@@ -1,11 +1,15 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from datetime import timedelta
from django.utils.timezone import now
from django.conf import settings
from django.conf.urls import url
from oauthlib import oauth2
from oauth2_provider import views
from awx.main.models import RefreshToken
from awx.api.views import (
ApiOAuthAuthorizationRootView,
)
@@ -14,6 +18,21 @@ from awx.api.views import (
class TokenView(views.TokenView):
def create_token_response(self, request):
# Django OAuth2 Toolkit has a bug whereby refresh tokens are *never*
# properly expired (ugh):
#
# https://github.com/jazzband/django-oauth-toolkit/issues/746
#
# This code detects and auto-expires them on refresh grant
# requests.
if request.POST.get('grant_type') == 'refresh_token' and 'refresh_token' in request.POST:
refresh_token = RefreshToken.objects.filter(
token=request.POST['refresh_token']
).first()
if refresh_token:
expire_seconds = settings.OAUTH2_PROVIDER.get('REFRESH_TOKEN_EXPIRE_SECONDS', 0)
if refresh_token.created + timedelta(seconds=expire_seconds) < now():
return request.build_absolute_uri(), {}, 'The refresh token has expired.', '403'
try:
return super(TokenView, self).create_token_response(request)
except oauth2.AccessDeniedError as e:
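The auto-expiry check added to `TokenView` reduces to a created-time-plus-lifetime comparison; a sketch, assuming timezone-aware timestamps:

```python
from datetime import datetime, timedelta, timezone

def refresh_token_expired(created, expire_seconds):
    # mirrors: refresh_token.created + timedelta(seconds=expire_seconds) < now()
    return created + timedelta(seconds=expire_seconds) < datetime.now(timezone.utc)
```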

View File

@@ -10,6 +10,7 @@ from awx.api.views import (
OrganizationAdminsList,
OrganizationInventoriesList,
OrganizationProjectsList,
OrganizationJobTemplatesList,
OrganizationWorkflowJobTemplatesList,
OrganizationTeamsList,
OrganizationCredentialList,
@@ -33,6 +34,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/admins/$', OrganizationAdminsList.as_view(), name='organization_admins_list'),
url(r'^(?P<pk>[0-9]+)/inventories/$', OrganizationInventoriesList.as_view(), name='organization_inventories_list'),
url(r'^(?P<pk>[0-9]+)/projects/$', OrganizationProjectsList.as_view(), name='organization_projects_list'),
url(r'^(?P<pk>[0-9]+)/job_templates/$', OrganizationJobTemplatesList.as_view(), name='organization_job_templates_list'),
url(r'^(?P<pk>[0-9]+)/workflow_job_templates/$', OrganizationWorkflowJobTemplatesList.as_view(), name='organization_workflow_job_templates_list'),
url(r'^(?P<pk>[0-9]+)/teams/$', OrganizationTeamsList.as_view(), name='organization_teams_list'),
url(r'^(?P<pk>[0-9]+)/credentials/$', OrganizationCredentialList.as_view(), name='organization_credential_list'),

View File

@@ -34,7 +34,9 @@ from awx.api.views import (
OAuth2ApplicationDetail,
)
from awx.api.views.metrics import MetricsView
from awx.api.views.metrics import (
MetricsView,
)
from .organization import urls as organization_urls
from .user import urls as user_urls

View File

@@ -111,6 +111,7 @@ from awx.api.views.organization import ( # noqa
OrganizationUsersList,
OrganizationAdminsList,
OrganizationProjectsList,
OrganizationJobTemplatesList,
OrganizationWorkflowJobTemplatesList,
OrganizationTeamsList,
OrganizationActivityStreamList,
@@ -1091,7 +1092,7 @@ class UserRolesList(SubListAttachDetachAPIView):
credential_content_type = ContentType.objects.get_for_model(models.Credential)
if role.content_type == credential_content_type:
if role.content_object.organization and user not in role.content_object.organization.member_role:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
return Response(data, status=status.HTTP_400_BAD_REQUEST)
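The added `'disassociate' not in request.data` guard means the organization-membership check only blocks new grants; a hedged sketch of the predicate (names are illustrative):

```python
def blocks_credential_attach(request_data, credential_org, user_member_orgs):
    # disassociation must always be permitted; only new grants are checked
    if 'disassociate' in request_data:
        return False
    return credential_org is not None and credential_org not in user_member_orgs
```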
@@ -4414,7 +4415,7 @@ class RoleUsersList(SubListAttachDetachAPIView):
credential_content_type = ContentType.objects.get_for_model(models.Credential)
if role.content_type == credential_content_type:
if role.content_object.organization and user not in role.content_object.organization.member_role:
if 'disassociate' not in request.data and role.content_object.organization and user not in role.content_object.organization.member_role:
data = dict(msg=_("You cannot grant credential access to a user not in the credentials' organization"))
return Response(data, status=status.HTTP_400_BAD_REQUEST)

View File

@@ -39,3 +39,4 @@ class MetricsView(APIView):
if (request.user.is_superuser or request.user.is_system_auditor):
return Response(metrics().decode('UTF-8'))
raise PermissionDenied()

View File

@@ -4,10 +4,7 @@
import dateutil
import logging
from django.db.models import (
Count,
F,
)
from django.db.models import Count
from django.db import transaction
from django.shortcuts import get_object_or_404
from django.utils.timezone import now
@@ -175,28 +172,18 @@ class OrganizationCountsMixin(object):
inv_qs = Inventory.accessible_objects(self.request.user, 'read_role')
project_qs = Project.accessible_objects(self.request.user, 'read_role')
jt_qs = JobTemplate.accessible_objects(self.request.user, 'read_role')
# Produce counts of Foreign Key relationships
db_results['inventories'] = inv_qs\
.values('organization').annotate(Count('organization')).order_by('organization')
db_results['inventories'] = inv_qs.values('organization').annotate(Count('organization')).order_by('organization')
db_results['teams'] = Team.accessible_objects(
self.request.user, 'read_role').values('organization').annotate(
Count('organization')).order_by('organization')
JT_project_reference = 'project__organization'
JT_inventory_reference = 'inventory__organization'
db_results['job_templates_project'] = JobTemplate.accessible_objects(
self.request.user, 'read_role').exclude(
project__organization=F(JT_inventory_reference)).values(JT_project_reference).annotate(
Count(JT_project_reference)).order_by(JT_project_reference)
db_results['job_templates'] = jt_qs.values('organization').annotate(Count('organization')).order_by('organization')
db_results['job_templates_inventory'] = JobTemplate.accessible_objects(
self.request.user, 'read_role').values(JT_inventory_reference).annotate(
Count(JT_inventory_reference)).order_by(JT_inventory_reference)
db_results['projects'] = project_qs\
.values('organization').annotate(Count('organization')).order_by('organization')
db_results['projects'] = project_qs.values('organization').annotate(Count('organization')).order_by('organization')
# Other members and admins of organization are always viewable
db_results['users'] = org_qs.annotate(
@@ -212,11 +199,7 @@ class OrganizationCountsMixin(object):
'admins': 0, 'projects': 0}
for res, count_qs in db_results.items():
if res == 'job_templates_project':
org_reference = JT_project_reference
elif res == 'job_templates_inventory':
org_reference = JT_inventory_reference
elif res == 'users':
if res == 'users':
org_reference = 'id'
else:
org_reference = 'organization'
@@ -229,14 +212,6 @@ class OrganizationCountsMixin(object):
continue
count_context[org_id][res] = entry['%s__count' % org_reference]
# Combine the counts for job templates by project and inventory
for org in org_id_list:
org_id = org['id']
count_context[org_id]['job_templates'] = 0
for related_path in ['job_templates_project', 'job_templates_inventory']:
if related_path in count_context[org_id]:
count_context[org_id]['job_templates'] += count_context[org_id].pop(related_path)
full_context['related_field_counts'] = count_context
return full_context

View File

@@ -20,7 +20,7 @@ from awx.main.models import (
Role,
User,
Team,
InstanceGroup,
InstanceGroup
)
from awx.api.generics import (
ListCreateAPIView,
@@ -28,6 +28,7 @@ from awx.api.generics import (
SubListAPIView,
SubListCreateAttachDetachAPIView,
SubListAttachDetachAPIView,
SubListCreateAPIView,
ResourceAccessList,
BaseUsersList,
)
@@ -35,14 +36,13 @@ from awx.api.generics import (
from awx.api.serializers import (
OrganizationSerializer,
InventorySerializer,
ProjectSerializer,
UserSerializer,
TeamSerializer,
ActivityStreamSerializer,
RoleSerializer,
NotificationTemplateSerializer,
WorkflowJobTemplateSerializer,
InstanceGroupSerializer,
ProjectSerializer, JobTemplateSerializer, WorkflowJobTemplateSerializer
)
from awx.api.views.mixin import (
RelatedJobsPreventDeleteMixin,
@@ -94,7 +94,7 @@ class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPI
org_counts['projects'] = Project.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
project__organization__id=org_id).count()
organization__id=org_id).count()
full_context['related_field_counts'] = {}
full_context['related_field_counts'][org_id] = org_counts
@@ -128,21 +128,27 @@ class OrganizationAdminsList(BaseUsersList):
ordering = ('username',)
class OrganizationProjectsList(SubListCreateAttachDetachAPIView):
class OrganizationProjectsList(SubListCreateAPIView):
model = Project
serializer_class = ProjectSerializer
parent_model = Organization
relationship = 'projects'
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):
class OrganizationJobTemplatesList(SubListCreateAPIView):
model = JobTemplate
serializer_class = JobTemplateSerializer
parent_model = Organization
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
relationship = 'workflows'
parent_key = 'organization'

View File

@@ -2,14 +2,15 @@
# All Rights Reserved.
import os
import logging
import django
from awx import __version__ as tower_version
# Prepare the AWX environment.
from awx import prepare_env, MODE
prepare_env() # NOQA
from django.core.wsgi import get_wsgi_application # NOQA
from channels.asgi import get_channel_layer
from channels.routing import get_default_application
"""
ASGI config for AWX project.
@@ -32,6 +33,5 @@ if MODE == 'production':
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "awx.settings")
channel_layer = get_channel_layer()
django.setup()
channel_layer = get_default_application()

View File

@@ -172,9 +172,9 @@ class URLField(CharField):
netloc = '{}:{}'.format(netloc, url_parts.port)
if url_parts.username:
if url_parts.password:
netloc = '{}:{}@{}' % (url_parts.username, url_parts.password, netloc)
netloc = '{}:{}@{}'.format(url_parts.username, url_parts.password, netloc)
else:
netloc = '{}@{}' % (url_parts.username, netloc)
netloc = '{}@{}'.format(url_parts.username, netloc)
value = urlparse.urlunsplit([url_parts.scheme, netloc, url_parts.path, url_parts.query, url_parts.fragment])
except Exception:
raise # If something fails here, just fall through and let the validators check it.
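The fix above swaps mistaken `%` operators for `str.format`; the corrected netloc rebuild behaves like this standalone sketch:

```python
from urllib.parse import urlsplit, urlunsplit

def rebuild_url(value):
    # reassemble the netloc from its parsed components so credentials
    # and port round-trip through urlsplit/urlunsplit unchanged
    parts = urlsplit(value)
    netloc = parts.hostname or ''
    if parts.port:
        netloc = '{}:{}'.format(netloc, parts.port)
    if parts.username:
        if parts.password:
            netloc = '{}:{}@{}'.format(parts.username, parts.password, netloc)
        else:
            netloc = '{}@{}'.format(parts.username, netloc)
    return urlunsplit([parts.scheme, netloc, parts.path, parts.query, parts.fragment])
```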

View File

@@ -1,11 +1,9 @@
# Python
import contextlib
import logging
import re
import sys
import threading
import time
import urllib.parse
# Django
from django.conf import LazySettings
@@ -57,15 +55,6 @@ SETTING_CACHE_DEFAULTS = True
__all__ = ['SettingsWrapper', 'get_settings_to_cache', 'SETTING_CACHE_NOTSET']
def normalize_broker_url(value):
parts = value.rsplit('@', 1)
match = re.search('(amqp://[^:]+:)(.*)', parts[0])
if match:
prefix, password = match.group(1), match.group(2)
parts[0] = prefix + urllib.parse.quote(password)
return '@'.join(parts)
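For reference, the removed helper percent-encoded the password portion of an AMQP URL; its behavior can be reproduced standalone as:

```python
import re
import urllib.parse

def normalize_broker_url(value):
    # split off the host part, then percent-encode everything after
    # the username's colon so ':' and '@' in passwords parse cleanly
    parts = value.rsplit('@', 1)
    match = re.search('(amqp://[^:]+:)(.*)', parts[0])
    if match:
        prefix, password = match.group(1), match.group(2)
        parts[0] = prefix + urllib.parse.quote(password)
    return '@'.join(parts)
```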
@contextlib.contextmanager
def _ctit_db_wrapper(trans_safe=False):
'''
@@ -415,22 +404,13 @@ class SettingsWrapper(UserSettingsHolder):
value = self._get_local(name)
if value is not empty:
return value
value = self._get_default(name)
# sometimes users specify RabbitMQ passwords that contain
# unescaped : and @ characters that confused urlparse, e.g.,
# amqp://guest:a@ns:ibl3#@localhost:5672//
#
# detect these scenarios, and automatically escape the user's
# password so it just works
if name == 'BROKER_URL':
value = normalize_broker_url(value)
return value
return self._get_default(name)
def _set_local(self, name, value):
field = self.registry.get_setting_field(name)
if field.read_only:
logger.warning('Attempt to set read only setting "%s".', name)
raise ImproperlyConfigured('Setting "%s" is read only.'.format(name))
raise ImproperlyConfigured('Setting "{}" is read only.'.format(name))
try:
data = field.to_representation(value)
@@ -461,7 +441,7 @@ class SettingsWrapper(UserSettingsHolder):
field = self.registry.get_setting_field(name)
if field.read_only:
logger.warning('Attempt to delete read only setting "%s".', name)
raise ImproperlyConfigured('Setting "%s" is read only.'.format(name))
raise ImproperlyConfigured('Setting "{}" is read only.'.format(name))
for setting in Setting.objects.filter(key=name, user__isnull=True):
setting.delete()
# pre_delete handler will delete from cache.

View File

@@ -325,17 +325,3 @@ def test_setting_singleton_delete_no_read_only_fields(api_request, dummy_setting
)
assert response.data['FOO_BAR'] == 23
@pytest.mark.django_db
def test_setting_logging_test(api_request):
with mock.patch('awx.conf.views.AWXProxyHandler.perform_test') as mock_func:
api_request(
'post',
reverse('api:setting_logging_test'),
data={'LOG_AGGREGATOR_HOST': 'http://foobar', 'LOG_AGGREGATOR_TYPE': 'logstash'}
)
call = mock_func.call_args_list[0]
args, kwargs = call
given_settings = kwargs['custom_settings']
assert given_settings.LOG_AGGREGATOR_HOST == 'http://foobar'
assert given_settings.LOG_AGGREGATOR_TYPE == 'logstash'

View File

@@ -3,7 +3,11 @@
# Python
import collections
import logging
import subprocess
import sys
import socket
from socket import SHUT_RDWR
# Django
from django.conf import settings
@@ -11,7 +15,7 @@ from django.http import Http404
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import PermissionDenied, ValidationError
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import serializers
from rest_framework import status
@@ -26,7 +30,6 @@ from awx.api.generics import (
from awx.api.permissions import IsSuperUser
from awx.api.versioning import reverse
from awx.main.utils import camelcase_to_underscore
from awx.main.utils.handlers import AWXProxyHandler, LoggingConnectivityException
from awx.main.tasks import handle_setting_changes
from awx.conf.models import Setting
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
@@ -161,40 +164,47 @@ class SettingLoggingTest(GenericAPIView):
filter_backends = []
def post(self, request, *args, **kwargs):
defaults = dict()
for key in settings_registry.get_registered_settings(category_slug='logging'):
try:
defaults[key] = settings_registry.get_setting_field(key).get_default()
except serializers.SkipField:
defaults[key] = None
obj = type('Settings', (object,), defaults)()
serializer = self.get_serializer(obj, data=request.data)
serializer.is_valid(raise_exception=True)
# Special validation specific to logging test.
errors = {}
for key in ['LOG_AGGREGATOR_TYPE', 'LOG_AGGREGATOR_HOST']:
if not request.data.get(key, ''):
errors[key] = 'This field is required.'
if errors:
raise ValidationError(errors)
if request.data.get('LOG_AGGREGATOR_PASSWORD', '').startswith('$encrypted$'):
serializer.validated_data['LOG_AGGREGATOR_PASSWORD'] = getattr(
settings, 'LOG_AGGREGATOR_PASSWORD', ''
)
# Error if logging is not enabled
enabled = getattr(settings, 'LOG_AGGREGATOR_ENABLED', False)
if not enabled:
return Response({'error': 'Logging not enabled'}, status=status.HTTP_409_CONFLICT)
# Send test message to configured logger based on db settings
logging.getLogger('awx').error('AWX Connection Test Message')
hostname = getattr(settings, 'LOG_AGGREGATOR_HOST', None)
protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', None)
try:
class MockSettings:
pass
mock_settings = MockSettings()
for k, v in serializer.validated_data.items():
setattr(mock_settings, k, v)
AWXProxyHandler().perform_test(custom_settings=mock_settings)
if mock_settings.LOG_AGGREGATOR_PROTOCOL.upper() == 'UDP':
return Response(status=status.HTTP_201_CREATED)
except LoggingConnectivityException as e:
return Response({'error': str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
return Response(status=status.HTTP_200_OK)
subprocess.check_output(
['rsyslogd', '-N1', '-f', '/var/lib/awx/rsyslog/rsyslog.conf'],
stderr=subprocess.STDOUT
)
except subprocess.CalledProcessError as exc:
return Response({'error': exc.output}, status=status.HTTP_400_BAD_REQUEST)
# Check to ensure port is open at host
if protocol in ['udp', 'tcp']:
port = getattr(settings, 'LOG_AGGREGATOR_PORT', None)
# Error if port is not set when using UDP/TCP
if not port:
return Response({'error': 'Port required for ' + protocol}, status=status.HTTP_400_BAD_REQUEST)
else:
# if http/https by this point, domain is reachable
return Response(status=status.HTTP_202_ACCEPTED)
if protocol == 'udp':
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
else:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.settimeout(.5)
s.connect((hostname, int(port)))
s.shutdown(SHUT_RDWR)
s.close()
return Response(status=status.HTTP_202_ACCEPTED)
except Exception as e:
return Response({'error': str(e)}, status=status.HTTP_400_BAD_REQUEST)
# Create view functions for all of the class-based views to simplify inclusion
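The connectivity probe added in the hunk above can be exercised standalone (note that a UDP `connect` only records the peer address, so the probe is only meaningful for TCP):

```python
import socket

def port_reachable(hostname, port, protocol='tcp', timeout=0.5):
    # open a socket of the matching type and attempt to connect,
    # mirroring the settimeout/connect/shutdown sequence in the view
    sock_type = socket.SOCK_DGRAM if protocol == 'udp' else socket.SOCK_STREAM
    s = socket.socket(socket.AF_INET, sock_type)
    try:
        s.settimeout(timeout)
        s.connect((hostname, int(port)))
        s.shutdown(socket.SHUT_RDWR)
        return True
    except Exception:
        return False
    finally:
        s.close()
```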

View File

@@ -11,7 +11,6 @@ from functools import reduce
from django.conf import settings
from django.db.models import Q, Prefetch
from django.contrib.auth.models import User
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
from django.core.exceptions import ObjectDoesNotExist
@@ -405,14 +404,6 @@ class BaseAccess(object):
# Cannot copy manual project without errors
user_capabilities[display_method] = False
continue
elif display_method in ['start', 'schedule'] and isinstance(obj, Group): # TODO: remove in 3.3
try:
if obj.deprecated_inventory_source and not obj.deprecated_inventory_source._can_update():
user_capabilities[display_method] = False
continue
except Group.deprecated_inventory_source.RelatedObjectDoesNotExist:
user_capabilities[display_method] = False
continue
elif display_method in ['start', 'schedule'] and isinstance(obj, (Project)):
if obj.scm_type == '':
user_capabilities[display_method] = False
@@ -650,8 +641,8 @@ class UserAccess(BaseAccess):
# in these cases only superusers can modify orphan users
return False
return not obj.roles.all().exclude(
content_type=ContentType.objects.get_for_model(User)
).filter(ancestors__in=self.user.roles.all()).exists()
ancestors__in=self.user.roles.all()
).exists()
else:
return self.is_all_org_admin(obj)
@@ -789,7 +780,6 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
return self.user in obj.admin_role
def can_delete(self, obj):
self.check_license(check_expiration=False)
is_change_possible = self.can_change(obj, None)
if not is_change_possible:
return False
@@ -1411,7 +1401,7 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
'''
model = JobTemplate
select_related = ('created_by', 'modified_by', 'inventory', 'project',
select_related = ('created_by', 'modified_by', 'inventory', 'project', 'organization',
'next_schedule',)
prefetch_related = (
'instance_groups',
@@ -1435,16 +1425,11 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
Users who are able to create deploy jobs can also run normal and check (dry run) jobs.
'''
if not data: # So the browseable API will work
return (
Project.accessible_objects(self.user, 'use_role').exists() or
Inventory.accessible_objects(self.user, 'use_role').exists())
return Project.accessible_objects(self.user, 'use_role').exists()
# if reference_obj is provided, determine if it can be copied
reference_obj = data.get('reference_obj', None)
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
if self.user.is_superuser:
return True
@@ -1504,22 +1489,23 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
return self.user in obj.execute_role
def can_change(self, obj, data):
data_for_change = data
if self.user not in obj.admin_role and not self.user.is_superuser:
return False
if data is not None:
data = dict(data)
if data is None:
return True
if self.changes_are_non_sensitive(obj, data):
if 'survey_enabled' in data and obj.survey_enabled != data['survey_enabled'] and data['survey_enabled']:
self.check_license(feature='surveys')
return True
data = dict(data)
for required_field in ('inventory', 'project'):
required_obj = getattr(obj, required_field, None)
if required_field not in data_for_change and required_obj is not None:
data_for_change[required_field] = required_obj.pk
return self.can_read(obj) and (self.can_add(data_for_change) if data is not None else True)
if self.changes_are_non_sensitive(obj, data):
return True
for required_field, cls in (('inventory', Inventory), ('project', Project)):
is_mandatory = True
if not getattr(obj, '{}_id'.format(required_field)):
is_mandatory = False
if not self.check_related(required_field, cls, data, obj=obj, role_field='use_role', mandatory=is_mandatory):
return False
return True
def changes_are_non_sensitive(self, obj, data):
'''
@@ -1554,9 +1540,9 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
@check_superuser
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == "instance_groups":
if not obj.project.organization:
if not obj.organization:
return False
return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.project.organization.admin_role
return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return self.user in obj.admin_role and self.user in sub_obj.use_role
return super(JobTemplateAccess, self).can_attach(
@@ -1587,6 +1573,7 @@ class JobAccess(BaseAccess):
select_related = ('created_by', 'modified_by', 'job_template', 'inventory',
'project', 'project_update',)
prefetch_related = (
'organization',
'unified_job_template',
'instance_group',
'credentials__credential_type',
@@ -1607,42 +1594,19 @@ class JobAccess(BaseAccess):
return qs.filter(
Q(job_template__in=JobTemplate.accessible_objects(self.user, 'read_role')) |
Q(inventory__organization__in=org_access_qs) |
Q(project__organization__in=org_access_qs)).distinct()
def related_orgs(self, obj):
orgs = []
if obj.inventory and obj.inventory.organization:
orgs.append(obj.inventory.organization)
if obj.project and obj.project.organization and obj.project.organization not in orgs:
orgs.append(obj.project.organization)
return orgs
def org_access(self, obj, role_types=['admin_role']):
orgs = self.related_orgs(obj)
for org in orgs:
for role_type in role_types:
role = getattr(org, role_type)
if self.user in role:
return True
return False
Q(organization__in=org_access_qs)).distinct()
def can_add(self, data, validate_license=True):
if validate_license:
self.check_license()
if not data: # So the browseable API will work
return True
return self.user.is_superuser
raise NotImplementedError('Direct job creation not possible in v2 API')
def can_change(self, obj, data):
return (obj.status == 'new' and
self.can_read(obj) and
self.can_add(data, validate_license=False))
raise NotImplementedError('Direct job editing not supported in v2 API')
@check_superuser
def can_delete(self, obj):
return self.org_access(obj)
if not obj.organization:
return False
return self.user in obj.organization.admin_role
def can_start(self, obj, validate_license=True):
if validate_license:
@@ -1662,6 +1626,7 @@ class JobAccess(BaseAccess):
except JobLaunchConfig.DoesNotExist:
config = None
# Standard permissions model
if obj.job_template and (self.user not in obj.job_template.execute_role):
return False
@@ -1676,24 +1641,17 @@ class JobAccess(BaseAccess):
if JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}):
return True
org_access = bool(obj.inventory) and self.user in obj.inventory.organization.inventory_admin_role
project_access = obj.project is None or self.user in obj.project.admin_role
credential_access = all([self.user in cred.use_role for cred in obj.credentials.all()])
# Standard permissions model without job template involved
if obj.organization and self.user in obj.organization.execute_role:
return True
elif not (obj.job_template or obj.organization):
raise PermissionDenied(_('Job has been orphaned from its job template and organization.'))
elif obj.job_template and config is not None:
raise PermissionDenied(_('Job was launched with prompted fields you do not have access to.'))
elif obj.job_template and config is None:
raise PermissionDenied(_('Job was launched with unknown prompted fields. Organization admin permissions required.'))
# job can be relaunched if user could make an equivalent JT
ret = org_access and credential_access and project_access
if not ret and self.save_messages and not self.messages:
if not obj.job_template:
pretext = _('Job has been orphaned from its job template.')
elif config is None:
pretext = _('Job was launched with unknown prompted fields.')
else:
pretext = _('Job was launched with prompted fields.')
if credential_access:
self.messages['detail'] = '{} {}'.format(pretext, _('Organization level permissions required.'))
else:
self.messages['detail'] = '{} {}'.format(pretext, _('You do not have permission to related resources.'))
return ret
return False
def get_method_capability(self, method, obj, parent_obj):
if method == 'start':
@@ -1706,10 +1664,16 @@ class JobAccess(BaseAccess):
def can_cancel(self, obj):
if not obj.can_cancel:
return False
# Delete access allows org admins to stop running jobs
if self.user == obj.created_by or self.can_delete(obj):
# Users may always cancel their own jobs
if self.user == obj.created_by:
return True
return obj.job_template is not None and self.user in obj.job_template.admin_role
# Users with direct admin to JT may cancel jobs started by anyone
if obj.job_template and self.user in obj.job_template.admin_role:
return True
# If orphaned, allow org JT admins to stop running jobs
if not obj.job_template and obj.organization and self.user in obj.organization.job_template_admin_role:
return True
return False
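The cancel policy above reduces to three ordered checks. A simplified, standalone model (plain dataclasses standing in for AWX models and roles — not the real `JobAccess` class) behaves like:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    created_by: str
    jt_admins: set = field(default_factory=set)       # direct JT admin_role members
    organization_jt_admins: set = field(default_factory=set)
    has_job_template: bool = True
    can_cancel: bool = True                           # job is still running

def user_can_cancel(user, job):
    if not job.can_cancel:
        return False
    if user == job.created_by:                        # users may always cancel their own jobs
        return True
    if job.has_job_template and user in job.jt_admins:
        return True                                   # direct JT admins may cancel anyone's job
    if not job.has_job_template and user in job.organization_jt_admins:
        return True                                   # orphaned job: org JT admins may cancel
    return False

job = Job(created_by='alice', jt_admins={'bob'})
print(user_can_cancel('alice', job), user_can_cancel('bob', job),
      user_can_cancel('carol', job))   # True True False
```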
class SystemJobTemplateAccess(BaseAccess):
@@ -1944,11 +1908,11 @@ class WorkflowJobNodeAccess(BaseAccess):
# TODO: notification attachments?
class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
'''
I can only see/manage Workflow Job Templates if I'm a super user
I can see/manage Workflow Job Templates based on object roles
'''
model = WorkflowJobTemplate
select_related = ('created_by', 'modified_by', 'next_schedule',
select_related = ('created_by', 'modified_by', 'organization', 'next_schedule',
'admin_role', 'execute_role', 'read_role',)
def filtered_queryset(self):
@@ -1966,10 +1930,6 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
# will check this if surveys are added to WFJT
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and
self.check_related('inventory', Inventory, data, role_field='use_role')
@@ -2038,7 +1998,7 @@ class WorkflowJobAccess(BaseAccess):
I can also cancel it if I started it
'''
model = WorkflowJob
select_related = ('created_by', 'modified_by',)
select_related = ('created_by', 'modified_by', 'organization',)
def filtered_queryset(self):
return WorkflowJob.objects.filter(
@@ -2332,6 +2292,7 @@ class UnifiedJobTemplateAccess(BaseAccess):
prefetch_related = (
'last_job',
'current_job',
'organization',
'credentials__credential_type',
Prefetch('labels', queryset=Label.objects.all().order_by('name')),
)
@@ -2371,6 +2332,7 @@ class UnifiedJobAccess(BaseAccess):
prefetch_related = (
'created_by',
'modified_by',
'organization',
'unified_job_node__workflow_job',
'unified_job_template',
'instance_group',
@@ -2401,8 +2363,7 @@ class UnifiedJobAccess(BaseAccess):
Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role')) |
Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs) |
Q(adhoccommand__inventory__id__in=inv_pk_qs) |
Q(job__inventory__organization__in=org_auditor_qs) |
Q(job__project__organization__in=org_auditor_qs)
Q(organization__in=org_auditor_qs)
)
return qs

View File

@@ -0,0 +1,170 @@
import datetime
import asyncio
import logging
import aioredis
import redis
import re
from prometheus_client import (
generate_latest,
Gauge,
Counter,
Enum,
CollectorRegistry,
parser,
)
from django.conf import settings
BROADCAST_WEBSOCKET_REDIS_KEY_NAME = 'broadcast_websocket_stats'
logger = logging.getLogger('awx.main.analytics.broadcast_websocket')
def dt_to_seconds(dt):
return int((dt - datetime.datetime(1970,1,1)).total_seconds())
def now_seconds():
return dt_to_seconds(datetime.datetime.now())
def safe_name(s):
# Replace all non-alphanumeric characters with _
return re.sub('[^0-9a-zA-Z]+', '_', s)
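The two helpers above can be exercised in isolation; this is a minimal, standalone sketch (reimplementing them locally rather than importing from AWX):

```python
import datetime
import re

def dt_to_seconds(dt):
    # Whole seconds since the Unix epoch
    return int((dt - datetime.datetime(1970, 1, 1)).total_seconds())

def safe_name(s):
    # Collapse every run of non-alphanumeric characters into a single "_"
    return re.sub('[^0-9a-zA-Z]+', '_', s)

print(dt_to_seconds(datetime.datetime(1970, 1, 2)))   # 86400
print(safe_name('awx-web-5f6c.example.com'))          # awx_web_5f6c_example_com
```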
# One-second buckets, rendered as a rolling per-minute count
class FixedSlidingWindow():
def __init__(self, start_time=None):
self.buckets = dict()
self.start_time = start_time or now_seconds()
def cleanup(self, now_bucket=None):
now_bucket = now_bucket or now_seconds()
if self.start_time + 60 <= now_bucket:
self.start_time = now_bucket - 60 + 1
# Delete old entries
for k in list(self.buckets.keys()):
if k < self.start_time:
del self.buckets[k]
def record(self, ts=None):
ts = ts or datetime.datetime.now()
now_bucket = int((ts - datetime.datetime(1970,1,1)).total_seconds())
val = self.buckets.get(now_bucket, 0)
self.buckets[now_bucket] = val + 1
self.cleanup(now_bucket)
def render(self):
self.cleanup()
return sum(self.buckets.values()) or 0
class BroadcastWebsocketStatsManager():
def __init__(self, event_loop, local_hostname):
self._local_hostname = local_hostname
self._event_loop = event_loop
self._stats = dict()
self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME
def new_remote_host_stats(self, remote_hostname):
self._stats[remote_hostname] = BroadcastWebsocketStats(self._local_hostname,
remote_hostname)
return self._stats[remote_hostname]
def delete_remote_host_stats(self, remote_hostname):
del self._stats[remote_hostname]
async def run_loop(self):
try:
redis_conn = await aioredis.create_redis_pool(settings.BROKER_URL)
while True:
stats_data_str = ''.join(stat.serialize() for stat in self._stats.values())
await redis_conn.set(self._redis_key, stats_data_str)
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_STATS_POLL_RATE_SECONDS)
except Exception as e:
logger.warn(e)
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_STATS_POLL_RATE_SECONDS)
self.start()
def start(self):
self.async_task = self._event_loop.create_task(self.run_loop())
return self.async_task
@classmethod
def get_stats_sync(cls):
'''
Stringified version of all the stats
'''
redis_conn = redis.Redis.from_url(settings.BROKER_URL)
stats_str = redis_conn.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))
class BroadcastWebsocketStats():
def __init__(self, local_hostname, remote_hostname):
self._local_hostname = local_hostname
self._remote_hostname = remote_hostname
self._registry = CollectorRegistry()
# TODO: More robust replacement
self.name = safe_name(self._local_hostname)
self.remote_name = safe_name(self._remote_hostname)
self._messages_received_total = Counter(f'awx_{self.remote_name}_messages_received_total',
'Number of messages received, to be forwarded, by the broadcast websocket system',
registry=self._registry)
self._messages_received = Gauge(f'awx_{self.remote_name}_messages_received',
'Number of forwarded messages received by the broadcast websocket system, for the duration of the current connection',
registry=self._registry)
self._connection = Enum(f'awx_{self.remote_name}_connection',
'Websocket broadcast connection',
states=['disconnected', 'connected'],
registry=self._registry)
self._connection.state('disconnected')
self._connection_start = Gauge(f'awx_{self.remote_name}_connection_start',
'Time the connection was established',
registry=self._registry)
self._messages_received_per_minute = Gauge(f'awx_{self.remote_name}_messages_received_per_minute',
'Messages received per minute',
registry=self._registry)
self._internal_messages_received_per_minute = FixedSlidingWindow()
def unregister(self):
self._registry.unregister(f'awx_{self.remote_name}_messages_received')
self._registry.unregister(f'awx_{self.remote_name}_connection')
def record_message_received(self):
self._internal_messages_received_per_minute.record()
self._messages_received.inc()
self._messages_received_total.inc()
def record_connection_established(self):
self._connection.state('connected')
self._connection_start.set_to_current_time()
self._connection_established_ts = datetime.datetime.now()
self._messages_received.set(0)
def record_connection_lost(self):
self._connection.state('disconnected')
def get_connection_duration(self):
return (datetime.datetime.now() - self._connection_established_ts).total_seconds()
def render(self):
msgs_per_min = self._internal_messages_received_per_minute.render()
self._messages_received_per_minute.set(msgs_per_min)
def serialize(self):
self.render()
registry_data = generate_latest(self._registry).decode('UTF-8')
return registry_data

View File

@@ -122,22 +122,27 @@ def cred_type_counts(since):
return counts
@register('inventory_counts', '1.0')
@register('inventory_counts', '1.2')
def inventory_counts(since):
counts = {}
for inv in models.Inventory.objects.filter(kind='').annotate(num_sources=Count('inventory_sources', distinct=True),
num_hosts=Count('hosts', distinct=True)).only('id', 'name', 'kind'):
source_list = []
for source in inv.inventory_sources.filter().annotate(num_hosts=Count('hosts', distinct=True)).values('name','source', 'num_hosts'):
source_list.append(source)
counts[inv.id] = {'name': inv.name,
'kind': inv.kind,
'hosts': inv.num_hosts,
'sources': inv.num_sources
'sources': inv.num_sources,
'source_list': source_list
}
for smart_inv in models.Inventory.objects.filter(kind='smart'):
counts[smart_inv.id] = {'name': smart_inv.name,
'kind': smart_inv.kind,
'num_hosts': smart_inv.hosts.count(),
'num_sources': smart_inv.inventory_sources.count()
'hosts': smart_inv.hosts.count(),
'sources': 0,
'source_list': []
}
return counts
@@ -222,10 +227,12 @@ def query_info(since, collection_type):
# Copies Job Events from db to a .csv to be shipped
@table_version('events_table.csv', '1.0')
@table_version('events_table.csv', '1.1')
@table_version('unified_jobs_table.csv', '1.0')
@table_version('unified_job_template_table.csv', '1.0')
def copy_tables(since, full_path):
@table_version('workflow_job_node_table.csv', '1.0')
@table_version('workflow_job_template_node_table.csv', '1.0')
def copy_tables(since, full_path, subset=None):
def _copy_table(table, query, path):
file_path = os.path.join(path, table + '_table.csv')
file = open(file_path, 'w', encoding='utf-8')
@@ -249,15 +256,21 @@ def copy_tables(since, full_path):
main_jobevent.job_id,
main_jobevent.host_id,
main_jobevent.host_name
, CAST(main_jobevent.event_data::json->>'start' AS TIMESTAMP WITH TIME ZONE) AS start,
CAST(main_jobevent.event_data::json->>'end' AS TIMESTAMP WITH TIME ZONE) AS end,
main_jobevent.event_data::json->'duration' AS duration,
main_jobevent.event_data::json->'res'->'warnings' AS warnings,
main_jobevent.event_data::json->'res'->'deprecations' AS deprecations
FROM main_jobevent
WHERE main_jobevent.created > {}
ORDER BY main_jobevent.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
_copy_table(table='events', query=events_query, path=full_path)
if not subset or 'events' in subset:
_copy_table(table='events', query=events_query, path=full_path)
unified_job_query = '''COPY (SELECT main_unifiedjob.id,
main_unifiedjob.polymorphic_ctype_id,
django_content_type.model,
main_project.organization_id,
main_unifiedjob.organization_id,
main_organization.name as organization_name,
main_unifiedjob.created,
main_unifiedjob.name,
@@ -275,14 +288,13 @@ def copy_tables(since, full_path):
main_unifiedjob.job_explanation,
main_unifiedjob.instance_group_id
FROM main_unifiedjob
JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id
JOIN django_content_type ON main_unifiedjob.polymorphic_ctype_id = django_content_type.id
JOIN main_project ON main_project.unifiedjobtemplate_ptr_id = main_job.project_id
JOIN main_organization ON main_organization.id = main_project.organization_id
WHERE main_unifiedjob.created > {}
AND main_unifiedjob.launch_type != 'sync'
LEFT JOIN main_organization ON main_organization.id = main_unifiedjob.organization_id
WHERE (main_unifiedjob.created > {0} OR main_unifiedjob.finished > {0})
AND main_unifiedjob.launch_type != 'sync'
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
_copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
if not subset or 'unified_jobs' in subset:
_copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
unified_job_template_query = '''COPY (SELECT main_unifiedjobtemplate.id,
main_unifiedjobtemplate.polymorphic_ctype_id,
@@ -301,6 +313,71 @@ def copy_tables(since, full_path):
main_unifiedjobtemplate.status
FROM main_unifiedjobtemplate, django_content_type
WHERE main_unifiedjobtemplate.polymorphic_ctype_id = django_content_type.id
ORDER BY main_unifiedjobtemplate.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
_copy_table(table='unified_job_template', query=unified_job_template_query, path=full_path)
ORDER BY main_unifiedjobtemplate.id ASC) TO STDOUT WITH CSV HEADER'''
if not subset or 'unified_job_template' in subset:
_copy_table(table='unified_job_template', query=unified_job_template_query, path=full_path)
workflow_job_node_query = '''COPY (SELECT main_workflowjobnode.id,
main_workflowjobnode.created,
main_workflowjobnode.modified,
main_workflowjobnode.job_id,
main_workflowjobnode.unified_job_template_id,
main_workflowjobnode.workflow_job_id,
main_workflowjobnode.inventory_id,
success_nodes.nodes AS success_nodes,
failure_nodes.nodes AS failure_nodes,
always_nodes.nodes AS always_nodes,
main_workflowjobnode.do_not_run,
main_workflowjobnode.all_parents_must_converge
FROM main_workflowjobnode
LEFT JOIN (
SELECT from_workflowjobnode_id, ARRAY_AGG(to_workflowjobnode_id) AS nodes
FROM main_workflowjobnode_success_nodes
GROUP BY from_workflowjobnode_id
) success_nodes ON main_workflowjobnode.id = success_nodes.from_workflowjobnode_id
LEFT JOIN (
SELECT from_workflowjobnode_id, ARRAY_AGG(to_workflowjobnode_id) AS nodes
FROM main_workflowjobnode_failure_nodes
GROUP BY from_workflowjobnode_id
) failure_nodes ON main_workflowjobnode.id = failure_nodes.from_workflowjobnode_id
LEFT JOIN (
SELECT from_workflowjobnode_id, ARRAY_AGG(to_workflowjobnode_id) AS nodes
FROM main_workflowjobnode_always_nodes
GROUP BY from_workflowjobnode_id
) always_nodes ON main_workflowjobnode.id = always_nodes.from_workflowjobnode_id
WHERE main_workflowjobnode.modified > {}
ORDER BY main_workflowjobnode.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
if not subset or 'workflow_job_node' in subset:
_copy_table(table='workflow_job_node', query=workflow_job_node_query, path=full_path)
workflow_job_template_node_query = '''COPY (SELECT main_workflowjobtemplatenode.id,
main_workflowjobtemplatenode.created,
main_workflowjobtemplatenode.modified,
main_workflowjobtemplatenode.unified_job_template_id,
main_workflowjobtemplatenode.workflow_job_template_id,
main_workflowjobtemplatenode.inventory_id,
success_nodes.nodes AS success_nodes,
failure_nodes.nodes AS failure_nodes,
always_nodes.nodes AS always_nodes,
main_workflowjobtemplatenode.all_parents_must_converge
FROM main_workflowjobtemplatenode
LEFT JOIN (
SELECT from_workflowjobtemplatenode_id, ARRAY_AGG(to_workflowjobtemplatenode_id) AS nodes
FROM main_workflowjobtemplatenode_success_nodes
GROUP BY from_workflowjobtemplatenode_id
) success_nodes ON main_workflowjobtemplatenode.id = success_nodes.from_workflowjobtemplatenode_id
LEFT JOIN (
SELECT from_workflowjobtemplatenode_id, ARRAY_AGG(to_workflowjobtemplatenode_id) AS nodes
FROM main_workflowjobtemplatenode_failure_nodes
GROUP BY from_workflowjobtemplatenode_id
) failure_nodes ON main_workflowjobtemplatenode.id = failure_nodes.from_workflowjobtemplatenode_id
LEFT JOIN (
SELECT from_workflowjobtemplatenode_id, ARRAY_AGG(to_workflowjobtemplatenode_id) AS nodes
FROM main_workflowjobtemplatenode_always_nodes
GROUP BY from_workflowjobtemplatenode_id
) always_nodes ON main_workflowjobtemplatenode.id = always_nodes.from_workflowjobtemplatenode_id
ORDER BY main_workflowjobtemplatenode.id ASC) TO STDOUT WITH CSV HEADER'''
if not subset or 'workflow_job_template_node' in subset:
_copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path)
return

View File

@@ -134,13 +134,17 @@ def gather(dest=None, module=None, collection_type='scheduled'):
settings.SYSTEM_UUID,
run_now.strftime('%Y-%m-%d-%H%M%S%z')
])
tgz = shutil.make_archive(
os.path.join(os.path.dirname(dest), tarname),
'gztar',
dest
)
shutil.rmtree(dest)
return tgz
try:
tgz = shutil.make_archive(
os.path.join(os.path.dirname(dest), tarname),
'gztar',
dest
)
return tgz
except Exception:
logger.exception("Failed to write analytics archive file")
finally:
shutil.rmtree(dest)
def ship(path):

View File

@@ -667,7 +667,7 @@ register(
allow_blank=True,
default='',
label=_('Logging Aggregator Username'),
help_text=_('Username for external log aggregator (if required).'),
help_text=_('Username for external log aggregator (if required; HTTP/s only).'),
category=_('Logging'),
category_slug='logging',
required=False,
@@ -679,7 +679,7 @@ register(
default='',
encrypted=True,
label=_('Logging Aggregator Password/Token'),
help_text=_('Password or authentication token for external log aggregator (if required).'),
help_text=_('Password or authentication token for external log aggregator (if required; HTTP/s only).'),
category=_('Logging'),
category_slug='logging',
required=False,
@@ -787,6 +787,29 @@ register(
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_MAX_DISK_USAGE_GB',
field_class=fields.IntegerField,
default=1,
min_value=1,
label=_('Maximum disk persistence for external log aggregation (in GB)'),
help_text=_('Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting.'),
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH',
field_class=fields.CharField,
default='/var/lib/awx',
label=_('File system location for rsyslogd disk persistence'),
help_text=_('Location to persist logs that should be retried after an outage '
'of the external log aggregator (defaults to /var/lib/awx). '
'Equivalent to the rsyslogd queue.spoolDirectory setting.'),
category=_('Logging'),
category_slug='logging',
)
register(

View File

@@ -38,7 +38,7 @@ ENV_BLACKLIST = frozenset((
'AD_HOC_COMMAND_ID', 'REST_API_URL', 'REST_API_TOKEN', 'MAX_EVENT_RES',
'CALLBACK_QUEUE', 'CALLBACK_CONNECTION', 'CACHE',
'JOB_CALLBACK_DEBUG', 'INVENTORY_HOSTVARS',
'AWX_HOST', 'PROJECT_REVISION'
'AWX_HOST', 'PROJECT_REVISION', 'SUPERVISOR_WEB_CONFIG_PATH'
))
# loggers that may be called in process of emitting a log

View File

@@ -1,97 +1,246 @@
import json
import logging
import time
import hmac
import asyncio
from channels import Group
from channels.auth import channel_session_user_from_http, channel_session_user
from django.utils.encoding import smart_str
from django.http.cookie import parse_cookie
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
from django.utils.encoding import force_bytes
from django.contrib.auth.models import User
from channels.generic.websocket import AsyncJsonWebsocketConsumer
from channels.layers import get_channel_layer
from channels.db import database_sync_to_async
logger = logging.getLogger('awx.main.consumers')
XRF_KEY = '_auth_user_xrf'
def discard_groups(message):
if 'groups' in message.channel_session:
for group in message.channel_session['groups']:
Group(group).discard(message.reply_channel)
class WebsocketSecretAuthHelper:
"""
Middlewareish for websockets to verify node websocket broadcast interconnect.
Note: The "ish" is due to the channels routing interface. Routing occurs
_after_ authentication, making it hard to apply this auth to _only_ a subset of
websocket endpoints.
"""
@classmethod
def construct_secret(cls):
nonce_serialized = f"{int(time.time())}"
payload_dict = {
'secret': settings.BROADCAST_WEBSOCKET_SECRET,
'nonce': nonce_serialized
}
payload_serialized = json.dumps(payload_dict)
secret_serialized = hmac.new(force_bytes(settings.BROADCAST_WEBSOCKET_SECRET),
msg=force_bytes(payload_serialized),
digestmod='sha256').hexdigest()
return 'HMAC-SHA256 {}:{}'.format(nonce_serialized, secret_serialized)
@channel_session_user_from_http
def ws_connect(message):
headers = dict(message.content.get('headers', ''))
message.reply_channel.send({"accept": True})
message.content['method'] = 'FAKE'
if message.user.is_authenticated:
message.reply_channel.send(
{"text": json.dumps({"accept": True, "user": message.user.id})}
)
# store the valid CSRF token from the cookie so we can compare it later
# on ws_receive
cookie_token = parse_cookie(
smart_str(headers.get(b'cookie'))
).get('csrftoken')
if cookie_token:
message.channel_session[XRF_KEY] = cookie_token
else:
logger.error("Request user is not authenticated to use websocket.")
message.reply_channel.send({"close": True})
return None
@classmethod
def verify_secret(cls, s, nonce_tolerance=300):
try:
(prefix, payload) = s.split(' ')
if prefix != 'HMAC-SHA256':
raise ValueError('Unsupported encryption algorithm')
(nonce_parsed, secret_parsed) = payload.split(':')
except Exception:
raise ValueError("Failed to parse secret")
try:
payload_expected = {
'secret': settings.BROADCAST_WEBSOCKET_SECRET,
'nonce': nonce_parsed,
}
payload_serialized = json.dumps(payload_expected)
except Exception:
raise ValueError("Failed to create hash to compare to secret.")
secret_serialized = hmac.new(force_bytes(settings.BROADCAST_WEBSOCKET_SECRET),
msg=force_bytes(payload_serialized),
digestmod='sha256').hexdigest()
if secret_serialized != secret_parsed:
raise ValueError("Invalid secret")
# Avoid timing attack and check the nonce after all the heavy lifting
now = int(time.time())
nonce_parsed = int(nonce_parsed)
nonce_diff = now - nonce_parsed
if abs(nonce_diff) > nonce_tolerance:
logger.warn(f"Potential replay attack or machine(s) time out of sync by {nonce_diff} seconds.")
raise ValueError(f"Potential replay attack or machine(s) time out of sync by {nonce_diff} seconds.")
return True
@classmethod
def is_authorized(cls, scope):
secret = ''
for k, v in scope['headers']:
if k.decode("utf-8") == 'secret':
secret = v.decode("utf-8")
break
WebsocketSecretAuthHelper.verify_secret(secret)
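The secret scheme above is just HMAC-SHA256 over a JSON payload containing a timestamp nonce. A self-contained round trip (with a dummy secret and simplified helpers standing in for the class methods and AWX settings):

```python
import hmac
import json
import time

def construct(secret, now=None):
    # Mirror of construct_secret: nonce is current epoch time, as a string
    nonce = str(int(now if now is not None else time.time()))
    payload = json.dumps({'secret': secret, 'nonce': nonce})
    digest = hmac.new(secret.encode(), msg=payload.encode(),
                      digestmod='sha256').hexdigest()
    return 'HMAC-SHA256 {}:{}'.format(nonce, digest)

def verify(secret, header, nonce_tolerance=300, now=None):
    prefix, payload = header.split(' ')
    if prefix != 'HMAC-SHA256':
        raise ValueError('Unsupported encryption algorithm')
    nonce, digest = payload.split(':')
    expected_payload = json.dumps({'secret': secret, 'nonce': nonce})
    expected = hmac.new(secret.encode(), msg=expected_payload.encode(),
                        digestmod='sha256').hexdigest()
    if not hmac.compare_digest(expected, digest):
        raise ValueError('Invalid secret')
    # Check the nonce last, like the original, to keep the comparison constant-time
    delta = (now if now is not None else time.time()) - int(nonce)
    if abs(delta) > nonce_tolerance:
        raise ValueError('Nonce outside tolerance window')
    return True

token = construct('s3cret', now=1000)
print(verify('s3cret', token, now=1100))  # True
```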
@channel_session_user
def ws_disconnect(message):
discard_groups(message)
class BroadcastConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
try:
WebsocketSecretAuthHelper.is_authorized(self.scope)
except Exception:
logger.warn(f"client '{self.channel_name}' failed to authorize against the broadcast endpoint.")
await self.close()
return
await self.accept()
await self.channel_layer.group_add(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
logger.info(f"client '{self.channel_name}' joined the broadcast group.")
async def disconnect(self, code):
logger.info(f"client '{self.channel_name}' disconnected from the broadcast group.")
await self.channel_layer.group_discard(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
async def internal_message(self, event):
await self.send(event['text'])
@channel_session_user
def ws_receive(message):
from awx.main.access import consumer_access
user = message.user
raw_data = message.content['text']
data = json.loads(raw_data)
class EventConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
user = self.scope['user']
if user and not user.is_anonymous:
await self.accept()
await self.send_json({"accept": True, "user": user.id})
# store the valid CSRF token from the cookie so we can compare it later
# on ws_receive
cookie_token = self.scope['cookies'].get('csrftoken')
if cookie_token:
self.scope['session'][XRF_KEY] = cookie_token
else:
logger.error("Request user is not authenticated to use websocket.")
# TODO: Carry over from channels 1 implementation
# We should never .accept() the client and close without sending a close message
await self.accept()
await self.send_json({"close": True})
await self.close()
xrftoken = data.get('xrftoken')
if (
not xrftoken or
XRF_KEY not in message.channel_session or
xrftoken != message.channel_session[XRF_KEY]
):
logger.error(
"access denied to channel, XRF mismatch for {}".format(user.username)
)
message.reply_channel.send({
"text": json.dumps({"error": "access denied to channel"})
})
return
async def disconnect(self, code):
current_groups = set(self.scope['session'].pop('groups') if 'groups' in self.scope['session'] else [])
for group_name in current_groups:
await self.channel_layer.group_discard(
group_name,
self.channel_name,
)
if 'groups' in data:
discard_groups(message)
groups = data['groups']
current_groups = set(message.channel_session.pop('groups') if 'groups' in message.channel_session else [])
for group_name,v in groups.items():
if type(v) is list:
for oid in v:
name = '{}-{}'.format(group_name, oid)
access_cls = consumer_access(group_name)
if access_cls is not None:
user_access = access_cls(user)
if not user_access.get_queryset().filter(pk=oid).exists():
message.reply_channel.send({"text": json.dumps(
{"error": "access denied to channel {0} for resource id {1}".format(group_name, oid)})})
continue
current_groups.add(name)
Group(name).add(message.reply_channel)
else:
current_groups.add(group_name)
Group(group_name).add(message.reply_channel)
message.channel_session['groups'] = list(current_groups)
@database_sync_to_async
def user_can_see_object_id(self, user_access, oid):
# At this point user is a channels.auth.UserLazyObject object
# This causes problems with our generic role permissions checking.
# Specifically, type(user) != User
# Therefore, get the "real" User objects from the database before
# calling the access permission methods
user_access.user = User.objects.get(id=user_access.user.id)
res = user_access.get_queryset().filter(pk=oid).exists()
return res
async def receive_json(self, data):
from awx.main.access import consumer_access
user = self.scope['user']
xrftoken = data.get('xrftoken')
if (
not xrftoken or
XRF_KEY not in self.scope["session"] or
xrftoken != self.scope["session"][XRF_KEY]
):
logger.error(f"access denied to channel, XRF mismatch for {user.username}")
await self.send_json({"error": "access denied to channel"})
return
if 'groups' in data:
groups = data['groups']
new_groups = set()
current_groups = set(self.scope['session'].pop('groups') if 'groups' in self.scope['session'] else [])
for group_name,v in groups.items():
if type(v) is list:
for oid in v:
name = '{}-{}'.format(group_name, oid)
access_cls = consumer_access(group_name)
if access_cls is not None:
user_access = access_cls(user)
if not await self.user_can_see_object_id(user_access, oid):
await self.send_json({"error": "access denied to channel {0} for resource id {1}".format(group_name, oid)})
continue
new_groups.add(name)
else:
await self.send_json({"error": "access denied to channel"})
logger.error(f"groups must be a list, not {groups}")
return
old_groups = current_groups - new_groups
for group_name in old_groups:
await self.channel_layer.group_discard(
group_name,
self.channel_name,
)
new_groups_exclusive = new_groups - current_groups
for group_name in new_groups_exclusive:
await self.channel_layer.group_add(
group_name,
self.channel_name
)
self.scope['session']['groups'] = new_groups
await self.send_json({
"groups_current": list(new_groups),
"groups_left": list(old_groups),
"groups_joined": list(new_groups_exclusive)
})
async def internal_message(self, event):
await self.send(event['text'])
def run_sync(func):
event_loop = asyncio.new_event_loop()
event_loop.run_until_complete(func)
event_loop.close()
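run_sync above just spins up a throwaway event loop to drive one coroutine to completion from synchronous code; a minimal usage sketch (fake_group_send is a stand-in for the channel layer's group_send):

```python
import asyncio

def run_sync(func):
    # Drive a single coroutine to completion on a fresh event loop
    event_loop = asyncio.new_event_loop()
    event_loop.run_until_complete(func)
    event_loop.close()

sent = []

async def fake_group_send(group, message):
    sent.append((group, message))

run_sync(fake_group_send('jobs', {'type': 'internal.message', 'text': '{}'}))
print(sent)  # [('jobs', {'type': 'internal.message', 'text': '{}'})]
```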
def _dump_payload(payload):
try:
return json.dumps(payload, cls=DjangoJSONEncoder)
except ValueError:
logger.error("Invalid payload to emit")
return None
def emit_channel_notification(group, payload):
try:
Group(group).send({"text": json.dumps(payload, cls=DjangoJSONEncoder)})
except ValueError:
logger.error("Invalid payload emitting channel {} on topic: {}".format(group, payload))
from awx.main.wsbroadcast import wrap_broadcast_msg # noqa
payload_dumped = _dump_payload(payload)
if payload_dumped is None:
return
channel_layer = get_channel_layer()
run_sync(channel_layer.group_send(
group,
{
"type": "internal.message",
"text": payload_dumped
},
))
run_sync(channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{
"type": "internal.message",
"text": wrap_broadcast_msg(group, payload_dumped),
},
))

View File

@@ -64,7 +64,7 @@ class RecordedQueryLog(object):
if not os.path.isdir(self.dest):
os.makedirs(self.dest)
progname = ' '.join(sys.argv)
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'runworker'):
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsbroadcast'):
if match in progname:
progname = match
break

View File

@@ -1,5 +1,62 @@
import psycopg2
import select
from contextlib import contextmanager
from django.conf import settings
NOT_READY = ([], [], [])
def get_local_queuename():
return settings.CLUSTER_HOST_ID
class PubSub(object):
def __init__(self, conn):
assert conn.autocommit, "Connection must be in autocommit mode."
self.conn = conn
def listen(self, channel):
with self.conn.cursor() as cur:
cur.execute('LISTEN "%s";' % channel)
def unlisten(self, channel):
with self.conn.cursor() as cur:
cur.execute('UNLISTEN "%s";' % channel)
def notify(self, channel, payload):
with self.conn.cursor() as cur:
cur.execute('SELECT pg_notify(%s, %s);', (channel, payload))
def events(self, select_timeout=5, yield_timeouts=False):
while True:
if select.select([self.conn], [], [], select_timeout) == NOT_READY:
if yield_timeouts:
yield None
else:
self.conn.poll()
while self.conn.notifies:
yield self.conn.notifies.pop(0)
def close(self):
self.conn.close()
@contextmanager
def pg_bus_conn():
conf = settings.DATABASES['default']
conn = psycopg2.connect(dbname=conf['NAME'],
host=conf['HOST'],
user=conf['USER'],
password=conf['PASSWORD'],
port=conf['PORT'],
**conf.get("OPTIONS", {}))
# Django connection.cursor().connection doesn't have autocommit=True on
conn.set_session(autocommit=True)
pubsub = PubSub(conn)
yield pubsub
conn.close()

View File

@@ -1,11 +1,10 @@
import logging
import socket
from django.conf import settings
import uuid
import json
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch.kombu import Connection
from kombu import Queue, Exchange, Producer, Consumer
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
@@ -20,15 +19,6 @@ class Control(object):
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_local_queuename()
self.queue = Queue(self.queuename, Exchange(self.queuename), routing_key=self.queuename)
def publish(self, msg, conn, **kwargs):
producer = Producer(
exchange=self.queue.exchange,
channel=conn,
routing_key=self.queuename
)
producer.publish(msg, expiration=5, **kwargs)
def status(self, *args, **kwargs):
return self.control_with_reply('status', *args, **kwargs)
@@ -36,24 +26,28 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
def control_with_reply(self, command, timeout=5):
logger.warn('checking {} {} for {}'.format(self.service, command, self.queuename))
reply_queue = Queue(name="amq.rabbitmq.reply-to")
reply_queue = Control.generate_reply_queue_name()
self.result = None
with Connection(settings.BROKER_URL) as conn:
with Consumer(conn, reply_queue, callbacks=[self.process_message], no_ack=True):
self.publish({'control': command}, conn, reply_to='amq.rabbitmq.reply-to')
try:
conn.drain_events(timeout=timeout)
except socket.timeout:
logger.error('{} did not reply within {}s'.format(self.service, timeout))
raise
return self.result
with pg_bus_conn() as conn:
conn.listen(reply_queue)
conn.notify(self.queuename,
json.dumps({'control': command, 'reply_to': reply_queue}))
for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError(f"{self.service} did not reply within {timeout}s")
break
return json.loads(reply.payload)
def control(self, msg, **kwargs):
with Connection(settings.BROKER_URL) as conn:
self.publish(msg, conn)
def process_message(self, body, message):
self.result = body
message.ack()
with pg_bus_conn() as conn:
conn.notify(self.queuename, json.dumps(msg))
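
The rewritten Control class replaces AMQP reply queues with a Postgres NOTIFY/LISTEN round trip: each caller generates a unique reply channel name, embeds it in the JSON control payload, and listens on that channel for the JSON-encoded reply. A minimal, database-free sketch of that message shape (the helper names here are illustrative, not AWX API):

```python
import json
import uuid

def generate_reply_queue_name():
    # Postgres channel identifiers cannot contain dashes, so strip them,
    # mirroring Control.generate_reply_queue_name in the diff above
    return f"reply_to_{str(uuid.uuid4()).replace('-', '_')}"

def build_control_message(command, reply_to):
    # the payload that gets NOTIFY'd onto the service's queue
    return json.dumps({'control': command, 'reply_to': reply_to})

reply_queue = generate_reply_queue_name()
payload = build_control_message('status', reply_queue)
decoded = json.loads(payload)  # what the consumer sees after json.loads
```

The actual transport (`pg_bus_conn`, `conn.listen`, `conn.notify`) wraps Postgres LISTEN/NOTIFY and needs a live database; only the payload construction is shown here.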

View File

@@ -1,42 +0,0 @@
from amqp.exceptions import PreconditionFailed
from django.conf import settings
from kombu.connection import Connection as KombuConnection
from kombu.transport import pyamqp
import logging
logger = logging.getLogger('awx.main.dispatch')
__all__ = ['Connection']
class Connection(KombuConnection):
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
class _Channel(pyamqp.Channel):
def queue_declare(self, queue, *args, **kwargs):
kwargs['durable'] = settings.BROKER_DURABILITY
try:
return super(_Channel, self).queue_declare(queue, *args, **kwargs)
except PreconditionFailed as e:
if "inequivalent arg 'durable'" in getattr(e, 'reply_text', None):
logger.error(
'queue {} durability is not {}, deleting and recreating'.format(
queue,
kwargs['durable']
)
)
self.queue_delete(queue)
return super(_Channel, self).queue_declare(queue, *args, **kwargs)
class _Connection(pyamqp.Connection):
Channel = _Channel
class _Transport(pyamqp.Transport):
Connection = _Connection
self.transport_cls = _Transport

View File

@@ -22,7 +22,7 @@ class Scheduler(Scheduler):
def run():
ppid = os.getppid()
logger.warn(f'periodic beat started')
logger.warn('periodic beat started')
while True:
if os.getppid() != ppid:
# if the parent PID changes, this process has been orphaned

View File

@@ -1,12 +1,12 @@
import inspect
import logging
import sys
import json
from uuid import uuid4
from django.conf import settings
from kombu import Exchange, Producer
from awx.main.dispatch.kombu import Connection
from . import pg_bus_conn
logger = logging.getLogger('awx.main.dispatch')
@@ -39,24 +39,22 @@ class task:
add.apply_async([1, 1])
Adder.apply_async([1, 1])
# Tasks can also define a specific target queue or exchange type:
# Tasks can also define a specific target queue or use the special fan-out queue tower_broadcast:
@task(queue='slow-tasks')
def snooze():
time.sleep(10)
@task(queue='tower_broadcast', exchange_type='fanout')
@task(queue='tower_broadcast')
def announce():
print("Run this everywhere!")
"""
def __init__(self, queue=None, exchange_type=None):
def __init__(self, queue=None):
self.queue = queue
self.exchange_type = exchange_type
def __call__(self, fn=None):
queue = self.queue
exchange_type = self.exchange_type
class PublisherMixin(object):
@@ -73,9 +71,12 @@ class task:
kwargs = kwargs or {}
queue = (
queue or
getattr(cls.queue, 'im_func', cls.queue) or
settings.CELERY_DEFAULT_QUEUE
getattr(cls.queue, 'im_func', cls.queue)
)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = {
'uuid': task_id,
'args': args,
@@ -86,21 +87,8 @@ class task:
if callable(queue):
queue = queue()
if not settings.IS_TESTING(sys.argv):
with Connection(settings.BROKER_URL) as conn:
exchange = Exchange(queue, type=exchange_type or 'direct')
producer = Producer(conn)
logger.debug('publish {}({}, queue={})'.format(
cls.name,
task_id,
queue
))
producer.publish(obj,
serializer='json',
compression='bzip2',
exchange=exchange,
declare=[exchange],
delivery_mode="persistent",
routing_key=queue)
with pg_bus_conn() as conn:
conn.notify(queue, json.dumps(obj))
return (obj, queue)
# If the object we're wrapping *is* a class (e.g., RunJob), return

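With kombu removed, `apply_async` now serializes a plain task envelope and publishes it with a single `conn.notify(queue, json.dumps(obj))`. A rough sketch of that envelope, runnable without a broker (`uuid` and `args` appear in the diff; the `task` key and the helper itself are assumptions, since the hunk truncates the full dict):

```python
import json
import uuid

def build_task_envelope(task_name, args=None, kwargs=None):
    # mirrors the obj dict built in task.apply_async before conn.notify()
    return {
        'uuid': str(uuid.uuid4()),
        'args': args or [],
        'kwargs': kwargs or {},
        'task': task_name,  # assumed key; not shown in the truncated hunk
    }

envelope = build_task_envelope('awx.main.tasks.add', args=[1, 1])
wire = json.dumps(envelope)      # the string handed to conn.notify(queue, ...)
restored = json.loads(wire)
```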
View File

@@ -1,3 +1,3 @@
from .base import AWXConsumer, BaseWorker # noqa
from .base import AWXConsumerRedis, AWXConsumerPG, BaseWorker # noqa
from .callback import CallbackBrokerWorker # noqa
from .task import TaskWorker # noqa

View File

@@ -5,14 +5,17 @@ import os
import logging
import signal
import sys
import redis
import json
import psycopg2
from uuid import UUID
from queue import Empty as QueueEmpty
from django import db
from kombu import Producer
from kombu.mixins import ConsumerMixin
from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch import pg_bus_conn
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -37,10 +40,11 @@ class WorkerSignalHandler:
self.kill_now = True
class AWXConsumer(ConsumerMixin):
class AWXConsumerBase(object):
def __init__(self, name, worker, queues=[], pool=None):
self.should_stop = False
def __init__(self, name, connection, worker, queues=[], pool=None):
self.connection = connection
self.name = name
self.total_messages = 0
self.queues = queues
self.worker = worker
@@ -49,25 +53,15 @@ class AWXConsumer(ConsumerMixin):
self.pool = WorkerPool()
self.pool.init_workers(self.worker.work_loop)
def get_consumers(self, Consumer, channel):
logger.debug(self.listening_on)
return [Consumer(queues=self.queues, accept=['json'],
callbacks=[self.process_task])]
@property
def listening_on(self):
return 'listening on {}'.format([
'{} [{}]'.format(q.name, q.exchange.type) for q in self.queues
])
return f'listening on {self.queues}'
def control(self, body, message):
logger.warn('Consumer received control message {}'.format(body))
def control(self, body):
logger.warn(body)
control = body.get('control')
if control in ('status', 'running'):
producer = Producer(
channel=self.connection,
routing_key=message.properties['reply_to']
)
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
elif control == 'running':
@@ -75,20 +69,21 @@ class AWXConsumer(ConsumerMixin):
for worker in self.pool.workers:
worker.calculate_managed_tasks()
msg.extend(worker.managed_tasks.keys())
producer.publish(msg)
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()
else:
logger.error('unrecognized control message: {}'.format(control))
message.ack()
def process_task(self, body, message):
def process_task(self, body):
if 'control' in body:
try:
return self.control(body, message)
return self.control(body)
except Exception:
logger.exception("Exception handling control message:")
logger.exception(f"Exception handling control message: {body}")
return
if len(self.pool):
if "uuid" in body and body['uuid']:
@@ -102,21 +97,63 @@ class AWXConsumer(ConsumerMixin):
queue = 0
self.pool.write(queue, body)
self.total_messages += 1
message.ack()
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
signal.signal(signal.SIGTERM, self.stop)
self.worker.on_start()
super(AWXConsumer, self).run(*args, **kwargs)
# Child should implement other things here
def stop(self, signum, frame):
self.should_stop = True # this makes the kombu mixin stop consuming
self.should_stop = True
logger.warn('received {}, stopping'.format(signame(signum)))
self.worker.on_stop()
raise SystemExit()
class AWXConsumerRedis(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start()
queue = redis.Redis.from_url(settings.BROKER_URL)
while True:
try:
res = queue.blpop(self.queues)
res = json.loads(res[1])
self.process_task(res)
except redis.exceptions.RedisError:
logger.exception("encountered an error communicating with redis")
except (json.JSONDecodeError, KeyError):
logger.exception("failed to decode JSON message from redis")
if self.should_stop:
return
class AWXConsumerPG(AWXConsumerBase):
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
logger.warn(f"Running worker {self.name} listening to queues {self.queues}")
init = False
while True:
try:
with pg_bus_conn() as conn:
for queue in self.queues:
conn.listen(queue)
if init is False:
self.worker.on_start()
init = True
for e in conn.events():
self.process_task(json.loads(e.payload))
if self.should_stop:
return
except psycopg2.InterfaceError:
logger.warn("Stale Postgres message bus connection, reconnecting")
continue
class BaseWorker(object):
def read(self, queue):

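`AWXConsumerRedis.run` consumes with `blpop`, which in redis-py returns a `(queue_name, payload)` pair of bytes; index 1 is the message body, which is JSON-decoded before dispatch to `process_task`. A server-free sketch of just that decoding step (the function name is illustrative):

```python
import json

def decode_blpop_result(res):
    # redis-py's blpop returns (queue_name, payload) as bytes;
    # res[1] is the body, exactly as AWXConsumerRedis.run() unpacks it
    return json.loads(res[1])

body = decode_blpop_result((b'callback_tasks', b'{"uuid": "abc", "args": [1]}'))
```

The surrounding loop also catches `redis.exceptions.RedisError` and `json.JSONDecodeError` so one bad message cannot kill the consumer.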
View File

@@ -15,7 +15,9 @@ from django.db.utils import InterfaceError, InternalError, IntegrityError
from awx.main.consumers import emit_channel_notification
from awx.main.models import (JobEvent, AdHocCommandEvent, ProjectUpdateEvent,
InventoryUpdateEvent, SystemJobEvent, UnifiedJob)
InventoryUpdateEvent, SystemJobEvent, UnifiedJob,
Job)
from awx.main.tasks import handle_success_and_failure_notifications
from awx.main.models.events import emit_event_detail
from .base import BaseWorker
@@ -89,7 +91,7 @@ class CallbackBrokerWorker(BaseWorker):
for e in events:
try:
if (
isinstance(exc, IntegrityError),
isinstance(exc, IntegrityError) and
getattr(e, 'host_id', '')
):
# this is one potential IntegrityError we can
@@ -137,19 +139,14 @@ class CallbackBrokerWorker(BaseWorker):
# have all the data we need to send out success/failure
# notification templates
uj = UnifiedJob.objects.get(pk=job_identifier)
if hasattr(uj, 'send_notification_templates'):
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
break
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_identifier)
if isinstance(uj, Job):
# *actual playbooks* send their success/failure
# notifications in response to the playbook_on_stats
# event handling code in main.models.events
pass
elif hasattr(uj, 'send_notification_templates'):
handle_success_and_failure_notifications.apply_async([uj.id])
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
return

View File

@@ -56,7 +56,8 @@ from awx.main import utils
__all__ = ['AutoOneToOneField', 'ImplicitRoleField', 'JSONField',
'SmartFilterField', 'OrderedManyToManyField',
'update_role_parentage_for_instance', 'is_implicit_parent']
'update_role_parentage_for_instance',
'is_implicit_parent']
# Provide a (better) custom error message for enum jsonschema validation
@@ -140,8 +141,9 @@ def resolve_role_field(obj, field):
return []
if len(field_components) == 1:
role_cls = str(utils.get_current_apps().get_model('main', 'Role'))
if not str(type(obj)) == role_cls:
# use extremely generous duck typing to accommodate all possible forms
# of the model that may be used during various migrations
if obj._meta.model_name != 'role' or obj._meta.app_label != 'main':
raise Exception(smart_text('{} refers to a {}, not a Role'.format(field, type(obj))))
ret.append(obj.id)
else:
@@ -197,18 +199,27 @@ def update_role_parentage_for_instance(instance):
updates the parents listing for all the roles
of a given instance if they have changed
'''
parents_removed = set()
parents_added = set()
for implicit_role_field in getattr(instance.__class__, '__implicit_role_fields'):
cur_role = getattr(instance, implicit_role_field.name)
original_parents = set(json.loads(cur_role.implicit_parents))
new_parents = implicit_role_field._resolve_parent_roles(instance)
cur_role.parents.remove(*list(original_parents - new_parents))
cur_role.parents.add(*list(new_parents - original_parents))
removals = original_parents - new_parents
if removals:
cur_role.parents.remove(*list(removals))
parents_removed.add(cur_role.pk)
additions = new_parents - original_parents
if additions:
cur_role.parents.add(*list(additions))
parents_added.add(cur_role.pk)
new_parents_list = list(new_parents)
new_parents_list.sort()
new_parents_json = json.dumps(new_parents_list)
if cur_role.implicit_parents != new_parents_json:
cur_role.implicit_parents = new_parents_json
cur_role.save()
cur_role.save(update_fields=['implicit_parents'])
return (parents_added, parents_removed)
class ImplicitRoleDescriptor(ForwardManyToOneDescriptor):
@@ -256,20 +267,18 @@ class ImplicitRoleField(models.ForeignKey):
field_names = [field_names]
for field_name in field_names:
# Handle the OR syntax for role parents
if type(field_name) == tuple:
continue
if type(field_name) == bytes:
field_name = field_name.decode('utf-8')
if field_name.startswith('singleton:'):
continue
field_name, sep, field_attr = field_name.partition('.')
field = getattr(cls, field_name)
# Non existent fields will occur if ever a parent model is
# moved inside a migration, needed for job_template_organization_field
# migration in particular
# consistency is assured by unit test awx.main.tests.functional
field = getattr(cls, field_name, None)
if type(field) is ReverseManyToOneDescriptor or \
if field and type(field) is ReverseManyToOneDescriptor or \
type(field) is ManyToManyDescriptor:
if '.' in field_attr:

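The reworked `update_role_parentage_for_instance` computes set differences between the stored `implicit_parents` JSON and the freshly resolved parents, and only saves when the sorted JSON actually changed. The core bookkeeping can be sketched without Django models (the helper name is illustrative):

```python
import json

def diff_parents(stored_json, new_parents):
    # stored_json: the role's implicit_parents JSON string
    # new_parents: set of freshly resolved parent role pks
    original = set(json.loads(stored_json))
    removals = original - new_parents        # parents to .remove()
    additions = new_parents - original       # parents to .add()
    new_json = json.dumps(sorted(new_parents))
    changed = (stored_json != new_json)      # save only on real change
    return removals, additions, new_json, changed

removals, additions, new_json, changed = diff_parents('[1, 2, 3]', {2, 3, 4})
```

Skipping the save when nothing changed is what lets the caller use `save(update_fields=['implicit_parents'])` and return the `(parents_added, parents_removed)` pk sets.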
View File

@@ -15,7 +15,6 @@ import awx
from awx.main.utils import (
get_system_task_capacity
)
from awx.main.queue import CallbackQueueDispatcher
logger = logging.getLogger('awx.isolated.manager')
playbook_logger = logging.getLogger('awx.isolated.manager.playbooks')
@@ -32,12 +31,14 @@ def set_pythonpath(venv_libdir, env):
class IsolatedManager(object):
def __init__(self, canceled_callback=None, check_callback=None, pod_manager=None):
def __init__(self, event_handler, canceled_callback=None, check_callback=None, pod_manager=None):
"""
:param event_handler: a callable used to persist event data from isolated nodes
:param canceled_callback: a callable - which returns `True` or `False`
- signifying if the job has been prematurely
canceled
"""
self.event_handler = event_handler
self.canceled_callback = canceled_callback
self.check_callback = check_callback
self.started_at = None
@@ -208,7 +209,6 @@ class IsolatedManager(object):
status = 'failed'
rc = None
last_check = time.time()
dispatcher = CallbackQueueDispatcher()
while status == 'failed':
canceled = self.canceled_callback() if self.canceled_callback else False
@@ -238,7 +238,7 @@ class IsolatedManager(object):
except json.decoder.JSONDecodeError: # Just in case it's not fully here yet.
pass
self.consume_events(dispatcher)
self.consume_events()
last_check = time.time()
@@ -266,19 +266,11 @@ class IsolatedManager(object):
# consume events one last time just to be sure we didn't miss anything
# in the final sync
self.consume_events(dispatcher)
# emit an EOF event
event_data = {
'event': 'EOF',
'final_counter': len(self.handled_events)
}
event_data.setdefault(self.event_data_key, self.instance.id)
dispatcher.dispatch(event_data)
self.consume_events()
return status, rc
def consume_events(self, dispatcher):
def consume_events(self):
# discover new events and ingest them
events_path = self.path_to('artifacts', self.ident, 'job_events')
@@ -288,7 +280,7 @@ class IsolatedManager(object):
if os.path.exists(events_path):
for event in set(os.listdir(events_path)) - self.handled_events:
path = os.path.join(events_path, event)
if os.path.exists(path):
if os.path.exists(path) and os.path.isfile(path):
try:
event_data = json.load(
open(os.path.join(events_path, event), 'r')
@@ -302,16 +294,10 @@ class IsolatedManager(object):
# practice
# in this scenario, just ignore this event and try it
# again on the next sync
pass
event_data.setdefault(self.event_data_key, self.instance.id)
dispatcher.dispatch(event_data)
continue
self.event_handler(event_data)
self.handled_events.add(event)
# handle artifacts
if event_data.get('event_data', {}).get('artifact_data', {}):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
def cleanup(self):
extravars = {
@@ -400,8 +386,7 @@ class IsolatedManager(object):
if os.path.exists(private_data_dir):
shutil.rmtree(private_data_dir)
def run(self, instance, private_data_dir, playbook, module, module_args,
event_data_key, ident=None):
def run(self, instance, private_data_dir, playbook, module, module_args, ident=None):
"""
Run a job on an isolated host.
@@ -412,14 +397,12 @@ class IsolatedManager(object):
:param playbook: the playbook to run
:param module: the module to run
:param module_args: the module args to use
:param event_data_key: e.g., job_id, inventory_id, ...
For a completed job run, this function returns (status, rc),
representing the status and return code of the isolated
`ansible-playbook` run.
"""
self.ident = ident
self.event_data_key = event_data_key
self.instance = instance
self.private_data_dir = private_data_dir
self.runner_params = self.build_runner_params(
@@ -430,9 +413,4 @@ class IsolatedManager(object):
status, rc = self.dispatch(playbook, module, module_args)
if status == 'successful':
status, rc = self.check()
else:
# emit an EOF event
event_data = {'event': 'EOF', 'final_counter': 0}
event_data.setdefault(self.event_data_key, self.instance.id)
CallbackQueueDispatcher().dispatch(event_data)
return status, rc

View File

@@ -21,6 +21,8 @@ from awx.main.signals import (
disable_computed_fields
)
from awx.main.management.commands.deletion import AWXCollector, pre_delete
class Command(BaseCommand):
'''
@@ -57,27 +59,37 @@ class Command(BaseCommand):
action='store_true', dest='only_workflow_jobs',
help='Remove workflow jobs')
def cleanup_jobs(self):
#jobs_qs = Job.objects.exclude(status__in=('pending', 'running'))
#jobs_qs = jobs_qs.filter(created__lte=self.cutoff)
skipped, deleted = 0, 0
jobs = Job.objects.filter(created__lt=self.cutoff)
for job in jobs.iterator():
job_display = '"%s" (%d host summaries, %d events)' % \
(str(job),
job.job_host_summaries.count(), job.job_events.count())
if job.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s job %s', action_text, job.status, job_display)
skipped += 1
else:
action_text = 'would delete' if self.dry_run else 'deleting'
self.logger.info('%s %s', action_text, job_display)
if not self.dry_run:
job.delete()
deleted += 1
skipped += Job.objects.filter(created__gte=self.cutoff).count()
def cleanup_jobs(self):
skipped, deleted = 0, 0
batch_size = 1000000
while True:
# get queryset for available jobs to remove
qs = Job.objects.filter(created__lt=self.cutoff).exclude(status__in=['pending', 'waiting', 'running'])
# get pk list for the first N (batch_size) objects
pk_list = qs[0:batch_size].values_list('pk')
# You cannot delete queries with sql LIMIT set, so we must
# create a new query from this pk_list
qs_batch = Job.objects.filter(pk__in=pk_list)
just_deleted = 0
if not self.dry_run:
del_query = pre_delete(qs_batch)
collector = AWXCollector(del_query.db)
collector.collect(del_query)
_, models_deleted = collector.delete()
if models_deleted:
just_deleted = models_deleted['main.Job']
deleted += just_deleted
else:
just_deleted = 0 # break from loop, this is dry run
deleted = qs.count()
if just_deleted == 0:
break
skipped += (Job.objects.filter(created__gte=self.cutoff) | Job.objects.filter(status__in=['pending', 'waiting', 'running'])).count()
return skipped, deleted
def cleanup_ad_hoc_commands(self):

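Because the ORM cannot issue a DELETE with a LIMIT, the new `cleanup_jobs` slices the queryset to collect one batch of pks, deletes via `filter(pk__in=pk_list)`, and loops until a batch deletes nothing. The same loop shape, sketched over a plain list instead of a queryset:

```python
def cleanup_in_batches(job_pks, batch_size=3):
    # stand-in for: qs[0:batch_size].values_list('pk'), then
    # Job.objects.filter(pk__in=pk_list) handed to the collector
    deleted = 0
    remaining = list(job_pks)
    while True:
        batch = remaining[:batch_size]
        just_deleted = len(batch)            # collector.delete() count
        remaining = remaining[batch_size:]
        deleted += just_deleted
        if just_deleted == 0:                # empty batch ends the loop
            break
    return deleted

total = cleanup_in_batches([10, 11, 12, 13, 14], batch_size=2)
```

In the real command a dry run short-circuits with `just_deleted = 0` after counting, which is why the loop exits after a single pass there.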
View File

@@ -0,0 +1,177 @@
from django.contrib.contenttypes.models import ContentType
from django.db.models.deletion import (
DO_NOTHING, Collector, get_candidate_relations_to_delete,
)
from collections import Counter, OrderedDict
from django.db import transaction
from django.db.models import sql
def bulk_related_objects(field, objs, using):
# This overrides the method in django.contrib.contenttypes.fields.py
"""
Return all objects related to ``objs`` via this ``GenericRelation``.
"""
return field.remote_field.model._base_manager.db_manager(using).filter(**{
"%s__pk" % field.content_type_field_name: ContentType.objects.db_manager(using).get_for_model(
field.model, for_concrete_model=field.for_concrete_model).pk,
"%s__in" % field.object_id_field_name: list(objs.values_list('pk', flat=True))
})
def pre_delete(qs):
# taken from .delete method in django.db.models.query.py
assert qs.query.can_filter(), \
"Cannot use 'limit' or 'offset' with delete."
if qs._fields is not None:
raise TypeError("Cannot call delete() after .values() or .values_list()")
del_query = qs._chain()
# The delete is actually 2 queries - one to find related objects,
# and one to delete. Make sure that the discovery of related
# objects is performed on the same database as the deletion.
del_query._for_write = True
# Disable non-supported fields.
del_query.query.select_for_update = False
del_query.query.select_related = False
del_query.query.clear_ordering(force_empty=True)
return del_query
class AWXCollector(Collector):
def add(self, objs, source=None, nullable=False, reverse_dependency=False):
"""
Add 'objs' to the collection of objects to be deleted. If the call is
the result of a cascade, 'source' should be the model that caused it,
and 'nullable' should be set to True if the relation can be null.
Return a list of all objects that were not already collected.
"""
if not objs.exists():
return objs
model = objs.model
self.data.setdefault(model, [])
self.data[model].append(objs)
# Nullable relationships can be ignored -- they are nulled out before
# deleting, and therefore do not affect the order in which objects have
# to be deleted.
if source is not None and not nullable:
if reverse_dependency:
source, model = model, source
self.dependencies.setdefault(
source._meta.concrete_model, set()).add(model._meta.concrete_model)
return objs
def add_field_update(self, field, value, objs):
"""
Schedule a field update. 'objs' must be a homogeneous iterable
collection of model instances (e.g. a QuerySet).
"""
if not objs.exists():
return
model = objs.model
self.field_updates.setdefault(model, {})
self.field_updates[model].setdefault((field, value), [])
self.field_updates[model][(field, value)].append(objs)
def collect(self, objs, source=None, nullable=False, collect_related=True,
source_attr=None, reverse_dependency=False, keep_parents=False):
"""
Add 'objs' to the collection of objects to be deleted as well as all
parent instances. 'objs' must be a homogeneous iterable collection of
model instances (e.g. a QuerySet). If 'collect_related' is True,
related objects will be handled by their respective on_delete handler.
If the call is the result of a cascade, 'source' should be the model
that caused it and 'nullable' should be set to True, if the relation
can be null.
If 'reverse_dependency' is True, 'source' will be deleted before the
current model, rather than after. (Needed for cascading to parent
models, the one case in which the cascade follows the forwards
direction of an FK rather than the reverse direction.)
If 'keep_parents' is True, data of parent models will not be deleted.
"""
if hasattr(objs, 'polymorphic_disabled'):
objs.polymorphic_disabled = True
if self.can_fast_delete(objs):
self.fast_deletes.append(objs)
return
new_objs = self.add(objs, source, nullable,
reverse_dependency=reverse_dependency)
if not new_objs.exists():
return
model = new_objs.model
if not keep_parents:
# Recursively collect concrete model's parent models, but not their
# related objects. These will be found by meta.get_fields()
concrete_model = model._meta.concrete_model
for ptr in concrete_model._meta.parents.keys():
if ptr:
parent_objs = ptr.objects.filter(pk__in = new_objs.values_list('pk', flat=True))
self.collect(parent_objs, source=model,
collect_related=False,
reverse_dependency=True)
if collect_related:
parents = model._meta.parents
for related in get_candidate_relations_to_delete(model._meta):
# Preserve parent reverse relationships if keep_parents=True.
if keep_parents and related.model in parents:
continue
field = related.field
if field.remote_field.on_delete == DO_NOTHING:
continue
related_qs = self.related_objects(related, new_objs)
if self.can_fast_delete(related_qs, from_field=field):
self.fast_deletes.append(related_qs)
elif related_qs:
field.remote_field.on_delete(self, field, related_qs, self.using)
for field in model._meta.private_fields:
if hasattr(field, 'bulk_related_objects'):
# It's something like generic foreign key.
sub_objs = bulk_related_objects(field, new_objs, self.using)
self.collect(sub_objs, source=model, nullable=True)
def delete(self):
self.sort()
# collect pk_list before deletion (once things start to delete
# queries might not be able to retrieve pk list)
del_dict = OrderedDict()
for model, instances in self.data.items():
del_dict.setdefault(model, [])
for inst in instances:
del_dict[model] += list(inst.values_list('pk', flat=True))
deleted_counter = Counter()
with transaction.atomic(using=self.using, savepoint=False):
# update fields
for model, instances_for_fieldvalues in self.field_updates.items():
for (field, value), instances in instances_for_fieldvalues.items():
for inst in instances:
query = sql.UpdateQuery(model)
query.update_batch(inst.values_list('pk', flat=True),
{field.name: value}, self.using)
# fast deletes
for qs in self.fast_deletes:
count = qs._raw_delete(using=self.using)
deleted_counter[qs.model._meta.label] += count
# delete instances
for model, pk_list in del_dict.items():
query = sql.DeleteQuery(model)
count = query.delete_batch(pk_list, self.using)
deleted_counter[model._meta.label] += count
return sum(deleted_counter.values()), dict(deleted_counter)

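`AWXCollector.delete()` snapshots every collected queryset's pks into an `OrderedDict` before any deletion happens, then tallies per-model counts in a `Counter`, since once rows start disappearing the original querysets may no longer resolve. A model-free sketch of that snapshot-then-tally shape (each queryset is stood in for by a pk list):

```python
from collections import Counter, OrderedDict

def snapshot_and_delete(data):
    # data maps a model label to a list of "querysets",
    # each represented here as a plain list of pks
    del_dict = OrderedDict()
    for model, querysets in data.items():
        del_dict.setdefault(model, [])
        for qs in querysets:
            del_dict[model] += list(qs)      # pk snapshot taken up front
    deleted_counter = Counter()
    for model, pk_list in del_dict.items():
        deleted_counter[model] += len(pk_list)  # stand-in for delete_batch
    return sum(deleted_counter.values()), dict(deleted_counter)

total, per_model = snapshot_and_delete(
    {'main.Job': [[1, 2], [3]], 'main.JobEvent': [[7]]}
)
```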
View File

@@ -1,8 +1,6 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved
import subprocess
from django.db import transaction
from django.core.management.base import BaseCommand, CommandError
@@ -33,18 +31,9 @@ class Command(BaseCommand):
with advisory_lock('instance_registration_%s' % hostname):
instance = Instance.objects.filter(hostname=hostname)
if instance.exists():
isolated = instance.first().is_isolated()
instance.delete()
print("Instance Removed")
if isolated:
print('Successfully deprovisioned {}'.format(hostname))
else:
result = subprocess.Popen("rabbitmqctl forget_cluster_node rabbitmq@{}".format(hostname), shell=True).wait()
if result != 0:
print("Node deprovisioning may have failed when attempting to "
"remove the RabbitMQ instance {} from the cluster".format(hostname))
else:
print('Successfully deprovisioned {}'.format(hostname))
print('Successfully deprovisioned {}'.format(hostname))
print('(changed: True)')
else:
print('No instance found matching name {}'.format(hostname))

View File

@@ -496,12 +496,6 @@ class Command(BaseCommand):
group_names = all_group_names[offset:(offset + self._batch_size)]
for group_pk in groups_qs.filter(name__in=group_names).values_list('pk', flat=True):
del_group_pks.discard(group_pk)
if self.inventory_source.deprecated_group_id in del_group_pks: # TODO: remove in 3.3
logger.warning(
'Group "%s" from v1 API is not deleted by overwrite',
self.inventory_source.deprecated_group.name
)
del_group_pks.discard(self.inventory_source.deprecated_group_id)
# Now delete all remaining groups in batches.
all_del_pks = sorted(list(del_group_pks))
for offset in range(0, len(all_del_pks), self._batch_size):
@@ -534,12 +528,6 @@ class Command(BaseCommand):
# Set of all host pks managed by this inventory source
all_source_host_pks = self._existing_host_pks()
for db_group in db_groups.all():
if self.inventory_source.deprecated_group_id == db_group.id: # TODO: remove in 3.3
logger.debug(
'Group "%s" from v1 API child group/host connections preserved',
db_group.name
)
continue
# Delete child group relationships not present in imported data.
db_children = db_group.children
db_children_name_pk_map = dict(db_children.values_list('name', 'pk'))

View File

@@ -6,6 +6,7 @@ from awx.main.utils.pglock import advisory_lock
from awx.main.models import Instance, InstanceGroup
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
class InstanceNotFound(Exception):
@@ -31,7 +32,6 @@ class Command(BaseCommand):
def get_create_update_instance_group(self, queuename, instance_percent, instance_min):
ig = InstanceGroup.objects.filter(name=queuename)
created = False
changed = False
@@ -98,26 +98,27 @@ class Command(BaseCommand):
if options.get('hostnames'):
hostname_list = options.get('hostnames').split(",")
with advisory_lock('instance_group_registration_{}'.format(queuename)):
changed2 = False
changed3 = False
(ig, created, changed1) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
if created:
print("Creating instance group {}".format(ig.name))
elif not created:
print("Instance Group already registered {}".format(ig.name))
with advisory_lock('cluster_policy_lock'):
with transaction.atomic():
changed2 = False
changed3 = False
(ig, created, changed1) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
if created:
print("Creating instance group {}".format(ig.name))
elif not created:
print("Instance Group already registered {}".format(ig.name))
if ctrl:
(ig_ctrl, changed2) = self.update_instance_group_controller(ig, ctrl)
if changed2:
print("Set controller group {} on {}.".format(ctrl, queuename))
if ctrl:
(ig_ctrl, changed2) = self.update_instance_group_controller(ig, ctrl)
if changed2:
print("Set controller group {} on {}.".format(ctrl, queuename))
try:
(instances, changed3) = self.add_instances_to_group(ig, hostname_list)
for i in instances:
print("Added instance {} to {}".format(i.hostname, ig.name))
except InstanceNotFound as e:
instance_not_found_err = e
try:
(instances, changed3) = self.add_instances_to_group(ig, hostname_list)
for i in instances:
print("Added instance {} to {}".format(i.hostname, ig.name))
except InstanceNotFound as e:
instance_not_found_err = e
if any([changed1, changed2, changed3]):
print('(changed: True)')

View File

@@ -3,10 +3,8 @@
from django.conf import settings
from django.core.management.base import BaseCommand
from kombu import Exchange, Queue
from awx.main.dispatch.kombu import Connection
from awx.main.dispatch.worker import AWXConsumer, CallbackBrokerWorker
from awx.main.dispatch.worker import AWXConsumerRedis, CallbackBrokerWorker
class Command(BaseCommand):
@@ -18,23 +16,15 @@ class Command(BaseCommand):
help = 'Launch the job callback receiver'
def handle(self, *arg, **options):
with Connection(settings.BROKER_URL) as conn:
consumer = None
try:
consumer = AWXConsumer(
'callback_receiver',
conn,
CallbackBrokerWorker(),
[
Queue(
settings.CALLBACK_QUEUE,
Exchange(settings.CALLBACK_QUEUE, type='direct'),
routing_key=settings.CALLBACK_QUEUE
)
]
)
consumer.run()
except KeyboardInterrupt:
print('Terminating Callback Receiver')
if consumer:
consumer.stop()
consumer = None
try:
consumer = AWXConsumerRedis(
'callback_receiver',
CallbackBrokerWorker(),
queues=[getattr(settings, 'CALLBACK_QUEUE', '')],
)
consumer.run()
except KeyboardInterrupt:
print('Terminating Callback Receiver')
if consumer:
consumer.stop()

View File

@@ -6,14 +6,11 @@ from django.conf import settings
from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from kombu import Exchange, Queue
from awx.main.utils.handlers import AWXProxyHandler
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.dispatch.control import Control
from awx.main.dispatch.kombu import Connection
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumer, TaskWorker
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
from awx.main.dispatch import periodic
logger = logging.getLogger('awx.main.dispatch')
@@ -58,35 +55,16 @@ class Command(BaseCommand):
reaper.reap()
consumer = None
# don't ship external logs inside the dispatcher's parent process
# this exists to work around a race condition + deadlock bug on fork
# in cpython itself:
# https://bugs.python.org/issue37429
AWXProxyHandler.disable()
with Connection(settings.BROKER_URL) as conn:
try:
bcast = 'tower_broadcast_all'
queues = [
Queue(q, Exchange(q), routing_key=q)
for q in (settings.AWX_CELERY_QUEUES_STATIC + [get_local_queuename()])
]
queues.append(
Queue(
construct_bcast_queue_name(bcast),
exchange=Exchange(bcast, type='fanout'),
routing_key=bcast,
reply=True
)
)
consumer = AWXConsumer(
'dispatcher',
conn,
TaskWorker(),
queues,
AutoscalePool(min_workers=4)
)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')
if consumer:
consumer.stop()
try:
queues = ['tower_broadcast_all', get_local_queuename()]
consumer = AWXConsumerPG(
'dispatcher',
TaskWorker(),
queues,
AutoscalePool(min_workers=4)
)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')
if consumer:
consumer.stop()
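The `AWXProxyHandler.disable()` call above works around a fork-time race/deadlock in logging handlers that hold locks (bpo-37429). The general shape — detach the risky handler in the parent before spawning workers — can be sketched with the standard `logging` module (the handler here merely stands in for AWX's external log shipper):

```python
import logging

logger = logging.getLogger('dispatch-example')
external = logging.StreamHandler()  # stands in for a lock-holding external log handler
logger.addHandler(external)
assert external in logger.handlers

# parent process: detach the handler before forking workers, so no child
# can inherit the handler's lock in a held state (cf. bpo-37429)
logger.removeHandler(external)
assert external not in logger.handlers
```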


@@ -0,0 +1,134 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import logging
import asyncio
import datetime
import re
import redis
from datetime import datetime as dt
from django.core.management.base import BaseCommand
from django.db.models import Q
from awx.main.analytics.broadcast_websocket import (
BroadcastWebsocketStatsManager,
safe_name,
)
from awx.main.wsbroadcast import BroadcastWebsocketManager
from awx.main.models.ha import Instance
logger = logging.getLogger('awx.main.wsbroadcast')
class Command(BaseCommand):
help = 'Launch the websocket broadcaster'
def add_arguments(self, parser):
parser.add_argument('--status', dest='status', action='store_true',
help='print the internal state of any running broadcast websocket')
@classmethod
def display_len(cls, s):
return len(re.sub('\x1b.*?m', '', s))
@classmethod
def _format_lines(cls, host_stats, padding=5):
widths = [0 for i in host_stats[0]]
for entry in host_stats:
for i, e in enumerate(entry):
if Command.display_len(e) > widths[i]:
widths[i] = Command.display_len(e)
paddings = [padding for i in widths]
lines = []
for entry in host_stats:
line = ""
for pad, width, value in zip(paddings, widths, entry):
if len(value) > Command.display_len(value):
width += len(value) - Command.display_len(value)
total_width = width + pad
line += f'{value:{total_width}}'
lines.append(line)
return lines
@classmethod
def get_connection_status(cls, me, hostnames, data):
host_stats = [('hostname', 'state', 'start time', 'duration (sec)')]
for h in hostnames:
connection_color = '91' # red
h_safe = safe_name(h)
prefix = f'awx_{h_safe}'
connection_state = data.get(f'{prefix}_connection', 'N/A')
connection_started = 'N/A'
connection_duration = 'N/A'
if connection_state is None:
connection_state = 'unknown'
if connection_state == 'connected':
connection_color = '92' # green
connection_started = data.get(f'{prefix}_connection_start', 'Error')
if connection_started != 'Error':
connection_started = datetime.datetime.fromtimestamp(connection_started)
connection_duration = int((dt.now() - connection_started).total_seconds())
connection_state = f'\033[{connection_color}m{connection_state}\033[0m'
host_stats.append((h, connection_state, str(connection_started), str(connection_duration)))
return host_stats
@classmethod
def get_connection_stats(cls, me, hostnames, data):
host_stats = [('hostname', 'total', 'per minute')]
for h in hostnames:
h_safe = safe_name(h)
prefix = f'awx_{h_safe}'
messages_total = data.get(f'{prefix}_messages_received', '0')
messages_per_minute = data.get(f'{prefix}_messages_received_per_minute', '0')
host_stats.append((h, str(int(messages_total)), str(int(messages_per_minute))))
return host_stats
def handle(self, *arg, **options):
if options.get('status'):
try:
stats_all = BroadcastWebsocketStatsManager.get_stats_sync()
except redis.exceptions.ConnectionError as e:
print(f"Unable to get Broadcast Websocket Status. Failed to connect to redis: {e}")
return
data = {}
for family in stats_all:
if family.type == 'gauge' and len(family.samples) > 1:
for sample in family.samples:
if sample.value >= 1:
data[family.name] = sample.labels[family.name]
break
else:
data[family.name] = family.samples[0].value
me = Instance.objects.me()
hostnames = [i.hostname for i in Instance.objects.exclude(Q(hostname=me.hostname) | Q(rampart_groups__controller__isnull=False))]
host_stats = Command.get_connection_status(me, hostnames, data)
lines = Command._format_lines(host_stats)
print(f'Broadcast websocket connection status from "{me.hostname}" to:')
print('\n'.join(lines))
host_stats = Command.get_connection_stats(me, hostnames, data)
lines = Command._format_lines(host_stats)
print(f'\nBroadcast websocket connection stats from "{me.hostname}" to:')
print('\n'.join(lines))
return
try:
broadcast_websocket_mgr = BroadcastWebsocketManager()
task = broadcast_websocket_mgr.start()
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
except KeyboardInterrupt:
logger.debug('Terminating Websocket Broadcaster')
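The `display_len` helper above strips ANSI color escapes so that colored cells don't skew the column-width math in `_format_lines`. A standalone check of that idea:

```python
import re

def display_len(s):
    # visible length: drop ANSI escape sequences like '\x1b[92m...\x1b[0m'
    return len(re.sub('\x1b.*?m', '', s))

plain = 'connected'
green = '\033[92mconnected\033[0m'  # same text wrapped in green color codes
print(display_len(plain), display_len(green))  # 9 9
```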


@@ -3,6 +3,7 @@
import sys
import logging
import os
from django.db import models
from django.conf import settings
@@ -114,7 +115,7 @@ class InstanceManager(models.Manager):
return node[0]
raise RuntimeError("No instance found with the current cluster host id")
def register(self, uuid=None, hostname=None):
def register(self, uuid=None, hostname=None, ip_address=None):
if not uuid:
uuid = settings.SYSTEM_UUID
if not hostname:
@@ -122,13 +123,23 @@ class InstanceManager(models.Manager):
with advisory_lock('instance_registration_%s' % hostname):
instance = self.filter(hostname=hostname)
if instance.exists():
return (False, instance[0])
instance = self.create(uuid=uuid, hostname=hostname, capacity=0)
instance = instance.get()
if instance.ip_address != ip_address:
instance.ip_address = ip_address
instance.save(update_fields=['ip_address'])
return (True, instance)
else:
return (False, instance)
instance = self.create(uuid=uuid,
hostname=hostname,
ip_address=ip_address,
capacity=0)
return (True, instance)
def get_or_register(self):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
return self.register()
pod_ip = os.environ.get('MY_POD_IP')
return self.register(ip_address=pod_ip)
else:
return (False, self.me())
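The `register` change above makes registration idempotent while still updating a changed `ip_address`, returning a `(changed, instance)` tuple either way. That contract can be modeled without the ORM or the advisory lock (a sketch with a plain dict, not AWX's API):

```python
_registry = {}

def register(hostname, ip_address=None):
    inst = _registry.get(hostname)
    if inst is not None:
        if inst['ip_address'] != ip_address:
            inst['ip_address'] = ip_address
            return (True, inst)          # existing row, but IP updated
        return (False, inst)             # nothing changed
    inst = {'hostname': hostname, 'ip_address': ip_address, 'capacity': 0}
    _registry[hostname] = inst
    return (True, inst)                  # newly created

print(register('awx-1', '10.0.0.5')[0])  # True  (created)
print(register('awx-1', '10.0.0.5')[0])  # False (no change)
print(register('awx-1', '10.0.0.9')[0])  # True  (IP updated)
```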


@@ -12,23 +12,19 @@ import urllib.parse
from django.conf import settings
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.db.migrations.executor import MigrationExecutor
from django.db import IntegrityError, connection
from django.utils.functional import curry
from django.db import connection
from django.shortcuts import get_object_or_404, redirect
from django.apps import apps
from django.utils.deprecation import MiddlewareMixin
from django.utils.translation import ugettext_lazy as _
from django.urls import reverse, resolve
from awx.main.models import ActivityStream
from awx.main.utils.named_url_graph import generate_graph, GraphNode
from awx.conf import fields, register
logger = logging.getLogger('awx.main.middleware')
analytics_logger = logging.getLogger('awx.analytics.activity_stream')
perf_logger = logging.getLogger('awx.analytics.performance')
@@ -76,61 +72,6 @@ class TimingMiddleware(threading.local, MiddlewareMixin):
return filepath
class ActivityStreamMiddleware(threading.local, MiddlewareMixin):
def __init__(self, get_response=None):
self.disp_uid = None
self.instance_ids = []
super().__init__(get_response)
def process_request(self, request):
if hasattr(request, 'user') and request.user.is_authenticated:
user = request.user
else:
user = None
set_actor = curry(self.set_actor, user)
self.disp_uid = str(uuid.uuid1())
self.instance_ids = []
post_save.connect(set_actor, sender=ActivityStream, dispatch_uid=self.disp_uid, weak=False)
def process_response(self, request, response):
drf_request = getattr(request, 'drf_request', None)
drf_user = getattr(drf_request, 'user', None)
if self.disp_uid is not None:
post_save.disconnect(dispatch_uid=self.disp_uid)
for instance in ActivityStream.objects.filter(id__in=self.instance_ids):
if drf_user and drf_user.id:
instance.actor = drf_user
try:
instance.save(update_fields=['actor'])
analytics_logger.info('Activity Stream update entry for %s' % str(instance.object1),
extra=dict(changes=instance.changes, relationship=instance.object_relationship_type,
actor=drf_user.username, operation=instance.operation,
object1=instance.object1, object2=instance.object2))
except IntegrityError:
logger.debug("Integrity Error saving Activity Stream instance for id : " + str(instance.id))
# else:
# obj1_type_actual = instance.object1_type.split(".")[-1]
# if obj1_type_actual in ("InventoryUpdate", "ProjectUpdate", "Job") and instance.id is not None:
# instance.delete()
self.instance_ids = []
return response
def set_actor(self, user, sender, instance, **kwargs):
if sender == ActivityStream:
if isinstance(user, User) and instance.actor is None:
user = User.objects.filter(id=user.id)
if user.exists():
user = user[0]
instance.actor = user
else:
if instance.id not in self.instance_ids:
self.instance_ids.append(instance.id)
class SessionTimeoutMiddleware(MiddlewareMixin):
"""
Resets the session timeout for both the UI and the actual session for the API
@@ -192,21 +133,41 @@ class URLModificationMiddleware(MiddlewareMixin):
)
super().__init__(get_response)
def _named_url_to_pk(self, node, named_url):
kwargs = {}
if not node.populate_named_url_query_kwargs(kwargs, named_url):
return named_url
return str(get_object_or_404(node.model, **kwargs).pk)
@staticmethod
def _hijack_for_old_jt_name(node, kwargs, named_url):
try:
int(named_url)
return False
except ValueError:
pass
JobTemplate = node.model
name = urllib.parse.unquote(named_url)
return JobTemplate.objects.filter(name=name).order_by('organization__created').first()
def _convert_named_url(self, url_path):
@classmethod
def _named_url_to_pk(cls, node, resource, named_url):
kwargs = {}
if node.populate_named_url_query_kwargs(kwargs, named_url):
return str(get_object_or_404(node.model, **kwargs).pk)
if resource == 'job_templates' and '++' not in named_url:
# special case for deprecated job template case
# will not raise a 404 on its own
jt = cls._hijack_for_old_jt_name(node, kwargs, named_url)
if jt:
return str(jt.pk)
return named_url
@classmethod
def _convert_named_url(cls, url_path):
url_units = url_path.split('/')
# If the identifier is an empty string, it is always invalid.
if len(url_units) < 6 or url_units[1] != 'api' or url_units[2] not in ['v2'] or not url_units[4]:
return url_path
resource = url_units[3]
if resource in settings.NAMED_URL_MAPPINGS:
url_units[4] = self._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]],
url_units[4])
url_units[4] = cls._named_url_to_pk(
settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]],
resource, url_units[4])
return '/'.join(url_units)
def process_request(self, request):
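`_convert_named_url` above only rewrites paths of the form `/api/v2/<resource>/<identifier>/...`, and only when the resource participates in named-URL mapping. A simplified stand-in for that guard-and-split logic (the pk resolution is stubbed out):

```python
def convert_named_url(url_path, named_resources, lookup):
    # mirrors the guard-and-split logic of _convert_named_url above;
    # `lookup` stands in for the ORM named-URL -> pk resolution
    units = url_path.split('/')
    if len(units) < 6 or units[1] != 'api' or units[2] not in ['v2'] or not units[4]:
        return url_path
    if units[3] in named_resources:
        units[4] = lookup(units[3], units[4])
    return '/'.join(units)

def fake_pk_lookup(resource, named_url):
    return '42'  # pretend we resolved the name to a primary key

named = {'organizations'}
print(convert_named_url('/api/v2/organizations/Default/', named, fake_pk_lookup))
# /api/v2/organizations/42/
```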


@@ -464,7 +464,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='unifiedjob',
name='instance_group',
field=models.ForeignKey(on_delete=models.SET_NULL, default=None, blank=True, to='main.InstanceGroup', help_text='The Rampart/Instance group the job was run under', null=True),
field=models.ForeignKey(on_delete=models.SET_NULL, default=None, blank=True, to='main.InstanceGroup', help_text='The Instance group the job was run under', null=True),
),
migrations.AddField(
model_name='unifiedjobtemplate',


@@ -16,6 +16,6 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='unifiedjob',
name='instance_group',
field=models.ForeignKey(blank=True, default=None, help_text='The Rampart/Instance group the job was run under', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, to='main.InstanceGroup'),
field=models.ForeignKey(blank=True, default=None, help_text='The Instance group the job was run under', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, to='main.InstanceGroup'),
),
]


@@ -0,0 +1,81 @@
# Generated by Django 2.2.4 on 2019-08-07 19:56
import awx.main.utils.polymorphic
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
from awx.main.migrations._rbac import (
rebuild_role_parentage, rebuild_role_hierarchy,
migrate_ujt_organization, migrate_ujt_organization_backward,
restore_inventory_admins, restore_inventory_admins_backward
)
def rebuild_jt_parents(apps, schema_editor):
rebuild_role_parentage(apps, schema_editor, models=('jobtemplate',))
class Migration(migrations.Migration):
dependencies = [
('main', '0108_v370_unifiedjob_dependencies_processed'),
]
operations = [
# backwards parents and ancestors caching
migrations.RunPython(migrations.RunPython.noop, rebuild_jt_parents),
# add new organization field for JT and all other unified jobs
migrations.AddField(
model_name='unifiedjob',
name='tmp_organization',
field=models.ForeignKey(blank=True, help_text='The organization used to determine access to this unified job.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjobs', to='main.Organization'),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='tmp_organization',
field=models.ForeignKey(blank=True, help_text='The organization used to determine access to this template.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjobtemplates', to='main.Organization'),
),
# while new and old fields exist, copy the organization fields
migrations.RunPython(migrate_ujt_organization, migrate_ujt_organization_backward),
# with data saved, remove old fields
migrations.RemoveField(
model_name='project',
name='organization',
),
migrations.RemoveField(
model_name='workflowjobtemplate',
name='organization',
),
# now, safely rename the new fields without conflicts from the old fields
migrations.RenameField(
model_name='unifiedjobtemplate',
old_name='tmp_organization',
new_name='organization',
),
migrations.RenameField(
model_name='unifiedjob',
old_name='tmp_organization',
new_name='organization',
),
# parentage of job template roles has genuinely changed at this point
migrations.AlterField(
model_name='jobtemplate',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['organization.job_template_admin_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='jobtemplate',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role', 'organization.execute_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='jobtemplate',
name='read_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'], related_name='+', to='main.Role'),
),
# Re-compute the role parents and ancestors caching
migrations.RunPython(rebuild_jt_parents, migrations.RunPython.noop),
# for all permissions that will be removed, make them explicit
migrations.RunPython(restore_inventory_admins, restore_inventory_admins_backward),
]


@@ -0,0 +1,18 @@
# Generated by Django 2.2.8 on 2020-02-12 17:55
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0109_v370_job_template_organization_field'),
]
operations = [
migrations.AddField(
model_name='instance',
name='ip_address',
field=models.CharField(blank=True, default=None, max_length=50, null=True, unique=True),
),
]


@@ -0,0 +1,16 @@
# Generated by Django 2.2.8 on 2020-02-17 14:50
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0110_v370_instance_ip_address'),
]
operations = [
migrations.DeleteModel(
name='ChannelGroup',
),
]


@@ -0,0 +1,61 @@
# Generated by Django 2.2.8 on 2020-03-14 02:29
from django.db import migrations, models
import uuid
import logging
logger = logging.getLogger('awx.main.migrations')
def create_uuid(apps, schema_editor):
WorkflowJobTemplateNode = apps.get_model('main', 'WorkflowJobTemplateNode')
ct = 0
for node in WorkflowJobTemplateNode.objects.iterator():
node.identifier = uuid.uuid4()
node.save(update_fields=['identifier'])
ct += 1
if ct:
logger.info(f'Automatically created uuid4 identifier for {ct} workflow nodes')
class Migration(migrations.Migration):
dependencies = [
('main', '0111_v370_delete_channelgroup'),
]
operations = [
migrations.AddField(
model_name='workflowjobnode',
name='identifier',
field=models.CharField(blank=True, help_text='An identifier corresponding to the workflow job template node that this node was created from.', max_length=512),
),
migrations.AddField(
model_name='workflowjobtemplatenode',
name='identifier',
field=models.CharField(blank=True, null=True, help_text='An identifier for this node that is unique within its workflow. It is copied to workflow job nodes corresponding to this node.', max_length=512),
),
migrations.RunPython(create_uuid, migrations.RunPython.noop), # this fixes the uuid4 issue
migrations.AlterField(
model_name='workflowjobtemplatenode',
name='identifier',
field=models.CharField(default=uuid.uuid4, help_text='An identifier for this node that is unique within its workflow. It is copied to workflow job nodes corresponding to this node.', max_length=512),
),
migrations.AlterUniqueTogether(
name='workflowjobtemplatenode',
unique_together={('identifier', 'workflow_job_template')},
),
migrations.AddIndex(
model_name='workflowjobnode',
index=models.Index(fields=['identifier', 'workflow_job'], name='main_workfl_identif_87b752_idx'),
),
migrations.AddIndex(
model_name='workflowjobnode',
index=models.Index(fields=['identifier'], name='main_workfl_identif_efdfe8_idx'),
),
migrations.AddIndex(
model_name='workflowjobtemplatenode',
index=models.Index(fields=['identifier'], name='main_workfl_identif_0cc025_idx'),
),
]
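The `create_uuid` step above backfills a `uuid4` identifier for every pre-existing workflow node before the field gains its default, so the later unique-together constraint can hold. The same backfill shape in miniature (dicts stand in for model rows):

```python
import uuid

# rows that predate the new field, i.e. identifier not yet set
nodes = [{'id': 1, 'identifier': None}, {'id': 2, 'identifier': None}]

count = 0
for node in nodes:                       # stands in for .iterator() over the queryset
    node['identifier'] = str(uuid.uuid4())
    count += 1

print(count, all(n['identifier'] for n in nodes))  # 2 True
```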


@@ -0,0 +1,118 @@
# Generated by Django 2.2.8 on 2020-02-21 16:31
from django.db import migrations, models, connection
def migrate_event_data(apps, schema_editor):
# see: https://github.com/ansible/awx/issues/6010
#
# the goal of this function is to end with event tables (e.g., main_jobevent)
# that have a bigint primary key (because the old integer primary key
# isn't enough; its range tops out around 2.1 billion, see:
# https://www.postgresql.org/docs/9.1/datatype-numeric.html)
# unfortunately, we can't do this with a simple ALTER TABLE, because
# for tables with hundreds of millions or billions of rows, the ALTER TABLE
# can take *hours* on modest hardware.
#
# the approach in this migration means that post-migration, event data will
# *not* immediately show up, but will be repopulated over time progressively
# the trade-off here is not having to wait hours for the full data migration
# before you can start and run AWX again (including new playbook runs)
for tblname in (
'main_jobevent', 'main_inventoryupdateevent',
'main_projectupdateevent', 'main_adhoccommandevent',
'main_systemjobevent'
):
with connection.cursor() as cursor:
# rename the current event table
cursor.execute(
f'ALTER TABLE {tblname} RENAME TO _old_{tblname};'
)
# create a *new* table with the same schema
cursor.execute(
f'CREATE TABLE {tblname} (LIKE _old_{tblname} INCLUDING ALL);'
)
# alter the *new* table so that the primary key is a big int
cursor.execute(
f'ALTER TABLE {tblname} ALTER COLUMN id TYPE bigint USING id::bigint;'
)
# recreate counter for the new table's primary key to
# start where the *old* table left off (we have to do this because the
# counter changed from an int to a bigint)
cursor.execute(f'DROP SEQUENCE IF EXISTS "{tblname}_id_seq" CASCADE;')
cursor.execute(f'CREATE SEQUENCE "{tblname}_id_seq";')
cursor.execute(
f'ALTER TABLE "{tblname}" ALTER COLUMN "id" '
f"SET DEFAULT nextval('{tblname}_id_seq');"
)
cursor.execute(
f"SELECT setval('{tblname}_id_seq', (SELECT MAX(id) FROM _old_{tblname}), true);"
)
# replace the BTREE index on main_jobevent.job_id with
# a BRIN index to drastically improve per-UJ lookup performance
# see: https://info.crunchydata.com/blog/postgresql-brin-indexes-big-data-performance-with-minimal-storage
if tblname == 'main_jobevent':
cursor.execute("SELECT indexname FROM pg_indexes WHERE tablename='main_jobevent' AND indexdef LIKE '%USING btree (job_id)';")
old_index = cursor.fetchone()[0]
cursor.execute(f'DROP INDEX {old_index}')
cursor.execute('CREATE INDEX main_jobevent_job_id_brin_idx ON main_jobevent USING brin (job_id);')
# remove all of the indexes and constraints from the old table
# (they just slow down the data migration)
cursor.execute(f"SELECT indexname, indexdef FROM pg_indexes WHERE tablename='_old_{tblname}' AND indexname != '{tblname}_pkey';")
indexes = cursor.fetchall()
cursor.execute(f"SELECT conname, contype, pg_catalog.pg_get_constraintdef(r.oid, true) as condef FROM pg_catalog.pg_constraint r WHERE r.conrelid = '_old_{tblname}'::regclass AND conname != '{tblname}_pkey';")
constraints = cursor.fetchall()
for indexname, indexdef in indexes:
cursor.execute(f'DROP INDEX IF EXISTS {indexname}')
for conname, contype, condef in constraints:
cursor.execute(f'ALTER TABLE _old_{tblname} DROP CONSTRAINT IF EXISTS {conname}')
class FakeAlterField(migrations.AlterField):
def database_forwards(self, *args):
# this is intentionally left blank, because we're
# going to accomplish the migration with some custom raw SQL
pass
class Migration(migrations.Migration):
dependencies = [
('main', '0112_v370_workflow_node_identifier'),
]
operations = [
migrations.RunPython(migrate_event_data),
FakeAlterField(
model_name='adhoccommandevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='inventoryupdateevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='jobevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='projectupdateevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
FakeAlterField(
model_name='systemjobevent',
name='id',
field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
),
]
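The rationale in the comment at the top of `migrate_event_data` is the 32-bit signed integer ceiling on event primary keys: PostgreSQL's `integer` tops out just over 2.1 billion rows, while `bigint` extends to roughly 9.2 quintillion. A quick check of those bounds:

```python
int_max = 2**31 - 1      # PostgreSQL integer upper bound
bigint_max = 2**63 - 1   # PostgreSQL bigint upper bound
print(int_max)     # 2147483647
print(bigint_max)  # 9223372036854775807
```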


@@ -0,0 +1,39 @@
# Generated by Django 2.2.11 on 2020-04-03 00:11
from django.db import migrations, models
def remove_manual_inventory_sources(apps, schema_editor):
'''Previously we would automatically create inventory sources after
Group creation, using the parent Group as our interface for the user.
During that process we would create InventorySource records that had a source of "manual".
'''
InventoryUpdate = apps.get_model('main', 'InventoryUpdate')
InventoryUpdate.objects.filter(source='').delete()
InventorySource = apps.get_model('main', 'InventorySource')
InventorySource.objects.filter(source='').delete()
class Migration(migrations.Migration):
dependencies = [
('main', '0113_v370_event_bigint'),
]
operations = [
migrations.RemoveField(
model_name='inventorysource',
name='deprecated_group',
),
migrations.RunPython(remove_manual_inventory_sources),
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(choices=[('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('rhv', 'Red Hat Virtualization'), ('tower', 'Ansible Tower'), ('custom', 'Custom Script')], default=None, max_length=32),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(choices=[('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('rhv', 'Red Hat Virtualization'), ('tower', 'Ansible Tower'), ('custom', 'Custom Script')], default=None, max_length=32),
),
]


@@ -1,6 +1,9 @@
import logging
from time import time
from django.db.models import Subquery, OuterRef, F
from awx.main.fields import update_role_parentage_for_instance
from awx.main.models.rbac import Role, batch_role_ancestor_rebuilding
logger = logging.getLogger('rbac_migrations')
@@ -10,11 +13,11 @@ def create_roles(apps, schema_editor):
'''
Implicit role creation happens in our post_save hook for all of our
resources. Here we iterate through all of our resource types and call
.save() to ensure all that happens for every object in the system before we
get busy with the actual migration work.
.save() to ensure all that happens for every object in the system.
This gets run after migrate_users, which does role creation for users a
little differently.
This can be used whenever new roles are introduced in a migration to
create those roles for pre-existing objects that did not previously
have them created via signals.
'''
models = [
@@ -35,7 +38,189 @@ def create_roles(apps, schema_editor):
obj.save()
def delete_all_user_roles(apps, schema_editor):
ContentType = apps.get_model('contenttypes', "ContentType")
Role = apps.get_model('main', "Role")
User = apps.get_model('auth', "User")
user_content_type = ContentType.objects.get_for_model(User)
for role in Role.objects.filter(content_type=user_content_type).iterator():
role.delete()
UNIFIED_ORG_LOOKUPS = {
# Job Templates had an implicit organization via their project
'jobtemplate': 'project',
# Inventory Sources had an implicit organization via their inventory
'inventorysource': 'inventory',
# Projects had an explicit organization in their subclass table
'project': None,
# Workflow JTs also had an explicit organization in their subclass table
'workflowjobtemplate': None,
# Jobs inherited project from job templates as a convenience field
'job': 'project',
# Inventory Updates had a convenience field of inventory
'inventoryupdate': 'inventory',
# Project Updates did not have a direct organization field, obtained it from project
'projectupdate': 'project',
# Workflow Jobs are handled same as project updates
# Sliced jobs are a special case, but old data is not given special treatment, for simplicity
'workflowjob': 'workflow_job_template',
# AdHocCommands do not have a template, but still migrate them
'adhoccommand': 'inventory'
}
def implicit_org_subquery(UnifiedClass, cls, backward=False):
"""Returns a subquery that returns the so-called organization for objects
in the class in question, before migration to the explicit unified org field.
In some cases, this can still be applied post-migration.
"""
if cls._meta.model_name not in UNIFIED_ORG_LOOKUPS:
return None
cls_name = cls._meta.model_name
source_field = UNIFIED_ORG_LOOKUPS[cls_name]
unified_field = UnifiedClass._meta.get_field(cls_name)
unified_ptr = unified_field.remote_field.name
if backward:
qs = UnifiedClass.objects.filter(**{cls_name: OuterRef('id')}).order_by().values_list('tmp_organization')[:1]
elif source_field is None:
qs = cls.objects.filter(**{unified_ptr: OuterRef('id')}).order_by().values_list('organization')[:1]
else:
intermediary_field = cls._meta.get_field(source_field)
intermediary_model = intermediary_field.related_model
intermediary_reverse_rel = intermediary_field.remote_field.name
qs = intermediary_model.objects.filter(**{
# this filter leverages the fact that the Unified models have same pk as subclasses.
# For instance... filters projects used in job template, where that job template
# has same id same as UJT from the outer reference (which it does)
intermediary_reverse_rel: OuterRef('id')}
).order_by().values_list('organization')[:1]
return Subquery(qs)
def _migrate_unified_organization(apps, unified_cls_name, backward=False):
"""Given a unified base model (either UJT or UJ)
and a dict org_field_mapping which gives related model to get org from
saves organization for those objects to the temporary migration
variable tmp_organization on the unified model
(optimized method)
"""
start = time()
UnifiedClass = apps.get_model('main', unified_cls_name)
ContentType = apps.get_model('contenttypes', 'ContentType')
for cls in UnifiedClass.__subclasses__():
cls_name = cls._meta.model_name
if backward and UNIFIED_ORG_LOOKUPS.get(cls_name, 'not-found') is not None:
logger.debug('Not reverse migrating {}, existing data should remain valid'.format(cls_name))
continue
logger.debug('{}Migrating {} to new organization field'.format('Reverse ' if backward else '', cls_name))
sub_qs = implicit_org_subquery(UnifiedClass, cls, backward=backward)
if sub_qs is None:
logger.debug('Class {} has no organization migration'.format(cls_name))
continue
this_ct = ContentType.objects.get_for_model(cls)
if backward:
r = cls.objects.order_by().update(organization=sub_qs)
else:
r = UnifiedClass.objects.order_by().filter(polymorphic_ctype=this_ct).update(tmp_organization=sub_qs)
if r:
logger.info('Organization migration on {} affected {} rows.'.format(cls_name, r))
logger.info('Unified organization migration completed in {:.4f} seconds'.format(time() - start))
def migrate_ujt_organization(apps, schema_editor):
'''Move organization field to UJT and UJ models'''
_migrate_unified_organization(apps, 'UnifiedJobTemplate')
_migrate_unified_organization(apps, 'UnifiedJob')
def migrate_ujt_organization_backward(apps, schema_editor):
'''Move organization field from UJT and UJ models back to their original places'''
_migrate_unified_organization(apps, 'UnifiedJobTemplate', backward=True)
_migrate_unified_organization(apps, 'UnifiedJob', backward=True)
def _restore_inventory_admins(apps, schema_editor, backward=False):
"""With the JT.organization changes, admins of organizations connected to
job templates via inventory will have their permissions demoted.
This maintains current permissions over the migration by granting the
permissions they used to have explicitly on the JT itself.
"""
start = time()
JobTemplate = apps.get_model('main', 'JobTemplate')
User = apps.get_model('auth', 'User')
changed_ct = 0
jt_qs = JobTemplate.objects.filter(inventory__isnull=False)
jt_qs = jt_qs.exclude(inventory__organization=F('project__organization'))
jt_qs = jt_qs.only('id', 'admin_role_id', 'execute_role_id', 'inventory_id')
for jt in jt_qs.iterator():
org = jt.inventory.organization
for jt_role, org_roles in (
('admin_role', ('admin_role', 'job_template_admin_role',)),
('execute_role', ('execute_role',))
):
role_id = getattr(jt, '{}_id'.format(jt_role))
user_qs = User.objects
if not backward:
# In this specific case, the name for the org role and JT roles were the same
org_role_ids = [getattr(org, '{}_id'.format(role_name)) for role_name in org_roles]
user_qs = user_qs.filter(roles__in=org_role_ids)
# bizarre migration behavior - ancestors / descendents of the
# migration version of the Role model are reversed, so use the current model briefly
ancestor_ids = list(
Role.objects.filter(descendents=role_id).values_list('id', flat=True)
)
# same as Role.__contains__, filter for "user in jt.admin_role"
user_qs = user_qs.exclude(roles__in=ancestor_ids)
else:
# use the database to filter intersection of users without access
# to the JT role and either organization role
user_qs = user_qs.filter(roles__in=[org.admin_role_id, org.execute_role_id])
# in reverse, intersection of users who have both
user_qs = user_qs.filter(roles=role_id)
user_ids = list(user_qs.values_list('id', flat=True))
if not user_ids:
continue
role = getattr(jt, jt_role)
logger.debug('{} {} on jt {} for users {} via inventory.organization {}'.format(
'Removing' if backward else 'Setting',
jt_role, jt.pk, user_ids, org.pk
))
if not backward:
# in reverse, the explicit role becomes redundant
role.members.add(*user_ids)
else:
role.members.remove(*user_ids)
changed_ct += len(user_ids)
if changed_ct:
logger.info('{} explicit JT permission for {} users in {:.4f} seconds'.format(
'Removed' if backward else 'Added',
changed_ct, time() - start
))
def restore_inventory_admins(apps, schema_editor):
_restore_inventory_admins(apps, schema_editor)
def restore_inventory_admins_backward(apps, schema_editor):
_restore_inventory_admins(apps, schema_editor, backward=True)
def rebuild_role_hierarchy(apps, schema_editor):
'''
This should be called in any migration when ownerships are changed.
Ex. I remove a user from the admin_role of a credential.
Ancestors are cached from parents for performance, this re-computes ancestors.
'''
logger.info('Computing role roots..')
start = time()
roots = Role.objects \
@@ -46,14 +231,74 @@ def rebuild_role_hierarchy(apps, schema_editor):
start = time()
Role.rebuild_role_ancestor_list(roots, [])
stop = time()
logger.info('Rebuild completed in %f seconds' % (stop - start))
logger.info('Rebuild ancestors completed in %f seconds' % (stop - start))
logger.info('Done.')
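Rebuilding ancestors from parent links, as `rebuild_role_hierarchy` does via `Role.rebuild_role_ancestor_list`, amounts to computing a transitive closure over the parent graph. A minimal sketch of that computation using plain dicts instead of the Role model (the function name and data shapes here are illustrative, not AWX API):

```python
# Minimal sketch: recompute cached ancestor sets from direct parent links,
# analogous to what Role.rebuild_role_ancestor_list maintains for Role rows.
def rebuild_ancestors(parents):
    """parents maps role_id -> set of direct parent role_ids."""
    ancestors = {}

    def resolve(role_id, seen):
        if role_id in ancestors:
            return ancestors[role_id]
        if role_id in seen:  # guard against accidental cycles
            return set()
        seen.add(role_id)
        acc = set()
        for parent_id in parents.get(role_id, ()):
            acc.add(parent_id)
            acc |= resolve(parent_id, seen)
        ancestors[role_id] = acc
        return acc

    for role_id in parents:
        resolve(role_id, set())
    return ancestors
```

Caching the full ancestor set per role is what makes "user in jt.admin_role"-style checks a single membership query at runtime, at the cost of recomputation whenever ownerships change.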
def delete_all_user_roles(apps, schema_editor):
def rebuild_role_parentage(apps, schema_editor, models=None):
'''
This should be called in any migration when any parent_role entry
is modified so that the cached parent fields will be updated. Ex:
foo_role = ImplicitRoleField(
parent_role=['bar_role'] # change to parent_role=['admin_role']
)
This is like rebuild_role_hierarchy, but that method updates ancestors,
whereas this method updates parents.
'''
start = time()
seen_models = set()
model_ct = 0
noop_ct = 0
ContentType = apps.get_model('contenttypes', "ContentType")
Role = apps.get_model('main', "Role")
User = apps.get_model('auth', "User")
user_content_type = ContentType.objects.get_for_model(User)
for role in Role.objects.filter(content_type=user_content_type).iterator():
role.delete()
additions = set()
removals = set()
role_qs = Role.objects
if models:
# update_role_parentage_for_instance is expensive
# if the models have been downselected, ignore those which are not in the list
ct_ids = list(ContentType.objects.filter(
model__in=[name.lower() for name in models]
).values_list('id', flat=True))
role_qs = role_qs.filter(content_type__in=ct_ids)
for role in role_qs.iterator():
if not role.object_id:
continue
model_tuple = (role.content_type_id, role.object_id)
if model_tuple in seen_models:
continue
seen_models.add(model_tuple)
# The GenericForeignKey does not work correctly in migrations
# when accessed as role.content_object,
# so we do the lookup ourselves with the current migration models
ct = role.content_type
app = ct.app_label
ct_model = apps.get_model(app, ct.model)
content_object = ct_model.objects.get(pk=role.object_id)
parents_added, parents_removed = update_role_parentage_for_instance(content_object)
additions.update(parents_added)
removals.update(parents_removed)
if parents_added:
model_ct += 1
logger.debug('Added to parents of roles {} of {}'.format(parents_added, content_object))
if parents_removed:
model_ct += 1
logger.debug('Removed from parents of roles {} of {}'.format(parents_removed, content_object))
else:
noop_ct += 1
logger.debug('No changes to role parents for {} resources'.format(noop_ct))
logger.debug('Added parents to {} roles'.format(len(additions)))
logger.debug('Removed parents from {} roles'.format(len(removals)))
if model_ct:
logger.info('Updated implicit parents of {} resources'.format(model_ct))
logger.info('Rebuild parentage completed in %f seconds' % (time() - start))
# this is run because the ordinary signals for
# Role.parents.add and Role.parents.remove are not called in migrations
Role.rebuild_role_ancestor_list(list(additions), list(removals))
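The `seen_models` bookkeeping above exists because many Role rows can point at the same `(content_type_id, object_id)` pair, and `update_role_parentage_for_instance` is expensive enough that each object should be processed only once. A small stand-alone sketch of that dedup, with roles reduced to tuples (not actual model instances):

```python
# Sketch of the dedup in rebuild_role_parentage: yield each distinct
# (content_type_id, object_id) pair once, skipping roles with no object.
def unique_role_objects(roles):
    seen = set()
    for content_type_id, object_id in roles:
        if not object_id:
            continue
        key = (content_type_id, object_id)
        if key in seen:
            continue
        seen.add(key)
        yield key
```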

View File

@@ -3,6 +3,7 @@
# Django
from django.conf import settings # noqa
from django.db import connection
from django.db.models.signals import pre_delete # noqa
# AWX
@@ -58,7 +59,6 @@ from awx.main.models.workflow import ( # noqa
WorkflowJob, WorkflowJobNode, WorkflowJobOptions, WorkflowJobTemplate,
WorkflowJobTemplateNode, WorkflowApproval, WorkflowApprovalTemplate,
)
from awx.main.models.channels import ChannelGroup # noqa
from awx.api.versioning import reverse
from awx.main.models.oauth import ( # noqa
OAuth2AccessToken, OAuth2Application
@@ -80,6 +80,26 @@ User.add_to_class('can_access_with_errors', check_user_access_with_errors)
User.add_to_class('accessible_objects', user_accessible_objects)
def enforce_bigint_pk_migration():
# see: https://github.com/ansible/awx/issues/6010
# look at all the event tables and verify that they have been fully migrated
# from the *old* int primary key table to the replacement bigint table
# if not, attempt to migrate them in the background
for tblname in (
'main_jobevent', 'main_inventoryupdateevent',
'main_projectupdateevent', 'main_adhoccommandevent',
'main_systemjobevent'
):
with connection.cursor() as cursor:
cursor.execute(
'SELECT 1 FROM information_schema.tables WHERE table_name=%s',
(f'_old_{tblname}',)
)
if bool(cursor.rowcount):
from awx.main.tasks import migrate_legacy_event_data
migrate_legacy_event_data.apply_async([tblname])
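The check above boils down to "does a leftover `_old_<table>` still exist in the catalog?" — if so, the bigint migration has not finished and a background task is queued. A rough, runnable analogue using SQLite's `sqlite_master` in place of Postgres's `information_schema.tables` (the function name is illustrative):

```python
import sqlite3

def tables_needing_migration(conn, tblnames):
    # In Postgres this would query information_schema.tables for the
    # '_old_' prefixed name; sqlite_master plays the same role here.
    pending = []
    for tblname in tblnames:
        cur = conn.execute(
            "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?",
            (f'_old_{tblname}',)
        )
        if cur.fetchone():
            pending.append(tblname)
    return pending
```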
def cleanup_created_modified_by(sender, **kwargs):
# work around a bug in django-polymorphic that doesn't properly
# handle cascades for reverse foreign keys on the polymorphic base model

View File

@@ -1,6 +0,0 @@
from django.db import models
class ChannelGroup(models.Model):
group = models.CharField(max_length=200, unique=True)
channels = models.TextField()

View File

@@ -4,7 +4,7 @@ import datetime
import logging
from collections import defaultdict
from django.db import models, DatabaseError
from django.db import models, DatabaseError, connection
from django.utils.dateparse import parse_datetime
from django.utils.text import Truncator
from django.utils.timezone import utc
@@ -356,6 +356,14 @@ class BasePlaybookEvent(CreatedModifiedModel):
job_id=self.job_id, uuid__in=failed
).update(failed=True)
# send success/failure notifications when we've finished handling the playbook_on_stats event
from awx.main.tasks import handle_success_and_failure_notifications # circular import
def _send_notifications():
handle_success_and_failure_notifications.apply_async([self.job.id])
connection.on_commit(_send_notifications)
for field in ('playbook', 'play', 'task', 'role'):
value = force_text(event_data.get(field, '')).strip()
if value != getattr(self, field):
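The `connection.on_commit(_send_notifications)` hook in the hunk above defers dispatching the notification task until the surrounding database transaction actually commits, so a rolled-back transaction never sends notifications for rows that don't exist. A toy version of that deferral contract, with no Django involved:

```python
class FakeConnection:
    """Toy stand-in for Django's transaction-aware on_commit hook."""
    def __init__(self):
        self._callbacks = []

    def on_commit(self, func):
        # registered callbacks are held, not executed, until commit
        self._callbacks.append(func)

    def commit(self):
        # run (and clear) deferred callbacks only once the commit happens
        callbacks, self._callbacks = self._callbacks, []
        for func in callbacks:
            func()
```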
@@ -430,6 +438,7 @@ class JobEvent(BasePlaybookEvent):
('job', 'parent_uuid'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
job = models.ForeignKey(
'Job',
related_name='job_events',
@@ -518,6 +527,7 @@ class ProjectUpdateEvent(BasePlaybookEvent):
('project_update', 'end_line'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
project_update = models.ForeignKey(
'ProjectUpdate',
related_name='project_update_events',
@@ -669,6 +679,7 @@ class AdHocCommandEvent(BaseCommandEvent):
FAILED_EVENTS = [x[0] for x in EVENT_TYPES if x[2]]
EVENT_CHOICES = [(x[0], x[1]) for x in EVENT_TYPES]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
event = models.CharField(
max_length=100,
choices=EVENT_CHOICES,
@@ -731,6 +742,7 @@ class InventoryUpdateEvent(BaseCommandEvent):
('inventory_update', 'end_line'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
inventory_update = models.ForeignKey(
'InventoryUpdate',
related_name='inventory_update_events',
@@ -764,6 +776,7 @@ class SystemJobEvent(BaseCommandEvent):
('system_job', 'end_line'),
]
id = models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
system_job = models.ForeignKey(
'SystemJob',
related_name='system_job_events',

View File

@@ -53,6 +53,13 @@ class Instance(HasPolicyEditsMixin, BaseModel):
uuid = models.CharField(max_length=40)
hostname = models.CharField(max_length=250, unique=True)
ip_address = models.CharField(
blank=True,
null=True,
default=None,
max_length=50,
unique=True,
)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
last_isolated_check = models.DateTimeField(

View File

@@ -426,9 +426,9 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
'''
def _get_related_jobs(self):
return UnifiedJob.objects.non_polymorphic().filter(
Q(Job___inventory=self) |
Q(InventoryUpdate___inventory_source__inventory=self) |
Q(AdHocCommand___inventory=self)
Q(job__inventory=self) |
Q(inventoryupdate__inventory=self) |
Q(adhoccommand__inventory=self)
)
@@ -808,8 +808,8 @@ class Group(CommonModelNameNotUnique, RelatedJobsMixin):
'''
def _get_related_jobs(self):
return UnifiedJob.objects.non_polymorphic().filter(
Q(Job___inventory=self.inventory) |
Q(InventoryUpdate___inventory_source__groups=self)
Q(job__inventory=self.inventory) |
Q(inventoryupdate__inventory_source__groups=self)
)
@@ -821,7 +821,6 @@ class InventorySourceOptions(BaseModel):
injectors = dict()
SOURCE_CHOICES = [
('', _('Manual')),
('file', _('File, Directory or Script')),
('scm', _('Sourced from a Project')),
('ec2', _('Amazon EC2')),
@@ -932,8 +931,8 @@ class InventorySourceOptions(BaseModel):
source = models.CharField(
max_length=32,
choices=SOURCE_CHOICES,
blank=True,
default='',
blank=False,
default=None,
)
source_path = models.CharField(
max_length=1024,
@@ -1237,14 +1236,6 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
on_delete=models.CASCADE,
)
deprecated_group = models.OneToOneField(
'Group',
related_name='deprecated_inventory_source',
null=True,
default=None,
on_delete=models.CASCADE,
)
source_project = models.ForeignKey(
'Project',
related_name='scm_inventory_sources',
@@ -1277,10 +1268,14 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in InventorySourceOptions._meta.fields) | set(
['name', 'description', 'credentials', 'inventory']
['name', 'description', 'organization', 'credentials', 'inventory']
)
def save(self, *args, **kwargs):
# if this is a new object, inherit organization from its inventory
if not self.pk and self.inventory and self.inventory.organization_id and not self.organization_id:
self.organization_id = self.inventory.organization_id
# If update_fields has been specified, add our field names to it,
# if it hasn't been specified, then we're just doing a normal save.
update_fields = kwargs.get('update_fields', [])
@@ -1410,16 +1405,6 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
started=list(started_notification_templates),
success=list(success_notification_templates))
def clean_source(self): # TODO: remove in 3.3
source = self.source
if source and self.deprecated_group:
qs = self.deprecated_group.inventory_sources.filter(source__in=CLOUD_INVENTORY_SOURCES)
existing_sources = qs.exclude(pk=self.pk)
if existing_sources.count():
s = u', '.join([x.deprecated_group.name for x in existing_sources])
raise ValidationError(_('Unable to configure this item for cloud sync. It is already managed by %s.') % s)
return source
def clean_update_on_project_update(self):
if self.update_on_project_update is True and \
self.source == 'scm' and \
@@ -1508,8 +1493,6 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
if self.inventory_source.inventory is not None:
websocket_data.update(dict(inventory_id=self.inventory_source.inventory.pk))
if self.inventory_source.deprecated_group is not None: # TODO: remove in 3.3
websocket_data.update(dict(group_id=self.inventory_source.deprecated_group.id))
return websocket_data
def get_absolute_url(self, request=None):

View File

@@ -199,7 +199,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
'labels', 'instance_groups', 'credentials', 'survey_spec'
]
FIELDS_TO_DISCARD_AT_COPY = ['vault_credential', 'credential']
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name')]
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name', 'organization')]
class Meta:
app_label = 'main'
@@ -262,13 +262,17 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
)
admin_role = ImplicitRoleField(
parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role']
parent_role=['organization.job_template_admin_role']
)
execute_role = ImplicitRoleField(
parent_role=['admin_role', 'project.organization.execute_role', 'inventory.organization.execute_role'],
parent_role=['admin_role', 'organization.execute_role'],
)
read_role = ImplicitRoleField(
parent_role=['project.organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'],
parent_role=[
'organization.auditor_role',
'inventory.organization.auditor_role', # partial support for old inheritance via inventory
'execute_role', 'admin_role'
],
)
@@ -279,7 +283,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in JobOptions._meta.fields) | set(
['name', 'description', 'survey_passwords', 'labels', 'credentials',
['name', 'description', 'organization', 'survey_passwords', 'labels', 'credentials',
'job_slice_number', 'job_slice_count']
)
@@ -319,6 +323,41 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
else:
return self.job_slice_count
def save(self, *args, **kwargs):
update_fields = kwargs.get('update_fields', [])
# if project is deleted for some reason, then keep the old organization
# to retain ownership for organization admins
if self.project and self.project.organization_id != self.organization_id:
self.organization_id = self.project.organization_id
if 'organization' not in update_fields and 'organization_id' not in update_fields:
update_fields.append('organization_id')
return super(JobTemplate, self).save(*args, **kwargs)
def validate_unique(self, exclude=None):
"""Custom over-ride for JT specifically
because organization is inferred from project after full_clean is finished
thus the organization field is not yet set when validation happens
"""
errors = []
for ut in JobTemplate.SOFT_UNIQUE_TOGETHER:
kwargs = {'name': self.name}
if self.project:
kwargs['organization'] = self.project.organization_id
else:
kwargs['organization'] = None
qs = JobTemplate.objects.filter(**kwargs)
if self.pk:
qs = qs.exclude(pk=self.pk)
if qs.exists():
errors.append(
'%s with this (%s) combination already exists.' % (
JobTemplate.__name__,
', '.join(set(ut) - {'polymorphic_ctype'})
)
)
if errors:
raise ValidationError(errors)
def create_unified_job(self, **kwargs):
prevent_slicing = kwargs.pop('_prevent_slicing', False)
slice_ct = self.get_effective_slice_ct(kwargs)
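`validate_unique` above enforces the `SOFT_UNIQUE_TOGETHER` constraint of `('polymorphic_ctype', 'name', 'organization')` manually, since the organization is only inferred from the project late in validation. The core check can be sketched against plain tuples instead of a queryset (function and parameter names here are illustrative):

```python
def soft_unique_violations(existing, name, organization, pk=None):
    # 'existing' stands in for the JobTemplate queryset as a list of
    # (pk, name, organization) tuples; pk excludes the row being saved,
    # mirroring qs.exclude(pk=self.pk).
    errors = []
    for other_pk, other_name, other_org in existing:
        if pk is not None and other_pk == pk:
            continue
        if other_name == name and other_org == organization:
            errors.append(
                'JobTemplate with this (name, organization) combination already exists.'
            )
            break
    return errors
```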
@@ -479,13 +518,13 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
success_notification_templates = list(base_notification_templates.filter(
unifiedjobtemplate_notification_templates_for_success__in=[self, self.project]))
# Get Organization NotificationTemplates
if self.project is not None and self.project.organization is not None:
if self.organization is not None:
error_notification_templates = set(error_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_errors=self.project.organization)))
organization_notification_templates_for_errors=self.organization)))
started_notification_templates = set(started_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_started=self.project.organization)))
organization_notification_templates_for_started=self.organization)))
success_notification_templates = set(success_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_success=self.project.organization)))
organization_notification_templates_for_success=self.organization)))
return dict(error=list(error_notification_templates),
started=list(started_notification_templates),
success=list(success_notification_templates))
@@ -588,7 +627,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
for virtualenv in (
self.job_template.custom_virtualenv if self.job_template else None,
self.project.custom_virtualenv,
self.project.organization.custom_virtualenv if self.project.organization else None
self.organization.custom_virtualenv if self.organization else None
):
if virtualenv:
return virtualenv
@@ -741,8 +780,8 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
@property
def preferred_instance_groups(self):
if self.project is not None and self.project.organization is not None:
organization_groups = [x for x in self.project.organization.instance_groups.all()]
if self.organization is not None:
organization_groups = [x for x in self.organization.instance_groups.all()]
else:
organization_groups = []
if self.inventory is not None:
@@ -1144,7 +1183,7 @@ class SystemJobTemplate(UnifiedJobTemplate, SystemJobOptions):
@classmethod
def _get_unified_job_field_names(cls):
return ['name', 'description', 'job_type', 'extra_vars']
return ['name', 'description', 'organization', 'job_type', 'extra_vars']
def get_absolute_url(self, request=None):
return reverse('api:system_job_template_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -269,7 +269,7 @@ class JobNotificationMixin(object):
'timeout', 'use_fact_cache', 'launch_type', 'status', 'failed', 'started', 'finished',
'elapsed', 'job_explanation', 'execution_node', 'controller_node', 'allow_simultaneous',
'scm_revision', 'diff_mode', 'job_slice_number', 'job_slice_count', 'custom_virtualenv',
'approval_status', 'approval_node_name', 'workflow_url',
'approval_status', 'approval_node_name', 'workflow_url', 'scm_branch',
{'host_status_counts': ['skipped', 'ok', 'changed', 'failed', 'failures', 'dark'
'processed', 'rescued', 'ignored']},
{'summary_fields': [{'inventory': ['id', 'name', 'description', 'has_active_failures',
@@ -313,6 +313,7 @@ class JobNotificationMixin(object):
'modified': datetime.datetime(2018, 12, 13, 6, 4, 0, 0, tzinfo=datetime.timezone.utc),
'name': 'Stub JobTemplate',
'playbook': 'ping.yml',
'scm_branch': '',
'scm_revision': '',
'skip_tags': '',
'start_at_task': '',

View File

@@ -6,7 +6,6 @@
# Django
from django.conf import settings
from django.db import models
from django.db.models import Q
from django.contrib.auth.models import User
from django.contrib.sessions.models import Session
from django.utils.timezone import now as tz_now
@@ -106,12 +105,7 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
RelatedJobsMixin
'''
def _get_related_jobs(self):
project_ids = self.projects.all().values_list('id')
return UnifiedJob.objects.non_polymorphic().filter(
Q(Job___project__in=project_ids) |
Q(ProjectUpdate___project__in=project_ids) |
Q(InventoryUpdate___inventory_source__inventory__organization=self)
)
return UnifiedJob.objects.non_polymorphic().filter(organization=self)
class Team(CommonModelNameNotUnique, ResourceMixin):

View File

@@ -199,7 +199,7 @@ class ProjectOptions(models.Model):
results = []
project_path = self.get_project_path()
if project_path:
for dirpath, dirnames, filenames in os.walk(smart_str(project_path)):
for dirpath, dirnames, filenames in os.walk(smart_str(project_path), followlinks=True):
if skip_directory(dirpath):
continue
for filename in filenames:
@@ -254,13 +254,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
app_label = 'main'
ordering = ('id',)
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=models.CASCADE,
related_name='projects',
)
scm_update_on_launch = models.BooleanField(
default=False,
help_text=_('Update the project when a job is launched that uses the project.'),
@@ -329,9 +322,16 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in ProjectOptions._meta.fields) | set(
['name', 'description']
['name', 'description', 'organization']
)
def clean_organization(self):
if self.pk:
old_org_id = getattr(self, '_prior_values_store', {}).get('organization_id', None)
if self.organization_id != old_org_id and self.jobtemplates.exists():
raise ValidationError({'organization': _('Organization cannot be changed when in use by job templates.')})
return self.organization
def save(self, *args, **kwargs):
new_instance = not bool(self.pk)
pre_save_vals = getattr(self, '_prior_values_store', {})
@@ -450,8 +450,8 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
'''
def _get_related_jobs(self):
return UnifiedJob.objects.non_polymorphic().filter(
models.Q(Job___project=self) |
models.Q(ProjectUpdate___project=self)
models.Q(job__project=self) |
models.Q(projectupdate__project=self)
)
def delete(self, *args, **kwargs):
@@ -584,8 +584,8 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
@property
def preferred_instance_groups(self):
if self.project is not None and self.project.organization is not None:
organization_groups = [x for x in self.project.organization.instance_groups.all()]
if self.organization is not None:
organization_groups = [x for x in self.organization.instance_groups.all()]
else:
organization_groups = []
template_groups = [x for x in super(ProjectUpdate, self).preferred_instance_groups]

View File

@@ -191,7 +191,7 @@ class Schedule(PrimordialModel, LaunchTimeConfig):
return rrule
@classmethod
def rrulestr(cls, rrule, **kwargs):
def rrulestr(cls, rrule, fast_forward=True, **kwargs):
"""
Apply our own custom rrule parsing requirements
"""
@@ -205,11 +205,17 @@ class Schedule(PrimordialModel, LaunchTimeConfig):
'A valid TZID must be provided (e.g., America/New_York)'
)
if 'MINUTELY' in rrule or 'HOURLY' in rrule:
if fast_forward and ('MINUTELY' in rrule or 'HOURLY' in rrule):
try:
first_event = x[0]
if first_event < now() - datetime.timedelta(days=365 * 5):
raise ValueError('RRULE values with more than 1000 events are not allowed.')
if first_event < now():
# hourly/minutely rrules with far-past DTSTART values
# are *really* slow to precompute
# start *from* one week ago to speed things up drastically
dtstart = x._rrule[0]._dtstart.strftime(':%Y%m%dT')
new_start = (now() - datetime.timedelta(days=7)).strftime(':%Y%m%dT')
new_rrule = rrule.replace(dtstart, new_start)
return Schedule.rrulestr(new_rrule, fast_forward=False)
except IndexError:
pass
return x
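The fast-forward in the hunk above rewrites a far-past `DTSTART` on HOURLY/MINUTELY rules to one week ago, because computing occurrences from years back is prohibitively slow. A stdlib-only sketch of that string rewrite, assuming the same `:%Y%m%dT` timestamp shape the original code matches on (no dateutil involved):

```python
import datetime
import re

def fast_forward_dtstart(rrule_text, now, window_days=7):
    # Only hourly/minutely rules are slow enough to need this treatment.
    if 'MINUTELY' not in rrule_text and 'HOURLY' not in rrule_text:
        return rrule_text
    m = re.search(r':(\d{8})T', rrule_text)
    if not m:
        return rrule_text
    start = datetime.datetime.strptime(m.group(1), '%Y%m%d')
    if start >= now:
        return rrule_text
    # rewrite DTSTART to one week ago, as the original code does via
    # rrule.replace(dtstart, new_start)
    new_start = (now - datetime.timedelta(days=window_days)).strftime(':%Y%m%dT')
    return rrule_text.replace(':%sT' % m.group(1), new_start)
```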

View File

@@ -36,6 +36,7 @@ from awx.main.models.base import (
NotificationFieldsModel,
prevent_search
)
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
@@ -102,7 +103,7 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
ordering = ('name',)
# unique_together here is intentionally commented out. Please make sure sub-classes of this model
# contain at least this uniqueness restriction: SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name')]
#unique_together = [('polymorphic_ctype', 'name')]
#unique_together = [('polymorphic_ctype', 'name', 'organization')]
old_pk = models.PositiveIntegerField(
null=True,
@@ -157,6 +158,14 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
default='ok',
editable=False,
)
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=polymorphic.SET_NULL,
related_name='%(class)ss',
help_text=_('The organization used to determine access to this template.'),
)
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss',
@@ -698,7 +707,15 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
null=True,
default=None,
on_delete=polymorphic.SET_NULL,
help_text=_('The Rampart/Instance group the job was run under'),
help_text=_('The Instance group the job was run under'),
)
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=polymorphic.SET_NULL,
related_name='%(class)ss',
help_text=_('The organization used to determine access to this unified job.'),
)
credentials = models.ManyToManyField(
'Credential',
@@ -1344,7 +1361,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
timeout = 5
try:
running = self.celery_task_id in ControlDispatcher(
'dispatcher', self.execution_node
'dispatcher', self.controller_node or self.execution_node
).running(timeout=timeout)
except socket.timeout:
logger.error('could not reach dispatcher on {} within {}s'.format(
@@ -1450,7 +1467,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
return r
def get_queue_name(self):
return self.controller_node or self.execution_node or settings.CELERY_DEFAULT_QUEUE
return self.controller_node or self.execution_node or get_local_queuename()
def is_isolated(self):
return bool(self.controller_node)

View File

@@ -4,6 +4,7 @@
# Python
import json
import logging
from uuid import uuid4
from copy import copy
from urllib.parse import urljoin
@@ -121,6 +122,7 @@ class WorkflowNodeBase(CreatedModifiedModel, LaunchTimeConfig):
create_kwargs[field_name] = kwargs[field_name]
elif hasattr(self, field_name):
create_kwargs[field_name] = getattr(self, field_name)
create_kwargs['identifier'] = self.identifier
new_node = WorkflowJobNode.objects.create(**create_kwargs)
if self.pk:
allowed_creds = self.credentials.all()
@@ -135,7 +137,7 @@ class WorkflowJobTemplateNode(WorkflowNodeBase):
FIELDS_TO_PRESERVE_AT_COPY = [
'unified_job_template', 'workflow_job_template', 'success_nodes', 'failure_nodes',
'always_nodes', 'credentials', 'inventory', 'extra_data', 'survey_passwords',
'char_prompts', 'all_parents_must_converge'
'char_prompts', 'all_parents_must_converge', 'identifier'
]
REENCRYPTION_BLACKLIST_AT_COPY = ['extra_data', 'survey_passwords']
@@ -144,6 +146,21 @@ class WorkflowJobTemplateNode(WorkflowNodeBase):
related_name='workflow_job_template_nodes',
on_delete=models.CASCADE,
)
identifier = models.CharField(
max_length=512,
default=uuid4,
blank=False,
help_text=_(
'An identifier for this node that is unique within its workflow. '
'It is copied to workflow job nodes corresponding to this node.'),
)
class Meta:
app_label = 'main'
unique_together = (("identifier", "workflow_job_template"),)
indexes = [
models.Index(fields=['identifier']),
]
def get_absolute_url(self, request=None):
return reverse('api:workflow_job_template_node_detail', kwargs={'pk': self.pk}, request=request)
@@ -213,6 +230,18 @@ class WorkflowJobNode(WorkflowNodeBase):
"semantics will mark this True if the node is in a path that will "
"decidedly not be ran. A value of False means the node may not run."),
)
identifier = models.CharField(
max_length=512,
blank=True, # blank denotes pre-migration job nodes
help_text=_('An identifier corresponding to the workflow job template node that this node was created from.'),
)
class Meta:
app_label = 'main'
indexes = [
models.Index(fields=["identifier", "workflow_job"]),
models.Index(fields=['identifier']),
]
def get_absolute_url(self, request=None):
return reverse('api:workflow_job_node_detail', kwargs={'pk': self.pk}, request=request)
@@ -335,7 +364,7 @@ class WorkflowJobOptions(LaunchTimeConfigBase):
@classmethod
def _get_unified_job_field_names(cls):
r = set(f.name for f in WorkflowJobOptions._meta.fields) | set(
['name', 'description', 'survey_passwords', 'labels', 'limit', 'scm_branch']
['name', 'description', 'organization', 'survey_passwords', 'labels', 'limit', 'scm_branch']
)
r.remove('char_prompts') # needed due to copying launch config to launch config
return r
@@ -376,19 +405,12 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name', 'organization')]
FIELDS_TO_PRESERVE_AT_COPY = [
'labels', 'instance_groups', 'workflow_job_template_nodes', 'credentials', 'survey_spec'
'labels', 'organization', 'instance_groups', 'workflow_job_template_nodes', 'credentials', 'survey_spec'
]
class Meta:
app_label = 'main'
organization = models.ForeignKey(
'Organization',
blank=True,
null=True,
on_delete=models.SET_NULL,
related_name='workflows',
)
ask_inventory_on_launch = AskForField(
blank=True,
default=False,
@@ -749,9 +771,9 @@ class WorkflowApproval(UnifiedJob, JobNotificationMixin):
def signal_start(self, **kwargs):
can_start = super(WorkflowApproval, self).signal_start(**kwargs)
self.send_approval_notification('running')
self.started = self.created
self.save(update_fields=['started'])
self.send_approval_notification('running')
return can_start
def send_approval_notification(self, approval_status):

View File

@@ -89,7 +89,8 @@ class GrafanaBackend(AWXBaseEmailBackend, CustomNotificationBase):
grafana_data['isRegion'] = self.isRegion
grafana_data['dashboardId'] = self.dashboardId
grafana_data['panelId'] = self.panelId
grafana_data['tags'] = self.annotation_tags
if self.annotation_tags:
grafana_data['tags'] = self.annotation_tags
grafana_data['text'] = m.subject
grafana_headers['Authorization'] = "Bearer {}".format(self.grafana_key)
grafana_headers['Content-Type'] = "application/json"

View File

@@ -4,15 +4,11 @@
# Python
import json
import logging
import os
import redis
# Django
from django.conf import settings
# Kombu
from awx.main.dispatch.kombu import Connection
from kombu import Exchange, Producer
from kombu.serialization import registry
__all__ = ['CallbackQueueDispatcher']
@@ -28,47 +24,12 @@ class AnsibleJSONEncoder(json.JSONEncoder):
return super(AnsibleJSONEncoder, self).default(o)
registry.register(
'json-ansible',
lambda obj: json.dumps(obj, cls=AnsibleJSONEncoder),
lambda obj: json.loads(obj),
content_type='application/json',
content_encoding='utf-8'
)
class CallbackQueueDispatcher(object):
def __init__(self):
self.callback_connection = getattr(settings, 'BROKER_URL', None)
self.connection_queue = getattr(settings, 'CALLBACK_QUEUE', '')
self.connection = None
self.exchange = None
self.queue = getattr(settings, 'CALLBACK_QUEUE', '')
self.logger = logging.getLogger('awx.main.queue.CallbackQueueDispatcher')
self.connection = redis.Redis.from_url(settings.BROKER_URL)
def dispatch(self, obj):
if not self.callback_connection or not self.connection_queue:
return
active_pid = os.getpid()
for retry_count in range(4):
try:
if not hasattr(self, 'connection_pid'):
self.connection_pid = active_pid
if self.connection_pid != active_pid:
self.connection = None
if self.connection is None:
self.connection = Connection(self.callback_connection)
self.exchange = Exchange(self.connection_queue, type='direct')
producer = Producer(self.connection)
producer.publish(obj,
serializer='json-ansible',
compression='bzip2',
exchange=self.exchange,
declare=[self.exchange],
delivery_mode="persistent" if settings.PERSISTENT_CALLBACK_MESSAGES else "transient",
routing_key=self.connection_queue)
return
except Exception as e:
self.logger.info('Publish Job Event Exception: %r, retry=%d', e,
retry_count, exc_info=True)
self.connection.rpush(self.queue, json.dumps(obj, cls=AnsibleJSONEncoder))
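The rewritten dispatcher above replaces the kombu/AMQP producer with a single Redis list push (`rpush` of JSON onto the callback queue key); a consumer on the other end would pop from the same list. A toy in-memory equivalent of that push/pop contract (class names are illustrative; a real consumer would use a blocking `blpop` against redis-py):

```python
import json

class FakeRedis:
    """In-memory stand-in for the two Redis list ops this pattern needs."""
    def __init__(self):
        self.lists = {}

    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)

    def lpop(self, key):
        vals = self.lists.get(key)
        return vals.pop(0) if vals else None

class Dispatcher:
    def __init__(self, conn, queue='callback_tasks'):
        self.conn = conn
        self.queue = queue

    def dispatch(self, obj):
        # mirror CallbackQueueDispatcher.dispatch: serialize and rpush
        self.conn.rpush(self.queue, json.dumps(obj))
```

Compared with the deleted kombu path, this drops per-process connection tracking and retry loops entirely; Redis list pushes are fire-and-forget from the dispatcher's point of view.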

View File

@@ -1,8 +1,15 @@
from channels.routing import route
from django.conf.urls import url
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from . import consumers
channel_routing = [
route("websocket.connect", "awx.main.consumers.ws_connect", path=r'^/websocket/$'),
route("websocket.disconnect", "awx.main.consumers.ws_disconnect", path=r'^/websocket/$'),
route("websocket.receive", "awx.main.consumers.ws_receive", path=r'^/websocket/$'),
websocket_urlpatterns = [
url(r'websocket/$', consumers.EventConsumer),
url(r'websocket/broadcast/$', consumers.BroadcastConsumer),
]
application = ProtocolTypeRouter({
'websocket': AuthMiddlewareStack(
URLRouter(websocket_urlpatterns)
),
})

View File

@@ -123,7 +123,7 @@ class SimpleDAG(object):
self.root_nodes.discard(to_obj_ord)
if from_obj_ord is None and to_obj_ord is None:
raise LookupError("From object {} and to object not found".format(from_obj, to_obj))
raise LookupError("From object {} and to object {} not found".format(from_obj, to_obj))
elif from_obj_ord is None:
raise LookupError("From object not found {}".format(from_obj))
elif to_obj_ord is None:


@@ -226,7 +226,7 @@ class TaskManager():
# non-Ansible jobs on isolated instances run on controller
task.instance_group = rampart_group.controller
task.execution_node = random.choice(list(rampart_group.controller.instances.all().values_list('hostname', flat=True)))
logger.debug('Submitting isolated {} to queue {}.'.format(
logger.debug('Submitting isolated {} to queue {} on node {}.'.format(
task.log_format, task.instance_group.name, task.execution_node))
elif controller_node:
task.instance_group = rampart_group


@@ -5,11 +5,12 @@ import logging
# AWX
from awx.main.scheduler import TaskManager
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
logger = logging.getLogger('awx.main.scheduler')
@task()
@task(queue=get_local_queuename)
def run_task_manager():
logger.debug("Running Tower task manager.")
TaskManager().schedule()
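The change from `@task()` to `@task(queue=get_local_queuename)` passes a callable rather than a fixed string, so the queue name is resolved per node when the task is published instead of being baked in at import time. A minimal sketch of that deferred-resolution idea (the registry and hostname here are illustrative, not AWX's actual dispatcher internals):

```python
TASK_REGISTRY = {}

def task(queue=None):
    """Register a function together with its queue (a name or a callable)."""
    def decorator(fn):
        TASK_REGISTRY[fn.__name__] = (fn, queue)
        return fn
    return decorator

def resolve_queue(name):
    fn, queue = TASK_REGISTRY[name]
    # a callable queue is evaluated at publish time, so each node
    # can route the task to its own local queue
    return queue() if callable(queue) else queue

def get_local_queuename():
    return "node-hostname-1"  # hypothetical; AWX derives this per instance

@task(queue=get_local_queuename)
def run_task_manager():
    return "scheduled"
```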


@@ -6,7 +6,6 @@ import contextlib
import logging
import threading
import json
import pkg_resources
import sys
# Django
@@ -53,6 +52,7 @@ from awx.conf.utils import conf_to_dict
__all__ = []
logger = logging.getLogger('awx.main.signals')
analytics_logger = logging.getLogger('awx.analytics.activity_stream')
# Update has_active_failures for inventory/groups when a Host/Group is deleted,
# when a Host-Group or Group-Group relationship is updated, or when a Job is deleted
@@ -157,17 +157,26 @@ def cleanup_detached_labels_on_deleted_parent(sender, instance, **kwargs):
def save_related_job_templates(sender, instance, **kwargs):
'''save_related_job_templates loops through all of the
job templates that use an Inventory or Project that have had their
job templates that use an Inventory that have had their
Organization updated. This triggers the rebuilding of the RBAC hierarchy
and ensures the proper access restrictions.
'''
if sender not in (Project, Inventory):
if sender is not Inventory:
raise ValueError('This signal callback is only intended for use with Project or Inventory')
update_fields = kwargs.get('update_fields', None)
if ((update_fields and not ('organization' in update_fields or 'organization_id' in update_fields)) or
kwargs.get('created', False)):
return
if instance._prior_values_store.get('organization_id') != instance.organization_id:
jtq = JobTemplate.objects.filter(**{sender.__name__.lower(): instance})
for jt in jtq:
update_role_parentage_for_instance(jt)
parents_added, parents_removed = update_role_parentage_for_instance(jt)
if parents_added or parents_removed:
logger.info('Permissions on JT {} changed due to inventory {} organization change from {} to {}.'.format(
jt.pk, instance.pk, instance._prior_values_store.get('organization_id'), instance.organization_id
))
def connect_computed_field_signals():
@@ -183,7 +192,6 @@ def connect_computed_field_signals():
connect_computed_field_signals()
post_save.connect(save_related_job_templates, sender=Project)
post_save.connect(save_related_job_templates, sender=Inventory)
m2m_changed.connect(rebuild_role_ancestor_list, Role.parents.through)
m2m_changed.connect(rbac_activity_stream, Role.members.through)
@@ -356,12 +364,24 @@ def model_serializer_mapping():
}
def emit_activity_stream_change(instance):
if 'migrate' in sys.argv:
# don't emit activity stream external logs during migrations, it
# could be really noisy
return
from awx.api.serializers import ActivityStreamSerializer
actor = None
if instance.actor:
actor = instance.actor.username
summary_fields = ActivityStreamSerializer(instance).get_summary_fields(instance)
analytics_logger.info('Activity Stream update entry for %s' % str(instance.object1),
extra=dict(changes=instance.changes, relationship=instance.object_relationship_type,
actor=actor, operation=instance.operation,
object1=instance.object1, object2=instance.object2, summary_fields=summary_fields))
def activity_stream_create(sender, instance, created, **kwargs):
if created and activity_stream_enabled:
# TODO: remove deprecated_group conditional in 3.3
# Skip recording any inventory source directly associated with a group.
if isinstance(instance, InventorySource) and instance.deprecated_group:
return
_type = type(instance)
if getattr(_type, '_deferred', False):
return
@@ -392,6 +412,9 @@ def activity_stream_create(sender, instance, created, **kwargs):
else:
activity_entry.setting = conf_to_dict(instance)
activity_entry.save()
connection.on_commit(
lambda: emit_activity_stream_change(activity_entry)
)
def activity_stream_update(sender, instance, **kwargs):
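The `connection.on_commit(...)` calls added throughout these handlers defer the analytics emit until the surrounding database transaction actually commits, so no external log entry is written for an activity-stream row that later gets rolled back. The deferral semantics can be sketched without Django:

```python
class FakeTransaction:
    """Sketch of on_commit semantics: callbacks run only if the txn commits."""

    def __init__(self):
        self._callbacks = []

    def on_commit(self, fn):
        self._callbacks.append(fn)

    def commit(self):
        callbacks, self._callbacks = self._callbacks, []
        for fn in callbacks:
            fn()  # e.g. emit_activity_stream_change(entry)

    def rollback(self):
        self._callbacks = []  # discarded: nothing is ever emitted
```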
@@ -423,15 +446,14 @@ def activity_stream_update(sender, instance, **kwargs):
else:
activity_entry.setting = conf_to_dict(instance)
activity_entry.save()
connection.on_commit(
lambda: emit_activity_stream_change(activity_entry)
)
def activity_stream_delete(sender, instance, **kwargs):
if not activity_stream_enabled:
return
# TODO: remove deprecated_group conditional in 3.3
# Skip recording any inventory source directly associated with a group.
if isinstance(instance, InventorySource) and instance.deprecated_group:
return
# Inventory delete happens in the task system rather than request-response-cycle.
# If we trigger this handler there we may fall into db-integrity-related race conditions.
# So we add flag verification to prevent normal signal handling. This function will be
@@ -460,6 +482,9 @@ def activity_stream_delete(sender, instance, **kwargs):
object1=object1,
actor=get_current_user_or_none())
activity_entry.save()
connection.on_commit(
lambda: emit_activity_stream_change(activity_entry)
)
def activity_stream_associate(sender, instance, **kwargs):
@@ -533,6 +558,9 @@ def activity_stream_associate(sender, instance, **kwargs):
activity_entry.role.add(role)
activity_entry.object_relationship_type = obj_rel
activity_entry.save()
connection.on_commit(
lambda: emit_activity_stream_change(activity_entry)
)
@receiver(current_user_getter)
@@ -585,16 +613,6 @@ def deny_orphaned_approvals(sender, instance, **kwargs):
@receiver(post_save, sender=Session)
def save_user_session_membership(sender, **kwargs):
session = kwargs.get('instance', None)
if pkg_resources.get_distribution('channels').version >= '2':
# If you get into this code block, it means we upgraded channels, but
# didn't make the settings.SESSIONS_PER_USER feature work
raise RuntimeError(
'save_user_session_membership must be updated for channels>=2: '
'http://channels.readthedocs.io/en/latest/one-to-two.html#requirements'
)
if 'runworker' in sys.argv:
# don't track user session membership for websocket per-channel sessions
return
if not session:
return
user_id = session.get_decoded().get(SESSION_KEY, None)


@@ -26,7 +26,7 @@ import urllib.parse as urlparse
# Django
from django.conf import settings
from django.db import transaction, DatabaseError, IntegrityError
from django.db import transaction, DatabaseError, IntegrityError, ProgrammingError, connection
from django.db.models.fields.related import ForeignKey
from django.utils.timezone import now, timedelta
from django.utils.encoding import smart_str
@@ -59,7 +59,7 @@ from awx.main.models import (
Inventory, InventorySource, SmartInventoryMembership,
Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob,
JobEvent, ProjectUpdateEvent, InventoryUpdateEvent, AdHocCommandEvent, SystemJobEvent,
build_safe_env
build_safe_env, enforce_bigint_pk_migration
)
from awx.main.constants import ACTIVE_STATES
from awx.main.exceptions import AwxTaskError
@@ -73,6 +73,7 @@ from awx.main.utils import (get_ssh_version, update_scm_url,
get_awx_version)
from awx.main.utils.ansible import read_ansible_config
from awx.main.utils.common import _get_ansible_version, get_custom_venv_choices
from awx.main.utils.external_logging import reconfigure_rsyslog
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
@@ -135,6 +136,15 @@ def dispatch_startup():
if Instance.objects.me().is_controller():
awx_isolated_heartbeat()
# at process startup, detect the need to migrate old event records from int
# to bigint; at *some point* in the future, once certain versions of AWX
# and Tower fall out of use/support, we can probably just _assume_ that
# everybody has moved to bigint, and remove this code entirely
enforce_bigint_pk_migration()
# Update Tower's rsyslog.conf file based on logging settings in the db
reconfigure_rsyslog()
def inform_cluster_of_shutdown():
try:
@@ -151,7 +161,7 @@ def inform_cluster_of_shutdown():
logger.exception('Encountered problem with normal shutdown signal.')
@task()
@task(queue=get_local_queuename)
def apply_cluster_membership_policies():
started_waiting = time.time()
with advisory_lock('cluster_policy_lock', wait=True):
@@ -264,7 +274,7 @@ def apply_cluster_membership_policies():
logger.debug('Cluster policy computation finished in {} seconds'.format(time.time() - started_compute))
@task(queue='tower_broadcast_all', exchange_type='fanout')
@task(queue='tower_broadcast_all')
def handle_setting_changes(setting_keys):
orig_len = len(setting_keys)
for i in range(orig_len):
@@ -274,8 +284,14 @@ def handle_setting_changes(setting_keys):
logger.debug('cache delete_many(%r)', cache_keys)
cache.delete_many(cache_keys)
if any([
setting.startswith('LOG_AGGREGATOR')
for setting in setting_keys
]):
connection.on_commit(reconfigure_rsyslog)
@task(queue='tower_broadcast_all', exchange_type='fanout')
@task(queue='tower_broadcast_all')
def delete_project_files(project_path):
# TODO: possibly implement some retry logic
lock_file = project_path + '.lock'
@@ -293,7 +309,7 @@ def delete_project_files(project_path):
logger.exception('Could not remove lock file {}'.format(lock_file))
@task(queue='tower_broadcast_all', exchange_type='fanout')
@task(queue='tower_broadcast_all')
def profile_sql(threshold=1, minutes=1):
if threshold == 0:
cache.delete('awx-profile-sql-threshold')
@@ -307,7 +323,7 @@ def profile_sql(threshold=1, minutes=1):
logger.error('SQL QUERIES >={}s ENABLED FOR {} MINUTE(S)'.format(threshold, minutes))
@task()
@task(queue=get_local_queuename)
def send_notifications(notification_list, job_id=None):
if not isinstance(notification_list, list):
raise TypeError("notification_list should be of type list")
@@ -336,7 +352,7 @@ def send_notifications(notification_list, job_id=None):
logger.exception('Error saving notification {} result.'.format(notification.id))
@task()
@task(queue=get_local_queuename)
def gather_analytics():
from awx.conf.models import Setting
from rest_framework.fields import DateTimeField
@@ -489,10 +505,10 @@ def awx_isolated_heartbeat():
# Slow pass looping over isolated IGs and their isolated instances
if len(isolated_instance_qs) > 0:
logger.debug("Managing isolated instances {}.".format(','.join([inst.hostname for inst in isolated_instance_qs])))
isolated_manager.IsolatedManager().health_check(isolated_instance_qs)
isolated_manager.IsolatedManager(CallbackQueueDispatcher.dispatch).health_check(isolated_instance_qs)
@task()
@task(queue=get_local_queuename)
def awx_periodic_scheduler():
with advisory_lock('awx_periodic_scheduler_lock', wait=False) as acquired:
if acquired is False:
@@ -549,7 +565,7 @@ def awx_periodic_scheduler():
state.save()
@task()
@task(queue=get_local_queuename)
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
@@ -562,7 +578,7 @@ def handle_work_success(task_actual):
schedule_task_manager()
@task()
@task(queue=get_local_queuename)
def handle_work_error(task_id, *args, **kwargs):
subtasks = kwargs.get('subtasks', None)
logger.debug('Executing error task id %s, subtasks: %s' % (task_id, str(subtasks)))
@@ -602,7 +618,26 @@ def handle_work_error(task_id, *args, **kwargs):
pass
@task()
@task(queue=get_local_queuename)
def handle_success_and_failure_notifications(job_id):
uj = UnifiedJob.objects.get(pk=job_id)
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
return
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_id)
logger.warn(f"Failed to even try to send notifications for job '{uj}' due to job not being in finished state.")
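The retry loop above re-fetches the job up to five times, sleeping between attempts, because its events can be persisted before the status flips from running to successful. That poll-until-ready pattern can be sketched generically (the injectable `sleep` is only there so the sketch is testable without real delays):

```python
def poll_until(fetch, is_ready, attempts=5, sleep=lambda seconds: None):
    """Re-fetch an object until a readiness predicate holds, or give up."""
    obj = fetch()
    for _ in range(attempts):
        if is_ready(obj):
            return obj
        sleep(1)       # real code would time.sleep(1)
        obj = fetch()  # re-read to pick up the status change
    return None        # caller logs a warning and gives up
```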
@task(queue=get_local_queuename)
def update_inventory_computed_fields(inventory_id):
'''
Signal handler and wrapper around inventory.update_computed_fields to
@@ -644,7 +679,7 @@ def update_smart_memberships_for_inventory(smart_inventory):
return False
@task()
@task(queue=get_local_queuename)
def update_host_smart_inventory_memberships():
smart_inventories = Inventory.objects.filter(kind='smart', host_filter__isnull=False, pending_deletion=False)
changed_inventories = set([])
@@ -660,7 +695,49 @@ def update_host_smart_inventory_memberships():
smart_inventory.update_computed_fields()
@task()
@task(queue=get_local_queuename)
def migrate_legacy_event_data(tblname):
if 'event' not in tblname:
return
with advisory_lock(f'bigint_migration_{tblname}', wait=False) as acquired:
if acquired is False:
return
chunk = settings.JOB_EVENT_MIGRATION_CHUNK_SIZE
def _remaining():
try:
cursor.execute(f'SELECT MAX(id) FROM _old_{tblname};')
return cursor.fetchone()[0]
except ProgrammingError:
# the table is gone (migration is unnecessary)
return None
with connection.cursor() as cursor:
total_rows = _remaining()
while total_rows:
with transaction.atomic():
cursor.execute(
f'INSERT INTO {tblname} SELECT * FROM _old_{tblname} ORDER BY id DESC LIMIT {chunk} RETURNING id;'
)
last_insert_pk = cursor.fetchone()
if last_insert_pk is None:
# this means that the SELECT from the old table was
# empty, and there was nothing to insert (so we're done)
break
last_insert_pk = last_insert_pk[0]
cursor.execute(
f'DELETE FROM _old_{tblname} WHERE id IN (SELECT id FROM _old_{tblname} ORDER BY id DESC LIMIT {chunk});'
)
logger.warn(
f'migrated int -> bigint rows to {tblname} from _old_{tblname}; # ({last_insert_pk} rows remaining)'
)
if _remaining() is None:
cursor.execute(f'DROP TABLE IF EXISTS _old_{tblname}')
logger.warn(f'{tblname} primary key migration to bigint has finished')
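`migrate_legacy_event_data` above moves rows from `_old_<table>` into the new bigint-keyed table in fixed-size chunks, newest first, deleting each chunk after it is inserted so the migration is restartable and never holds a long transaction. The chunked move can be sketched with sqlite3 (table names and chunk size are illustrative):

```python
import sqlite3

def migrate_chunked(conn, old, new, chunk=2):
    """Move rows old -> new in descending-id chunks until old is empty."""
    cur = conn.cursor()
    while True:
        rows = cur.execute(
            f"SELECT id FROM {old} ORDER BY id DESC LIMIT {chunk}"
        ).fetchall()
        if not rows:
            break  # nothing left to migrate
        ids = [r[0] for r in rows]
        marks = ",".join("?" * len(ids))
        cur.execute(
            f"INSERT INTO {new} SELECT * FROM {old} WHERE id IN ({marks})", ids
        )
        cur.execute(f"DELETE FROM {old} WHERE id IN ({marks})", ids)
        conn.commit()  # each chunk is its own transaction, so restarts are safe
```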
@task(queue=get_local_queuename)
def delete_inventory(inventory_id, user_id, retries=5):
# Delete inventory as user
if user_id is None:
@@ -1162,7 +1239,6 @@ class BaseTask(object):
except json.JSONDecodeError:
pass
should_write_event = False
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
self.event_ct += 1
@@ -1174,7 +1250,7 @@ class BaseTask(object):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
return should_write_event
return False
def cancel_callback(self):
'''
@@ -1374,6 +1450,7 @@ class BaseTask(object):
if not params[v]:
del params[v]
self.dispatcher = CallbackQueueDispatcher()
if self.instance.is_isolated() or containerized:
module_args = None
if 'module_args' in params:
@@ -1388,6 +1465,7 @@ class BaseTask(object):
ansible_runner.utils.dump_artifacts(params)
isolated_manager_instance = isolated_manager.IsolatedManager(
self.event_handler,
canceled_callback=lambda: self.update_model(self.instance.pk).cancel_flag,
check_callback=self.check_handler,
pod_manager=pod_manager
@@ -1397,11 +1475,9 @@ class BaseTask(object):
params.get('playbook'),
params.get('module'),
module_args,
event_data_key=self.event_data_key,
ident=str(self.instance.pk))
self.event_ct = len(isolated_manager_instance.handled_events)
self.finished_callback(None)
else:
self.dispatcher = CallbackQueueDispatcher()
res = ansible_runner.interface.run(**params)
status = res.status
rc = res.rc
@@ -1479,7 +1555,7 @@ class BaseTask(object):
@task()
@task(queue=get_local_queuename)
class RunJob(BaseTask):
'''
Run a job using ansible-playbook.
@@ -1912,7 +1988,7 @@ class RunJob(BaseTask):
update_inventory_computed_fields.delay(inventory.id)
@task()
@task(queue=get_local_queuename)
class RunProjectUpdate(BaseTask):
model = ProjectUpdate
@@ -2273,7 +2349,7 @@ class RunProjectUpdate(BaseTask):
# force option is necessary because remote refs are not counted, although no information is lost
git_repo.delete_head(tmp_branch_name, force=True)
else:
copy_tree(project_path, destination_folder)
copy_tree(project_path, destination_folder, preserve_symlinks=1)
def post_run_hook(self, instance, status):
# To avoid hangs, very important to release lock even if errors happen here
@@ -2322,7 +2398,7 @@ class RunProjectUpdate(BaseTask):
return getattr(settings, 'AWX_PROOT_ENABLED', False)
@task()
@task(queue=get_local_queuename)
class RunInventoryUpdate(BaseTask):
model = InventoryUpdate
@@ -2590,7 +2666,7 @@ class RunInventoryUpdate(BaseTask):
)
@task()
@task(queue=get_local_queuename)
class RunAdHocCommand(BaseTask):
'''
Run an ad hoc command using ansible.
@@ -2780,7 +2856,7 @@ class RunAdHocCommand(BaseTask):
isolated_manager_instance.cleanup()
@task()
@task(queue=get_local_queuename)
class RunSystemJob(BaseTask):
model = SystemJob
@@ -2854,11 +2930,16 @@ def _reconstruct_relationships(copy_mapping):
new_obj.save()
@task()
@task(queue=get_local_queuename)
def deep_copy_model_obj(
model_module, model_name, obj_pk, new_obj_pk,
user_pk, sub_obj_list, permission_check_func=None
user_pk, uuid, permission_check_func=None
):
sub_obj_list = cache.get(uuid)
if sub_obj_list is None:
logger.error('Deep copy {} from {} to {} failed unexpectedly.'.format(model_name, obj_pk, new_obj_pk))
return
logger.debug('Deep copy {} from {} to {}.'.format(model_name, obj_pk, new_obj_pk))
from awx.api.generics import CopyAPIView
from awx.main.signals import disable_activity_stream
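The `deep_copy_model_obj` change above stops serializing a potentially large `sub_obj_list` into the task message itself; instead the caller stashes it in the cache under a UUID and passes only the key, and the worker bails out with an error if the entry has expired. A sketch of that pass-by-cache-key pattern (a dict stands in for the shared cache):

```python
import uuid

CACHE = {}  # stand-in for the shared cache (e.g. memcached or Redis)

def submit_deep_copy(sub_obj_list):
    """Producer side: stash the large payload, enqueue only its key."""
    key = str(uuid.uuid4())
    CACHE[key] = sub_obj_list
    return key  # this key is what goes into the task message

def run_deep_copy(key):
    """Worker side: fetch the payload; a missing key means it expired."""
    sub_obj_list = CACHE.get(key)
    if sub_obj_list is None:
        return "error: payload missing"
    return f"copied {len(sub_obj_list)} sub-objects"
```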


@@ -220,7 +220,7 @@ def create_job_template(name, roles=None, persisted=True, webhook_service='', **
if 'organization' in kwargs:
org = kwargs['organization']
if type(org) is not Organization:
org = mk_organization(org, '%s-desc'.format(org), persisted=persisted)
org = mk_organization(org, org, persisted=persisted)
if 'credential' in kwargs:
cred = kwargs['credential']
@@ -298,7 +298,7 @@ def create_organization(name, roles=None, persisted=True, **kwargs):
labels = {}
notification_templates = {}
org = mk_organization(name, '%s-desc'.format(name), persisted=persisted)
org = mk_organization(name, name, persisted=persisted)
if 'inventories' in kwargs:
for i in kwargs['inventories']:


@@ -0,0 +1,160 @@
import pytest
import tempfile
import os
import shutil
import csv
from django.utils.timezone import now
from django.db.backends.sqlite3.base import SQLiteCursorWrapper
from awx.main.analytics import collectors
from awx.main.models import (
ProjectUpdate,
InventorySource,
WorkflowJob,
WorkflowJobNode,
JobTemplate,
)
@pytest.fixture
def sqlite_copy_expert(request):
# copy_expert is postgres-specific, and SQLite doesn't support it; mock its
# behavior to test that it writes a file that contains stdout from events
path = tempfile.mkdtemp(prefix="copied_tables")
def write_stdout(self, sql, fd):
# Would be cool if we instead properly dissected the SQL query and verified
# it that way. But instead, we just take the naive approach here.
assert sql.startswith("COPY (")
assert sql.endswith(") TO STDOUT WITH CSV HEADER")
sql = sql.replace("COPY (", "")
sql = sql.replace(") TO STDOUT WITH CSV HEADER", "")
# sqlite equivalent
sql = sql.replace("ARRAY_AGG", "GROUP_CONCAT")
# Remove JSON style queries
# TODO: could replace JSON style queries with sqlite kind of equivalents
sql_new = []
for line in sql.split("\n"):
if line.find("main_jobevent.event_data::") == -1:
sql_new.append(line)
elif not line.endswith(","):
sql_new[-1] = sql_new[-1].rstrip(",")
sql = "\n".join(sql_new)
self.execute(sql)
results = self.fetchall()
headers = [i[0] for i in self.description]
csv_handle = csv.writer(
fd,
delimiter=",",
quoting=csv.QUOTE_ALL,
escapechar="\\",
lineterminator="\n",
)
csv_handle.writerow(headers)
csv_handle.writerows(results)
setattr(SQLiteCursorWrapper, "copy_expert", write_stdout)
request.addfinalizer(lambda: shutil.rmtree(path))
request.addfinalizer(lambda: delattr(SQLiteCursorWrapper, "copy_expert"))
return path
@pytest.mark.django_db
def test_copy_tables_unified_job_query(
sqlite_copy_expert, project, inventory, job_template
):
"""
Ensure that various unified job types are in the output of the query.
"""
time_start = now()
inv_src = InventorySource.objects.create(
name="inventory_update1", inventory=inventory, source="gce"
)
project_update_name = ProjectUpdate.objects.create(
project=project, name="project_update1"
).name
inventory_update_name = inv_src.create_unified_job().name
job_name = job_template.create_unified_job().name
with tempfile.TemporaryDirectory() as tmpdir:
collectors.copy_tables(time_start, tmpdir, subset="unified_jobs")
with open(os.path.join(tmpdir, "unified_jobs_table.csv")) as f:
lines = "".join([l for l in f])
assert project_update_name in lines
assert inventory_update_name in lines
assert job_name in lines
@pytest.fixture
def workflow_job(states=["new", "new", "new", "new", "new"]):
"""
Workflow topology:
node[0]
/\
s/ \f
/ \
node[1,5] node[3]
/ \
s/ \f
/ \
node[2] node[4]
"""
wfj = WorkflowJob.objects.create()
jt = JobTemplate.objects.create(name="test-jt")
nodes = [
WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=jt)
for i in range(0, 6)
]
for node, state in zip(nodes, states):
if state:
node.job = jt.create_job()
node.job.status = state
node.job.save()
node.save()
nodes[0].success_nodes.add(nodes[1])
nodes[0].success_nodes.add(nodes[5])
nodes[1].success_nodes.add(nodes[2])
nodes[0].failure_nodes.add(nodes[3])
nodes[3].failure_nodes.add(nodes[4])
return wfj
@pytest.mark.django_db
def test_copy_tables_workflow_job_node_query(sqlite_copy_expert, workflow_job):
time_start = now()
with tempfile.TemporaryDirectory() as tmpdir:
collectors.copy_tables(time_start, tmpdir, subset="workflow_job_node_query")
with open(os.path.join(tmpdir, "workflow_job_node_table.csv")) as f:
reader = csv.reader(f)
# Pop the headers
next(reader)
lines = [l for l in reader]
ids = [int(l[0]) for l in lines]
assert ids == list(
workflow_job.workflow_nodes.all().values_list("id", flat=True)
)
for index, relationship in zip(
[7, 8, 9], ["success_nodes", "failure_nodes", "always_nodes"]
):
for i, l in enumerate(lines):
related_nodes = (
[int(e) for e in l[index].split(",")] if l[index] else []
)
assert related_nodes == list(
getattr(workflow_job.workflow_nodes.all()[i], relationship)
.all()
.values_list("id", flat=True)
), f"(right side) workflow_nodes.all()[{i}].{relationship}.all()"


@@ -13,10 +13,10 @@ def test_empty():
"active_host_count": 0,
"credential": 0,
"custom_inventory_script": 0,
"custom_virtualenvs": 0, # dev env ansible3
"custom_virtualenvs": 0, # dev env ansible3
"host": 0,
"inventory": 0,
"inventories": {'normal': 0, 'smart': 0},
"inventories": {"normal": 0, "smart": 0},
"job_template": 0,
"notification_template": 0,
"organization": 0,
@@ -27,28 +27,97 @@ def test_empty():
"user": 0,
"workflow_job_template": 0,
"unified_job": 0,
"pending_jobs": 0
"pending_jobs": 0,
}
@pytest.mark.django_db
def test_database_counts(organization_factory, job_template_factory,
workflow_job_template_factory):
objs = organization_factory('org', superusers=['admin'])
jt = job_template_factory('test', organization=objs.organization,
inventory='test_inv', project='test_project',
credential='test_cred')
workflow_job_template_factory('test')
def test_database_counts(
organization_factory, job_template_factory, workflow_job_template_factory
):
objs = organization_factory("org", superusers=["admin"])
jt = job_template_factory(
"test",
organization=objs.organization,
inventory="test_inv",
project="test_project",
credential="test_cred",
)
workflow_job_template_factory("test")
models.Team(organization=objs.organization).save()
models.Host(inventory=jt.inventory).save()
models.Schedule(
rrule='DTSTART;TZID=America/New_York:20300504T150000',
unified_job_template=jt.job_template
rrule="DTSTART;TZID=America/New_York:20300504T150000",
unified_job_template=jt.job_template,
).save()
models.CustomInventoryScript(organization=objs.organization).save()
counts = collectors.counts(None)
for key in ('organization', 'team', 'user', 'inventory', 'credential',
'project', 'job_template', 'workflow_job_template', 'host',
'schedule', 'custom_inventory_script'):
for key in (
"organization",
"team",
"user",
"inventory",
"credential",
"project",
"job_template",
"workflow_job_template",
"host",
"schedule",
"custom_inventory_script",
):
assert counts[key] == 1
@pytest.mark.django_db
def test_inventory_counts(organization_factory, inventory_factory):
(inv1, inv2, inv3) = [inventory_factory(f"inv-{i}") for i in range(3)]
s1 = inv1.inventory_sources.create(name="src1", source="ec2")
s2 = inv1.inventory_sources.create(name="src2", source="file")
s3 = inv1.inventory_sources.create(name="src3", source="gce")
s1.hosts.create(name="host1", inventory=inv1)
s1.hosts.create(name="host2", inventory=inv1)
s1.hosts.create(name="host3", inventory=inv1)
s2.hosts.create(name="host4", inventory=inv1)
s2.hosts.create(name="host5", inventory=inv1)
s3.hosts.create(name="host6", inventory=inv1)
s1 = inv2.inventory_sources.create(name="src1", source="ec2")
s1.hosts.create(name="host1", inventory=inv2)
s1.hosts.create(name="host2", inventory=inv2)
s1.hosts.create(name="host3", inventory=inv2)
inv_counts = collectors.inventory_counts(None)
assert {
inv1.id: {
"name": "inv-0",
"kind": "",
"hosts": 6,
"sources": 3,
"source_list": [
{"name": "src1", "source": "ec2", "num_hosts": 3},
{"name": "src2", "source": "file", "num_hosts": 2},
{"name": "src3", "source": "gce", "num_hosts": 1},
],
},
inv2.id: {
"name": "inv-1",
"kind": "",
"hosts": 3,
"sources": 1,
"source_list": [{"name": "src1", "source": "ec2", "num_hosts": 3}],
},
inv3.id: {
"name": "inv-2",
"kind": "",
"hosts": 0,
"sources": 0,
"source_list": [],
},
} == inv_counts


@@ -1,7 +1,6 @@
import pytest
from awx.api.versioning import reverse
from awx.main.middleware import ActivityStreamMiddleware
from awx.main.models.activity_stream import ActivityStream
from awx.main.access import ActivityStreamAccess
from awx.conf.models import Setting
@@ -61,28 +60,6 @@ def test_ctint_activity_stream(monkeypatch, get, user, settings):
assert response.data['summary_fields']['setting'][0]['name'] == 'FOO'
@pytest.mark.django_db
def test_middleware_actor_added(monkeypatch, post, get, user, settings):
settings.ACTIVITY_STREAM_ENABLED = True
u = user('admin-poster', True)
url = reverse('api:organization_list')
response = post(url,
dict(name='test-org', description='test-desc'),
u,
middleware=ActivityStreamMiddleware())
assert response.status_code == 201
org_id = response.data['id']
activity_stream = ActivityStream.objects.filter(organization__pk=org_id).first()
url = reverse('api:activity_stream_detail', kwargs={'pk': activity_stream.pk})
response = get(url, u)
assert response.status_code == 200
assert response.data['summary_fields']['actor']['username'] == 'admin-poster'
@pytest.mark.django_db
def test_rbac_stream_resource_roles(activity_stream_entry, organization, org_admin, settings):
settings.ACTIVITY_STREAM_ENABLED = True


@@ -972,7 +972,7 @@ def test_field_removal(put, organization, admin, credentialtype_ssh):
['insights_inventories', Inventory()],
['unifiedjobs', Job()],
['unifiedjobtemplates', JobTemplate()],
['unifiedjobtemplates', InventorySource()],
['unifiedjobtemplates', InventorySource(source='ec2')],
['projects', Project()],
['workflowjobnodes', WorkflowJobNode()],
])


@@ -39,6 +39,26 @@ def test_extra_credentials(get, organization_factory, job_template_factory, cred
@pytest.mark.django_db
def test_job_relaunch_permission_denied_response(
post, get, inventory, project, credential, net_credential, machine_credential):
jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project, ask_credential_on_launch=True)
jt.credentials.add(machine_credential)
jt_user = User.objects.create(username='jobtemplateuser')
jt.execute_role.members.add(jt_user)
with impersonate(jt_user):
job = jt.create_unified_job()
# User capability is shown for this
r = get(job.get_absolute_url(), jt_user, expect=200)
assert r.data['summary_fields']['user_capabilities']['start']
# Job has prompted extra_credential, launch denied w/ message
job.launch_config.credentials.add(net_credential)
r = post(reverse('api:job_relaunch', kwargs={'pk':job.pk}), {}, jt_user, expect=403)
assert 'launched with prompted fields you do not have access to' in r.data['detail']
@pytest.mark.django_db
def test_job_relaunch_prompts_not_accepted_response(
post, get, inventory, project, credential, net_credential, machine_credential):
jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project)
jt.credentials.add(machine_credential)
jt_user = User.objects.create(username='jobtemplateuser')
@@ -53,8 +73,6 @@ def test_job_relaunch_permission_denied_response(
# Job has prompted extra_credential, launch denied w/ message
job.launch_config.credentials.add(net_credential)
r = post(reverse('api:job_relaunch', kwargs={'pk':job.pk}), {}, jt_user, expect=403)
assert 'launched with prompted fields' in r.data['detail']
assert 'do not have permission' in r.data['detail']
@pytest.mark.django_db
@@ -209,7 +227,8 @@ def test_block_related_unprocessed_events(mocker, organization, project, delete,
status='finished',
finished=time_of_finish,
job_template=job_template,
project=project
project=project,
organization=project.organization
)
view = RelatedJobsPreventDeleteMixin()
time_of_request = time_of_finish + relativedelta(seconds=2)


@@ -6,7 +6,7 @@ import pytest
# AWX
from awx.api.serializers import JobTemplateSerializer
from awx.api.versioning import reverse
from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplate
from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplate, Organization, Project
from awx.main.migrations import _save_password_keys as save_password_keys
# Django
@@ -30,14 +30,19 @@ def test_create(post, project, machine_credential, inventory, alice, grant_proje
project.use_role.members.add(alice)
if grant_inventory:
inventory.use_role.members.add(alice)
project.organization.job_template_admin_role.members.add(alice)
r = post(reverse('api:job_template_list'), {
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml',
}, alice)
assert r.status_code == expect
post(
url=reverse('api:job_template_list'),
data={
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml'
},
user=alice,
expect=expect
)
@pytest.mark.django_db
@@ -123,14 +128,18 @@ def test_create_with_forks_exceeding_maximum_xfail(alice, post, project, invento
project.use_role.members.add(alice)
inventory.use_role.members.add(alice)
settings.MAX_FORKS = 10
response = post(reverse('api:job_template_list'), {
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml',
'forks': 11,
}, alice)
assert response.status_code == 400
response = post(
url=reverse('api:job_template_list'),
data={
'name': 'Some name',
'project': project.id,
'inventory': inventory.id,
'playbook': 'helloworld.yml',
'forks': 11,
},
user=alice,
expect=400
)
assert 'Maximum number of forks (10) exceeded' in str(response.data)
@@ -510,6 +519,72 @@ def test_job_template_unset_custom_virtualenv(get, patch, organization_factory,
assert resp.data['custom_virtualenv'] is None
@pytest.mark.django_db
def test_jt_organization_follows_project(post, patch, admin_user):
org1 = Organization.objects.create(name='foo1')
org2 = Organization.objects.create(name='foo2')
project_common = dict(scm_type='git', playbook_files=['helloworld.yml'])
project1 = Project.objects.create(name='proj1', organization=org1, **project_common)
project2 = Project.objects.create(name='proj2', organization=org2, **project_common)
r = post(
url=reverse('api:job_template_list'),
data={
"name": "fooo",
"ask_inventory_on_launch": True,
"project": project1.pk,
"playbook": "helloworld.yml"
},
user=admin_user,
expect=201
)
data = r.data
assert data['organization'] == project1.organization_id
data['project'] = project2.id
jt = JobTemplate.objects.get(pk=data['id'])
r = patch(
url=jt.get_absolute_url(),
data=data,
user=admin_user,
expect=200
)
assert r.data['organization'] == project2.organization_id
@pytest.mark.django_db
def test_jt_organization_field_is_read_only(patch, post, project, admin_user):
org = project.organization
jt = JobTemplate.objects.create(
name='foo_jt',
ask_inventory_on_launch=True,
project=project, playbook='helloworld.yml'
)
org2 = Organization.objects.create(name='foo2')
r = patch(
url=jt.get_absolute_url(),
data={'organization': org2.id},
user=admin_user,
expect=200
)
assert r.data['organization'] == org.id
assert JobTemplate.objects.get(pk=jt.pk).organization == org
# similar test, but on creation
r = post(
url=reverse('api:job_template_list'),
data={
'name': 'foobar',
'project': project.id,
'organization': org2.id,
'ask_inventory_on_launch': True,
'playbook': 'helloworld.yml'
},
user=admin_user,
expect=201
)
assert r.data['organization'] == org.id
assert JobTemplate.objects.get(pk=r.data['id']).organization == org
@pytest.mark.django_db
def test_callback_disallowed_null_inventory(project):
jt = JobTemplate.objects.create(


@@ -1,6 +1,8 @@
import pytest
import base64
import json
import time
import pytest
from django.db import connection
from django.test.utils import override_settings
@@ -326,6 +328,38 @@ def test_refresh_accesstoken(oauth_application, post, get, delete, admin):
assert original_refresh_token.revoked # is not None
@pytest.mark.django_db
def test_refresh_token_expiration_is_respected(oauth_application, post, get, delete, admin):
response = post(
reverse('api:o_auth2_application_token_list', kwargs={'pk': oauth_application.pk}),
{'scope': 'read'}, admin, expect=201
)
assert AccessToken.objects.count() == 1
assert RefreshToken.objects.count() == 1
refresh_token = RefreshToken.objects.get(token=response.data['refresh_token'])
refresh_url = drf_reverse('api:oauth_authorization_root_view') + 'token/'
short_lived = {
'ACCESS_TOKEN_EXPIRE_SECONDS': 1,
'AUTHORIZATION_CODE_EXPIRE_SECONDS': 1,
'REFRESH_TOKEN_EXPIRE_SECONDS': 1
}
time.sleep(1)
with override_settings(OAUTH2_PROVIDER=short_lived):
response = post(
refresh_url,
data='grant_type=refresh_token&refresh_token=' + refresh_token.token,
content_type='application/x-www-form-urlencoded',
HTTP_AUTHORIZATION='Basic ' + smart_str(base64.b64encode(smart_bytes(':'.join([
oauth_application.client_id, oauth_application.client_secret
]))))
)
assert response.status_code == 403
assert b'The refresh token has expired.' in response.content
assert RefreshToken.objects.filter(token=refresh_token).exists()
assert AccessToken.objects.count() == 1
assert RefreshToken.objects.count() == 1
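The HTTP Basic header built inline above (base64 over `client_id:client_secret`) can be isolated into a small helper:

```python
import base64

def basic_auth_header(client_id, client_secret):
    # HTTP Basic auth: base64-encode "client_id:client_secret" and
    # prefix with "Basic ", as the refresh-token test does inline.
    token = base64.b64encode(':'.join([client_id, client_secret]).encode('utf-8'))
    return 'Basic ' + token.decode('ascii')
```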
@pytest.mark.django_db
def test_revoke_access_then_refreshtoken(oauth_application, post, get, delete, admin):


@@ -2,6 +2,8 @@ import pytest
from awx.api.versioning import reverse
from awx.main.models import Project
@pytest.fixture
def organization_resource_creator(organization, user):
@@ -19,21 +21,26 @@ def organization_resource_creator(organization, user):
for i in range(inventories):
inventory = organization.inventories.create(name="associated-inv %s" % i)
for i in range(projects):
organization.projects.create(name="test-proj %s" % i,
description="test-proj-desc")
Project.objects.create(
name="test-proj %s" % i,
description="test-proj-desc",
organization=organization
)
# Mix up the inventories and projects used by the job templates
i_proj = 0
i_inv = 0
for i in range(job_templates):
project = organization.projects.all()[i_proj]
project = Project.objects.filter(organization=organization)[i_proj]
# project = organization.projects.all()[i_proj]
inventory = organization.inventories.all()[i_inv]
project.jobtemplates.create(name="test-jt %s" % i,
description="test-job-template-desc",
inventory=inventory,
playbook="test_playbook.yml")
playbook="test_playbook.yml",
organization=organization)
i_proj += 1
i_inv += 1
if i_proj >= organization.projects.count():
if i_proj >= Project.objects.filter(organization=organization).count():
i_proj = 0
if i_inv >= organization.inventories.count():
i_inv = 0
@@ -179,12 +186,14 @@ def test_scan_JT_counted(resourced_organization, user, get):
@pytest.mark.django_db
def test_JT_not_double_counted(resourced_organization, user, get):
admin_user = user('admin', True)
proj = Project.objects.filter(organization=resourced_organization).all()[0]
# Add a run job template to the org
resourced_organization.projects.all()[0].jobtemplates.create(
proj.jobtemplates.create(
job_type='run',
inventory=resourced_organization.inventories.all()[0],
project=resourced_organization.projects.all()[0],
name='double-linked-job-template')
project=proj,
name='double-linked-job-template',
organization=resourced_organization)
counts_dict = COUNTS_PRIMES
counts_dict['job_templates'] += 1
@@ -197,38 +206,3 @@ def test_JT_not_double_counted(resourced_organization, user, get):
detail_response = get(reverse('api:organization_detail', kwargs={'pk': resourced_organization.pk}), admin_user)
assert detail_response.status_code == 200
assert detail_response.data['summary_fields']['related_field_counts'] == counts_dict
@pytest.mark.django_db
def test_JT_associated_with_project(organizations, project, user, get):
# Check that adding a project to an organization gets the project's JT
# included in the organization's JT count
external_admin = user('admin', True)
two_orgs = organizations(2)
organization = two_orgs[0]
other_org = two_orgs[1]
unrelated_inv = other_org.inventories.create(name='not-in-organization')
organization.projects.add(project)
project.jobtemplates.create(name="test-jt",
description="test-job-template-desc",
inventory=unrelated_inv,
playbook="test_playbook.yml")
response = get(reverse('api:organization_list'), external_admin)
assert response.status_code == 200
org_id = organization.id
counts = {}
for org_json in response.data['results']:
working_id = org_json['id']
counts[working_id] = org_json['summary_fields']['related_field_counts']
assert counts[org_id] == {
'users': 0,
'admins': 0,
'job_templates': 1,
'projects': 1,
'inventories': 0,
'teams': 0
}


@@ -1,6 +1,8 @@
import datetime
import pytest
from django.utils.encoding import smart_str
from django.utils.timezone import now
from awx.api.versioning import reverse
from awx.main.models import JobTemplate, Schedule
@@ -140,7 +142,6 @@ def test_encrypted_survey_answer(post, patch, admin_user, project, inventory, su
("DTSTART:20030925T104941Z RRULE:FREQ=DAILY;INTERVAL=10;COUNT=500;UNTIL=20040925T104941Z", "RRULE may not contain both COUNT and UNTIL"), # noqa
("DTSTART;TZID=America/New_York:20300308T050000Z RRULE:FREQ=DAILY;INTERVAL=1", "rrule parsing failed validation"),
("DTSTART:20300308T050000 RRULE:FREQ=DAILY;INTERVAL=1", "DTSTART cannot be a naive datetime"),
("DTSTART:19700101T000000Z RRULE:FREQ=MINUTELY;INTERVAL=1", "more than 1000 events are not allowed"), # noqa
])
def test_invalid_rrules(post, admin_user, project, inventory, rrule, error):
job_template = JobTemplate.objects.create(
@@ -342,6 +343,40 @@ def test_months_with_31_days(post, admin_user):
]
@pytest.mark.django_db
@pytest.mark.timeout(3)
@pytest.mark.parametrize('freq, delta, total_seconds', (
('MINUTELY', 1, 60),
('MINUTELY', 15, 15 * 60),
('HOURLY', 1, 3600),
('HOURLY', 4, 3600 * 4),
))
def test_really_old_dtstart(post, admin_user, freq, delta, total_seconds):
url = reverse('api:schedule_rrule')
# every <interval>, at the :30 second mark
rrule = f'DTSTART;TZID=America/New_York:20051231T000030 RRULE:FREQ={freq};INTERVAL={delta}'
start = now()
next_ten = post(url, {'rrule': rrule}, admin_user, expect=200).data['utc']
assert len(next_ten) == 10
# the first date is *in the future*
assert next_ten[0] >= start
# ...but *no more than* <interval> into the future
assert next_ten[0] <= now() + datetime.timedelta(**{
'minutes' if freq == 'MINUTELY' else 'hours': delta
})
# every date in the list is <interval> greater than the last
for i, x in enumerate(next_ten):
if i == 0:
continue
assert x.second == 30
delta = (x - next_ten[i - 1])
assert delta.total_seconds() == total_seconds
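The `@pytest.mark.timeout(3)` above guards against naively iterating a decades-old DTSTART one occurrence at a time. For a fixed-interval rule, the first future occurrence can be computed directly; a stdlib-only sketch (hypothetical helper, not AWX's actual implementation):

```python
from datetime import datetime, timedelta, timezone

def first_future_occurrence(dtstart, interval, now):
    # Jump from an arbitrarily old dtstart to the first occurrence at or
    # after `now` in O(1), instead of stepping interval-by-interval.
    elapsed = (now - dtstart).total_seconds()
    step = interval.total_seconds()
    periods = max(0, -int(-elapsed // step))  # ceiling division
    return dtstart + periods * interval
```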
def test_dst_rollback_duplicates(post, admin_user):
# From Nov 2 -> Nov 3, 2030, daylight savings ends and we "roll back" an hour.
# Make sure we don't "double count" duplicate times in the "rolled back"


@@ -5,18 +5,13 @@
# Python
import pytest
import os
import time
from django.conf import settings
from kombu.utils.url import parse_url
# Mock
from unittest import mock
# AWX
from awx.api.versioning import reverse
from awx.conf.models import Setting
from awx.main.utils.handlers import AWXProxyHandler, LoggingConnectivityException
from awx.conf.registry import settings_registry
TEST_GIF_LOGO = 'data:image/gif;base64,R0lGODlhIQAjAPIAAP//////AP8AAMzMAJmZADNmAAAAAAAAACH/C05FVFNDQVBFMi4wAwEAAAAh+QQJCgAHACwAAAAAIQAjAAADo3i63P4wykmrvTjrzZsxXfR94WMQBFh6RECuixHMLyzPQ13ewZCvow9OpzEAjIBj79cJJmU+FceIVEZ3QRozxBttmyOBwPBtisdX4Bha3oxmS+llFIPHQXQKkiSEXz9PeklHBzx3hYNyEHt4fmmAhHp8Nz45KgV5FgWFOFEGmwWbGqEfniChohmoQZ+oqRiZDZhEgk81I4mwg4EKVbxzrDHBEAkAIfkECQoABwAsAAAAACEAIwAAA6V4utz+MMpJq724GpP15p1kEAQYQmOwnWjgrmxjuMEAx8rsDjZ+fJvdLWQAFAHGWo8FRM54JqIRmYTigDrDMqZTbbbMj0CgjTLHZKvPQH6CTx+a2vKR0XbbOsoZ7SphG057gjl+c0dGgzeGNiaBiSgbBQUHBV08NpOVlkMSk0FKjZuURHiiOJxQnSGfQJuoEKREejK0dFRGjoiQt7iOuLx0rgxYEQkAIfkECQoABwAsAAAAACEAIwAAA7h4utxnxslJDSGR6nrz/owxYB64QUEwlGaVqlB7vrAJscsd3Lhy+wBArGEICo3DUFH4QDqK0GMy51xOgcGlEAfJ+iAFie62chR+jYKaSAuQGOqwJp7jGQRDuol+F/jxZWsyCmoQfwYwgoM5Oyg1i2w0A2WQIW2TPYOIkleQmy+UlYygoaIPnJmapKmqKiusMmSdpjxypnALtrcHioq3ury7hGm3dnVosVpMWFmwREZbddDOSsjVswcJACH5BAkKAAcALAAAAAAhACMAAAOxeLrc/jDKSZUxNS9DCNYV54HURQwfGRlDEFwqdLVuGjOsW9/Odb0wnsUAKBKNwsMFQGwyNUHckVl8bqI4o43lA26PNkv1S9DtNuOeVirw+aTI3qWAQwnud1vhLSnQLS0GeFF+GoVKNF0fh4Z+LDQ6Bn5/MTNmL0mAl2E3j2aclTmRmYCQoKEDiaRDKFhJez6UmbKyQowHtzy1uEl8DLCnEktrQ2PBD1NxSlXKIW5hz6cJACH5BAkKAAcALAAAAAAhACMAAAOkeLrc/jDKSau9OOvNlTFd9H3hYxAEWDJfkK5LGwTq+g0zDR/GgM+10A04Cm56OANgqTRmkDTmSOiLMgFOTM9AnFJHuexzYBAIijZf2SweJ8ttbbXLmd5+wBiJosSCoGF/fXEeS1g8gHl9hxODKkh4gkwVIwUekESIhA4FlgV3PyCWG52WI2oGnR2lnUWpqhqVEF4Xi7QjhpsshpOFvLosrnpoEAkAIfkECQoABwAsAAAAACEAIwAAA6l4utz+MMpJq71YGpPr3t1kEAQXQltQnk8aBCa7bMMLy4wx1G8s072PL6SrGQDI4zBThCU/v50zCVhidIYgNPqxWZkDg0AgxB2K4vEXbBSvr1JtZ3uOext0x7FqovF6OXtfe1UzdjAxhINPM013ChtJER8FBQeVRX8GlpggFZWWfjwblTiigGZnfqRmpUKbljKxDrNMeY2eF4R8jUiSur6/Z8GFV2WBtwwJACH5BAkKAAcALAAAAAAhACMAAAO6eLrcZi3KyQwhkGpq8f6ONWQgaAxB8JTfg6YkO50pzD5xhaurhCsGAKCnEw6NucNDCAkyI8ugdAhFKpnJJdMaeiofBejowUseCr9GYa0j1GyMdVgjBxoEuPSZXWKf7gKBeHtzMms0gHgGfDIVLztmjScvNZEyk28qjT40b5aXlHCbDgOhnzedoqOOlKeopaqrCy56sgtotbYKhYW6e7e9tsHBssO6eSTIm1peV0iuFUZDyU7NJnmcuQsJACH5BAkKAAcALAAAAAAhACMAAAOteLrc/jDKSZsxNS9DCNYV54Hh4H0kdAXBgKaOwbYX/Miza1vrVe8KA
2AoJL5gwiQgeZz4GMXlcHl8xozQ3kW3KTajL9zsBJ1+sV2fQfALem+XAlRApxu4ioI1UpC76zJ4fRqDBzI+LFyFhH1iiS59fkgziW07jjRAG5QDeECOLk2Tj6KjnZafW6hAej6Smgevr6yysza2tiCuMasUF2Yov2gZUUQbU8YaaqjLpQkAOw==' # NOQA
@@ -238,73 +233,95 @@ def test_ui_settings(get, put, patch, delete, admin):
@pytest.mark.django_db
def test_logging_aggregrator_connection_test_requires_superuser(get, post, alice):
def test_logging_aggregator_connection_test_requires_superuser(post, alice):
url = reverse('api:setting_logging_test')
post(url, {}, user=alice, expect=403)
@pytest.mark.parametrize('key', [
'LOG_AGGREGATOR_TYPE',
'LOG_AGGREGATOR_HOST',
@pytest.mark.django_db
def test_logging_aggregator_connection_test_not_enabled(post, admin):
url = reverse('api:setting_logging_test')
resp = post(url, {}, user=admin, expect=409)
assert 'Logging not enabled' in resp.data.get('error')
def _mock_logging_defaults():
# Pre-populate settings obj with defaults
class MockSettings:
pass
mock_settings_obj = MockSettings()
mock_settings_json = dict()
for key in settings_registry.get_registered_settings(category_slug='logging'):
value = settings_registry.get_setting_field(key).get_default()
setattr(mock_settings_obj, key, value)
mock_settings_json[key] = value
setattr(mock_settings_obj, 'MAX_EVENT_RES_DATA', 700000)
return mock_settings_obj, mock_settings_json
@pytest.mark.parametrize('key, value, error', [
['LOG_AGGREGATOR_TYPE', 'logstash', 'Cannot enable log aggregator without providing host.'],
['LOG_AGGREGATOR_HOST', 'https://logstash', 'Cannot enable log aggregator without providing type.']
])
@pytest.mark.django_db
def test_logging_aggregrator_connection_test_bad_request(get, post, admin, key):
url = reverse('api:setting_logging_test')
resp = post(url, {}, user=admin, expect=400)
assert 'This field is required.' in resp.data.get(key, [])
@pytest.mark.django_db
def test_logging_aggregrator_connection_test_valid(mocker, get, post, admin):
with mock.patch.object(AWXProxyHandler, 'perform_test') as perform_test:
url = reverse('api:setting_logging_test')
user_data = {
'LOG_AGGREGATOR_TYPE': 'logstash',
'LOG_AGGREGATOR_HOST': 'localhost',
'LOG_AGGREGATOR_PORT': 8080,
'LOG_AGGREGATOR_USERNAME': 'logger',
'LOG_AGGREGATOR_PASSWORD': 'mcstash'
}
post(url, user_data, user=admin, expect=200)
args, kwargs = perform_test.call_args_list[0]
create_settings = kwargs['custom_settings']
for k, v in user_data.items():
assert hasattr(create_settings, k)
assert getattr(create_settings, k) == v
@pytest.mark.django_db
def test_logging_aggregrator_connection_test_with_masked_password(mocker, patch, post, admin):
def test_logging_aggregator_missing_settings(put, post, admin, key, value, error):
_, mock_settings = _mock_logging_defaults()
mock_settings['LOG_AGGREGATOR_ENABLED'] = True
mock_settings[key] = value
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'logging'})
patch(url, user=admin, data={'LOG_AGGREGATOR_PASSWORD': 'password123'}, expect=200)
time.sleep(1) # log settings are cached slightly
response = put(url, data=mock_settings, user=admin, expect=400)
assert error in str(response.data)
with mock.patch.object(AWXProxyHandler, 'perform_test') as perform_test:
url = reverse('api:setting_logging_test')
user_data = {
'LOG_AGGREGATOR_TYPE': 'logstash',
'LOG_AGGREGATOR_HOST': 'localhost',
'LOG_AGGREGATOR_PORT': 8080,
'LOG_AGGREGATOR_USERNAME': 'logger',
'LOG_AGGREGATOR_PASSWORD': '$encrypted$'
}
post(url, user_data, user=admin, expect=200)
args, kwargs = perform_test.call_args_list[0]
create_settings = kwargs['custom_settings']
assert getattr(create_settings, 'LOG_AGGREGATOR_PASSWORD') == 'password123'
@pytest.mark.parametrize('type, host, port, username, password', [
['logstash', 'localhost', 8080, 'logger', 'mcstash'],
['loggly', 'http://logs-01.loggly.com/inputs/1fd38090-hash-h4a$h-8d80-t0k3n71/tag/http/', None, None, None],
['splunk', 'https://yoursplunk:8088/services/collector/event', None, None, None],
['other', '97.221.40.41', 9000, 'logger', 'mcstash'],
['sumologic', 'https://endpoint5.collection.us2.sumologic.com/receiver/v1/http/Zagnw_f9XGr_zZgd-_EPM0hb8_rUU7_RU8Q==',
None, None, None]
])
@pytest.mark.django_db
def test_logging_aggregator_valid_settings(put, post, admin, type, host, port, username, password):
_, mock_settings = _mock_logging_defaults()
# type = 'splunk'
# host = 'https://yoursplunk:8088/services/collector/event'
mock_settings['LOG_AGGREGATOR_ENABLED'] = True
mock_settings['LOG_AGGREGATOR_TYPE'] = type
mock_settings['LOG_AGGREGATOR_HOST'] = host
if port:
mock_settings['LOG_AGGREGATOR_PORT'] = port
if username:
mock_settings['LOG_AGGREGATOR_USERNAME'] = username
if password:
mock_settings['LOG_AGGREGATOR_PASSWORD'] = password
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'logging'})
response = put(url, data=mock_settings, user=admin, expect=200)
assert type in response.data.get('LOG_AGGREGATOR_TYPE')
assert host in response.data.get('LOG_AGGREGATOR_HOST')
if port:
assert port == response.data.get('LOG_AGGREGATOR_PORT')
if username:
assert username in response.data.get('LOG_AGGREGATOR_USERNAME')
if password: # Note: password should be encrypted
assert '$encrypted$' in response.data.get('LOG_AGGREGATOR_PASSWORD')
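The `$encrypted$` placeholder asserted above is the masked-secret convention: a client that echoes the mask back must not overwrite the stored value. A minimal sketch of that merge rule (hypothetical helper, assuming this convention):

```python
ENCRYPTED = '$encrypted$'

def merge_incoming_settings(stored, incoming):
    # If a client echoes the '$encrypted$' mask back, keep the stored
    # secret; otherwise accept the new value.
    merged = dict(stored)
    for key, value in incoming.items():
        if value != ENCRYPTED:
            merged[key] = value
    return merged
```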
@pytest.mark.django_db
def test_logging_aggregrator_connection_test_invalid(mocker, get, post, admin):
with mock.patch.object(AWXProxyHandler, 'perform_test') as perform_test:
perform_test.side_effect = LoggingConnectivityException('404: Not Found')
url = reverse('api:setting_logging_test')
resp = post(url, {
'LOG_AGGREGATOR_TYPE': 'logstash',
'LOG_AGGREGATOR_HOST': 'localhost',
'LOG_AGGREGATOR_PORT': 8080
}, user=admin, expect=500)
assert resp.data == {'error': '404: Not Found'}
def test_logging_aggregator_connection_test_valid(put, post, admin):
_, mock_settings = _mock_logging_defaults()
type = 'other'
host = 'https://localhost'
mock_settings['LOG_AGGREGATOR_ENABLED'] = True
mock_settings['LOG_AGGREGATOR_TYPE'] = type
mock_settings['LOG_AGGREGATOR_HOST'] = host
# POST to save these mock settings
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'logging'})
put(url, data=mock_settings, user=admin, expect=200)
# "Test" the logger
url = reverse('api:setting_logging_test')
post(url, {}, user=admin, expect=202)
@pytest.mark.django_db
@@ -386,15 +403,3 @@ def test_saml_x509cert_validation(patch, get, admin, headers):
}
})
assert resp.status_code == 200
@pytest.mark.django_db
def test_broker_url_with_special_characters():
settings.BROKER_URL = 'amqp://guest:a@ns:ibl3#@rabbitmq:5672//'
url = parse_url(settings.BROKER_URL)
assert url['transport'] == 'amqp'
assert url['hostname'] == 'rabbitmq'
assert url['port'] == 5672
assert url['userid'] == 'guest'
assert url['password'] == 'a@ns:ibl3#'
assert url['virtual_host'] == '/'
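The test above relies on kombu's `parse_url` handling credentials that themselves contain `@`, `:`, and `#`. The same behaviour can be sketched with plain string handling (hand-rolled, for illustration only):

```python
def parse_broker_url(url):
    # Split on the *last* '@' so passwords containing '@' survive, and
    # on the *first* ':' inside the credentials so passwords may contain
    # ':' and '#' as well.
    transport, rest = url.split('://', 1)
    creds, hostpart = rest.rsplit('@', 1)
    userid, password = creds.split(':', 1)
    hostport, _, vhost = hostpart.partition('/')
    hostname, _, port = hostport.partition(':')
    return {'transport': transport, 'userid': userid, 'password': password,
            'hostname': hostname, 'port': int(port), 'virtual_host': vhost or '/'}
```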


@@ -1,6 +1,7 @@
import pytest
from awx.api.versioning import reverse
from awx.main import models
@pytest.mark.django_db
@@ -9,3 +10,76 @@ def test_aliased_forward_reverse_field_searches(instance, options, get, admin):
response = options(url, None, admin)
assert 'job_template__search' in response.data['related_search_fields']
get(reverse("api:unified_job_template_list") + "?job_template__search=anything", user=admin, expect=200)
@pytest.mark.django_db
@pytest.mark.parametrize('model', (
'Project',
'JobTemplate',
'WorkflowJobTemplate'
))
class TestUnifiedOrganization:
def data_for_model(self, model, orm_style=False):
data = {
'name': 'foo',
'organization': None
}
if model == 'JobTemplate':
proj = models.Project.objects.create(
name="test-proj",
playbook_files=['helloworld.yml']
)
if orm_style:
data['project_id'] = proj.id
else:
data['project'] = proj.id
data['playbook'] = 'helloworld.yml'
data['ask_inventory_on_launch'] = True
return data
def test_organization_blank_on_edit_of_orphan(self, model, admin_user, patch):
cls = getattr(models, model)
data = self.data_for_model(model, orm_style=True)
obj = cls.objects.create(**data)
patch(
url=obj.get_absolute_url(),
data={'name': 'foooooo'},
user=admin_user,
expect=200
)
obj.refresh_from_db()
assert obj.name == 'foooooo'
def test_organization_blank_on_edit_of_orphan_as_nonsuperuser(self, model, rando, patch):
"""Test case reflects historical bug where ordinary users got weird error
message when editing an orphaned project
"""
cls = getattr(models, model)
data = self.data_for_model(model, orm_style=True)
obj = cls.objects.create(**data)
if model == 'JobTemplate':
obj.project.admin_role.members.add(rando)
obj.admin_role.members.add(rando)
patch(
url=obj.get_absolute_url(),
data={'name': 'foooooo'},
user=rando,
expect=200
)
obj.refresh_from_db()
assert obj.name == 'foooooo'
def test_organization_blank_on_edit_of_normal(self, model, admin_user, patch, organization):
cls = getattr(models, model)
data = self.data_for_model(model, orm_style=True)
data['organization'] = organization
obj = cls.objects.create(**data)
patch(
url=obj.get_absolute_url(),
data={'name': 'foooooo'},
user=admin_user,
expect=200
)
obj.refresh_from_db()
assert obj.name == 'foooooo'


@@ -23,9 +23,9 @@ def _mk_project_update():
def _mk_inventory_update():
source = InventorySource()
source = InventorySource(source='ec2')
source.save()
iu = InventoryUpdate(inventory_source=source)
iu = InventoryUpdate(inventory_source=source, source='ec2')
return iu


@@ -123,7 +123,11 @@ def test_delete_project_update_in_active_state(project, delete, admin, status):
@pytest.mark.parametrize("status", list(TEST_STATES))
@pytest.mark.django_db
def test_delete_inventory_update_in_active_state(inventory_source, delete, admin, status):
i = InventoryUpdate.objects.create(inventory_source=inventory_source, status=status)
i = InventoryUpdate.objects.create(
inventory_source=inventory_source,
status=status,
source=inventory_source.source
)
url = reverse('api:inventory_update_detail', kwargs={'pk': i.pk})
delete(url, None, admin, expect=403)


@@ -2,6 +2,7 @@ import pytest
from django.contrib.sessions.middleware import SessionMiddleware
from awx.main.models import User
from awx.api.versioning import reverse
@@ -48,3 +49,15 @@ def test_create_delete_create_user(post, delete, admin):
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin, middleware=SessionMiddleware())
print(response.data)
assert response.status_code == 201
@pytest.mark.django_db
def test_user_cannot_update_last_login(patch, admin):
assert admin.last_login is None
patch(
reverse('api:user_detail', kwargs={'pk': admin.pk}),
{'last_login': '2020-03-13T16:39:47.303016Z'},
admin,
middleware=SessionMiddleware()
)
assert User.objects.get(pk=admin.pk).last_login is None
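The `last_login` test above relies on the API treating that field as read-only: the PATCH succeeds but the field is ignored. Server-side, this amounts to filtering the incoming payload (a sketch; field names beyond `last_login` are illustrative):

```python
READ_ONLY_FIELDS = frozenset({'last_login', 'id', 'created', 'modified'})

def apply_patch(instance, data):
    # Silently drop read-only keys, mirroring how the PATCH above
    # returns success but leaves last_login untouched.
    for key, value in data.items():
        if key not in READ_ONLY_FIELDS:
            instance[key] = value
    return instance
```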


@@ -0,0 +1,179 @@
import pytest
from datetime import datetime, timedelta
from pytz import timezone
from collections import OrderedDict
from django.db.models.deletion import Collector, SET_NULL, CASCADE
from django.core.management import call_command
from awx.main.management.commands.deletion import AWXCollector
from awx.main.models import (
JobTemplate, User, Job, JobEvent, Notification,
WorkflowJobNode, JobHostSummary
)
@pytest.fixture
def setup_environment(inventory, project, machine_credential, host, notification_template, label):
'''
Create old jobs and new jobs, with various other objects to hit the
related fields of Jobs. This makes sure on_delete() effects are tested
properly.
'''
old_jobs = []
new_jobs = []
days = 10
days_str = str(days)
jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project)
jt.credentials.add(machine_credential)
jt_user = User.objects.create(username='jobtemplateuser')
jt.execute_role.members.add(jt_user)
notification = Notification()
notification.notification_template = notification_template
notification.save()
for i in range(3):
job1 = jt.create_job()
job1.created = datetime.now(tz=timezone('UTC'))
job1.save()
# create jobs with current time
JobEvent.create_from_data(job_id=job1.pk, uuid='abc123', event='runner_on_start',
stdout='a' * 1025).save()
new_jobs.append(job1)
job2 = jt.create_job()
# create jobs 10 days ago
job2.created = datetime.now(tz=timezone('UTC')) - timedelta(days=days)
job2.save()
job2.dependent_jobs.add(job1)
JobEvent.create_from_data(job_id=job2.pk, uuid='abc123', event='runner_on_start',
stdout='a' * 1025).save()
old_jobs.append(job2)
jt.last_job = job2
jt.current_job = job2
jt.save()
host.last_job = job2
host.save()
notification.unifiedjob_notifications.add(job2)
label.unifiedjob_labels.add(job2)
jn = WorkflowJobNode.objects.create(job=job2)
jn.save()
jh = JobHostSummary.objects.create(job=job2)
jh.save()
return (old_jobs, new_jobs, days_str)
@pytest.mark.django_db
def test_cleanup_jobs(setup_environment):
(old_jobs, new_jobs, days_str) = setup_environment
# related_fields
related = [f for f in Job._meta.get_fields(include_hidden=True)
if f.auto_created and not
f.concrete and
(f.one_to_one or f.one_to_many)]
job = old_jobs[-1] # last job
# gather related objects for job
related_should_be_removed = {}
related_should_be_null = {}
for r in related:
qs = r.related_model._base_manager.using('default').filter(
**{"%s__in" % r.field.name: [job.pk]}
)
if qs.exists():
if r.field.remote_field.on_delete == CASCADE:
related_should_be_removed[qs.model] = set(qs.values_list('pk', flat=True))
if r.field.remote_field.on_delete == SET_NULL:
related_should_be_null[(qs.model,r.field.name)] = set(qs.values_list('pk', flat=True))
assert related_should_be_removed
assert related_should_be_null
call_command('cleanup_jobs', '--days', days_str)
# make sure old jobs are removed
assert not Job.objects.filter(pk__in=[obj.pk for obj in old_jobs]).exists()
# make sure new jobs are untouched
assert len(new_jobs) == Job.objects.filter(pk__in=[obj.pk for obj in new_jobs]).count()
# make sure related objects are destroyed or set to NULL (none)
for model, values in related_should_be_removed.items():
assert not model.objects.filter(pk__in=values).exists()
for (model,fieldname), values in related_should_be_null.items():
for v in values:
assert not getattr(model.objects.get(pk=v), fieldname)
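`cleanup_jobs --days N` deletes jobs older than a cutoff while leaving newer ones alone, which is what the assertions above verify. The selection logic reduces to (stdlib-only sketch with dicts standing in for Job rows):

```python
from datetime import datetime, timedelta, timezone

def partition_by_age(jobs, days):
    # Split into (old, recent) around a cutoff `days` ago; `old` is what
    # a cleanup pass would delete, `recent` is left untouched.
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    old = [j for j in jobs if j['created'] < cutoff]
    recent = [j for j in jobs if j['created'] >= cutoff]
    return old, recent
```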
@pytest.mark.django_db
def test_awxcollector(setup_environment):
'''
Efforts to improve the performance of cleanup_jobs involved
sub-classing the django Collector class. This unit test will
check for parity between the django Collector and the modified
AWXCollector class. AWXCollector is used in cleanup_jobs to
bulk-delete old jobs from the database.
Specifically, Collector has four dictionaries to check:
.dependencies, .data, .fast_deletes, and .field_updates
These tests will convert each dictionary from AWXCollector
(after running .collect on jobs), from querysets to sets of
objects. The final result should be a dictionary that is
equivalent to django's Collector.
'''
(old_jobs, new_jobs, days_str) = setup_environment
collector = Collector('default')
collector.collect(old_jobs)
awx_col = AWXCollector('default')
# awx_col accepts a queryset as input
awx_col.collect(Job.objects.filter(pk__in=[obj.pk for obj in old_jobs]))
# check that dependencies are the same
assert awx_col.dependencies == collector.dependencies
# check that objects to delete are the same
awx_del_dict = OrderedDict()
for model, instances in awx_col.data.items():
awx_del_dict.setdefault(model, set())
for inst in instances:
# .update() will put each object in a queryset into the set
awx_del_dict[model].update(inst)
assert awx_del_dict == collector.data
# check that field updates are the same
awx_del_dict = OrderedDict()
for model, instances_for_fieldvalues in awx_col.field_updates.items():
awx_del_dict.setdefault(model, {})
for (field, value), instances in instances_for_fieldvalues.items():
awx_del_dict[model].setdefault((field,value), set())
for inst in instances:
awx_del_dict[model][(field,value)].update(inst)
# collector field updates don't use the base (polymorphic parent) model, e.g.
# it will use JobTemplate instead of UnifiedJobTemplate. Therefore,
# we need to rebuild the dictionary and grab the model from the field
collector_del_dict = OrderedDict()
for model, instances_for_fieldvalues in collector.field_updates.items():
for (field,value), instances in instances_for_fieldvalues.items():
collector_del_dict.setdefault(field.model, {})
collector_del_dict[field.model][(field, value)] = collector.field_updates[model][(field,value)]
assert awx_del_dict == collector_del_dict
# check that fast deletes are the same
collector_fast_deletes = set()
for q in collector.fast_deletes:
collector_fast_deletes.update(q)
awx_col_fast_deletes = set()
for q in awx_col.fast_deletes:
awx_col_fast_deletes.update(q)
assert collector_fast_deletes == awx_col_fast_deletes
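The parity checks above repeatedly flatten AWXCollector's per-model querysets into plain sets before comparing against Django's stock Collector. That normalization step, extracted as a sketch (lists stand in for querysets here):

```python
def normalize(collected):
    # AWXCollector stores an iterable of querysets per model; flatten
    # each into one set so it compares directly against the object sets
    # Django's stock Collector builds.
    return {model: {obj for queryset in querysets for obj in queryset}
            for model, querysets in collected.items()}
```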


@@ -228,7 +228,7 @@ class TestINIImports:
assert inventory.hosts.count() == 1 # baseline worked
inv_src2 = inventory.inventory_sources.create(
name='bar', overwrite=True
name='bar', overwrite=True, source='ec2'
)
os.environ['INVENTORY_SOURCE_ID'] = str(inv_src2.pk)
os.environ['INVENTORY_UPDATE_ID'] = str(inv_src2.create_unified_job().pk)


@@ -180,8 +180,8 @@ def project_factory(organization):
@pytest.fixture
def job_factory(job_template, admin):
def factory(job_template=job_template, initial_state='new', created_by=admin):
def job_factory(jt_linked, admin):
def factory(job_template=jt_linked, initial_state='new', created_by=admin):
return job_template.create_unified_job(_eager_fields={
'status': initial_state, 'created_by': created_by})
return factory
@@ -568,7 +568,10 @@ def inventory_source_factory(inventory_factory):
@pytest.fixture
def inventory_update(inventory_source):
return InventoryUpdate.objects.create(inventory_source=inventory_source)
return InventoryUpdate.objects.create(
inventory_source=inventory_source,
source=inventory_source.source
)
@pytest.fixture
@@ -701,11 +704,8 @@ def ad_hoc_command_factory(inventory, machine_credential, admin):
@pytest.fixture
def job_template(organization):
jt = JobTemplate(name='test-job_template')
jt.save()
return jt
def job_template():
return JobTemplate.objects.create(name='test-job_template')
@pytest.fixture
@@ -717,20 +717,16 @@ def job_template_labels(organization, job_template):
@pytest.fixture
def jt_linked(job_template_factory, credential, net_credential, vault_credential):
def jt_linked(organization, project, inventory, machine_credential, credential, net_credential, vault_credential):
'''
A job template with a reasonably complete set of related objects to
test RBAC and other functionality affected by related objects
'''
objects = job_template_factory(
'testJT', organization='org1', project='proj1', inventory='inventory1',
credential='cred1')
jt = objects.job_template
jt.credentials.add(vault_credential)
jt.save()
# Add AWS cloud credential and network credential
jt.credentials.add(credential)
jt.credentials.add(net_credential)
jt = JobTemplate.objects.create(
project=project, inventory=inventory, playbook='helloworld.yml',
organization=organization
)
jt.credentials.add(machine_credential, vault_credential, credential, net_credential)
return jt


@@ -12,6 +12,7 @@ from awx.main.models import (
CredentialType,
Inventory,
InventorySource,
Project,
User
)
@@ -99,8 +100,8 @@ class TestRolesAssociationEntries:
).count() == 1, 'In loop %s' % i
def test_model_associations_are_recorded(self, organization):
proj1 = organization.projects.create(name='proj1')
proj2 = organization.projects.create(name='proj2')
proj1 = Project.objects.create(name='proj1', organization=organization)
proj2 = Project.objects.create(name='proj2', organization=organization)
proj2.use_role.parents.add(proj1.admin_role)
assert ActivityStream.objects.filter(role=proj1.admin_role, project=proj2).count() == 1


@@ -197,9 +197,10 @@ class TestRelatedJobs:
assert job.id in [jerb.id for jerb in group._get_related_jobs()]
def test_related_group_update(self, group):
src = group.inventory_sources.create(name='foo')
src = group.inventory_sources.create(name='foo', source='ec2')
job = InventoryUpdate.objects.create(
inventory_source=src
inventory_source=src,
source=src.source
)
assert job.id in [jerb.id for jerb in group._get_related_jobs()]

Some files were not shown because too many files have changed in this diff.