Compare commits

698 Commits
2.0.1 ... 2.1.2

Author SHA1 Message Date
Shane McDonald
e3872ebd58 Bump version to 2.1.2 2018-12-11 12:30:10 -05:00
softwarefactory-project-zuul[bot]
eee716644b Merge pull request #2875 from MrMEEE/patch-1
Bumped Version to 2.1.1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-11 17:25:26 +00:00
softwarefactory-project-zuul[bot]
2758a38485 Merge pull request #2898 from mabashian/2851-perms
Fixes bug where admin/member roles weren't showing up when adding user to org

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-10 16:51:03 +00:00
softwarefactory-project-zuul[bot]
9104f485e6 Merge pull request #2856 from wenottingham/one-small-step-foreman
Update foreman.py from Ansible devel, primarily for unicode fixes.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-10 16:27:52 +00:00
mabashian
ae7361f82d Fixes bug where admin/member roles weren't showing up when adding user to org 2018-12-10 11:12:09 -05:00
softwarefactory-project-zuul[bot]
42562e86e4 Merge pull request #2888 from mabashian/sanitize-app-token-list
Sanitize username and description in application tokens list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 21:42:38 +00:00
softwarefactory-project-zuul[bot]
982ed37b06 Merge pull request #2890 from mabashian/3198-survey
Fixes bug launching jt where first survey question is optional and empty

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 20:40:07 +00:00
mabashian
c0c666cc87 Fixes bug launching jt where first survey question is optional and empty 2018-12-07 15:12:22 -05:00
softwarefactory-project-zuul[bot]
8a284889f5 Merge pull request #2889 from mabashian/2887-workflow-cred
Properly POST credentials to workflow nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 20:08:36 +00:00
mabashian
b891e2c204 Properly POST credentials to workflow nodes 2018-12-07 14:37:56 -05:00
mabashian
a8bf7366cf Sanitize username and description in application tokens list 2018-12-07 14:24:12 -05:00
softwarefactory-project-zuul[bot]
e517f81b8f Merge pull request #2880 from AlanCoding/fix_v1_links
Fix links to some resources that lack v1 pages

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 14:33:15 +00:00
softwarefactory-project-zuul[bot]
c4c99332fc Merge pull request #2873 from ansible/related_slices
Show type in related_jobs, link based on type

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-06 20:51:00 +00:00
AlanCoding
40b5ce4b2e link v1 pages to v2 credential type page 2018-12-06 15:41:26 -05:00
AlanCoding
d2cd337c1f fix links to some resources that lack v1 pages 2018-12-06 08:29:23 -05:00
softwarefactory-project-zuul[bot]
b9913fb4f9 Merge pull request #2877 from chrismeyersfsu/improvement-better_default_loggers
more sane default log handlers

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-05 15:59:12 +00:00
chris meyers
d1705dd0cc more sane default log handlers
* Removed the emailing of admins on request error. When turned on, the
handler will include all Django settings in the email. This is not
desirable from a security standpoint.
2018-12-05 09:38:27 -05:00
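
A minimal sketch of the change described above, assuming a stock Django LOGGING dict; handler and logger names here are illustrative, not AWX's actual settings:

```python
# Illustrative Django LOGGING fragment: route request errors to a console
# handler instead of django.utils.log.AdminEmailHandler, which emails the
# admins and can embed full settings (credentials included) in the message.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
        # 'mail_admins' (AdminEmailHandler) intentionally omitted.
    },
    'loggers': {
        'django.request': {
            'handlers': ['console'],  # previously ['mail_admins']
            'level': 'ERROR',
        },
    },
}
```
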
Martin Juhl
816cc29132 Bumped Version to 2.1.1 2018-12-05 00:33:04 +01:00
AlanCoding
f09b8efa87 tests and optimizations for UJT list with non-joblet recent_jobs 2018-12-04 16:16:05 -05:00
softwarefactory-project-zuul[bot]
6ebc6809eb Merge pull request #2869 from AlanCoding/syncs_do_not_update_project
Do not update project details after sync jobs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 20:10:01 +00:00
AlanCoding
4b31367945 Do not update project details after sync jobs 2018-12-04 13:00:23 -05:00
kialam
2a62e300a2 UI update to check recent job type for routing to detail pages. 2018-12-04 11:19:25 -05:00
softwarefactory-project-zuul[bot]
f6b075843e Merge pull request #2845 from marshmalien/fix-org-ig-modal
Fix instance group modal selection

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 15:42:49 +00:00
Marliana Lara
4723773354 Fix instance group modal selection 2018-12-04 10:15:38 -05:00
softwarefactory-project-zuul[bot]
70be95cec5 Merge pull request #2861 from wenottingham/tighten-up-the-slack
Fix tooltip for slack channel list to note '#' is required.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 04:23:20 +00:00
softwarefactory-project-zuul[bot]
201b17012d Merge pull request #2865 from ansible/output-search-docslink
point output search doc link to latest

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 04:17:57 +00:00
Jake McDermott
47264b0809 point output search doc link to latest 2018-12-03 17:09:00 -05:00
Bill Nottingham
c51f235fab Fix tooltip for slack channel list to note '#' is required. 2018-12-03 14:22:59 -05:00
Bill Nottingham
f1b1224a27 Update foreman.py from Ansible devel, primarily for unicode fixes. 2018-12-03 10:23:28 -05:00
softwarefactory-project-zuul[bot]
63b0796738 Merge pull request #2852 from jlmitch5/updateOrgCardsCountWhenDatasetChanges
update org cards count when dataset changes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-02 20:05:19 +00:00
softwarefactory-project-zuul[bot]
5961e3ef2e Merge pull request #2846 from kialam/fix-3016-missing-job-events
Fix 3016 missing job events

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-01 05:12:05 +00:00
softwarefactory-project-zuul[bot]
246d80f177 Merge pull request #2850 from jlmitch5/addUserTokenPagination
add pagination to user tokens list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 21:29:10 +00:00
softwarefactory-project-zuul[bot]
8005b47c14 Merge pull request #2849 from ryanpetrello/fix-custom-cred-encryption-nit
allow encrypted fields in custom credentials to be empty

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 21:07:30 +00:00
John Mitchell
1317572979 update org cards count when dataset changes 2018-11-30 15:54:15 -05:00
AlanCoding
b763c51f8a add type to recent_jobs 2018-11-30 15:16:09 -05:00
softwarefactory-project-zuul[bot]
e70055a333 Merge pull request #2848 from wenottingham/more-fields-for-the-fields-god
Add timeout & slice count to the job field whitelist.

Reviewed-by: Bill Nottingham
             https://github.com/wenottingham
2018-11-30 20:02:38 +00:00
John Mitchell
52f86a206a add pagination to user tokens list 2018-11-30 14:27:23 -05:00
Ryan Petrello
7252883094 allow encrypted fields in custom credentials to be empty 2018-11-30 14:07:56 -05:00
kialam
afa7c2d69f Update unit tests. 2018-11-30 13:58:13 -05:00
Bill Nottingham
9c44d1f526 Add timeout & slice count to the job field whitelist. 2018-11-30 13:43:21 -05:00
kialam
473ce95c86 Fix failing unit tests. 2018-11-30 12:16:28 -05:00
kialam
3e1e068013 Add boundary checks for getReadyCount method. 2018-11-30 12:03:26 -05:00
kialam
746a154f2b Address missing job events.
- Fix off by one error.
- Add unit tests for Stream Service.
2018-11-30 11:23:15 -05:00
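
The off-by-one and boundary-check fixes above follow a classic shape; a hypothetical Python illustration (the real fix lives in the UI's Stream Service):

```python
# Hypothetical illustration of the off-by-one bug class fixed above: an
# inclusive "ready up to index N" counter must slice with N + 1, otherwise
# the final event never reaches the page.
events = list(range(100))                          # stand-in for buffered job events

def ready_events(last_emitted, newest):
    newest = max(0, min(newest, len(events) - 1))  # boundary checks
    return events[last_emitted:newest + 1]         # inclusive upper bound

assert ready_events(97, 99) == [97, 98, 99]
```
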
softwarefactory-project-zuul[bot]
28733800c4 Merge pull request #2842 from mabashian/2839-workflow-key
Changes workflow key icon from fa-key to fa-compass

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 13:21:45 +00:00
softwarefactory-project-zuul[bot]
cf2deefa41 Merge pull request #2843 from ansible/update-e2e-tests-2
Updating e2e tests to match change in order layout

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 02:04:21 +00:00
John Hill
e5645dd798 one more 2018-11-29 19:35:11 -05:00
John Hill
6205a5db83 Updating to fix linting error 2018-11-29 19:11:27 -05:00
John Hill
e50dd92425 Cannot depend on the id and order, reverting to workflow node names 2018-11-29 18:16:53 -05:00
softwarefactory-project-zuul[bot]
4955fc8bc4 Merge pull request #2840 from ryanpetrello/project-update-bug
resolve a nuanced traceback for JTs that run w/ a failed project

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 22:35:59 +00:00
mabashian
15adb1e828 Changes workflow key icon from fa-key to fa-compass 2018-11-29 17:16:25 -05:00
Ryan Petrello
c90d81b914 resolve a nuanced traceback for JTs that run w/ a failed project
related: https://github.com/ansible/awx/pull/2719
2018-11-29 17:10:23 -05:00
softwarefactory-project-zuul[bot]
8e9c28701e Merge pull request #2836 from ryanpetrello/better-dispatch-retries
add additional DB retry logic to the callback receiver

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 21:18:24 +00:00
softwarefactory-project-zuul[bot]
abc74fc9b8 Merge pull request #2824 from chrismeyersfsu/workflow-convergence_enforce2
enforce 1 edge between 2 nodes constraint

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 19:34:25 +00:00
chris meyers
21fce00102 python3 compliance
* This one's for you, rydog
2018-11-29 14:07:43 -05:00
chris meyers
d347a06e3d do not deny existing workflow node relationships 2018-11-29 13:29:16 -05:00
Ryan Petrello
0391dbc292 add additional DB retry logic to the callback receiver
initially, I implemented this for _only_ the task worker, but it's
probably needed for callback event workers, too
2018-11-29 11:57:46 -05:00
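
A generic retry-loop sketch of the idea; the function, error class, and backoff policy are stand-ins, not the dispatcher's actual code:

```python
# Retry a DB write a few times with linear backoff before giving up, the
# way a callback event worker under connection churn might need to.
import time

def save_with_retry(obj, attempts=3, delay=1.0):
    for attempt in range(attempts):
        try:
            return obj.save()
        except ConnectionError:                 # stand-in for a DB error class
            if attempt == attempts - 1:
                raise                           # out of retries
            time.sleep(delay * (attempt + 1))   # 1s, 2s, ... between tries
```
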
softwarefactory-project-zuul[bot]
349c7efa69 Merge pull request #2792 from AlanCoding/how_many_slices
Prohibit relaunching sliced jobs with changed count

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 15:57:40 +00:00
softwarefactory-project-zuul[bot]
0f451595d7 Merge pull request #2826 from ryanpetrello/remove-deprovision-node
remove the deprecated `awx-manage deprovision_node` command

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 15:52:13 +00:00
Ryan Petrello
fcb6ce2907 remove a few deprecated awx-manage commands 2018-11-29 10:09:57 -05:00
softwarefactory-project-zuul[bot]
273d7a83f2 Merge pull request #2825 from ryanpetrello/dont-fear-the-reaper
don't reap jobs that aren't running

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 14:31:37 +00:00
chris meyers
916c92ffc7 save state 2018-11-29 08:53:46 -05:00
Ryan Petrello
38bf174bda don't reap jobs that aren't running
this is a simple sanity check, but it should help us avoid shooting
ourselves in the foot in complicated scenarios, such as:

1.  A dispatcher worker is running a job, and it's killed with `kill -9`
2.  The dispatcher attempts to reap jobs with a matching celery_task_id
3.  The associated sync project update has the *same* celery_task_id
    (an implementation detail), and it ends
    up getting reaped _even though_ it's already finished and has
    status=successful
2018-11-28 18:11:12 -05:00
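
The sanity check in miniature, against a hypothetical job model; only running-type states are eligible for reaping, so a finished job sharing a celery_task_id is left alone:

```python
RUNNING_STATES = {'pending', 'waiting', 'running'}

def reap(job):
    # Skip anything already finished (e.g. status='successful'): reaping it
    # would clobber a valid final state, as in the scenario above.
    if job.status not in RUNNING_STATES:
        return
    job.status = 'failed'
    job.job_explanation = 'Job reaped: marked running but no longer present.'
    job.save()
```
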
chris meyers
09dff99340 enforce 1 edge between 2 nodes constraint 2018-11-28 16:57:50 -05:00
softwarefactory-project-zuul[bot]
7f178ef28b Merge pull request #2822 from ansible/workflow-e2e-update
Small update to the node order to reflect e2e test workflow changes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 21:08:36 +00:00
softwarefactory-project-zuul[bot]
68328109d7 Merge pull request #2807 from AlanCoding/yuck_artifacts
Do not pass artifacts to non-job nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 21:05:24 +00:00
John Hill
d573a9a346 Small update to the node order to reflect workflow changes 2018-11-28 15:37:30 -05:00
AlanCoding
d6e89689ae do not pass artifacts to non-job nodes 2018-11-28 15:19:47 -05:00
softwarefactory-project-zuul[bot]
d1d97598e2 Merge pull request #2821 from AlanCoding/clean_nodes
Clean up unwanted data in activity stream of nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 19:13:45 +00:00
softwarefactory-project-zuul[bot]
f57fa9d1fb Merge pull request #2810 from chrismeyersfsu/feature-replay_job_status
emit job status lifecycle in event replayer

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 18:57:56 +00:00
chris meyers
83760deb9d align tests with new replay get_job interface 2018-11-28 13:33:44 -05:00
softwarefactory-project-zuul[bot]
3893e29a33 Merge pull request #2815 from ryanpetrello/fix-iso-nodes-dev
fix isolated nodes in the dev environment

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 17:06:16 +00:00
softwarefactory-project-zuul[bot]
feeaa0bf5c Merge pull request #2747 from kialam/remove-md5
Remove MD5 usage and dependency from UI

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 16:15:59 +00:00
kialam
22802e7a64 Update README to include needed npm version. 2018-11-28 10:43:53 -05:00
Ryan Petrello
1ac5bc5e2b remove angular-md5 license 2018-11-28 10:43:53 -05:00
kialam
362a3753d0 Remove 'angular-md5' from our dependencies. 2018-11-28 10:43:53 -05:00
kialam
71ee9d28b9 Add link to original gist and rename file. 2018-11-28 10:43:53 -05:00
kialam
d8d89d253d Remove instances of "md5" from the UI. 2018-11-28 10:43:53 -05:00
Ryan Petrello
a72f3d2f2f generate host_config_key using random UUIDs, not a time-based md5 hash 2018-11-28 10:43:45 -05:00
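
The change in miniature: a random (version 4) UUID carries no embedded timestamp, unlike an md5 digest seeded from the current time:

```python
import uuid

# 122 bits of randomness, nothing derived from the clock.
host_config_key = str(uuid.uuid4())
print(host_config_key)   # e.g. '9c0f66f1-...'
```
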
AlanCoding
1adeb833fb clean up unwanted data in activity stream of nodes 2018-11-28 10:41:32 -05:00
Ryan Petrello
a810aaf319 fix isolated nodes in the dev environment 2018-11-28 09:54:39 -05:00
softwarefactory-project-zuul[bot]
d9866c35b4 Merge pull request #2819 from ryanpetrello/fix-busted-tests
mock an HTTP call to fix busted unit tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 14:50:38 +00:00
Ryan Petrello
4e45c3a66c mock an HTTP call to fix busted unit tests 2018-11-28 09:17:50 -05:00
softwarefactory-project-zuul[bot]
87b55dc413 Merge pull request #2816 from ansible/jakemcdermott-smoke-break
update expected color vals of active tab in smoke test

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2018-11-28 04:02:31 +00:00
Jake McDermott
2e3949d612 update expected color vals of active tab in smoke test 2018-11-27 22:46:16 -05:00
softwarefactory-project-zuul[bot]
d928ccd922 Merge pull request #2799 from ansible/jakemcdermott-readonly-viz
don't conditionally hide workflow viz templates list button

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 03:39:51 +00:00
softwarefactory-project-zuul[bot]
a9c51b737c Merge pull request #2389 from ansible/workflow-convergence
Workflow convergence

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 22:04:50 +00:00
mabashian
51669c9765 Fixes hint/lint errors in workflow viz test 2018-11-27 16:12:42 -05:00
mabashian
17cc82d946 Ensure that selected row is cleared when adding new node after editing existing node 2018-11-27 16:12:42 -05:00
mabashian
10de5b6866 Fixes clicking on a wf in wf node. Also fixes editing wf in wf node with inv prompt 2018-11-27 16:12:42 -05:00
mabashian
55dc27f243 Set active tab to jobs when initially clicking a workflow_job_template type node 2018-11-27 16:12:42 -05:00
mabashian
6fc2ba3495 Fixes delete node shifting e2e test 2018-11-27 16:12:42 -05:00
mabashian
7bad01e193 Fixes e2e workflow visualizer tests 2018-11-27 16:12:42 -05:00
mabashian
62a1f10c42 Fix node pagination for project/inv 2018-11-27 16:12:42 -05:00
mabashian
3975a2ecdb fix linkpath class 2018-11-27 16:12:42 -05:00
Jake McDermott
bfa361c87f hide prompt button when not on jobs tab 2018-11-27 16:12:42 -05:00
Jake McDermott
d5f07a9652 hide inventory help message when not on jobs tab 2018-11-27 16:12:42 -05:00
Jake McDermott
65ec1d18ad skip missing inventory prompt value check when selecting workflow node 2018-11-27 16:12:42 -05:00
Jake McDermott
7b4521f980 workflow node prompt fixup
* use workflow model and endpoint when node is workflow
* always include template type in prompt data
* skip missing inventory checks when node is workflow
* skip checks for required credential fields when node is workflow
2018-11-27 16:12:42 -05:00
John Mitchell
3762ba7b24 add back in workflow_nodes in order to be able to use it for count of nodes 2018-11-27 16:12:42 -05:00
John Mitchell
762c882cd7 consume workflow maker total nodes label change 2018-11-27 16:12:42 -05:00
John Mitchell
343639d4b7 fix workflow maker total templates header to total nodes 2018-11-27 16:12:42 -05:00
John Mitchell
38dc0b8e90 fix workflow total jobs header to total nodes 2018-11-27 16:12:42 -05:00
mabashian
ed40ba6267 Fix searching on related fields 2018-11-27 16:12:42 -05:00
mabashian
54d56f2284 Fix node jobs column sorting. Adds arrows to potential workflow node links 2018-11-27 16:12:42 -05:00
mabashian
1477bbae30 Fixed error Cannot read property 'type' of undefined in console when selecting a project or inventory node 2018-11-27 16:12:42 -05:00
mabashian
625c6c30fc Fixed edge dropdown id 2018-11-27 16:12:42 -05:00
chris meyers
228e412478 simplify workflow job failure reason
* Log the more detailed reason for a workflow job failing but expose a
simplified reason to users via job_explanation
2018-11-27 16:12:42 -05:00
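
A sketch of the split described above; the logger name and explanation wording are illustrative, not AWX's actual strings:

```python
import logging

logger = logging.getLogger('awx.main.scheduler')   # illustrative name

def fail_workflow_job(job, detailed_reason):
    # The full cause goes to the service log for operators...
    logger.warning('workflow job %s failed: %s', job['id'], detailed_reason)
    # ...while users see only a short, stable explanation.
    job['status'] = 'failed'
    job['job_explanation'] = 'A step in this workflow failed.'
    return job
```
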
chris meyers
f8f2e005ba better comment for deciding parent's status 2018-11-27 16:12:42 -05:00
chris meyers
d8bf82a8cb add help_text to do_not_run workflow field 2018-11-27 16:12:41 -05:00
chris meyers
2eeca3cfd7 add example workflow run to docs 2018-11-27 16:12:41 -05:00
mabashian
28a4bbbe8a Fixed jshint errors that fell out of merge conflict 2018-11-27 16:12:41 -05:00
mabashian
1cfcaa72ad Fixed editNodeHelpMessage logic that was broken during merge conflict 2018-11-27 16:12:41 -05:00
AlanCoding
4c14727762 bump migration number 2018-11-27 16:12:41 -05:00
chris meyers
0c8dde9718 fix dfs_run_nodes()
* Tried to re-use the topological sort order to crawl the graph to find
the next node(s) to run. This is incorrect; we need to take into account
the fail/success of jobs and crawl the graph directionally.
2018-11-27 16:12:41 -05:00
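
A hedged sketch of the directional crawl described above (node shape and statuses hypothetical): start from the roots and follow only the success/failure/always edges selected by each finished job's result.

```python
def next_nodes_to_run(roots, result_of):
    to_run, stack = [], list(roots)
    while stack:
        node = stack.pop()
        result = result_of.get(node['id'])    # None -> job not run yet
        if result is None:
            to_run.append(node)               # runnable frontier
            continue
        taken = node['success'] if result == 'successful' else node['failure']
        stack.extend(taken + node['always'])  # crawl only the traversed paths
    return to_run
```
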
chris meyers
febf051748 do not mark ujt None nodes dnr
* Leave workflow nodes with no related unified job template as
do_not_run = False. If we mark them True, we can't differentiate between
an intentional decision not to take that path and a node skipped because
it has no valid related unified job template.
2018-11-27 16:12:41 -05:00
mabashian
56885a5da1 Remove reference to isStartNode and just check the id of the node to determine if it's our start node or not 2018-11-27 16:12:41 -05:00
mabashian
623cf54766 Added dagre and graphlib licenses 2018-11-27 16:12:41 -05:00
mabashian
a804c854bf Fix test failures and jshint errors 2018-11-27 16:12:41 -05:00
chris meyers
7b087d4a6c loop over dnr nodes by topological sort
* Perform topological sort on graph nodes before looping over them to
mark do not run. This guarantees that parent nodes will be processed
before dependent child nodes. The complexity of the sorting is
N; the complexity of marking the nodes is N*V.
2018-11-27 16:12:41 -05:00
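
A generic Kahn's-algorithm sketch of the ordering described above (graph representation hypothetical); it guarantees every parent is emitted before its children:

```python
from collections import deque

def topological_order(nodes, children):
    indegree = {n: 0 for n in nodes}
    for n in nodes:
        for c in children.get(n, []):
            indegree[c] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)  # roots first
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children.get(n, []):
            indegree[c] -= 1
            if indegree[c] == 0:          # all parents processed
                queue.append(c)
    return order

print(topological_order(['a', 'b', 'c'], {'a': ['b'], 'b': ['c']}))  # ['a', 'b', 'c']
```
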
chris meyers
cfa098479e Revert "optimize mark dnr nodes algorithm"
This reverts commit 6372c52772.
2018-11-27 16:12:41 -05:00
mabashian
3c510e6344 Fixed bug where root link became clickable. Fix workflow key on results page. 2018-11-27 16:12:41 -05:00
chris meyers
4c9a1d6b90 optimize mark dnr nodes algorithm
* Compute largest depth of each node and traverse graph by depth. This
allows us to check a node once, and only once, to determine if it needs
to be marked for do not run.
2018-11-27 16:12:41 -05:00
chris meyers
d1aa52a2a6 fix up mark dnr logic 2018-11-27 16:12:41 -05:00
chris meyers
f30f52a0a8 handle missing unified job template in workflow
* A workflow node without a unified_job_template is treated as a failed
job when deciding which path to execute.
* Remove optimization of marking dnr nodes due to it making the
algorithm incorrect.
2018-11-27 16:12:41 -05:00
mabashian
5b459e3c5d Code cleanup. Fixed bugs with workflow results page including details links 2018-11-27 16:12:41 -05:00
chris meyers
676c068b71 add job_description to failed workflow node
* When a workflow job fails because a workflow job node doesn't have a
related unified_job_template, note that with an error on the workflow
job's job_description
* When a workflow job fails because a failure path isn't defined, note
that on the workflow job's job_description
2018-11-27 16:12:41 -05:00
chris meyers
00d71cea50 detect workflow nodes without job templates
* Fail a workflow job run when encountering a Workflow Job Node with
no related job template.
2018-11-27 16:12:41 -05:00
mabashian
72263c5c7b Addresses a number of workflow related bugs 2018-11-27 16:12:41 -05:00
chris meyers
281345dd67 flake8 fix 2018-11-27 16:12:41 -05:00
chris meyers
1a85fcd2d5 update docs to include workflow failure semantic 2018-11-27 16:12:41 -05:00
chris meyers
c1171fe4ff treat canceled nodes as failed when processing wf
* When deciding what jobs to run next, treat canceled as failed.
* Also add tests.
2018-11-27 16:12:41 -05:00
chris meyers
d6a8ad0b33 treat canceled jobs in wf the same as failed jobs
* Also fix spelling mistake that caused workflows to be falsely marked
successful in the case of a canceled job.
2018-11-27 16:12:41 -05:00
mabashian
4a6a3b27fa Fixed a number of workflow visualizer bugs. Added loading spinners while data is being loaded/processed. 2018-11-27 16:12:41 -05:00
chris meyers
266831e26d add cycle unit test 2018-11-27 16:12:41 -05:00
chris meyers
a6e20eeaaa update wf done and failed tests 2018-11-27 16:12:41 -05:00
chris meyers
6529c1bb46 update done and fail detection for workflow
* Instead of traversing the workflow graph to determine whether a workflow
is done or has failed, loop through all the nodes in the graph and grab
only the relevant ones.
2018-11-27 16:12:41 -05:00
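
A one-pass sketch of the detection rewrite described above (field names hypothetical): no graph traversal, just a scan that keeps the relevant nodes.

```python
def workflow_done_and_failed(nodes):
    done = all(n['do_not_run'] or n['finished'] for n in nodes)
    failed = done and any(
        n['finished'] and n['status'] == 'failed' and not n['failure_handled']
        for n in nodes
    )
    return done, failed
```
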
mabashian
ae0d0db62c Added dagre to handle our workflow graph layout. Fixed various workflow related bugs. 2018-11-27 16:12:41 -05:00
chris meyers
b81d795c00 fix up dot graph generator
* Update graph dot generator to use the new efficient graph
2018-11-27 16:12:41 -05:00
chris meyers
1b87e11d8f flake8 2018-11-27 16:12:41 -05:00
chris meyers
8bb9cfd62a add dag tests 2018-11-27 16:12:41 -05:00
chris meyers
a176a4b8cf remove unused code 2018-11-27 16:12:41 -05:00
chris meyers
3f4d14e48d crawl entire graph when marking DNR
* From the root, the code was only going down the did-run path to find
nodes to mark DNR. This was incorrect. Now, we traverse the entire graph
each time to find nodes to mark DNR.
2018-11-27 16:12:41 -05:00
chris meyers
0499d419c3 more efficient graph processing
* Getting parent nodes from child was inefficient. Optimize it with a
hash table like we did for the getting of children.
* Getting leaf nodes was inefficient. Optimize it like we did getting
root nodes. A node is assumed to be a leaf node until it gets a child.
2018-11-27 16:12:41 -05:00
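
A sketch of the hash-table indexing described above (edge-list shape hypothetical): build the parent and child maps once so lookups are O(1), and assume each node is a leaf until it gains a child.

```python
def index_graph(edges):
    parents, children = {}, {}
    leaves = {n for edge in edges for n in edge}   # leaf until proven otherwise
    for parent, child in edges:
        children.setdefault(parent, []).append(child)
        parents.setdefault(child, []).append(parent)
        leaves.discard(parent)                     # gained a child: not a leaf
    return parents, children, leaves

parents, children, leaves = index_graph([(1, 2), (1, 3), (3, 4)])
print(leaves)   # {2, 4}
```
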
mabashian
700860e040 Fix long name tooltip. Fixed bug adding new node before finishing adding new link.
Fixed template list column layout. Ensure that we're getting 200 workflow nodes per GET request
2018-11-27 16:12:41 -05:00
chris meyers
3dadeb3037 remove print statements 2018-11-27 16:12:41 -05:00
chris meyers
16a60412cf optimization fix
* WorkflowDAG accepts workflow job template and workflow jobs for which
to build a graph out of the nodes. The optimized query for each is
different. This changeset adds the differing queries for a workflow job.
2018-11-27 16:12:41 -05:00
chris meyers
9f3e272665 optimize cycle detection 2018-11-27 16:12:41 -05:00
mabashian
b84fc3b111 Fixes for post-rebase bugs 2018-11-27 16:12:41 -05:00
chris meyers
e1e8d3b372 bump migration 2018-11-27 16:12:40 -05:00
mabashian
05f4d94db2 Fixed several bugs including credential prompting. Added logic to bring links/nodes to the forefront when you hover over them in case there's some overlap 2018-11-27 16:12:40 -05:00
mabashian
61fb3eb390 First pass at implementing better node placement in the workflow graph 2018-11-27 16:12:40 -05:00
mabashian
7b95d2114d Implements workflow convergence without proper layout 2018-11-27 16:12:40 -05:00
chris meyers
07db7a41b3 more flake8 2018-11-27 16:12:40 -05:00
chris meyers
1120f8b1e1 try2 at the devil flake8 2018-11-27 16:12:40 -05:00
chris meyers
17b3996568 fix flake8 anyway I can 2018-11-27 16:12:40 -05:00
chris meyers
584b3f4e3d remove workflow test
* We now handle workflows with jobs that have errored. We treat them the
same as a failure result. Before, we would abort the workflow when we
encountered an error.
2018-11-27 16:12:40 -05:00
chris meyers
f8c53f4933 handle job error state in convergence 2018-11-27 16:12:40 -05:00
chris meyers
6e40e9c856 handle edge case ring cycle 2018-11-27 16:12:40 -05:00
chris meyers
2f9dc4d075 remove relationship in view if cycle detected 2018-11-27 16:12:40 -05:00
chris meyers
9afc38b714 fixup migrations 2018-11-27 16:12:40 -05:00
chris meyers
dfccc9e07d rework wf cycle detection for convergence 2018-11-27 16:12:40 -05:00
chris meyers
7b22d1b874 cycle detection when multiple parents 2018-11-27 16:12:40 -05:00
mabashian
29b4979736 Completed work necessary to support editing workflow links and nodes separately. Added hover and tooltip to links 2018-11-27 16:12:40 -05:00
mabashian
87d6253176 Decouple editing a wf node with editing a node link 2018-11-27 16:12:40 -05:00
chris meyers
1e10d4323f update docs 2018-11-27 16:12:40 -05:00
chris meyers
4111e53113 correctly name migration to align with 3.4.0 2018-11-27 16:12:40 -05:00
chris meyers
02df0c29e9 merge artifacts deterministically 2018-11-27 16:12:40 -05:00
chris meyers
475c90fd00 prevent job launching twice 2018-11-27 16:12:40 -05:00
chris meyers
2742b00a65 flake8 2018-11-27 16:12:40 -05:00
chris meyers
ea29e66a41 fix workflow finish state detector
* Take into account the new do_not_run field when finding if a workflow
is finished. If do_not_run is True then the node is considered finished.
2018-11-27 16:12:40 -05:00
chris meyers
6ef6b649e8 cleaner code 2018-11-27 16:12:40 -05:00
chris meyers
9bf2a49e0f save state 2018-11-27 16:12:40 -05:00
chris meyers
914892c3ac all parents should finish before starting child 2018-11-27 16:12:40 -05:00
chris meyers
77661c6032 short circuit performance optimization 2018-11-27 16:12:40 -05:00
chris meyers
b4fc585495 stop DNR propagation on always path
* This makes sure DNR propagation stops when a job is successful, down
an always path
2018-11-27 16:12:40 -05:00
chris meyers
ff6db37a95 correct stop DNR propagation
* If a child has a parent that is not in the finished state, then do not
propagate the DNR to the child in question.
* If a parent is in a finished state, do not propagate the DNR to the
child if the path to the child is traversed (based on the parent job
status).
2018-11-27 16:12:40 -05:00
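
A sketch of the propagation rule described above (statuses and helpers hypothetical): a child is marked DNR only once every parent has finished and no finished parent's result selects a path reaching that child.

```python
FINISHED = {'successful', 'failed', 'canceled', 'error'}

def should_mark_dnr(child, parents, path_taken):
    if any(p['status'] not in FINISHED for p in parents):
        return False                      # an unfinished parent: decide later
    # path_taken(parent, child) -> True if the parent's result routes to child
    return not any(path_taken(p, child) for p in parents)
```
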
chris meyers
1a064bdc59 satisfy flake8 2018-11-27 16:12:40 -05:00
chris meyers
ebabec0dad always find and mark dnr nodes 2018-11-27 16:12:40 -05:00
chris meyers
3506b9a7d8 Revert "mark dnr field read only"
This reverts commit 3dbc52d91223167683fd01174222bd6c22813dbd.

Workflow Job Nodes are read only already
2018-11-27 16:12:40 -05:00
chris meyers
cc374ca705 update debug dot graph to output dnr data 2018-11-27 16:12:40 -05:00
chris meyers
ad56a27cc0 mark dnr field read only 2018-11-27 16:12:40 -05:00
chris meyers
779e1a34db remove dnr field from jt wf node 2018-11-27 16:12:40 -05:00
chris meyers
447dfbb64d only visit nodes once for dnr 2018-11-27 16:12:40 -05:00
chris meyers
a9365a3967 code cleanup 2018-11-27 16:12:40 -05:00
chris meyers
f5c10f99b0 support workflow convergence nodes
* remove convergence restriction in API
* change task manager logic to be aware of and support convergence nodes
2018-11-27 16:12:40 -05:00
softwarefactory-project-zuul[bot]
c53ccc8d4a Merge pull request #2813 from shanemcd/update-translation-templates
Extract latest strings from source code for translations

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 17:53:49 +00:00
chris meyers
e214dcac85 add slowdown, final status delay, and debug
* slow down replay by using --speed 0.1 (decimal values accepted)
* optionally specify a delay between the event and the final status
* debug mode where you can step through emitting the job events
2018-11-27 12:45:49 -05:00
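
An argparse sketch of the three replayer knobs named above; the flag names other than --speed are assumptions, not the command's actual interface:

```python
import argparse
import time

parser = argparse.ArgumentParser(description='replay recorded job events')
parser.add_argument('--speed', type=float, default=1.0,
                    help='playback multiplier; 0.1 slows replay 10x')
parser.add_argument('--final-status-delay', type=float, default=0.0,
                    help='seconds between the last event and the final status')
parser.add_argument('--debug', action='store_true',
                    help='step through events one keypress at a time')
args = parser.parse_args(['--speed', '0.1'])

for event in ('playbook_on_start', 'runner_on_ok'):
    if args.debug:
        input('press enter to emit %s' % event)
    time.sleep(0.01 / args.speed)             # inter-event gap, scaled by speed
    print('emit', event)

time.sleep(args.final_status_delay)           # optional pause before final status
print('emit final job status')
```
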
softwarefactory-project-zuul[bot]
de77f6bd1f Merge pull request #2809 from jlmitch5/fixManagmentJobScheduleEditLink
fix management job schedule edit link button

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 17:33:07 +00:00
Shane McDonald
18c4771a38 Extract latest strings from source code for translations 2018-11-27 12:31:52 -05:00
chris meyers
042c7ffe5b emit job status lifecycle in event replayer 2018-11-27 11:54:39 -05:00
John Mitchell
f25c6effa3 fix management job schedule edit link button 2018-11-27 11:35:13 -05:00
softwarefactory-project-zuul[bot]
46d303ceee Merge pull request #2805 from ryanpetrello/wcag
raise contrast on a few key page elements to pass WCAG contrast checks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 15:45:15 +00:00
Ryan Petrello
b1bd87bcd2 raise contrast on a few key page elements to pass WCAG contrast checks 2018-11-27 10:20:24 -05:00
softwarefactory-project-zuul[bot]
50b0a5a54d Merge pull request #2756 from wenottingham/logged-out-damn-spot
Add a message to the resulting login dialog when a user explicitly logs out.

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2018-11-27 15:13:45 +00:00
Jake McDermott
4f731017ea don't conditionally hide workflow viz templates list button 2018-11-26 16:21:44 -05:00
softwarefactory-project-zuul[bot]
9eb2c02e92 Merge pull request #2788 from AlanCoding/container_names
Set fixed container names

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-26 18:53:41 +00:00
softwarefactory-project-zuul[bot]
55e5432027 Merge pull request #2794 from shanemcd/devel
Add a note about `--force-with-lease` to contributing documentation.

Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
             https://github.com/marshmalien
2018-11-26 16:55:31 +00:00
Shane McDonald
a9ae4dc5a8 Clean up after yourselves, people! 2018-11-26 11:31:58 -05:00
Shane McDonald
0398b744a1 Add a note about --force-with-lease to contributing documentation. 2018-11-26 11:31:36 -05:00
AlanCoding
012511e4f0 prohibit relaunching sliced jobs with changed count 2018-11-26 10:54:19 -05:00
softwarefactory-project-zuul[bot]
4483d0320f Merge pull request #2791 from ryanpetrello/fix-iso-installs
only override django for FIPS in environments where Django is installed

Reviewed-by: Yanis Guenane
             https://github.com/Spredzy
2018-11-26 15:36:23 +00:00
Ryan Petrello
32e7ddd43a only override django for FIPS in environments where Django is installed
isolated awx installs don't have this tooling, and so they don't need
this specific monkey-patch
2018-11-26 09:17:48 -05:00
AlanCoding
0b32733dc8 set fixed container names 2018-11-26 08:26:57 -05:00
Christian Adams
d310c48988 Merge pull request #2758 from rooftopcellist/secure_current_user
make current_user ck secure and httponly
2018-11-21 15:26:35 -05:00
softwarefactory-project-zuul[bot]
5a6eefaf2c Merge pull request #2780 from kialam/update-readme-install-exact
Update README to include saving exact dependencies using npm.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 19:12:50 +00:00
kialam
8c5a94fa64 Update readme to include saving exact dependencies using npm. 2018-11-21 13:35:56 -05:00
adamscmRH
05d988349c make current_user ck secure and httponly 2018-11-21 10:36:35 -05:00
softwarefactory-project-zuul[bot]
bb1473f67f Merge pull request #2762 from kialam/fix-2554-converted-jts-from-sjt
UI WF results relaunch: handle any relaunch errors.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 15:01:17 +00:00
softwarefactory-project-zuul[bot]
79f483a66d Merge pull request #2772 from ansible/jakemcdermott-update
add package uninstall example to readme

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 03:11:45 +00:00
Jake McDermott
1b09a0230d Update README.md 2018-11-20 21:26:11 -05:00
kialam
f3344e9816 Fix failing tests. 2018-11-20 18:35:29 -05:00
softwarefactory-project-zuul[bot]
554e4d45aa Merge pull request #2763 from kialam/add-npm-precheck-script
Add precheck script.

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-11-20 21:12:58 +00:00
softwarefactory-project-zuul[bot]
bfe86cbc95 Merge pull request #2753 from jlmitch5/fixInstanceGroupLookup
fix instance group lookup in orgs scrolling behavior

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 20:05:14 +00:00
softwarefactory-project-zuul[bot]
29e4160d3e Merge pull request #2764 from ryanpetrello/dispatcher-sos
add dispatcher status to the sosreport

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-11-20 18:06:05 +00:00
Ryan Petrello
b4f906ceb1 add dispatcher status to the sosreport 2018-11-20 12:12:02 -05:00
kialam
e099fc58c7 Add precheck script. 2018-11-20 11:48:25 -05:00
softwarefactory-project-zuul[bot]
e342ef5cfa Merge pull request #2719 from AlanCoding/project_is_failed
Fail job run if project is failed

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 16:45:56 +00:00
John Mitchell
8997fca457 add sanitize 2018-11-20 11:41:58 -05:00
kialam
435ab4ad67 Handle any relaunch errors. 2018-11-20 11:25:36 -05:00
softwarefactory-project-zuul[bot]
d7a28dcea4 Merge pull request #2755 from kialam/fix-invalid-start-date
Scheduler: Start Date begins 1 day ahead.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 15:36:47 +00:00
softwarefactory-project-zuul[bot]
5f3024d395 Merge pull request #2703 from kialam/fix-2552-org-jt-list-websocket
Fix Organizations TemplateList websockets

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 15:01:05 +00:00
softwarefactory-project-zuul[bot]
3126480d1e Merge pull request #2754 from kdelee/rename-schema-change-detector
Rename schema job to be more clear about its purpose

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 14:22:08 +00:00
Elijah DeLee
ca84d312ce Rename schema job to be more clear about its purpose
The make target fails when it detects schema changes, not when schema is invalid.

Also update CONTRIBUTING.md to include information about zuul jobs.
2018-11-20 07:42:10 -06:00
AlanCoding
b790b50a1a Fail job run if project is failed
provide message in traceback and explanation fields

add log messages for dependency spawns
2018-11-20 07:55:37 -05:00
softwarefactory-project-zuul[bot]
6df26eb7a3 Merge pull request #2750 from ryanpetrello/timeout-modal-remove-close-button
remove the x icon on the session timeout modal (it doesn't work)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 08:50:49 +00:00
softwarefactory-project-zuul[bot]
fccaebdc8e Merge pull request #2342 from ansible/workflow_inventory
Workflow level inventory

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 03:19:40 +00:00
Jake McDermott
45728dc1bb update workflow docs 2018-11-19 17:35:52 -05:00
AlanCoding
9cd8aa1667 further update of workflow docs for inventory feature 2018-11-19 17:35:39 -05:00
Jake McDermott
b74597f4dd fix bug when reverting non-default inventory prompts 2018-11-19 17:35:26 -05:00
Bill Nottingham
ce3d3c3490 Add a message to the resulting login dialog when a user explicitly logs out. 2018-11-19 17:16:55 -05:00
kialam
a9fe1ad9c1 Start Date begins 1 day ahead. 2018-11-19 16:20:46 -05:00
John Mitchell
22e7083d71 fix instance group lookup 2018-11-19 15:38:11 -05:00
Jake McDermott
951515da2f disable next and show warning when default workflow inventory is removed 2018-11-19 15:16:46 -05:00
Ryan Petrello
9e2f4cff08 remove the x icon on the session timeout modal (it doesn't work) 2018-11-19 14:37:49 -05:00
softwarefactory-project-zuul[bot]
0c1a4439ba Merge pull request #2745 from mabashian/2428-workflow-edit-patch
Use patch instead of put when updating a wfjt

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 19:23:26 +00:00
John Mitchell
c2a1603a56 Merge pull request #2746 from jlmitch5/navColorContrast
fix color contrast of nav
2018-11-19 13:50:04 -05:00
Jake McDermott
13e715aeb9 handle null inventory value on workflow launch 2018-11-19 12:53:21 -05:00
Jake McDermott
2bc75270e7 dry up org permissions test 2018-11-19 12:53:18 -05:00
Jake McDermott
fabe56088d fix workflow e2e tests again 2018-11-19 12:53:14 -05:00
Jake McDermott
0e3bf6db09 open workflow visualizer from form 2018-11-19 12:53:10 -05:00
Jake McDermott
c6a7d0859d add workflow jobs to inventory list status popup 2018-11-19 12:53:05 -05:00
Jake McDermott
fed00a18ad show workflow jobs on inventory completed jobs view 2018-11-19 12:53:02 -05:00
Jake McDermott
ecbdc55955 show related workflow counts on inventory deletion warning prompt 2018-11-19 12:52:57 -05:00
AlanCoding
bca9bcf6dd fix prompts contradiction: should be non-functional change 2018-11-19 12:52:54 -05:00
Jake McDermott
018a8e12de fix lookup message 2018-11-19 12:52:50 -05:00
AlanCoding
e0a28e32eb Tweak of error message wording for model-specific name 2018-11-19 12:52:47 -05:00
AlanCoding
c105885c7b Do not count template variables as prompted 2018-11-19 12:52:43 -05:00
Jake McDermott
89a0be64af fix bug with opening visualizer from list page 2018-11-19 12:52:38 -05:00
AlanCoding
c1d85f568c fix survey vars bug and inventory defaults display 2018-11-19 12:52:35 -05:00
Jake McDermott
75566bad39 fix workflow e2e tests 2018-11-19 12:52:32 -05:00
Jake McDermott
75c2d1eda1 add inventory help messages for workflow node edit 2018-11-19 12:52:29 -05:00
Jake McDermott
9a4667c6c7 add static messages to workflow inventory lookups 2018-11-19 12:52:26 -05:00
Jake McDermott
9917841585 open and close workflow visualizer from list 2018-11-19 12:52:23 -05:00
Jake McDermott
fbc3cd3758 redirect to workflow visualizer on workflow creation 2018-11-19 12:52:19 -05:00
Jake McDermott
d65687f14a add workflow inventory prompt to scheduler 2018-11-19 12:52:16 -05:00
Jake McDermott
4ea7511ae8 make workflow prompt inventory step optional 2018-11-19 12:52:12 -05:00
Jake McDermott
a8d22b9459 show correct ask_inventory state 2018-11-19 12:52:08 -05:00
Jake McDermott
f8453ffe68 accept inventory_id in workflow launch requests 2018-11-19 12:52:05 -05:00
Jake McDermott
38f43c147a fix exploding unit test 2018-11-19 12:52:01 -05:00
Jake McDermott
38fbcf8ee6 add missing api fields 2018-11-19 12:51:58 -05:00
Jake McDermott
2bd25b1fba add inventory prompt to wf editor 2018-11-19 12:51:54 -05:00
AlanCoding
7178fb83b0 migration number bumped again 2018-11-19 12:51:51 -05:00
Jake McDermott
2376013d49 add prompt on launch for workflow inventory 2018-11-19 12:51:47 -05:00
Jake McDermott
a94042def5 display inventory on workflow job details 2018-11-19 12:51:44 -05:00
Jake McDermott
2d2164a4ba add inventory lookup to workflow detail view 2018-11-19 12:51:41 -05:00
AlanCoding
3c980d373c bump migration number 2018-11-19 12:51:37 -05:00
AlanCoding
5b3ce1e999 add test for WFJT schedule inventory prompting 2018-11-19 12:51:33 -05:00
AlanCoding
6d4469ebbd handle inventory for WFJT editing RBAC 2018-11-19 12:51:29 -05:00
AlanCoding
eb58a6cc0e add test for launching with deleted inventory 2018-11-19 12:51:26 -05:00
AlanCoding
a60401abb9 fix bug with WFJT launch validation 2018-11-19 12:51:22 -05:00
AlanCoding
1203c8c0ee feature docs for workflow-level inventory 2018-11-19 12:51:18 -05:00
AlanCoding
0c52d17951 fix bug, handle RBAC, add test 2018-11-19 12:51:13 -05:00
AlanCoding
44fa3b18a9 Adjust prompt logic and views to accept workflow inventory 2018-11-19 12:50:57 -05:00
AlanCoding
33328c4ad7 initial model changes for workflow inventory 2018-11-19 12:50:29 -05:00
John Mitchell
11adcb9800 fix color contrast of nav 2018-11-19 12:35:55 -05:00
softwarefactory-project-zuul[bot]
932a1c6386 Merge pull request #2743 from ryanpetrello/survey_spec_stream
include survey_spec in activity stream

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 17:33:33 +00:00
softwarefactory-project-zuul[bot]
d1791fc48c Merge pull request #2585 from wenottingham/licensed-to-illegibility
Add a test that checks the included license files against included dependencies

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 17:22:49 +00:00
AlanCoding
5b274cfc2a include survey_spec in activity stream 2018-11-19 12:07:48 -05:00
mabashian
1d7d2820fd Use patch instead of put when updating a wfjt 2018-11-19 12:04:35 -05:00
Bill Nottingham
605c1355a8 Add updates to UI license grabber from jlmitch5. 2018-11-19 12:00:00 -05:00
Bill Nottingham
a6e00df041 Clean up included licenses such that tests pass.
Rename ui licenses to '.txt' for consistency.
Update bundled code as appropriate.
Remove dead licenses and dev-only UI licenses.
Add additional python licenses from Azure & related updates.
2018-11-19 12:00:00 -05:00
Bill Nottingham
67219e743f Add a test that we are including proper license files for all requirements. 2018-11-19 11:59:59 -05:00
softwarefactory-project-zuul[bot]
67273ff8c3 Merge pull request #2742 from jlmitch5/fixScrollTop
fix scroll top

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 16:56:27 +00:00
softwarefactory-project-zuul[bot]
a3bbe308a8 Merge pull request #2741 from matburt/fix_project_admin_add
Fix a bug that did not allow project_admins to create a project.

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-19 16:53:16 +00:00
kialam
f1b5bbb1f6 Add websocket listener to Org > JT list view. 2018-11-19 11:46:07 -05:00
Matthew Jones
7330102961 Remove a warning message for dispatcher pool for tests 2018-11-19 11:19:57 -05:00
Matthew Jones
61916b86b5 Fix a bug that did not allow project_admins to create a project.
This was a regression from previous functionality.
2018-11-19 11:05:48 -05:00
John Mitchell
35d5bde690 fix scroll top 2018-11-19 11:03:03 -05:00
softwarefactory-project-zuul[bot]
b17c477af7 Merge pull request #2737 from AlanCoding/death_to_groups
Implement deprecation of groups_with_active_failures

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 15:31:26 +00:00
softwarefactory-project-zuul[bot]
01d891cd6e Merge pull request #2738 from ryanpetrello/inv-update-as
don't send activity stream create for unregistered models

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 15:02:33 +00:00
Ryan Petrello
e36335f68c only send activity stream create for registered unified jobs
see https://github.com/ansible/awx/issues/2733
2018-11-19 09:44:12 -05:00
softwarefactory-project-zuul[bot]
39369c7721 Merge pull request #2702 from kdelee/schema_validation
Pre-Merge Schema validation

Reviewed-by: Matthew Jones <mat@matburt.net>
             https://github.com/matburt
2018-11-19 14:26:42 +00:00
softwarefactory-project-zuul[bot]
4fbc39991d Merge pull request #2730 from AlanCoding/slice_license
License check for slicing >1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 14:18:10 +00:00
softwarefactory-project-zuul[bot]
91075e8332 Merge pull request #2732 from AlanCoding/middle
Include migration middleware in timings and profiling

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 13:59:34 +00:00
AlanCoding
0ed50b380a Implement deprecation of groups_with_active_failures 2018-11-19 08:29:46 -05:00
AlanCoding
53716a4c5a include migration middleware in timings and profiling 2018-11-18 10:55:37 -05:00
AlanCoding
f30bbad07d License check for slicing >1 2018-11-17 22:48:46 -05:00
softwarefactory-project-zuul[bot]
b923efad37 Merge pull request #2717 from wenottingham/make-that-network-work
Add ncclient for use by networking modules.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 22:02:29 +00:00
softwarefactory-project-zuul[bot]
3b36372880 Merge pull request #2716 from ryanpetrello/insights-user-agent
add a user agent for requests to Insights

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 21:39:15 +00:00
Ryan Petrello
661cc896a9 add a user agent for requests to Insights 2018-11-16 16:25:08 -05:00
Bill Nottingham
e9c3623dfd specify a version 2018-11-16 15:34:42 -05:00
Bill Nottingham
c65b362841 Add ncclient for use by networking modules. 2018-11-16 15:21:04 -05:00
softwarefactory-project-zuul[bot]
6a0e11a233 Merge pull request #2713 from AlanCoding/system_env_vars
Minor cleanup of task environment vars

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 19:35:05 +00:00
AlanCoding
7417f9925f Minor cleanup of task environment vars 2018-11-16 13:28:42 -05:00
softwarefactory-project-zuul[bot]
2f669685d8 Merge pull request #2707 from ryanpetrello/min-max-workers
prevent the dispatcher from using a nonsensical max_workers value

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 15:41:42 +00:00
softwarefactory-project-zuul[bot]
86510029e1 Merge pull request #2692 from kialam/fix-2586-ldap-dropdown-fix
Fix LDAP and TACACS+ dropdowns

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 15:28:09 +00:00
Ryan Petrello
37234ca66e prevent the dispatcher from using a nonsensical max_workers value 2018-11-16 10:16:39 -05:00
Elijah DeLee
4ae1fdef05 Ignore differences in whitespace for schema validation 2018-11-16 09:47:33 -05:00
kialam
95e94a8ab5 Some styling changes; fix Server dropdown.
- Left-align first dropdown on GitHub and LDAP tabs.
- Add border to give some whitespace
2018-11-15 20:46:34 -05:00
kialam
ea35d9713a Fix empty dropdowns for both LDAP and TACACS tabs. 2018-11-15 20:46:34 -05:00
Elijah DeLee
949cf53b89 Use r in front of regex string to make flake8 happy
This means we should not escape the \ character in the same way
2018-11-15 17:29:27 -05:00
Elijah DeLee
a68e22b114 Add tox target to detect schema changes
Fetches reference schema from public bucket
Still need to define a method for updating the reference schema on merge.
2018-11-15 16:25:13 -05:00
Elijah DeLee
d70cd113e1 Reduce duplicated logic for genschema target 2018-11-15 15:29:35 -05:00
Matthew Jones
f737fc066f Generate schema suitable for comparing for schema changes 2018-11-15 15:29:35 -05:00
softwarefactory-project-zuul[bot]
9b992c971e Merge pull request #2672 from AlanCoding/sliceanator
Fix bug with non-sliced JT job spawn

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 20:16:19 +00:00
softwarefactory-project-zuul[bot]
e0d59766e0 Merge pull request #2696 from AlanCoding/bulk_del_inv
Pre-delete bulk delete related, fix parallel request conflicts

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-15 19:58:49 +00:00
AlanCoding
a9d88f728d Pre-delete bulk delete related, fix parallel request conflicts 2018-11-15 11:39:48 -05:00
softwarefactory-project-zuul[bot]
e24d63ea9a Merge pull request #2665 from ryanpetrello/fips
add support for running awx w/ FIPS enabled

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 15:18:07 +00:00
softwarefactory-project-zuul[bot]
1833a5b78b Merge pull request #2667 from saito-hideki/issue/admin_with_docker-compose
Fixed issue where admin_user and password changes are not reflected

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 14:24:43 +00:00
softwarefactory-project-zuul[bot]
4213a00548 Merge pull request #2686 from AlanCoding/fast_workflows
Add task manager rescheduling hooks, de-duplication, lifecycle tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 14:19:09 +00:00
softwarefactory-project-zuul[bot]
7345512785 Merge pull request #2690 from ryanpetrello/ldap-long-name
truncate user first/last name if it exceeds 30 chars on LDAP auth

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-14 22:29:43 +00:00
Ryan Petrello
d3dc126d45 truncate user first/last name if it exceeds 30 chars on LDAP auth 2018-11-14 15:51:43 -05:00
AlanCoding
758a488aee Add task manager rescheduling hooks, de-duplication, lifecycle tests 2018-11-14 11:31:34 -05:00
softwarefactory-project-zuul[bot]
c0c358b640 Merge pull request #2682 from ryanpetrello/job-m2m-activity-stream
include M2M labels and credentials in Job creation activity stream

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-14 16:05:30 +00:00
Ryan Petrello
49f4ed10ca include M2M labels and credentials in Job creation activity stream 2018-11-14 10:36:01 -05:00
softwarefactory-project-zuul[bot]
5e18eccd19 Merge pull request #2671 from wenottingham/you-want-to-change-things---well---you-can't
Fix tooltip referring to PROJECTS_ROOT.

Reviewed-by: Bill Nottingham
             https://github.com/wenottingham
2018-11-13 21:01:19 +00:00
softwarefactory-project-zuul[bot]
c4e9daca4e Merge pull request #2669 from paradegoat/issue-2370
Updated JT project-related error message

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-13 20:22:31 +00:00
AlanCoding
5443e10697 Fix bug with non-sliced JT job spawn 2018-11-13 15:20:40 -05:00
Ryan Petrello
a3f9c0b012 warn about FIPS mode if the Django version changes 2018-11-13 15:04:36 -05:00
Geoff Humphreys
8517053934 Updated JT project-related error message
Signed-off-by: Geoff Humphreys <humphreys.geoff@gmail.com>
2018-11-13 14:56:22 -05:00
Bill Nottingham
0506968d4f Fix tooltip referring to PROJECTS_ROOT.
This can't be changed in AWX settings; it needs to be done in settings.py
either on the filesystem, or overridden in a k8s/openshift configmap.
2018-11-13 12:14:24 -05:00
softwarefactory-project-zuul[bot]
3bb91b20d0 Merge pull request #2663 from AlanCoding/unified_jobs_flake
Get rid of star import in unified_jobs.py

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-13 15:37:04 +00:00
softwarefactory-project-zuul[bot]
268b1ff436 Merge pull request #2662 from wwitzel3/devel
move root views new file

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-13 15:33:23 +00:00
Hideki Saito
f16a72081a Fixed issue where admin_user and password changes are not reflected
- Changing admin_user and admin_password had no effect when using docker-compose (#2666)
2018-11-13 18:21:18 +09:00
Ryan Petrello
cceac8d907 support PKCS8-formatted keys to enable FIPS compliance
see: https://access.redhat.com/solutions/1519083
2018-11-12 16:21:57 -05:00
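
A hedged sketch using the `cryptography` package (not necessarily AWX's code path): serialize a private key in PKCS#8 rather than the legacy "traditional OpenSSL" format, which is what FIPS-mode tooling expects per the linked article:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,      # not TraditionalOpenSSL
    encryption_algorithm=serialization.NoEncryption(),
)
print(pem.decode().splitlines()[0])   # -----BEGIN PRIVATE KEY-----
```
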
adamscmRH
8d012de3e2 monkey-patch _digest for fips 2018-11-12 15:32:23 -05:00
AlanCoding
cee7ac9511 get rid of star import in unified_jobs.py 2018-11-12 13:38:58 -05:00
softwarefactory-project-zuul[bot]
36faaf4720 Merge pull request #2656 from iplaman/fix-postgres96
Fix default Postgresql version to 9.6

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-12 18:34:48 +00:00
Wayne Witzel III
33b8e7624b move root views new file 2018-11-12 12:36:16 -05:00
Idan Bidani
a213e01491 updating default Postgresql version to 9.6 2018-11-10 18:27:22 -05:00
softwarefactory-project-zuul[bot]
a00ed8e297 Merge pull request #2646 from jlmitch5/fixCredentialKindTranslation
Fix credential kind translation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-10 03:06:38 +00:00
softwarefactory-project-zuul[bot]
aeaebcd81a Merge pull request #2572 from AlanCoding/coalesce
Coalesce host and group Activity Stream deletion entries

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-09 21:04:07 +00:00
softwarefactory-project-zuul[bot]
b5849f3712 Merge pull request #2639 from ansible/enhancement-inv_summary
Update UJT with inv summary field and update schedule base route to include object to be scheduled

Reviewed-by: Chris Meyers
             https://github.com/chrismeyersfsu
2018-11-09 19:06:36 +00:00
AlanCoding
658f87953e coalesce data without setting 2018-11-09 14:00:45 -05:00
AlanCoding
5562e636ea Coalesce host and group A.S. deletion entries 2018-11-09 13:58:31 -05:00
John Mitchell
80fcdae50b update credential type usage to kind instead of name 2018-11-09 10:36:09 -05:00
softwarefactory-project-zuul[bot]
2c5f209996 Merge pull request #2631 from kialam/fix/2533-dashboard-hover
Fix Dashboard hover-over action so it shows the correct info

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-09 05:27:11 +00:00
softwarefactory-project-zuul[bot]
692b55311e Merge pull request #2645 from ryanpetrello/unbound
fix an unbound variable

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 20:40:57 +00:00
Ryan Petrello
10667fc855 fix an unbound variable
see: https://github.com/ansible/awx/issues/2642
2018-11-08 15:14:47 -05:00
softwarefactory-project-zuul[bot]
1fc33b551d Merge pull request #2643 from kialam/fix/2606
Fix DETAILS link in WF viz not working until after the job has run

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 20:14:45 +00:00
kialam
729256c3d1 Fix DETAILS link in WF viz not working until after the job has run
- Make an additional GET request when needed to surface the job type so that we can properly redirect the user to the detail page.
2018-11-08 13:23:32 -05:00
chris meyers
23e1feba96 fill in summary inv for sched only when needed
* If the schedule's inv exists, then the inv summary will be filled in with
our generic summary filler. Otherwise, if the related unified job has
an inventory, fill in the inv summary with that, explicitly.
2018-11-08 12:43:47 -05:00
John Mitchell
e3614c3012 update to using new inventory id from summary fields of UJT if applicable 2018-11-08 12:22:02 -05:00
chris meyers
f37391397e add inventory to schedule summary fields
* Use the same logic that related inventory uses. If there is an
inventory that overrides the inventory on the unified job template, then
summarize that field. Otherwise, use the inventory on the unified job template
being scheduled.
2018-11-08 11:47:31 -05:00
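
A sketch of the precedence described above (shapes hypothetical): the schedule's own inventory, when set, wins over the unified job template's inventory in the summary fields.

```python
def schedule_summary_inventory(schedule):
    inventory = (schedule.get('inventory')
                 or schedule['unified_job_template'].get('inventory'))
    if inventory is None:
        return None
    return {'id': inventory['id'], 'name': inventory['name']}
```
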
softwarefactory-project-zuul[bot]
6a8454f748 Merge pull request #2352 from ansible/workflows_squared
[Feature] Allow use of workflows inside of workflows

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 16:28:37 +00:00
softwarefactory-project-zuul[bot]
d1328c7625 Merge pull request #2044 from cdvv7788/AWX-2030
Add command to revoke tokens (https://github.com/ansible/awx/issues/2030)

Reviewed-by: Cristian Vargas
             https://github.com/cdvv7788
2018-11-08 15:15:44 +00:00
John Mitchell
d1cce109fb update schedule base route to include resource being scheduled 2018-11-08 09:41:06 -05:00
kialam
32bd8b6473 Fix Dashboard hover-over action so it shows the correct info.
- Refactor our Dashboard directive to display the dashboard graph's x-axis according to the new changes in NVD3 v1.8.1. See https://github.com/nvd3-community/nvd3/blob/gh-pages/examples/lineChart.html for reference.
2018-11-08 09:15:54 -05:00
Ryan Petrello
001bd4ca59 resolve a few token revocation issues, and add tests 2018-11-08 08:15:24 -05:00
softwarefactory-project-zuul[bot]
541b503e06 Merge pull request #2634 from wwitzel3/views-inventory
inventory views

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 12:31:14 +00:00
Cristian Vargas
093c29e315 Add command to revoke tokens
Signed-off-by: Cristian Vargas <cristian@swapps.co>
2018-11-08 07:28:21 -05:00
softwarefactory-project-zuul[bot]
eec7d7199b Merge pull request #2614 from stokkie90/devel
Fixed bug where all group variables are not included

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 12:27:30 +00:00
Rick Stokkingreef
7dbb862673 Fixed test cases 2018-11-08 12:07:07 +01:00
softwarefactory-project-zuul[bot]
be1422d021 Merge pull request #2629 from ansible/org-view-ui-tests
Org view ui tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 06:32:59 +00:00
Wayne Witzel III
459ac0e5d9 inventory views 2018-11-07 22:06:33 -05:00
Wayne Witzel III
16c9d043c0 Merge pull request #2344 from wwitzel3/views-organization
organization views
2018-11-07 21:33:14 -05:00
Wayne Witzel III
1b465c4ed9 remove duplicate BaseUsersList 2018-11-07 18:18:41 -05:00
Wayne Witzel III
198a0db808 move organization views to their own file 2018-11-07 18:18:41 -05:00
softwarefactory-project-zuul[bot]
91dda0a164 Merge pull request #2625 from abedwardsw/feature/vmware_groupby_custom_field_excludes
update to latest vmware_inventory.py support for groupby_custom_field_excludes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 21:03:05 +00:00
softwarefactory-project-zuul[bot]
9341480209 Merge pull request #2627 from jakemcdermott/fix-form-cred-lookups
fix project and adhoc command form lookups when many credential types exist

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 20:43:36 +00:00
Adam Edwards
21877b3378 update to latest vmware_inventory.py with support for groupby_custom_field_excludes
e364d717cb/contrib/inventory/vmware_inventory.py

Signed-off-by: Adam Edwards <adam@middleware360.com>
2018-11-07 15:43:15 -05:00
softwarefactory-project-zuul[bot]
03169a96ef Merge pull request #2626 from jakemcdermott/fix-multicred
always recompile multicredential lists

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 20:27:20 +00:00
Marliana Lara
ebc3dbe7b6 Add source WF label to job details template
* Change label from "Parent WF" to "Source WF"
* Fix WF job result Firefox responsive style bugs
2018-11-07 13:22:41 -05:00
AlanCoding
1bed5d4af2 avoid nested on_commit use 2018-11-07 13:22:41 -05:00
AlanCoding
0783d86c6c adjust recursion error text 2018-11-07 13:22:41 -05:00
Marliana Lara
a3d5705cea Fix bug with workflow maker templates pagination and smart search 2018-11-07 13:22:40 -05:00
Marliana Lara
2ae8583a86 Fix Firefox labels bug 2018-11-07 13:22:40 -05:00
Marliana Lara
edda4bb265 Address PR review 2018-11-07 13:22:40 -05:00
AlanCoding
d068481aec link workflow job node based on job type, not UJT type 2018-11-07 13:22:40 -05:00
Marliana Lara
e20d8c8e81 Show workflow badge
Add Workflow tags

Hookup workflow details link

Add parent workflow and job explanation fields

Add workflow key icon to WF maker and WF results

Hookup wf prompting

Add wf key dropdown and hide wf info badge
2018-11-07 13:22:39 -05:00
Marliana Lara
f6cc351f7f Format workflow-chart directive code 2018-11-07 13:22:39 -05:00
Marliana Lara
c2d4887043 Include workflow jobs in workflow maker job templates list 2018-11-07 13:22:39 -05:00
AlanCoding
4428dbf1ff Allow use of role_level filter in UJT list 2018-11-07 13:22:39 -05:00
AlanCoding
e225489f43 workflows-in-workflows add docs and tests 2018-11-07 13:22:39 -05:00
AlanCoding
5169fe3484 safeguard against infinite loop if jobs have cycles 2018-11-07 13:22:38 -05:00
AlanCoding
01d1470544 workflow variables processing, recursion detection 2018-11-07 13:22:38 -05:00
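The recursion detection amounts to walking the chain of parent workflow jobs with a visited set; a sketch under assumed attribute names:

```python
# Sketch of recursion detection when workflows may contain workflows;
# attribute names are illustrative, not AWX's actual model fields.
def assert_no_workflow_cycle(workflow_job):
    seen = set()
    node = workflow_job
    while node is not None:
        if node.unified_job_template_id in seen:
            raise RuntimeError('workflow recursion detected')
        seen.add(node.unified_job_template_id)
        node = node.parent_workflow_job  # None at the top of the chain
    return seen
```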
AlanCoding
faa6ee47c5 allow use of workflows in workflows 2018-11-07 13:22:38 -05:00
softwarefactory-project-zuul[bot]
aa1d71148c Merge pull request #2583 from AlanCoding/superusers_manage
Do not block superusers with MANAGE_ORGANIZATION_AUTH setting

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 17:38:04 +00:00
Daniel Sami
e86ded6c68 removed namespace from user fixture 2018-11-07 09:02:00 -05:00
Daniel Sami
365bf4eb53 UI tests for org permission views 2018-11-07 09:01:23 -05:00
Jake McDermott
ceb9bfe486 fix adhoc command cred lookup when many credential types exist 2018-11-06 22:47:07 -05:00
Jake McDermott
0f85c867a0 fix project cred lookups when many credential types exist 2018-11-06 22:46:50 -05:00
Jake McDermott
f8a8186bd1 always recompile multicred lists 2018-11-06 20:58:08 -05:00
softwarefactory-project-zuul[bot]
33dfb6bf76 Merge pull request #2623 from ryanpetrello/this-is-the-end
properly validate cert data that happens to contain an END substring

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 21:14:26 +00:00
Ryan Petrello
28cd762dd7 properly validate cert data that happens to contain an END substring 2018-11-06 15:57:35 -05:00
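The failure mode is a naive check for an `END` substring, which valid key material can legitimately contain; a hedged sketch of matching complete PEM marker lines instead:

```python
import re

# Sketch: anchor validation to complete PEM marker lines instead of testing
# for a bare 'END' substring, which valid key material can contain.
PEM_BLOCK = re.compile(
    r'-----BEGIN (?P<type>[A-Z ]+?)-----'
    r'.+?'
    r'-----END (?P=type)-----',
    re.DOTALL,
)


def looks_like_pem(data):
    return bool(PEM_BLOCK.search(data))
```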
softwarefactory-project-zuul[bot]
217cca47f5 Merge pull request #2620 from ansible/chrismeyersfsu-patch-1
Update saml.md

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 20:16:20 +00:00
softwarefactory-project-zuul[bot]
8faa5d8b7a Merge pull request #2591 from AlanCoding/no_more_ask
Implement deprecation of duplicated ask_ fields in job view

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 20:13:30 +00:00
softwarefactory-project-zuul[bot]
2c6711e183 Merge pull request #2618 from ryanpetrello/json-activity-stream-changes
send activity stream changes as raw JSON, not a JSON-ified string

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 19:45:19 +00:00
Chris Meyers
45328b6e6d Update saml.md 2018-11-06 14:45:10 -05:00
Ryan Petrello
1523feee91 send activity stream changes as raw JSON, not a JSON-ified string
see: https://github.com/ansible/awx/issues/2005
2018-11-06 14:28:57 -05:00
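The difference is double encoding; a quick illustration:

```python
import json

changes = {'name': ['old', 'new']}

# JSON-ified string: 'changes' arrives as an escaped string that consumers
# must decode a second time.
as_string = json.dumps({'changes': json.dumps(changes)})

# Raw JSON: 'changes' arrives as a real nested object.
as_json = json.dumps({'changes': changes})
```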
softwarefactory-project-zuul[bot]
856dc3645e Merge pull request #2605 from jakemcdermott/fix-2601
remove admin and member roles from organization->team permissions 

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 17:08:43 +00:00
Jake McDermott
5e4dd54112 remove admin and member roles from organization->team role assignment options 2018-11-06 11:52:19 -05:00
softwarefactory-project-zuul[bot]
5860689619 Merge pull request #2596 from jlmitch5/fixPermIssue
fix permission issue with regular user jt admins

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 16:40:58 +00:00
John Mitchell
da7834476b remove inadvertent scope variable that was added 2018-11-06 10:52:16 -05:00
John Mitchell
d5ba981515 remove inadvertent log statement 2018-11-06 10:50:15 -05:00
softwarefactory-project-zuul[bot]
afe07bd874 Merge pull request #2595 from ryanpetrello/fix-gce-json-creds
move from GCE_PEM_FILE_PATH to GCE_CREDENTIALS_FILE_PATH

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 15:09:11 +00:00
Rick Stokkingreef
f916bd7994 Fixed bug where all group vars were not included 2018-11-06 14:26:36 +01:00
softwarefactory-project-zuul[bot]
3fef7acaa8 Merge pull request #2457 from jakemcdermott/output-line-fixup
dont render blank output lines for some event types + adjust line wrapping

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 01:54:38 +00:00
Jake McDermott
95190c5509 remove unused constants 2018-11-05 20:13:00 -05:00
Jake McDermott
76e887f46d highlight entire row on hover 2018-11-05 20:12:52 -05:00
Jake McDermott
0c2b1b7747 don't compile html in real time 2018-11-05 20:12:44 -05:00
Jake McDermott
4c74c8c40c delete contents of slide array before reassigning 2018-11-05 20:12:37 -05:00
Jake McDermott
3a929919a3 enable expanded details for dynamic host events 2018-11-05 20:12:29 -05:00
Jake McDermott
c25af96c56 don't render events if stdout is zero-length string 2018-11-05 20:12:21 -05:00
Jake McDermott
f28f1e434d adjust output line wrapping 2018-11-05 20:12:12 -05:00
Jake McDermott
b3c5df193a don't render playbook_on_notify or runner_on_ok events if they have no stdout 2018-11-05 20:11:53 -05:00
John Mitchell
8645602b0a fix permission issue where regular users assigned jt admin could not add user jt roles they couldn't edit 2018-11-05 16:45:35 -05:00
Ryan Petrello
05156a5991 move from GCE_PEM_FILE_PATH to GCE_CREDENTIALS_FILE_PATH 2018-11-05 15:44:31 -05:00
softwarefactory-project-zuul[bot]
049d642df8 Merge pull request #1894 from AlanCoding/rm_sdonu
Remove unused project field

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-05 16:14:56 +00:00
AlanCoding
951ebf146a remove unused project field 2018-11-05 10:40:53 -05:00
AlanCoding
7a67e0f3d6 Implement deprecation of duplicated ask_ fields in job view 2018-11-05 10:38:55 -05:00
softwarefactory-project-zuul[bot]
37def8cf7c Merge pull request #2587 from jakemcdermott/fix-2563
remove admin and member roles from team->organizations role assignment options

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-05 15:13:18 +00:00
Jake McDermott
e4c28fed03 remove admin and member roles from team->organizations role assignment options 2018-11-04 22:59:03 -05:00
softwarefactory-project-zuul[bot]
f8b7259d7f Merge pull request #2543 from westfood/fix-helm-postgresql
Using new Helm parameters for PostgreSQL access.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-02 18:56:15 +00:00
AlanCoding
6ae1e156c8 do not block superusers with MANAGE_ORGANIZATION_AUTH setting 2018-11-02 14:13:05 -04:00
softwarefactory-project-zuul[bot]
9a055dbf78 Merge pull request #2550 from matburt/runner_on_start
Add support for runner_on_start

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-02 15:10:07 +00:00
Matthew Jones
80ac44565a Make sure we reference the actual hostname 2018-11-02 10:37:58 -04:00
softwarefactory-project-zuul[bot]
a338199198 Merge pull request #2577 from AlanCoding/fix_grandparents
Fix bug where grandparent groups were excluded

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-02 14:36:48 +00:00
AlanCoding
47fc0a759f fix bug where grandparent groups were excluded 2018-11-02 10:10:38 -04:00
softwarefactory-project-zuul[bot]
5eb4b35508 Merge pull request #2547 from AlanCoding/decorator
Get rid of decorator dependency

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 20:39:30 +00:00
softwarefactory-project-zuul[bot]
d93eedaedb Merge pull request #2571 from ryanpetrello/devel
Merge remote-tracking branch 'release_3.3.1' into devel

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 16:48:46 +00:00
Ryan Petrello
a748a272fb Merge remote-tracking branch 'tower/release_3.3.1' into devel 2018-11-01 12:07:02 -04:00
softwarefactory-project-zuul[bot]
a28f8c43cb Merge pull request #2569 from shanemcd/devel
Bump version to 2.1.0

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 15:53:59 +00:00
Shane McDonald
fbec6a60bf Bump version to 2.1.0 2018-11-01 11:37:28 -04:00
softwarefactory-project-zuul[bot]
6a4f3c8758 Merge pull request #2566 from shanemcd/devel
Bump version to 2.0.2

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 14:12:00 +00:00
Shane McDonald
04625f566b Bump version to 2.0.2 2018-11-01 09:50:02 -04:00
softwarefactory-project-zuul[bot]
895a567ed1 Merge pull request #2558 from ansible/org-view-ui
updated fixtures to use proper organization linking

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-31 18:13:40 +00:00
Daniel Sami
e152b30fc1 linting fixes 2018-10-31 13:55:50 -04:00
softwarefactory-project-zuul[bot]
a1fe60da78 Merge pull request #2174 from matburt/jobtemplate_sharding
Implement Job Template Sharding/Splitting/Slicing

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-31 15:49:51 +00:00
AlanCoding
d8d710a83d get rid of decorator dependency 2018-10-31 11:37:10 -04:00
softwarefactory-project-zuul[bot]
92f0893764 Merge pull request #2555 from ryanpetrello/old-access-cleanup
remove an old, unused migration file

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-31 15:33:37 +00:00
Ryan Petrello
479448ff09 remove an old, unused migration file 2018-10-31 11:15:09 -04:00
Jake McDermott
62a36e3704 update job slice count help text 2018-10-31 11:04:14 -04:00
kialam
2d286c5f68 Redirect to WF Details page after prompt for slice JT. 2018-10-31 11:04:14 -04:00
AlanCoding
f435e577b2 Adjust slicing tooltip text 2018-10-31 11:04:14 -04:00
AlanCoding
236b332a8b bump migration number 2018-10-31 11:04:13 -04:00
kialam
a7028df828 Fix one failing unit test. 2018-10-31 11:04:13 -04:00
kialam
a59017ceef Fix eslint errors. 2018-10-31 11:04:13 -04:00
AlanCoding
affacb8ab5 revert change of including slice wfj ids in recent_jobs list 2018-10-31 11:04:13 -04:00
AlanCoding
37f9024940 fix slicing task_impact and script gen bugs 2018-10-31 11:04:13 -04:00
kialam
f72fca5fcf Fix unit tests after "slice" rename.
- Update Jobs List unit tests with new schema and test cases.
- Update Job Details unit tests with new schema and test cases.
- Test both for expected behavior when handling a regular non-sliced job.
2018-10-31 11:04:13 -04:00
kialam
21aeda0f45 Add unit tests for Job Details
- Test `getSplitJobDetails` method.
- Fix failing tests.
- Rename unit tests.
2018-10-31 11:04:12 -04:00
kialam
65a0e5ed45 Fix failing tests. 2018-10-31 11:04:12 -04:00
kialam
571e34bf79 Begin adding unit tests for split jobs
- Test split job tag method within Jobs List Controller.
2018-10-31 11:04:12 -04:00
AlanCoding
6dc58af8e1 slicing rename test cleanup and bugfix 2018-10-31 11:04:12 -04:00
AlanCoding
bbd3edba47 rename to slicing and schema tweaks 2018-10-31 11:04:12 -04:00
Matthew Jones
46d6dce738 Mass rename of shard -> split 2018-10-31 11:04:12 -04:00
AlanCoding
475a701f78 Allow use of credential password prompting with split JTs
also
*update test to work with new JT callback call pattern
*fix spelling in template
2018-10-31 11:04:11 -04:00
AlanCoding
dccd7f2e9d do not split JT callback jobs 2018-10-31 11:04:11 -04:00
kialam
47711bc007 add package-lock.json to gitignore 2018-10-31 11:04:11 -04:00
kialam
04eec61387 Redirect to WF details page when a Split Job is launched 2018-10-31 11:04:11 -04:00
kialam
ef4a2cbebb Add Job Splitting feature to UI 2018-10-31 11:04:11 -04:00
AlanCoding
c8d76dbe78 update migration after rebase 2018-10-31 11:04:11 -04:00
Matthew Jones
61a706274b Adding architecture doc for job sharding 2018-10-31 11:04:10 -04:00
AlanCoding
20226f8984 Polish split jobs API info & add fields to UI
*clarify help text and squash migrations
*adds new internal_limit field to Job model for faster reference
*if field is non-blank, populate shard params in summary_fields
*add summary information to UI job/wfj details, JT selector
2018-10-31 11:04:10 -04:00
AlanCoding
7ff04dafd3 Fix IntegrityError deleting job splitting JT
misc:
*show sharded jobs in recent_jobs
*test updates
2018-10-31 11:04:10 -04:00
AlanCoding
f9bdb1da15 Job splitting access logic and more feature development
*allow sharding with prompts and schedules
*modify create_unified_job contract to pass class & parent_field name
*make parent field name instance method & set sharded UJT field
*access methods made compatible with job sharding
*move shard job special logic from task manager to workflows
*save sharded job prompts to workflow job exclusively
*allow using sharded jobs in workflows
2018-10-31 11:04:10 -04:00
AlanCoding
dab678c5cc Implement splitting logic in inventory & job task code 2018-10-31 11:04:10 -04:00
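One way to picture job slicing is a round-robin partition of the inventory's hosts across the sliced jobs; a sketch of the general idea, not necessarily AWX's exact distribution:

```python
# Round-robin partition of hosts across `slice_count` sliced jobs; a sketch
# of the general idea, not necessarily AWX's exact distribution.
def slice_hosts(hosts, slice_count):
    return [hosts[i::slice_count] for i in range(slice_count)]


# slice_hosts(['h1', 'h2', 'h3', 'h4', 'h5'], 2)
# -> [['h1', 'h3', 'h5'], ['h2', 'h4']]
```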
Matthew Jones
44ffcf86de Properly take prompted inventory into account
This also will rename shard jobs to add an index to the job name
2018-10-31 11:04:10 -04:00
Matthew Jones
8a18984be1 Spawn concrete workflow jobs from a job template launch 2018-10-31 11:04:09 -04:00
Matthew Jones
0b1776098b Implement model/view/launch paradigm for shard/split job templates 2018-10-31 11:04:09 -04:00
Daniel Sami
5da13683ce updated fixtures to use proper organization linking 2018-10-31 11:03:44 -04:00
Ryan Petrello
89c2038ea3 Merge pull request #2557 from ryanpetrello/fix-busted-docker-compose
pin docker-compose to a working version
2018-10-31 11:01:50 -04:00
Ryan Petrello
a1012b365c pin docker-compose to a working version 2018-10-31 10:47:45 -04:00
Matthew Jones
673068464a Add support for runner_on_start
This will be available in ansible 2.8
2018-10-30 13:23:55 -04:00
softwarefactory-project-zuul[bot]
484ef1b6a8 Merge pull request #2548 from wenottingham/mark-it-zero
Re-add markdown, which is used for rendering API help.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-30 16:47:46 +00:00
Bill Nottingham
7fc269b65a Re-add markdown, which is used for rendering API help. 2018-10-30 12:10:00 -04:00
westfood
694e494484 Using new Helm parameters for PostgreSQL access. 2018-10-28 11:55:36 +01:00
softwarefactory-project-zuul[bot]
ddda6b3d21 Merge pull request #2542 from jakemcdermott/fix-smoke
fix smoke test

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-28 00:32:26 +00:00
Jake McDermott
80adbf9c03 fix smoke test 2018-10-27 02:09:18 -04:00
softwarefactory-project-zuul[bot]
264f35d259 Merge pull request #2239 from AlanCoding/multi_pass_cancel
Do 2-pass cancel for workflow jobs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-26 14:43:14 +00:00
AlanCoding
e513f8fe31 do 2-pass cancel for workflow jobs 2018-10-26 10:28:30 -04:00
softwarefactory-project-zuul[bot]
b9f35e5b50 Merge pull request #2536 from ryanpetrello/deprecated_auth_token_middleware
remove DeprecatedAuthTokenMiddleware

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-26 14:27:29 +00:00
softwarefactory-project-zuul[bot]
002f463ffd Merge pull request #2274 from AlanCoding/callback_debugging
Reduce default verbosity of dev-specific callback logging

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-26 14:23:24 +00:00
Ryan Petrello
28512e042b remove DeprecatedAuthTokenMiddleware 2018-10-26 10:11:53 -04:00
AlanCoding
482395eb6a reduce default verbosity of devel-specific callback logging 2018-10-26 10:03:46 -04:00
softwarefactory-project-zuul[bot]
e1d44d6d14 Merge pull request #2529 from AlanCoding/split_personality
Apply docker-compose fix to cluster target too

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-26 13:52:32 +00:00
AlanCoding
19030b9d5f apply docker-compose fix to cluster target too 2018-10-26 09:36:11 -04:00
softwarefactory-project-zuul[bot]
3e4738d948 Merge pull request #2430 from dmt/devel
Fix installer volume definitions

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 22:12:25 +00:00
softwarefactory-project-zuul[bot]
94083f55c7 Merge pull request #2510 from Intermax-Cloudsourcing/awx-web-dockerfile-tmp
Empties /tmp in awx_web Dockerfile

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 21:59:42 +00:00
Daniel Temme
6ecd18b2e2 make volume concatenation work
The second list gets interpreted as part of the else block, effectively
dropping it. Separating both list definitions with braces seems to work.

# Conflicts:
#	installer/roles/local_docker/tasks/standalone.yml
2018-10-25 17:54:10 -04:00
Daniel Temme
4e9c705997 Partial revert for "Bugfix for ca_trust_dir"
# Conflicts:
#	installer/roles/local_docker/tasks/standalone.yml

# Conflicts:
#	installer/roles/local_docker/tasks/standalone.yml
2018-10-25 17:53:12 -04:00
softwarefactory-project-zuul[bot]
1803a76a4d Merge pull request #2485 from wwt/fix-tiller-namespace
Pass tiller namespace down to helm task

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 21:40:08 +00:00
softwarefactory-project-zuul[bot]
86ca1875f1 Merge pull request #2486 from wwt/remove-rabbit-cluster-name
Remove .cluster.local from service name for rabbitmq

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 21:37:54 +00:00
wilmardo
bf5c259d92 Empties /tmp in web Dockerfile 2018-10-25 17:12:26 -04:00
softwarefactory-project-zuul[bot]
9bca937fad Merge pull request #2287 from AlanCoding/files_are_in_the_computer
automatically delete project files in entire cluster

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 20:53:20 +00:00
AlanCoding
526ca3ae42 automatically delete project files in entire cluster 2018-10-25 16:36:58 -04:00
softwarefactory-project-zuul[bot]
695c7ade86 Merge pull request #2523 from ivuk/fix-variable-names
Update variable names for local Docker daemon installation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 17:30:44 +00:00
Igor Vuk
c133b35162 Update variable names for local Docker daemon installation
Signed-off-by: Igor Vuk <parcijala@gmail.com>
2018-10-25 12:47:25 -04:00
softwarefactory-project-zuul[bot]
afb3c0e31e Merge pull request #2498 from AlanCoding/relaunch_fix
Fix bug with relaunching with changed JT

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-25 16:40:16 +00:00
AlanCoding
8965f1934e fix bug with relaunching with changed JT 2018-10-25 11:45:47 -04:00
softwarefactory-project-zuul[bot]
556040fb8b Merge pull request #2497 from AlanCoding/fix_two_creds2
Fix server error using 2 creds of same type

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-24 19:14:32 +00:00
AlanCoding
8b3e49cb24 fix server error using 2 creds of same type 2018-10-24 14:59:01 -04:00
softwarefactory-project-zuul[bot]
331e272be0 Merge pull request #2504 from kialam/scheduler-template-fix-datepicker
Restore Date Picker field in Scheduler template.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-24 18:46:16 +00:00
kialam
9e7808f2c9 Restore Date Picker field in Scheduler template. 2018-10-24 14:29:46 -04:00
softwarefactory-project-zuul[bot]
0bb2de24f3 Merge pull request #2513 from AlanCoding/filter_things
Allow UI to filter by type again

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-24 17:57:49 +00:00
AlanCoding
72ce7b194f allow UI to filter by type again 2018-10-24 13:35:04 -04:00
softwarefactory-project-zuul[bot]
85958c51a8 Merge pull request #2517 from dmsimard/preload_data
Let users disable create_preload_data if it isn't necessary

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-24 15:53:57 +00:00
David Moreau Simard
1dd44df471 Let users disable create_preload_data if it isn't necessary
The demo data might not be desirable in a production environment.
2018-10-24 11:36:33 -04:00
softwarefactory-project-zuul[bot]
b132f855a0 Merge pull request #2508 from shanemcd/devel
Fix permissions when running dev container as non-root user

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-24 14:56:33 +00:00
Shane McDonald
a361b5da6e Fix permissions when running dev container as non-root user
I wanted to pass `--user` to `docker-compose up`, but that option doesn't exist. To get around this, I had to record the uid on the host (CURRENT_UID), interpolate the variable in tools/docker-compose.yml, and detect that inside the container. I then piggy-backed on the /etc/passwd hack we use for scenarios with unpredictable uids.
2018-10-24 10:30:04 -04:00
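A minimal sketch of the approach described in that commit message, assuming docker-compose reads `CURRENT_UID` from the process environment and the compose file interpolates `${CURRENT_UID}`:

```python
# Sketch of the CURRENT_UID approach described above; the compose file then
# interpolates "${CURRENT_UID}" (e.g. as the container user). Illustrative,
# not the repo's actual tooling.
import os
import subprocess

env = dict(os.environ, CURRENT_UID=str(os.getuid()))
subprocess.run(['docker-compose', '-f', 'tools/docker-compose.yml', 'up'],
               env=env, check=True)
```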
Shane McDonald
7df63830ed Remove reference to file that doesn't exist anymore 2018-10-24 10:30:03 -04:00
softwarefactory-project-zuul[bot]
b3cf93256b Merge pull request #2520 from ryanpetrello/fix-flake8
fix flake8

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-24 14:28:32 +00:00
Ryan Petrello
c695ba2e10 fix flake8 2018-10-24 10:11:53 -04:00
softwarefactory-project-zuul[bot]
c44160933d Merge pull request #2514 from farcaller/patch-1
Fix a typo

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-23 17:24:39 +00:00
Vladimir Pouzanov
9ae3e1c40f Fix a typo 2018-10-23 18:01:00 +01:00
softwarefactory-project-zuul[bot]
c7c5a9d2f7 Merge pull request #2512 from wenottingham/some-less-assembly-required
Remove some obsolete requirements.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-23 15:24:55 +00:00
Bill Nottingham
a56a231869 Remove some obsolete requirements.
Bump cryptography to latest.
2018-10-23 10:37:36 -04:00
softwarefactory-project-zuul[bot]
5087ca7f62 Merge pull request #2494 from ryanpetrello/drop-old-celery-tables
drop old celery/djcelery tables we no longer need

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-10-22 19:26:46 +00:00
Ryan Petrello
3b7336c570 drop old celery/djcelery tables we no longer need 2018-10-22 09:20:10 -04:00
softwarefactory-project-zuul[bot]
9b413afb2e Merge pull request #2492 from ansible/workflow-visualizer-search
fix to search for exact search matches

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-19 14:49:58 +00:00
Daniel Sami
eec05eac3c Merge branch 'devel' into workflow-visualizer-search 2018-10-19 10:33:51 -04:00
softwarefactory-project-zuul[bot]
41671b5868 Merge pull request #2493 from ryanpetrello/celery-inventory-delete-retry
implement simple retries for wayward inventory deletes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-19 14:30:04 +00:00
Daniel Sami
0bbf1d7014 Merge branch 'devel' into workflow-visualizer-search 2018-10-19 10:26:12 -04:00
Daniel Sami
c5ce62e11d added functionality to validate that search is complete before continuing 2018-10-19 10:23:50 -04:00
Ryan Petrello
9316c9ea3e implement simple retries for wayward inventory deletes 2018-10-19 10:10:52 -04:00
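"Simple retries" can be as small as a bounded loop with a fixed delay; a generic sketch (the attempt count and delay are illustrative):

```python
import time

# Generic bounded-retry sketch; the attempt count and delay are illustrative.
def delete_with_retries(delete, attempts=3, delay=5):
    for attempt in range(1, attempts + 1):
        try:
            return delete()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
```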
Daniel Sami
427b8bdabb lint fix 2018-10-19 10:01:50 -04:00
softwarefactory-project-zuul[bot]
cce470a5f8 Merge pull request #2487 from ryanpetrello/improved-amqp-cancel
fix a bug that breaks job cancellation on single node jobs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-19 13:53:57 +00:00
Daniel Sami
92baea2ee6 fix to search for exact search matches 2018-10-19 09:43:13 -04:00
Ryan Petrello
3be9113d6b fix a bug that breaks job cancel on single node jobs
1.  Install awx w/ a single node.
2.  Start a long-running job.
3.  Forcibly kill the `awx-manage run_dispatcher` process (e.g.,
    SIGKILL) and do not start it again.
4.  The job remains in the running state - without a second cluster to
    discover the job, it is never reaped.
5.  This PR allows you to cancel the job from the UI+API.
2018-10-19 09:10:33 -04:00
softwarefactory-project-zuul[bot]
785c6fe846 Merge pull request #2475 from ryanpetrello/more-celery-hardening
make the dispatcher more fault-tolerant to prolonged database outages

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-19 00:22:56 +00:00
Ryan Petrello
0d29bbfdc6 make the dispatcher more fault-tolerant to prolonged database outages 2018-10-18 20:00:07 -04:00
softwarefactory-project-zuul[bot]
ce8117ef19 Merge pull request #2356 from ansible/updateProjectList
Update project list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 21:43:13 +00:00
John Mitchell
bb921af146 fix badge updating and xss e2e test for projects list updates 2018-10-18 17:23:52 -04:00
John Mitchell
5e0ecc7f43 fix projects list search selectors 2018-10-18 17:23:52 -04:00
John Mitchell
73dc58e810 update project badge selector 2018-10-18 17:23:52 -04:00
John Mitchell
89344c2eee update project list selectors 2018-10-18 17:23:52 -04:00
John Mitchell
d61cd519d7 fix panel title and badge for new projects list 2018-10-18 17:23:51 -04:00
John Mitchell
8057438c67 add back in old-style project list json and relevant factories 2018-10-18 17:23:51 -04:00
John Mitchell
110671532d fix lint error with projects list route 2018-10-18 17:23:51 -04:00
Haokun-Chen
92ac3054c6 refactor projects list, clean up dependencies and old list generators and factory methods 2018-10-18 17:23:49 -04:00
softwarefactory-project-zuul[bot]
c95c2a4580 Merge pull request #2455 from wenottingham/into-the-deep-azure-yonder
Update Azure deps in Ansible venv to match Ansible 2.7 requirements.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 21:05:32 +00:00
Bill Nottingham
2c01476eca Don't explicitly remove certifi. 2018-10-18 16:41:33 -04:00
Bill Nottingham
8adbc8a026 Update Azure requirements to match Ansible 2.7 requirements.
Add comments for Ansible requirements to note where they're used.

Remove our custom docutils fork, as the fix was merged upstream.
2018-10-18 16:41:33 -04:00
softwarefactory-project-zuul[bot]
98b8d7fb69 Merge pull request #2483 from ansible/workflow-visualizer-search
added search for visualizer nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 19:48:10 +00:00
Daniel Sami
477551325d Merge branch 'workflow-visualizer-search' of https://github.com/ansible/awx into workflow-visualizer-search 2018-10-18 15:07:08 -04:00
Daniel Sami
fdedc472d1 lint fix 2018-10-18 15:06:37 -04:00
James Evans
88819ada6b Remove .cluster.local from service name for rabbitmq
FQDNs are not required for service discovery, and having the FQDN in the
name prevents the discovery from working in clusters not named
cluster.local.
2018-10-18 14:00:05 -05:00
Daniel Sami
f3ee93b67f Merge branch 'devel' into workflow-visualizer-search 2018-10-18 14:48:36 -04:00
Daniel Sami
b4549e5581 added search for visualizer nodes 2018-10-18 14:38:10 -04:00
softwarefactory-project-zuul[bot]
3afed6adb7 Merge pull request #2383 from ansible/updateSettingsNav
add additional settings sub navigation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 17:55:26 +00:00
John Mitchell
1bc78fd429 fix selectors for settings sub pane 2018-10-18 13:37:45 -04:00
John Mitchell
6ce1b50751 update config e2e tests to fix syntax and linting issues 2018-10-18 13:37:45 -04:00
John Hill
6c87b88e2c Updating configuration/settings page 2018-10-18 13:37:45 -04:00
John Hill
10f21b8817 Updating e2e tests to match new settings nav 2018-10-18 13:37:45 -04:00
John Mitchell
0d1b25131d fix scope location of json fields of settings auth form 2018-10-18 13:37:45 -04:00
John Mitchell
d2118b8d25 fix activity stream settings links 2018-10-18 13:37:44 -04:00
John Mitchell
b852caaaa3 update configuration controllers to fix syntax warnings 2018-10-18 13:37:44 -04:00
John Mitchell
b0dd10b538 sidenav sub pane feedback
make height the same as side nav items
no tooltip for collapsed settings
2018-10-18 13:37:44 -04:00
John Mitchell
8f4aa5511b update side nav settings pane show hide hover logic 2018-10-18 13:37:44 -04:00
John Mitchell
4b26ac06ba fix open/close on settings nav item hover 2018-10-18 13:37:44 -04:00
John Mitchell
4dc6452dea updating suit name and variabilize colors for sub nav pane 2018-10-18 13:37:43 -04:00
John Mitchell
5a17acb131 working commit 2018-10-18 13:37:43 -04:00
Haokun-Chen
6cfd9dbfe4 refactor configuration (settings)
sub-nav added
2018-10-18 13:37:41 -04:00
softwarefactory-project-zuul[bot]
110c5a8e84 Merge pull request #2431 from Numblesix/devel
Added some Doc for ca_trust_dir

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 16:15:46 +00:00
Yanis Guenane
b185c1e0a2 Merge branch 'devel' into devel 2018-10-18 18:00:16 +02:00
softwarefactory-project-zuul[bot]
f1a4a62304 Merge pull request #2432 from Numblesix/ldap-doc
Added some Doc for FreeIPA

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 14:53:14 +00:00
Yanis Guenane
9f3e3bad54 Merge branch 'devel' into ldap-doc 2018-10-18 16:38:31 +02:00
James Evans
4198227116 Pass tiller namespace down to helm task 2018-10-18 09:34:13 -05:00
softwarefactory-project-zuul[bot]
56525bc34f Merge pull request #2476 from AlanCoding/rm_changelog
Remove changelog

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 14:15:21 +00:00
Yanis Guenane
3f2068e74e Merge branch 'devel' into ldap-doc 2018-10-18 15:53:34 +02:00
AlanCoding
6117f8297e remove changelog 2018-10-18 09:52:08 -04:00
softwarefactory-project-zuul[bot]
8953d06905 Merge pull request #2456 from wenottingham/insert-obvious-unchained-joke-here
Update to latest django subminor to pick up assorted fixes.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-18 13:50:55 +00:00
Numblesix
bf39a2a747 Added some Doc for FreeIPA 2018-10-18 09:31:24 -04:00
Bill Nottingham
f27ec8cd89 Update Django version in version check. 2018-10-18 09:23:59 -04:00
Bill Nottingham
aec3244f52 Update to latest django subminor to pick up assorted fixes. 2018-10-18 09:23:57 -04:00
Wayne Witzel III
c8e208dea7 Merge pull request #3074 from wwitzel3/release_3.3.1
fix typo in length
2018-10-17 17:10:41 -04:00
Wayne Witzel III
f2cec03900 fix typo in length 2018-10-17 16:34:24 -04:00
softwarefactory-project-zuul[bot]
07aaad53aa Merge pull request #2037 from ikke-t/ikke-t-selinux-fix
fixes selinux permissions for awx data.

Reviewed-by: Shane McDonald <me@shanemcd.com>
             https://github.com/shanemcd
2018-10-17 19:05:21 +00:00
Ilkka Tengvall
42a0192425 Merge branch 'devel' into ikke-t-selinux-fix 2018-10-17 21:44:48 +03:00
softwarefactory-project-zuul[bot]
ac08033d3e Merge pull request #2472 from ryanpetrello/callback-receiver-log
use the proper logger for the callback receiver

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-17 15:35:32 +00:00
Numblesix
6d0fed6d9a Added some Doc for ca_trust_dir 2018-10-17 11:32:26 -04:00
Ryan Petrello
53ae05094e use the proper logger for the callback receiver 2018-10-17 10:56:29 -04:00
softwarefactory-project-zuul[bot]
78c4d5005e Merge pull request #2461 from ryanpetrello/upgrade-celery-and-kombu
upgrade to the latest kombu + celery

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-17 13:49:37 +00:00
Wayne Witzel III
0b4e0678e9 Merge pull request #3070 from wwitzel3/release_3.3.1
better error handling when over limit
2018-10-17 09:09:01 -04:00
Ryan Petrello
79002ae563 upgrade to the latest kombu + celery 2018-10-16 16:14:58 -04:00
softwarefactory-project-zuul[bot]
6c868c7552 Merge pull request #2449 from ryanpetrello/noisy-check-migrations
silence the noisy error that's printed w/ `awx-manage check_migrations`

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-16 18:04:05 +00:00
Ryan Petrello
6e4f3efc4b silence the noisy error that's printed w/ awx-manage check_migrations 2018-10-16 13:48:03 -04:00
softwarefactory-project-zuul[bot]
ce9da4edb7 Merge pull request #2454 from ryanpetrello/more-celery-cleanup
allow users to specify BROKER_URL with passwords that contain : and @

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-16 16:30:33 +00:00
Ryan Petrello
6ff1fe8548 allow users to specify BROKER_URL with passwords that contain : and @ 2018-10-16 11:56:57 -04:00
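The underlying problem is that `:` and `@` are URL delimiters, so an unescaped password corrupts parsing of `amqp://user:password@host`; percent-encoding the credentials sidesteps this. A sketch:

```python
from urllib.parse import quote

# Percent-encode credentials so ':' and '@' in a password cannot be mistaken
# for URL delimiters; the URL shape is illustrative.
def build_broker_url(user, password, host, vhost=''):
    return 'amqp://%s:%s@%s/%s' % (
        quote(user, safe=''), quote(password, safe=''), host, quote(vhost, safe=''))


# build_broker_url('guest', 'p:a@ss', 'localhost')
# -> 'amqp://guest:p%3Aa%40ss@localhost/'
```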
softwarefactory-project-zuul[bot]
140b85688f Merge pull request #2451 from matburt/fixup_test_userlaunch
Force openshift user behavior for uids over 2500

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-16 15:47:20 +00:00
Matthew Jones
0477581dea Fix up flake8 2018-10-16 11:30:07 -04:00
Matthew Jones
d5c557c639 Proper parameterization for scm tests 2018-10-16 11:30:06 -04:00
Matthew Jones
8e60cb1270 Purge an unneeded ansible 2.4 version check 2018-10-16 11:30:05 -04:00
chris meyers
906eb98d8e fixes dispatcher test that inadvertently accesses db
* Logger inadvertently triggered by dispatcher tests that do not need DB
access. Mock settings to sidestep DB access.
2018-10-16 11:30:04 -04:00
Matthew Jones
119b9475ea Force openshift user behavior for uids over 2500 2018-10-16 11:30:04 -04:00
softwarefactory-project-zuul[bot]
12c8994faf Merge pull request #2450 from ryanpetrello/iso-deprovision-fix
don't call rabbitmqctl forget_cluster_node for isolated instances

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-16 14:10:35 +00:00
Ryan Petrello
f3e73bbed8 don't call rabbitmqctl forget_cluster_node for isolated instances 2018-10-16 09:47:53 -04:00
softwarefactory-project-zuul[bot]
e2a1b7902c Merge pull request #2439 from jmferrer/change_openshift_vars_path
Change openshift vars path.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-16 12:17:51 +00:00
jmferrer
d65a3fa037 Restore per-deployment requirements. 2018-10-16 09:59:11 +02:00
jmferrer
f6600887bc Merge branch 'devel' of https://github.com/jmferrer/awx into change_openshift_vars_path 2018-10-16 09:55:05 +02:00
Michael Abashian
96c18fa311 Merge pull request #2141 from mabashian/remove-system-tracking
Removes system tracking code from the UI
2018-10-15 18:55:34 -06:00
mabashian
9645e5bcd3 Remove accidental portalMode inclusion that resulted from merge conflict 2018-10-15 18:50:58 -04:00
mabashian
0a09d98fe8 Removes system tracking code from the UI. Moves import of shared out to app.js 2018-10-15 18:50:58 -04:00
Ryan Petrello
1224e2c889 Merge pull request #2440 from ryanpetrello/fix-list-based-survey-choices
remove over-eager survey choices validation
2018-10-15 17:05:06 -04:00
softwarefactory-project-zuul[bot]
c8e6fa3bb3 Merge pull request #2438 from ryanpetrello/dispatcher-quit-race
don't attempt to recover special QUIT messages in the worker pool recovery code

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-15 20:40:20 +00:00
Ryan Petrello
00cae104b3 remove over-eager survey choices validation
it looks like choices can also be a list and _maybe_ comma delimited;
clearly there's a lot of history here; let's verify and test what's _really_ supported and _then_ add any necessary validation
2018-10-15 16:40:17 -04:00
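A lenient normalization consistent with that observation might look like the following sketch (which delimiters are really supported is exactly what the commit says still needs verifying):

```python
# Sketch: accept either a list of choices or a newline/comma delimited
# string. Which delimiters are actually supported is exactly what the
# commit says still needs verifying.
def normalize_choices(choices):
    if isinstance(choices, list):
        return [str(c).strip() for c in choices]
    if isinstance(choices, str):
        parts = choices.splitlines() if '\n' in choices else choices.split(',')
        return [p.strip() for p in parts if p.strip()]
    raise TypeError('choices must be a list or a string')
```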
Wayne Witzel III
6e3b2a5c2d better error handling when over limit 2018-10-15 16:07:14 -04:00
Ryan Petrello
3d378077d9 Merge pull request #3066 from saito-hideki/issue/3064
[3.3.1] Add files and output of commands to gather with sosreport
2018-10-15 12:28:08 -04:00
jmferrer
f27a34cd1c Change openshift vars path. 2018-10-15 18:27:49 +02:00
Ryan Petrello
720a634702 don't attempt to recover special QUIT messages in the worker pool
when `--reload` is sent to the dispatcher, it sends a special QUIT
message to each worker in the pool so that it will exit gracefully at
the next opportunity

when a worker process exits unexpectedly, the dispatcher attempts to
recover its queued messages and sends them to another worker in the
pool; in this scenario, we should _never_ re-enqueue these special
QUIT messages (because the process doesn't need to quit, it's already
gone)

To reproduce this race condition:

1.  Launch an adhoc that does `sleep 60`
2.  Run `awx-manage run_dispatcher --reload` to enqueue a `QUIT` message
    into the worker's queue
3.  Find the pid of the worker running the `sleep 60` and `SIGKILL` it.
4.  Observe that dispatcher attempts to requeue the `QUIT` message and
    logs a confusing error.
2018-10-15 12:17:52 -04:00
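The fix boils down to filtering the sentinel out during recovery; a sketch of that shape, with illustrative names rather than the dispatcher's actual internals:

```python
# Sketch of worker-pool recovery that skips the special QUIT sentinel;
# names are illustrative, not the dispatcher's actual internals.
QUIT = 'QUIT'


def recover_messages(dead_worker_queue, surviving_worker):
    for message in dead_worker_queue:
        if message == QUIT:
            # The dead process is already gone; re-enqueueing QUIT would
            # wrongly tell a healthy worker to exit.
            continue
        surviving_worker.enqueue(message)
```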
Chris Meyers
c722e50595 Merge pull request #2425 from chrismeyersfsu/fix-ldap_group_type
fix issue with ldap queries containing unicode
2018-10-15 10:49:43 -05:00
Ryan Petrello
716d440a76 Merge pull request #3068 from ryanpetrello/fix-3039
fix a typo on the JT add page that breaks the custom venv field
2018-10-15 11:40:33 -04:00
Ryan Petrello
9d81727d16 fix a typo on the JT add page that breaks the custom venv field 2018-10-15 11:19:58 -04:00
softwarefactory-project-zuul[bot]
1cecfd9771 Merge pull request #2437 from ryanpetrello/fix-3039
fix a typo on the JT add page that breaks the custom venv field

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-15 15:19:08 +00:00
Ryan Petrello
011c8ae822 fix a typo on the JT add page that breaks the custom venv field 2018-10-15 11:04:31 -04:00
Hideki Saito
d5626a4f3e [3.3.1] Add files and output of commands to gather with sosreport
- Fixed issue #3064
2018-10-15 11:40:51 +09:00
softwarefactory-project-zuul[bot]
73f54b2237 Merge pull request #2373 from marshmalien/always_nodes_ui
Display WF always nodes in conjunction with success and failure

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-12 22:43:32 +00:00
Ryan Petrello
6073e8e3b6 Merge pull request #3062 from ryanpetrello/fix-3043
minor nit for https://github.com/ansible/tower/pull/3060
2018-10-12 16:56:23 -04:00
Ryan Petrello
867ff5da71 minor nit for https://github.com/ansible/tower/pull/3060 2018-10-12 16:17:14 -04:00
Ryan Petrello
c8b2ca7fed Merge pull request #3060 from ryanpetrello/fix-3043
properly handle AnsibleVaultEncryptedUnicode objects in the callback
2018-10-12 15:42:11 -04:00
softwarefactory-project-zuul[bot]
0a964b2bf6 Merge pull request #2266 from ansible/celery-tastes-bad
replace the celery-based task queue with a kombu-based implementation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-12 18:40:54 +00:00
Ryan Petrello
d4e3127fb4 properly handle AnsibleVaultEncryptedUnicode objects in the callback 2018-10-12 12:29:46 -04:00
softwarefactory-project-zuul[bot]
fa18b94725 Merge pull request #2429 from ryanpetrello/more-shippable-cleanup
more shippable -> zuul cleanup

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-12 16:10:53 +00:00
softwarefactory-project-zuul[bot]
5ab6255c67 Merge pull request #2424 from ryanpetrello/oauth-toolkit-upgrade
update to the latest stable 1.1 django-oauth-toolkit

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-12 15:54:36 +00:00
Ryan Petrello
ac80bc874a more shippable -> zuul cleanup 2018-10-12 11:50:29 -04:00
chris meyers
2e98446394 fix issue with ldap queries containing unicode 2018-10-12 10:33:01 -04:00
softwarefactory-project-zuul[bot]
c4afbbc2ca Merge pull request #2420 from dmt/devel
fix indentation for register variable

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-12 14:27:19 +00:00
Ryan Petrello
517043e209 update to the latest stable 1.1 django-oauth-toolkit
see: https://github.com/jazzband/django-oauth-toolkit/pull/629
2018-10-12 10:21:57 -04:00
Marliana Lara
e7c52bc5e7 Merge pull request #8 from dsesami/always_nodes_ui_tests
Always nodes ui tests
2018-10-12 10:20:10 -04:00
Daniel Sami
c25d208465 added browser close at end, waits for spinners 2018-10-12 10:18:49 -04:00
Chris Meyers
503a47c509 Merge pull request #3054 from chrismeyersfsu/fix-ldap_posix_group_type
fix issue with ldap queries containing unicode
2018-10-12 09:14:09 -05:00
Daniel Temme
921231fe3d fix indentation for register variable 2018-10-12 11:13:42 +02:00
softwarefactory-project-zuul[bot]
6721ea54e9 Merge pull request #1956 from droopy4096/devel
allow nginx config extension

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 22:38:36 +00:00
softwarefactory-project-zuul[bot]
99a42e91fe Merge pull request #2235 from ChrisRo89/devel
Extracted more variables which are related to rabbitmq/postgresql from tasks to defaults

Reviewed-by: Shane McDonald <me@shanemcd.com>
             https://github.com/shanemcd
2018-10-11 21:54:38 +00:00
softwarefactory-project-zuul[bot]
9a580ba644 Merge pull request #2416 from fantashley/fix-openshift-auth
Fix openshift auth broken by undefined vars

Reviewed-by: Ashley Nelson <fantashley@gmail.com>
             https://github.com/fantashley
2018-10-11 21:51:20 +00:00
softwarefactory-project-zuul[bot]
74fcdabc22 Merge pull request #2156 from Decstasy/patch-1
Bugfix for ca_trust_dir

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 21:31:45 +00:00
Ashley Nelson
9bec7cf3b0 Fix openshift auth broken by undefined vars
Signed-off-by: Ashley Nelson <fantashley@gmail.com>
2018-10-11 16:25:55 -05:00
softwarefactory-project-zuul[bot]
f9e402658b Merge pull request #2414 from ryanpetrello/readme-updates
some minor README updates

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:53:19 +00:00
softwarefactory-project-zuul[bot]
9570981c7f Merge pull request #2351 from jakemcdermott/enhancement-2515
add views for organization permissions and roles

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:41:08 +00:00
softwarefactory-project-zuul[bot]
f79debac42 Merge pull request #2164 from atgreen/devel
Fix token based openshift logins during installation - fixes #489

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:36:39 +00:00
softwarefactory-project-zuul[bot]
a9f3eeef05 Merge pull request #2131 from walkafwalka/docker_install_awx_hostnames
Add inventory vars to set docker install hostnames

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:29:32 +00:00
softwarefactory-project-zuul[bot]
6eb1feffcd Merge pull request #2117 from walkafwalka/allow_awx_login_autocomplete
Allow autocomplete on the AWX login page

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:23:37 +00:00
softwarefactory-project-zuul[bot]
6f55cde6d3 Merge pull request #2091 from stoned/force_boolean_eval
force boolean evaluation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:17:48 +00:00
softwarefactory-project-zuul[bot]
48511b6c33 Merge pull request #2281 from AlanCoding/consistent2
Always allow resource creation via global list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:12:39 +00:00
softwarefactory-project-zuul[bot]
771daefcfd Merge pull request #2411 from fantashley/statefulset_servicename
Add serviceName to Kubernetes StatefulSet spec

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 20:06:50 +00:00
Ryan Petrello
1167361128 some minor README updates 2018-10-11 16:05:29 -04:00
softwarefactory-project-zuul[bot]
3a4cc0d464 Merge pull request #1911 from AlanCoding/spec_it_out
Much more comprehensive validation of survey specs

Reviewed-by: Shane McDonald <me@shanemcd.com>
             https://github.com/shanemcd
2018-10-11 20:00:43 +00:00
Jake McDermott
78901ab48e add organization permissions view 2018-10-11 14:21:44 -04:00
Jake McDermott
938bf1b531 add organizations tab to team permissions screen 2018-10-11 14:21:29 -04:00
Marliana Lara
27da141889 Address review comments 2018-10-11 13:13:01 -04:00
Ashley Nelson
2bf2412759 Add serviceName to Kubernetes StatefulSet spec
Signed-off-by: Ashley Nelson <fantashley@gmail.com>
2018-10-11 11:49:08 -05:00
Daniel Sami
1e3c229460 lint fixes 2018-10-11 12:24:55 -04:00
AlanCoding
cfa93b52b7 Always allow resource creation via global list 2018-10-11 12:21:45 -04:00
Christian.Rohr
96ad2b2b28 Extracted more variables which are related to rabbitmq 2018-10-11 12:16:01 -04:00
softwarefactory-project-zuul[bot]
f53a1fedf6 Merge pull request #2410 from ryanpetrello/update-azure-inv-script
update Azure inventory script to latest from Ansible

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 16:15:15 +00:00
Daniel Sami
8fceaf8810 Tests for UI workflow always nodes 2018-10-11 12:14:00 -04:00
Anthony Green
c39370dbd0 Fix token based openshift logins 2018-10-11 12:10:41 -04:00
AlanCoding
bdc7efb274 humble beginnings of survey question type validation 2018-10-11 12:10:40 -04:00
Ryan Petrello
10c76e2337 update Azure inventory script to latest from Ansible
rebased version of https://github.com/ansible/awx/pull/2234
2018-10-11 11:47:55 -04:00
Ryan Petrello
ff1e8cc356 replace celery task decorators with a kombu-based publisher
this commit implements the bulk of `awx-manage run_dispatcher`, a new
command that binds to RabbitMQ via kombu and balances messages across
a pool of workers that are similar to celeryd workers in spirit.
Specifically, this includes:

- a new decorator, `awx.main.dispatch.task`, which can be used to
  decorate functions or classes so that they can be designated as
  "Tasks"
- support for fanout/broadcast tasks (at this point in time, only
  `conf.Setting` memcached flushes use this functionality)
- support for job reaping
- support for success/failure hooks for job runs (i.e.,
  `handle_work_success` and `handle_work_error`)
- support for an auto-scaling worker pool that scales processes up and down
  on demand
- minimal support for RPC, such as status checks and pool recycle/reload
2018-10-11 10:53:30 -04:00
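As a rough sketch of the decorator-plus-publisher pattern described above (the connection URL, queue names, and payload shape are assumptions, not `awx.main.dispatch` internals):

```python
# Rough sketch of a kombu-backed task decorator; the connection URL, queue
# names, and payload shape are assumptions, not awx.main.dispatch internals.
from kombu import Connection, Exchange, Queue

exchange = Exchange('tasks', type='direct')
queue = Queue('tasks', exchange, routing_key='tasks')


def task(fn):
    def apply_async(args=None, kwargs=None):
        payload = {
            'task': '%s.%s' % (fn.__module__, fn.__name__),
            'args': args or [],
            'kwargs': kwargs or {},
        }
        with Connection('amqp://localhost//') as conn:
            producer = conn.Producer(serializer='json')
            producer.publish(payload, exchange=exchange,
                             routing_key='tasks', declare=[queue])
    fn.apply_async = apply_async
    return fn
```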
Ryan Petrello
da74f1d01f refactor and test the callback receiver as a base for a task dispatcher 2018-10-11 10:53:26 -04:00
softwarefactory-project-zuul[bot]
8ad46436df Merge pull request #2125 from wenottingham/the-first-purge
Purge inventory script requirements from the AWX virtual environment.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-10-11 14:14:36 +00:00
Bill Nottingham
be01bed34b Purge inventory script requirements from the AWX virtual environment.
boto is still used by AWX itself.
2018-10-11 09:45:41 -04:00
Ryan Petrello
71577bb00d Merge pull request #3052 from wwitzel3/bump-asgi_amqp
Use latest version of asgi_amqp
2018-10-10 16:07:57 -04:00
chris meyers
cfb58eb145 fix issue with ldap queries containing unicode 2018-10-10 12:32:27 -04:00
Wayne Witzel III
5994c35975 Use latest version of asgi_amqp 2018-10-10 11:33:11 -04:00
Ilkka Tengvall
b4919f9ebd Merge branch 'devel' into ikke-t-selinux-fix 2018-10-10 08:23:46 +03:00
Daniel Sami
b02677a8d0 Initial commit for UI tests for always nodes 2018-10-09 16:32:24 -04:00
Marliana Lara
1b25dd0127 Fix ui-lint error 2018-10-09 14:21:59 -04:00
Marliana Lara
a2f4e36e47 Show all wf options when node is not a root node
* Edge type of root node is always "always"
* If node is not a root node, show all options: always, success, fail
* Remove edge conflict logic
2018-10-09 11:30:53 -04:00
adamscmRH
ad566cc651 tests for always_nodes 2018-10-09 11:30:53 -04:00
adamscmRH
4d9523afa4 lift always node mutex restriction 2018-10-09 11:30:49 -04:00
Michael Abashian
3aa07baf26 Merge pull request #3035 from mabashian/3010-extra-vars
Fixes bug where schedule extra vars were not being displayed in the edit form
2018-09-28 09:24:41 -04:00
Chris Meyers
f1c53fcd85 Merge pull request #3034 from chrismeyersfsu/fix-ldap_params
at migration time, validate ldap group type params
2018-09-28 08:53:42 -04:00
mabashian
6c98e6c3a0 Actually fix extra vars on edit schedule. This commit takes into account survey question answers which need to get pulled out of extra vars and displayed in the prompt. 2018-09-27 16:49:23 -04:00
mabashian
8aec4ed72e Fixes bug where schedule extra vars were not being displayed in the edit form 2018-09-27 16:30:10 -04:00
chris meyers
0a0cdc2e21 at migration time, validate ldap group type params
* Previously, we had logic in the API to ensure that ldap group type
params, when changed, align with ldap group type class init
expectations. However, we did not have this logic in the migrations.
This PR adds the validation check to migrations.
2018-09-27 12:18:39 -04:00
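Checking saved params against a group type class's `__init__` signature can be done with `inspect`; a hedged sketch of the idea:

```python
import inspect

# Sketch: keep only saved params that the group type class's __init__
# actually accepts; illustrative, not the migration's actual code.
def validate_group_type_params(group_type_cls, params):
    sig = inspect.signature(group_type_cls.__init__)
    allowed = set(sig.parameters) - {'self'}
    return {k: v for k, v in params.items() if k in allowed}
```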
Ryan Petrello
f0776d6838 Merge pull request #3026 from ryanpetrello/fix-3004
properly support deprecated `Authorization: Token xyz`
2018-09-24 15:33:56 -04:00
Ryan Petrello
9de63832ce properly support deprecated Authorization: Token xyz 2018-09-24 15:16:09 -04:00
Dmytro Makovey
f8d2a32756 merge and resolve conflict 2018-09-18 11:35:35 -07:00
John Mitchell
70629ef7f3 Merge pull request #2997 from jlmitch5/fixPageSelector
fix filter page size selector
2018-09-13 10:42:50 -04:00
John Mitchell
1d8bb47726 fix filter page size selector 2018-09-12 17:31:10 -04:00
Matthew Jones
5e16c72d30 Merge pull request #2988 from mabashian/2982-wfjt-list-select
Fixes bug in wfjt node form where rows weren't remaining selected after being clicked
2018-09-12 14:00:32 -04:00
Matthew Jones
02f709f8d1 Merge pull request #2995 from jlmitch5/lodashFindUpdate
update syntax of lodash find call
2018-09-12 13:59:59 -04:00
Shane McDonald
90bd27f5a8 Whitespace fix
I’m not actually this pedantic, I just need something to tag.
2018-09-12 13:41:56 -04:00
John Mitchell
593ab90f92 update syntax of lodash find call 2018-09-12 10:54:17 -04:00
mabashian
27c06a7285 Fixes bug in wfjt node form where rows weren't remaining selected after being clicked 2018-09-11 16:34:02 -04:00
Ryan Petrello
b2c755ba76 Merge pull request #2980 from rooftopcellist/amend_changelog_networkui
rm network ui from changelog
2018-09-11 10:03:00 -04:00
Ryan Petrello
c88cab7d31 Merge pull request #2983 from ansible/deprecated_facts
deprecate fact endpoints
2018-09-11 10:02:25 -04:00
chris meyers
f82f4a9993 deprecate fact endpoints and commands 2018-09-07 17:46:33 -04:00
adamscmRH
5a6f1a342f rm network ui from changelog 2018-09-07 15:04:34 -04:00
Dennis U
a294a6f06e Bugfix for ca_trust_dir
Changed syntax, as ca_trust_dir was not correctly mounted in the awx_web container, and added a command to update the CA trust inside the awx_web container after creation.
2018-08-09 14:07:29 +02:00
walkafwalka
d2ab7bd54d Add inventory vars to set docker install hostnames
Signed-off-by: walkafwalka <41709139+walkafwalka@users.noreply.github.com>
2018-08-04 01:49:07 -07:00
walkafwalka
e02e8994ad Allow autocomplete on the AWX login page
Signed-off-by: walkafwalka <41709139+walkafwalka@users.noreply.github.com>
2018-08-01 00:21:38 +00:00
Stoned Elipot
ada2d65547 force boolean evaluation 2018-07-25 19:10:31 +02:00
Ilkka Tengvall
0443bd3099 fixes selinux permissions for awx data.
fixes issues #2036 and #1896
2018-07-02 09:22:36 +03:00
Dmytro Makovey
adaa164a19 allow nginx config extension 2018-06-05 08:16:08 -07:00
635 changed files with 16560 additions and 16716 deletions

.gitignore (vendored): 6 changes

@@ -1,3 +1,8 @@
+# Ignore generated schema
+swagger.json
+schema.json
+reference-schema.json
+
 # Tags
 .tags
 .tags1
@@ -52,6 +57,7 @@ __pycache__
 **/node_modules/**
 /tmp
 **/npm-debug.log*
+**/package-lock.json
 # UI build flag files
 awx/ui/.deps_built

CONTRIBUTING.md

@@ -34,10 +34,11 @@ Have questions about this document or anything not covered here? Come chat with
## Things to know prior to submitting code
- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
## Setting up your development environment
@@ -49,7 +50,7 @@ The AWX development environment workflow and toolchain is based on Docker, and t
Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker themselves maintain a separate repo that tracks more closely to the latest releases.
For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows)
respectively.
For Linux platforms, refer to the following from Docker:
@@ -137,21 +138,21 @@ Run the following to build the AWX UI:
```
### Running the environment
#### Start the containers
Start the development containers by running the following:
```bash
(host)$ make docker-compose
```
-The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django, celery, and the front end build process.
+The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django and the front end build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:
```bash
# List running containers
-$ docker ps
+(host)$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@@ -174,7 +175,7 @@ The first time you start the environment, database migrations need to run in ord
```bash
awx_1 | Operations to perform:
awx_1 | Synchronize unmigrated apps: solo, api, staticfiles, debug_toolbar, messages, channels, django_extensions, ui, rest_framework, polymorphic
-awx_1 | Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
+awx_1 | Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
awx_1 | Synchronizing apps without migrations:
awx_1 | Creating tables...
awx_1 | Running deferred SQL...
@@ -219,7 +220,7 @@ If you want to start and use the development environment, you'll first need to b
```
The above will do all the setup tasks, including running database migrations, so it may take a couple of minutes.
Now you can start each service individually, or start all services in a pre-configured tmux session like so:
```bash
@@ -248,9 +249,9 @@ Before you can log into AWX, you need to create an admin user. With this user yo
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
@@ -276,7 +277,7 @@ in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
### Accessing the AWX web interface
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).
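If you prefer a terminal, you can exercise the API with `curl`; a minimal sketch, assuming the superuser credentials created earlier (`admin`/`password` below are placeholders) and the self-signed development certificate:
```bash
# Query the API root and the ping endpoint using basic auth.
# -k tolerates the self-signed certificate used by the dev environment.
curl -k -u admin:password https://localhost:8043/api/v2/
curl -k -u admin:password https://localhost:8043/api/v2/ping/
```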
@@ -289,7 +290,7 @@ When necessary, remove any AWX containers and images by running the following:
```bash
(host)$ make docker-clean
```
## What should I work on?
For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).
@@ -331,6 +332,23 @@ Sometimes it might take us a while to fully review your PR. We try to keep the `
All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.
## PR Checks run by Zuul
Zuul jobs for awx are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.
Zuul runs the following checks that must pass:
1) `tox-awx-api-lint`
2) `tox-awx-ui-lint`
3) `tox-awx-api`
4) `tox-awx-ui`
5) `tox-awx-swagger`
Zuul runs the following checks that are non-voting (they cannot fail the PR, but serve to inform PR reviewers):
1) `tox-awx-detect-schema-change`
This check generates the schema and diffs it against a reference copy of the `devel` version of the schema.
Reviewers should inspect the `job-output.txt.gz` related to the check if there is a failure (grep for `diff -u -b` to find the beginning of the diff).
If the schema change is expected and makes sense in relation to the changes made by the PR, then you are good to go!
If not, the schema changes should be fixed, but this decision must be enforced by reviewers.
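To reproduce the comparison locally before pushing, a sketch using the `docker-compose-genschema` Make target and the same reference copy the CI check downloads:
```bash
# Generate the schema from your branch, fetch the published reference
# copy, and diff the two; -b ignores whitespace-only differences.
make docker-compose-genschema
curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
diff -u -b schema.json reference-schema.json
```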
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issue, we ask that you please view our [Issues guide](./ISSUES.md).

View File

@@ -119,12 +119,12 @@ To complete a deployment to OpenShift, you will obviously need access to an Open
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into the cluster and creating objects on it.
The default resource requests per-pod require:
The default resource requests per-deployment require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/openshift/defaults/main.yml](/installer/openshift/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)
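For example, overrides can be supplied as extra variables when running the install playbook; the variable names below are hypothetical placeholders, so take the real ones from the defaults file linked above:
```bash
# Hypothetical sketch: override resource requests at install time.
# Replace task_cpu_request / task_mem_request with actual variable names
# from installer/roles/kubernetes/defaults/main.yml.
ansible-playbook -i inventory install.yml \
  -e task_cpu_request=1500 \
  -e task_mem_request=2
```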
@@ -236,7 +236,7 @@ Using /etc/ansible/ansible.cfg as config file
}
Operations to perform:
Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
@@ -426,7 +426,7 @@ If you choose to use the official images then the remote host will be the one to
> As mentioned in [Prerequisites](#prerequisites-1) above, the prerequisites must be installed on the remote host.
> When deploying to a remote host, the playook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
#### Inventory variables
@@ -449,6 +449,10 @@ Before starting the build process, review the [inventory](./installer/inventory)
When using docker-compose, the `docker-compose.yml` file will be created there (default `/var/lib/awx`).
*ca_trust_dir*
> If you're using a non-trusted CA, provide a path to where the untrusted certs are stored on your host.
#### Docker registry
If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:
@@ -548,7 +552,7 @@ Using /etc/ansible/ansible.cfg as config file
}
Operations to perform:
Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...

View File

@@ -11,7 +11,6 @@ GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
IMAGE_REPOSITORY_AUTH ?=
IMAGE_REPOSITORY_BASE ?= https://gcr.io
VERSION := $(shell cat VERSION)
# NOTE: This defaults the container image version to the branch that's active
@@ -59,7 +58,7 @@ UI_RELEASE_FLAG_FILE = awx/ui/.release_built
I18N_FLAG_FILE = .i18n_built
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange dbshell runserver celeryd \
develop refresh adduser migrate dbchange dbshell runserver \
receiver test test_unit test_ansible test_coverage coverage_html \
dev_build release_build release_clean sdist \
ui-docker-machine ui-docker ui-release ui-devel \
@@ -85,6 +84,11 @@ clean-venv:
clean-dist:
rm -rf dist
clean-schema:
rm -rf swagger.json
rm -rf schema.json
rm -rf reference-schema.json
# Remove temporary build files, compiled Python files.
clean: clean-ui clean-dist
rm -rf awx/public
@@ -165,7 +169,7 @@ requirements_awx: virtualenv_awx
else \
cat requirements/requirements.txt requirements/requirements_git.txt | $(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) --ignore-installed -r /dev/stdin ; \
fi
$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt
#$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt
requirements_awx_dev:
$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt
@@ -204,7 +208,7 @@ init:
if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat >> /root/.ssh/authorized_keys'; \
$(MANAGEMENT_COMMAND) generate_isolated_key > /awx_devel/awx/main/expect/authorized_keys; \
fi;
# Refresh development environment after pulling new code.
@@ -233,7 +237,7 @@ server_noattach:
tmux new-session -d -s awx 'exec make uwsgi'
tmux rename-window 'AWX'
tmux select-window -t awx:0
tmux split-window -v 'exec make celeryd'
tmux split-window -v 'exec make dispatcher'
tmux new-window 'exec make daphne'
tmux select-window -t awx:1
tmux rename-window 'WebSockets'
@@ -265,12 +269,6 @@ honcho:
fi; \
honcho start -f tools/docker-compose/Procfile
flower:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
celery flower --address=0.0.0.0 --port=5555 --broker=amqp://guest:guest@$(RABBITMQ_HOST):5672//
collectstatic:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -281,7 +279,7 @@ uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1-once="exec:/bin/sh -c '[ -f /tmp/celery_pid ] && kill -1 `cat /tmp/celery_pid` || true'"
uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1-once="exec:awx-manage run_dispatcher --reload"
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -302,13 +300,13 @@ runserver:
fi; \
$(PYTHON) manage.py runserver
# Run to start the background celery worker for development.
celeryd:
rm -f /tmp/celery_pid
# Run to start the background task dispatcher for development.
dispatcher:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
celery worker -A awx -l DEBUG -B -Ofair --autoscale=100,4 --schedule=$(CELERY_SCHEDULE_FILE) -n celery@$(COMPOSE_HOST) --pidfile /tmp/celery_pid
$(PYTHON) manage.py run_dispatcher
# Run to start the zeromq callback receiver
receiver:
@@ -344,11 +342,14 @@ pyflakes: reports
pylint: reports
@(set -o pipefail && $@ | reports/$@.report)
genschema: reports
$(MAKE) swagger PYTEST_ARGS="--genschema"
swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
check: flake8 pep8 # pyflakes pylint
@@ -547,38 +548,43 @@ docker-isolated:
docker start tools_awx_1
docker start tools_isolated_1
echo "__version__ = '`git describe --long | cut -d - -f 1-1`'" | docker exec -i tools_isolated_1 /bin/bash -c "cat > /venv/awx/lib/python2.7/site-packages/awx.py"
if [ "`docker exec -i -t tools_isolated_1 cat /root/.ssh/authorized_keys`" == "`docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub`" ]; then \
echo "SSH keys already copied to isolated instance"; \
else \
docker exec "tools_isolated_1" bash -c "mkdir -p /root/.ssh && rm -f /root/.ssh/authorized_keys && echo $$(docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"; \
fi
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
# Docker Compose Development environment
docker-compose: docker-auth
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml up --no-recreate awx
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml up --no-recreate awx
docker-compose-cluster: docker-auth
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml up
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml up
docker-compose-test: docker-auth
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /bin/bash
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /bin/bash
docker-compose-runtest:
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --user=$(shell id -u) --rm --service-ports awx /start_tests.sh
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh
docker-compose-build-swagger:
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --user=$(shell id -u) --rm --service-ports awx /start_tests.sh swagger
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh swagger
docker-compose-genschema:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh genschema
mv swagger.json schema.json
docker-compose-detect-schema-change:
$(MAKE) docker-compose-genschema
curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
# Ignore differences in whitespace with -b
diff -u -b schema.json reference-schema.json
docker-compose-clean:
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm -w /awx_devel --service-ports awx make clean
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm -w /awx_devel --service-ports awx make clean
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose rm -sf
docker-compose-build: awx-devel-build
# Base development image build
awx-devel-build:
docker build -t ansible/awx_devel --build-arg UID=$(shell id -u) -f tools/docker-compose/Dockerfile .
docker build -t ansible/awx_devel -f tools/docker-compose/Dockerfile .
docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
@@ -598,7 +604,7 @@ docker-refresh: docker-clean docker-compose
# Docker Development Environment with Elastic Stack Connected
docker-compose-elk: docker-auth
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: docker-auth
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
@@ -606,7 +612,6 @@ docker-compose-cluster-elk: docker-auth
minishift-dev:
ansible-playbook -i localhost, -e devtree_directory=$(CURDIR) tools/clusterdevel/start_minishift_dev.yml
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1

View File

@@ -1,7 +1,6 @@
[![Run Status](https://api.shippable.com/projects/591c82a22f895107009e8b35/badge?branch=devel)](https://app.shippable.com/github/ansible/awx)
[![Gated by Zuul](https://zuul-ci.org/gated.svg)](https://ansible.softwarefactory-project.io/zuul/status)
AWX
===
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is the upstream project for [Tower](https://www.ansible.com/tower), a commercial derivative of AWX.
@@ -11,6 +10,8 @@ To learn more about using AWX, and Tower, view the [Tower docs site](http://docs
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
The AWX logos and branding assets are covered by [our trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).
Contributing
------------

View File

@@ -1 +1 @@
2.0.1
2.1.2

View File

@@ -12,14 +12,6 @@ __version__ = get_distribution('awx').version
__all__ = ['__version__']
# Isolated nodes do not have celery installed
try:
from .celery import app as celery_app # noqa
__all__.append('celery_app')
except ImportError:
pass
# Check for the presence/absence of "devonly" module to determine if running
# from a source code checkout or release package.
try:
@@ -29,6 +21,48 @@ except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django
from django.utils.encoding import force_bytes
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.base import schema
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
if HAS_DJANGO is True:
# This line exists to make sure we don't regress on FIPS support if we
# upgrade Django; if you're upgrading Django and see this error,
# update the version check below, and confirm that FIPS still works.
if django.__version__ != '1.11.16':
raise RuntimeError("Django version other than 1.11.16 detected {}. \
Subclassing BaseDatabaseSchemaEditor is known to work for Django 1.11.16 \
and may not work in newer Django versions.".format(django.__version__))
class FipsBaseDatabaseSchemaEditor(BaseDatabaseSchemaEditor):
@classmethod
def _digest(cls, *args):
"""
Generates a 32-bit digest of a set of arguments that can be used to
shorten identifying names.
"""
try:
h = hashlib.md5()
except ValueError:
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(force_bytes(arg))
return h.hexdigest()[:8]
schema.BaseDatabaseSchemaEditor = FipsBaseDatabaseSchemaEditor
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.
command_dir = os.path.join(management_dir, 'commands')

View File

@@ -25,7 +25,6 @@ from rest_framework.filters import BaseFilterBackend
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names
from awx.main.models.credential import CredentialType
from awx.main.models.rbac import RoleAncestorEntry
class V1CredentialFilterBackend(BaseFilterBackend):
@@ -347,12 +346,12 @@ class FieldLookupBackend(BaseFilterBackend):
else:
args.append(Q(**{k:v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_(
'Cannot apply role_level filter to this list because its model '
'does not use roles for access control.'))
args.append(
Q(pk__in=RoleAncestorEntry.objects.filter(
ancestor__in=request.user.roles.all(),
content_type_id=ContentType.objects.get_for_model(queryset.model).id,
role_field=role_name
).values_list('object_id').distinct())
Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name))
)
if or_filters:
q = Q()

View File

@@ -6,7 +6,7 @@ import inspect
import logging
import time
import six
import urllib
# Django
from django.conf import settings
@@ -56,7 +56,7 @@ __all__ = ['APIView', 'GenericAPIView', 'ListAPIView', 'SimpleListAPIView',
'ParentMixin',
'DeleteLastUnattachLabelMixin',
'SubListAttachDetachAPIView',
'CopyAPIView']
'CopyAPIView', 'BaseUsersList',]
logger = logging.getLogger('awx.api.generics')
analytics_logger = logging.getLogger('awx.analytics.performance')
@@ -92,8 +92,7 @@ class LoggedLoginView(auth_views.LoginView):
current_user = UserSerializer(self.request.user)
current_user = JSONRenderer().render(current_user.data)
current_user = urllib.quote('%s' % current_user, '')
ret.set_cookie('current_user', current_user)
ret.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
return ret
else:
ret.status_code = 401
@@ -552,9 +551,8 @@ class SubListDestroyAPIView(DestroyAPIView, SubListAPIView):
def perform_list_destroy(self, instance_list):
if self.check_sub_obj_permission:
# Check permissions for all before deleting, avoiding half-deleted lists
for instance in instance_list:
if self.has_delete_permission(instance):
if not self.has_delete_permission(instance):
raise PermissionDenied()
for instance in instance_list:
self.perform_destroy(instance, check_permission=False)
@@ -990,3 +988,22 @@ class CopyAPIView(GenericAPIView):
serializer = self._get_copy_return_serializer(new_obj)
headers = {'Location': new_obj.get_absolute_url(request=request)}
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class BaseUsersList(SubListCreateAttachDetachAPIView):
def post(self, request, *args, **kwargs):
ret = super(BaseUsersList, self).post(request, *args, **kwargs)
if ret.status_code != 201:
return ret
try:
if ret.data is not None and request.data.get('is_system_auditor', False):
# This is a faux-field that just maps to checking the system
# auditor role member list... unfortunately this means we can't
# set it on creation, and thus needs to be set here.
user = User.objects.get(id=ret.data['id'])
user.is_system_auditor = request.data['is_system_auditor']
ret.data['is_system_auditor'] = request.data['is_system_auditor']
except AttributeError as exc:
print(exc)
pass
return ret

View File

@@ -63,12 +63,15 @@ class Metadata(metadata.SimpleMetadata):
verbose_name = smart_text(opts.verbose_name)
field_info['help_text'] = field_help_text[field.field_name].format(verbose_name)
for model_field in serializer.Meta.model._meta.fields:
if field.field_name == model_field.name:
field_info['filterable'] = True
break
if field.field_name == 'type':
field_info['filterable'] = True
else:
field_info['filterable'] = False
for model_field in serializer.Meta.model._meta.fields:
if field.field_name == model_field.name:
field_info['filterable'] = True
break
else:
field_info['filterable'] = False
# Indicate if a field has a default value.
# FIXME: Still isn't showing all default values?

View File

@@ -61,7 +61,7 @@ from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.validators import vars_validate_or_raise
from awx.conf.license import feature_enabled
from awx.conf.license import feature_enabled, LicenseForbids
from awx.api.versioning import reverse, get_request_version
from awx.api.fields import (BooleanNullField, CharNullField, ChoiceNullField,
VerbatimField, DeprecatedCredentialField)
@@ -104,7 +104,7 @@ SUMMARIZABLE_FK_FIELDS = {
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'vault_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed', 'type'),
'job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job': DEFAULT_SUMMARY_FIELDS,
@@ -1311,16 +1311,12 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
'admin', 'update',
{'copy': 'organization.project_admin'}
]
scm_delete_on_next_update = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
class Meta:
model = Project
fields = ('*', 'organization', 'scm_delete_on_next_update', 'scm_update_on_launch',
fields = ('*', 'organization', 'scm_update_on_launch',
'scm_update_cache_timeout', 'scm_revision', 'custom_virtualenv',) + \
('last_update_failed', 'last_updated') # Backwards compatibility
read_only_fields = ('scm_delete_on_next_update',)
def get_related(self, obj):
res = super(ProjectSerializer, self).get_related(obj)
@@ -1503,6 +1499,12 @@ class InventorySerializer(BaseSerializerWithVariables):
'admin', 'adhoc',
{'copy': 'organization.inventory_admin'}
]
groups_with_active_failures = serializers.IntegerField(
read_only=True,
min_value=0,
help_text=_('This field has been deprecated and will be removed in a future release')
)
class Meta:
model = Inventory
@@ -1724,6 +1726,11 @@ class AnsibleFactsSerializer(BaseSerializer):
class GroupSerializer(BaseSerializerWithVariables):
capabilities_prefetch = ['inventory.admin', 'inventory.adhoc']
groups_with_active_failures = serializers.IntegerField(
read_only=True,
min_value=0,
help_text=_('This field has been deprecated and will be removed in a future release')
)
class Meta:
model = Group
@@ -2976,12 +2983,16 @@ class JobTemplateMixin(object):
'''
def _recent_jobs(self, obj):
if hasattr(obj, 'workflow_jobs'):
job_mgr = obj.workflow_jobs
else:
job_mgr = obj.jobs
return [{'id': x.id, 'status': x.status, 'finished': x.finished}
for x in job_mgr.all().order_by('-created')[:10]]
# Exclude "joblets", jobs that ran as part of a sliced workflow job
uj_qs = obj.unifiedjob_unified_jobs.exclude(job__job_slice_count__gt=1).order_by('-created')
# Would like to apply an .only, but does not play well with non_polymorphic
# .only('id', 'status', 'finished', 'polymorphic_ctype_id')
optimized_qs = uj_qs.non_polymorphic()
return [{
'id': x.id, 'status': x.status, 'finished': x.finished,
# Make type consistent with API top-level key, for instance workflow_job
'type': x.get_real_instance_class()._meta.verbose_name.replace(' ', '_')
} for x in optimized_qs[:10]]
def get_summary_fields(self, obj):
d = super(JobTemplateMixin, self).get_summary_fields(obj)
@@ -3011,7 +3022,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
fields = ('*', 'host_config_key', 'ask_diff_mode_on_launch', 'ask_variables_on_launch', 'ask_limit_on_launch', 'ask_tags_on_launch',
'ask_skip_tags_on_launch', 'ask_job_type_on_launch', 'ask_verbosity_on_launch', 'ask_inventory_on_launch',
'ask_credential_on_launch', 'survey_enabled', 'become_enabled', 'diff_mode',
'allow_simultaneous', 'custom_virtualenv')
'allow_simultaneous', 'custom_virtualenv', 'job_slice_count')
def get_related(self, obj):
res = super(JobTemplateSerializer, self).get_related(obj)
@@ -3028,6 +3039,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
labels = self.reverse('api:job_template_label_list', kwargs={'pk': obj.pk}),
object_roles = self.reverse('api:job_template_object_roles_list', kwargs={'pk': obj.pk}),
instance_groups = self.reverse('api:job_template_instance_groups_list', kwargs={'pk': obj.pk}),
slice_workflow_jobs = self.reverse('api:job_template_slice_workflow_jobs_list', kwargs={'pk': obj.pk}),
))
if self.version > 1:
res['copy'] = self.reverse('api:job_template_copy', kwargs={'pk': obj.pk})
@@ -3049,7 +3061,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job types 'run' and 'check' must have assigned a project.")})
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
raise serializers.ValidationError({'inventory': prompting_error_message})
@@ -3058,6 +3070,13 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
def validate_extra_vars(self, value):
return vars_validate_or_raise(value)
def validate_job_slice_count(self, value):
if value > 1 and not feature_enabled('workflows'):
raise LicenseForbids({'job_slice_count': [_(
"Job slicing is a workflows-based feature and your license does not allow use of workflows."
)]})
return value
def get_summary_fields(self, obj):
summary_fields = super(JobTemplateSerializer, self).get_summary_fields(obj)
all_creds = []
@@ -3102,19 +3121,46 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
return summary_fields
class JobTemplateWithSpecSerializer(JobTemplateSerializer):
'''
Used for activity stream entries.
'''
class Meta:
model = JobTemplate
fields = ('*', 'survey_spec')
class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
passwords_needed_to_start = serializers.ReadOnlyField()
ask_diff_mode_on_launch = serializers.ReadOnlyField()
ask_variables_on_launch = serializers.ReadOnlyField()
ask_limit_on_launch = serializers.ReadOnlyField()
ask_skip_tags_on_launch = serializers.ReadOnlyField()
ask_tags_on_launch = serializers.ReadOnlyField()
ask_job_type_on_launch = serializers.ReadOnlyField()
ask_verbosity_on_launch = serializers.ReadOnlyField()
ask_inventory_on_launch = serializers.ReadOnlyField()
ask_credential_on_launch = serializers.ReadOnlyField()
ask_diff_mode_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_variables_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_limit_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_skip_tags_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_tags_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_job_type_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_verbosity_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_inventory_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_credential_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
artifacts = serializers.SerializerMethodField()
class Meta:
@@ -3123,7 +3169,7 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
'ask_variables_on_launch', 'ask_limit_on_launch', 'ask_tags_on_launch', 'ask_skip_tags_on_launch',
'ask_job_type_on_launch', 'ask_verbosity_on_launch', 'ask_inventory_on_launch',
'ask_credential_on_launch', 'allow_simultaneous', 'artifacts', 'scm_revision',
'instance_group', 'diff_mode')
'instance_group', 'diff_mode', 'job_slice_number', 'job_slice_count')
def get_related(self, obj):
res = super(JobSerializer, self).get_related(obj)
@@ -3557,7 +3603,7 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'extra_vars', 'organization', 'survey_enabled', 'allow_simultaneous',
'ask_variables_on_launch',)
'ask_variables_on_launch', 'inventory', 'ask_inventory_on_launch',)
def get_related(self, obj):
res = super(WorkflowJobTemplateSerializer, self).get_related(obj)
@@ -3585,12 +3631,24 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
return vars_validate_or_raise(value)
class WorkflowJobTemplateWithSpecSerializer(WorkflowJobTemplateSerializer):
'''
Used for activity stream entries.
'''
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'survey_spec')
class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
class Meta:
model = WorkflowJob
fields = ('*', 'workflow_job_template', 'extra_vars', 'allow_simultaneous',
'-execution_node', '-event_processing_finished', '-controller_node',)
'job_template', 'is_sliced_job',
'-execution_node', '-event_processing_finished', '-controller_node',
'inventory',)
def get_related(self, obj):
res = super(WorkflowJobSerializer, self).get_related(obj)
@@ -3598,6 +3656,8 @@ class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
res['workflow_job_template'] = self.reverse('api:workflow_job_template_detail',
kwargs={'pk': obj.workflow_job_template.pk})
res['notifications'] = self.reverse('api:workflow_job_notifications_list', kwargs={'pk': obj.pk})
if obj.job_template_id:
res['job_template'] = self.reverse('api:job_template_detail', kwargs={'pk': obj.job_template_id})
res['workflow_nodes'] = self.reverse('api:workflow_job_workflow_nodes_list', kwargs={'pk': obj.pk})
res['labels'] = self.reverse('api:workflow_job_label_list', kwargs={'pk': obj.pk})
res['activity_stream'] = self.reverse('api:workflow_job_activity_stream_list', kwargs={'pk': obj.pk})
@@ -3671,7 +3731,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
if obj is None:
return ret
if 'extra_data' in ret and obj.survey_passwords:
ret['extra_data'] = obj.display_extra_data()
ret['extra_data'] = obj.display_extra_vars()
return ret
def get_summary_fields(self, obj):
@@ -3812,9 +3872,6 @@ class WorkflowJobTemplateNodeSerializer(LaunchConfigurationBaseSerializer):
ujt_obj = attrs['unified_job_template']
elif self.instance:
ujt_obj = self.instance.unified_job_template
if isinstance(ujt_obj, (WorkflowJobTemplate)):
raise serializers.ValidationError({
"unified_job_template": _("Cannot nest a %s inside a WorkflowJobTemplate") % ujt_obj.__class__.__name__})
if 'credential' in deprecated_fields: # TODO: remove when v2 API is deprecated
cred = deprecated_fields['credential']
attrs['credential'] = cred
@@ -3863,7 +3920,8 @@ class WorkflowJobNodeSerializer(LaunchConfigurationBaseSerializer):
class Meta:
model = WorkflowJobNode
fields = ('*', 'credential', 'job', 'workflow_job', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',)
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',
'do_not_run',)
def get_related(self, obj):
res = super(WorkflowJobNodeSerializer, self).get_related(obj)
@@ -4365,37 +4423,63 @@ class JobLaunchSerializer(BaseSerializer):
class WorkflowJobLaunchSerializer(BaseSerializer):
can_start_without_user_input = serializers.BooleanField(read_only=True)
defaults = serializers.SerializerMethodField()
variables_needed_to_start = serializers.ReadOnlyField()
survey_enabled = serializers.SerializerMethodField()
extra_vars = VerbatimField(required=False, write_only=True)
inventory = serializers.PrimaryKeyRelatedField(
queryset=Inventory.objects.all(),
required=False, write_only=True
)
workflow_job_template_data = serializers.SerializerMethodField()
class Meta:
model = WorkflowJobTemplate
fields = ('can_start_without_user_input', 'extra_vars',
'survey_enabled', 'variables_needed_to_start',
fields = ('ask_inventory_on_launch', 'can_start_without_user_input', 'defaults', 'extra_vars',
'inventory', 'survey_enabled', 'variables_needed_to_start',
'node_templates_missing', 'node_prompts_rejected',
'workflow_job_template_data')
'workflow_job_template_data', 'survey_enabled')
read_only_fields = ('ask_inventory_on_launch',)
def get_survey_enabled(self, obj):
if obj:
return obj.survey_enabled and 'spec' in obj.survey_spec
return False
def get_defaults(self, obj):
defaults_dict = {}
for field_name in WorkflowJobTemplate.get_ask_mapping().keys():
if field_name == 'inventory':
defaults_dict[field_name] = dict(
name=getattrd(obj, '%s.name' % field_name, None),
id=getattrd(obj, '%s.pk' % field_name, None))
else:
defaults_dict[field_name] = getattr(obj, field_name)
return defaults_dict
def get_workflow_job_template_data(self, obj):
return dict(name=obj.name, id=obj.id, description=obj.description)
def validate(self, attrs):
obj = self.instance
template = self.instance
accepted, rejected, errors = obj._accept_or_ignore_job_kwargs(
_exclude_errors=['required'],
**attrs)
accepted, rejected, errors = template._accept_or_ignore_job_kwargs(**attrs)
self._ignored_fields = rejected
WFJT_extra_vars = obj.extra_vars
attrs = super(WorkflowJobLaunchSerializer, self).validate(attrs)
obj.extra_vars = WFJT_extra_vars
return attrs
if template.inventory and template.inventory.pending_deletion is True:
errors['inventory'] = _("The inventory associated with this Workflow is being deleted.")
elif 'inventory' in accepted and accepted['inventory'].pending_deletion:
errors['inventory'] = _("The provided inventory is being deleted.")
if errors:
raise serializers.ValidationError(errors)
WFJT_extra_vars = template.extra_vars
WFJT_inventory = template.inventory
super(WorkflowJobLaunchSerializer, self).validate(attrs)
template.extra_vars = WFJT_extra_vars
template.inventory = WFJT_inventory
return accepted
class NotificationTemplateSerializer(BaseSerializer):
@@ -4535,13 +4619,13 @@ class SchedulePreviewSerializer(BaseSerializer):
# - COUNT > 999
def validate_rrule(self, value):
rrule_value = value
multi_by_month_day = ".*?BYMONTHDAY[\:\=][0-9]+,-*[0-9]+"
multi_by_month = ".*?BYMONTH[\:\=][0-9]+,[0-9]+"
by_day_with_numeric_prefix = ".*?BYDAY[\:\=][0-9]+[a-zA-Z]{2}"
match_count = re.match(".*?(COUNT\=[0-9]+)", rrule_value)
match_multiple_dtstart = re.findall(".*?(DTSTART(;[^:]+)?\:[0-9]+T[0-9]+Z?)", rrule_value)
match_native_dtstart = re.findall(".*?(DTSTART:[0-9]+T[0-9]+) ", rrule_value)
match_multiple_rrule = re.findall(".*?(RRULE\:)", rrule_value)
multi_by_month_day = r".*?BYMONTHDAY[\:\=][0-9]+,-*[0-9]+"
multi_by_month = r".*?BYMONTH[\:\=][0-9]+,[0-9]+"
by_day_with_numeric_prefix = r".*?BYDAY[\:\=][0-9]+[a-zA-Z]{2}"
match_count = re.match(r".*?(COUNT\=[0-9]+)", rrule_value)
match_multiple_dtstart = re.findall(r".*?(DTSTART(;[^:]+)?\:[0-9]+T[0-9]+Z?)", rrule_value)
match_native_dtstart = re.findall(r".*?(DTSTART:[0-9]+T[0-9]+) ", rrule_value)
match_multiple_rrule = re.findall(r".*?(RRULE\:)", rrule_value)
if not len(match_multiple_dtstart):
raise serializers.ValidationError(_('Valid DTSTART required in rrule. Value should start with: DTSTART:YYYYMMDDTHHMMSSZ'))
if len(match_native_dtstart):
@@ -4614,6 +4698,23 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
res['inventory'] = obj.unified_job_template.inventory.get_absolute_url(self.context.get('request'))
return res
def get_summary_fields(self, obj):
summary_fields = super(ScheduleSerializer, self).get_summary_fields(obj)
if 'inventory' in summary_fields:
return summary_fields
inventory = None
if obj.unified_job_template and getattr(obj.unified_job_template, 'inventory', None):
inventory = obj.unified_job_template.inventory
else:
return summary_fields
summary_fields['inventory'] = dict()
for field in SUMMARIZABLE_FK_FIELDS['inventory']:
summary_fields['inventory'][field] = getattr(inventory, field, None)
return summary_fields
def validate_unified_job_template(self, value):
if type(value) == InventorySource and value.source not in SCHEDULEABLE_PROVIDERS:
raise serializers.ValidationError(_('Inventory Source must be a cloud resource.'))
@@ -4837,10 +4938,6 @@ class ActivityStreamSerializer(BaseSerializer):
def get_related(self, obj):
rel = {}
VIEW_NAME_EXCEPTIONS = {
'custom_inventory_script': 'inventory_script_detail',
'o_auth2_access_token': 'o_auth2_token_detail'
}
if obj.actor is not None:
rel['actor'] = self.reverse('api:user_detail', kwargs={'pk': obj.actor.pk})
for fk, __ in self._local_summarizable_fk_fields:
@@ -4854,11 +4951,12 @@ class ActivityStreamSerializer(BaseSerializer):
if getattr(thisItem, 'id', None) in id_list:
continue
id_list.append(getattr(thisItem, 'id', None))
if fk in VIEW_NAME_EXCEPTIONS:
view_name = VIEW_NAME_EXCEPTIONS[fk]
if hasattr(thisItem, 'get_absolute_url'):
rel_url = thisItem.get_absolute_url(self.context.get('request'))
else:
view_name = fk + '_detail'
rel[fk].append(self.reverse('api:' + view_name, kwargs={'pk': thisItem.id}))
rel_url = self.reverse('api:' + view_name, kwargs={'pk': thisItem.id})
rel[fk].append(rel_url)
if fk == 'schedule':
rel['unified_job_template'] = thisItem.unified_job_template.get_absolute_url(self.context.get('request'))

View File

@@ -26,6 +26,9 @@ string of `?all=1` to return all hosts, including disabled ones.
Specify a query string of `?towervars=1` to add variables
to the hostvars of each host that specifies its enabled state and database ID.
Specify a query string of `?subset=slice2of5` to produce an inventory that
has a restricted number of hosts according to the rules of job slicing.
To apply multiple query strings, join them with the `&` character, like `?hostvars=1&all=1`.
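As a hedged illustration, assuming the inventory script endpoint lives at `/api/v2/inventories/<id>/script/` on a local development server with placeholder credentials:
```bash
# Fetch hostvars for all hosts (including disabled ones), restricted to
# the second of five slices, for inventory 1.
curl -k -u admin:password \
  "https://localhost:8043/api/v2/inventories/1/script/?hostvars=1&all=1&subset=slice2of5"
```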
## Host Response

View File

@@ -8,6 +8,7 @@ from awx.api.views import (
JobTemplateDetail,
JobTemplateLaunch,
JobTemplateJobsList,
JobTemplateSliceWorkflowJobsList,
JobTemplateCallback,
JobTemplateSchedulesList,
JobTemplateSurveySpec,
@@ -28,6 +29,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/$', JobTemplateDetail.as_view(), name='job_template_detail'),
url(r'^(?P<pk>[0-9]+)/launch/$', JobTemplateLaunch.as_view(), name='job_template_launch'),
url(r'^(?P<pk>[0-9]+)/jobs/$', JobTemplateJobsList.as_view(), name='job_template_jobs_list'),
url(r'^(?P<pk>[0-9]+)/slice_workflow_jobs/$', JobTemplateSliceWorkflowJobsList.as_view(), name='job_template_slice_workflow_jobs_list'),
url(r'^(?P<pk>[0-9]+)/callback/$', JobTemplateCallback.as_view(), name='job_template_callback'),
url(r'^(?P<pk>[0-9]+)/schedules/$', JobTemplateSchedulesList.as_view(), name='job_template_schedules_list'),
url(r'^(?P<pk>[0-9]+)/survey_spec/$', JobTemplateSurveySpec.as_view(), name='job_template_survey_spec'),

File diff suppressed because it is too large

awx/api/views/inventory.py Normal file
View File

@@ -0,0 +1,211 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.conf import settings
from django.db.models import Q
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
# AWX
from awx.main.models import (
ActivityStream,
Inventory,
JobTemplate,
Role,
User,
InstanceGroup,
InventoryUpdateEvent,
InventoryUpdate,
InventorySource,
CustomInventoryScript,
)
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
CopyAPIView,
)
from awx.api.serializers import (
InventorySerializer,
ActivityStreamSerializer,
RoleSerializer,
InstanceGroupSerializer,
InventoryUpdateEventSerializer,
CustomInventoryScriptSerializer,
InventoryDetailSerializer,
JobTemplateSerializer,
)
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
RelatedJobsPreventDeleteMixin,
ControlledByScmMixin,
)
logger = logging.getLogger('awx.api.views.organization')
class InventoryUpdateEventsList(SubListAPIView):
model = InventoryUpdateEvent
serializer_class = InventoryUpdateEventSerializer
parent_model = InventoryUpdate
relationship = 'inventory_update_events'
view_name = _('Inventory Update Events List')
search_fields = ('stdout',)
def finalize_response(self, request, response, *args, **kwargs):
response['X-UI-Max-Events'] = settings.MAX_UI_JOB_EVENTS
return super(InventoryUpdateEventsList, self).finalize_response(request, response, *args, **kwargs)
class InventoryScriptList(ListCreateAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
class InventoryScriptDetail(RetrieveUpdateDestroyAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
can_delete = request.user.can_access(self.model, 'delete', instance)
if not can_delete:
raise PermissionDenied(_("Cannot delete inventory script."))
for inv_src in InventorySource.objects.filter(source_script=instance):
inv_src.source_script = None
inv_src.save()
return super(InventoryScriptDetail, self).destroy(request, *args, **kwargs)
class InventoryScriptObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = CustomInventoryScript
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryScriptCopy(CopyAPIView):
model = CustomInventoryScript
copy_return_serializer_class = CustomInventoryScriptSerializer
class InventoryList(ListCreateAPIView):
model = Inventory
serializer_class = InventorySerializer
def get_queryset(self):
qs = Inventory.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'read_role', 'update_role', 'use_role', 'adhoc_role')
qs = qs.prefetch_related('created_by', 'modified_by', 'organization')
return qs
class InventoryDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventoryDetailSerializer
def update(self, request, *args, **kwargs):
obj = self.get_object()
kind = self.request.data.get('kind') or kwargs.get('kind')
# Do not allow changes to an Inventory kind.
if kind is not None and obj.kind != kind:
return self.http_method_not_allowed(request, *args, **kwargs)
return super(InventoryDetail, self).update(request, *args, **kwargs)
def destroy(self, request, *args, **kwargs):
obj = self.get_object()
if not request.user.can_access(self.model, 'delete', obj):
raise PermissionDenied()
self.check_related_active_jobs(obj) # related jobs mixin
try:
obj.schedule_deletion(getattr(request.user, 'id', None))
return Response(status=status.HTTP_202_ACCEPTED)
except RuntimeError as e:
return Response(dict(error=_("{0}".format(e))), status=status.HTTP_400_BAD_REQUEST)
class InventoryActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Inventory
relationship = 'activitystream_set'
search_fields = ('changes',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(Q(inventory=parent) | Q(host__in=parent.hosts.all()) | Q(group__in=parent.groups.all()))
class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Inventory
relationship = 'instance_groups'
class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessList's
parent_model = Inventory
class InventoryObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryJobTemplateList(SubListAPIView):
model = JobTemplate
serializer_class = JobTemplateSerializer
parent_model = Inventory
relationship = 'jobtemplates'
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(inventory=parent)
class InventoryCopy(CopyAPIView):
model = Inventory
copy_return_serializer_class = InventorySerializer

View File

@@ -13,12 +13,16 @@ from django.shortcuts import get_object_or_404
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
from rest_framework.permissions import SAFE_METHODS
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.utils import get_object_or_400
from awx.main.utils import (
get_object_or_400,
parse_yaml_or_json,
)
from awx.main.models.ha import (
Instance,
InstanceGroup,
@@ -101,7 +105,9 @@ class UnifiedJobDeletionMixin(object):
class InstanceGroupMembershipMixin(object):
'''
Manages signaling celery to reload its queue configuration on Instance Group membership changes
This mixin overloads attach/detach so that it calls InstanceGroup.save(),
triggering a background recalculation of policy-based instance group
membership.
'''
def attach(self, request, *args, **kwargs):
response = super(InstanceGroupMembershipMixin, self).attach(request, *args, **kwargs)
@@ -271,3 +277,33 @@ class OrganizationCountsMixin(object):
full_context['related_field_counts'] = count_context
return full_context
class ControlledByScmMixin(object):
'''
Special method to reset SCM inventory commit hash
if anything that it manages changes.
'''
def _reset_inv_src_rev(self, obj):
if self.request.method in SAFE_METHODS or not obj:
return
project_following_sources = obj.inventory_sources.filter(
update_on_project_update=True, source='scm')
if project_following_sources:
# Allow inventory changes unrelated to variables
if self.model == Inventory and (
not self.request or not self.request.data or
parse_yaml_or_json(self.request.data.get('variables', '')) == parse_yaml_or_json(obj.variables)):
return
project_following_sources.update(scm_last_revision='')
def get_object(self):
obj = super(ControlledByScmMixin, self).get_object()
self._reset_inv_src_rev(obj)
return obj
def get_parent_object(self):
obj = super(ControlledByScmMixin, self).get_parent_object()
self._reset_inv_src_rev(obj)
return obj

View File

@@ -0,0 +1,247 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.db.models import Count
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# AWX
from awx.conf.license import (
feature_enabled,
LicenseForbids,
)
from awx.main.models import (
ActivityStream,
Inventory,
Project,
JobTemplate,
WorkflowJobTemplate,
Organization,
NotificationTemplate,
Role,
User,
Team,
InstanceGroup,
)
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListCreateAttachDetachAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
BaseUsersList,
)
from awx.api.serializers import (
OrganizationSerializer,
InventorySerializer,
ProjectSerializer,
UserSerializer,
TeamSerializer,
ActivityStreamSerializer,
RoleSerializer,
NotificationTemplateSerializer,
WorkflowJobTemplateSerializer,
InstanceGroupSerializer,
)
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
RelatedJobsPreventDeleteMixin,
OrganizationCountsMixin,
)
logger = logging.getLogger('awx.api.views.organization')
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_queryset(self):
qs = Organization.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role')
qs = qs.prefetch_related('created_by', 'modified_by')
return qs
def create(self, request, *args, **kwargs):
"""Create a new organzation.
If there is already an organization and the license of this
instance does not permit multiple organizations, then raise
LicenseForbids.
"""
# Sanity check: If the multiple organizations feature is disallowed
# by the license, then we are only willing to create this organization
# if no organizations exist in the system.
if (not feature_enabled('multiple_organizations') and
self.model.objects.exists()):
raise LicenseForbids(_('Your license only permits a single '
'organization to exist.'))
# Okay, create the organization as usual.
return super(OrganizationList, self).create(request, *args, **kwargs)
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_serializer_context(self, *args, **kwargs):
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
if not hasattr(self, 'kwargs') or 'pk' not in self.kwargs:
return full_context
org_id = int(self.kwargs['pk'])
org_counts = {}
access_kwargs = {'accessor': self.request.user, 'role_field': 'read_role'}
direct_counts = Organization.objects.filter(id=org_id).annotate(
users=Count('member_role__members', distinct=True),
admins=Count('admin_role__members', distinct=True)
).values('users', 'admins')
if not direct_counts:
return full_context
org_counts = direct_counts[0]
org_counts['inventories'] = Inventory.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['teams'] = Team.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['projects'] = Project.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
project__organization__id=org_id).count()
full_context['related_field_counts'] = {}
full_context['related_field_counts'][org_id] = org_counts
return full_context
class OrganizationInventoriesList(SubListAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Organization
relationship = 'inventories'
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'member_role.members'
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'admin_role.members'
class OrganizationProjectsList(SubListCreateAttachDetachAPIView):
model = Project
serializer_class = ProjectSerializer
parent_model = Organization
relationship = 'projects'
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):
model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
relationship = 'workflows'
parent_key = 'organization'
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
parent_model = Organization
relationship = 'teams'
parent_key = 'organization'
class OrganizationActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Organization
relationship = 'activitystream_set'
search_fields = ('changes',)
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates'
parent_key = 'organization'
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_any'
class OrganizationNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_error'
class OrganizationNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_success'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Organization
relationship = 'instance_groups'
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessList's
parent_model = Organization
class OrganizationObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)

awx/api/views/root.py Normal file

@@ -0,0 +1,278 @@
# Copyright (c) 2018 Ansible, Inc.
# All Rights Reserved.
import logging
import json
from collections import OrderedDict
from django.conf import settings
from django.utils.encoding import smart_text
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
from rest_framework import status
from awx.api.generics import APIView
from awx.main.ha import is_ha_environment
from awx.main.utils import (
get_awx_version,
get_ansible_version,
get_custom_venv_choices,
to_python_boolean,
)
from awx.api.versioning import reverse, get_request_version, drf_reverse
from awx.conf.license import get_license, feature_enabled
from awx.main.models import (
Project,
Organization,
Instance,
InstanceGroup,
JobTemplate,
)
logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView):
permission_classes = (AllowAny,)
view_name = _('REST API')
versioning_class = None
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
''' List supported API versions '''
v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v1 = v1, v2 = v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
if feature_enabled('rebranding'):
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
return Response(data)
class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,)
view_name = _("API OAuth 2 Authorization Root")
versioning_class = None
swagger_topic = 'Authentication'
def get(self, request, format=None):
data = OrderedDict()
data['authorize'] = drf_reverse('api:authorize')
data['token'] = drf_reverse('api:token')
data['revoke_token'] = drf_reverse('api:revoke-token')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
def get(self, request, format=None):
''' List top level resources '''
data = OrderedDict()
data['ping'] = reverse('api:api_v1_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['config'] = reverse('api:api_v1_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
data['dashboard'] = reverse('api:dashboard_view', request=request)
data['organizations'] = reverse('api:organization_list', request=request)
data['users'] = reverse('api:user_list', request=request)
data['projects'] = reverse('api:project_list', request=request)
data['project_updates'] = reverse('api:project_update_list', request=request)
data['teams'] = reverse('api:team_list', request=request)
data['credentials'] = reverse('api:credential_list', request=request)
if get_request_version(request) > 1:
data['credential_types'] = reverse('api:credential_type_list', request=request)
data['applications'] = reverse('api:o_auth2_application_list', request=request)
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['inventory_scripts'] = reverse('api:inventory_script_list', request=request)
data['inventory_sources'] = reverse('api:inventory_source_list', request=request)
data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['job_events'] = reverse('api:job_event_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
data['system_job_templates'] = reverse('api:system_job_template_list', request=request)
data['system_jobs'] = reverse('api:system_job_list', request=request)
data['schedules'] = reverse('api:schedule_list', request=request)
data['roles'] = reverse('api:role_list', request=request)
data['notification_templates'] = reverse('api:notification_template_list', request=request)
data['notifications'] = reverse('api:notification_list', request=request)
data['labels'] = reverse('api:label_list', request=request)
data['unified_job_templates'] = reverse('api:unified_job_template_list', request=request)
data['unified_jobs'] = reverse('api:unified_job_list', request=request)
data['activity_stream'] = reverse('api:activity_stream_list', request=request)
data['workflow_job_templates'] = reverse('api:workflow_job_template_list', request=request)
data['workflow_jobs'] = reverse('api:workflow_job_list', request=request)
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
return Response(data)
class ApiV1RootView(ApiVersionRootView):
view_name = _('Version 1')
class ApiV2RootView(ApiVersionRootView):
view_name = _('Version 2')
class ApiV1PingView(APIView):
"""A simple view that reports very basic information about this
instance, which is acceptable to be public information.
"""
permission_classes = (AllowAny,)
authentication_classes = ()
view_name = _('Ping')
swagger_topic = 'System Configuration'
def get(self, request, format=None):
"""Return some basic information about this instance
Everything returned here should be considered public / insecure, as
this requires no auth and is intended for use by the installer process.
"""
response = {
'ha': is_ha_environment(),
'version': get_awx_version(),
'active_node': settings.CLUSTER_HOST_ID,
}
response['instances'] = []
for instance in Instance.objects.all():
response['instances'].append(dict(node=instance.hostname, heartbeat=instance.modified,
capacity=instance.capacity, version=instance.version))
response['instances'].sort()
response['instance_groups'] = []
for instance_group in InstanceGroup.objects.all():
response['instance_groups'].append(dict(name=instance_group.name,
capacity=instance_group.capacity,
instances=[x.hostname for x in instance_group.instances.all()]))
return Response(response)
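A hedged illustration of the ping payload built above (hostname, capacity, and timestamp are invented):

# {
#     "ha": false,
#     "version": "2.1.2",
#     "active_node": "awx-1",
#     "instances": [{"node": "awx-1", "heartbeat": "2018-12-11T17:00:00Z",
#                    "capacity": 57, "version": "2.1.2"}],
#     "instance_groups": [{"name": "tower", "capacity": 57,
#                          "instances": ["awx-1"]}]
# }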
class ApiV1ConfigView(APIView):
permission_classes = (IsAuthenticated,)
view_name = _('Configuration')
swagger_topic = 'System Configuration'
def check_permissions(self, request):
super(ApiV1ConfigView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
self.permission_denied(request) # Raises PermissionDenied exception.
def get(self, request, format=None):
'''Return various sitewide configuration settings'''
if request.user.is_superuser or request.user.is_system_auditor:
license_data = get_license(show_key=True)
else:
license_data = get_license(show_key=False)
if not license_data.get('valid_key', False):
license_data = {}
if license_data and 'features' in license_data and 'activity_streams' in license_data['features']:
# FIXME: Make the final setting value dependent on the feature?
license_data['features']['activity_streams'] &= settings.ACTIVITY_STREAM_ENABLED
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
version=get_awx_version(),
ansible_version=get_ansible_version(),
eula=render_to_string("eula.md") if license_data.get('license_type', 'UNLICENSED') != 'open' else '',
analytics_status=pendo_state
)
# If LDAP is enabled, user_ldap_fields will return a list of field
# names that are managed by LDAP and should be read-only for users with
# a non-empty ldap_dn attribute.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None) and feature_enabled('ldap'):
user_ldap_fields = ['username', 'password']
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
data['user_ldap_fields'] = user_ldap_fields
if request.user.is_superuser \
or request.user.is_system_auditor \
or Organization.accessible_objects(request.user, 'admin_role').exists() \
or Organization.accessible_objects(request.user, 'auditor_role').exists():
data.update(dict(
project_base_dir = settings.PROJECTS_ROOT,
project_local_paths = Project.get_local_path_choices(),
custom_virtualenvs = get_custom_venv_choices()
))
elif JobTemplate.accessible_objects(request.user, 'admin_role').exists():
data['custom_virtualenvs'] = get_custom_venv_choices()
return Response(data)
def post(self, request):
if not isinstance(request.data, dict):
return Response({"error": _("Invalid license data")}, status=status.HTTP_400_BAD_REQUEST)
if "eula_accepted" not in request.data:
return Response({"error": _("Missing 'eula_accepted' property")}, status=status.HTTP_400_BAD_REQUEST)
try:
eula_accepted = to_python_boolean(request.data["eula_accepted"])
except ValueError:
return Response({"error": _("'eula_accepted' value is invalid")}, status=status.HTTP_400_BAD_REQUEST)
if not eula_accepted:
return Response({"error": _("'eula_accepted' must be True")}, status=status.HTTP_400_BAD_REQUEST)
request.data.pop("eula_accepted")
try:
data_actual = json.dumps(request.data)
except Exception:
logger.info(smart_text(u"Invalid JSON submitted for license."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid JSON")}, status=status.HTTP_400_BAD_REQUEST)
try:
from awx.main.utils.common import get_licenser
license_data = json.loads(data_actual)
license_data_validated = get_licenser(**license_data).validate()
except Exception:
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid License")}, status=status.HTTP_400_BAD_REQUEST)
# If the license is valid, write it to the database.
if license_data_validated['valid_key']:
settings.LICENSE = license_data
settings.TOWER_URL_BASE = "{}://{}".format(request.scheme, request.get_host())
return Response(license_data_validated)
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid license")}, status=status.HTTP_400_BAD_REQUEST)
def delete(self, request):
try:
settings.LICENSE = {}
return Response(status=status.HTTP_204_NO_CONTENT)
except Exception:
# FIX: Log
return Response({"error": _("Failed to remove license.")}, status=status.HTTP_400_BAD_REQUEST)


@@ -1,25 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings # noqa
try:
import awx.devonly # noqa
MODE = 'development'
except ImportError: # pragma: no cover
MODE = 'production'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings.%s' % MODE)
app = Celery('awx')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
if __name__ == '__main__':
app.start()


@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
# AWX
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('conf', '0005_v330_rename_two_session_settings'),
]
operations = [
migrations.RunPython(fill_ldap_group_type_params),
]


@@ -0,0 +1,30 @@
import inspect
from django.conf import settings
from django.utils.timezone import now
def fill_ldap_group_type_params(apps, schema_editor):
group_type = settings.AUTH_LDAP_GROUP_TYPE
Setting = apps.get_model('conf', 'Setting')
group_type_params = {'name_attr': 'cn', 'member_attr': 'member'}
qs = Setting.objects.filter(key='AUTH_LDAP_GROUP_TYPE_PARAMS')
entry = None
if qs.exists():
entry = qs[0]
group_type_params = entry.value
else:
entry = Setting(key='AUTH_LDAP_GROUP_TYPE_PARAMS',
value=group_type_params,
created=now(),
modified=now())
init_attrs = set(inspect.getargspec(group_type.__init__).args[1:])
for k in group_type_params.keys():
if k not in init_attrs:
del group_type_params[k]
entry.value = group_type_params
entry.save()
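A hedged example of the filtering step above, assuming django_auth_ldap's PosixGroupType, whose __init__ accepts only name_attr:

# inspect.getargspec(PosixGroupType.__init__).args[1:] == ['name_attr'],
# so a stored {'name_attr': 'cn', 'member_attr': 'member'} is trimmed to
# {'name_attr': 'cn'} before the Setting row is saved.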


@@ -2,11 +2,13 @@
from collections import namedtuple
import contextlib
import logging
import re
import sys
import threading
import time
import StringIO
import traceback
import urllib
import six
@@ -60,6 +62,15 @@ SETTING_CACHE_DEFAULTS = True
__all__ = ['SettingsWrapper', 'get_settings_to_cache', 'SETTING_CACHE_NOTSET']
def normalize_broker_url(value):
parts = value.rsplit('@', 1)
match = re.search('(amqp://[^:]+:)(.*)', parts[0])
if match:
prefix, password = match.group(1), match.group(2)
parts[0] = prefix + urllib.quote(password)
return '@'.join(parts)
@contextlib.contextmanager
def _ctit_db_wrapper(trans_safe=False):
'''
@@ -115,7 +126,8 @@ def _ctit_db_wrapper(trans_safe=False):
bottom_stack.close()
# Log the combined stack
if trans_safe:
logger.warning('Database settings are not available, using defaults, error:\n{}'.format(tb_string))
if 'check_migrations' not in sys.argv:
logger.warning('Database settings are not available, using defaults, error:\n{}'.format(tb_string))
else:
logger.error('Error modifying something related to database settings.\n{}'.format(tb_string))
finally:
@@ -444,7 +456,16 @@ class SettingsWrapper(UserSettingsHolder):
value = self._get_local(name)
if value is not empty:
return value
return self._get_default(name)
value = self._get_default(name)
# sometimes users specify RabbitMQ passwords that contain
# unescaped : and @ characters that confuse urlparse, e.g.,
# amqp://guest:a@ns:ibl3#@localhost:5672//
#
# detect these scenarios, and automatically escape the user's
# password so it just works
if name == 'BROKER_URL':
value = normalize_broker_url(value)
return value
def _set_local(self, name, value):
field = self.registry.get_setting_field(name)


@@ -475,6 +475,15 @@ class BaseCallbackModule(CallbackBase):
with self.capture_event_data('runner_retry', **event_data):
super(BaseCallbackModule, self).v2_runner_retry(result)
def v2_runner_on_start(self, host, task):
event_data = dict(
host=host.get_name(),
task=task
)
with self.capture_event_data('runner_on_start', **event_data):
super(BaseCallbackModule, self).v2_runner_on_start(host, task)
class AWXDefaultCallbackModule(BaseCallbackModule, DefaultCallbackModule):

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -524,7 +524,7 @@ class UserAccess(BaseAccess):
# A user can be changed if they are themselves, or by org admins or
# superusers. Change permission implies changing only certain fields
# that a user should be able to edit for themselves.
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
return bool(self.user == obj or self.can_admin(obj, data))
@@ -577,7 +577,7 @@ class UserAccess(BaseAccess):
return False
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
# Reverse obj and sub_obj, defer to RoleAccess if this is a role assignment.
@@ -587,7 +587,7 @@ class UserAccess(BaseAccess):
return super(UserAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
if relationship == 'roles':
@@ -1157,13 +1157,10 @@ class TeamAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
"""Reverse obj and sub_obj, defer to RoleAccess if this is an assignment
of a resource role to the team."""
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# MANAGE_ORGANIZATION_AUTH setting checked in RoleAccess
if isinstance(sub_obj, Role):
if sub_obj.content_object is None:
raise PermissionDenied(_("The {} role cannot be assigned to a team").format(sub_obj.name))
elif isinstance(sub_obj.content_object, User):
raise PermissionDenied(_("The admin_role for a User cannot be assigned to a team"))
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1175,9 +1172,7 @@ class TeamAccess(BaseAccess):
*args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# MANAGE_ORGANIZATION_AUTH setting checked in RoleAccess
if isinstance(sub_obj, Role):
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1213,7 +1208,7 @@ class ProjectAccess(BaseAccess):
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.accessible_objects(self.user, 'project_admin_role').exists()
return (self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) and
self.check_related('credential', Credential, data, role_field='use_role'))
@@ -1281,6 +1276,7 @@ class JobTemplateAccess(BaseAccess):
'instance_groups',
'credentials__credential_type',
Prefetch('labels', queryset=Label.objects.all().order_by('name')),
Prefetch('last_job', queryset=UnifiedJob.objects.non_polymorphic()),
)
def filtered_queryset(self):
@@ -1394,7 +1390,7 @@ class JobTemplateAccess(BaseAccess):
'job_tags', 'force_handlers', 'skip_tags', 'ask_variables_on_launch',
'ask_tags_on_launch', 'ask_job_type_on_launch', 'ask_skip_tags_on_launch',
'ask_inventory_on_launch', 'ask_credential_on_launch', 'survey_enabled',
'custom_virtualenv', 'diff_mode',
'custom_virtualenv', 'diff_mode', 'timeout', 'job_slice_count',
# These fields are ignored, but it is convenient for QA to allow clients to post them
'last_job_run', 'created', 'modified',
@@ -1789,7 +1785,7 @@ class WorkflowJobNodeAccess(BaseAccess):
def filtered_queryset(self):
return self.model.objects.filter(
workflow_job__workflow_job_template__in=WorkflowJobTemplate.accessible_objects(
workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(
self.user, 'read_role'))
@check_superuser
@@ -1840,8 +1836,10 @@ class WorkflowJobTemplateAccess(BaseAccess):
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
return self.check_related('organization', Organization, data, role_field='workflow_admin_role',
mandatory=True)
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and
self.check_related('inventory', Inventory, data, role_field='use_role')
)
def can_copy(self, obj):
if self.save_messages:
@@ -1895,8 +1893,11 @@ class WorkflowJobTemplateAccess(BaseAccess):
if self.user.is_superuser:
return True
return (self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.user in obj.admin_role)
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.check_related('inventory', Inventory, data, role_field='use_role', obj=obj) and
self.user in obj.admin_role
)
def can_delete(self, obj):
return self.user.is_superuser or self.user in obj.admin_role
@@ -1915,7 +1916,7 @@ class WorkflowJobAccess(BaseAccess):
def filtered_queryset(self):
return WorkflowJob.objects.filter(
workflow_job_template__in=WorkflowJobTemplate.accessible_objects(
unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(
self.user, 'read_role'))
def can_add(self, data):
@@ -1947,27 +1948,39 @@ class WorkflowJobAccess(BaseAccess):
if self.user.is_superuser:
return True
wfjt = obj.workflow_job_template
template = obj.workflow_job_template
if not template and obj.job_template_id:
template = obj.job_template
# only superusers can relaunch orphans
if not wfjt:
if not template:
return False
# If job was launched by another user, it could have survey passwords
if obj.created_by_id != self.user.pk:
# Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
config = None
# Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
if self.save_messages:
self.messages['detail'] = _('Workflow Job was launched with unknown prompts.')
return False
if config is None or config.prompts_dict():
# Check access to prompts, which can prevent relaunch
if config.prompts_dict():
if obj.created_by_id != self.user.pk:
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts provided by another user.')
return False
if not JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}):
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts you lack access to.')
return False
if config.has_unprompted(template):
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts no longer accepted.')
return False
# execute permission to WFJT is mandatory for any relaunch
return (self.user in wfjt.execute_role)
return (self.user in template.execute_role)
def can_recreate(self, obj):
node_qs = obj.workflow_job_nodes.all().prefetch_related('inventory', 'credentials', 'unified_job_template')
@@ -2550,14 +2563,13 @@ class RoleAccess(BaseAccess):
# Unsupported for now
return False
def can_attach(self, obj, sub_obj, relationship, data,
skip_sub_obj_read_check=False):
return self.can_unattach(obj, sub_obj, relationship, data, skip_sub_obj_read_check)
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
return self.can_unattach(obj, sub_obj, relationship, *args, **kwargs)
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, data=None, skip_sub_obj_read_check=False):
if isinstance(obj.content_object, Team):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
if not skip_sub_obj_read_check and relationship in ['members', 'member_role.parents', 'parents']:


@@ -0,0 +1,5 @@
from django.conf import settings
def get_local_queuename():
return settings.CLUSTER_HOST_ID.encode('utf-8')


@@ -0,0 +1,58 @@
import logging
import socket
from django.conf import settings
from awx.main.dispatch import get_local_queuename
from kombu import Connection, Queue, Exchange, Producer, Consumer
logger = logging.getLogger('awx.main.dispatch')
class Control(object):
services = ('dispatcher', 'callback_receiver')
result = None
def __init__(self, service, host=None):
if service not in self.services:
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_local_queuename()
self.queue = Queue(self.queuename, Exchange(self.queuename), routing_key=self.queuename)
def publish(self, msg, conn, **kwargs):
producer = Producer(
exchange=self.queue.exchange,
channel=conn,
routing_key=self.queuename
)
producer.publish(msg, expiration=5, **kwargs)
def status(self, *args, **kwargs):
return self.control_with_reply('status', *args, **kwargs)
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def control_with_reply(self, command, timeout=5):
logger.warn('checking {} {} for {}'.format(self.service, command, self.queuename))
reply_queue = Queue(name="amq.rabbitmq.reply-to")
self.result = None
with Connection(settings.BROKER_URL) as conn:
with Consumer(conn, reply_queue, callbacks=[self.process_message], no_ack=True):
self.publish({'control': command}, conn, reply_to='amq.rabbitmq.reply-to')
try:
conn.drain_events(timeout=timeout)
except socket.timeout:
logger.error('{} did not reply within {}s'.format(self.service, timeout))
raise
return self.result
def control(self, msg, **kwargs):
with Connection(settings.BROKER_URL) as conn:
self.publish(msg, conn)
def process_message(self, body, message):
self.result = body
message.ack()
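A hedged usage sketch of the Control helper above (assuming the module lives at awx.main.dispatch.control, which this view does not show):

from awx.main.dispatch.control import Control

print(Control('dispatcher').status())    # pool debug report over reply-to
uuids = Control('dispatcher').running()  # UUIDs of currently managed tasks
Control('dispatcher').control({'control': 'reload'})  # fire-and-forget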

awx/main/dispatch/pool.py Normal file

@@ -0,0 +1,408 @@
import logging
import os
import sys
import random
import traceback
from uuid import uuid4
import collections
from multiprocessing import Process
from multiprocessing import Queue as MPQueue
from Queue import Full as QueueFull, Empty as QueueEmpty
from django.conf import settings
from django.db import connection as django_connection, connections
from django.core.cache import cache as django_cache
from jinja2 import Template
import psutil
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
logger = logging.getLogger('awx.main.dispatch')
class PoolWorker(object):
'''
Used to track a worker child process and its pending and finished messages.
This class makes use of two distinct multiprocessing.Queues to track state:
- self.queue: this is a queue which represents pending messages that should
be handled by this worker process; as new AMQP messages come
in, a pool will put() them into this queue; the child
process that is forked will get() from this queue and handle
received messages in an endless loop
- self.finished: this is a queue which the worker process uses to signal
that it has finished processing a message
When a message is put() onto this worker, it is tracked in
self.managed_tasks.
Periodically, the worker will call .calculate_managed_tasks(), which will
cause messages in self.finished to be removed from self.managed_tasks.
In this way, self.managed_tasks represents a view of the messages assigned
to a specific process. The message at [0] is the least-recently inserted
message, and it represents what the worker is running _right now_
(self.current_task).
A worker is "busy" when it has at least one message in self.managed_tasks.
It is "idle" when self.managed_tasks is empty.
'''
def __init__(self, queue_size, target, args):
self.messages_sent = 0
self.messages_finished = 0
self.managed_tasks = collections.OrderedDict()
self.finished = MPQueue(queue_size)
self.queue = MPQueue(queue_size)
self.process = Process(target=target, args=(self.queue, self.finished) + args)
self.process.daemon = True
def start(self):
self.process.start()
def put(self, body):
uuid = '?'
if isinstance(body, dict):
if not body.get('uuid'):
body['uuid'] = str(uuid4())
uuid = body['uuid']
logger.debug('delivered {} to worker[{}] qsize {}'.format(
uuid, self.pid, self.qsize
))
self.managed_tasks[uuid] = body
self.queue.put(body, block=True, timeout=5)
self.messages_sent += 1
self.calculate_managed_tasks()
def quit(self):
'''
Send a special control message to the worker that tells it to exit
gracefully.
'''
self.queue.put('QUIT')
@property
def pid(self):
return self.process.pid
@property
def qsize(self):
return self.queue.qsize()
@property
def alive(self):
return self.process.is_alive()
@property
def mb(self):
if self.alive:
return '{:0.3f}'.format(
psutil.Process(self.pid).memory_info().rss / 1024.0 / 1024.0
)
return '0'
@property
def exitcode(self):
return str(self.process.exitcode)
def calculate_managed_tasks(self):
# look to see if any tasks were finished
finished = []
for _ in range(self.finished.qsize()):
try:
finished.append(self.finished.get(block=False))
except QueueEmpty:
break # qsize is not always _totally_ up to date
# if any tasks were finished, remove them from the managed tasks for
# this worker
for uuid in finished:
self.messages_finished += 1
del self.managed_tasks[uuid]
@property
def current_task(self):
self.calculate_managed_tasks()
# the task at [0] is the one that's running right now (or is about to
# be running)
if len(self.managed_tasks):
return self.managed_tasks[self.managed_tasks.keys()[0]]
return None
@property
def orphaned_tasks(self):
orphaned = []
if not self.alive:
# if this process had a running task that never finished,
# requeue its error callbacks
current_task = self.current_task
if isinstance(current_task, dict):
orphaned.extend(current_task.get('errbacks', []))
# if this process has any pending messages requeue them
for _ in range(self.qsize):
try:
message = self.queue.get(block=False)
if message != 'QUIT':
orphaned.append(message)
except QueueEmpty:
break # qsize is not always _totally_ up to date
if len(orphaned):
logger.error(
'requeuing {} messages from gone worker pid:{}'.format(
len(orphaned), self.pid
)
)
return orphaned
@property
def busy(self):
self.calculate_managed_tasks()
return len(self.managed_tasks) > 0
@property
def idle(self):
return not self.busy
class WorkerPool(object):
'''
Creates a pool of forked PoolWorkers.
As WorkerPool.write(...) is called (generally, by a kombu consumer
implementation when it receives an AMQP message), messages are passed to
one of the multiprocessing Queues where some work can be done on them.
class MessagePrinter(awx.main.dispatch.worker.BaseWorker):
def perform_work(self, body):
print body
pool = WorkerPool(min_workers=4) # spawn four worker processes
pool.init_workers(MessagePrinter().work_loop)
pool.write(
0, # preferred worker 0
'Hello, World!'
)
'''
debug_meta = ''
def __init__(self, min_workers=None, queue_size=None):
self.name = settings.CLUSTER_HOST_ID
self.pid = os.getpid()
self.min_workers = min_workers or settings.JOB_EVENT_WORKERS
self.queue_size = queue_size or settings.JOB_EVENT_MAX_QUEUE_SIZE
self.workers = []
def __len__(self):
return len(self.workers)
def init_workers(self, target, *target_args):
self.target = target
self.target_args = target_args
for idx in range(self.min_workers):
self.up()
def up(self):
idx = len(self.workers)
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and memcached connections (that way lies race conditions)
django_connection.close()
django_cache.close()
worker = PoolWorker(self.queue_size, self.target, (idx,) + self.target_args)
self.workers.append(worker)
try:
worker.start()
except Exception:
logger.exception('could not fork')
else:
logger.warn('scaling up worker pid:{}'.format(worker.pid))
return idx, worker
def debug(self, *args, **kwargs):
self.cleanup()
tmpl = Template(
'{{ pool.name }}[pid:{{ pool.pid }}] workers total={{ workers|length }} {{ meta }} \n'
'{% for w in workers %}'
'. worker[pid:{{ w.pid }}]{% if not w.alive %} GONE exit={{ w.exitcode }}{% endif %}'
' sent={{ w.messages_sent }}'
' finished={{ w.messages_finished }}'
' qsize={{ w.managed_tasks|length }}'
' rss={{ w.mb }}MB'
'{% for task in w.managed_tasks.values() %}'
'\n - {% if loop.index0 == 0 %}running {% else %}queued {% endif %}'
'{{ task["uuid"] }} '
'{% if "task" in task %}'
'{{ task["task"].rsplit(".", 1)[-1] }}'
# don't print kwargs, they often contain launch-time secrets
'(*{{ task.get("args", []) }})'
'{% endif %}'
'{% endfor %}'
'{% if not w.managed_tasks|length %}'
' [IDLE]'
'{% endif %}'
'\n'
'{% endfor %}'
)
return tmpl.render(pool=self, workers=self.workers, meta=self.debug_meta)
def write(self, preferred_queue, body):
queue_order = sorted(range(len(self.workers)), cmp=lambda x, y: -1 if x==preferred_queue else 0)
write_attempt_order = []
for queue_actual in queue_order:
try:
self.workers[queue_actual].put(body)
return queue_actual
except QueueFull:
pass
except Exception:
tb = traceback.format_exc()
logger.warn("could not write to queue %s" % preferred_queue)
logger.warn("detail: {}".format(tb))
write_attempt_order.append(preferred_queue)
logger.warn("could not write payload to any queue, attempted order: {}".format(write_attempt_order))
return None
def stop(self, signum):
try:
for worker in self.workers:
os.kill(worker.pid, signum)
except Exception:
logger.exception('could not kill {}'.format(worker.pid))
class AutoscalePool(WorkerPool):
'''
An extended pool implementation that automatically scales workers up and
down based on demand
'''
def __init__(self, *args, **kwargs):
self.max_workers = kwargs.pop('max_workers', None)
super(AutoscalePool, self).__init__(*args, **kwargs)
if self.max_workers is None:
settings_absmem = getattr(settings, 'SYSTEM_TASK_ABS_MEM', None)
if settings_absmem is not None:
total_memory_gb = int(settings_absmem)
else:
total_memory_gb = (psutil.virtual_memory().total >> 30) + 1 # noqa: round up
# 5 workers per GB of total memory
self.max_workers = (total_memory_gb * 5)
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
# If we don't have at least min_workers, add more
return True
# If every worker is busy doing something, add more
return all([w.busy for w in self.workers])
@property
def full(self):
return len(self.workers) == self.max_workers
@property
def debug_meta(self):
return 'min={} max={}'.format(self.min_workers, self.max_workers)
def cleanup(self):
"""
Perform some internal account and cleanup. This is run on
every cluster node heartbeat:
1. Discover worker processes that exited, and recover messages they
were handling.
2. Clean up unnecessary, idle workers.
3. Check to see if the database says this node is running any tasks
that aren't actually running. If so, reap them.
"""
orphaned = []
for w in self.workers[::]:
if not w.alive:
# the worker process has exited
# 1. take the task it was running and enqueue the error
# callbacks
# 2. take any pending tasks delivered to its queue and
# send them to another worker
logger.error('worker pid:{} is gone (exit={})'.format(w.pid, w.exitcode))
if w.current_task:
if w.current_task != 'QUIT':
try:
for j in UnifiedJob.objects.filter(celery_task_id=w.current_task['uuid']):
reaper.reap_job(j, 'failed')
except Exception:
logger.exception('failed to reap job UUID {}'.format(w.current_task['uuid']))
orphaned.extend(w.orphaned_tasks)
self.workers.remove(w)
elif w.idle and len(self.workers) > self.min_workers:
# the process has an empty queue (it's idle) and we have
# more processes in the pool than we need (> min)
# send this process a message so it will exit gracefully
# at the next opportunity
logger.warn('scaling down worker pid:{}'.format(w.pid))
w.quit()
self.workers.remove(w)
for m in orphaned:
# if all the workers are dead, spawn at least one
if not len(self.workers):
self.up()
idx = random.choice(range(len(self.workers)))
self.write(idx, m)
# if the database says a job is running on this node, but it's *not*,
# then reap it
running_uuids = []
for worker in self.workers:
worker.calculate_managed_tasks()
running_uuids.extend(worker.managed_tasks.keys())
try:
reaper.reap(excluded_uuids=running_uuids)
except Exception:
# we _probably_ failed here due to DB connectivity issues, so
# don't use our logger (it accesses the database for configuration)
_, _, tb = sys.exc_info()
traceback.print_tb(tb)
def up(self):
if self.full:
# if we can't spawn more workers, just toss this message into a
# random worker's backlog
idx = random.choice(range(len(self.workers)))
return idx, self.workers[idx]
else:
return super(AutoscalePool, self).up()
def write(self, preferred_queue, body):
try:
# when the cluster heartbeat occurs, clean up internally
if isinstance(body, dict) and 'cluster_node_heartbeat' in body['task']:
self.cleanup()
if self.should_grow:
self.up()
# we don't care about "preferred queue" round robin distribution, just
# find the first non-busy worker and claim it
workers = self.workers[:]
random.shuffle(workers)
for w in workers:
if not w.busy:
w.put(body)
break
else:
return super(AutoscalePool, self).write(preferred_queue, body)
except Exception:
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
logger.exception('failed to write inbound message')
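Back-of-the-envelope numbers for the AutoscalePool sizing rule above, assuming SYSTEM_TASK_ABS_MEM is unset:

# On an 8 GB host, psutil.virtual_memory().total is 8589934592 bytes;
# (8589934592 >> 30) + 1 == 9, so max_workers == 9 * 5 == 45, and
# max(self.min_workers, self.max_workers) keeps the cap from dropping
# below min_workers on small machines.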


@@ -0,0 +1,128 @@
import inspect
import logging
import sys
from uuid import uuid4
from django.conf import settings
from kombu import Connection, Exchange, Producer
logger = logging.getLogger('awx.main.dispatch')
def serialize_task(f):
return '.'.join([f.__module__, f.__name__])
class task:
"""
Used to decorate a function or class so that it can be run asynchronously
via the task dispatcher. Tasks can be simple functions:
@task()
def add(a, b):
return a + b
...or classes that define a `run` method:
@task()
class Adder:
def run(self, a, b):
return a + b
# Tasks can be run synchronously...
assert add(1, 1) == 2
assert Adder().run(1, 1) == 2
# ...or published to a queue:
add.apply_async([1, 1])
Adder.apply_async([1, 1])
# Tasks can also define a specific target queue or exchange type:
@task(queue='slow-tasks')
def snooze():
time.sleep(10)
@task(queue='tower_broadcast', exchange_type='fanout')
def announce():
print "Run this everywhere!"
"""
def __init__(self, queue=None, exchange_type=None):
self.queue = queue
self.exchange_type = exchange_type
def __call__(self, fn=None):
queue = self.queue
exchange_type = self.exchange_type
class PublisherMixin(object):
queue = None
@classmethod
def delay(cls, *args, **kwargs):
return cls.apply_async(args, kwargs)
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
task_id = uuid or str(uuid4())
args = args or []
kwargs = kwargs or {}
queue = (
queue or
getattr(cls.queue, 'im_func', cls.queue) or
settings.CELERY_DEFAULT_QUEUE
)
obj = {
'uuid': task_id,
'args': args,
'kwargs': kwargs,
'task': cls.name
}
obj.update(**kw)
if callable(queue):
queue = queue()
if not settings.IS_TESTING(sys.argv):
with Connection(settings.BROKER_URL) as conn:
exchange = Exchange(queue, type=exchange_type or 'direct')
producer = Producer(conn)
logger.debug('publish {}({}, queue={})'.format(
cls.name,
task_id,
queue
))
producer.publish(obj,
serializer='json',
compression='bzip2',
exchange=exchange,
declare=[exchange],
delivery_mode="persistent",
routing_key=queue)
return (obj, queue)
# If the object we're wrapping *is* a class (e.g., RunJob), return
# a *new* class that inherits from the wrapped class *and* BaseTask
# In this way, the new class returned by our decorator is the class
# being decorated *plus* PublisherMixin so cls.apply_async() and
# cls.delay() work
bases = []
ns = {'name': serialize_task(fn), 'queue': queue}
if inspect.isclass(fn):
bases = list(fn.__bases__)
ns.update(fn.__dict__)
cls = type(
fn.__name__,
tuple(bases + [PublisherMixin]),
ns
)
if inspect.isclass(fn):
return cls
# if the object being decorated is *not* a class (it's a Python
# function), make fn.apply_async and fn.delay proxy through to the
# PublisherMixin we dynamically created above
setattr(fn, 'name', cls.name)
setattr(fn, 'apply_async', cls.apply_async)
setattr(fn, 'delay', cls.delay)
return fn
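A hedged sketch of what apply_async publishes, for a hypothetical module-level function mymodule.add decorated with @task():

# add.apply_async([1, 1]) publishes (JSON-serialized, bzip2-compressed,
# persistent delivery) a payload shaped like:
# {
#     'uuid': 'c0ffee00-...',   # generated when not supplied
#     'args': [1, 1],
#     'kwargs': {},
#     'task': 'mymodule.add'    # from serialize_task()
# }
# routed to settings.CELERY_DEFAULT_QUEUE on a direct exchange.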


@@ -0,0 +1,49 @@
from datetime import timedelta
import logging
from django.db.models import Q
from django.utils.timezone import now as tz_now
from django.contrib.contenttypes.models import ContentType
from awx.main.models import Instance, UnifiedJob, WorkflowJob
logger = logging.getLogger('awx.main.dispatch')
def reap_job(j, status):
if UnifiedJob.objects.get(id=j.id).status not in ('running', 'waiting'):
# just in case, don't reap jobs that aren't running
return
j.status = status
j.start_args = '' # blank field to remove encrypted passwords
j.job_explanation += ' '.join((
'Task was marked as running in Tower but was not present in',
'the job queue, so it has been marked as failed.',
))
j.save(update_fields=['status', 'start_args', 'job_explanation'])
if hasattr(j, 'send_notification_templates'):
j.send_notification_templates('failed')
j.websocket_emit_status(status)
logger.error(
'{} is no longer running; reaping'.format(j.log_format)
)
def reap(instance=None, status='failed', excluded_uuids=[]):
'''
Reap all jobs in waiting|running for this instance.
'''
me = instance or Instance.objects.me()
now = tz_now()
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
jobs = UnifiedJob.objects.filter(
(
Q(status='running') |
Q(status='waiting', modified__lte=now - timedelta(seconds=60))
) & (
Q(execution_node=me.hostname) |
Q(controller_node=me.hostname)
) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
).exclude(celery_task_id__in=excluded_uuids)
for j in jobs:
reap_job(j, status)
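How this ties into the dispatcher pool above (a hedged reading of the code, not new behavior):

# AutoscalePool.cleanup() gathers the UUIDs its workers actually manage
# and calls reap(excluded_uuids=running_uuids); any row still 'running',
# or 'waiting' for over 60 seconds, on this node is then marked 'failed'
# by reap_job(), with workflow jobs excluded via polymorphic_ctype_id.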


@@ -0,0 +1,3 @@
from .base import AWXConsumer, BaseWorker # noqa
from .callback import CallbackBrokerWorker # noqa
from .task import TaskWorker # noqa


@@ -0,0 +1,151 @@
# Copyright (c) 2018 Ansible by Red Hat
# All Rights Reserved.
import os
import logging
import signal
from uuid import UUID
from Queue import Empty as QueueEmpty
from django import db
from kombu import Producer
from kombu.mixins import ConsumerMixin
from awx.main.dispatch.pool import WorkerPool
logger = logging.getLogger('awx.main.dispatch')
def signame(sig):
return dict(
(k, v) for v, k in signal.__dict__.items()
if v.startswith('SIG') and not v.startswith('SIG_')
)[sig]
class WorkerSignalHandler:
def __init__(self):
self.kill_now = False
signal.signal(signal.SIGINT, self.exit_gracefully)
def exit_gracefully(self, *args, **kwargs):
self.kill_now = True
class AWXConsumer(ConsumerMixin):
def __init__(self, name, connection, worker, queues=[], pool=None):
self.connection = connection
self.total_messages = 0
self.queues = queues
self.worker = worker
self.pool = pool
if pool is None:
self.pool = WorkerPool()
self.pool.init_workers(self.worker.work_loop)
def get_consumers(self, Consumer, channel):
logger.debug(self.listening_on)
return [Consumer(queues=self.queues, accept=['json'],
callbacks=[self.process_task])]
@property
def listening_on(self):
return 'listening on {}'.format([
'{} [{}]'.format(q.name, q.exchange.type) for q in self.queues
])
def control(self, body, message):
logger.warn(body)
control = body.get('control')
if control in ('status', 'running'):
producer = Producer(
channel=self.connection,
routing_key=message.properties['reply_to']
)
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
elif control == 'running':
msg = []
for worker in self.pool.workers:
worker.calculate_managed_tasks()
msg.extend(worker.managed_tasks.keys())
producer.publish(msg)
elif control == 'reload':
for worker in self.pool.workers:
worker.quit()
else:
logger.error('unrecognized control message: {}'.format(control))
message.ack()
def process_task(self, body, message):
if 'control' in body:
return self.control(body, message)
if len(self.pool):
if "uuid" in body and body['uuid']:
try:
queue = UUID(body['uuid']).int % len(self.pool)
except Exception:
queue = self.total_messages % len(self.pool)
else:
queue = self.total_messages % len(self.pool)
else:
queue = 0
self.pool.write(queue, body)
self.total_messages += 1
message.ack()
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
signal.signal(signal.SIGTERM, self.stop)
self.worker.on_start()
super(AWXConsumer, self).run(*args, **kwargs)
def stop(self, signum, frame):
self.should_stop = True # this makes the kombu mixin stop consuming
logger.debug('received {}, stopping'.format(signame(signum)))
self.worker.on_stop()
raise SystemExit()
class BaseWorker(object):
def work_loop(self, queue, finished, idx, *args):
ppid = os.getppid()
signal_handler = WorkerSignalHandler()
while not signal_handler.kill_now:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
if os.getppid() != ppid:
break
try:
body = queue.get(block=True, timeout=1)
if body == 'QUIT':
break
except QueueEmpty:
continue
except Exception as e:
logger.error("Exception on worker {}, restarting: ".format(idx) + str(e))
continue
try:
for conn in db.connections.all():
# If the database connection has a hiccup during the prior message, close it
# so we can establish a new connection
conn.close_if_unusable_or_obsolete()
self.perform_work(body, *args)
finally:
if 'uuid' in body:
uuid = body['uuid']
logger.debug('task {} is finished'.format(uuid))
finished.put(uuid)
logger.warn('worker exiting gracefully pid:{}'.format(os.getpid()))
def perform_work(self, body):
raise NotImplementedError()
def on_start(self):
pass
def on_stop(self):
pass
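A runnable sketch of the routing rule in AWXConsumer.process_task above:

from uuid import UUID

pool_size = 4  # stand-in for len(self.pool)
body = {'uuid': '3e5d5828-72f8-4b32-9f8c-3b7484f2d6d5', 'task': 'x'}
queue = UUID(body['uuid']).int % pool_size  # same uuid -> same worker
# messages without a usable uuid fall back to round-robin:
# queue = total_messages % pool_size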


@@ -0,0 +1,127 @@
import logging
import time
import traceback
from django.conf import settings
from django.db import DatabaseError, OperationalError, connection as django_connection
from django.db.utils import InterfaceError, InternalError
from awx.main.consumers import emit_channel_notification
from awx.main.models import (JobEvent, AdHocCommandEvent, ProjectUpdateEvent,
InventoryUpdateEvent, SystemJobEvent, UnifiedJob)
from .base import BaseWorker
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
class CallbackBrokerWorker(BaseWorker):
'''
A worker implementation that deserializes callback event data and persists
it into the database.
The code that *builds* these types of messages is found in the AWX display
callback (`awx.lib.awx_display_callback`).
'''
MAX_RETRIES = 2
def perform_work(self, body):
try:
event_map = {
'job_id': JobEvent,
'ad_hoc_command_id': AdHocCommandEvent,
'project_update_id': ProjectUpdateEvent,
'inventory_update_id': InventoryUpdateEvent,
'system_job_id': SystemJobEvent,
}
if not any([key in body for key in event_map]):
raise Exception('Payload does not have a job identifier')
def _save_event_data():
for key, cls in event_map.items():
if key in body:
cls.create_from_data(**body)
job_identifier = 'unknown job'
job_key = 'unknown'
for key in event_map.keys():
if key in body:
job_identifier = body[key]
job_key = key
break
if settings.DEBUG:
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import Terminal256Formatter
from pprint import pformat
if body.get('event') == 'EOF':
event_thing = 'EOF event'
else:
event_thing = 'event {}'.format(body.get('counter', 'unknown'))
logger.info('Callback worker received {} for {} {}'.format(
event_thing, job_key[:-len('_id')], job_identifier
))
logger.debug('Body: {}'.format(
highlight(pformat(body, width=160), PythonLexer(), Terminal256Formatter(style='friendly'))
)[:1024 * 4])
if body.get('event') == 'EOF':
try:
final_counter = body.get('final_counter', 0)
logger.info('Event processing is finished for Job {}, sending notifications'.format(job_identifier))
# EOF events are sent when stdout for the running task is
# closed. don't actually persist them to the database; we
# just use them to report `summary` websocket events as an
# approximation for when a job is "done"
emit_channel_notification(
'jobs-summary',
dict(group_name='jobs', unified_job_id=job_identifier, final_counter=final_counter)
)
# Additionally, when we've processed all events, we should
# have all the data we need to send out success/failure
# notification templates
uj = UnifiedJob.objects.get(pk=job_identifier)
if hasattr(uj, 'send_notification_templates'):
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
break
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_identifier)
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
return
retries = 0
while retries <= self.MAX_RETRIES:
try:
_save_event_data()
break
except (OperationalError, InterfaceError, InternalError):
if retries >= self.MAX_RETRIES:
logger.exception('Worker could not re-establish database connectivity, giving up on event for Job {}'.format(job_identifier))
return
delay = 60 * retries
logger.exception('Database Error Saving Job Event, retry #{i} in {delay} seconds:'.format(
i=retries + 1,
delay=delay
))
django_connection.close()
time.sleep(delay)
retries += 1
except DatabaseError:
logger.exception('Database Error Saving Job Event for Job {}'.format(job_identifier))
break
except Exception as exc:
tb = traceback.format_exc()
logger.error('Callback Task Processor Raised Exception: %r', exc)
logger.error('Detail: {}'.format(tb))
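Retry arithmetic for the loop above (MAX_RETRIES = 2, so retries takes the values 0, 1, 2):

# delay = 60 * retries: the first failure retries immediately (0s), the
# second retries after 60s, and a third consecutive failure satisfies
# retries >= MAX_RETRIES, so the event is dropped with an exception log.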


@@ -0,0 +1,113 @@
import inspect
import logging
import importlib
import sys
import traceback
import six
from awx.main.tasks import dispatch_startup, inform_cluster_of_shutdown
from .base import BaseWorker
logger = logging.getLogger('awx.main.dispatch')
class TaskWorker(BaseWorker):
'''
A worker implementation that deserializes task messages and runs native
Python code.
The code that *builds* these types of messages is found in
`awx.main.dispatch.publish`.
'''
@classmethod
def resolve_callable(cls, task):
'''
Transform a dotted notation task into an imported, callable function, e.g.,
awx.main.tasks.delete_inventory
awx.main.tasks.RunProjectUpdate
'''
module, target = task.rsplit('.', 1)
module = importlib.import_module(module)
_call = None
if hasattr(module, target):
_call = getattr(module, target, None)
return _call
def run_callable(self, body):
'''
Given some AMQP message, import the correct Python code and run it.
'''
task = body['task']
uuid = body.get('uuid', '<unknown>')
args = body.get('args', [])
kwargs = body.get('kwargs', {})
_call = TaskWorker.resolve_callable(task)
if inspect.isclass(_call):
# the callable is a class, e.g., RunJob; instantiate and
# return its `run()` method
_call = _call().run
# don't print kwargs, they often contain launch-time secrets
logger.debug('task {} starting {}(*{})'.format(uuid, task, args))
return _call(*args, **kwargs)
def perform_work(self, body):
'''
Import and run code for a task e.g.,
body = {
'args': [8],
'callbacks': [{
'args': [],
'kwargs': {},
'task': u'awx.main.tasks.handle_work_success'
}],
'errbacks': [{
'args': [],
'kwargs': {},
'task': 'awx.main.tasks.handle_work_error'
}],
'kwargs': {},
'task': u'awx.main.tasks.RunProjectUpdate'
}
'''
result = None
try:
result = self.run_callable(body)
except Exception as exc:
try:
if getattr(exc, 'is_awx_task_error', False):
# Error caused by user / tracked in job output
logger.warning(six.text_type("{}").format(exc))
else:
task = body['task']
args = body.get('args', [])
kwargs = body.get('kwargs', {})
logger.exception('Worker failed to run task {}(*{}, **{})'.format(
task, args, kwargs
))
except Exception:
# It's fairly critical that this code _not_ raise exceptions on logging
# If you configure external logging in a way that _it_ fails, there's
# not a lot we can do here; sys.stderr.write is a final hail mary
_, _, tb = sys.exc_info()
traceback.print_tb(tb)
for callback in body.get('errbacks', []) or []:
callback['uuid'] = body['uuid']
self.perform_work(callback)
for callback in body.get('callbacks', []) or []:
callback['uuid'] = body['uuid']
self.perform_work(callback)
return result
def on_start(self):
dispatch_startup()
def on_stop(self):
inform_cluster_of_shutdown()
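A short sketch of how resolve_callable above turns a dotted task name into a callable:

# TaskWorker.resolve_callable('awx.main.tasks.delete_inventory')
#   -> importlib.import_module('awx.main.tasks'), then getattr() of
#      'delete_inventory' from the imported module.
# For a class such as awx.main.tasks.RunProjectUpdate, run_callable()
# instantiates it and calls the bound run() method instead.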


@@ -4,11 +4,6 @@
import six
# Celery does not respect exception type when using a serializer different than pickle;
# and awx uses the json serializer
# https://github.com/celery/celery/issues/3586
class _AwxTaskError():
def build_exception(self, task, message=None):
if message is None:
@@ -36,5 +31,3 @@ class _AwxTaskError():
AwxTaskError = _AwxTaskError()

awx/main/expect/.gitignore vendored Normal file

@@ -0,0 +1 @@
authorized_keys


@@ -38,7 +38,7 @@ class IsolatedManager(object):
:param stdout_handle: a file-like object for capturing stdout
:param ssh_key_path: a filepath where SSH key data can be read
:param expect_passwords: a dict of regular expression password prompts
to input values, i.e., {'Password:\s*?$':
to input values, i.e., {r'Password:\s*?$':
'some_password'}
:param cancelled_callback: a callable - which returns `True` or `False`
- signifying if the job has been prematurely


@@ -71,7 +71,7 @@ def run_pexpect(args, cwd, env, logfile,
- signifying if the job has been prematurely
cancelled
:param expect_passwords: a dict of regular expression password prompts
to input values, i.e., {'Password:\s*?$':
to input values, i.e., {r'Password:\s*?$':
'some_password'}
:param extra_update_fields: a dict used to specify DB fields which should
be updated on the underlying model


@@ -573,10 +573,10 @@ class CredentialInputField(JSONSchemaField):
# string)
match = re.search(
# 'foo' is a dependency of 'bar'
"'" # apostrophe
"([^']+)" # one or more non-apostrophes (first group)
"'[\w ]+'" # one or more words/spaces
"([^']+)", # second group
r"'" # apostrophe
r"([^']+)" # one or more non-apostrophes (first group)
r"'[\w ]+'" # one or more words/spaces
r"([^']+)", # second group
error.message,
)
if match:
@@ -755,7 +755,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
'file': {
'type': 'object',
'patternProperties': {
'^template(\.[a-zA-Z_]+[a-zA-Z0-9_]*)?$': {'type': 'string'},
r'^template(\.[a-zA-Z_]+[a-zA-Z0-9_]*)?$': {'type': 'string'},
},
'additionalProperties': False,
},


@@ -3,6 +3,7 @@
# Python
import re
import sys
from dateutil.relativedelta import relativedelta
# Django
@@ -129,6 +130,7 @@ class Command(BaseCommand):
@transaction.atomic
def handle(self, *args, **options):
sys.stderr.write("This command has been deprecated and will be removed in a future release.\n")
if not feature_enabled('system_tracking'):
raise CommandError("The System Tracking feature is not enabled for your instance")
cleanup_facts = CleanupFacts()


@@ -2,7 +2,6 @@
# All Rights Reserved
import subprocess
import warnings
from django.db import transaction
from django.core.management.base import BaseCommand, CommandError
@@ -24,31 +23,28 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str,
help='Hostname used during provisioning')
parser.add_argument('--name', dest='name', type=str,
help='(PENDING DEPRECATION) Hostname used during provisioning')
@transaction.atomic
def handle(self, *args, **options):
# TODO: remove in 3.3
if options.get('name'):
warnings.warn("`--name` is deprecated in favor of `--hostname`, and will be removed in release 3.3.")
if options.get('hostname'):
raise CommandError("Cannot accept both --name and --hostname.")
options['hostname'] = options['name']
hostname = options.get('hostname')
if not hostname:
raise CommandError("--hostname is a required argument")
with advisory_lock('instance_registration_%s' % hostname):
instance = Instance.objects.filter(hostname=hostname)
if instance.exists():
isolated = instance.first().is_isolated()
instance.delete()
print("Instance Removed")
result = subprocess.Popen("rabbitmqctl forget_cluster_node rabbitmq@{}".format(hostname), shell=True).wait()
if result != 0:
print("Node deprovisioning may have failed when attempting to "
"remove the RabbitMQ instance {} from the cluster".format(hostname))
else:
if isolated:
print('Successfully deprovisioned {}'.format(hostname))
else:
result = subprocess.Popen("rabbitmqctl forget_cluster_node rabbitmq@{}".format(hostname), shell=True).wait()
if result != 0:
print("Node deprovisioning may have failed when attempting to "
"remove the RabbitMQ instance {} from the cluster".format(hostname))
else:
print('Successfully deprovisioned {}'.format(hostname))
print('(changed: True)')
else:
print('No instance found matching name {}'.format(hostname))


@@ -1,17 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
# Borrow from another AWX command
from awx.main.management.commands.deprovision_instance import Command as OtherCommand
# Python
import warnings
class Command(OtherCommand):
def handle(self, *args, **options):
# TODO: delete this entire file in 3.3
warnings.warn('This command is replaced with `deprovision_instance` and will '
'be removed in release 3.3.')
return super(Command, self).handle(*args, **options)


@@ -938,7 +938,7 @@ class Command(BaseCommand):
self.exclude_empty_groups = bool(options.get('exclude_empty_groups', False))
self.instance_id_var = options.get('instance_id_var', None)
self.celery_invoked = False if os.getenv('INVENTORY_SOURCE_ID', None) is None else True
self.invoked_from_dispatcher = False if os.getenv('INVENTORY_SOURCE_ID', None) is None else True
# Load inventory and related objects from database.
if self.inventory_name and self.inventory_id:
@@ -1062,7 +1062,7 @@ class Command(BaseCommand):
exc = e
transaction.rollback()
if self.celery_invoked is False:
if self.invoked_from_dispatcher is False:
with ignore_inventory_computed_fields():
self.inventory_update = InventoryUpdate.objects.get(pk=self.inventory_update.pk)
self.inventory_update.result_traceback = tb

View File

@@ -1,17 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
# Borrow from another AWX command
from awx.main.management.commands.provision_instance import Command as OtherCommand
# Python
import warnings
class Command(OtherCommand):
def handle(self, *args, **options):
# TODO: delete this entire file in 3.3
warnings.warn('This command is replaced with `provision_instance` and will '
'be removed in release 3.3.')
return super(Command, self).handle(*args, **options)

View File

@@ -4,6 +4,7 @@
import sys
import time
import json
import random
from django.utils import timezone
from django.core.management.base import BaseCommand
@@ -26,7 +27,21 @@ from awx.api.serializers import (
)
class ReplayJobEvents():
class JobStatusLifeCycle():
def emit_job_status(self, job, status):
# {"status": "successful", "project_id": 13, "unified_job_id": 659, "group_name": "jobs"}
job.websocket_emit_status(status)
def determine_job_event_finish_status_index(self, job_event_count, random_seed):
if random_seed == 0:
return job_event_count - 1
random.seed(random_seed)
job_event_index = random.randint(0, job_event_count - 1)
return job_event_index
class ReplayJobEvents(JobStatusLifeCycle):
recording_start = None
replay_start = None
@@ -76,9 +91,10 @@ class ReplayJobEvents():
job_events = job.inventory_update_events.order_by('created')
elif type(job) is SystemJob:
job_events = job.system_job_events.order_by('created')
if job_events.count() == 0:
count = job_events.count()
if count == 0:
raise RuntimeError("No events for job id {}".format(job.id))
return job_events
return job_events, count
def get_serializer(self, job):
if type(job) is Job:
@@ -95,7 +111,7 @@ class ReplayJobEvents():
raise RuntimeError("Job is of type {} and replay is not yet supported.".format(type(job)))
sys.exit(1)
def run(self, job_id, speed=1.0, verbosity=0, skip_range=[]):
def run(self, job_id, speed=1.0, verbosity=0, skip_range=[], random_seed=0, final_status_delay=0, debug=False):
stats = {
'events_ontime': {
'total': 0,
@@ -119,17 +135,27 @@ class ReplayJobEvents():
}
try:
job = self.get_job(job_id)
job_events = self.get_job_events(job)
job_events, job_event_count = self.get_job_events(job)
serializer = self.get_serializer(job)
except RuntimeError as e:
print("{}".format(e.message))
sys.exit(1)
je_previous = None
self.emit_job_status(job, 'pending')
self.emit_job_status(job, 'waiting')
self.emit_job_status(job, 'running')
finish_status_index = self.determine_job_event_finish_status_index(job_event_count, random_seed)
for n, je_current in enumerate(job_events):
if je_current.counter in skip_range:
continue
if debug:
raw_input("{} of {}:".format(n, job_event_count))
if not je_previous:
stats['recording_start'] = je_current.created
self.start(je_current.created)
@@ -146,7 +172,7 @@ class ReplayJobEvents():
print("recording: next job in {} seconds".format(recording_diff))
if replay_offset >= 0:
replay_diff = recording_diff - replay_offset
if replay_diff > 0:
stats['events_ontime']['total'] += 1
if verbosity >= 3:
@@ -167,6 +193,11 @@ class ReplayJobEvents():
stats['events_total'] += 1
je_previous = je_current
if n == finish_status_index:
if final_status_delay != 0:
self.sleep(final_status_delay)
self.emit_job_status(job, job.status)
if stats['events_total'] > 2:
stats['replay_end'] = self.now()
stats['replay_duration'] = (stats['replay_end'] - stats['replay_start']).total_seconds()
@@ -206,16 +237,26 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--job_id', dest='job_id', type=int, metavar='j',
help='Id of the job to replay (job or adhoc)')
parser.add_argument('--speed', dest='speed', type=int, metavar='s',
parser.add_argument('--speed', dest='speed', type=float, metavar='s',
help='Speedup factor.')
parser.add_argument('--skip-range', dest='skip_range', type=str, metavar='k',
default='0:-1:1', help='Range of events to skip')
parser.add_argument('--random-seed', dest='random_seed', type=int, metavar='r',
default=0, help='Random number generator seed to use when determining job_event index to emit final job status')
parser.add_argument('--final-status-delay', dest='final_status_delay', type=float, metavar='f',
default=0, help='Delay between event and final status emit')
parser.add_argument('--debug', dest='debug', type=bool, metavar='d',
default=False, help='Enable step mode to control emission of job events one at a time.')
def handle(self, *args, **options):
job_id = options.get('job_id')
speed = options.get('speed') or 1
verbosity = options.get('verbosity') or 0
random_seed = options.get('random_seed')
final_status_delay = options.get('final_status_delay')
debug = options.get('debug')
skip = self._parse_slice_range(options.get('skip_range'))
replayer = ReplayJobEvents()
replayer.run(job_id, speed, verbosity, skip)
replayer.run(job_id, speed=speed, verbosity=verbosity, skip_range=skip, random_seed=random_seed,
final_status_delay=final_status_delay, debug=debug)
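A standalone restatement of the seeded finish-index selection above, showing that a non-zero --random-seed deterministically picks which event carries the final job status (the counts here are illustrative):

    import random

    def finish_index(job_event_count, random_seed):
        # seed 0 means "emit the final status at the last event"
        if random_seed == 0:
            return job_event_count - 1
        random.seed(random_seed)
        return random.randint(0, job_event_count - 1)

    # the same seed always selects the same event index
    assert finish_index(100, 42) == finish_index(100, 42)
    assert finish_index(100, 0) == 99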

View File

@@ -0,0 +1,37 @@
# Django
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
# AWX
from awx.main.models.oauth import OAuth2AccessToken
from oauth2_provider.models import RefreshToken
def revoke_tokens(token_list):
for token in token_list:
token.revoke()
print('revoked {} {}'.format(token.__class__.__name__, token.token))
class Command(BaseCommand):
"""Command that revokes OAuth2 access tokens."""
help='Revokes OAuth2 access tokens. Use --all to revoke access and refresh tokens.'
def add_arguments(self, parser):
parser.add_argument('--user', dest='user', type=str, help='revoke OAuth2 tokens for a specific username')
parser.add_argument('--all', dest='all', action='store_true', help='revoke OAuth2 access tokens and refresh tokens')
def handle(self, *args, **options):
if not options['user']:
if options['all']:
revoke_tokens(RefreshToken.objects.filter(revoked=None))
revoke_tokens(OAuth2AccessToken.objects.all())
else:
try:
user = User.objects.get(username=options['user'])
except ObjectDoesNotExist:
raise CommandError('A user with that username does not exist.')
if options['all']:
revoke_tokens(RefreshToken.objects.filter(revoked=None).filter(user=user))
revoke_tokens(user.main_oauth2accesstoken.filter(user=user))
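A usage sketch (the username is illustrative; a configured Django environment is assumed):

    from django.core.management import call_command

    # Revoke access tokens for one user:
    call_command('revoke_oauth2_tokens', user='alice')
    # Revoke access *and* refresh tokens for everyone:
    call_command('revoke_oauth2_tokens', all=True)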

View File

@@ -1,231 +1,11 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
import logging
import os
import signal
import time
from uuid import UUID
from multiprocessing import Process
from multiprocessing import Queue as MPQueue
from Queue import Empty as QueueEmpty
from Queue import Full as QueueFull
from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin
# Django
from django.conf import settings
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from django.db import DatabaseError, OperationalError
from django.db.utils import InterfaceError, InternalError
from django.core.cache import cache as django_cache
from kombu import Connection, Exchange, Queue
# AWX
from awx.main.models import * # noqa
from awx.main.consumers import emit_channel_notification
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
class WorkerSignalHandler:
def __init__(self):
self.kill_now = False
signal.signal(signal.SIGINT, self.exit_gracefully)
signal.signal(signal.SIGTERM, self.exit_gracefully)
def exit_gracefully(self, *args, **kwargs):
self.kill_now = True
class CallbackBrokerWorker(ConsumerMixin):
MAX_RETRIES = 2
def __init__(self, connection, use_workers=True):
self.connection = connection
self.worker_queues = []
self.total_messages = 0
self.init_workers(use_workers)
def init_workers(self, use_workers=True):
def shutdown_handler(active_workers):
def _handler(signum, frame):
try:
for active_worker in active_workers:
active_worker.terminate()
signal.signal(signum, signal.SIG_DFL)
os.kill(os.getpid(), signum) # Rethrow signal, this time without catching it
except Exception:
logger.exception('Error in shutdown_handler')
return _handler
if use_workers:
for idx in range(settings.JOB_EVENT_WORKERS):
queue_actual = MPQueue(settings.JOB_EVENT_MAX_QUEUE_SIZE)
w = Process(target=self.callback_worker, args=(queue_actual, idx,))
if settings.DEBUG:
logger.info('Starting worker %s' % str(idx))
self.worker_queues.append([0, queue_actual, w])
# It's important to close these _right before_ we fork; we
# don't want the forked processes to inherit the open sockets
# for the DB and memcached connections (that way lies race
# conditions)
django_connection.close()
django_cache.close()
for _, _, w in self.worker_queues:
w.start()
elif settings.DEBUG:
logger.warn('Started callback receiver (no workers)')
signal.signal(signal.SIGINT, shutdown_handler([p[2] for p in self.worker_queues]))
signal.signal(signal.SIGTERM, shutdown_handler([p[2] for p in self.worker_queues]))
def get_consumers(self, Consumer, channel):
return [Consumer(queues=[Queue(settings.CALLBACK_QUEUE,
Exchange(settings.CALLBACK_QUEUE, type='direct'),
routing_key=settings.CALLBACK_QUEUE)],
accept=['json'],
callbacks=[self.process_task])]
def process_task(self, body, message):
if "uuid" in body and body['uuid']:
try:
queue = UUID(body['uuid']).int % settings.JOB_EVENT_WORKERS
except Exception:
queue = self.total_messages % settings.JOB_EVENT_WORKERS
else:
queue = self.total_messages % settings.JOB_EVENT_WORKERS
self.write_queue_worker(queue, body)
self.total_messages += 1
message.ack()
def write_queue_worker(self, preferred_queue, body):
queue_order = sorted(range(settings.JOB_EVENT_WORKERS), cmp=lambda x, y: -1 if x==preferred_queue else 0)
write_attempt_order = []
for queue_actual in queue_order:
try:
worker_actual = self.worker_queues[queue_actual]
worker_actual[1].put(body, block=True, timeout=5)
worker_actual[0] += 1
return queue_actual
except QueueFull:
pass
except Exception:
import traceback
tb = traceback.format_exc()
logger.warn("Could not write to queue %s" % preferred_queue)
logger.warn("Detail: {}".format(tb))
write_attempt_order.append(preferred_queue)
logger.warn("Could not write payload to any queue, attempted order: {}".format(write_attempt_order))
return None
def callback_worker(self, queue_actual, idx):
signal_handler = WorkerSignalHandler()
while not signal_handler.kill_now:
try:
body = queue_actual.get(block=True, timeout=1)
except QueueEmpty:
continue
except Exception as e:
logger.error("Exception on worker thread, restarting: " + str(e))
continue
try:
event_map = {
'job_id': JobEvent,
'ad_hoc_command_id': AdHocCommandEvent,
'project_update_id': ProjectUpdateEvent,
'inventory_update_id': InventoryUpdateEvent,
'system_job_id': SystemJobEvent,
}
if not any([key in body for key in event_map]):
raise Exception('Payload does not have a job identifier')
if settings.DEBUG:
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import Terminal256Formatter
from pprint import pformat
logger.info('Body: {}'.format(
highlight(pformat(body, width=160), PythonLexer(), Terminal256Formatter(style='friendly'))
)[:1024 * 4])
def _save_event_data():
for key, cls in event_map.items():
if key in body:
cls.create_from_data(**body)
job_identifier = 'unknown job'
for key in event_map.keys():
if key in body:
job_identifier = body[key]
break
if body.get('event') == 'EOF':
try:
final_counter = body.get('final_counter', 0)
logger.info('Event processing is finished for Job {}, sending notifications'.format(job_identifier))
# EOF events are sent when stdout for the running task is
# closed. don't actually persist them to the database; we
# just use them to report `summary` websocket events as an
# approximation for when a job is "done"
emit_channel_notification(
'jobs-summary',
dict(group_name='jobs', unified_job_id=job_identifier, final_counter=final_counter)
)
# Additionally, when we've processed all events, we should
# have all the data we need to send out success/failure
# notification templates
uj = UnifiedJob.objects.get(pk=job_identifier)
if hasattr(uj, 'send_notification_templates'):
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
break
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_identifier)
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
continue
retries = 0
while retries <= self.MAX_RETRIES:
try:
_save_event_data()
break
except (OperationalError, InterfaceError, InternalError) as e:
if retries >= self.MAX_RETRIES:
logger.exception('Worker could not re-establish database connectivity, shutting down gracefully: Job {}'.format(job_identifier))
os.kill(os.getppid(), signal.SIGINT)
return
delay = 60 * retries
logger.exception('Database Error Saving Job Event, retry #{i} in {delay} seconds:'.format(
i=retries + 1,
delay=delay
))
django_connection.close()
time.sleep(delay)
retries += 1
except DatabaseError as e:
logger.exception('Database Error Saving Job Event for Job {}'.format(job_identifier))
break
except Exception as exc:
import traceback
tb = traceback.format_exc()
logger.error('Callback Task Processor Raised Exception: %r', exc)
logger.error('Detail: {}'.format(tb))
from awx.main.dispatch.worker import AWXConsumer, CallbackBrokerWorker
class Command(BaseCommand):
@@ -238,8 +18,22 @@ class Command(BaseCommand):
def handle(self, *arg, **options):
with Connection(settings.BROKER_URL) as conn:
consumer = None
try:
worker = CallbackBrokerWorker(conn)
worker.run()
consumer = AWXConsumer(
'callback_receiver',
conn,
CallbackBrokerWorker(),
[
Queue(
settings.CALLBACK_QUEUE,
Exchange(settings.CALLBACK_QUEUE, type='direct'),
routing_key=settings.CALLBACK_QUEUE
)
]
)
consumer.run()
except KeyboardInterrupt:
print('Terminating Callback Receiver')
if consumer:
consumer.stop()
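The removed worker's routing rule (CallbackBrokerWorker now lives in awx.main.dispatch.worker), restated as a sketch: events that carry a uuid hash to a stable worker index so events for one task stay ordered, and everything else round-robins. The worker count here is illustrative:

    from uuid import UUID

    JOB_EVENT_WORKERS = 4  # stand-in for settings.JOB_EVENT_WORKERS

    def pick_queue(body, total_messages):
        if body.get('uuid'):
            try:
                return UUID(body['uuid']).int % JOB_EVENT_WORKERS
            except ValueError:
                pass  # malformed uuid falls back to round-robin
        return total_messages % JOB_EVENT_WORKERS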

View File

@@ -0,0 +1,128 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import os
import logging
from multiprocessing import Process
from django.conf import settings
from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand
from django.db import connection as django_connection, connections
from kombu import Connection, Exchange, Queue
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumer, TaskWorker
logger = logging.getLogger('awx.main.dispatch')
def construct_bcast_queue_name(common_name):
return common_name.encode('utf8') + '_' + settings.CLUSTER_HOST_ID
class Command(BaseCommand):
help = 'Launch the task dispatcher'
def add_arguments(self, parser):
parser.add_argument('--status', dest='status', action='store_true',
help='print the internal state of any running dispatchers')
parser.add_argument('--running', dest='running', action='store_true',
help='print the UUIDs of any tasks managed by this dispatcher')
parser.add_argument('--reload', dest='reload', action='store_true',
help=('cause the dispatcher to recycle all of its worker processes; '
'running jobs will run to completion first'))
def beat(self):
from celery import Celery
from celery.beat import PersistentScheduler
from celery.apps import beat
class AWXScheduler(PersistentScheduler):
def __init__(self, *args, **kwargs):
self.ppid = os.getppid()
super(AWXScheduler, self).__init__(*args, **kwargs)
def setup_schedule(self):
super(AWXScheduler, self).setup_schedule()
self.update_from_dict(settings.CELERYBEAT_SCHEDULE)
def tick(self, *args, **kwargs):
if os.getppid() != self.ppid:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
raise SystemExit()
return super(AWXScheduler, self).tick(*args, **kwargs)
def apply_async(self, entry, producer=None, advance=True, **kwargs):
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
task = TaskWorker.resolve_callable(entry.task)
result, queue = task.apply_async()
class TaskResult(object):
id = result['uuid']
return TaskResult()
app = Celery()
app.conf.BROKER_URL = settings.BROKER_URL
app.conf.CELERY_TASK_RESULT_EXPIRES = False
beat.Beat(
30,
app,
schedule='/var/lib/awx/beat.db', scheduler_cls=AWXScheduler
).run()
def handle(self, *arg, **options):
if options.get('status'):
print Control('dispatcher').status()
return
if options.get('running'):
print Control('dispatcher').running()
return
if options.get('reload'):
return Control('dispatcher').control({'control': 'reload'})
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and memcached connections (that way lies race conditions)
django_connection.close()
django_cache.close()
beat = Process(target=self.beat)
beat.daemon = True
beat.start()
reaper.reap()
consumer = None
with Connection(settings.BROKER_URL) as conn:
try:
bcast = 'tower_broadcast_all'
queues = [
Queue(q, Exchange(q), routing_key=q)
for q in (settings.AWX_CELERY_QUEUES_STATIC + [get_local_queuename()])
]
queues.append(
Queue(
construct_bcast_queue_name(bcast),
exchange=Exchange(bcast, type='fanout'),
routing_key=bcast,
reply=True
)
)
consumer = AWXConsumer(
'dispatcher',
conn,
TaskWorker(),
queues,
AutoscalePool(min_workers=4)
)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')
if consumer:
consumer.stop()
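A sketch of the per-node broadcast binding built above: every dispatcher binds its own uniquely named queue to one shared fanout exchange, so control messages such as --reload reach all nodes (the host id is illustrative):

    from kombu import Exchange, Queue

    def bcast_queue(common_name, cluster_host_id):
        # mirrors construct_bcast_queue_name() plus the Queue() call above
        name = '{}_{}'.format(common_name, cluster_host_id)
        return Queue(name,
                     exchange=Exchange(common_name, type='fanout'),
                     routing_key=common_name,
                     reply=True)

    q = bcast_queue('tower_broadcast_all', 'node-1')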

View File

@@ -1,66 +0,0 @@
import datetime
import os
import signal
import subprocess
import sys
import time
from celery import Celery
from django.core.management.base import BaseCommand
from django.conf import settings
class Command(BaseCommand):
"""Watch local celery workers"""
help=("Sends a periodic ping to the local celery process over AMQP to ensure "
"it's responsive; this command is only intended to run in an environment "
"where celeryd is running")
#
# Just because celery is _running_ doesn't mean it's _working_; it's
# imperative that celery workers are _actually_ handling AMQP messages on
# their appropriate queues for awx to function. Unfortunately, we've been
# plagued by a variety of bugs in celery that cause it to hang and become
# an unresponsive zombie, such as:
#
# https://github.com/celery/celery/issues/4185
# https://github.com/celery/celery/issues/4457
#
# The goal of this code is to periodically send a broadcast AMQP message to
# the celery process on the local host via celery.app.control.ping;
# If that _fails_, we attempt to determine the pid of the celery process
# and send SIGHUP (which tends to resolve these sorts of issues for us).
#
INTERVAL = 60
def _log(self, msg):
sys.stderr.write(datetime.datetime.utcnow().isoformat())
sys.stderr.write(' ')
sys.stderr.write(msg)
sys.stderr.write('\n')
def handle(self, **options):
app = Celery('awx')
app.config_from_object('django.conf:settings')
while True:
try:
pongs = app.control.ping(['celery@{}'.format(settings.CLUSTER_HOST_ID)], timeout=30)
except Exception:
pongs = []
if not pongs:
self._log('celery is not responsive to ping over local AMQP')
pid = self.getpid()
if pid:
self._log('sending SIGHUP to {}'.format(pid))
os.kill(pid, signal.SIGHUP)
time.sleep(self.INTERVAL)
def getpid(self):
cmd = 'supervisorctl pid tower-processes:awx-celeryd'
if os.path.exists('/supervisor_task.conf'):
cmd = 'supervisorctl -c /supervisor_task.conf pid tower-processes:celery'
try:
return int(subprocess.check_output(cmd, shell=True))
except Exception:
self._log('could not detect celery pid')

View File

@@ -1,25 +1,20 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import base64
import json
import uuid
import logging
import threading
import uuid
import six
import time
import cProfile
import pstats
import os
import re
from django.conf import settings
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
from django.db.models.signals import post_save
from django.db.migrations.executor import MigrationExecutor
from django.db import IntegrityError, connection
from django.http import HttpResponse
from django.utils.functional import curry
from django.shortcuts import get_object_or_404, redirect
from django.apps import apps
@@ -209,59 +204,6 @@ class URLModificationMiddleware(object):
request.path_info = new_path
class DeprecatedAuthTokenMiddleware(object):
"""
Used to emulate support for the old Auth Token endpoint to ease the
transition to OAuth2.0. Specifically, this middleware:
1. Intercepts POST requests to `/api/v2/authtoken/` (which now no longer
_actually_ exists in our urls.py)
2. Rewrites `request.path` to `/api/v2/users/N/personal_tokens/`
3. Detects the username and password in the request body (either in JSON,
or form-encoded variables) and builds an appropriate HTTP_AUTHORIZATION
Basic header
"""
def process_request(self, request):
if re.match('^/api/v[12]/authtoken/?$', request.path):
if request.method != 'POST':
return HttpResponse('HTTP {} is not allowed.'.format(request.method), status=405)
try:
payload = json.loads(request.body)
except (ValueError, TypeError):
payload = request.POST
if 'username' not in payload or 'password' not in payload:
return HttpResponse('Unable to login with provided credentials.', status=401)
username = payload['username']
password = payload['password']
try:
pk = User.objects.get(username=username).pk
except ObjectDoesNotExist:
return HttpResponse('Unable to login with provided credentials.', status=401)
new_path = reverse('api:user_personal_token_list', kwargs={
'pk': pk,
'version': 'v2'
})
request._body = ''
request.META['CONTENT_TYPE'] = 'application/json'
request.path = request.path_info = new_path
auth = ' '.join([
'Basic',
base64.b64encode(
six.text_type('{}:{}').format(username, password)
)
])
request.environ['HTTP_AUTHORIZATION'] = auth
logger.warn(
'The Auth Token API (/api/v2/authtoken/) is deprecated and will '
'be replaced with OAuth2.0 in the next version of Ansible Tower '
'(see /api/o/ for more details).'
)
elif request.environ.get('HTTP_AUTHORIZATION', '').startswith('Token '):
token = request.environ['HTTP_AUTHORIZATION'].split(' ', 1)[-1].strip()
request.environ['HTTP_AUTHORIZATION'] = six.text_type('Bearer {}').format(token)
class MigrationRanCheckMiddleware(object):
def process_request(self, request):
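The removed middleware's credential translation, restated standalone: the username/password from the old authtoken payload become a standard HTTP Basic header (the values are illustrative):

    import base64

    def basic_auth_header(username, password):
        # 'Basic ' + base64('username:password'), as the middleware built it
        token = base64.b64encode(u'{}:{}'.format(username, password).encode('utf-8'))
        return 'Basic ' + token.decode('ascii')

    assert basic_auth_header('alice', 's3cret') == 'Basic YWxpY2U6czNjcmV0'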

View File

@@ -0,0 +1,21 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0049_v330_validate_instance_capacity_adjustment'),
]
operations = [
migrations.RunSQL([
    "DROP TABLE IF EXISTS {} CASCADE;".format(table)
    for table in ('celery_taskmeta', 'celery_tasksetmeta', 'djcelery_crontabschedule',
                  'djcelery_intervalschedule', 'djcelery_periodictask',
                  'djcelery_periodictasks', 'djcelery_taskstate', 'djcelery_workerstate',
                  'djkombu_message', 'djkombu_queue')
])
]

View File

@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-10-15 16:21
from __future__ import unicode_literals
import awx.main.utils.polymorphic
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0050_v340_drop_celery_tables'),
]
operations = [
migrations.AddField(
model_name='job',
name='job_slice_count',
field=models.PositiveIntegerField(blank=True, default=1, help_text='If run as part of sliced jobs, the total number of slices. If 1, job is not part of a sliced job.'),
),
migrations.AddField(
model_name='job',
name='job_slice_number',
field=models.PositiveIntegerField(blank=True, default=0, help_text='If part of a sliced job, the ID of the inventory slice operated on. If not part of a sliced job, this parameter is not used.'),
),
migrations.AddField(
model_name='jobtemplate',
name='job_slice_count',
field=models.PositiveIntegerField(blank=True, default=1, help_text='The number of jobs to slice into at runtime. Will cause the Job Template to launch a workflow if value is greater than 1.'),
),
migrations.AddField(
model_name='workflowjob',
name='is_sliced_job',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='workflowjob',
name='job_template',
field=models.ForeignKey(blank=True, default=None, help_text='If automatically created for a sliced job run, the job template the workflow job was created from.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='slice_workflow_jobs', to='main.JobTemplate'),
),
migrations.AlterField(
model_name='unifiedjob',
name='unified_job_template',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjob_unified_jobs', to='main.UnifiedJobTemplate'),
),
]

View File

@@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-05-18 17:49
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0051_v340_job_slicing'),
]
operations = [
migrations.RemoveField(
model_name='project',
name='scm_delete_on_next_update',
),
]

View File

@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-09-27 19:50
from __future__ import unicode_literals
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0052_v340_remove_project_scm_delete_on_next_update'),
]
operations = [
migrations.AddField(
model_name='workflowjob',
name='char_prompts',
field=awx.main.fields.JSONField(blank=True, default={}),
),
migrations.AddField(
model_name='workflowjob',
name='inventory',
field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='workflowjobs', to='main.Inventory'),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_inventory_on_launch',
field=awx.main.fields.AskForField(default=False),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied to all job templates in workflow that prompt for inventory.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='workflowjobtemplates', to='main.Inventory'),
),
]

View File

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-09-28 14:23
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0053_v340_workflow_inventory'),
]
operations = [
migrations.AddField(
model_name='workflowjobnode',
name='do_not_run',
field=models.BooleanField(default=False, help_text='Indicates that a job will not be created when True. Workflow runtime semantics will mark this True if the node is in a path that will decidedly not be run. A value of False means the node may or may not run.'),
),
]

File diff suppressed because it is too large

View File

@@ -134,6 +134,9 @@ User.add_to_class('is_in_enterprise_category', user_is_in_enterprise_category)
def o_auth2_application_get_absolute_url(self, request=None):
# this page does not exist in v1
if request.version == 'v1':
return reverse('api:o_auth2_application_detail', kwargs={'pk': self.pk}) # use default version
return reverse('api:o_auth2_application_detail', kwargs={'pk': self.pk}, request=request)
@@ -141,6 +144,9 @@ OAuth2Application.add_to_class('get_absolute_url', o_auth2_application_get_absol
def o_auth2_token_get_absolute_url(self, request=None):
# this page does not exist in v1
if request.version == 'v1':
return reverse('api:o_auth2_token_detail', kwargs={'pk': self.pk}) # use default version
return reverse('api:o_auth2_token_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -136,8 +136,7 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
else:
return []
@classmethod
def _get_parent_field_name(cls):
def _get_parent_field_name(self):
return ''
@classmethod

View File

@@ -383,6 +383,8 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
super(Credential, self).save(*args, **kwargs)
def encrypt_field(self, field, ask):
if not hasattr(self, field):
return None
encrypted = encrypt_field(self, field, ask=ask)
if encrypted:
self.inputs[field] = encrypted
@@ -477,6 +479,9 @@ class CredentialType(CommonModelNameNotUnique):
)
def get_absolute_url(self, request=None):
# Page does not exist in API v1
if request.version == 'v1':
return reverse('api:credential_type_detail', kwargs={'pk': self.pk})
return reverse('api:credential_type_detail', kwargs={'pk': self.pk}, request=request)
@property
@@ -580,7 +585,7 @@ class CredentialType(CommonModelNameNotUnique):
if not self.injectors:
if self.managed_by_tower and credential.kind in dir(builtin_injectors):
injected_env = {}
getattr(builtin_injectors, credential.kind)(credential, injected_env)
getattr(builtin_injectors, credential.kind)(credential, injected_env, private_data_dir)
env.update(injected_env)
safe_env.update(build_safe_env(injected_env))
return

View File

@@ -1,20 +1,37 @@
import json
import os
import stat
import tempfile
from awx.main.utils import decrypt_field
from django.conf import settings
def aws(cred, env):
def aws(cred, env, private_data_dir):
env['AWS_ACCESS_KEY_ID'] = cred.username
env['AWS_SECRET_ACCESS_KEY'] = decrypt_field(cred, 'password')
if len(cred.security_token) > 0:
env['AWS_SECURITY_TOKEN'] = decrypt_field(cred, 'security_token')
def gce(cred, env):
def gce(cred, env, private_data_dir):
env['GCE_EMAIL'] = cred.username
env['GCE_PROJECT'] = cred.project
json_cred = {
'type': 'service_account',
'private_key': decrypt_field(cred, 'ssh_key_data'),
'client_email': cred.username,
'project_id': cred.project
}
handle, path = tempfile.mkstemp(dir=private_data_dir)
f = os.fdopen(handle, 'w')
json.dump(json_cred, f)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
env['GCE_CREDENTIALS_FILE_PATH'] = path
def azure_rm(cred, env):
def azure_rm(cred, env, private_data_dir):
if len(cred.client) and len(cred.tenant):
env['AZURE_CLIENT_ID'] = cred.client
env['AZURE_SECRET'] = decrypt_field(cred, 'secret')
@@ -28,7 +45,7 @@ def azure_rm(cred, env):
env['AZURE_CLOUD_ENVIRONMENT'] = cred.inputs['cloud_environment']
def vmware(cred, env):
def vmware(cred, env, private_data_dir):
env['VMWARE_USER'] = cred.username
env['VMWARE_PASSWORD'] = decrypt_field(cred, 'password')
env['VMWARE_HOST'] = cred.host
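The gce() injector above introduces a pattern worth noting: secrets are materialized as files inside the job's private data dir with owner-only permissions. Sketched generically (the function name is illustrative):

    import json
    import os
    import stat
    import tempfile

    def write_private_json(data, private_data_dir):
        handle, path = tempfile.mkstemp(dir=private_data_dir)
        with os.fdopen(handle, 'w') as f:
            json.dump(data, f)
        # restrict to owner read/write before anything consumes the path
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
        return path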

View File

@@ -32,7 +32,7 @@ __all__ = ('Instance', 'InstanceGroup', 'JobOrigin', 'TowerScheduleState',)
def validate_queuename(v):
# celery and kombu don't play nice with unicode in queue names
# kombu doesn't play nice with unicode in queue names
if v:
try:
'{}'.format(v.decode('utf-8'))

View File

@@ -9,7 +9,6 @@ import copy
from urlparse import urljoin
import os.path
import six
from distutils.version import LooseVersion
# Django
from django.conf import settings
@@ -20,6 +19,9 @@ from django.core.exceptions import ValidationError
from django.utils.timezone import now
from django.db.models import Q
# REST Framework
from rest_framework.exceptions import ParseError
# AWX
from awx.api.versioning import reverse
from awx.main.constants import CLOUD_PROVIDERS
@@ -42,7 +44,7 @@ from awx.main.models.notifications import (
NotificationTemplate,
JobNotificationMixin,
)
from awx.main.utils import _inventory_updates, get_ansible_version, region_sorting
from awx.main.utils import _inventory_updates, region_sorting
__all__ = ['Inventory', 'Host', 'Group', 'InventorySource', 'InventoryUpdate',
@@ -218,67 +220,89 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
group_children.add(from_group_id)
return group_children_map
def get_script_data(self, hostvars=False, towervars=False, show_all=False):
if show_all:
hosts_q = dict()
else:
hosts_q = dict(enabled=True)
@staticmethod
def parse_slice_params(slice_str):
m = re.match(r"slice(?P<number>\d+)of(?P<step>\d+)", slice_str)
if not m:
raise ParseError(_('Could not parse subset as slice specification.'))
number = int(m.group('number'))
step = int(m.group('step'))
if number > step:
raise ParseError(_('Slice number must be less than total number of slices.'))
elif number < 1:
raise ParseError(_('Slice number must be 1 or higher.'))
return (number, step)
def get_script_data(self, hostvars=False, towervars=False, show_all=False, slice_number=1, slice_count=1):
hosts_kw = dict()
if not show_all:
hosts_kw['enabled'] = True
fetch_fields = ['name', 'id', 'variables']
if towervars:
fetch_fields.append('enabled')
hosts = self.hosts.filter(**hosts_kw).order_by('name').only(*fetch_fields)
if slice_count > 1:
offset = slice_number - 1
hosts = hosts[offset::slice_count]
data = dict()
all_group = data.setdefault('all', dict())
all_hostnames = set(host.name for host in hosts)
if self.variables_dict:
all_group = data.setdefault('all', dict())
all_group['vars'] = self.variables_dict
if self.kind == 'smart':
if len(self.hosts.all()) == 0:
return {}
else:
all_group = data.setdefault('all', dict())
smart_hosts_qs = self.hosts.filter(**hosts_q).all()
smart_hosts = list(smart_hosts_qs.values_list('name', flat=True))
all_group['hosts'] = smart_hosts
all_group['hosts'] = [host.name for host in hosts]
else:
# Add hosts without a group to the all group.
groupless_hosts_qs = self.hosts.filter(groups__isnull=True, **hosts_q)
groupless_hosts = list(groupless_hosts_qs.values_list('name', flat=True))
if groupless_hosts:
all_group = data.setdefault('all', dict())
all_group['hosts'] = groupless_hosts
# Keep track of hosts that are members of a group
grouped_hosts = set([])
# Build in-memory mapping of groups and their hosts.
group_hosts_kw = dict(group__inventory_id=self.id, host__inventory_id=self.id)
if 'enabled' in hosts_q:
group_hosts_kw['host__enabled'] = hosts_q['enabled']
group_hosts_qs = Group.hosts.through.objects.filter(**group_hosts_kw)
group_hosts_qs = group_hosts_qs.values_list('group_id', 'host_id', 'host__name')
group_hosts_qs = Group.hosts.through.objects.filter(
group__inventory_id=self.id,
host__inventory_id=self.id
).values_list('group_id', 'host_id', 'host__name')
group_hosts_map = {}
for group_id, host_id, host_name in group_hosts_qs:
if host_name not in all_hostnames:
continue  # host might not be in the current slice
group_hostnames = group_hosts_map.setdefault(group_id, [])
group_hostnames.append(host_name)
grouped_hosts.add(host_name)
# Build in-memory mapping of groups and their children.
group_parents_qs = Group.parents.through.objects.filter(
from_group__inventory_id=self.id,
to_group__inventory_id=self.id,
)
group_parents_qs = group_parents_qs.values_list('from_group_id', 'from_group__name',
'to_group_id')
).values_list('from_group_id', 'from_group__name', 'to_group_id')
group_children_map = {}
for from_group_id, from_group_name, to_group_id in group_parents_qs:
group_children = group_children_map.setdefault(to_group_id, [])
group_children.append(from_group_name)
# Now use in-memory maps to build up group info.
for group in self.groups.all():
for group in self.groups.only('name', 'id', 'variables'):
group_info = dict()
group_info['hosts'] = group_hosts_map.get(group.id, [])
group_info['children'] = group_children_map.get(group.id, [])
group_info['vars'] = group.variables_dict
data[group.name] = group_info
# Add ungrouped hosts to all group
all_group['hosts'] = [host.name for host in hosts if host.name not in grouped_hosts]
# Remove any empty groups
for group_name in list(data.keys()):
if group_name == 'all':
continue
if not (data.get(group_name, {}).get('hosts', []) or data.get(group_name, {}).get('children', [])):
data.pop(group_name)
if hostvars:
data.setdefault('_meta', dict())
data['_meta'].setdefault('hostvars', dict())
for host in self.hosts.filter(**hosts_q):
for host in hosts:
data['_meta']['hostvars'][host.name] = host.variables_dict
if towervars:
tower_dict = dict(remote_tower_enabled=str(host.enabled).lower(),
@@ -1578,12 +1602,6 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, RelatedJobsMix
"Instead, configure the corresponding source project to update on launch."))
return self.update_on_launch
def clean_overwrite_vars(self): # TODO: remove when Ansible 2.4 becomes unsupported, obviously
if self.source == 'scm' and not self.overwrite_vars:
if get_ansible_version() < LooseVersion('2.5'):
raise ValidationError(_("SCM type sources must set `overwrite_vars` to `true` until Ansible 2.5."))
return self.overwrite_vars
def clean_source_path(self):
if self.source != 'scm' and self.source_path:
raise ValidationError(_("Cannot set source_path if not SCM type."))
@@ -1631,8 +1649,7 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
null=True
)
@classmethod
def _get_parent_field_name(cls):
def _get_parent_field_name(self):
return 'inventory_source'
@classmethod
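A sketch of how the slice spec and the stride slicing above combine (host names are illustrative): 'slice2of3' selects the second of three interleaved subsets.

    import re

    def parse_slice_params(slice_str):
        # mirrors Inventory.parse_slice_params above
        m = re.match(r"slice(?P<number>\d+)of(?P<step>\d+)", slice_str)
        if not m:
            raise ValueError('Could not parse subset as slice specification.')
        return int(m.group('number')), int(m.group('step'))

    hosts = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'h7']
    number, step = parse_slice_params('slice2of3')
    print(hosts[number - 1::step])  # ['h2', 'h5']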

View File

@@ -34,7 +34,7 @@ from awx.main.models.notifications import (
JobNotificationMixin,
)
from awx.main.utils import parse_yaml_or_json, getattr_dne
from awx.main.fields import ImplicitRoleField
from awx.main.fields import ImplicitRoleField, JSONField, AskForField
from awx.main.models.mixins import (
ResourceMixin,
SurveyJobTemplateMixin,
@@ -43,7 +43,6 @@ from awx.main.models.mixins import (
CustomVirtualEnvMixin,
RelatedJobsMixin,
)
from awx.main.fields import JSONField, AskForField
logger = logging.getLogger('awx.main.models.jobs')
@@ -277,6 +276,13 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
default=False,
allows_field='credentials'
)
job_slice_count = models.PositiveIntegerField(
blank=True,
default=1,
help_text=_("The number of jobs to slice into at runtime. "
"Will cause the Job Template to launch a workflow if value is greater than 1."),
)
admin_role = ImplicitRoleField(
parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role']
)
@@ -295,7 +301,8 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in JobOptions._meta.fields) | set(
['name', 'description', 'schedule', 'survey_passwords', 'labels', 'credentials']
['name', 'description', 'schedule', 'survey_passwords', 'labels', 'credentials',
'job_slice_number', 'job_slice_count']
)
@property
@@ -307,7 +314,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
if self.inventory is None and not self.ask_inventory_on_launch:
validation_errors['inventory'] = [_("Job Template must provide 'inventory' or allow prompting for it."),]
if self.project is None:
validation_errors['project'] = [_("Job types 'run' and 'check' must have assigned a project."),]
validation_errors['project'] = [_("Job Templates must have a project assigned."),]
return validation_errors
@property
@@ -320,6 +327,34 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
'''
return self.create_unified_job(**kwargs)
def create_unified_job(self, **kwargs):
prevent_slicing = kwargs.pop('_prevent_slicing', False)
slice_event = bool(self.job_slice_count > 1 and (not prevent_slicing))
if slice_event:
# A Slice Job Template will generate a WorkflowJob rather than a Job
from awx.main.models.workflow import WorkflowJobTemplate, WorkflowJobNode
kwargs['_unified_job_class'] = WorkflowJobTemplate._get_unified_job_class()
kwargs['_parent_field_name'] = "job_template"
kwargs.setdefault('_eager_fields', {})
kwargs['_eager_fields']['is_sliced_job'] = True
elif prevent_slicing:
kwargs.setdefault('_eager_fields', {})
kwargs['_eager_fields'].setdefault('job_slice_count', 1)
job = super(JobTemplate, self).create_unified_job(**kwargs)
if slice_event:
try:
wj_config = job.launch_config
except JobLaunchConfig.DoesNotExist:
wj_config = JobLaunchConfig()
actual_inventory = wj_config.inventory if wj_config.inventory else self.inventory
for idx in xrange(min(self.job_slice_count,
actual_inventory.hosts.count())):
create_kwargs = dict(workflow_job=job,
unified_job_template=self,
ancestor_artifacts=dict(job_slice=idx + 1))
WorkflowJobNode.objects.create(**create_kwargs)
return job
def get_absolute_url(self, request=None):
return reverse('api:job_template_detail', kwargs={'pk': self.pk}, request=request)
@@ -452,7 +487,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
RelatedJobsMixin
'''
def _get_related_jobs(self):
return Job.objects.filter(job_template=self)
return UnifiedJob.objects.filter(unified_job_template=self)
class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskManagerJobMixin):
@@ -501,10 +536,21 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
on_delete=models.SET_NULL,
help_text=_('The SCM Refresh task used to make sure the playbooks were available for the job run'),
)
job_slice_number = models.PositiveIntegerField(
blank=True,
default=0,
help_text=_("If part of a sliced job, the ID of the inventory slice operated on. "
"If not part of sliced job, parameter is not used."),
)
job_slice_count = models.PositiveIntegerField(
blank=True,
default=1,
help_text=_("If ran as part of sliced jobs, the total number of slices. "
"If 1, job is not part of a sliced job."),
)
@classmethod
def _get_parent_field_name(cls):
def _get_parent_field_name(self):
return 'job_template'
@classmethod
@@ -545,6 +591,15 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
def event_class(self):
return JobEvent
def copy_unified_job(self, **new_prompts):
# Needed for job slice relaunch consistency: do not re-spawn a workflow job;
# target the same slice as the original job
new_prompts['_prevent_slicing'] = True
new_prompts.setdefault('_eager_fields', {})
new_prompts['_eager_fields']['job_slice_number'] = self.job_slice_number
new_prompts['_eager_fields']['job_slice_count'] = self.job_slice_count
return super(Job, self).copy_unified_job(**new_prompts)
@property
def ask_diff_mode_on_launch(self):
if self.job_template is not None:
@@ -638,6 +693,9 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
count_hosts = 2
else:
count_hosts = Host.objects.filter(inventory__jobs__pk=self.pk).count()
if self.job_slice_count > 1:
# Integer division intentional
count_hosts = (count_hosts + self.job_slice_count - self.job_slice_number) / self.job_slice_count
return min(count_hosts, 5 if self.forks == 0 else self.forks) + 1
@property
@@ -836,19 +894,19 @@ class NullablePromptPsuedoField(object):
instance.char_prompts[self.field_name] = value
class LaunchTimeConfig(BaseModel):
class LaunchTimeConfigBase(BaseModel):
'''
Common model for all objects that save details of a saved launch config
WFJT / WJ nodes, schedules, and job launch configs (not all implemented yet)
Needed as separate class from LaunchTimeConfig because some models
use `extra_data` and some use `extra_vars`. We cannot change the API,
so we force fake it in the model definitions
- model defines extra_vars - use this class
- model needs to use extra data - use LaunchTimeConfig
Use this for models which are SurveyMixins and UnifiedJobs or Templates
'''
class Meta:
abstract = True
# Prompting-related fields that have to be handled as special cases
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss'
)
inventory = models.ForeignKey(
'Inventory',
related_name='%(class)ss',
@@ -857,15 +915,6 @@ class LaunchTimeConfig(BaseModel):
default=None,
on_delete=models.SET_NULL,
)
extra_data = JSONField(
blank=True,
default={}
)
survey_passwords = prevent_search(JSONField(
blank=True,
default={},
editable=False,
))
# All standard fields are stored in this dictionary field
# This is a solution to the nullable CharField problem, specific to prompting
char_prompts = JSONField(
@@ -875,6 +924,7 @@ class LaunchTimeConfig(BaseModel):
def prompts_dict(self, display=False):
data = {}
# Some types may have different prompts, but always subset of JT prompts
for prompt_name in JobTemplate.get_ask_mapping().keys():
try:
field = self._meta.get_field(prompt_name)
@@ -887,11 +937,11 @@ class LaunchTimeConfig(BaseModel):
if len(prompt_val) > 0:
data[prompt_name] = prompt_val
elif prompt_name == 'extra_vars':
if self.extra_data:
if self.extra_vars:
if display:
data[prompt_name] = self.display_extra_data()
data[prompt_name] = self.display_extra_vars()
else:
data[prompt_name] = self.extra_data
data[prompt_name] = self.extra_vars
if self.survey_passwords and not display:
data['survey_passwords'] = self.survey_passwords
else:
@@ -900,18 +950,21 @@ class LaunchTimeConfig(BaseModel):
data[prompt_name] = prompt_val
return data
def display_extra_data(self):
def display_extra_vars(self):
'''
Hides fields marked as passwords in survey.
'''
if self.survey_passwords:
extra_data = parse_yaml_or_json(self.extra_data).copy()
extra_vars = parse_yaml_or_json(self.extra_vars).copy()
for key, value in self.survey_passwords.items():
if key in extra_data:
extra_data[key] = value
return extra_data
if key in extra_vars:
extra_vars[key] = value
return extra_vars
else:
return self.extra_data
return self.extra_vars
def display_extra_data(self):
return self.display_extra_vars()
@property
def _credential(self):
@@ -935,7 +988,42 @@ class LaunchTimeConfig(BaseModel):
return None
class LaunchTimeConfig(LaunchTimeConfigBase):
'''
Common model for all objects that save details of a saved launch config
WFJT / WJ nodes, schedules, and job launch configs (not all implemented yet)
'''
class Meta:
abstract = True
# Special case prompting fields, even more special than the other ones
extra_data = JSONField(
blank=True,
default={}
)
survey_passwords = prevent_search(JSONField(
blank=True,
default={},
editable=False,
))
# Credentials needed for non-unified job / unified JT models
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss'
)
@property
def extra_vars(self):
return self.extra_data
@extra_vars.setter
def extra_vars(self, extra_vars):
self.extra_data = extra_vars
for field_name in JobTemplate.get_ask_mapping().keys():
if field_name == 'extra_vars':
continue
try:
LaunchTimeConfig._meta.get_field(field_name)
except FieldDoesNotExist:
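Two consequences of the slicing logic above, restated as a sketch: the spawned workflow never has more nodes than hosts, and each slice's task impact divides the host count (values are illustrative):

    def slice_node_count(job_slice_count, host_count):
        # create_unified_job() creates min(slices, hosts) workflow nodes
        return min(job_slice_count, host_count)

    def sliced_host_count(count_hosts, slice_count, slice_number):
        # integer division intentional, as in the task impact above
        return (count_hosts + slice_count - slice_number) // slice_count

    assert slice_node_count(5, 3) == 3   # small inventory caps the slices
    assert sliced_host_count(10, 3, 1) == 4
    assert sliced_host_count(10, 3, 3) == 3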

View File

@@ -301,14 +301,22 @@ class SurveyJobTemplateMixin(models.Model):
accepted.update(extra_vars)
extra_vars = {}
if extra_vars:
# Prune the prompted variables for those identical to template
tmp_extra_vars = self.extra_vars_dict
for key in (set(tmp_extra_vars.keys()) & set(extra_vars.keys())):
if tmp_extra_vars[key] == extra_vars[key]:
extra_vars.pop(key)
if extra_vars:
# Leftover extra_vars, keys provided that are not allowed
rejected.update(extra_vars)
# ignored variables do not block manual launch
if 'prompts' not in _exclude_errors:
errors['extra_vars'] = [_('Variables {list_of_keys} are not allowed on launch. Check the Prompt on Launch setting '+
'on the Job Template to include Extra Variables.').format(
list_of_keys=', '.join(extra_vars.keys()))]
'on the {model_name} to include Extra Variables.').format(
list_of_keys=six.text_type(', ').join([six.text_type(key) for key in extra_vars.keys()]),
model_name=self._meta.verbose_name.title())]
return (accepted, rejected, errors)
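The pruning step above in isolation: prompted values identical to the template's own extra_vars are dropped, so only genuine overrides remain to be accepted or rejected (values are illustrative):

    def prune_identical(template_vars, prompted_vars):
        return {key: value for key, value in prompted_vars.items()
                if template_vars.get(key) != value}

    leftover = prune_identical({'a': 1, 'b': 2}, {'a': 1, 'b': 3})
    assert leftover == {'b': 3}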

View File

@@ -254,10 +254,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
on_delete=models.CASCADE,
related_name='projects',
)
scm_delete_on_next_update = models.BooleanField(
default=False,
editable=False,
)
scm_update_on_launch = models.BooleanField(
default=False,
help_text=_('Update the project when a job is launched that uses the project.'),
@@ -331,13 +327,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
# if it hasn't been specified, then we're just doing a normal save.
update_fields = kwargs.get('update_fields', [])
skip_update = bool(kwargs.pop('skip_update', False))
# Check if scm_type or scm_url changes.
if self.pk:
project_before = self.__class__.objects.get(pk=self.pk)
if project_before.scm_type != self.scm_type or project_before.scm_url != self.scm_url:
self.scm_delete_on_next_update = True
if 'scm_delete_on_next_update' not in update_fields:
update_fields.append('scm_delete_on_next_update')
# Create auto-generated local path if project uses SCM.
if self.pk and self.scm_type and not self.local_path.startswith('_'):
slug_name = slugify(six.text_type(self.name)).replace(u'-', u'_')
@@ -397,19 +386,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
def _can_update(self):
return bool(self.scm_type)
def _update_unified_job_kwargs(self, create_kwargs, kwargs):
'''
:param create_kwargs: key-worded arguments to be updated and later used for creating unified job.
:type create_kwargs: dict
:param kwargs: request parameters used to override unified job template fields with runtime values.
:type kwargs: dict
:return: modified create_kwargs.
:rtype: dict
'''
if self.scm_delete_on_next_update:
create_kwargs['scm_delete_on_update'] = True
return create_kwargs
def create_project_update(self, **kwargs):
return self.create_unified_job(**kwargs)
@@ -466,6 +442,14 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
models.Q(ProjectUpdate___project=self)
)
def delete(self, *args, **kwargs):
path_to_delete = self.get_project_path(check_if_exists=False)
r = super(Project, self).delete(*args, **kwargs)
if self.scm_type and path_to_delete: # non-manual, concrete path
from awx.main.tasks import delete_project_files
delete_project_files.delay(path_to_delete)
return r
class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManagerProjectUpdateMixin):
'''
@@ -488,10 +472,24 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
default='check',
)
@classmethod
def _get_parent_field_name(cls):
def _get_parent_field_name(self):
return 'project'
def _update_parent_instance(self):
if not self.project:
return # no parent instance to update
if self.job_type == PERM_INVENTORY_DEPLOY:
# Do not update project status if this is sync job
# unless no other updates have happened or started
first_update = False
if self.project.status == 'never updated' and self.status == 'running':
first_update = True
elif self.project.current_job == self:
first_update = True
if not first_update:
return
return super(ProjectUpdate, self)._update_parent_instance()
@classmethod
def _get_task_class(cls):
from awx.main.tasks import RunProjectUpdate
@@ -542,17 +540,6 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
def get_ui_url(self):
return urlparse.urljoin(settings.TOWER_URL_BASE, "/#/jobs/project/{}".format(self.pk))
def _update_parent_instance(self):
parent_instance = self._get_parent_instance()
if parent_instance and self.job_type == 'check':
update_fields = self._update_parent_instance_no_save(parent_instance)
if self.status in ('successful', 'failed', 'error', 'canceled'):
if not self.failed and parent_instance.scm_delete_on_next_update:
parent_instance.scm_delete_on_next_update = False
if 'scm_delete_on_next_update' not in update_fields:
update_fields.append('scm_delete_on_next_update')
parent_instance.save(update_fields=update_fields)
def cancel(self, job_explanation=None, is_chain=False):
res = super(ProjectUpdate, self).cancel(job_explanation=job_explanation, is_chain=is_chain)
if res and self.launch_type != 'sync':

View File

@@ -153,7 +153,7 @@ class Schedule(CommonModel, LaunchTimeConfig):
if 'until=' in rrule.lower():
# if DTSTART;TZID= is used, coerce "naive" UNTIL values
# to the proper UTC date
match_until = re.match(".*?(?P<until>UNTIL\=[0-9]+T[0-9]+)(?P<utcflag>Z?)", rrule)
match_until = re.match(r".*?(?P<until>UNTIL\=[0-9]+T[0-9]+)(?P<utcflag>Z?)", rrule)
if not len(match_until.group('utcflag')):
# rrule = DTSTART;TZID=America/New_York:20200601T120000 RRULE:...;UNTIL=20200601T170000
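The raw-string regex above in action against the RRULE from the comment; a missing trailing 'Z' flags a naive UNTIL value that needs coercion to UTC:

    import re

    rrule = 'DTSTART;TZID=America/New_York:20200601T120000 RRULE:FREQ=DAILY;UNTIL=20200601T170000'
    m = re.match(r".*?(?P<until>UNTIL\=[0-9]+T[0-9]+)(?P<utcflag>Z?)", rrule)
    print(m.group('until'))           # UNTIL=20200601T170000
    print(bool(m.group('utcflag')))   # False -> naive, coerce to UTC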

View File

@@ -7,9 +7,11 @@ import json
import logging
import os
import re
import socket
import subprocess
import tempfile
from collections import OrderedDict
import six
# Django
from django.conf import settings
@@ -27,18 +29,22 @@ from rest_framework.exceptions import ParseError
# Django-Polymorphic
from polymorphic.models import PolymorphicModel
# Django-Celery
from djcelery.models import TaskMeta
# AWX
from awx.main.models.base import * # noqa
from awx.main.models.base import (
CommonModelNameNotUnique,
PasswordFieldsModel,
NotificationFieldsModel,
prevent_search
)
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
from awx.main.utils import (
encrypt_dict, decrypt_field, _inventory_updates,
copy_model_by_class, copy_m2m_relationships,
get_type_for_model, parse_yaml_or_json, getattr_dne
)
from awx.main.utils import polymorphic
from awx.main.utils import polymorphic, schedule_task_manager
from awx.main.constants import ACTIVE_STATES, CAN_CANCEL
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.consumers import emit_channel_notification
@@ -309,19 +315,13 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
'''
raise NotImplementedError # Implement in subclass.
@classmethod
def _get_unified_job_field_names(cls):
'''
Return field names that should be copied from template to new job.
'''
raise NotImplementedError # Implement in subclass.
@property
def notification_templates(self):
'''
Return notification_templates relevant to this Unified Job Template
'''
# NOTE: Derived classes should implement
from awx.main.models.notifications import NotificationTemplate
return NotificationTemplate.objects.none()
def create_unified_job(self, **kwargs):
@@ -338,19 +338,33 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
unified_job_class = self._get_unified_job_class()
fields = self._get_unified_job_field_names()
unallowed_fields = set(kwargs.keys()) - set(fields)
if unallowed_fields:
logger.warn('Fields {} are not allowed as overrides.'.format(unallowed_fields))
map(kwargs.pop, unallowed_fields)
parent_field_name = None
if "_unified_job_class" in kwargs:
# Special case where spawned job is different type than usual
# Only used for slice jobs
unified_job_class = kwargs.pop("_unified_job_class")
fields = unified_job_class._get_unified_job_field_names() & fields
parent_field_name = kwargs.pop('_parent_field_name')
unified_job = copy_model_by_class(self, unified_job_class, fields, kwargs)
unallowed_fields = set(kwargs.keys()) - set(fields)
validated_kwargs = kwargs.copy()
if unallowed_fields:
if parent_field_name is None:
logger.warn(six.text_type('Fields {} are not allowed as overrides to spawn from {}.').format(
six.text_type(', ').join(unallowed_fields), self
))
map(validated_kwargs.pop, unallowed_fields)
unified_job = copy_model_by_class(self, unified_job_class, fields, validated_kwargs)
if eager_fields:
for fd, val in eager_fields.items():
setattr(unified_job, fd, val)
# Set the unified job template back-link on the job
parent_field_name = unified_job_class._get_parent_field_name()
# NOTE: slice workflow jobs _get_parent_field_name method
# is not correct until this is set
if not parent_field_name:
parent_field_name = unified_job._get_parent_field_name()
setattr(unified_job, parent_field_name, self)
# For JobTemplate-based jobs with surveys, add passwords to list for perma-redaction
@@ -361,27 +375,37 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
unified_job.survey_passwords = new_job_passwords
kwargs['survey_passwords'] = new_job_passwords # saved in config object for relaunch
unified_job.save()
from awx.main.signals import disable_activity_stream, activity_stream_create
with disable_activity_stream():
# Don't emit the activity stream record here for creation,
# because we haven't attached important M2M relations yet, like
# credentials and labels
unified_job.save()
# Labels and credentials copied here
if kwargs.get('credentials'):
if validated_kwargs.get('credentials'):
Credential = UnifiedJob._meta.get_field('credentials').related_model
cred_dict = Credential.unique_dict(self.credentials.all())
prompted_dict = Credential.unique_dict(kwargs['credentials'])
prompted_dict = Credential.unique_dict(validated_kwargs['credentials'])
# combine prompted credentials with JT
cred_dict.update(prompted_dict)
kwargs['credentials'] = [cred for cred in cred_dict.values()]
validated_kwargs['credentials'] = [cred for cred in cred_dict.values()]
kwargs['credentials'] = validated_kwargs['credentials']
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
copy_m2m_relationships(self, unified_job, fields, kwargs=kwargs)
copy_m2m_relationships(self, unified_job, fields, kwargs=validated_kwargs)
if 'extra_vars' in kwargs:
unified_job.handle_extra_data(kwargs['extra_vars'])
if 'extra_vars' in validated_kwargs:
unified_job.handle_extra_data(validated_kwargs['extra_vars'])
if not getattr(self, '_deprecated_credential_launch', False):
# Create record of provided prompts for relaunch and rescheduling
unified_job.create_config_from_prompts(kwargs)
unified_job.create_config_from_prompts(kwargs, parent=self)
# manually issue the create activity stream entry _after_ M2M relations
# have been associated to the UJ
if unified_job.__class__ in activity_stream_registrar.models:
activity_stream_create(None, unified_job, True)
return unified_job
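In isolation, the override-filtering step above reduces to dropping any kwarg that is not a copyable template field before the job is built. A minimal sketch with assumed field names (not the real template field set):

fields = {'name', 'description', 'extra_vars'}
kwargs = {'name': 'demo run', 'limit': 'webservers'}   # 'limit' is not copyable here
validated_kwargs = dict((k, v) for k, v in kwargs.items() if k in fields)
print(validated_kwargs)  # -> {'name': 'demo run'}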
@@ -546,7 +570,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
default=None,
editable=False,
related_name='%(class)s_unified_jobs',
on_delete=models.SET_NULL,
on_delete=polymorphic.SET_NULL,
)
launch_type = models.CharField(
max_length=20,
@@ -673,9 +697,9 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
)
def get_absolute_url(self, request=None):
real_instance = self.get_real_instance()
if real_instance != self:
return real_instance.get_absolute_url(request=request)
RealClass = self.get_real_instance_class()
if RealClass != UnifiedJob:
return RealClass.get_absolute_url(RealClass(pk=self.pk), request=request)
else:
return ''
@@ -694,8 +718,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
def supports_isolation(cls):
return False
@classmethod
def _get_parent_field_name(cls):
def _get_parent_field_name(self):
return 'unified_job_template' # Override in subclasses.
@classmethod
@@ -828,7 +851,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
'''
unified_job_class = self.__class__
unified_jt_class = self._get_unified_job_template_class()
parent_field_name = unified_job_class._get_parent_field_name()
parent_field_name = self._get_parent_field_name()
fields = unified_jt_class._get_unified_job_field_names() | set([parent_field_name])
create_data = {}
@@ -847,10 +870,10 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
setattr(unified_job, fd, val)
unified_job.save()
# Labels copied here
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
copy_m2m_relationships(self, unified_job, fields)
# Labels copied here
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
copy_m2m_relationships(self, unified_job, fields)
return unified_job
@@ -866,16 +889,18 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
except JobLaunchConfig.DoesNotExist:
return None
def create_config_from_prompts(self, kwargs):
def create_config_from_prompts(self, kwargs, parent=None):
'''
Create a launch configuration entry for this job, given prompts
returns None if it cannot be created
'''
if self.unified_job_template is None:
return None
JobLaunchConfig = self._meta.get_field('launch_config').related_model
config = JobLaunchConfig(job=self)
valid_fields = self.unified_job_template.get_ask_mapping().keys()
if parent is None:
parent = getattr(self, self._get_parent_field_name())
if parent is None:
return
valid_fields = parent.get_ask_mapping().keys()
# Special cases allowed for workflows
if hasattr(self, 'extra_vars'):
valid_fields.extend(['survey_passwords', 'extra_vars'])
@@ -892,8 +917,9 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
setattr(config, key, value)
config.save()
job_creds = (set(kwargs.get('credentials', [])) -
set(self.unified_job_template.credentials.all()))
job_creds = set(kwargs.get('credentials', []))
if 'credentials' in [field.name for field in parent._meta.get_fields()]:
job_creds = job_creds - set(parent.credentials.all())
if job_creds:
config.credentials.add(*job_creds)
return config
@@ -1112,14 +1138,6 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
pass
return None
@property
def celery_task(self):
try:
if self.celery_task_id:
return TaskMeta.objects.get(task_id=self.celery_task_id)
except TaskMeta.DoesNotExist:
pass
def get_passwords_needed_to_start(self):
return []
@@ -1224,29 +1242,6 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
return (True, opts)
def start_celery_task(self, opts, error_callback, success_callback, queue):
kwargs = {
'link_error': error_callback,
'link': success_callback,
'queue': None,
'task_id': None,
}
if not self.celery_task_id:
raise RuntimeError("Expected celery_task_id to be set on model.")
kwargs['task_id'] = self.celery_task_id
task_class = self._get_task_class()
kwargs['queue'] = queue
task_class().apply_async([self.pk], opts, **kwargs)
def start(self, error_callback, success_callback, **kwargs):
'''
Start the task running via Celery.
'''
(res, opts) = self.pre_start(**kwargs)
if res:
self.start_celery_task(opts, error_callback, success_callback)
return res
def signal_start(self, **kwargs):
"""Notify the task runner system to begin work on this task."""
@@ -1266,8 +1261,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
self.update_fields(start_args=json.dumps(kwargs), status='pending')
self.websocket_emit_status("pending")
from awx.main.scheduler.tasks import run_job_launch
connection.on_commit(lambda: run_job_launch.delay(self.id))
schedule_task_manager()
# Each type of unified job has a different Task class; get the
# appropriate one.
@@ -1282,46 +1276,35 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
# Done!
return True
@property
def actually_running(self):
# returns True if the job is running in the appropriate dispatcher process
running = False
if all([
self.status == 'running',
self.celery_task_id,
self.execution_node
]):
# If the job is marked as running, but the dispatcher
# doesn't know about it (or the dispatcher doesn't reply),
# then cancel the job
timeout = 5
try:
running = self.celery_task_id in ControlDispatcher(
'dispatcher', self.execution_node
).running(timeout=timeout)
except socket.timeout:
logger.error(six.text_type(
'could not reach dispatcher on {} within {}s'
).format(self.execution_node, timeout))
running = False
return running
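The liveness check above boils down to one pattern: ask the dispatcher's control channel for the set of running task ids, and treat an unreachable node the same as a lost task. A hedged sketch, where `control` is a stand-in parameter rather than the real dispatcher API:

import socket

def is_actually_running(task_id, control, timeout=5):
    # control.running() stands in for the dispatcher control channel's
    # running-task listing; a timeout counts as "not running".
    try:
        return task_id in control.running(timeout=timeout)
    except socket.timeout:
        return False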
@property
def can_cancel(self):
return bool(self.status in CAN_CANCEL)
def _force_cancel(self):
# Update the status to 'canceled' if we can detect that the job
# really isn't running (i.e. celery has crashed or forcefully
# killed the worker).
task_statuses = ('STARTED', 'SUCCESS', 'FAILED', 'RETRY', 'REVOKED')
try:
taskmeta = self.celery_task
if not taskmeta or taskmeta.status not in task_statuses:
return
from celery import current_app
i = current_app.control.inspect()
for v in (i.active() or {}).values():
if taskmeta.task_id in [x['id'] for x in v]:
return
for v in (i.reserved() or {}).values():
if taskmeta.task_id in [x['id'] for x in v]:
return
for v in (i.revoked() or {}).values():
if taskmeta.task_id in [x['id'] for x in v]:
return
for v in (i.scheduled() or {}).values():
if taskmeta.task_id in [x['id'] for x in v]:
return
instance = self.__class__.objects.get(pk=self.pk)
if instance.can_cancel:
instance.status = 'canceled'
update_fields = ['status']
if not instance.job_explanation:
instance.job_explanation = 'Forced cancel'
update_fields.append('job_explanation')
instance.save(update_fields=update_fields)
self.websocket_emit_status("canceled")
except Exception: # FIXME: Log this exception!
if settings.DEBUG:
raise
def _build_job_explanation(self):
if not self.job_explanation:
return 'Previous Task Canceled: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % \
@@ -1340,13 +1323,14 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
if self.status in ('pending', 'waiting', 'new'):
self.status = 'canceled'
cancel_fields.append('status')
if self.status == 'running' and not self.actually_running:
self.status = 'canceled'
cancel_fields.append('status')
if job_explanation is not None:
self.job_explanation = job_explanation
cancel_fields.append('job_explanation')
self.save(update_fields=cancel_fields)
self.websocket_emit_status("canceled")
if settings.BROKER_URL.startswith('amqp://'):
self._force_cancel()
return self.cancel_flag
@property
@@ -1402,7 +1386,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
r['{}_user_last_name'.format(name)] = created_by.last_name
return r
def get_celery_queue_name(self):
def get_queue_name(self):
return self.controller_node or self.execution_node or settings.CELERY_DEFAULT_QUEUE
def is_isolated(self):

View File

@@ -9,6 +9,7 @@ import logging
from django.db import models
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from django.core.exceptions import ObjectDoesNotExist
#from django import settings as tower_settings
# AWX
@@ -23,14 +24,14 @@ from awx.main.models.rbac import (
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR
)
from awx.main.fields import ImplicitRoleField
from awx.main.fields import ImplicitRoleField, AskForField
from awx.main.models.mixins import (
ResourceMixin,
SurveyJobTemplateMixin,
SurveyJobMixin,
RelatedJobsMixin,
)
from awx.main.models.jobs import LaunchTimeConfig
from awx.main.models.jobs import LaunchTimeConfigBase, LaunchTimeConfig, JobTemplate
from awx.main.models.credential import Credential
from awx.main.redact import REPLACE_STR
from awx.main.fields import JSONField
@@ -81,7 +82,7 @@ class WorkflowNodeBase(CreatedModifiedModel, LaunchTimeConfig):
success_parents = getattr(self, '%ss_success' % self.__class__.__name__.lower()).all()
failure_parents = getattr(self, '%ss_failure' % self.__class__.__name__.lower()).all()
always_parents = getattr(self, '%ss_always' % self.__class__.__name__.lower()).all()
return success_parents | failure_parents | always_parents
return (success_parents | failure_parents | always_parents).order_by('id')
@classmethod
def _get_workflow_job_field_names(cls):
@@ -183,10 +184,26 @@ class WorkflowJobNode(WorkflowNodeBase):
default={},
editable=False,
)
do_not_run = models.BooleanField(
default=False,
help_text=_("Indidcates that a job will not be created when True. Workflow runtime "
"semantics will mark this True if the node is in a path that will "
"decidedly not be ran. A value of False means the node may not run."),
)
def get_absolute_url(self, request=None):
return reverse('api:workflow_job_node_detail', kwargs={'pk': self.pk}, request=request)
def prompts_dict(self, *args, **kwargs):
r = super(WorkflowJobNode, self).prompts_dict(*args, **kwargs)
# Explanation: WFJT extra_vars still break the pattern, so they are not
# put through prompts processing; inventory, however, is only accepted
# if the JT prompts for it, so it goes through this mechanism
if self.workflow_job and self.workflow_job.inventory_id:
# workflow job inventory takes precedence
r['inventory'] = self.workflow_job.inventory
return r
def get_job_kwargs(self):
'''
In advance of creating a new unified job as part of a workflow,
@@ -198,7 +215,14 @@ class WorkflowJobNode(WorkflowNodeBase):
data = {}
ujt_obj = self.unified_job_template
if ujt_obj is not None:
accepted_fields, ignored_fields, errors = ujt_obj._accept_or_ignore_job_kwargs(**self.prompts_dict())
# MERGE note: move this to prompts_dict method on node when merging
# with the workflow inventory branch
prompts_data = self.prompts_dict()
if isinstance(ujt_obj, WorkflowJobTemplate):
if self.workflow_job.extra_vars:
prompts_data.setdefault('extra_vars', {})
prompts_data['extra_vars'].update(self.workflow_job.extra_vars_dict)
accepted_fields, ignored_fields, errors = ujt_obj._accept_or_ignore_job_kwargs(**prompts_data)
if errors:
logger.info(_('Bad launch configuration starting template {template_pk} as part of '
'workflow {workflow_pk}. Errors:\n{error_text}').format(
@@ -206,13 +230,24 @@ class WorkflowJobNode(WorkflowNodeBase):
workflow_pk=self.pk,
error_text=errors))
data.update(accepted_fields) # missing fields are handled in the scheduler
try:
# config saved on the workflow job itself
wj_config = self.workflow_job.launch_config
except ObjectDoesNotExist:
wj_config = None
if wj_config:
accepted_fields, ignored_fields, errors = ujt_obj._accept_or_ignore_job_kwargs(**wj_config.prompts_dict())
accepted_fields.pop('extra_vars', None) # merge handled with other extra_vars later
data.update(accepted_fields)
# build ancestor artifacts, save them to node model for later
aa_dict = {}
is_root_node = True
for parent_node in self.get_parent_nodes():
is_root_node = False
aa_dict.update(parent_node.ancestor_artifacts)
if parent_node.job and hasattr(parent_node.job, 'artifacts'):
aa_dict.update(parent_node.job.artifacts)
if aa_dict:
if aa_dict and not is_root_node:
self.ancestor_artifacts = aa_dict
self.save(update_fields=['ancestor_artifacts'])
# process password list
@@ -229,17 +264,25 @@ class WorkflowJobNode(WorkflowNodeBase):
data['survey_passwords'] = password_dict
# process extra_vars
extra_vars = data.get('extra_vars', {})
if aa_dict:
functional_aa_dict = copy(aa_dict)
functional_aa_dict.pop('_ansible_no_log', None)
extra_vars.update(functional_aa_dict)
# Workflow Job extra_vars higher precedence than ancestor artifacts
if self.workflow_job and self.workflow_job.extra_vars:
extra_vars.update(self.workflow_job.extra_vars_dict)
if ujt_obj and isinstance(ujt_obj, (JobTemplate, WorkflowJobTemplate)):
if aa_dict:
functional_aa_dict = copy(aa_dict)
functional_aa_dict.pop('_ansible_no_log', None)
extra_vars.update(functional_aa_dict)
if ujt_obj and isinstance(ujt_obj, JobTemplate):
# Workflow Job extra_vars higher precedence than ancestor artifacts
if self.workflow_job and self.workflow_job.extra_vars:
extra_vars.update(self.workflow_job.extra_vars_dict)
if extra_vars:
data['extra_vars'] = extra_vars
# ensure that unified jobs created by WorkflowJobs are marked
data['_eager_fields'] = {'launch_type': 'workflow'}
# Extra processing in the case that this is a slice job
if 'job_slice' in self.ancestor_artifacts and is_root_node:
data['_eager_fields']['allow_simultaneous'] = True
data['_eager_fields']['job_slice_number'] = self.ancestor_artifacts['job_slice']
data['_eager_fields']['job_slice_count'] = self.workflow_job.workflow_job_nodes.count()
data['_prevent_slicing'] = True
return data
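The merge order in get_job_kwargs is the part worth internalizing: ancestor artifacts are applied to extra_vars first, then the workflow job's own extra_vars override them on conflict. A worked illustration with assumed values:

extra_vars = {}
ancestor_artifacts = {'build_id': 41, 'region': 'us-east-1'}  # from parent nodes
workflow_extra_vars = {'build_id': 42}                        # from the workflow job
extra_vars.update(ancestor_artifacts)
extra_vars.update(workflow_extra_vars)  # workflow job wins on conflict
print(extra_vars)  # -> {'build_id': 42, 'region': 'us-east-1'}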
@@ -261,6 +304,13 @@ class WorkflowJobOptions(BaseModel):
def workflow_nodes(self):
raise NotImplementedError()
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in WorkflowJobOptions._meta.fields) | set(
# NOTE: if other prompts are added to WFJT, put fields in WJOptions, remove inventory
['name', 'description', 'schedule', 'survey_passwords', 'labels', 'inventory']
)
def _create_workflow_nodes(self, old_node_list, user=None):
node_links = {}
for old_node in old_node_list:
@@ -288,7 +338,7 @@ class WorkflowJobOptions(BaseModel):
def create_relaunch_workflow_job(self):
new_workflow_job = self.copy_unified_job()
if self.workflow_job_template is None:
if self.unified_job_template_id is None:
new_workflow_job.copy_nodes_from_original(original=self)
return new_workflow_job
@@ -310,6 +360,19 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
on_delete=models.SET_NULL,
related_name='workflows',
)
inventory = models.ForeignKey(
'Inventory',
related_name='%(class)ss',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
help_text=_('Inventory applied to all job templates in workflow that prompt for inventory.'),
)
ask_inventory_on_launch = AskForField(
blank=True,
default=False,
)
admin_role = ImplicitRoleField(parent_role=[
'singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
'organization.workflow_admin_role'
@@ -331,12 +394,6 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
def _get_unified_job_class(cls):
return WorkflowJob
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in WorkflowJobOptions._meta.fields) | set(
['name', 'description', 'schedule', 'survey_passwords', 'labels']
)
@classmethod
def _get_unified_jt_copy_names(cls):
base_list = super(WorkflowJobTemplate, cls)._get_unified_jt_copy_names()
@@ -370,27 +427,45 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
workflow_job.copy_nodes_from_original(original=self)
return workflow_job
def _accept_or_ignore_job_kwargs(self, _exclude_errors=(), **kwargs):
def _accept_or_ignore_job_kwargs(self, **kwargs):
exclude_errors = kwargs.pop('_exclude_errors', [])
prompted_data = {}
rejected_data = {}
accepted_vars, rejected_vars, errors_dict = self.accept_or_ignore_variables(
kwargs.get('extra_vars', {}),
_exclude_errors=exclude_errors,
extra_passwords=kwargs.get('survey_passwords', {}))
if accepted_vars:
prompted_data['extra_vars'] = accepted_vars
if rejected_vars:
rejected_data['extra_vars'] = rejected_vars
errors_dict = {}
# WFJTs do not behave like JTs; they cannot accept inventory, credentials, etc.
bad_kwargs = kwargs.copy()
bad_kwargs.pop('extra_vars', None)
bad_kwargs.pop('survey_passwords', None)
if bad_kwargs:
rejected_data.update(bad_kwargs)
for field in bad_kwargs:
errors_dict[field] = _('Field is not allowed for use in workflows.')
# Handle all the fields that have prompting rules
# NOTE: If WFJTs prompt for other things, this logic can be combined with jobs
for field_name, ask_field_name in self.get_ask_mapping().items():
if field_name == 'extra_vars':
accepted_vars, rejected_vars, vars_errors = self.accept_or_ignore_variables(
kwargs.get('extra_vars', {}),
_exclude_errors=exclude_errors,
extra_passwords=kwargs.get('survey_passwords', {}))
if accepted_vars:
prompted_data['extra_vars'] = accepted_vars
if rejected_vars:
rejected_data['extra_vars'] = rejected_vars
errors_dict.update(vars_errors)
continue
if field_name not in kwargs:
continue
new_value = kwargs[field_name]
old_value = getattr(self, field_name)
if new_value == old_value:
continue # no-op case: Counted as neither accepted or ignored
elif getattr(self, ask_field_name):
# accepted prompt
prompted_data[field_name] = new_value
else:
# unprompted - template is not configured to accept field on launch
rejected_data[field_name] = new_value
# Not considered an error for manual launch, to support old
# behavior of putting them in ignored_fields and launching anyway
if 'prompts' not in exclude_errors:
errors_dict[field_name] = _('Field is not configured to prompt on launch.')
return prompted_data, rejected_data, errors_dict
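Stripped of the extra_vars special case, the prompting rule above is: a launch-time value passes through only when it differs from the template's value and the template's ask flag permits it. A minimal self-contained sketch of that rule (the Template tuple and field names are illustrative only):

from collections import namedtuple

Template = namedtuple('Template', ['inventory', 'ask_inventory_on_launch'])

def filter_prompts(template, kwargs, ask_mapping):
    accepted, rejected = {}, {}
    for field, ask_flag in ask_mapping.items():
        if field not in kwargs:
            continue
        new_value, old_value = kwargs[field], getattr(template, field)
        if new_value == old_value:
            continue  # no-op: counted as neither accepted nor rejected
        if getattr(template, ask_flag):
            accepted[field] = new_value
        else:
            rejected[field] = new_value
    return accepted, rejected

t = Template(inventory=None, ask_inventory_on_launch=True)
print(filter_prompts(t, {'inventory': 7}, {'inventory': 'ask_inventory_on_launch'}))
# -> ({'inventory': 7}, {})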
@@ -420,7 +495,7 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
return WorkflowJob.objects.filter(workflow_job_template=self)
class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin):
class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin, LaunchTimeConfigBase):
class Meta:
app_label = 'main'
ordering = ('id',)
@@ -433,13 +508,28 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
default=None,
on_delete=models.SET_NULL,
)
job_template = models.ForeignKey(
'JobTemplate',
related_name='slice_workflow_jobs',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
help_text=_("If automatically created for a sliced job run, the job template "
"the workflow job was created from."),
)
is_sliced_job = models.BooleanField(
default=False
)
@property
def workflow_nodes(self):
return self.workflow_job_nodes
@classmethod
def _get_parent_field_name(cls):
def _get_parent_field_name(self):
if self.job_template_id:
# This is a workflow job which is a container for slice jobs
return 'job_template'
return 'workflow_job_template'
@classmethod
@@ -472,6 +562,24 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
def task_impact(self):
return 0
def get_ancestor_workflows(self):
"""Returns a list of WFJTs that are indirect parents of this workflow job
say WFJTs are set up to spawn in order of A->B->C, and this workflow job
came from C, then C is the parent and [B, A] will be returned from this.
"""
ancestors = []
wj_ids = set([self.pk])
wj = self.get_workflow_job()
while wj and wj.workflow_job_template_id:
if wj.pk in wj_ids:
logger.critical('Cycles detected in the workflow jobs graph, '
'this is not normal and suggests task manager degeneracy.')
break
wj_ids.add(wj.pk)
ancestors.append(wj.workflow_job_template)
wj = wj.get_workflow_job()
return ancestors
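The ancestor walk above is a parent-pointer traversal with a visited-set guard against cycles. The same shape, standalone, with integer ids standing in for workflow jobs:

def walk_ancestors(start_id, parent_of):
    # parent_of: {child_id: parent_id}; a revisited id means a corrupt graph
    ancestors, seen = [], {start_id}
    current = parent_of.get(start_id)
    while current is not None:
        if current in seen:
            break  # cycle detected: stop instead of looping forever
        seen.add(current)
        ancestors.append(current)
        current = parent_of.get(current)
    return ancestors

print(walk_ancestors(3, {3: 2, 2: 1}))  # -> [2, 1]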
def get_notification_templates(self):
return self.workflow_job_template.notification_templates
@@ -482,8 +590,8 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
def preferred_instance_groups(self):
return []
'''
A WorkflowJob is a virtual job. It doesn't result in a celery task.
'''
def start_celery_task(self, opts, error_callback, success_callback, queue):
return None
@property
def actually_running(self):
# WorkflowJobs don't _actually_ run anything in the dispatcher, so
# there's no point in asking the dispatcher if it knows about this task
return self.status == 'running'

View File

@@ -2,6 +2,7 @@
# All Rights Reserved.
# Python
import json
import logging
import os
@@ -12,10 +13,31 @@ from django.conf import settings
# Kombu
from kombu import Connection, Exchange, Producer
from kombu.serialization import registry
__all__ = ['CallbackQueueDispatcher']
# use a custom JSON serializer so we can properly handle !unsafe and !vault
# objects that may exist in events emitted by the callback plugin
# see: https://github.com/ansible/ansible/pull/38759
class AnsibleJSONEncoder(json.JSONEncoder):
def default(self, o):
if getattr(o, 'yaml_tag', None) == '!vault':
return o.data
return super(AnsibleJSONEncoder, self).default(o)
registry.register(
'json-ansible',
lambda obj: json.dumps(obj, cls=AnsibleJSONEncoder),
lambda obj: json.loads(obj),
content_type='application/json',
content_encoding='utf-8'
)
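A quick round trip shows what the registered serializer does with a vault-tagged value. FakeVault below is a stand-in for the tagged objects the callback plugin emits, not a real Ansible type, and the snippet assumes the AnsibleJSONEncoder defined above:

import json

class FakeVault(object):
    yaml_tag = '!vault'
    def __init__(self, data):
        self.data = data

payload = {'secret': FakeVault('$ANSIBLE_VAULT;1.1;AES256;...')}
print(json.dumps(payload, cls=AnsibleJSONEncoder))
# -> {"secret": "$ANSIBLE_VAULT;1.1;AES256;..."}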
class CallbackQueueDispatcher(object):
def __init__(self):
@@ -41,7 +63,7 @@ class CallbackQueueDispatcher(object):
producer = Producer(self.connection)
producer.publish(obj,
serializer='json',
serializer='json-ansible',
compression='bzip2',
exchange=self.exchange,
declare=[self.exchange],

View File

@@ -1,11 +1,4 @@
from awx.main.models import (
Job,
AdHocCommand,
InventoryUpdate,
ProjectUpdate,
WorkflowJob,
)
from collections import deque
class SimpleDAG(object):
@@ -13,12 +6,51 @@ class SimpleDAG(object):
def __init__(self):
self.nodes = []
self.edges = []
self.root_nodes = set([])
r'''
Track the node_obj->node index:
a dict whose key is the full workflow node object (or whatever we are
storing in ['node_object']) and whose value is an index into self.nodes
'''
self.node_obj_to_node_index = dict()
r'''
Track per-node from->to edges
i.e.
{
'success': {
1: [2, 3],
4: [2, 3],
},
'failed': {
1: [5],
}
}
'''
self.node_from_edges_by_label = dict()
r'''
Track per-node reverse relationship (child to parent)
i.e.
{
'success': {
2: [1, 4],
3: [1, 4],
},
'failed': {
5: [1],
}
}
'''
self.node_to_edges_by_label = dict()
def __contains__(self, obj):
for node in self.nodes:
if node['node_object'] == obj:
return True
if obj in self.node_obj_to_node_index:
return True
return False
def __len__(self):
@@ -27,98 +59,169 @@ class SimpleDAG(object):
def __iter__(self):
return self.nodes.__iter__()
def generate_graphviz_plot(self):
def short_string_obj(obj):
if type(obj) == Job:
type_str = "Job"
if type(obj) == AdHocCommand:
type_str = "AdHocCommand"
elif type(obj) == InventoryUpdate:
type_str = "Inventory"
elif type(obj) == ProjectUpdate:
type_str = "Project"
elif type(obj) == WorkflowJob:
type_str = "Workflow"
else:
type_str = "Unknown"
type_str += "%s" % str(obj.id)
return type_str
def generate_graphviz_plot(self, file_name="/awx_devel/graph.gv"):
def run_status(obj):
dnr = "RUN"
status = "NA"
if hasattr(obj, 'job') and obj.job and hasattr(obj.job, 'status'):
status = obj.job.status
if hasattr(obj, 'do_not_run') and obj.do_not_run is True:
dnr = "DNR"
return "{}_{}_{}".format(dnr, status, obj.id)
doc = """
digraph g {
rankdir = LR
"""
for n in self.nodes:
obj = n['node_object']
status = "NA"
if hasattr(obj, 'job') and obj.job:
status = obj.job.status
color = 'black'
if status == 'successful':
color = 'green'
elif status == 'failed':
color = 'red'
elif obj.do_not_run is True:
color = 'gray'
doc += "%s [color = %s]\n" % (
short_string_obj(n['node_object']),
"red" if n['node_object'].status == 'running' else "black",
)
for from_node, to_node, label in self.edges:
doc += "%s -> %s [ label=\"%s\" ];\n" % (
short_string_obj(self.nodes[from_node]['node_object']),
short_string_obj(self.nodes[to_node]['node_object']),
label,
run_status(n['node_object']),
color
)
for label, edges in self.node_from_edges_by_label.iteritems():
for from_node, to_nodes in edges.iteritems():
for to_node in to_nodes:
doc += "%s -> %s [ label=\"%s\" ];\n" % (
run_status(self.nodes[from_node]['node_object']),
run_status(self.nodes[to_node]['node_object']),
label,
)
doc += "}\n"
gv_file = open('/tmp/graph.gv', 'w')
gv_file = open(file_name, 'w')
gv_file.write(doc)
gv_file.close()
def add_node(self, obj, metadata=None):
if self.find_ord(obj) is None:
self.nodes.append(dict(node_object=obj, metadata=metadata))
'''
Assume a node is a root node until it is added as another node's child
'''
node_index = len(self.nodes)
self.root_nodes.add(node_index)
self.node_obj_to_node_index[obj] = node_index
entry = dict(node_object=obj, metadata=metadata)
self.nodes.append(entry)
def add_edge(self, from_obj, to_obj, label=None):
def add_edge(self, from_obj, to_obj, label):
from_obj_ord = self.find_ord(from_obj)
to_obj_ord = self.find_ord(to_obj)
if from_obj_ord is None or to_obj_ord is None:
raise LookupError("Object not found")
self.edges.append((from_obj_ord, to_obj_ord, label))
def add_edges(self, edgelist):
for edge_pair in edgelist:
self.add_edge(edge_pair[0], edge_pair[1], edge_pair[2])
'''
The to-node is no longer a root node
'''
self.root_nodes.discard(to_obj_ord)
if from_obj_ord is None and to_obj_ord is None:
raise LookupError("From object {} and to object not found".format(from_obj, to_obj))
elif from_obj_ord is None:
raise LookupError("From object not found {}".format(from_obj))
elif to_obj_ord is None:
raise LookupError("To object not found {}".format(to_obj))
self.node_from_edges_by_label.setdefault(label, dict()) \
.setdefault(from_obj_ord, [])
self.node_to_edges_by_label.setdefault(label, dict()) \
.setdefault(to_obj_ord, [])
self.node_from_edges_by_label[label][from_obj_ord].append(to_obj_ord)
self.node_to_edges_by_label[label][to_obj_ord].append(from_obj_ord)
def find_ord(self, obj):
for idx in range(len(self.nodes)):
if obj == self.nodes[idx]['node_object']:
return idx
return None
return self.node_obj_to_node_index.get(obj, None)
def _get_dependencies_by_label(self, node_index, label):
return [self.nodes[index] for index in
self.node_from_edges_by_label.get(label, {})
.get(node_index, [])]
def get_dependencies(self, obj, label=None):
antecedents = []
this_ord = self.find_ord(obj)
for node, dep, lbl in self.edges:
if label:
if node == this_ord and lbl == label:
antecedents.append(self.nodes[dep])
else:
if node == this_ord:
antecedents.append(self.nodes[dep])
return antecedents
nodes = []
if label:
return self._get_dependencies_by_label(this_ord, label)
else:
nodes = []
map(lambda l: nodes.extend(self._get_dependencies_by_label(this_ord, l)),
self.node_from_edges_by_label.keys())
return nodes
def _get_dependents_by_label(self, node_index, label):
return [self.nodes[index] for index in
self.node_to_edges_by_label.get(label, {})
.get(node_index, [])]
def get_dependents(self, obj, label=None):
descendants = []
this_ord = self.find_ord(obj)
for node, dep, lbl in self.edges:
if label:
if dep == this_ord and lbl == label:
descendants.append(self.nodes[node])
else:
if dep == this_ord:
descendants.append(self.nodes[node])
return descendants
def get_leaf_nodes(self):
leafs = []
for n in self.nodes:
if len(self.get_dependencies(n['node_object'])) < 1:
leafs.append(n)
return leafs
nodes = []
if label:
return self._get_dependents_by_label(this_ord, label)
else:
nodes = []
map(lambda l: nodes.extend(self._get_dependents_by_label(this_ord, l)),
self.node_to_edges_by_label.keys())
return nodes
def get_root_nodes(self):
roots = []
for n in self.nodes:
if len(self.get_dependents(n['node_object'])) < 1:
roots.append(n)
return roots
return [self.nodes[index] for index in self.root_nodes]
def has_cycle(self):
node_objs = [node['node_object'] for node in self.get_root_nodes()]
node_objs_visited = set([])
path = set([])
stack = node_objs
res = False
if len(self.nodes) != 0 and len(node_objs) == 0:
return True
while stack:
node_obj = stack.pop()
children = [node['node_object'] for node in self.get_dependencies(node_obj)]
children_to_add = filter(lambda node_obj: node_obj not in node_objs_visited, children)
if children_to_add:
if node_obj in path:
res = True
break
path.add(node_obj)
stack.append(node_obj)
stack.extend(children_to_add)
else:
node_objs_visited.add(node_obj)
path.discard(node_obj)
return res
def sort_nodes_topological(self):
nodes_sorted = deque()
obj_ids_processed = set([])
def visit(node):
obj = node['node_object']
if obj.id in obj_ids_processed:
return
for child in self.get_dependencies(obj):
visit(child)
obj_ids_processed.add(obj.id)
nodes_sorted.appendleft(node)
for node in self.nodes:
obj = node['node_object']
if obj.id in obj_ids_processed:
continue
visit(node)
return nodes_sorted
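The heart of this rewrite is replacing per-call scans of self.edges with label-indexed adjacency maps and a maintained root set. A self-contained sketch of that bookkeeping, with plain strings standing in for workflow node objects (TinyDAG and all names below are illustrative, not AWX API):

class TinyDAG(object):
    def __init__(self):
        self.nodes = []        # node payloads, in insertion order
        self.index = {}        # payload -> position in self.nodes
        self.from_edges = {}   # label -> {from_idx: [to_idx, ...]}
        self.to_edges = {}     # label -> {to_idx: [from_idx, ...]}
        self.roots = set()     # indices of nodes with no parent yet

    def add_node(self, obj):
        if obj not in self.index:
            self.index[obj] = len(self.nodes)
            self.roots.add(self.index[obj])
            self.nodes.append(obj)

    def add_edge(self, frm, to, label):
        f, t = self.index[frm], self.index[to]
        self.roots.discard(t)  # the to-node gained a parent
        self.from_edges.setdefault(label, {}).setdefault(f, []).append(t)
        self.to_edges.setdefault(label, {}).setdefault(t, []).append(f)

dag = TinyDAG()
for name in ('a', 'b', 'c'):
    dag.add_node(name)
dag.add_edge('a', 'b', 'success_nodes')
dag.add_edge('a', 'c', 'failure_nodes')
print([dag.nodes[i] for i in dag.roots])  # -> ['a']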

View File

@@ -1,4 +1,13 @@
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_text
# Python
from awx.main.models import (
WorkflowJobTemplateNode,
WorkflowJobNode,
)
# AWX
from awx.main.scheduler.dag_simple import SimpleDAG
@@ -9,47 +18,88 @@ class WorkflowDAG(SimpleDAG):
if workflow_job:
self._init_graph(workflow_job)
def _init_graph(self, workflow_job):
node_qs = workflow_job.workflow_job_nodes
workflow_nodes = node_qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes').all()
for workflow_node in workflow_nodes:
def _init_graph(self, workflow_job_or_jt):
if hasattr(workflow_job_or_jt, 'workflow_job_template_nodes'):
vals = ['from_workflowjobtemplatenode_id', 'to_workflowjobtemplatenode_id']
filters = {
'from_workflowjobtemplatenode__workflow_job_template_id': workflow_job_or_jt.id
}
workflow_nodes = workflow_job_or_jt.workflow_job_template_nodes
success_nodes = WorkflowJobTemplateNode.success_nodes.through.objects.filter(**filters).values_list(*vals)
failure_nodes = WorkflowJobTemplateNode.failure_nodes.through.objects.filter(**filters).values_list(*vals)
always_nodes = WorkflowJobTemplateNode.always_nodes.through.objects.filter(**filters).values_list(*vals)
elif hasattr(workflow_job_or_jt, 'workflow_job_nodes'):
vals = ['from_workflowjobnode_id', 'to_workflowjobnode_id']
filters = {
'from_workflowjobnode__workflow_job_id': workflow_job_or_jt.id
}
workflow_nodes = workflow_job_or_jt.workflow_job_nodes
success_nodes = WorkflowJobNode.success_nodes.through.objects.filter(**filters).values_list(*vals)
failure_nodes = WorkflowJobNode.failure_nodes.through.objects.filter(**filters).values_list(*vals)
always_nodes = WorkflowJobNode.always_nodes.through.objects.filter(**filters).values_list(*vals)
else:
raise RuntimeError("Unexpected object {} {}".format(type(workflow_job_or_jt), workflow_job_or_jt))
wfn_by_id = dict()
for workflow_node in workflow_nodes.all():
wfn_by_id[workflow_node.id] = workflow_node
self.add_node(workflow_node)
for node_type in ['success_nodes', 'failure_nodes', 'always_nodes']:
for workflow_node in workflow_nodes:
related_nodes = getattr(workflow_node, node_type).all()
for related_node in related_nodes:
self.add_edge(workflow_node, related_node, node_type)
for edge in success_nodes:
self.add_edge(wfn_by_id[edge[0]], wfn_by_id[edge[1]], 'success_nodes')
for edge in failure_nodes:
self.add_edge(wfn_by_id[edge[0]], wfn_by_id[edge[1]], 'failure_nodes')
for edge in always_nodes:
self.add_edge(wfn_by_id[edge[0]], wfn_by_id[edge[1]], 'always_nodes')
def _are_relevant_parents_finished(self, node):
obj = node['node_object']
parent_nodes = [p['node_object'] for p in self.get_dependents(obj)]
for p in parent_nodes:
if p.do_not_run is True:
continue
elif p.unified_job_template is None:
continue
# do_not_run is False, node might still run a job and thus blocks children
elif not p.job:
return False
# Node decidedly got a job; check if job is done
elif p.job and p.job.status not in ['successful', 'failed', 'error', 'canceled']:
return False
return True
def bfs_nodes_to_run(self):
root_nodes = self.get_root_nodes()
nodes = root_nodes
nodes = self.get_root_nodes()
nodes_found = []
node_ids_visited = set()
for index, n in enumerate(nodes):
obj = n['node_object']
job = obj.job
if not job:
nodes_found.append(n)
# Job is about to run or is running. Hold our horses and wait for
# the job to finish. We can't proceed down the graph path until we
# have the job result.
elif job.status not in ['failed', 'successful']:
if obj.id in node_ids_visited:
continue
elif job.status == 'failed':
children_failed = self.get_dependencies(obj, 'failure_nodes')
children_always = self.get_dependencies(obj, 'always_nodes')
children_all = children_failed + children_always
nodes.extend(children_all)
elif job.status == 'successful':
children_success = self.get_dependencies(obj, 'success_nodes')
children_always = self.get_dependencies(obj, 'always_nodes')
children_all = children_success + children_always
nodes.extend(children_all)
node_ids_visited.add(obj.id)
if obj.do_not_run is True:
continue
if obj.job:
if obj.job.status in ['failed', 'error', 'canceled']:
nodes.extend(self.get_dependencies(obj, 'failure_nodes') +
self.get_dependencies(obj, 'always_nodes'))
elif obj.job.status == 'successful':
nodes.extend(self.get_dependencies(obj, 'success_nodes') +
self.get_dependencies(obj, 'always_nodes'))
elif obj.unified_job_template is None:
nodes.extend(self.get_dependencies(obj, 'failure_nodes') +
self.get_dependencies(obj, 'always_nodes'))
else:
if self._are_relevant_parents_finished(n):
nodes_found.append(n)
return [n['node_object'] for n in nodes_found]
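The expansion rule in bfs_nodes_to_run can be stated on its own: a decided job opens only the children hanging off the matching label (plus always_nodes), while an undecided job blocks descent. A small sketch of just that rule:

def labels_to_expand(job_status):
    if job_status in ('failed', 'error', 'canceled'):
        return ['failure_nodes', 'always_nodes']
    elif job_status == 'successful':
        return ['success_nodes', 'always_nodes']
    return []  # pending/running: wait for the result before descending

print(labels_to_expand('canceled'))  # -> ['failure_nodes', 'always_nodes']
print(labels_to_expand('running'))   # -> []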
def cancel_node_jobs(self):
cancel_finished = True
for n in self.nodes:
obj = n['node_object']
job = obj.job
@@ -57,43 +107,118 @@ class WorkflowDAG(SimpleDAG):
if not job:
continue
elif job.can_cancel:
cancel_finished = False
job.cancel()
return cancel_finished
def is_workflow_done(self):
root_nodes = self.get_root_nodes()
nodes = root_nodes
is_failed = False
for node in self.nodes:
obj = node['node_object']
if obj.do_not_run is False and not obj.job and obj.unified_job_template:
return False
elif obj.job and obj.job.status not in ['successful', 'failed', 'canceled', 'error']:
return False
return True
for index, n in enumerate(nodes):
obj = n['node_object']
job = obj.job
def has_workflow_failed(self):
failed_nodes = []
res = False
failed_path_nodes_id_status = []
failed_unified_job_template_node_ids = []
if obj.unified_job_template is None:
is_failed = True
continue
elif not job:
return False, False
for node in self.nodes:
obj = node['node_object']
if obj.do_not_run is False and obj.unified_job_template is None:
failed_nodes.append(node)
elif obj.job and obj.job.status in ['failed', 'canceled', 'error']:
failed_nodes.append(node)
children_success = self.get_dependencies(obj, 'success_nodes')
children_failed = self.get_dependencies(obj, 'failure_nodes')
children_always = self.get_dependencies(obj, 'always_nodes')
if not is_failed and job.status != 'successful':
children_all = children_success + children_failed + children_always
for child in children_all:
if child['node_object'].job:
break
for node in failed_nodes:
obj = node['node_object']
if (len(self.get_dependencies(obj, 'failure_nodes')) +
len(self.get_dependencies(obj, 'always_nodes'))) == 0:
if obj.unified_job_template is None:
res = True
failed_unified_job_template_node_ids.append(str(obj.id))
else:
is_failed = True if children_all else job.status in ['failed', 'canceled', 'error']
res = True
failed_path_nodes_id_status.append((str(obj.id), obj.job.status))
if job.status in ['canceled', 'error']:
continue
elif job.status == 'failed':
nodes.extend(children_failed + children_always)
elif job.status == 'successful':
nodes.extend(children_success + children_always)
if res is True:
s = _("No error handle path for workflow job node(s) [{node_status}] workflow job "
"node(s) missing unified job template and error handle path [{no_ufjt}].")
parms = {
'node_status': '',
'no_ufjt': '',
}
if len(failed_path_nodes_id_status) > 0:
parms['node_status'] = ",".join(["({},{})".format(id, status) for id, status in failed_path_nodes_id_status])
if len(failed_unified_job_template_node_ids) > 0:
parms['no_ufjt'] = ",".join(failed_unified_job_template_node_ids)
return True, smart_text(s.format(**parms))
return False, None
r'''
Determine whether every node has had its do_not_run status decided.
Nodes that are do_not_run False may still become do_not_run True in the future.
We know a do_not_run False node will NOT be marked do_not_run True once
a job has been run for that node.
:param workflow_nodes: list of workflow_nodes
Return a boolean
'''
def _are_all_nodes_dnr_decided(self, workflow_nodes):
for n in workflow_nodes:
if n.do_not_run is False and not n.job and n.unified_job_template:
return False
return True
r'''
Determine if a node (1) is ready to be marked do_not_run and (2) should
be marked do_not_run.
:param node: SimpleDAG internal node
:param parent_nodes: list of workflow_nodes
Return a boolean
'''
def _should_mark_node_dnr(self, node, parent_nodes):
for p in parent_nodes:
if p.do_not_run is True:
pass
elif p.job:
if p.job.status == 'successful':
if node in (self.get_dependencies(p, 'success_nodes') +
self.get_dependencies(p, 'always_nodes')):
return False
elif p.job.status in ['failed', 'error', 'canceled']:
if node in (self.get_dependencies(p, 'failure_nodes') +
self.get_dependencies(p, 'always_nodes')):
return False
else:
return False
elif p.do_not_run is False and p.unified_job_template is None:
if node in (self.get_dependencies(p, 'failure_nodes') +
self.get_dependencies(p, 'always_nodes')):
return False
else:
# Job is about to run or is running. Hold our horses and wait for
# the job to finish. We can't proceed down the graph path until we
# have the job result.
return False, False
return True, is_failed
return False
return True
def mark_dnr_nodes(self):
root_nodes = self.get_root_nodes()
nodes_marked_do_not_run = []
for node in self.sort_nodes_topological():
obj = node['node_object']
if obj.do_not_run is False and not obj.job and node not in root_nodes:
parent_nodes = [p['node_object'] for p in self.get_dependents(obj)]
if self._are_all_nodes_dnr_decided(parent_nodes):
if self._should_mark_node_dnr(node, parent_nodes):
obj.do_not_run = True
nodes_marked_do_not_run.append(node)
return [n['node_object'] for n in nodes_marked_do_not_run]
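Taken together, the two helpers encode one decision: wait until every parent's fate is known, then mark a node do-not-run only if no decided parent can still reach it. A condensed sketch of that decision over simplified parent records (the dict shape is illustrative):

def should_mark_dnr(parents):
    # parents: [{'decided': bool, 'reaches_node': bool}, ...]
    for p in parents:
        if not p['decided']:
            return False  # a parent's fate is unknown: wait for the next cycle
        if p['reaches_node']:
            return False  # a decided parent still leads here: the node may run
    return True           # all parents decided and no path in: mark DNR

print(should_mark_dnr([{'decided': True, 'reaches_node': False}]))  # -> True
print(should_mark_dnr([{'decided': False, 'reaches_node': False}])) # -> False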

View File

@@ -2,7 +2,7 @@
# All Rights Reserved
# Python
from datetime import datetime, timedelta
from datetime import timedelta
import logging
import uuid
import json
@@ -11,18 +11,13 @@ import random
from sets import Set
# Django
from django.conf import settings
from django.core.cache import cache
from django.db import transaction, connection, DatabaseError
from django.db import transaction, connection
from django.utils.translation import ugettext_lazy as _
from django.utils.timezone import now as tz_now, utc
from django.db.models import Q
from django.contrib.contenttypes.models import ContentType
from django.utils.timezone import now as tz_now
# AWX
from awx.main.models import (
AdHocCommand,
Instance,
InstanceGroup,
InventorySource,
InventoryUpdate,
@@ -30,21 +25,16 @@ from awx.main.models import (
Project,
ProjectUpdate,
SystemJob,
UnifiedJob,
WorkflowJob,
WorkflowJobTemplate
)
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.main.utils.pglock import advisory_lock
from awx.main.utils import get_type_for_model
from awx.main.utils import get_type_for_model, task_manager_bulk_reschedule, schedule_task_manager
from awx.main.signals import disable_activity_stream
from awx.main.scheduler.dependency_graph import DependencyGraph
from awx.main.utils import decrypt_field
# Celery
from celery import Celery
from celery.app.control import Inspect
logger = logging.getLogger('awx.main.scheduler')
@@ -85,79 +75,6 @@ class TaskManager():
key=lambda task: task.created)
return all_tasks
'''
Tasks that are running and SHOULD have a celery task.
{
'execution_node': [j1, j2,...],
'execution_node': [j3],
...
}
'''
def get_running_tasks(self):
execution_nodes = {}
waiting_jobs = []
now = tz_now()
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
jobs = UnifiedJob.objects.filter((Q(status='running') |
Q(status='waiting', modified__lte=now - timedelta(seconds=60))) &
~Q(polymorphic_ctype_id=workflow_ctype_id))
for j in jobs:
if j.execution_node:
execution_nodes.setdefault(j.execution_node, []).append(j)
else:
waiting_jobs.append(j)
return (execution_nodes, waiting_jobs)
'''
Tasks that are currently running in celery
Transform:
{
"celery@ec2-54-204-222-62.compute-1.amazonaws.com": [],
"celery@ec2-54-163-144-168.compute-1.amazonaws.com": [{
...
"id": "5238466a-f8c7-43b3-9180-5b78e9da8304",
...
}, {
...,
}, ...]
}
to:
{
"ec2-54-204-222-62.compute-1.amazonaws.com": [
"5238466a-f8c7-43b3-9180-5b78e9da8304",
"5238466a-f8c7-43b3-9180-5b78e9da8306",
...
]
}
'''
def get_active_tasks(self):
if not hasattr(settings, 'IGNORE_CELERY_INSPECTOR'):
app = Celery('awx')
app.config_from_object('django.conf:settings')
inspector = Inspect(app=app)
active_task_queues = inspector.active()
else:
logger.warn("Ignoring celery task inspector")
active_task_queues = None
queues = None
if active_task_queues is not None:
queues = {}
for queue in active_task_queues:
active_tasks = set()
map(lambda at: active_tasks.add(at['id']), active_task_queues[queue])
# celery worker name is of the form celery@myhost.com
queue_name = queue.split('@')
queue_name = queue_name[1 if len(queue_name) > 1 else 0]
queues[queue_name] = active_tasks
else:
return (None, None)
return (active_task_queues, queues)
def get_latest_project_update_tasks(self, all_sorted_tasks):
project_ids = Set()
@@ -187,8 +104,15 @@ class TaskManager():
def spawn_workflow_graph_jobs(self, workflow_jobs):
for workflow_job in workflow_jobs:
if workflow_job.cancel_flag:
logger.debug('Not spawning jobs for %s because it is pending cancelation.', workflow_job.log_format)
continue
dag = WorkflowDAG(workflow_job)
spawn_nodes = dag.bfs_nodes_to_run()
if spawn_nodes:
logger.info('Spawning jobs for %s', workflow_job.log_format)
else:
logger.debug('No nodes to spawn for %s', workflow_job.log_format)
for spawn_node in spawn_nodes:
if spawn_node.unified_job_template is None:
continue
@@ -196,41 +120,82 @@ class TaskManager():
job = spawn_node.unified_job_template.create_unified_job(**kv)
spawn_node.job = job
spawn_node.save()
if job._resources_sufficient_for_launch():
can_start = job.signal_start()
if not can_start:
job.job_explanation = _("Job spawned from workflow could not start because it "
"was not in the right state or required manual credentials")
else:
logger.info('Spawned %s in %s for node %s', job.log_format, workflow_job.log_format, spawn_node.pk)
can_start = True
if isinstance(spawn_node.unified_job_template, WorkflowJobTemplate):
workflow_ancestors = job.get_ancestor_workflows()
if spawn_node.unified_job_template in set(workflow_ancestors):
can_start = False
logger.info('Refusing to start recursive workflow-in-workflow id={}, wfjt={}, ancestors={}'.format(
job.id, spawn_node.unified_job_template.pk, [wa.pk for wa in workflow_ancestors]))
display_list = [spawn_node.unified_job_template] + workflow_ancestors
job.job_explanation = _(
"Workflow Job spawned from workflow could not start because it "
"would result in recursion (spawn order, most recent first: {})"
).format(six.text_type(', ').join([six.text_type('<{}>').format(tmp) for tmp in display_list]))
else:
logger.debug('Starting workflow-in-workflow id={}, wfjt={}, ancestors={}'.format(
job.id, spawn_node.unified_job_template.pk, [wa.pk for wa in workflow_ancestors]))
if not job._resources_sufficient_for_launch():
can_start = False
job.job_explanation = _("Job spawned from workflow could not start because it "
"was missing a related resource such as project or inventory")
if can_start:
if workflow_job.start_args:
start_args = json.loads(decrypt_field(workflow_job, 'start_args'))
else:
start_args = {}
can_start = job.signal_start(**start_args)
if not can_start:
job.job_explanation = _("Job spawned from workflow could not start because it "
"was not in the right state or required manual credentials")
if not can_start:
job.status = 'failed'
job.save(update_fields=['status', 'job_explanation'])
connection.on_commit(lambda: job.websocket_emit_status('failed'))
job.websocket_emit_status('failed')
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
#emit_websocket_notification('/socket.io/jobs', '', dict(id=))
# See comment in tasks.py::RunWorkflowJob::run()
def process_finished_workflow_jobs(self, workflow_jobs):
result = []
for workflow_job in workflow_jobs:
dag = WorkflowDAG(workflow_job)
status_changed = False
if workflow_job.cancel_flag:
workflow_job.status = 'canceled'
workflow_job.save()
dag.cancel_node_jobs()
connection.on_commit(lambda: workflow_job.websocket_emit_status(workflow_job.status))
workflow_job.workflow_nodes.filter(do_not_run=False, job__isnull=True).update(do_not_run=True)
logger.debug('Canceling spawned jobs of %s due to cancel flag.', workflow_job.log_format)
cancel_finished = dag.cancel_node_jobs()
if cancel_finished:
logger.info('Marking %s as canceled, all spawned jobs have concluded.', workflow_job.log_format)
workflow_job.status = 'canceled'
workflow_job.start_args = '' # blank field to remove encrypted passwords
workflow_job.save(update_fields=['status', 'start_args'])
status_changed = True
else:
is_done, has_failed = dag.is_workflow_done()
workflow_nodes = dag.mark_dnr_nodes()
map(lambda n: n.save(update_fields=['do_not_run']), workflow_nodes)
is_done = dag.is_workflow_done()
if not is_done:
continue
has_failed, reason = dag.has_workflow_failed()
logger.info('Marking %s as %s.', workflow_job.log_format, 'failed' if has_failed else 'successful')
result.append(workflow_job.id)
workflow_job.status = 'failed' if has_failed else 'successful'
workflow_job.save()
connection.on_commit(lambda: workflow_job.websocket_emit_status(workflow_job.status))
new_status = 'failed' if has_failed else 'successful'
logger.debug(six.text_type("Transitioning {} to {} status.").format(workflow_job.log_format, new_status))
update_fields = ['status', 'start_args']
workflow_job.status = new_status
if reason:
logger.info(reason)
workflow_job.job_explanation = "No error handling paths found, marking workflow as failed"
update_fields.append('job_explanation')
workflow_job.start_args = '' # blank field to remove encrypted passwords
workflow_job.save(update_fields=update_fields)
status_changed = True
if status_changed:
workflow_job.websocket_emit_status(workflow_job.status)
if workflow_job.spawned_by_workflow:
schedule_task_manager()
return result
def get_dependent_jobs_for_inv_and_proj_update(self, job_obj):
@@ -256,9 +221,6 @@ class TaskManager():
rampart_group.name, task.log_format))
return
error_handler = handle_work_error.s(subtasks=[task_actual] + dependencies)
success_handler = handle_work_success.s(task_actual=task_actual)
task.status = 'waiting'
(start_status, opts) = task.pre_start()
@@ -273,6 +235,7 @@ class TaskManager():
if type(task) is WorkflowJob:
task.status = 'running'
logger.info('Transitioning %s to running status.', task.log_format)
schedule_task_manager()
elif not task.supports_isolation() and rampart_group.controller_id:
# non-Ansible jobs on isolated instances run on controller
task.instance_group = rampart_group.controller
@@ -299,13 +262,25 @@ class TaskManager():
self.consume_capacity(task, rampart_group.name)
def post_commit():
task.websocket_emit_status(task.status)
if task.status != 'failed':
task.start_celery_task(opts,
error_callback=error_handler,
success_callback=success_handler,
queue=task.get_celery_queue_name())
if task.status != 'failed' and type(task) is not WorkflowJob:
task_cls = task._get_task_class()
task_cls.apply_async(
[task.pk],
opts,
queue=task.get_queue_name(),
uuid=task.celery_task_id,
callbacks=[{
'task': handle_work_success.name,
'kwargs': {'task_actual': task_actual}
}],
errbacks=[{
'task': handle_work_error.name,
'args': [task.celery_task_id],
'kwargs': {'subtasks': [task_actual] + dependencies}
}],
)
task.websocket_emit_status(task.status) # adds to on_commit
connection.on_commit(post_commit)
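The on_commit wrapping is the load-bearing detail here: the task must not be published until the row it references is committed, or a fast worker can dequeue a job whose record it cannot yet read. The pattern in isolation, where publish is a placeholder for the dispatcher call rather than the real API:

from django.db import connection, transaction

def start_task(job, publish):
    with transaction.atomic():
        job.status = 'waiting'
        job.save(update_fields=['status'])
        # Defer the publish until COMMIT so the worker sees the saved row.
        connection.on_commit(lambda: publish(job.pk))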
def process_running_tasks(self, running_tasks):
@@ -319,6 +294,11 @@ class TaskManager():
project_task.created = task.created - timedelta(seconds=1)
project_task.status = 'pending'
project_task.save()
logger.info(
'Spawned {} as dependency of {}'.format(
project_task.log_format, task.log_format
)
)
return project_task
def create_inventory_update(self, task, inventory_source_task):
@@ -328,6 +308,11 @@ class TaskManager():
inventory_task.created = task.created - timedelta(seconds=2)
inventory_task.status = 'pending'
inventory_task.save()
logger.info(
'Spawned {} as dependency of {}'.format(
inventory_task.log_format, task.log_format
)
)
# inventory_sources = self.get_inventory_source_tasks([task])
# self.process_inventory_sources(inventory_sources)
return inventory_task
@@ -483,7 +468,7 @@ class TaskManager():
logger.debug(six.text_type("Dependent {} couldn't be scheduled on graph, waiting for next cycle").format(task.log_format))
def process_pending_tasks(self, pending_tasks):
running_workflow_templates = set([wf.workflow_job_template_id for wf in self.get_running_workflow_jobs()])
running_workflow_templates = set([wf.unified_job_template_id for wf in self.get_running_workflow_jobs()])
for task in pending_tasks:
self.process_dependencies(task, self.generate_dependencies(task))
if self.is_job_blocked(task):
@@ -493,12 +478,12 @@ class TaskManager():
found_acceptable_queue = False
idle_instance_that_fits = None
if isinstance(task, WorkflowJob):
if task.workflow_job_template_id in running_workflow_templates:
if task.unified_job_template_id in running_workflow_templates:
if not task.allow_simultaneous:
logger.debug(six.text_type("{} is blocked from running, workflow already running").format(task.log_format))
continue
else:
running_workflow_templates.add(task.workflow_job_template_id)
running_workflow_templates.add(task.unified_job_template_id)
self.start_task(task, None, task.get_jobs_fail_chain(), None)
continue
for rampart_group in preferred_instance_groups:
@@ -529,105 +514,6 @@ class TaskManager():
if not found_acceptable_queue:
logger.debug(six.text_type("{} couldn't be scheduled on graph, waiting for next cycle").format(task.log_format))
def fail_jobs_if_not_in_celery(self, node_jobs, active_tasks, celery_task_start_time,
isolated=False):
for task in node_jobs:
if (task.celery_task_id not in active_tasks and not hasattr(settings, 'IGNORE_CELERY_INSPECTOR')):
if isinstance(task, WorkflowJob):
continue
if task.modified > celery_task_start_time:
continue
new_status = 'failed'
if isolated:
new_status = 'error'
task.status = new_status
task.start_args = '' # blank field to remove encrypted passwords
if isolated:
# TODO: cancel and reap artifacts of lost jobs from heartbeat
task.job_explanation += ' '.join((
'Task was marked as running in Tower but its ',
'controller management daemon was not present in',
'the job queue, so it has been marked as failed.',
'Task may still be running, but contactability is unknown.'
))
else:
task.job_explanation += ' '.join((
'Task was marked as running in Tower but was not present in',
'the job queue, so it has been marked as failed.',
))
try:
task.save(update_fields=['status', 'start_args', 'job_explanation'])
except DatabaseError:
logger.error("Task {} DB error in marking failed. Job possibly deleted.".format(task.log_format))
continue
if hasattr(task, 'send_notification_templates'):
task.send_notification_templates('failed')
task.websocket_emit_status(new_status)
logger.error("{}Task {} has no record in celery. Marking as failed".format(
'Isolated ' if isolated else '', task.log_format))
def cleanup_inconsistent_celery_tasks(self):
'''
Rectify tower db <-> celery inconsistent view of jobs state
'''
last_cleanup = cache.get('last_celery_task_cleanup') or datetime.min.replace(tzinfo=utc)
if (tz_now() - last_cleanup).seconds < settings.AWX_INCONSISTENT_TASK_INTERVAL:
return
logger.debug("Failing inconsistent running jobs.")
celery_task_start_time = tz_now()
active_task_queues, active_queues = self.get_active_tasks()
cache.set('last_celery_task_cleanup', tz_now())
if active_queues is None:
logger.error('Failed to retrieve active tasks from celery')
return None
'''
Only consider failing tasks on instances for which we obtained a task
list from celery.
'''
running_tasks, waiting_tasks = self.get_running_tasks()
all_celery_task_ids = []
for node, node_jobs in active_queues.iteritems():
all_celery_task_ids.extend(node_jobs)
self.fail_jobs_if_not_in_celery(waiting_tasks, all_celery_task_ids, celery_task_start_time)
for node, node_jobs in running_tasks.iteritems():
isolated = False
if node in active_queues:
active_tasks = active_queues[node]
else:
'''
Node task list not found in celery. We may branch into cases:
- instance is unknown to tower, system is improperly configured
- instance is reported as down, then fail all jobs on the node
- instance is an isolated node, then check running tasks
among all allowed controller nodes for management process
- valid healthy instance not included in celery task list
probably a netsplit case, leave it alone
'''
instance = Instance.objects.filter(hostname=node).first()
if instance is None:
logger.error("Execution node Instance {} not found in database. "
"The node is currently executing jobs {}".format(
node, [j.log_format for j in node_jobs]))
active_tasks = []
elif instance.capacity == 0:
active_tasks = []
elif instance.rampart_groups.filter(controller__isnull=False).exists():
active_tasks = all_celery_task_ids
isolated = True
else:
continue
self.fail_jobs_if_not_in_celery(
node_jobs, active_tasks, celery_task_start_time,
isolated=isolated
)
def calculate_capacity_consumed(self, tasks):
self.graph = InstanceGroup.objects.capacity_values(tasks=tasks, graph=self.graph)
@@ -673,6 +559,14 @@ class TaskManager():
running_workflow_tasks = self.get_running_workflow_jobs()
finished_wfjs = self.process_finished_workflow_jobs(running_workflow_tasks)
previously_running_workflow_tasks = running_workflow_tasks
running_workflow_tasks = []
for workflow_job in previously_running_workflow_tasks:
if workflow_job.status == 'running':
running_workflow_tasks.append(workflow_job)
else:
logger.debug('Removed %s from job spawning consideration.', workflow_job.log_format)
self.spawn_workflow_graph_jobs(running_workflow_tasks)
self.process_tasks(all_sorted_tasks)
@@ -687,8 +581,8 @@ class TaskManager():
return
logger.debug("Starting Scheduler")
self.cleanup_inconsistent_celery_tasks()
finished_wfjs = self._schedule()
with task_manager_bulk_reschedule():
finished_wfjs = self._schedule()
# Operations whose queries rely on modifications made during the atomic scheduling session
for wfj in WorkflowJob.objects.filter(id__in=finished_wfjs):

View File

@@ -2,30 +2,14 @@
# Python
import logging
# Celery
from celery import shared_task
# AWX
from awx.main.scheduler import TaskManager
from awx.main.dispatch.publish import task
logger = logging.getLogger('awx.main.scheduler')
# TODO: move logic to UnifiedJob model and use bind=True feature of celery.
# Would we need the request loop then? I think so. Even if we get the in-memory
# updated model, the call to schedule() may get stale data.
@shared_task()
def run_job_launch(job_id):
TaskManager().schedule()
@shared_task()
def run_job_complete(job_id):
TaskManager().schedule()
@shared_task()
@task()
def run_task_manager():
logger.debug("Running Tower task manager.")
TaskManager().schedule()

View File

@@ -396,7 +396,7 @@ model_serializer_mapping = {
Credential: CredentialSerializer,
Team: TeamSerializer,
Project: ProjectSerializer,
JobTemplate: JobTemplateSerializer,
JobTemplate: JobTemplateWithSpecSerializer,
Job: JobSerializer,
AdHocCommand: AdHocCommandSerializer,
NotificationTemplate: NotificationTemplateSerializer,
@@ -404,7 +404,7 @@ model_serializer_mapping = {
CredentialType: CredentialTypeSerializer,
Schedule: ScheduleSerializer,
Label: LabelSerializer,
WorkflowJobTemplate: WorkflowJobTemplateSerializer,
WorkflowJobTemplate: WorkflowJobTemplateWithSpecSerializer,
WorkflowJobTemplateNode: WorkflowJobTemplateNodeSerializer,
WorkflowJob: WorkflowJobSerializer,
OAuth2AccessToken: OAuth2TokenSerializer,
@@ -425,6 +425,11 @@ def activity_stream_create(sender, instance, created, **kwargs):
changes = model_to_dict(instance, model_serializer_mapping)
# Special case where Job survey password variables need to be hidden
if type(instance) == Job:
changes['credentials'] = [
six.text_type('{} ({})').format(c.name, c.id)
for c in instance.credentials.iterator()
]
changes['labels'] = [l.name for l in instance.labels.iterator()]
if 'extra_vars' in changes:
changes['extra_vars'] = instance.display_extra_vars()
if type(instance) == OAuth2AccessToken:
@@ -487,12 +492,21 @@ def activity_stream_delete(sender, instance, **kwargs):
# If we trigger this handler there we may fall into db-integrity-related race conditions.
# So we add flag verification to prevent normal signal handling. This function will be
# explicitly called with flag on in Inventory.schedule_deletion.
if isinstance(instance, Inventory) and not kwargs.get('inventory_delete_flag', False):
return
changes = {}
if isinstance(instance, Inventory):
if not kwargs.get('inventory_delete_flag', False):
return
# Add additional data about child hosts / groups that will be deleted
changes['coalesced_data'] = {
'hosts_deleted': instance.hosts.count(),
'groups_deleted': instance.groups.count()
}
elif isinstance(instance, (Host, Group)) and instance.inventory.pending_deletion:
return # accounted for by inventory entry, above
_type = type(instance)
if getattr(_type, '_deferred', False):
return
changes = model_to_dict(instance)
changes.update(model_to_dict(instance, model_serializer_mapping))
object1 = camelcase_to_underscore(instance.__class__.__name__)
if type(instance) == OAuth2AccessToken:
changes['token'] = CENSOR_VALUE
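# Illustration with hypothetical counts: for an Inventory pending deletion,
# the handler above records the children being removed alongside the usual
# model_to_dict() fields, while a deleted OAuth2AccessToken is recorded with
# its token censored (changes['token'] == CENSOR_VALUE).
example_changes = {
    'coalesced_data': {'hosts_deleted': 12, 'groups_deleted': 3},  # hypothetical
}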

View File

@@ -13,12 +13,11 @@ import logging
import os
import re
import shutil
import six
import stat
import sys
import tempfile
import time
import traceback
import six
import urlparse
from distutils.version import LooseVersion as Version
import yaml
@@ -28,12 +27,6 @@ try:
except Exception:
psutil = None
# Celery
from kombu import Queue, Exchange
from kombu.common import Broadcast
from celery import Task, shared_task
from celery.signals import celeryd_init, worker_shutdown
# Django
from django.conf import settings
from django.db import transaction, DatabaseError, IntegrityError
@@ -58,10 +51,12 @@ from awx.main.constants import ACTIVE_STATES
from awx.main.exceptions import AwxTaskError
from awx.main.queue import CallbackQueueDispatcher
from awx.main.expect import run, isolated_manager
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.utils import (get_ansible_version, get_ssh_version, decrypt_field, update_scm_url,
check_proot_installed, build_proot_temp_dir, get_licenser,
wrap_args_with_proot, OutputEventFilter, OutputVerboseFilter, ignore_inventory_computed_fields,
ignore_inventory_group_removal, get_type_for_model, extract_ansible_vars)
ignore_inventory_group_removal, extract_ansible_vars, schedule_task_manager)
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
@@ -87,52 +82,7 @@ Try upgrading OpenSSH or providing your private key in a different format. \
logger = logging.getLogger('awx.main.tasks')
def log_celery_failure(self, exc, task_id, args, kwargs, einfo):
try:
if getattr(exc, 'is_awx_task_error', False):
# Error caused by user / tracked in job output
logger.warning(six.text_type("{}").format(exc))
elif isinstance(self, BaseTask):
logger.exception(six.text_type(
'{!s} {!s} execution encountered exception.')
.format(get_type_for_model(self.model), args[0]))
else:
logger.exception(six.text_type('Task {} encountered exception.').format(self.name), exc_info=exc)
except Exception:
# It's fairly critical that this code _not_ raise exceptions on logging
# If you configure external logging in a way that _it_ fails, there's
# not a lot we can do here; sys.stderr.write is a final hail mary
_, _, tb = sys.exc_info()
traceback.print_tb(tb)
@celeryd_init.connect
def celery_startup(conf=None, **kwargs):
#
# When celeryd starts, if the instance cannot be found in the database,
# automatically register it. This is mostly useful for openshift-based
# deployments where:
#
# 2 Instances come online
# Instance B encounters a network blip, Instance A notices, and
# deprovisions it
# Instance B's connectivity is restored, celeryd starts, and it
# re-registers itself
#
# In traditional container-less deployments, instances don't get
# deprovisioned when they miss their heartbeat, so this code is mostly a
# no-op.
#
if kwargs['instance'].hostname != 'celery@{}'.format(settings.CLUSTER_HOST_ID):
error = six.text_type('celery -n {} does not match settings.CLUSTER_HOST_ID={}').format(
instance.hostname, settings.CLUSTER_HOST_ID
)
logger.error(error)
raise RuntimeError(error)
(changed, tower_instance) = Instance.objects.get_or_register()
if changed:
logger.info(six.text_type("Registered tower node '{}'").format(tower_instance.hostname))
def dispatch_startup():
startup_logger = logging.getLogger('awx.main.tasks')
startup_logger.info("Syncing Schedules")
for sch in Schedule.objects.all():
@@ -144,34 +94,44 @@ def celery_startup(conf=None, **kwargs):
except Exception:
logger.exception(six.text_type("Failed to rebuild schedule {}.").format(sch))
# set the queues we want to bind to dynamically at startup
queues = []
me = Instance.objects.me()
for q in [me.hostname] + settings.AWX_CELERY_QUEUES_STATIC:
q = q.encode('utf-8')
queues.append(Queue(q, Exchange(q), routing_key=q))
for q in settings.AWX_CELERY_BCAST_QUEUES_STATIC:
queues.append(Broadcast(q.encode('utf-8')))
conf.CELERY_QUEUES = list(set(queues))
# Expedite the first heartbeat run so a node comes online quickly.
cluster_node_heartbeat.apply([])
#
# When the dispatcher starts, if the instance cannot be found in the database,
# automatically register it. This is mostly useful for openshift-based
# deployments where:
#
# 2 Instances come online
# Instance B encounters a network blip, Instance A notices, and
# deprovisions it
# Instance B's connectivity is restored, the dispatcher starts, and it
# re-registers itself
#
# In traditional container-less deployments, instances don't get
# deprovisioned when they miss their heartbeat, so this code is mostly a
# no-op.
#
apply_cluster_membership_policies()
cluster_node_heartbeat()
if Instance.objects.me().is_controller():
awx_isolated_heartbeat()
@worker_shutdown.connect
def inform_cluster_of_shutdown(*args, **kwargs):
def inform_cluster_of_shutdown():
try:
this_inst = Instance.objects.get(hostname=settings.CLUSTER_HOST_ID)
this_inst.capacity = 0 # No thank you to new jobs while shut down
this_inst.save(update_fields=['capacity', 'modified'])
try:
reaper.reap(this_inst)
except Exception:
logger.exception('failed to reap jobs for {}'.format(this_inst.hostname))
logger.warning(six.text_type('Normal shutdown signal for instance {}, '
'removed self from capacity pool.').format(this_inst.hostname))
except Exception:
logger.exception('Encountered problem with normal shutdown signal.')
@shared_task(bind=True, queue=settings.CELERY_DEFAULT_QUEUE)
def apply_cluster_membership_policies(self):
@task()
def apply_cluster_membership_policies():
started_waiting = time.time()
with advisory_lock('cluster_policy_lock', wait=True):
lock_time = time.time() - started_waiting
@@ -280,20 +240,36 @@ def apply_cluster_membership_policies(self):
logger.info('Cluster policy computation finished in {} seconds'.format(time.time() - started_compute))
@shared_task(exchange='tower_broadcast_all', bind=True)
def handle_setting_changes(self, setting_keys):
@task(queue='tower_broadcast_all', exchange_type='fanout')
def handle_setting_changes(setting_keys):
orig_len = len(setting_keys)
for i in range(orig_len):
for dependent_key in settings_registry.get_dependent_settings(setting_keys[i]):
setting_keys.append(dependent_key)
logger.warn('Processing cache changes, task args: {0.args!r} kwargs: {0.kwargs!r}'.format(
self.request))
cache_keys = set(setting_keys)
logger.debug('cache delete_many(%r)', cache_keys)
cache.delete_many(cache_keys)
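# Illustration only, with a hypothetical dependency registry: note the loop
# above iterates range(orig_len), so only dependents of the *original* keys
# are appended (one level deep, not transitive). A self-contained sketch:
def expand_one_level(setting_keys, dependents):
    keys = list(setting_keys)
    for key in setting_keys:
        keys.extend(dependents.get(key, []))
    return set(keys)

assert expand_one_level(['A'], {'A': ['B'], 'B': ['C']}) == {'A', 'B'}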
@shared_task(queue=settings.CELERY_DEFAULT_QUEUE)
@task(queue='tower_broadcast_all', exchange_type='fanout')
def delete_project_files(project_path):
# TODO: possibly implement some retry logic
lock_file = project_path + '.lock'
if os.path.exists(project_path):
try:
shutil.rmtree(project_path)
logger.info(six.text_type('Success removing project files {}').format(project_path))
except Exception:
logger.exception(six.text_type('Could not remove project directory {}').format(project_path))
if os.path.exists(lock_file):
try:
os.remove(lock_file)
logger.debug(six.text_type('Success removing {}').format(lock_file))
except Exception:
logger.exception(six.text_type('Could not remove lock file {}').format(lock_file))
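# One hedged way the retry TODO above could be addressed (sketch only, not
# shipped behavior): retry rmtree a few times to ride out transient errors.
import shutil
import time

def rmtree_with_retries(path, attempts=3, delay=1.0):
    for _ in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            time.sleep(delay)  # e.g. NFS lag or a briefly held file handle
    return False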
@task()
def send_notifications(notification_list, job_id=None):
if not isinstance(notification_list, list):
raise TypeError("notification_list should be of type list")
@@ -322,8 +298,8 @@ def send_notifications(notification_list, job_id=None):
logger.exception(six.text_type('Error saving notification {} result.').format(notification.id))
@shared_task(bind=True, queue=settings.CELERY_DEFAULT_QUEUE)
def run_administrative_checks(self):
@task()
def run_administrative_checks():
logger.warn("Running administrative checks.")
if not settings.TOWER_ADMIN_ALERTS:
return
@@ -344,8 +320,8 @@ def run_administrative_checks(self):
fail_silently=True)
@shared_task(bind=True)
def purge_old_stdout_files(self):
@task(queue=get_local_queuename)
def purge_old_stdout_files():
nowtime = time.time()
for f in os.listdir(settings.JOBOUTPUT_ROOT):
if os.path.getctime(os.path.join(settings.JOBOUTPUT_ROOT,f)) < nowtime - settings.LOCAL_STDOUT_EXPIRE_TIME:
@@ -353,8 +329,8 @@ def purge_old_stdout_files(self):
logger.info(six.text_type("Removing {}").format(os.path.join(settings.JOBOUTPUT_ROOT,f)))
@shared_task(bind=True)
def cluster_node_heartbeat(self):
@task(queue=get_local_queuename)
def cluster_node_heartbeat():
logger.debug("Cluster node heartbeat task.")
nowtime = now()
instance_list = list(Instance.objects.all_non_isolated())
@@ -397,9 +373,13 @@ def cluster_node_heartbeat(self):
this_inst.version))
# Shutdown signal will set the capacity to zero to ensure no Jobs get added to this instance.
# The heartbeat task will reset the capacity to the system capacity after upgrade.
stop_local_services(['uwsgi', 'celery', 'beat', 'callback'], communicate=False)
stop_local_services(communicate=False)
raise RuntimeError("Shutting down.")
for other_inst in lost_instances:
try:
reaper.reap(other_inst)
except Exception:
logger.exception('failed to reap jobs for {}'.format(other_inst.hostname))
try:
# Capacity could already be 0 because:
# * It's a new node and it never had a heartbeat
@@ -424,8 +404,8 @@ def cluster_node_heartbeat(self):
logger.exception(six.text_type('Error marking {} as lost').format(other_inst.hostname))
@shared_task(bind=True)
def awx_isolated_heartbeat(self):
@task(queue=get_local_queuename)
def awx_isolated_heartbeat():
local_hostname = settings.CLUSTER_HOST_ID
logger.debug("Controlling node checking for any isolated management tasks.")
poll_interval = settings.AWX_ISOLATED_PERIODIC_CHECK
@@ -452,8 +432,8 @@ def awx_isolated_heartbeat(self):
isolated_manager.IsolatedManager.health_check(isolated_instance_qs, awx_application_version)
@shared_task(bind=True, queue=settings.CELERY_DEFAULT_QUEUE)
def awx_periodic_scheduler(self):
@task()
def awx_periodic_scheduler():
run_now = now()
state = TowerScheduleState.get_solo()
last_run = state.schedule_last_run
@@ -503,8 +483,8 @@ def awx_periodic_scheduler(self):
state.save()
@shared_task(bind=True, queue=settings.CELERY_DEFAULT_QUEUE)
def handle_work_success(self, result, task_actual):
@task()
def handle_work_success(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
@@ -513,11 +493,10 @@ def handle_work_success(self, result, task_actual):
if not instance:
return
from awx.main.scheduler.tasks import run_job_complete
run_job_complete.delay(instance.id)
schedule_task_manager()
@shared_task(queue=settings.CELERY_DEFAULT_QUEUE)
@task()
def handle_work_error(task_id, *args, **kwargs):
subtasks = kwargs.get('subtasks', None)
logger.debug('Executing error task id %s, subtasks: %s' % (task_id, str(subtasks)))
@@ -553,12 +532,11 @@ def handle_work_error(task_id, *args, **kwargs):
# what the job complete message handler does then we may want to send a
# completion event for each job here.
if first_instance:
from awx.main.scheduler.tasks import run_job_complete
run_job_complete.delay(first_instance.id)
schedule_task_manager()
pass
@shared_task(queue=settings.CELERY_DEFAULT_QUEUE)
@task()
def update_inventory_computed_fields(inventory_id, should_update_hosts=True):
'''
Signal handler and wrapper around inventory.update_computed_fields to
@@ -578,7 +556,7 @@ def update_inventory_computed_fields(inventory_id, should_update_hosts=True):
raise
@shared_task(queue=settings.CELERY_DEFAULT_QUEUE)
@task()
def update_host_smart_inventory_memberships():
try:
with transaction.atomic():
@@ -603,8 +581,8 @@ def update_host_smart_inventory_memberships():
smart_inventory.update_computed_fields(update_groups=False, update_hosts=False)
@shared_task(bind=True, queue=settings.CELERY_DEFAULT_QUEUE, max_retries=5)
def delete_inventory(self, inventory_id, user_id):
@task()
def delete_inventory(inventory_id, user_id, retries=5):
# Delete inventory as user
if user_id is None:
user = None
@@ -629,7 +607,9 @@ def delete_inventory(self, inventory_id, user_id):
return
except DatabaseError:
logger.exception('Database error deleting inventory {}, but will retry.'.format(inventory_id))
self.retry(countdown=10)
if retries > 0:
time.sleep(10)
delete_inventory(inventory_id, user_id, retries=retries - 1)
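# Design note: the retry above re-enters the task function recursively. An
# equivalent iterative form (hypothetical helper, sketch only) keeps the
# call stack flat while preserving the retry count and delay:
def call_with_retries(fn, retries=5, delay=10, exc=Exception):
    import time
    for _ in range(retries):
        try:
            return fn()
        except exc:
            time.sleep(delay)
    return fn()  # final attempt; let the exception propagate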
def with_path_cleanup(f):
@@ -650,8 +630,7 @@ def with_path_cleanup(f):
return _wrapped
class BaseTask(Task):
name = None
class BaseTask(object):
model = None
event_model = None
abstract = True
@@ -844,7 +823,12 @@ class BaseTask(Task):
return False
def build_inventory(self, instance, **kwargs):
json_data = json.dumps(instance.inventory.get_script_data(hostvars=True))
script_params = dict(hostvars=True)
if hasattr(instance, 'job_slice_number'):
script_params['slice_number'] = instance.job_slice_number
script_params['slice_count'] = instance.job_slice_count
script_data = instance.inventory.get_script_data(**script_params)
json_data = json.dumps(script_data)
handle, path = tempfile.mkstemp(dir=kwargs.get('private_data_dir', None))
f = os.fdopen(handle, 'w')
f.write('#! /usr/bin/env python\n# -*- coding: utf-8 -*-\nprint %r\n' % json_data)
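        # Illustration (hypothetical data): the temp file written above is a
        # tiny Python 2 script that prints the JSON dump; ansible-playbook
        # executes it via -i and parses the JSON from its stdout.
        json_data = '{"all": {"hosts": ["host0"]}}'  # hypothetical
        script_body = '#! /usr/bin/env python\n# -*- coding: utf-8 -*-\nprint %r\n' % json_data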
@@ -945,14 +929,11 @@ class BaseTask(Task):
if instance.cancel_flag:
instance = self.update_model(instance.pk, status='canceled')
if instance.status != 'running':
if hasattr(settings, 'CELERY_UNIT_TEST'):
return
else:
# Stop the task chain and prevent starting the job if it has
# already been canceled.
instance = self.update_model(pk)
status = instance.status
raise RuntimeError('not starting %s task' % instance.status)
# Stop the task chain and prevent starting the job if it has
# already been canceled.
instance = self.update_model(pk)
status = instance.status
raise RuntimeError('not starting %s task' % instance.status)
if not os.path.exists(settings.AWX_PROOT_BASE_PATH):
raise RuntimeError('AWX_PROOT_BASE_PATH=%s does not exist' % settings.AWX_PROOT_BASE_PATH)
@@ -1085,8 +1066,6 @@ class BaseTask(Task):
logger.exception(six.text_type('{} Final run hook errored.').format(instance.log_format))
instance.websocket_emit_status(status)
if status != 'successful':
# Raising an exception will mark the job as 'failed' in celery
# and will stop a task chain from continuing to execute
if status == 'canceled':
raise AwxTaskError.TaskCancel(instance, rc)
else:
@@ -1109,12 +1088,12 @@ class BaseTask(Task):
return ''
@task()
class RunJob(BaseTask):
'''
Celery task to run a job using ansible-playbook.
Run a job using ansible-playbook.
'''
name = 'awx.main.tasks.run_job'
model = Job
event_model = JobEvent
event_data_key = 'job_id'
@@ -1229,8 +1208,6 @@ class RunJob(BaseTask):
if job.project:
env['PROJECT_REVISION'] = job.project.scm_revision
env['ANSIBLE_RETRY_FILES_ENABLED'] = "False"
env['ANSIBLE_INVENTORY_ENABLED'] = 'script'
env['ANSIBLE_INVENTORY_UNPARSED_FAILED'] = 'True'
env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
if not kwargs.get('isolated'):
env['ANSIBLE_CALLBACK_PLUGINS'] = plugin_path
@@ -1245,15 +1222,10 @@ class RunJob(BaseTask):
os.mkdir(cp_dir, 0o700)
env['ANSIBLE_SSH_CONTROL_PATH_DIR'] = cp_dir
# Allow the inventory script to include host variables inline via ['_meta']['hostvars'].
env['INVENTORY_HOSTVARS'] = str(True)
# Set environment variables for cloud credentials.
cred_files = kwargs.get('private_data_files', {}).get('credentials', {})
for cloud_cred in job.cloud_credentials:
if cloud_cred and cloud_cred.kind == 'gce':
env['GCE_PEM_FILE_PATH'] = cred_files.get(cloud_cred, '')
elif cloud_cred and cloud_cred.kind == 'openstack':
if cloud_cred and cloud_cred.kind == 'openstack':
env['OS_CLIENT_CONFIG_FILE'] = cred_files.get(cloud_cred, '')
for network_cred in job.network_credentials:
@@ -1404,12 +1376,17 @@ class RunJob(BaseTask):
self.update_model(job.pk, status='failed', job_explanation=error)
raise RuntimeError(error)
if job.project and job.project.scm_type:
job_request_id = '' if self.request.id is None else self.request.id
pu_ig = job.instance_group
pu_en = job.execution_node
if job.is_isolated() is True:
pu_ig = pu_ig.controller
pu_en = settings.CLUSTER_HOST_ID
if job.project.status in ('error', 'failed'):
msg = _(
'The project revision for this job template is unknown due to a failed update.'
)
job = self.update_model(job.pk, status='failed', job_explanation=msg)
raise RuntimeError(msg)
local_project_sync = job.project.create_project_update(
_eager_fields=dict(
launch_type="sync",
@@ -1417,16 +1394,14 @@ class RunJob(BaseTask):
status='running',
instance_group = pu_ig,
execution_node=pu_en,
celery_task_id=job_request_id))
celery_task_id=job.celery_task_id))
# save the associated job before calling run() so that a
# cancel() call on the job can cancel the project update
job = self.update_model(job.pk, project_update=local_project_sync)
project_update_task = local_project_sync._get_task_class()
try:
task_instance = project_update_task()
task_instance.request.id = job_request_id
task_instance.run(local_project_sync.id)
project_update_task().run(local_project_sync.id)
job = self.update_model(job.pk, scm_revision=job.project.scm_revision)
except Exception:
local_project_sync.refresh_from_db()
@@ -1436,9 +1411,13 @@ class RunJob(BaseTask):
('project_update', local_project_sync.name, local_project_sync.id)))
raise
def final_run_hook(self, job, status, **kwargs):
super(RunJob, self).final_run_hook(job, status, **kwargs)
if 'private_data_dir' not in kwargs:
# If there's no private data dir, that means we didn't get into the
# actual `run()` call; this _usually_ means something failed in
# the pre_run_hook method
return
if job.use_fact_cache:
job.finish_job_fact_cache(
kwargs['private_data_dir'],
@@ -1467,9 +1446,9 @@ class RunJob(BaseTask):
update_inventory_computed_fields.delay(inventory.id, True)
@task()
class RunProjectUpdate(BaseTask):
name = 'awx.main.tasks.run_project_update'
model = ProjectUpdate
event_model = ProjectUpdateEvent
event_data_key = 'project_update_id'
@@ -1670,7 +1649,6 @@ class RunProjectUpdate(BaseTask):
return getattr(settings, 'PROJECT_UPDATE_IDLE_TIMEOUT', None)
def _update_dependent_inventories(self, project_update, dependent_inventory_sources):
project_request_id = '' if self.request.id is None else self.request.id
scm_revision = project_update.project.scm_revision
inv_update_class = InventoryUpdate._get_task_class()
for inv_src in dependent_inventory_sources:
@@ -1693,13 +1671,10 @@ class RunProjectUpdate(BaseTask):
status='running',
instance_group=project_update.instance_group,
execution_node=project_update.execution_node,
celery_task_id=str(project_request_id),
source_project_update=project_update))
source_project_update=project_update,
celery_task_id=project_update.celery_task_id))
try:
task_instance = inv_update_class()
# Runs in the same Celery task as project update
task_instance.request.id = project_request_id
task_instance.run(local_inv_update.id)
inv_update_class().run(local_inv_update.id)
except Exception:
logger.exception(six.text_type('{} Unhandled exception updating dependent SCM inventory sources.')
.format(project_update.log_format))
@@ -1804,9 +1779,9 @@ class RunProjectUpdate(BaseTask):
return getattr(settings, 'AWX_PROOT_ENABLED', False)
@task()
class RunInventoryUpdate(BaseTask):
name = 'awx.main.tasks.run_inventory_update'
model = InventoryUpdate
event_model = InventoryUpdateEvent
event_data_key = 'inventory_update_id'
@@ -1832,10 +1807,6 @@ class RunInventoryUpdate(BaseTask):
"""
private_data = {'credentials': {}}
credential = inventory_update.get_cloud_credential()
# If this is GCE, return the RSA key
if inventory_update.source == 'gce':
private_data['credentials'][credential] = decrypt_field(credential, 'ssh_key_data')
return private_data
if inventory_update.source == 'openstack':
openstack_auth = dict(auth_url=credential.host,
@@ -2024,8 +1995,7 @@ class RunInventoryUpdate(BaseTask):
This dictionary is used by `build_env`, below.
"""
# Run the superclass implementation.
super_ = super(RunInventoryUpdate, self).build_passwords
passwords = super_(inventory_update, **kwargs)
passwords = super(RunInventoryUpdate, self).build_passwords(inventory_update, **kwargs)
# Take key fields from the credential in use and add them to the
# passwords dictionary.
@@ -2069,7 +2039,6 @@ class RunInventoryUpdate(BaseTask):
'ec2': 'EC2_INI_PATH',
'vmware': 'VMWARE_INI_PATH',
'azure_rm': 'AZURE_INI_PATH',
'gce': 'GCE_PEM_FILE_PATH',
'openstack': 'OS_CLIENT_CONFIG_FILE',
'satellite6': 'FOREMAN_INI_PATH',
'cloudforms': 'CLOUDFORMS_INI_PATH'
@@ -2188,7 +2157,6 @@ class RunInventoryUpdate(BaseTask):
if inventory_update.inventory_source:
source_project = inventory_update.inventory_source.source_project
if (inventory_update.source=='scm' and inventory_update.launch_type!='scm' and source_project):
request_id = '' if self.request.id is None else self.request.id
local_project_sync = source_project.create_project_update(
_eager_fields=dict(
launch_type="sync",
@@ -2196,16 +2164,14 @@ class RunInventoryUpdate(BaseTask):
status='running',
execution_node=inventory_update.execution_node,
instance_group = inventory_update.instance_group,
celery_task_id=request_id))
celery_task_id=inventory_update.celery_task_id))
# associate the inventory update before calling run() so that a
# cancel() call on the inventory update can cancel the project update
local_project_sync.scm_inventory_updates.add(inventory_update)
project_update_task = local_project_sync._get_task_class()
try:
task_instance = project_update_task()
task_instance.request.id = request_id
task_instance.run(local_project_sync.id)
project_update_task().run(local_project_sync.id)
inventory_update.inventory_source.scm_last_revision = local_project_sync.project.scm_revision
inventory_update.inventory_source.save(update_fields=['scm_last_revision'])
except Exception:
@@ -2216,12 +2182,12 @@ class RunInventoryUpdate(BaseTask):
raise
@task()
class RunAdHocCommand(BaseTask):
'''
Celery task to run an ad hoc command using ansible.
Run an ad hoc command using ansible.
'''
name = 'awx.main.tasks.run_ad_hoc_command'
model = AdHocCommand
event_model = AdHocCommandEvent
event_data_key = 'ad_hoc_command_id'
@@ -2382,9 +2348,9 @@ class RunAdHocCommand(BaseTask):
return getattr(settings, 'AWX_PROOT_ENABLED', False)
@task()
class RunSystemJob(BaseTask):
name = 'awx.main.tasks.run_system_job'
model = SystemJob
event_model = SystemJobEvent
event_data_key = 'system_job_id'
@@ -2439,9 +2405,9 @@ def _reconstruct_relationships(copy_mapping):
new_obj.save()
@shared_task(bind=True, queue=settings.CELERY_DEFAULT_QUEUE)
@task()
def deep_copy_model_obj(
self, model_module, model_name, obj_pk, new_obj_pk,
model_module, model_name, obj_pk, new_obj_pk,
user_pk, sub_obj_list, permission_check_func=None
):
logger.info(six.text_type('Deep copy {} from {} to {}.').format(model_name, obj_pk, new_obj_pk))

View File

@@ -15,6 +15,12 @@ from awx.main.tests.factories import (
)
def pytest_addoption(parser):
parser.addoption(
"--genschema", action="store_true", default=False, help="execute schema validator"
)
def pytest_configure(config):
import sys
sys._called_from_test = True

View File

@@ -49,7 +49,8 @@ class TestSwaggerGeneration():
data.update(response.accepted_renderer.get_customizations() or {})
data['host'] = None
data['modified'] = datetime.datetime.utcnow().isoformat()
if not pytest.config.getoption("--genschema"):
data['modified'] = datetime.datetime.utcnow().isoformat()
data['schemes'] = ['https']
data['consumes'] = ['application/json']
@@ -119,9 +120,9 @@ class TestSwaggerGeneration():
def test_autogen_response_examples(self, swagger_autogen):
for pattern, node in TestSwaggerGeneration.JSON['paths'].items():
pattern = pattern.replace('{id}', '[0-9]+')
pattern = pattern.replace('{category_slug}', '[a-zA-Z0-9\-]+')
pattern = pattern.replace(r'{category_slug}', r'[a-zA-Z0-9\-]+')
for path, result in swagger_autogen.items():
if re.match('^{}$'.format(pattern), path):
if re.match(r'^{}$'.format(pattern), path):
for key, value in result.items():
method, status_code = key
content_type, resp, request_data = value
@@ -139,11 +140,14 @@ class TestSwaggerGeneration():
for param in node[method].get('parameters'):
if param['in'] == 'body':
node[method]['parameters'].remove(param)
node[method].setdefault('parameters', []).append({
'name': 'data',
'in': 'body',
'schema': {'example': request_data},
})
if pytest.config.getoption("--genschema"):
pytest.skip("In schema generator skipping swagger generator", allow_module_level=True)
else:
node[method].setdefault('parameters', []).append({
'name': 'data',
'in': 'body',
'schema': {'example': request_data},
})
# Build response examples
if resp:
@@ -164,8 +168,13 @@ class TestSwaggerGeneration():
# replace ISO dates w/ the same value so we don't generate
# needless diffs
data = re.sub(
'[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]+Z',
'2018-02-01T08:00:00.000000Z',
r'[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]+Z',
r'2018-02-01T08:00:00.000000Z',
data
)
data = re.sub(
r'''(\s+"client_id": ")([a-zA-Z0-9]{40})("\,\s*)''',
r'\1xxxx\3',
data
)
f.write(data)
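# Illustration of the client_id scrub above, with a hypothetical value: any
# 40-character client_id is replaced so regenerated schemas stay diff-stable.
import re
sample = '  "client_id": "%s",' % ('a' * 40)
scrubbed = re.sub(r'''(\s+"client_id": ")([a-zA-Z0-9]{40})("\,\s*)''', r'\1xxxx\3', sample)
assert scrubbed == '  "client_id": "xxxx",'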

View File

@@ -4,9 +4,11 @@ import mock
from dateutil.parser import parse
from dateutil.relativedelta import relativedelta
from crum import impersonate
import datetime
# Django rest framework
from rest_framework.exceptions import PermissionDenied
from django.utils import timezone
# AWX
from awx.api.versioning import reverse
@@ -122,6 +124,46 @@ def test_job_relaunch_on_failed_hosts(post, inventory, project, machine_credenti
assert r.data.get('limit') == hosts
@pytest.mark.django_db
def test_summary_fields_recent_jobs(job_template, admin_user, get):
jobs = []
for i in range(13):
jobs.append(Job.objects.create(
job_template=job_template,
status='failed',
created=timezone.make_aware(datetime.datetime(2017, 3, 21, 9, i)),
finished=timezone.make_aware(datetime.datetime(2017, 3, 21, 10, i))
))
r = get(
url=job_template.get_absolute_url(),
user=admin_user,
expect=200
)
recent_jobs = r.data['summary_fields']['recent_jobs']
assert len(recent_jobs) == 10
assert recent_jobs == [{
'id': job.id,
'status': 'failed',
'finished': job.finished,
'type': 'job'
} for job in jobs[-10:][::-1]]
@pytest.mark.django_db
def test_slice_jt_recent_jobs(slice_job_factory, admin_user, get):
workflow_job = slice_job_factory(3, spawn=True)
slice_jt = workflow_job.job_template
r = get(
url=slice_jt.get_absolute_url(),
user=admin_user,
expect=200
)
job_ids = [entry['id'] for entry in r.data['summary_fields']['recent_jobs']]
# decision is that workflow job should be shown in the related jobs
# joblets of the workflow job should NOT be shown
assert job_ids == [workflow_job.pk]
@pytest.mark.django_db
def test_block_unprocessed_events(delete, admin_user, mocker):
time_of_finish = parse("Thu Feb 28 09:10:20 2013 -0500")

View File

@@ -6,7 +6,7 @@ import json
from awx.api.serializers import JobLaunchSerializer
from awx.main.models.credential import Credential
from awx.main.models.inventory import Inventory, Host
from awx.main.models.jobs import Job, JobTemplate
from awx.main.models.jobs import Job, JobTemplate, UnifiedJobTemplate
from awx.api.versioning import reverse
@@ -553,19 +553,20 @@ def test_callback_accept_prompted_extra_var(mocker, survey_spec_factory, job_tem
with mocker.patch('awx.main.access.BaseAccess.check_license'):
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
post(
reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user, expect=201, format='json')
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({
assert UnifiedJobTemplate.create_unified_job.called
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {
'extra_vars': {'survey_var': 4, 'job_launch_var': 3},
'_eager_fields': {'launch_type': 'callback'},
'limit': 'single-host'},
)
'limit': 'single-host'
}
mock_job.signal_start.assert_called_once()
@@ -579,18 +580,19 @@ def test_callback_ignore_unprompted_extra_var(mocker, survey_spec_factory, job_t
with mocker.patch('awx.main.access.BaseAccess.check_license'):
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
post(
reverse('api:job_template_callback', kwargs={'pk':job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user, expect=201, format='json')
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({
'_eager_fields': {'launch_type': 'callback'},
'limit': 'single-host'},
)
assert UnifiedJobTemplate.create_unified_job.called
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {
'limit': 'single-host'
}
mock_job.signal_start.assert_called_once()

View File

@@ -6,7 +6,7 @@ import pytest
# AWX
from awx.api.serializers import JobTemplateSerializer
from awx.api.versioning import reverse
from awx.main.models import Job, JobTemplate, CredentialType
from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplate
from awx.main.migrations import _save_password_keys as save_password_keys
# Django
@@ -519,6 +519,24 @@ def test_launch_with_pending_deletion_inventory(get, post, organization_factory,
assert resp.data['inventory'] == ['The inventory associated with this Job Template is being deleted.']
@pytest.mark.django_db
def test_launch_with_pending_deletion_inventory_workflow(get, post, organization, inventory, admin_user):
wfjt = WorkflowJobTemplate.objects.create(
name='wfjt',
organization=organization,
inventory=inventory
)
inventory.pending_deletion = True
inventory.save()
resp = post(
url=reverse('api:workflow_job_template_launch', kwargs={'pk': wfjt.pk}),
user=admin_user, expect=400
)
assert resp.data['inventory'] == ['The inventory associated with this Workflow is being deleted.']
@pytest.mark.django_db
def test_launch_with_extra_credentials(get, post, organization_factory,
job_template_factory, machine_credential,

View File

@@ -5,10 +5,7 @@ import json
from django.db import connection
from django.test.utils import override_settings
from django.test import Client
from django.core.urlresolvers import resolve
from rest_framework.test import APIRequestFactory
from awx.main.middleware import DeprecatedAuthTokenMiddleware
from awx.main.utils.encryption import decrypt_value, get_encryption_key
from awx.api.versioning import reverse, drf_reverse
from awx.main.models.oauth import (OAuth2Application as Application,
@@ -361,52 +358,3 @@ def test_revoke_refreshtoken(oauth_application, post, get, delete, admin):
new_refresh_token = RefreshToken.objects.all().first()
assert refresh_token == new_refresh_token
assert new_refresh_token.revoked
@pytest.mark.django_db
@pytest.mark.parametrize('fmt', ['json', 'multipart'])
def test_deprecated_authtoken_support(alice, fmt):
kwargs = {
'data': {'username': 'alice', 'password': 'alice'},
'format': fmt
}
request = getattr(APIRequestFactory(), 'post')('/api/v2/authtoken/', **kwargs)
DeprecatedAuthTokenMiddleware().process_request(request)
assert request.path == request.path_info == '/api/v2/users/{}/personal_tokens/'.format(alice.pk)
view, view_args, view_kwargs = resolve(request.path)
resp = view(request, *view_args, **view_kwargs)
assert resp.status_code == 201
assert 'token' in resp.data
assert resp.data['refresh_token'] is None
assert resp.data['scope'] == 'write'
for _type in ('Token', 'Bearer'):
request = getattr(APIRequestFactory(), 'get')(
'/api/v2/me/',
HTTP_AUTHORIZATION=' '.join([_type, resp.data['token']])
)
DeprecatedAuthTokenMiddleware().process_request(request)
view, view_args, view_kwargs = resolve(request.path)
assert view(request, *view_args, **view_kwargs).status_code == 200
@pytest.mark.django_db
def test_deprecated_authtoken_invalid_username(alice):
kwargs = {
'data': {'username': 'nobody', 'password': 'nobody'},
'format': 'json'
}
request = getattr(APIRequestFactory(), 'post')('/api/v2/authtoken/', **kwargs)
resp = DeprecatedAuthTokenMiddleware().process_request(request)
assert resp.status_code == 401
@pytest.mark.django_db
def test_deprecated_authtoken_missing_credentials(alice):
kwargs = {
'data': {},
'format': 'json'
}
request = getattr(APIRequestFactory(), 'post')('/api/v2/authtoken/', **kwargs)
resp = DeprecatedAuthTokenMiddleware().process_request(request)
assert resp.status_code == 401

View File

@@ -34,6 +34,30 @@ def test_wfjt_schedule_accepted(post, workflow_job_template, admin_user):
post(url, {'name': 'test sch', 'rrule': RRULE_EXAMPLE}, admin_user, expect=201)
@pytest.mark.django_db
def test_wfjt_unprompted_inventory_rejected(post, workflow_job_template, inventory, admin_user):
r = post(
url=reverse('api:workflow_job_template_schedules_list', kwargs={'pk': workflow_job_template.id}),
data={'name': 'test sch', 'rrule': RRULE_EXAMPLE, 'inventory': inventory.pk},
user=admin_user,
expect=400
)
assert r.data['inventory'] == ['Field is not configured to prompt on launch.']
@pytest.mark.django_db
def test_wfjt_unprompted_inventory_accepted(post, workflow_job_template, inventory, admin_user):
workflow_job_template.ask_inventory_on_launch = True
workflow_job_template.save()
r = post(
url=reverse('api:workflow_job_template_schedules_list', kwargs={'pk': workflow_job_template.id}),
data={'name': 'test sch', 'rrule': RRULE_EXAMPLE, 'inventory': inventory.pk},
user=admin_user,
expect=201
)
assert Schedule.objects.get(pk=r.data['id']).inventory == inventory
@pytest.mark.django_db
def test_valid_survey_answer(post, admin_user, project, inventory, survey_spec_factory):
job_template = JobTemplate.objects.create(

View File

@@ -13,6 +13,7 @@ def test_empty_inventory(post, get, admin_user, organization, group_factory):
inventory.save()
resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': inventory.pk}), admin_user)
jdata = json.loads(resp.content)
jdata.pop('all')
assert inventory.hosts.count() == 0
assert jdata == {}

View File

@@ -7,6 +7,7 @@ import pytest
import os
from django.conf import settings
from kombu.utils.url import parse_url
# Mock
import mock
@@ -358,3 +359,49 @@ def test_isolated_key_flag_readonly(get, patch, delete, admin):
delete(url, user=admin)
assert settings.AWX_ISOLATED_KEY_GENERATION is True
@pytest.mark.django_db
@pytest.mark.parametrize('headers', [True, False])
def test_saml_x509cert_validation(patch, get, admin, headers):
cert = "MIIEogIBAAKCAQEA1T4za6qBbHxFpN5f9eFvA74MFjrsjcp1uvzOaE23AYKMDEJghJ6dqQ7GwHLNIeIeumqDFmODauIzrgSDJTT5+NG30Rr+rRi0zDkrkBAj/AtA+SaVhbzqB6ZSd7LaMly9XAc+82OKlNpuWS9hPmFaSShzDTXRu5RRyvm4NDCAOGDu5hyVR2pV/ffKDNfNkChnqzvRRW9laQcVmliZhlTGn7nPZ+JbjpwEy0nwW+4zoAiEvwnT52N4xTqIcYOnXtGiaf13dh7FkUfYmS0tzF3+h8QRKwtIm4y+sq84R/kr79/0t5aRUpJynNrECajzmArpL4IjXKTPIyUpTKirJgGnCwIDAQABAoIBAC6bbbm2hpsjfkVOpUKkhxMWUqX5MwK6oYjBAIwjkEAwPFPhnh7eXC87H42oidVCCt1LsmMOVQbjcdAzBEb5kTkk/Twi3k8O+1U3maHfJT5NZ2INYNjeNXh+jb/Dw5UGWAzpOIUR2JQ4Oa4cgPCVbppW0O6uOKz6+fWXJv+hKiUoBCC0TiY52iseHJdUOaKNxYRD2IyIzCAxFSd5tZRaARIYDsugXp3E/TdbsVWA7bmjIBOXq+SquTrlB8x7j3B7+Pi09nAJ2U/uV4PHE+/2Fl009ywfmqancvnhwnz+GQ5jjP+gTfghJfbO+Z6M346rS0Vw+osrPgfyudNHlCswHOECgYEA/Cfq25gDP07wo6+wYWbx6LIzj/SSZy/Ux9P8zghQfoZiPoaq7BQBPAzwLNt7JWST8U11LZA8/wo6ch+HSTMk+m5ieVuru2cHxTDqeNlh94eCrNwPJ5ayA5U6LxAuSCTAzp+rv6KQUx1JcKSEHuh+nRYTKvUDE6iA6YtPLO96lLUCgYEA2H5rOPX2M4w1Q9zjol77lplbPRdczXNd0PIzhy8Z2ID65qvmr1nxBG4f2H96ykW8CKLXNvSXreNZ1BhOXc/3Hv+3mm46iitB33gDX4mlV4Jyo/w5IWhUKRyoW6qXquFFsScxRzTrx/9M+aZeRRLdsBk27HavFEg6jrbQ0SleZL8CgYAaM6Op8d/UgkVrHOR9Go9kmK/W85kK8+NuaE7Ksf57R0eKK8AzC9kc/lMuthfTyOG+n0ff1i8gaVWtai1Ko+/hvfqplacAsDIUgYK70AroB8LCZ5ODj5sr2CPVpB7LDFakod7c6O2KVW6+L7oy5AHUHOkc+5y4PDg5DGrLxo68SQKBgAlGoWF3aG0c/MtDk51JZI43U+lyLs++ua5SMlMAeaMFI7rucpvgxqrh7Qthqukvw7a7A22fXUBeFWM5B2KNnpD9c+hyAKAa6l+gzMQzKZpuRGsyS2BbEAAS8kO7M3Rm4o2MmFfstI2FKs8nibJ79HOvIONQ0n+T+K5Utu2/UAQRAoGAFB4fiIyQ0nYzCf18Z4Wvi/qeIOW+UoBonIN3y1h4wruBywINHxFMHx4aVImJ6R09hoJ9D3Mxli3xF/8JIjfTG5fBSGrGnuofl14d/XtRDXbT2uhVXrIkeLL/ojODwwEx0VhxIRUEjPTvEl6AFSRRcBp3KKzQ/cu7ENDY6GTlOUI=" # noqa
if headers:
cert = '-----BEGIN CERTIFICATE-----\n' + cert + '\n-----END CERTIFICATE-----'
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'saml'})
resp = patch(url, user=admin, data={
'SOCIAL_AUTH_SAML_ENABLED_IDPS': {
"okta": {
"attr_last_name": "LastName",
"attr_username": "login",
"entity_id": "http://www.okta.com/abc123",
"attr_user_permanent_id": "login",
"url": "https://example.okta.com/app/abc123/xyz123/sso/saml",
"attr_email": "Email",
"x509cert": cert,
"attr_first_name": "FirstName"
}
}
})
assert resp.status_code == 200
@pytest.mark.django_db
def test_default_broker_url():
url = parse_url(settings.BROKER_URL)
assert url['transport'] == 'amqp'
assert url['hostname'] == 'rabbitmq'
assert url['userid'] == 'guest'
assert url['password'] == 'guest'
assert url['virtual_host'] == '/'
@pytest.mark.django_db
def test_broker_url_with_special_characters():
settings.BROKER_URL = 'amqp://guest:a@ns:ibl3#@rabbitmq:5672//'
url = parse_url(settings.BROKER_URL)
assert url['transport'] == 'amqp'
assert url['hostname'] == 'rabbitmq'
assert url['port'] == 5672
assert url['userid'] == 'guest'
assert url['password'] == 'a@ns:ibl3#'
assert url['virtual_host'] == '/'

View File

@@ -285,9 +285,7 @@ def test_survey_spec_passwords_with_default_required(job_template_factory, post,
@pytest.mark.django_db
@pytest.mark.parametrize('default, status', [
('SUPERSECRET', 200),
(['some', 'invalid', 'list'], 400),
({'some-invalid': 'dict'}, 400),
(False, 400)
])
def test_survey_spec_default_passwords_are_encrypted(job_template, post, admin_user, default, status):
job_template.survey_enabled = True
@@ -317,7 +315,7 @@ def test_survey_spec_default_passwords_are_encrypted(job_template, post, admin_u
'secret_value': default
}
else:
assert "for 'secret_value' expected to be a string." in str(resp.data)
assert "expected to be string." in str(resp.data)
@mock.patch('awx.api.views.feature_enabled', lambda feature: True)
@@ -346,37 +344,6 @@ def test_survey_spec_default_passwords_encrypted_on_update(job_template, post, p
assert updated_jt.survey_spec == JobTemplate.objects.get(pk=job_template.pk).survey_spec
# Tests related to survey content validation
@mock.patch('awx.api.views.feature_enabled', lambda feature: True)
@pytest.mark.django_db
@pytest.mark.survey
def test_survey_spec_non_dict_error(deploy_jobtemplate, post, admin_user):
"""When a question doesn't follow the standard format, verify error thrown."""
response = post(
url=reverse('api:job_template_survey_spec', kwargs={'pk': deploy_jobtemplate.id}),
data={
"description": "Email of the submitter",
"spec": ["What is your email?"], "name": "Email survey"
},
user=admin_user,
expect=400
)
assert response.data['error'] == "Survey question 0 is not a json object."
@mock.patch('awx.api.views.feature_enabled', lambda feature: True)
@pytest.mark.django_db
@pytest.mark.survey
def test_survey_spec_dual_names_error(survey_spec_factory, deploy_jobtemplate, post, user):
response = post(
url=reverse('api:job_template_survey_spec', kwargs={'pk': deploy_jobtemplate.id}),
data=survey_spec_factory(['submitter_email', 'submitter_email']),
user=user('admin', True),
expect=400
)
assert response.data['error'] == "'variable' 'submitter_email' duplicated in survey question 1."
# Test actions that should be allowed with non-survey license
@mock.patch('awx.main.access.BaseAccess.check_license', new=mock_no_surveys)
@pytest.mark.django_db

View File

@@ -1,4 +1,5 @@
import pytest
import json
from awx.api.versioning import reverse
@@ -75,6 +76,52 @@ def test_node_accepts_prompted_fields(inventory, project, workflow_job_template,
user=admin_user, expect=201)
@pytest.mark.django_db
class TestExclusiveRelationshipEnforcement():
@pytest.fixture
def n1(self, workflow_job_template):
return WorkflowJobTemplateNode.objects.create(workflow_job_template=workflow_job_template)
@pytest.fixture
def n2(self, workflow_job_template):
return WorkflowJobTemplateNode.objects.create(workflow_job_template=workflow_job_template)
def generate_url(self, relationship, id):
return reverse('api:workflow_job_template_node_{}_nodes_list'.format(relationship),
kwargs={'pk': id})
relationship_permutations = [
['success', 'failure', 'always'],
['success', 'always', 'failure'],
['failure', 'always', 'success'],
['failure', 'success', 'always'],
['always', 'success', 'failure'],
['always', 'failure', 'success'],
]
@pytest.mark.parametrize("relationships", relationship_permutations, ids=["-".join(item) for item in relationship_permutations])
def test_multi_connections_same_parent_disallowed(self, post, admin_user, n1, n2, relationships):
for index, relationship in enumerate(relationships):
r = post(self.generate_url(relationship, n1.id),
data={'associate': True, 'id': n2.id},
user=admin_user,
expect=204 if index == 0 else 400)
if index != 0:
assert {'Error': 'Relationship not allowed.'} == json.loads(r.content)
@pytest.mark.parametrize("relationship", ['success', 'failure', 'always'])
def test_existing_relationship_allowed(self, post, admin_user, n1, n2, relationship):
post(self.generate_url(relationship, n1.id),
data={'associate': True, 'id': n2.id},
user=admin_user,
expect=204)
post(self.generate_url(relationship, n1.id),
data={'associate': True, 'id': n2.id},
user=admin_user,
expect=204)
@pytest.mark.django_db
class TestNodeCredentials:
'''

View File

@@ -0,0 +1,79 @@
# Python
import datetime
import pytest
import string
import random
import StringIO
# Django
from django.core.management import call_command
from django.core.management.base import CommandError
# AWX
from awx.main.models import RefreshToken
from awx.main.models.oauth import OAuth2AccessToken
from awx.api.versioning import reverse
@pytest.mark.django_db
class TestOAuth2RevokeCommand:
def test_non_existing_user(self):
out = StringIO.StringIO()
fake_username = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
arg = '--user=' + fake_username
with pytest.raises(CommandError) as excinfo:
call_command('revoke_oauth2_tokens', arg, stdout=out)
assert 'A user with that username does not exist' in excinfo.value.message
out.close()
def test_revoke_all_access_tokens(self, post, admin, alice):
url = reverse('api:o_auth2_token_list')
for user in (admin, alice):
post(
url,
{'description': 'test token', 'scope': 'read'},
user
)
assert OAuth2AccessToken.objects.count() == 2
call_command('revoke_oauth2_tokens')
assert OAuth2AccessToken.objects.count() == 0
def test_revoke_access_token_for_user(self, post, admin, alice):
url = reverse('api:o_auth2_token_list')
post(
url,
{'description': 'test token', 'scope': 'read'},
alice
)
assert OAuth2AccessToken.objects.count() == 1
call_command('revoke_oauth2_tokens', '--user=admin')
assert OAuth2AccessToken.objects.count() == 1
call_command('revoke_oauth2_tokens', '--user=alice')
assert OAuth2AccessToken.objects.count() == 0
def test_revoke_all_refresh_tokens(self, post, admin, oauth_application):
url = reverse('api:o_auth2_token_list')
post(
url,
{
'description': 'test token for',
'scope': 'read',
'application': oauth_application.pk
},
admin
)
assert OAuth2AccessToken.objects.count() == 1
assert RefreshToken.objects.count() == 1
call_command('revoke_oauth2_tokens')
assert OAuth2AccessToken.objects.count() == 0
assert RefreshToken.objects.count() == 1
for r in RefreshToken.objects.all():
assert r.revoked is None
call_command('revoke_oauth2_tokens', '--all')
assert RefreshToken.objects.count() == 1
for r in RefreshToken.objects.all():
assert r.revoked is not None
assert isinstance(r.revoked, datetime.datetime)

View File

@@ -8,13 +8,13 @@ import tempfile
import shutil
from datetime import timedelta
from six.moves import xrange
from mock import PropertyMock
# Django
from django.core.urlresolvers import resolve
from django.utils.six.moves.urllib.parse import urlparse
from django.utils import timezone
from django.contrib.auth.models import User
from django.conf import settings
from django.core.serializers.json import DjangoJSONEncoder
from django.db.backends.sqlite3.base import SQLiteCursorWrapper
from jsonbfield.fields import JSONField
@@ -66,17 +66,6 @@ def swagger_autogen(requests=__SWAGGER_REQUESTS__):
return requests
@pytest.fixture(scope="session", autouse=True)
def celery_memory_broker():
'''
FIXME: Not sure how "far" just setting the BROKER_URL will get us.
We may need to influence CELERY's configuration like we do in the old unit tests (see base.py)
Allows django signal code to execute without the need for redis
'''
settings.BROKER_URL='memory://localhost/'
@pytest.fixture
def user():
def u(name, is_superuser=False):
@@ -682,6 +671,24 @@ def job_template_labels(organization, job_template):
return job_template
@pytest.fixture
def jt_linked(job_template_factory, credential, net_credential, vault_credential):
'''
A job template with a reasonably complete set of related objects to
test RBAC and other functionality affected by related objects
'''
objects = job_template_factory(
'testJT', organization='org1', project='proj1', inventory='inventory1',
credential='cred1')
jt = objects.job_template
jt.credentials.add(vault_credential)
jt.save()
# Add AWS cloud credential and network credential
jt.credentials.add(credential)
jt.credentials.add(net_credential)
return jt
@pytest.fixture
def workflow_job_template(organization):
wjt = WorkflowJobTemplate(name='test-workflow_job_template', organization=organization)
@@ -764,3 +771,42 @@ def sqlite_copy_expert(request):
request.addfinalizer(lambda: delattr(SQLiteCursorWrapper, 'copy_expert'))
return path
@pytest.fixture
def disable_database_settings(mocker):
m = mocker.patch('awx.conf.settings.SettingsWrapper.all_supported_settings', new_callable=PropertyMock)
m.return_value = []
@pytest.fixture
def slice_jt_factory(inventory):
def r(N, jt_kwargs=None):
for i in range(N):
inventory.hosts.create(name='foo{}'.format(i))
if not jt_kwargs:
jt_kwargs = {}
return JobTemplate.objects.create(
name='slice-jt-from-factory',
job_slice_count=N,
inventory=inventory,
**jt_kwargs
)
return r
@pytest.fixture
def slice_job_factory(slice_jt_factory):
def r(N, jt_kwargs=None, prompts=None, spawn=False):
slice_jt = slice_jt_factory(N, jt_kwargs=jt_kwargs)
if not prompts:
prompts = {}
slice_job = slice_jt.create_unified_job(**prompts)
if spawn:
for node in slice_job.workflow_nodes.all():
# does what the task manager does for spawning workflow jobs
kv = node.get_job_kwargs()
job = node.unified_job_template.create_unified_job(**kv)
node.job = job
node.save()
return slice_job
return r

View File

@@ -15,9 +15,9 @@ from awx.main.models import (
)
# other AWX
from awx.main.utils import model_to_dict
from awx.main.utils import model_to_dict, model_instance_diff
from awx.main.utils.common import get_allowed_fields
from awx.api.serializers import InventorySourceSerializer
from awx.main.signals import model_serializer_mapping
# Django
from django.contrib.auth.models import AnonymousUser
@@ -26,11 +26,6 @@ from django.contrib.auth.models import AnonymousUser
from crum import impersonate
model_serializer_mapping = {
InventorySource: InventorySourceSerializer
}
class TestImplicitRolesOmitted:
'''
Test that there is exactly 1 "create" entry in the activity stream for
@@ -220,8 +215,38 @@ def test_modified_not_allowed_field(somecloud_type):
activity_stream_registrar, but did not add its serializer to
the model->serializer mapping.
'''
from awx.main.signals import model_serializer_mapping
from awx.main.registrar import activity_stream_registrar
for Model in activity_stream_registrar.models:
assert 'modified' not in get_allowed_fields(Model(), model_serializer_mapping), Model
@pytest.mark.django_db
def test_survey_spec_create_entry(job_template, survey_spec_factory):
start_count = job_template.activitystream_set.count()
job_template.survey_spec = survey_spec_factory('foo')
job_template.save()
assert job_template.activitystream_set.count() == start_count + 1
@pytest.mark.django_db
def test_survey_create_diff(job_template, survey_spec_factory):
old = JobTemplate.objects.get(pk=job_template.pk)
job_template.survey_spec = survey_spec_factory('foo')
before, after = model_instance_diff(old, job_template, model_serializer_mapping)['survey_spec']
assert before == '{}'
assert json.loads(after) == survey_spec_factory('foo')
@pytest.mark.django_db
def test_saved_passwords_hidden_activity(workflow_job_template, job_template_with_survey_passwords):
node_with_passwords = workflow_job_template.workflow_nodes.create(
unified_job_template=job_template_with_survey_passwords,
extra_data={'bbbb': '$encrypted$fooooo'},
survey_passwords={'bbbb': '$encrypted$'}
)
node_with_passwords.delete()
entry = ActivityStream.objects.order_by('timestamp').last()
changes = json.loads(entry.changes)
assert 'survey_passwords' not in changes
assert json.loads(changes['extra_data'])['bbbb'] == '$encrypted$'

View File

@@ -38,6 +38,70 @@ class TestInventoryScript:
'remote_tower_id': host.id
}
def test_all_group(self, inventory):
inventory.groups.create(name='all', variables={'a1': 'a1'})
# make sure we return a1 details in output
data = inventory.get_script_data()
assert 'all' in data
assert data['all'] == {
'hosts': [],
'children': [],
'vars': {
'a1': 'a1'
}
}
def test_grandparent_group(self, inventory):
g1 = inventory.groups.create(name='g1', variables={'v1': 'v1'})
g2 = inventory.groups.create(name='g2', variables={'v2': 'v2'})
h1 = inventory.hosts.create(name='h1')
# h1 becomes indirect member of g1 group
g1.children.add(g2)
g2.hosts.add(h1)
# make sure we return g1 details in output
data = inventory.get_script_data(hostvars=1)
assert 'g1' in data
assert 'g2' in data
assert data['g1'] == {
'hosts': [],
'children': ['g2'],
'vars': {'v1': 'v1'}
}
assert data['g2'] == {
'hosts': ['h1'],
'children': [],
'vars': {'v2': 'v2'}
}
def test_slice_subset(self, inventory):
for i in range(3):
inventory.hosts.create(name='host{}'.format(i))
for i in range(3):
assert inventory.get_script_data(slice_number=i + 1, slice_count=3) == {
'all': {'hosts': ['host{}'.format(i)]}
}
def test_slice_subset_with_groups(self, inventory):
hosts = []
for i in range(3):
host = inventory.hosts.create(name='host{}'.format(i))
hosts.append(host)
g1 = inventory.groups.create(name='contains_all_hosts')
for host in hosts:
g1.hosts.add(host)
g2 = inventory.groups.create(name='contains_two_hosts')
for host in hosts[:2]:
g2.hosts.add(host)
for i in range(3):
expected_data = {
'contains_all_hosts': {'hosts': ['host{}'.format(i)], 'children': [], 'vars': {}},
}
if i < 2:
expected_data['contains_two_hosts'] = {'hosts': ['host{}'.format(i)], 'children': [], 'vars': {}}
data = inventory.get_script_data(slice_number=i + 1, slice_count=3)
data.pop('all')
assert data == expected_data
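    # The expected slices above are consistent with a simple modular
    # partition: 0-indexed host i lands in slice (i % slice_count) + 1.
    # A sketch of that rule (the real query implementation may differ):
    def slice_hosts(hosts, slice_number, slice_count):
        return [h for i, h in enumerate(hosts) if i % slice_count == slice_number - 1]

    assert slice_hosts(['host0', 'host1', 'host2'], 2, 3) == ['host1']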
@pytest.mark.django_db
class TestActiveCount:

View File

@@ -1,8 +1,7 @@
import pytest
import six
from awx.main.models import JobTemplate, Job, JobHostSummary
from crum import impersonate
from awx.main.models import JobTemplate, Job, JobHostSummary, WorkflowJob
@pytest.mark.django_db
@@ -18,6 +17,18 @@ def test_awx_virtualenv_from_settings(inventory, project, machine_credential):
assert job.ansible_virtualenv_path == '/venv/ansible'
@pytest.mark.django_db
def test_prevent_slicing():
jt = JobTemplate.objects.create(
name='foo',
job_slice_count=4
)
job = jt.create_unified_job(_prevent_slicing=True)
assert job.job_slice_count == 1
assert job.job_slice_number == 0
assert isinstance(job, Job)
@pytest.mark.django_db
def test_awx_custom_virtualenv(inventory, project, machine_credential):
jt = JobTemplate.objects.create(
@@ -53,21 +64,6 @@ def test_awx_custom_virtualenv_without_jt(project):
assert job.ansible_virtualenv_path == '/venv/fancy-proj'
@pytest.mark.django_db
def test_update_parent_instance(job_template, alice):
# jobs are launched as a particular user, user not saved as modified_by
with impersonate(alice):
assert job_template.current_job is None
assert job_template.status == 'never updated'
assert job_template.modified_by is None
job = job_template.jobs.create(status='new')
job.status = 'pending'
job.save()
assert job_template.current_job == job
assert job_template.status == 'pending'
assert job_template.modified_by is None
@pytest.mark.django_db
def test_job_host_summary_representation(host):
job = Job.objects.create(name='foo')
@@ -81,3 +77,23 @@ def test_job_host_summary_representation(host):
jhs = JobHostSummary.objects.get(pk=jhs.id)
host.delete()
assert 'N/A changed=1 dark=2 failures=3 ok=4 processed=5 skipped=6' == six.text_type(jhs)
@pytest.mark.django_db
class TestSlicingModels:
def test_slice_workflow_spawn(self, slice_jt_factory):
slice_jt = slice_jt_factory(3)
job = slice_jt.create_unified_job()
assert isinstance(job, WorkflowJob)
assert job.job_template == slice_jt
assert job.unified_job_template == slice_jt
assert job.workflow_nodes.count() == 3
def test_slices_with_JT_and_prompts(self, slice_job_factory):
job = slice_job_factory(3, jt_kwargs={'ask_limit_on_launch': True}, prompts={'limit': 'foobar'}, spawn=True)
assert job.launch_config.prompts_dict() == {'limit': 'foobar'}
for node in job.workflow_nodes.all():
assert node.limit is None # data not saved in node prompts
job = node.job
assert job.limit == 'foobar'

View File

@@ -1,13 +1,19 @@
import itertools
import pytest
import mock
import six
# CRUM
from crum import impersonate
# Django
from django.contrib.contenttypes.models import ContentType
# AWX
from awx.main.models import UnifiedJobTemplate, Job, JobTemplate, WorkflowJobTemplate, Project, WorkflowJob, Schedule
from awx.main.models.ha import InstanceGroup
from awx.main.models import (
UnifiedJobTemplate, Job, JobTemplate, WorkflowJobTemplate,
Project, WorkflowJob, Schedule,
Credential
)
@pytest.mark.django_db
@@ -55,9 +61,7 @@ class TestCreateUnifiedJob:
job_with_links.save()
job_with_links.credentials.add(machine_credential)
job_with_links.credentials.add(net_credential)
with mocker.patch('awx.main.models.unified_jobs.UnifiedJobTemplate._get_unified_job_field_names',
return_value=['inventory', 'credential', 'limit']):
second_job = job_with_links.copy_unified_job()
second_job = job_with_links.copy_unified_job()
# Check that job data matches the original variables
assert second_job.credential == job_with_links.credential
@@ -65,47 +69,20 @@ class TestCreateUnifiedJob:
assert second_job.limit == 'my_server'
assert net_credential in second_job.credentials.all()
@pytest.mark.django_db
class TestIsolatedRuns:
def test_low_capacity_isolated_instance_selected(self):
ig = InstanceGroup.objects.create(name='tower')
iso_ig = InstanceGroup.objects.create(name='thepentagon', controller=ig)
iso_ig.instances.create(hostname='iso1', capacity=50)
i2 = iso_ig.instances.create(hostname='iso2', capacity=200)
job = Job.objects.create(
instance_group=iso_ig,
celery_task_id='something',
)
mock_async = mock.MagicMock()
success_callback = mock.MagicMock()
error_callback = mock.MagicMock()
class MockTaskClass:
apply_async = mock_async
with mock.patch.object(job, '_get_task_class') as task_class:
task_class.return_value = MockTaskClass
job.start_celery_task([], error_callback, success_callback, 'thepentagon')
mock_async.assert_called_with([job.id], [],
link_error=error_callback,
link=success_callback,
queue='thepentagon',
task_id='something')
i2.capacity = 20
i2.save()
with mock.patch.object(job, '_get_task_class') as task_class:
task_class.return_value = MockTaskClass
job.start_celery_task([], error_callback, success_callback, 'thepentagon')
mock_async.assert_called_with([job.id], [],
link_error=error_callback,
link=success_callback,
queue='thepentagon',
task_id='something')
def test_job_relaunch_modifed_jt(self, jt_linked):
# Replace all credentials with a new one of same type
new_creds = []
for cred in jt_linked.credentials.all():
new_creds.append(Credential.objects.create(
name=six.text_type(cred.name) + six.text_type('_new'),
credential_type=cred.credential_type,
inputs=cred.inputs
))
job = jt_linked.create_unified_job()
jt_linked.credentials.clear()
jt_linked.credentials.add(*new_creds)
relaunched_job = job.copy_unified_job()
assert set(relaunched_job.credentials.all()) == set(new_creds)
@pytest.mark.django_db
@@ -178,3 +155,120 @@ def test_event_processing_not_finished():
def test_event_model_undefined():
wj = WorkflowJob.objects.create(name='foobar', status='finished')
assert wj.event_processing_finished
@pytest.mark.django_db
class TestUpdateParentInstance:
def test_template_modified_by_not_changed_on_launch(self, job_template, alice):
# jobs are launched as a particular user; that user is not saved as the JT's modified_by
with impersonate(alice):
assert job_template.current_job is None
assert job_template.status == 'never updated'
assert job_template.modified_by is None
job = job_template.jobs.create(status='new')
job.status = 'pending'
job.save()
assert job_template.current_job == job
assert job_template.status == 'pending'
assert job_template.modified_by is None
def check_update(self, project, status):
pu_check = project.project_updates.create(
job_type='check', status='new', launch_type='manual'
)
pu_check.status = 'running'
pu_check.save()
# these should always be updated for a running check job
assert project.current_job == pu_check
assert project.status == 'running'
pu_check.status = status
pu_check.save()
return pu_check
def run_update(self, project, status):
pu_run = project.project_updates.create(
job_type='run', status='new', launch_type='sync'
)
pu_run.status = 'running'
pu_run.save()
pu_run.status = status
pu_run.save()
return pu_run
def test_project_update_fails_project(self, project):
# This is the normal server auto-update on create behavior
assert project.status == 'never updated'
pu_check = self.check_update(project, status='failed')
assert project.last_job == pu_check
assert project.status == 'failed'
def test_project_sync_with_skip_update(self, project):
# syncs may be permitted to change project status
# only if prior status is "never updated"
assert project.status == 'never updated'
pu_run = self.run_update(project, status='successful')
assert project.last_job == pu_run
assert project.status == 'successful'
def test_project_sync_does_not_fail_project(self, project):
# Mirrors normal server behavior, where creating a project auto-updates it
# the check update has to be created first, otherwise it will fight with the last_job logic
assert project.status == 'never updated'
pu_check = self.check_update(project, status='successful')
assert project.status == 'successful'
self.run_update(project, status='failed')
assert project.last_job == pu_check
assert project.status == 'successful'
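Taken together, the three tests above pin down a small transition rule: a check-type update always drives the project's status, while a sync ('run' type) may only set it when the project was never updated before. A sketch of that rule under those assumptions (next_project_status is illustrative, not the model code):

def next_project_status(current, update_type, update_status):
    if update_type == 'check' or current == 'never updated':
        return update_status
    return current

assert next_project_status('never updated', 'check', 'failed') == 'failed'
assert next_project_status('never updated', 'run', 'successful') == 'successful'
assert next_project_status('successful', 'run', 'failed') == 'successful'  # a sync cannot fail the project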
@pytest.mark.django_db
class TestTaskImpact:
@pytest.fixture
def job_host_limit(self, job_template, inventory):
def r(hosts, forks):
for i in range(hosts):
inventory.hosts.create(name='foo' + str(i))
job = Job.objects.create(
name='fake-job',
launch_type='workflow',
job_template=job_template,
inventory=inventory,
forks=forks
)
return job
return r
def test_limit_task_impact(self, job_host_limit):
job = job_host_limit(5, 2)
assert job.task_impact == 2 + 1 # forks is the constraint
def test_host_task_impact(self, job_host_limit):
job = job_host_limit(3, 5)
assert job.task_impact == 3 + 1 # host count is the constraint
def test_shard_task_impact(self, slice_job_factory):
# factory creates one host per slice
workflow_job = slice_job_factory(3, jt_kwargs={'forks': 50}, spawn=True)
# arrange the jobs by their number
jobs = [None for i in range(3)]
for node in workflow_job.workflow_nodes.all():
jobs[node.job.job_slice_number - 1] = node.job
# Even distribution - all jobs run on 1 host
assert [
len(jobs[0].inventory.get_script_data(slice_number=i + 1, slice_count=3)['all']['hosts'])
for i in range(3)
] == [1, 1, 1]
assert [job.task_impact for job in jobs] == [2, 2, 2] # plus one base task impact
# Uneven distribution - first job takes the extra host
jobs[0].inventory.hosts.create(name='remainder_foo')
assert [
len(jobs[0].inventory.get_script_data(slice_number=i + 1, slice_count=3)['all']['hosts'])
for i in range(3)
] == [2, 1, 1]
assert [job.task_impact for job in jobs] == [3, 2, 2]
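The assertions above are consistent with task_impact being the smaller of the job's forks and its host count, plus one base unit for the job itself. A sketch of that formula (a hypothetical stand-in for the model property):

def task_impact(host_count, forks):
    return min(host_count, forks) + 1

assert task_impact(5, 2) == 3    # forks is the constraint
assert task_impact(3, 5) == 4    # host count is the constraint
assert task_impact(1, 50) == 2   # a one-host slice of the sliced job above
assert task_impact(2, 50) == 3   # the slice that took the remainder host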


@@ -3,10 +3,17 @@
import pytest
# AWX
from awx.main.models.workflow import WorkflowJob, WorkflowJobNode, WorkflowJobTemplateNode, WorkflowJobTemplate
from awx.main.models.workflow import (
WorkflowJob,
WorkflowJobNode,
WorkflowJobTemplateNode,
WorkflowJobTemplate,
)
from awx.main.models.jobs import JobTemplate, Job
from awx.main.models.projects import ProjectUpdate
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.api.versioning import reverse
from awx.api.views import WorkflowJobTemplateNodeSuccessNodesList
# Django
from django.test import TransactionTestCase
@@ -57,46 +64,100 @@ class TestWorkflowDAGFunctional(TransactionTestCase):
def test_workflow_done(self):
wfj = self.workflow_job(states=['failed', None, None, 'successful', None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
assert 3 == len(dag.mark_dnr_nodes())
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertFalse(has_failed)
assert reason is None
# verify that relaunched WFJ fails if a JT leaf is deleted
for jt in JobTemplate.objects.all():
jt.delete()
relaunched = wfj.create_relaunch_workflow_job()
dag = WorkflowDAG(workflow_job=relaunched)
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
def test_workflow_fails_for_unfinished_node(self):
wfj = self.workflow_job(states=['error', None, None, None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
assert "Workflow job node {} related unified job template missing".format(wfj.workflow_nodes.all()[0].id)
def test_workflow_fails_for_no_error_handler(self):
wfj = self.workflow_job(states=['successful', 'failed', None, None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
def test_workflow_fails_leaf(self):
wfj = self.workflow_job(states=['successful', 'successful', 'failed', None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
def test_workflow_not_finished(self):
wfj = self.workflow_job(states=['new', None, None, None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertFalse(is_done)
self.assertFalse(has_failed)
assert reason is None
@pytest.mark.django_db
class TestWorkflowDNR():
@pytest.fixture
def workflow_job_fn(self):
def fn(states=['new', 'new', 'new', 'new', 'new', 'new']):
r"""
Workflow topology:
node[0]
/ |
s f
/ |
node[1] node[3]
/ |
s f
/ |
node[2] node[4]
\ |
s f
\ |
node[5]
"""
wfj = WorkflowJob.objects.create()
jt = JobTemplate.objects.create(name='test-jt')
nodes = [WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=jt) for i in range(0, 6)]
for node, state in zip(nodes, states):
if state:
node.job = jt.create_job()
node.job.status = state
node.job.save()
node.save()
nodes[0].success_nodes.add(nodes[1])
nodes[1].success_nodes.add(nodes[2])
nodes[0].failure_nodes.add(nodes[3])
nodes[3].failure_nodes.add(nodes[4])
nodes[2].success_nodes.add(nodes[5])
nodes[4].failure_nodes.add(nodes[5])
return wfj, nodes
return fn
def test_workflow_dnr_because_parent(self, workflow_job_fn):
wfj, nodes = workflow_job_fn(states=['successful', None, None, None, None, None,])
dag = WorkflowDAG(workflow_job=wfj)
workflow_nodes = dag.mark_dnr_nodes()
assert 2 == len(workflow_nodes)
assert nodes[3] in workflow_nodes
assert nodes[4] in workflow_nodes
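A hedged sketch (not the WorkflowDAG implementation) of the do-not-run rule the test above exercises: walk from the root, following success edges of successful nodes, failure edges of failed nodes, and all edges of still-undecided nodes; anything left unreachable can never run and is marked DNR.

def dnr_nodes(n, success, failure, status):
    # success/failure: {parent: [children]}; status: {node: 'successful' | 'failed' | absent}
    reachable, stack = {0}, [0]
    while stack:
        node = stack.pop()
        children = []
        if status.get(node) != 'failed':
            children += success.get(node, [])
        if status.get(node) != 'successful':
            children += failure.get(node, [])
        for child in children:
            if child not in reachable:
                reachable.add(child)
                stack.append(child)
    return [i for i in range(n) if i not in reachable]

# Mirrors test_workflow_dnr_because_parent: node 0 succeeded, so its
# failure branch (nodes 3 and 4) can never run
success = {0: [1], 1: [2], 2: [5]}
failure = {0: [3], 3: [4], 4: [5]}
assert dnr_nodes(6, success, failure, {0: 'successful'}) == [3, 4]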
@pytest.mark.django_db
@@ -125,7 +186,7 @@ class TestWorkflowJob:
assert nodes[0].failure_nodes.filter(id=nodes[3].id).exists()
assert nodes[3].failure_nodes.filter(id=nodes[4].id).exists()
def test_inherit_ancestor_artifacts_from_job(self, project, mocker):
def test_inherit_ancestor_artifacts_from_job(self, job_template, mocker):
"""
Assure that nodes along the line of execution inherit artifacts
from both jobs ran, and from the accumulation of old jobs
@@ -136,13 +197,13 @@ class TestWorkflowJob:
# Workflow job nodes
job_node = WorkflowJobNode.objects.create(workflow_job=wfj, job=job,
ancestor_artifacts={'a': 42})
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj)
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=job_template)
# Connect old job -> new job
mocker.patch.object(queued_node, 'get_parent_nodes', lambda: [job_node])
assert queued_node.get_job_kwargs()['extra_vars'] == {'a': 42, 'b': 43}
assert queued_node.ancestor_artifacts == {'a': 42, 'b': 43}
def test_inherit_ancestor_artifacts_from_project_update(self, project, mocker):
def test_inherit_ancestor_artifacts_from_project_update(self, project, job_template, mocker):
"""
Test that the existence of a project update (no artifacts) does
not break the flow of ancestor_artifacts
@@ -153,7 +214,7 @@ class TestWorkflowJob:
# Workflow job nodes
project_node = WorkflowJobNode.objects.create(workflow_job=wfj, job=update,
ancestor_artifacts={'a': 42, 'b': 43})
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj)
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=job_template)
# Connect project update -> new job
mocker.patch.object(queued_node, 'get_parent_nodes', lambda: [project_node])
assert queued_node.get_job_kwargs()['extra_vars'] == {'a': 42, 'b': 43}
@@ -185,20 +246,20 @@ class TestWorkflowJobTemplate:
assert parent_qs[0] == wfjt.workflow_job_template_nodes.all()[1]
def test_topology_validator(self, wfjt):
from awx.api.views import WorkflowJobTemplateNodeChildrenBaseList
test_view = WorkflowJobTemplateNodeChildrenBaseList()
test_view = WorkflowJobTemplateNodeSuccessNodesList()
nodes = wfjt.workflow_job_template_nodes.all()
node_assoc = WorkflowJobTemplateNode.objects.create(workflow_job_template=wfjt)
nodes[2].always_nodes.add(node_assoc)
# test cycle validation
assert test_view.is_valid_relation(node_assoc, nodes[0]) == {'Error': 'Cycle detected.'}
# test multi-ancestor validation
assert test_view.is_valid_relation(node_assoc, nodes[1]) == {'Error': 'Multiple parent relationship not allowed.'}
# test mutex validation
test_view.relationship = 'failure_nodes'
node_assoc_1 = WorkflowJobTemplateNode.objects.create(workflow_job_template=wfjt)
assert (test_view.is_valid_relation(nodes[2], node_assoc_1) ==
{'Error': 'Cannot associate failure_nodes when always_nodes have been associated.'})
print(nodes[0].success_nodes.get(id=nodes[1].id).failure_nodes.get(id=nodes[2].id))
assert test_view.is_valid_relation(nodes[2], nodes[0]) == {'Error': 'Cycle detected.'}
def test_always_success_failure_creation(self, wfjt, admin, get):
wfjt_node = wfjt.workflow_job_template_nodes.all()[1]
node = WorkflowJobTemplateNode.objects.create(workflow_job_template=wfjt)
wfjt_node.always_nodes.add(node)
assert len(node.get_parent_nodes()) == 1
url = reverse('api:workflow_job_template_node_list') + str(wfjt_node.id) + '/'
resp = get(url, admin)
assert node.id in resp.data['always_nodes']
def test_wfjt_unique_together_with_org(self, organization):
wfjt1 = WorkflowJobTemplate(name='foo', organization=organization)
@@ -208,3 +269,55 @@ class TestWorkflowJobTemplate:
wfjt2.validate_unique()
wfjt2 = WorkflowJobTemplate(name='foo', organization=None)
wfjt2.validate_unique()
@pytest.mark.django_db
def test_workflow_ancestors(organization):
# Spawn order of templates grandparent -> parent -> child
# create child WFJT and workflow job
child = WorkflowJobTemplate.objects.create(organization=organization, name='child')
child_job = WorkflowJob.objects.create(
workflow_job_template=child,
launch_type='workflow'
)
# create parent WFJT and workflow job, and link it up
parent = WorkflowJobTemplate.objects.create(organization=organization, name='parent')
parent_job = WorkflowJob.objects.create(
workflow_job_template=parent,
launch_type='workflow'
)
WorkflowJobNode.objects.create(
workflow_job=parent_job,
unified_job_template=child,
job=child_job
)
# create grandparent WFJT and workflow job and link it up
grandparent = WorkflowJobTemplate.objects.create(organization=organization, name='grandparent')
grandparent_job = WorkflowJob.objects.create(
workflow_job_template=grandparent,
launch_type='schedule'
)
WorkflowJobNode.objects.create(
workflow_job=grandparent_job,
unified_job_template=parent,
job=parent_job
)
# the ancestors method gives a list of WFJT objects, nearest ancestor first
assert child_job.get_ancestor_workflows() == [parent, grandparent]
@pytest.mark.django_db
def test_workflow_ancestors_recursion_prevention(organization):
# This is toxic database data; the test verifies that it doesn't create an infinite loop
wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='child')
wfj = WorkflowJob.objects.create(
workflow_job_template=wfjt,
launch_type='workflow'
)
WorkflowJobNode.objects.create(
workflow_job=wfj,
unified_job_template=wfjt,
job=wfj # well, this is a problem
)
# mostly, we just care that this assertion finishes in finite time
assert wfj.get_ancestor_workflows() == []
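Both ancestor tests above fit a simple upward walk with a visited-set guard; a sketch under those assumptions (ancestor_workflows and its dict arguments are illustrative; the real method lives on WorkflowJob):

def ancestor_workflows(job_id, parent_of, template_of):
    # parent_of: {workflow job id: id of the workflow job that spawned it}
    # template_of: {workflow job id: its WFJT}
    ancestors, seen = [], {job_id}
    while job_id in parent_of:
        job_id = parent_of[job_id]
        if job_id in seen:  # toxic self-referencing data: stop walking
            break
        seen.add(job_id)
        ancestors.append(template_of[job_id])
    return ancestors

# child -> parent -> grandparent, nearest ancestor first
assert ancestor_workflows(1, {1: 2, 2: 3}, {2: 'parent', 3: 'grandparent'}) == ['parent', 'grandparent']
# the self-loop from the recursion-prevention test terminates with no ancestors
assert ancestor_workflows(1, {1: 1}, {1: 'child'}) == []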


@@ -1,19 +1,11 @@
import pytest
import mock
import json
from datetime import timedelta, datetime
from django.core.cache import cache
from django.utils.timezone import now as tz_now
from datetime import timedelta
from awx.main.scheduler import TaskManager
from awx.main.utils import encrypt_field
from awx.main.models import (
Job,
Instance,
WorkflowJob,
)
from awx.main.models.notifications import JobNotificationMixin
from awx.main.models import WorkflowJobTemplate, JobTemplate
@pytest.mark.django_db
@@ -30,6 +22,95 @@ def test_single_job_scheduler_launch(default_instance_group, job_template_factor
TaskManager.start_task.assert_called_once_with(j, default_instance_group, [], instance)
@pytest.mark.django_db
class TestJobLifeCycle:
def run_tm(self, tm, expect_channel=None, expect_schedule=None, expect_commit=None):
"""Test helper method that takes parameters to assert against
expect_channel - list of expected websocket emit channel message calls
expect_schedule - list of expected calls to reschedule itself
expect_commit - list of expected on_commit calls
If any of these are None, then the assertion is not made.
"""
if expect_schedule and len(expect_schedule) > 1:
raise RuntimeError('Task manager should reschedule itself one time, at most.')
with mock.patch('awx.main.models.unified_jobs.UnifiedJob.websocket_emit_status') as mock_channel:
with mock.patch('awx.main.utils.common._schedule_task_manager') as tm_sch:
# Jobs are ultimately submitted in the on_commit hook, but the hook will not
# actually run here, because it waits for the outermost transaction to commit,
# which in this case is the test itself
with mock.patch('django.db.connection.on_commit') as mock_commit:
tm.schedule()
if expect_channel is not None:
assert mock_channel.mock_calls == expect_channel
if expect_schedule is not None:
assert tm_sch.mock_calls == expect_schedule
if expect_commit is not None:
assert mock_commit.mock_calls == expect_commit
def test_task_manager_workflow_rescheduling(self, job_template_factory, inventory, project, default_instance_group):
jt = JobTemplate.objects.create(
allow_simultaneous=True,
inventory=inventory,
project=project,
playbook='helloworld.yml'
)
wfjt = WorkflowJobTemplate.objects.create(name='foo')
for i in range(2):
wfjt.workflow_nodes.create(
unified_job_template=jt
)
wj = wfjt.create_unified_job()
assert wj.workflow_nodes.count() == 2
wj.signal_start()
tm = TaskManager()
# Transitions workflow job to running
# needs to re-schedule so it spawns jobs next round
self.run_tm(tm, [mock.call('running')], [mock.call()])
# Spawns jobs
# needs re-schedule to submit jobs next round
self.run_tm(tm, [mock.call('pending'), mock.call('pending')], [mock.call()])
assert jt.jobs.count() == 2 # task manager spawned jobs
# Submits jobs
# intermission - jobs will run and reschedule TM when finished
self.run_tm(tm, [mock.call('waiting'), mock.call('waiting')], [])
# I am the job runner
for job in jt.jobs.all():
job.status = 'successful'
job.save()
# Finishes workflow
# no further action is necessary, so rescheduling should not happen
self.run_tm(tm, [mock.call('successful')], [])
def test_task_manager_workflow_workflow_rescheduling(self):
wfjts = [WorkflowJobTemplate.objects.create(name='foo')]
for i in range(5):
wfjt = WorkflowJobTemplate.objects.create(name='foo{}'.format(i))
wfjts[-1].workflow_nodes.create(
unified_job_template=wfjt
)
wfjts.append(wfjt)
wj = wfjts[0].create_unified_job()
wj.signal_start()
tm = TaskManager()
while wfjts[0].status != 'successful':
wfjts[1].refresh_from_db()
if wfjts[1].status == 'successful':
# final run, no more work to do
self.run_tm(tm, expect_schedule=[])
else:
self.run_tm(tm, expect_schedule=[mock.call()])
wfjts[0].refresh_from_db()
@pytest.mark.django_db
def test_single_jt_multi_job_launch_blocks_last(default_instance_group, job_template_factory, mocker):
instance = default_instance_group.instances.all()[0]
@@ -245,140 +326,3 @@ def test_shared_dependencies_launch(default_instance_group, job_template_factory
iu = [x for x in ii.inventory_updates.all()]
assert len(pu) == 1
assert len(iu) == 1
@pytest.mark.django_db
def test_cleanup_interval(mock_cache):
with mock.patch.multiple('awx.main.scheduler.task_manager.cache', get=mock_cache.get, set=mock_cache.set):
assert mock_cache.get('last_celery_task_cleanup') is None
TaskManager().cleanup_inconsistent_celery_tasks()
last_cleanup = mock_cache.get('last_celery_task_cleanup')
assert isinstance(last_cleanup, datetime)
TaskManager().cleanup_inconsistent_celery_tasks()
assert cache.get('last_celery_task_cleanup') == last_cleanup
class TestReaper():
@pytest.fixture
def all_jobs(self, mocker):
now = tz_now()
Instance.objects.create(hostname='host1', capacity=100)
Instance.objects.create(hostname='host2', capacity=100)
Instance.objects.create(hostname='host3_split', capacity=100)
Instance.objects.create(hostname='host4_offline', capacity=0)
j1 = Job.objects.create(status='pending', execution_node='host1')
j2 = Job.objects.create(status='waiting', celery_task_id='considered_j2')
j3 = Job.objects.create(status='waiting', celery_task_id='considered_j3')
j3.modified = now - timedelta(seconds=60)
j3.save(update_fields=['modified'])
j4 = Job.objects.create(status='running', celery_task_id='considered_j4', execution_node='host1')
j5 = Job.objects.create(status='waiting', celery_task_id='reapable_j5')
j5.modified = now - timedelta(seconds=60)
j5.save(update_fields=['modified'])
j6 = Job.objects.create(status='waiting', celery_task_id='considered_j6')
j6.modified = now - timedelta(seconds=60)
j6.save(update_fields=['modified'])
j7 = Job.objects.create(status='running', celery_task_id='considered_j7', execution_node='host2')
j8 = Job.objects.create(status='running', celery_task_id='reapable_j7', execution_node='host2')
j9 = Job.objects.create(status='waiting', celery_task_id='reapable_j8')
j9.modified = now - timedelta(seconds=60)
j9.save(update_fields=['modified'])
j10 = Job.objects.create(status='running', celery_task_id='host3_j10', execution_node='host3_split')
j11 = Job.objects.create(status='running', celery_task_id='host4_j11', execution_node='host4_offline')
j12 = WorkflowJob.objects.create(status='running', celery_task_id='workflow_job', execution_node='host1')
js = [j1, j2, j3, j4, j5, j6, j7, j8, j9, j10, j11, j12]
for j in js:
j.save = mocker.Mock(wraps=j.save)
j.websocket_emit_status = mocker.Mock()
return js
@pytest.fixture
def considered_jobs(self, all_jobs):
return all_jobs[2:7] + [all_jobs[10]]
@pytest.fixture
def running_tasks(self, all_jobs):
return {
'host1': [all_jobs[3]],
'host2': [all_jobs[7], all_jobs[8]],
'host3_split': [all_jobs[9]],
'host4_offline': [all_jobs[10]],
}
@pytest.fixture
def waiting_tasks(self, all_jobs):
return [all_jobs[2], all_jobs[4], all_jobs[5], all_jobs[8]]
@pytest.fixture
def reapable_jobs(self, all_jobs):
return [all_jobs[4], all_jobs[7], all_jobs[10]]
@pytest.fixture
def unconsidered_jobs(self, all_jobs):
return all_jobs[0:1] + all_jobs[5:7]
@pytest.fixture
def active_tasks(self):
return ([], {
'host1': ['considered_j2', 'considered_j3', 'considered_j4',],
'host2': ['considered_j6', 'considered_j7'],
})
@pytest.mark.django_db
@mock.patch.object(JobNotificationMixin, 'send_notification_templates')
@mock.patch.object(TaskManager, 'get_active_tasks', lambda self: ([], []))
def test_cleanup_inconsistent_task(self, notify, active_tasks, considered_jobs, reapable_jobs, running_tasks, waiting_tasks, mocker, settings):
settings.AWX_INCONSISTENT_TASK_INTERVAL = 0
tm = TaskManager()
tm.get_running_tasks = mocker.Mock(return_value=(running_tasks, waiting_tasks))
tm.get_active_tasks = mocker.Mock(return_value=active_tasks)
tm.cleanup_inconsistent_celery_tasks()
for j in considered_jobs:
if j not in reapable_jobs:
j.save.assert_not_called()
assert notify.call_count == 4
notify.assert_has_calls([mock.call('failed') for j in reapable_jobs], any_order=True)
for j in reapable_jobs:
j.websocket_emit_status.assert_called_once_with('failed')
assert j.status == 'failed'
assert j.job_explanation == (
'Task was marked as running in Tower but was not present in the job queue, so it has been marked as failed.'
)
@pytest.mark.django_db
def test_get_running_tasks(self, all_jobs):
tm = TaskManager()
# Ensure the query grabs the expected jobs
execution_nodes_jobs, waiting_jobs = tm.get_running_tasks()
assert 'host1' in execution_nodes_jobs
assert 'host2' in execution_nodes_jobs
assert 'host3_split' in execution_nodes_jobs
assert all_jobs[3] in execution_nodes_jobs['host1']
assert all_jobs[6] in execution_nodes_jobs['host2']
assert all_jobs[7] in execution_nodes_jobs['host2']
assert all_jobs[9] in execution_nodes_jobs['host3_split']
assert all_jobs[10] in execution_nodes_jobs['host4_offline']
assert all_jobs[11] not in execution_nodes_jobs['host1']
assert all_jobs[2] in waiting_jobs
assert all_jobs[4] in waiting_jobs
assert all_jobs[5] in waiting_jobs
assert all_jobs[8] in waiting_jobs


@@ -14,6 +14,65 @@ from rest_framework import serializers
EXAMPLE_PRIVATE_KEY = '-----BEGIN PRIVATE KEY-----\nxyz==\n-----END PRIVATE KEY-----'
EXAMPLE_ENCRYPTED_PRIVATE_KEY = '-----BEGIN PRIVATE KEY-----\nProc-Type: 4,ENCRYPTED\nxyz==\n-----END PRIVATE KEY-----'
PKCS8_PRIVATE_KEY = '''-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQD0uyqyUHELQ25B
8lNBu/ZfVx8fPFT6jvAUscxfWLqsZCJrR8BWadXMa/0ALMaUuZbZ8Ug27jztOSO8
w8hJ6dqHaQ2gfbwsfbF6XHaetap0OoAFtnaiULSvljOkoWG+WSyfvJ73ZwEP3KzW
0JbNX24zGFdTFzX1W+8BbLpEIw3XiP9iYPtu0uit6VradMrt2Kdu+VKlQzbG1+89
g70IyFkvopynnWAkA+YXNo08dxOzmci7/G0Cp1Lwh4IAH++HbE2E4odWm5zoCaT7
gcZzKuZs/kkDHaS9O5VjsWGrZ+mp3NgeABbFRP0jDhCtS8QRa94RC6mobtnYoRd7
C1Iz3cdjAgMBAAECggEAb5p9BZUegBrviH5YDmWHnIHP7QAn5p1RibZtM1v0wRHn
ClJNuXqJJ7BlT3Ob2Y3q55ebLYWmXi4NCJOl3mMZJ2A2eSZtrkJhsaHB7G1+/oMB
B9nmLu4r/9i4005PEy16ZpvvSHZ+KvwhC9NSufRXflCO3hL7JdmXXGh3ZwQvV0a7
mP1RIQKIcLynPBTbTH1w30Znj2M4bSjUlsLbOYhwg2YQxa1qKuCtata5qdAVbgny
JYPruBhcHLPGvC0FBcd8zoYWLvQ52hcXNxrl0iN1KY7zIEYmU+3gbuBIoVl2Qo/p
zmH01bo9h9p5DdkjQ6MdjvrOX8aT93S1g9y8WqtoXQKBgQD7E2+RZ/XNIFts9cqG
2S7aywIydkgEmaOJl1fzebutJPPQXJDpQZtEenr+CG7KsRPf8nJ3jc/4OHIsnHYD
WBgXLQz0QWEgXwTRicXsxsARzHKV2Lb8IsXK5vfia+i9fxZV3WwkKVXOmTJHcVl1
XD5zfbAlrQ4r+Uo618zgpchsBQKBgQD5h+A+PX+3PdUPNkHdCltMwaSsXjBcYYoF
uZGR4v8jRQguGD5h8Eyk/cS3VVryYRKiYJCvaPFXTzN6GAsQoSnMW+37GKsbL+oK
5JYoSiCY6BpaJO3uo/UwvitV8EjHdaArb5oBjx1yiobRqhVJ+iH1PKxgnQFI5RgO
4AhnnYMqRwKBgQDUX+VQXlp5LzSGXwX3uH+8jFmIa6qRUZAWU1EO3tqUI5ykk5fz
5g27B8s/U8y7YLuKA581Z1wR/1T8TUA5peuCtxWtChxo8Fa4E0y68ocGxyPpgk2N
yq/56BKnkFVm7Lfs24WctOYjAkyYR9W+ws8Ei71SsSY6pfxW97ESGMkGLQKBgAlW
ABnUCzc75QDQst4mSQwyIosgawbJz3QvYTboG0uihY/T8GGRsAxsQjPpyaFP6HaS
zlcBwiXWHMLwq1lP7lRrDBhc7+nwfP0zWDrhqx6NcI722sAW+lF8i/qHJvHvgLKf
Vk/AnwVuEWU+y9UcurCGOJzUwvuLNr83upjF1+Z5AoGAP91XiBCorJLRJaryi6zt
iCjRxoVsrN6NvAh+MQ1yfAopO4RhxEXM/uUOBkulNhlnp+evSxUwDnFNOWzsZVn9
B6yXdJ9BTWXFX7YhEkosRZCXnNWX4Dz+DGU/yvSHQR/JYj8mRav98TmJU6lK6Vw/
YukmWPxNB+x4Ym3RNPrLpU4=
-----END PRIVATE KEY-----'''
PKCS8_ENCRYPTED_PRIVATE_KEY = '''-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFHzBJBgkqhkiG9w0BBQ0wPDAbBgkqhkiG9w0BBQwwDgQIC4E/DX+33rACAggA
MB0GCWCGSAFlAwQBAgQQbeAsQdsEKoztekP5JXmHFASCBNAmNAMGSnycmN4sYleT
NS9r/ph9v58dv0/hzbE6TCt/i6nmA/D8mtuYB8gm30E/DOuN/dnL3z2gpyvr478P
FjoRnueuwMdLcfEpzEXotJdc7vmUsSjTFq99oh84JHdCfWSRtxkDu64dwp3GPC9+
f1qqg6o4/bPkjni+bCMgq9vgr4K+vuaKzaJqUTEQFuT3CirDGoWGpfRDtDoBmlg8
8esEXoA6RD2DNv6fQrOu9Q4Fc0YkzcoIfY6EJxu+f75LF/NUVpmeJ8QDjj6VFVuX
35ChPYolhBSC/MHBHAVVrn17FAdpLkiz7hIR7KBIg2nuu8oUnPMzDff/CeehYzNb
OH12P9zaHZa3DZHuu27oI6yUdgs8HYNLtBzXH/DbyAeW9alg1Ofber5DO62ieL3E
LqBd4R7qqDSTQmiA6B8LkVIrFrIOqn+nWoM9gHhIrTI409A/oTbpen87sZ4MIQk4
Vjw/A/D5OYhnjOEVgMXrNpKzFfRJPdKh8LYjAaytsLKZk/NOWKpBOcIPhBG/agmx
CX2NE2tpwNo+uWSOG6qTqc8xiQFDsQmbz9YEuux13J3Hg5gVMOJQNMvYpxgFD156
Z82QBMdrY1tRIA91kW97UDj6OEAyz8HnmL+rCiRLGJXKUnZsSET+VHs9+uhBggX8
GxliP35pYlmdejqGWHjiYlGF2+WKrd5axx/m1DcfZdXSaF1IdLKafnNXzUZbOnOM
7RbKHDhBKr/vkBV1SGYgDLNn4hflFzhdI65AKxO2KankzaWxF09/0kRZlmxm+tZX
8r0fHe9IO1KQR/52Kfg1vAQdt2KiyAziw5+tcqQT28knSDboNKpD2Du8BAoH9xG7
0Ca57oBHh/VGzM/niJBjI4EMOPZKuRJsxZF7wOOO6NTh/XFf3LpzsR1y3qoXN4cR
n+/jLUO/3kSGsqso6DT9C0o1pTrnORaJb4aF05jljFx9LYiQUOoLujp8cVW7XxQB
pTgJEFxTN5YA//cwYu3GOJ1AggSeF/WkHCDfCTpTfnO/WTZ0oc+nNyC1lBVfcZ67
GCH8COsfmhusrYiJUN6vYZIr4MfylVg53PUKYbLKYad9bIIaYYuu3MP4CtKDWHvk
8q+GzpjVUCPwjjsea56RMav+xDPvmgIayDptae26Fv+mRPcwqORYMFNtVRG6DUXo
+lrWlaDlkfyfZlQ6sK5c1cJNI8pSPocP/c9TBhP+xFROiWxvMOxhM7DmDl8rhAxU
ttZSukCg7n38AFsUqg5eLLq9sT+P6VmX8d3YflPBIkvNgK7nKUTwgrpbuADo07b0
sVlAY/9SmtHvOCibxphvPYUOhwWo97PzzAsdVGz/xRvH8mzI/Iftbc1U2C2La8FJ
xjaAFwWK/CjQSwnCB8raWo9FUavV6xdb2K0G4VBVDvZO9EJBzX0m6EqQx3XMZf1s
crP0Dp9Ee66vVOlj+XnyyTkUADSYHr8/42Aohv96fJEMjy5gbBl4QQm2QKzAkq9n
lrHvQpCxPixUUAEI0ZL1Y74hcMecnfbpGibrUvSp+cyDCOG92KKxLXEgVYCbXHZu
bOlOanZF3vC6I9dUC2d8I5B87b2K+y57OkWpmS3zxCEpsBqQmn8Te50DnlkPJPBj
GLqbpJyX2r3p/Rmo6mLY71SqpA==
-----END ENCRYPTED PRIVATE KEY-----'''
@pytest.mark.django_db
def test_default_cred_types():
@@ -89,6 +148,10 @@ def test_credential_creation(organization_factory):
[EXAMPLE_PRIVATE_KEY, 'super-secret', False], # unencrypted key, unlock pass
[EXAMPLE_ENCRYPTED_PRIVATE_KEY, 'super-secret', True], # encrypted key, unlock pass
[EXAMPLE_ENCRYPTED_PRIVATE_KEY, None, False], # encrypted key, no unlock pass
[PKCS8_ENCRYPTED_PRIVATE_KEY, 'passme', True], # encrypted PKCS8 key, unlock pass
[PKCS8_ENCRYPTED_PRIVATE_KEY, None, False], # encrypted PKCS8 key, no unlock pass
[PKCS8_PRIVATE_KEY, None, True], # unencrypted PKCS8 key, no unlock pass
[PKCS8_PRIVATE_KEY, 'passme', False], # unencrypted PKCS8 key, unlock pass
[None, None, True], # no key, no unlock pass
[None, 'super-secret', False], # no key, unlock pass
['INVALID-KEY-DATA', None, False], # invalid key data


@@ -0,0 +1,387 @@
import datetime
import multiprocessing
import random
import signal
import time
from django.utils.timezone import now as tz_now
import pytest
from awx.main.models import Job, WorkflowJob, Instance
from awx.main.dispatch import reaper
from awx.main.dispatch.pool import PoolWorker, WorkerPool, AutoscalePool
from awx.main.dispatch.publish import task
from awx.main.dispatch.worker import BaseWorker, TaskWorker
@task()
def add(a, b):
return a + b
class BaseTask(object):
def add(self, a, b):
return add(a, b)
@task()
class Adder(BaseTask):
def run(self, a, b):
return super(Adder, self).add(a, b)
@task(queue='hard-math')
def multiply(a, b):
return a * b
class SimpleWorker(BaseWorker):
def perform_work(self, body, *args):
pass
class ResultWriter(BaseWorker):
def perform_work(self, body, result_queue):
result_queue.put(body + '!!!')
class SlowResultWriter(BaseWorker):
def perform_work(self, body, result_queue):
time.sleep(3)
super(SlowResultWriter, self).perform_work(body, result_queue)
@pytest.mark.usefixtures("disable_database_settings")
class TestPoolWorker:
def setup_method(self, test_method):
self.worker = PoolWorker(1000, self.tick, tuple())
def tick(self):
self.worker.finished.put(self.worker.queue.get()['uuid'])
time.sleep(.5)
def test_qsize(self):
assert self.worker.qsize == 0
for i in range(3):
self.worker.put({'task': 'abc123'})
assert self.worker.qsize == 3
def test_put(self):
assert len(self.worker.managed_tasks) == 0
assert self.worker.messages_finished == 0
self.worker.put({'task': 'abc123'})
assert len(self.worker.managed_tasks) == 1
assert self.worker.messages_sent == 1
def test_managed_tasks(self):
self.worker.put({'task': 'abc123'})
self.worker.calculate_managed_tasks()
assert len(self.worker.managed_tasks) == 1
self.tick()
self.worker.calculate_managed_tasks()
assert len(self.worker.managed_tasks) == 0
def test_current_task(self):
self.worker.put({'task': 'abc123'})
assert self.worker.current_task['task'] == 'abc123'
def test_quit(self):
self.worker.quit()
assert self.worker.queue.get() == 'QUIT'
def test_idle_busy(self):
assert self.worker.idle is True
assert self.worker.busy is False
self.worker.put({'task': 'abc123'})
assert self.worker.busy is True
assert self.worker.idle is False
@pytest.mark.django_db
class TestWorkerPool:
def setup_method(self, test_method):
self.pool = WorkerPool(min_workers=3)
def teardown_method(self, test_method):
self.pool.stop(signal.SIGTERM)
def test_worker(self):
self.pool.init_workers(SimpleWorker().work_loop)
assert len(self.pool) == 3
for worker in self.pool.workers:
assert worker.messages_sent == 0
assert worker.alive is True
def test_single_task(self):
self.pool.init_workers(SimpleWorker().work_loop)
self.pool.write(0, 'xyz')
assert self.pool.workers[0].messages_sent == 1 # worker at index 0 handled one task
assert self.pool.workers[1].messages_sent == 0
assert self.pool.workers[2].messages_sent == 0
def test_queue_preference(self):
self.pool.init_workers(SimpleWorker().work_loop)
self.pool.write(2, 'xyz')
assert self.pool.workers[0].messages_sent == 0
assert self.pool.workers[1].messages_sent == 0
assert self.pool.workers[2].messages_sent == 1 # worker at index 2 handled one task
def test_worker_processing(self):
result_queue = multiprocessing.Queue()
self.pool.init_workers(ResultWriter().work_loop, result_queue)
for i in range(10):
self.pool.write(
random.choice(range(len(self.pool))),
'Hello, Worker {}'.format(i)
)
all_messages = [result_queue.get(timeout=1) for i in range(10)]
all_messages.sort()
assert all_messages == [
'Hello, Worker {}!!!'.format(i)
for i in range(10)
]
total_handled = sum([worker.messages_sent for worker in self.pool.workers])
assert total_handled == 10
@pytest.mark.django_db
class TestAutoScaling:
def setup_method(self, test_method):
self.pool = AutoscalePool(min_workers=2, max_workers=10)
def teardown_method(self, test_method):
self.pool.stop(signal.SIGTERM)
def test_scale_up(self):
result_queue = multiprocessing.Queue()
self.pool.init_workers(SlowResultWriter().work_loop, result_queue)
# start with two workers, write an event to each worker and make it busy
assert len(self.pool) == 2
for i, w in enumerate(self.pool.workers):
w.put('Hello, Worker {}'.format(i))
assert len(self.pool) == 2
# wait for the subprocesses to start working on their tasks and be marked busy
time.sleep(1)
assert self.pool.should_grow
# write a third message, expect a new worker to spawn because all
# workers are busy
self.pool.write(0, 'Hello, Worker {}'.format(2))
assert len(self.pool) == 3
def test_scale_down(self):
self.pool.init_workers(ResultWriter().work_loop, multiprocessing.Queue())
# start with two workers, and scale up to 10 workers
assert len(self.pool) == 2
for i in range(8):
self.pool.up()
assert len(self.pool) == 10
# cleanup should scale back down to the 2-worker minimum
self.pool.cleanup()
assert len(self.pool) == 2
def test_max_scale_up(self):
self.pool.init_workers(ResultWriter().work_loop, multiprocessing.Queue())
assert len(self.pool) == 2
for i in range(25):
self.pool.up()
assert self.pool.max_workers == 10
assert self.pool.full is True
assert len(self.pool) == 10
def test_equal_worker_distribution(self):
# if all workers are busy, spawn new workers *before* adding messages
# to an existing queue
self.pool.init_workers(SlowResultWriter().work_loop, multiprocessing.Queue())
# start with two workers, write an event to each worker and make it busy
assert len(self.pool) == 2
for i in range(10):
self.pool.write(0, 'Hello, World!')
assert len(self.pool) == 10
for w in self.pool.workers:
assert w.busy
assert len(w.managed_tasks) == 1
# the pool is full at 10 workers; the _next_ write should put the message
# into an existing worker's backlog
assert len(self.pool) == 10
for w in self.pool.workers:
assert w.messages_sent == 1
self.pool.write(0, 'Hello, World!')
assert len(self.pool) == 10
assert self.pool.workers[0].messages_sent == 2
def test_lost_worker_autoscale(self):
# if a worker exits, it should be replaced automatically up to min_workers
self.pool.init_workers(ResultWriter().work_loop, multiprocessing.Queue())
# start with two workers, kill one of them
assert len(self.pool) == 2
assert not self.pool.should_grow
alive_pid = self.pool.workers[1].pid
self.pool.workers[0].process.terminate()
time.sleep(1) # wait a moment for sigterm
# clean up the dead worker
self.pool.cleanup()
assert len(self.pool) == 1
assert self.pool.workers[0].pid == alive_pid
# the next queue write should replace the lost worker
self.pool.write(0, 'Hello, Worker')
assert len(self.pool) == 2
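The autoscale tests above amount to a three-step dispatch rule: hand work to an idle worker if one exists, grow the pool while under max_workers, and only then backlog onto an existing worker. A sketch of that ordering (FakeWorker and dispatch are illustrative stand-ins for the pool internals):

class FakeWorker(object):
    def __init__(self):
        self.idle, self.qsize = True, 0

def dispatch(workers, max_workers):
    for w in workers:
        if w.idle:
            return w  # prefer an existing idle worker
    if len(workers) < max_workers:
        workers.append(FakeWorker())  # grow the pool before queueing
        return workers[-1]
    return min(workers, key=lambda w: w.qsize)  # full: backlog the least-loaded

busy = FakeWorker()
busy.idle = False
assert dispatch([busy], max_workers=2) is not busy  # grew instead of queueing
assert dispatch([busy], max_workers=1) is busy      # at the cap: backlog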
@pytest.mark.usefixtures("disable_database_settings")
class TestTaskDispatcher:
@property
def tm(self):
return TaskWorker()
def test_function_dispatch(self):
result = self.tm.perform_work({
'task': 'awx.main.tests.functional.test_dispatch.add',
'args': [2, 2]
})
assert result == 4
def test_method_dispatch(self):
result = self.tm.perform_work({
'task': 'awx.main.tests.functional.test_dispatch.Adder',
'args': [2, 2]
})
assert result == 4
class TestTaskPublisher:
def test_function_callable(self):
assert add(2, 2) == 4
def test_method_callable(self):
assert Adder().run(2, 2) == 4
def test_function_apply_async(self):
message, queue = add.apply_async([2, 2])
assert message['args'] == [2, 2]
assert message['kwargs'] == {}
assert message['task'] == 'awx.main.tests.functional.test_dispatch.add'
assert queue == 'awx_private_queue'
def test_method_apply_async(self):
message, queue = Adder.apply_async([2, 2])
assert message['args'] == [2, 2]
assert message['kwargs'] == {}
assert message['task'] == 'awx.main.tests.functional.test_dispatch.Adder'
assert queue == 'awx_private_queue'
def test_apply_with_queue(self):
message, queue = add.apply_async([2, 2], queue='abc123')
assert queue == 'abc123'
def test_queue_defined_in_task_decorator(self):
message, queue = multiply.apply_async([2, 2])
assert queue == 'hard-math'
def test_queue_overridden_from_task_decorator(self):
message, queue = multiply.apply_async([2, 2], queue='not-so-hard')
assert queue == 'not-so-hard'
def test_apply_with_callable_queuename(self):
message, queue = add.apply_async([2, 2], queue=lambda: 'called')
assert queue == 'called'
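The publisher tests above pin down the queue-resolution order: an explicit queue argument wins, then the queue named in the @task decorator, then the cluster-wide default, and a callable queue is invoked at publish time. A condensed sketch (resolve_queue is illustrative, not the dispatch API):

def resolve_queue(explicit=None, decorator_default=None, fallback='awx_private_queue'):
    queue = explicit or decorator_default or fallback
    return queue() if callable(queue) else queue

assert resolve_queue() == 'awx_private_queue'
assert resolve_queue(decorator_default='hard-math') == 'hard-math'
assert resolve_queue('not-so-hard', 'hard-math') == 'not-so-hard'
assert resolve_queue(lambda: 'called') == 'called'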
yesterday = tz_now() - datetime.timedelta(days=1)
@pytest.mark.django_db
class TestJobReaper(object):
@pytest.mark.parametrize('status, execution_node, controller_node, modified, fail', [
('running', '', '', None, False), # running, not assigned to the instance
('running', 'awx', '', None, True), # running, has the instance as its execution_node
('running', '', 'awx', None, True), # running, has the instance as its controller_node
('waiting', '', '', None, False), # waiting, not assigned to the instance
('waiting', 'awx', '', None, False), # waiting, was edited less than a minute ago
('waiting', '', 'awx', None, False), # waiting, was edited less than a minute ago
('waiting', 'awx', '', yesterday, True), # waiting, assigned to the execution_node, stale
('waiting', '', 'awx', yesterday, True), # waiting, assigned to the controller_node, stale
])
def test_should_reap(self, status, fail, execution_node, controller_node, modified):
i = Instance(hostname='awx')
i.save()
j = Job(
status=status,
execution_node=execution_node,
controller_node=controller_node,
start_args='SENSITIVE',
)
j.save()
if modified:
# we have to edit the modification time _without_ calling save()
# (because .save() overwrites it to _now_)
Job.objects.filter(id=j.id).update(modified=modified)
reaper.reap(i)
job = Job.objects.first()
if fail:
assert job.status == 'failed'
assert 'marked as failed' in job.job_explanation
assert job.start_args == ''
else:
assert job.status == status
@pytest.mark.parametrize('excluded_uuids, fail', [
(['abc123'], False),
([], True),
])
def test_do_not_reap_excluded_uuids(self, excluded_uuids, fail):
i = Instance(hostname='awx')
i.save()
j = Job(
status='running',
execution_node='awx',
controller_node='',
start_args='SENSITIVE',
celery_task_id='abc123',
)
j.save()
# if the UUID is excluded, don't reap it
reaper.reap(i, excluded_uuids=excluded_uuids)
job = Job.objects.first()
if fail:
assert job.status == 'failed'
assert 'marked as failed' in job.job_explanation
assert job.start_args == ''
else:
assert job.status == 'running'
def test_workflow_does_not_reap(self):
i = Instance(hostname='awx')
i.save()
j = WorkflowJob(
status='running',
execution_node='awx'
)
j.save()
reaper.reap(i)
assert WorkflowJob.objects.first().status == 'running'
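The parametrized cases above condense to a small decision rule: reap running jobs assigned to the instance, reap waiting jobs only when they have gone stale, and never touch excluded task UUIDs or workflow jobs. A sketch of that rule (should_reap here is illustrative, not awx.main.dispatch.reaper):

def should_reap(status, on_this_instance, stale, excluded=False, is_workflow=False):
    if excluded or is_workflow:
        return False
    if status == 'running':
        return on_this_instance
    if status == 'waiting':
        return on_this_instance and stale
    return False

assert should_reap('running', True, stale=False)
assert not should_reap('waiting', True, stale=False)  # edited less than a minute ago
assert should_reap('waiting', True, stale=True)
assert not should_reap('running', True, stale=False, excluded=True)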


@@ -0,0 +1,122 @@
import glob
import json
import os
from django.conf import settings
from pip._internal.req import parse_requirements
def test_python_and_js_licenses():
def index_licenses(path):
# Check for GPL (forbidden) and LGPL (need to ship source)
# This is not meant to be an exhaustive check.
def check_license(license_file):
with open(license_file) as f:
data = f.read()
is_lgpl = 'GNU LESSER GENERAL PUBLIC LICENSE' in data.upper()
# The LGPL text refers to the GPL, so an LGPL match must not also count as GPL
# The GPL check is case-sensitive so it matches actual license text, not the PSF license's reference to it
is_gpl = 'GNU GENERAL PUBLIC LICENSE' in data and not is_lgpl
return (is_gpl, is_lgpl)
def find_embedded_source_version(path, name):
for entry in os.listdir(path):
# Check both '-' and '_' variants in filenames, since Python packaging normalizes them
for fname in [name, name.replace('-','_')]:
if entry.startswith(fname) and entry.endswith('.tar.gz'):
entry = entry[:-7]
(n, v) = entry.rsplit('-',1)
return v
return None
licenses = {}
for txt_file in glob.glob('%s/*.txt' % path):
filename = txt_file.split('/')[-1]
name = filename[:-4].lower()
(is_gpl, is_lgpl) = check_license(txt_file)
licenses[name] = {
'name': name,
'filename': filename,
'gpl': is_gpl,
'source_required': (is_gpl or is_lgpl),
'source_version': find_embedded_source_version(path, name)
}
return licenses
def read_api_requirements(path):
ret = {}
for req_file in ['requirements.txt', 'requirements_ansible.txt', 'requirements_git.txt', 'requirements_ansible_git.txt']:
fname = '%s/%s' % (path, req_file)
for reqt in parse_requirements(fname, session=''):
name = reqt.name
version = str(reqt.specifier)
if version.startswith('=='):
version=version[2:]
if reqt.link:
(name, version) = reqt.link.filename.split('@',1)
if name.endswith('.git'):
name = name[:-4]
ret[name] = { 'name': name, 'version': version}
return ret
def read_ui_requirements(path):
def json_deps(jsondata):
ret = {}
deps = jsondata.get('dependencies',{})
for key in deps.keys():
key = key.lower()
devonly = deps[key].get('dev',False)
if not devonly:
if key not in ret.keys():
depname = key.replace('/','-')
ret[depname] = {
'name': depname,
'version': deps[key]['version']
}
ret.update(json_deps(deps[key]))
return ret
with open('%s/package-lock.json' % path) as f:
jsondata = json.load(f)
return json_deps(jsondata)
def remediate_licenses_and_requirements(licenses, requirements):
errors = []
items = licenses.keys()
items.sort()
for item in items:
if item not in requirements.keys() and item != 'awx':
errors.append(" license file %s does not correspond to an existing requirement; it should be removed." % (licenses[item]['filename'],))
continue
# uWSGI has a linking exception
if licenses[item]['gpl'] and item != 'uwsgi':
errors.append(" license for %s is GPL. This software cannot be used." % (item,))
if licenses[item]['source_required']:
version = requirements[item]['version']
if version != licenses[item]['source_version']:
errors.append(" embedded source for %s is %s instead of the required version %s" % (item, licenses[item]['source_version'], version))
elif licenses[item]['source_version']:
errors.append(" embedded source version %s for %s is included despite not being needed" % (licenses[item]['source_version'],item))
items = requirements.keys()
items.sort()
for item in items:
if item not in licenses.keys():
errors.append(" license for requirement %s is missing" %(item,))
return errors
base_dir = settings.BASE_DIR
api_licenses = index_licenses('%s/../docs/licenses' % base_dir)
ui_licenses = index_licenses('%s/../docs/licenses/ui' % base_dir)
api_requirements = read_api_requirements('%s/../requirements' % base_dir)
ui_requirements = read_ui_requirements('%s/ui' % base_dir)
errors = []
errors += remediate_licenses_and_requirements(ui_licenses, ui_requirements)
errors += remediate_licenses_and_requirements(api_licenses, api_requirements)
if errors:
raise Exception('Included licenses not consistent with requirements:\n%s' %
'\n'.join(errors))


@@ -3,6 +3,11 @@ import pytest
from awx.main.models.inventory import Inventory
from awx.main.models.credential import Credential
from awx.main.models.jobs import JobTemplate, Job
from awx.main.access import (
UnifiedJobAccess,
WorkflowJobAccess, WorkflowJobNodeAccess,
JobAccess
)
@pytest.mark.django_db
@@ -43,6 +48,31 @@ def test_inventory_use_access(inventory, user):
assert common_user.can_access(Inventory, 'use', inventory)
@pytest.mark.django_db
def test_slice_job(slice_job_factory, rando):
workflow_job = slice_job_factory(2, jt_kwargs={'created_by': rando}, spawn=True)
workflow_job.job_template.execute_role.members.add(rando)
# Abilities of user with execute_role for slice workflow job container
assert WorkflowJobAccess(rando).can_start(workflow_job) # relaunch allowed
for access_cls in (UnifiedJobAccess, WorkflowJobAccess):
access = access_cls(rando)
assert access.can_read(workflow_job)
assert workflow_job in access.get_queryset()
# Abilities of user with execute_role for all the slice of the job
for node in workflow_job.workflow_nodes.all():
access = WorkflowJobNodeAccess(rando)
assert access.can_read(node)
assert node in access.get_queryset()
job = node.job
assert JobAccess(rando).can_start(job) # relaunch allowed
for access_cls in (UnifiedJobAccess, JobAccess):
access = access_cls(rando)
assert access.can_read(job)
assert job in access.get_queryset()
@pytest.mark.django_db
class TestJobRelaunchAccess:
@pytest.fixture
