Compare commits


426 Commits
2.1.0 ... 2.1.1

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
28733800c4 Merge pull request #2842 from mabashian/2839-workflow-key
Changes workflow key icon from fa-key to fa-compass

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 13:21:45 +00:00
softwarefactory-project-zuul[bot]
cf2deefa41 Merge pull request #2843 from ansible/update-e2e-tests-2
Updating e2e tests to match change in order layout

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 02:04:21 +00:00
John Hill
e5645dd798 one more 2018-11-29 19:35:11 -05:00
John Hill
6205a5db83 Updating to fix linting error 2018-11-29 19:11:27 -05:00
John Hill
e50dd92425 Cannot depend on the id and order, reverting to workflow node names 2018-11-29 18:16:53 -05:00
softwarefactory-project-zuul[bot]
4955fc8bc4 Merge pull request #2840 from ryanpetrello/project-update-bug
resolve a nuanced traceback for JTs that run w/ a failed project

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 22:35:59 +00:00
mabashian
15adb1e828 Changes workflow key icon from fa-key to fa-compass 2018-11-29 17:16:25 -05:00
Ryan Petrello
c90d81b914 resolve a nuanced traceback for JTs that run w/ a failed project
related: https://github.com/ansible/awx/pull/2719
2018-11-29 17:10:23 -05:00
softwarefactory-project-zuul[bot]
8e9c28701e Merge pull request #2836 from ryanpetrello/better-dispatch-retries
add additional DB retry logic to the callback receiver

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 21:18:24 +00:00
softwarefactory-project-zuul[bot]
abc74fc9b8 Merge pull request #2824 from chrismeyersfsu/workflow-convergence_enforce2
enforce 1 edge between 2 nodes constraint

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 19:34:25 +00:00
chris meyers
21fce00102 python3 compliance
* This one's for you, rydog
2018-11-29 14:07:43 -05:00
chris meyers
d347a06e3d do not deny existing workflow node relationships 2018-11-29 13:29:16 -05:00
Ryan Petrello
0391dbc292 add additional DB retry logic to the callback receiver
initially, I implemented this for _only_ the task worker, but it's
probably needed for callback event workers, too
2018-11-29 11:57:46 -05:00
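
A minimal sketch of the retry wrapper this change describes, assuming a Django setup (the helper name, retry count, and backoff policy are inventions, not AWX's actual code):

```python
import time

from django.db import connection
from django.db.utils import OperationalError

MAX_ATTEMPTS = 5  # assumed; the real retry count may differ

def retry_db_write(func, *args, **kwargs):
    """Retry a DB write when the connection drops out from under us."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return func(*args, **kwargs)
        except OperationalError:
            if attempt == MAX_ATTEMPTS:
                raise
            # Close the broken connection so Django reconnects cleanly,
            # then back off before retrying the write.
            connection.close()
            time.sleep(attempt)
```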
softwarefactory-project-zuul[bot]
349c7efa69 Merge pull request #2792 from AlanCoding/how_many_slices
Prohibit relaunching sliced jobs with changed count

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 15:57:40 +00:00
softwarefactory-project-zuul[bot]
0f451595d7 Merge pull request #2826 from ryanpetrello/remove-deprovision-node
remove the deprecated `awx-manage deprovision_node` command

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 15:52:13 +00:00
Ryan Petrello
fcb6ce2907 remove a few deprecated awx-manage commands 2018-11-29 10:09:57 -05:00
softwarefactory-project-zuul[bot]
273d7a83f2 Merge pull request #2825 from ryanpetrello/dont-fear-the-reaper
don't reap jobs that aren't running

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 14:31:37 +00:00
chris meyers
916c92ffc7 save state 2018-11-29 08:53:46 -05:00
Ryan Petrello
38bf174bda don't reap jobs that aren't running
this is a simple sanity check, but it should help us avoid shooting
ourselves in the foot in complicated scenarios, such as:

1.  A dispatcher worker is running a job, and it's killed with `kill -9`
2.  The dispatcher attempts to reap jobs with a matching celery_task_id
3.  The associated sync project update has the *same* celery_task_id
    (an implementation detail of how we implemented that), and it ends
    up getting reaped _even though_ it's already finished and has
    status=successful
2018-11-28 18:11:12 -05:00
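
The sanity check amounts to refusing to reap anything that is not in an active state; a hedged sketch (the state names follow AWX conventions, but the helper itself is hypothetical):

```python
ACTIVE_STATES = ('pending', 'waiting', 'running')

def reap_job(job):
    # A job that already reached a terminal state (e.g. the finished project
    # sync sharing the dispatcher's celery_task_id) must be left alone.
    if job.status not in ACTIVE_STATES:
        return
    job.status = 'failed'
    job.job_explanation = 'Job reaped: dispatcher worker exited unexpectedly.'
    job.save(update_fields=['status', 'job_explanation'])
```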
chris meyers
09dff99340 enforce 1 edge between 2 nodes constraint 2018-11-28 16:57:50 -05:00
softwarefactory-project-zuul[bot]
7f178ef28b Merge pull request #2822 from ansible/workflow-e2e-update
Small update to the node order to reflect e2e test workflow changes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 21:08:36 +00:00
softwarefactory-project-zuul[bot]
68328109d7 Merge pull request #2807 from AlanCoding/yuck_artifacts
Do not pass artifacts to non-job nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 21:05:24 +00:00
John Hill
d573a9a346 Small update to the node order to reflect workflow changes 2018-11-28 15:37:30 -05:00
AlanCoding
d6e89689ae do not pass artifacts to non-job nodes 2018-11-28 15:19:47 -05:00
softwarefactory-project-zuul[bot]
d1d97598e2 Merge pull request #2821 from AlanCoding/clean_nodes
Clean up unwanted data in activity stream of nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 19:13:45 +00:00
softwarefactory-project-zuul[bot]
f57fa9d1fb Merge pull request #2810 from chrismeyersfsu/feature-replay_job_status
emit job status lifecycle in event replayer

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 18:57:56 +00:00
chris meyers
83760deb9d align tests with new replay get_job interface 2018-11-28 13:33:44 -05:00
softwarefactory-project-zuul[bot]
3893e29a33 Merge pull request #2815 from ryanpetrello/fix-iso-nodes-dev
fix isolated nodes in the dev environment

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 17:06:16 +00:00
softwarefactory-project-zuul[bot]
feeaa0bf5c Merge pull request #2747 from kialam/remove-md5
Remove MD5 usage and dependency from UI

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 16:15:59 +00:00
kialam
22802e7a64 Update README to include needed npm version. 2018-11-28 10:43:53 -05:00
Ryan Petrello
1ac5bc5e2b remove angular-md5 license 2018-11-28 10:43:53 -05:00
kialam
362a3753d0 Remove 'angular-md5' from our dependencies. 2018-11-28 10:43:53 -05:00
kialam
71ee9d28b9 Add link to original gist and rename file. 2018-11-28 10:43:53 -05:00
kialam
d8d89d253d Remove instances of "md5" from the UI. 2018-11-28 10:43:53 -05:00
Ryan Petrello
a72f3d2f2f generate host_config_key using random UUIDs, not a time-based md5 hash 2018-11-28 10:43:45 -05:00
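
The change boils down to drawing the key from the OS entropy pool instead of hashing the clock; a one-line sketch of the idea:

```python
import uuid

# uuid4 is backed by os.urandom, so the key is unpredictable --
# unlike an md5 digest of the current time.
host_config_key = str(uuid.uuid4())
```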
AlanCoding
1adeb833fb clean up unwanted data in activity stream of nodes 2018-11-28 10:41:32 -05:00
Ryan Petrello
a810aaf319 fix isolated nodes in the dev environment 2018-11-28 09:54:39 -05:00
softwarefactory-project-zuul[bot]
d9866c35b4 Merge pull request #2819 from ryanpetrello/fix-busted-tests
mock an HTTP call to fix busted unit tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 14:50:38 +00:00
Ryan Petrello
4e45c3a66c mock an HTTP call to fix busted unit tests 2018-11-28 09:17:50 -05:00
softwarefactory-project-zuul[bot]
87b55dc413 Merge pull request #2816 from ansible/jakemcdermott-smoke-break
update expected color vals of active tab in smoke test

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2018-11-28 04:02:31 +00:00
Jake McDermott
2e3949d612 update expected color vals of active tab in smoke test 2018-11-27 22:46:16 -05:00
softwarefactory-project-zuul[bot]
d928ccd922 Merge pull request #2799 from ansible/jakemcdermott-readonly-viz
don't conditionally hide workflow viz templates list button

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 03:39:51 +00:00
softwarefactory-project-zuul[bot]
a9c51b737c Merge pull request #2389 from ansible/workflow-convergence
Workflow convergence

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 22:04:50 +00:00
mabashian
51669c9765 Fixes hint/lint errors in workflow viz test 2018-11-27 16:12:42 -05:00
mabashian
17cc82d946 Ensure that selected row is cleared when adding new node after editing existing node 2018-11-27 16:12:42 -05:00
mabashian
10de5b6866 Fixes clicking on a wf in wf node. Also fixes editing wf in wf node with inv prompt 2018-11-27 16:12:42 -05:00
mabashian
55dc27f243 Set active tab to jobs when initially clicking a workflow_job_template type node 2018-11-27 16:12:42 -05:00
mabashian
6fc2ba3495 Fixes delete node shifting e2e test 2018-11-27 16:12:42 -05:00
mabashian
7bad01e193 Fixes e2e workflow visualizer tests 2018-11-27 16:12:42 -05:00
mabashian
62a1f10c42 Fix node pagination for project/inv 2018-11-27 16:12:42 -05:00
mabashian
3975a2ecdb fix linkpath class 2018-11-27 16:12:42 -05:00
Jake McDermott
bfa361c87f hide prompt button when not on jobs tab 2018-11-27 16:12:42 -05:00
Jake McDermott
d5f07a9652 hide inventory help message when not on jobs tab 2018-11-27 16:12:42 -05:00
Jake McDermott
65ec1d18ad skip missing inventory prompt value check when selecting workflow node 2018-11-27 16:12:42 -05:00
Jake McDermott
7b4521f980 workflow node prompt fixup
* use workflow model and endpoint when node is workflow
* always include template type in prompt data
* skip missing inventory checks when node is workflow
* skip checks for required credential fields when node is workflow
2018-11-27 16:12:42 -05:00
John Mitchell
3762ba7b24 add back in workflow_nodes in order to be able to use it for count of nodes 2018-11-27 16:12:42 -05:00
John Mitchell
762c882cd7 consume workflow maker total nodes label change 2018-11-27 16:12:42 -05:00
John Mitchell
343639d4b7 fix workflow maker total templates header to total nodes 2018-11-27 16:12:42 -05:00
John Mitchell
38dc0b8e90 fix workflow total jobs header to total nodes 2018-11-27 16:12:42 -05:00
mabashian
ed40ba6267 Fix searching on related fields 2018-11-27 16:12:42 -05:00
mabashian
54d56f2284 Fix node jobs column sorting. Adds arrows to potential workflow node links 2018-11-27 16:12:42 -05:00
mabashian
1477bbae30 Fixed error Cannot read property 'type' of undefined in console when selecting a project or inventory node 2018-11-27 16:12:42 -05:00
mabashian
625c6c30fc Fixed edge dropdown id 2018-11-27 16:12:42 -05:00
chris meyers
228e412478 simplify workflow job failure reason
* Log the more detailed reason for a workflow job failing but expose a
simplified reason to users via job_explanation
2018-11-27 16:12:42 -05:00
chris meyers
f8f2e005ba better comment for deciding parent's status 2018-11-27 16:12:42 -05:00
chris meyers
d8bf82a8cb add help_text to do_not_run workflow field 2018-11-27 16:12:41 -05:00
chris meyers
2eeca3cfd7 add example workflow run to docs 2018-11-27 16:12:41 -05:00
mabashian
28a4bbbe8a Fixed jshint errors that fell out of merge conflict 2018-11-27 16:12:41 -05:00
mabashian
1cfcaa72ad Fixed editNodeHelpMessage logic that was broken during merge conflict 2018-11-27 16:12:41 -05:00
AlanCoding
4c14727762 bump migration number 2018-11-27 16:12:41 -05:00
chris meyers
0c8dde9718 fix dfs_run_nodes()
* Tried to re-use the topological sort order to crawl the graph to find
the next node(s) to run. This is incorrect; we need to take into account
the fail/success of jobs and directionally crawl the graph.
2018-11-27 16:12:41 -05:00
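
The corrected crawl walks outward from the roots and follows only the edges each parent's outcome selects; an illustrative sketch (the dict layouts below are assumptions made for the example, not AWX's data structures):

```python
from collections import deque

def nodes_to_run_next(roots, edges, job_status):
    # edges: node -> {'success': [...], 'failure': [...], 'always': [...]}
    # job_status: node -> 'successful' | 'failed' | None (not run yet)
    to_run, queue, seen = [], deque(roots), set()
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        status = job_status.get(node)
        if status is None:
            to_run.append(node)  # frontier: no job yet, so run it
            continue
        out = edges[node]
        taken = out['always'] + (out['success'] if status == 'successful'
                                 else out['failure'])
        queue.extend(taken)      # only crawl the paths the outcome selects
    return to_run
```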
chris meyers
febf051748 do not mark ujt None nodes dnr
* Leave workflow nodes with no related unified job template at
do_not_run = False. If we mark it True, we can't differentiate between
an actual decision not to take that path and a node that can't run
because it has no valid related unified job template.
2018-11-27 16:12:41 -05:00
mabashian
56885a5da1 Remove reference to isStartNode and just check the id of the node to determine if it's our start node or not 2018-11-27 16:12:41 -05:00
mabashian
623cf54766 Added dagre and graphlib licenses 2018-11-27 16:12:41 -05:00
mabashian
a804c854bf Fix test failures and jshint errors 2018-11-27 16:12:41 -05:00
chris meyers
7b087d4a6c loop over dnr nodes by topological sort
* Perform a topological sort on graph nodes before looping over them to
mark do-not-run. This guarantees that parent nodes are processed
before their dependent child nodes. The complexity of the sort is
O(N); the complexity of marking the nodes is O(N*V).
2018-11-27 16:12:41 -05:00
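
The parent-before-child guarantee described here is exactly what a topological sort provides; a sketch using Kahn's algorithm (`children` maps a node to its child list; this is illustrative, not AWX's implementation):

```python
from collections import deque

def topological_order(nodes, children):
    indegree = {n: 0 for n in nodes}
    for n in nodes:
        for c in children.get(n, ()):
            indegree[c] += 1
    order, queue = [], deque(n for n in nodes if indegree[n] == 0)
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children.get(n, ()):
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    return order
```

Marking do-not-run then becomes a single pass over `order`, where every node's parents have already been decided by the time the node is visited.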
chris meyers
cfa098479e Revert "optimize mark dnr nodes algorithm"
This reverts commit 6372c52772.
2018-11-27 16:12:41 -05:00
mabashian
3c510e6344 Fixed bug where root link became clickable. Fix workflow key on results page. 2018-11-27 16:12:41 -05:00
chris meyers
4c9a1d6b90 optimize mark dnr nodes algorithm
* Compute largest depth of each node and traverse graph by depth. This
allows us to check a node once, and only once, to determine if it needs
to be marked for do not run.
2018-11-27 16:12:41 -05:00
chris meyers
d1aa52a2a6 fix up mark dnr logic 2018-11-27 16:12:41 -05:00
chris meyers
f30f52a0a8 handle missing unified job template in workflow
* A workflow node without a unified_job_template is treated as a failed
job when deciding which path to execute.
* Remove the optimization for marking dnr nodes because it made the
algorithm incorrect.
2018-11-27 16:12:41 -05:00
mabashian
5b459e3c5d Code cleanup. Fixed bugs with workflow results page including details links 2018-11-27 16:12:41 -05:00
chris meyers
676c068b71 add job_description to failed workflow node
* When a workflow job fails because a workflow job node doesn't have a
related unified_job_template, note that with an error in the workflow
job's job_description.
* When a workflow job fails because a failure path isn't defined, note
that in the workflow job's job_description.
2018-11-27 16:12:41 -05:00
chris meyers
00d71cea50 detect workflow nodes without job templates
* Fail the workflow job run when encountering a Workflow Job Node with
no related job template.
2018-11-27 16:12:41 -05:00
mabashian
72263c5c7b Addresses a number of workflow related bugs 2018-11-27 16:12:41 -05:00
chris meyers
281345dd67 flake8 fix 2018-11-27 16:12:41 -05:00
chris meyers
1a85fcd2d5 update docs to include workflow failure semantic 2018-11-27 16:12:41 -05:00
chris meyers
c1171fe4ff treat canceled nodes as failed when processing wf
* When deciding what jobs to run next, treat canceled as failed.
* Also add tests.
2018-11-27 16:12:41 -05:00
chris meyers
d6a8ad0b33 treat canceled jobs in wf the same as failed jobs
* Also fix spelling mistake that caused workflows to be falsely marked
successful in the case of a canceled job.
2018-11-27 16:12:41 -05:00
mabashian
4a6a3b27fa Fixed a number of workflow visualizer bugs. Added loading spinners while data is being loaded/processed. 2018-11-27 16:12:41 -05:00
chris meyers
266831e26d add cycle unit test 2018-11-27 16:12:41 -05:00
chris meyers
a6e20eeaaa update wf done and failed tests 2018-11-27 16:12:41 -05:00
chris meyers
6529c1bb46 update done and fail detection for workflow
* Instead of traversing the workflow graph to determine if a workflow is
done or has failed, loop through all the nodes in the graph and
grab only the relevant ones.
2018-11-27 16:12:41 -05:00
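
Scanning the node list instead of walking edges could look like the following sketch (the finished-state set and node attributes are assumptions):

```python
FINISHED_STATES = {'successful', 'failed', 'error', 'canceled'}

def workflow_is_done(nodes):
    for node in nodes:
        if node.do_not_run:
            continue  # do-not-run nodes count as finished
        if node.job is None or node.job.status not in FINISHED_STATES:
            return False  # a relevant node is still pending or running
    return True
```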
mabashian
ae0d0db62c Added dagre to handle our workflow graph layout. Fixed various workflow related bugs. 2018-11-27 16:12:41 -05:00
chris meyers
b81d795c00 fix up dot graph generator
* Update graph dot generator to use the new efficient graph
2018-11-27 16:12:41 -05:00
chris meyers
1b87e11d8f flake8 2018-11-27 16:12:41 -05:00
chris meyers
8bb9cfd62a add dag tests 2018-11-27 16:12:41 -05:00
chris meyers
a176a4b8cf remove unused code 2018-11-27 16:12:41 -05:00
chris meyers
3f4d14e48d crawl entire graph when marking DNR
* From the root, the code was only going down the did run path to find
nodes to mark DNR. This is incorrect; now, we traverse the entire graph
each time to find nodes to mark DNR.
2018-11-27 16:12:41 -05:00
chris meyers
0499d419c3 more efficient graph processing
* Getting parent nodes from a child was inefficient. Optimize it with a
hash table, as we did for getting children.
* Getting leaf nodes was inefficient. Optimize it as we did for getting
root nodes: a node is assumed to be a leaf until it gains a child.
2018-11-27 16:12:41 -05:00
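
The hash-table layout being described can be sketched as follows (`edges` as an iterable of (parent_id, child_id) pairs is an assumed input shape):

```python
from collections import defaultdict

def build_adjacency(all_node_ids, edges):
    """edges: iterable of (parent_id, child_id) pairs."""
    parents = defaultdict(list)   # child id  -> [parent ids], O(1) lookup
    children = defaultdict(list)  # parent id -> [child ids],  O(1) lookup
    leaves = set(all_node_ids)    # a node is a leaf until it gains a child
    for parent_id, child_id in edges:
        children[parent_id].append(child_id)
        parents[child_id].append(parent_id)
        leaves.discard(parent_id)
    return parents, children, leaves
```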
mabashian
700860e040 Fix long name tooltip. Fixed bug adding new node before finishing adding new link.
Fixed template list column layout. Ensure that we're getting 200 workflow nodes per GET request
2018-11-27 16:12:41 -05:00
chris meyers
3dadeb3037 remove print statements 2018-11-27 16:12:41 -05:00
chris meyers
16a60412cf optimization fix
* WorkflowDAG accepts either a workflow job template or a workflow job
from which to build a graph of nodes. The optimized query for each is
different. This changeset adds the differing queries for a workflow job.
2018-11-27 16:12:41 -05:00
chris meyers
9f3e272665 optimize cycle detection 2018-11-27 16:12:41 -05:00
mabashian
b84fc3b111 Fixes for post-rebase bugs 2018-11-27 16:12:41 -05:00
chris meyers
e1e8d3b372 bump migration 2018-11-27 16:12:40 -05:00
mabashian
05f4d94db2 Fixed several bugs, including credential prompting. Added logic to bring links/nodes to the forefront when you hover over them in case there's some overlap 2018-11-27 16:12:40 -05:00
mabashian
61fb3eb390 First pass at implementing better node placement in the workflow graph 2018-11-27 16:12:40 -05:00
mabashian
7b95d2114d Implements workflow convergence without proper layout 2018-11-27 16:12:40 -05:00
chris meyers
07db7a41b3 more flake8 2018-11-27 16:12:40 -05:00
chris meyers
1120f8b1e1 try2 at the devil flake8 2018-11-27 16:12:40 -05:00
chris meyers
17b3996568 fix flake8 anyway I can 2018-11-27 16:12:40 -05:00
chris meyers
584b3f4e3d remove workflow test
* We now handle workflows with jobs that have errored. We treat them the
same as a failure result. Before, we would abort the workflow when we
encountered an error.
2018-11-27 16:12:40 -05:00
chris meyers
f8c53f4933 handle job error state in convergence 2018-11-27 16:12:40 -05:00
chris meyers
6e40e9c856 handle edge case ring cycle 2018-11-27 16:12:40 -05:00
chris meyers
2f9dc4d075 remove relationship in view if cycle detected 2018-11-27 16:12:40 -05:00
chris meyers
9afc38b714 fixup migrations 2018-11-27 16:12:40 -05:00
chris meyers
dfccc9e07d rework wf cycle detection for convergence 2018-11-27 16:12:40 -05:00
chris meyers
7b22d1b874 cycle detection when multiple parents 2018-11-27 16:12:40 -05:00
mabashian
29b4979736 Completed work necessary to support editing workflow links and nodes separately. Added hover and tooltip to links 2018-11-27 16:12:40 -05:00
mabashian
87d6253176 Decouple editing a wf node with editing a node link 2018-11-27 16:12:40 -05:00
chris meyers
1e10d4323f update docs 2018-11-27 16:12:40 -05:00
chris meyers
4111e53113 correctly name migration to align with 3.4.0 2018-11-27 16:12:40 -05:00
chris meyers
02df0c29e9 merge artifacts deterministically 2018-11-27 16:12:40 -05:00
chris meyers
475c90fd00 prevent job launching twice 2018-11-27 16:12:40 -05:00
chris meyers
2742b00a65 flake8 2018-11-27 16:12:40 -05:00
chris meyers
ea29e66a41 fix workflow finish state detector
* Take into account the new do_not_run field when determining whether a
workflow is finished. If do_not_run is True, the node is considered finished.
2018-11-27 16:12:40 -05:00
chris meyers
6ef6b649e8 cleaner code 2018-11-27 16:12:40 -05:00
chris meyers
9bf2a49e0f save state 2018-11-27 16:12:40 -05:00
chris meyers
914892c3ac all parents should finish before start child 2018-11-27 16:12:40 -05:00
chris meyers
77661c6032 short circuit performance optimization 2018-11-27 16:12:40 -05:00
chris meyers
b4fc585495 stop DNR propagation on always path
* This makes sure DNR propagation stops when a job is successful, down
an always path
2018-11-27 16:12:40 -05:00
chris meyers
ff6db37a95 correct stop DNR propagation
* If a child has a parent that is not in the finished state, do not
propagate the DNR to the child in question.
* If a parent is in a finished state, do not propagate the DNR to the
child if the path to the child is traversed (based on the parent job
status).
2018-11-27 16:12:40 -05:00
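
Both rules fold into one predicate; a hypothetical sketch in which `path_taken(parent, child)` stands in for the status-based check of whether the parent-to-child edge was traversed:

```python
def should_propagate_dnr(child, parents, finished, path_taken):
    """Return True only if DNR may propagate to `child`."""
    for parent in parents[child]:
        if parent not in finished:
            return False  # rule 1: an unfinished parent blocks propagation
        if path_taken(parent, child):
            return False  # rule 2: a traversed path blocks propagation
    return True
```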
chris meyers
1a064bdc59 satisfy flake8 2018-11-27 16:12:40 -05:00
chris meyers
ebabec0dad always find and mark dnr nodes 2018-11-27 16:12:40 -05:00
chris meyers
3506b9a7d8 Revert "mark dnr field read only"
This reverts commit 3dbc52d91223167683fd01174222bd6c22813dbd.

Workflow Job Nodes are read only already
2018-11-27 16:12:40 -05:00
chris meyers
cc374ca705 update debug dot graph to output dnr data 2018-11-27 16:12:40 -05:00
chris meyers
ad56a27cc0 mark dnr field read only 2018-11-27 16:12:40 -05:00
chris meyers
779e1a34db remove dnr field from jt wf node 2018-11-27 16:12:40 -05:00
chris meyers
447dfbb64d only visit nodes once for dnr 2018-11-27 16:12:40 -05:00
chris meyers
a9365a3967 code cleanup 2018-11-27 16:12:40 -05:00
chris meyers
f5c10f99b0 support workflow convergence nodes
* remove convergence restriction in API
* change task manager logic to be aware of and support convergence nodes
2018-11-27 16:12:40 -05:00
softwarefactory-project-zuul[bot]
c53ccc8d4a Merge pull request #2813 from shanemcd/update-translation-templates
Extract latest strings from source code for translations

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 17:53:49 +00:00
chris meyers
e214dcac85 add slowdown, final status delay, and debug
* slow down by using --speed 0.1 (decimal values are accepted)
* optionally specify a delay between the event and the final status
* debug mode where you can step through emitting the job events
2018-11-27 12:45:49 -05:00
softwarefactory-project-zuul[bot]
de77f6bd1f Merge pull request #2809 from jlmitch5/fixManagmentJobScheduleEditLink
fix management job schedule edit link button

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 17:33:07 +00:00
Shane McDonald
18c4771a38 Extract latest strings from source code for translations 2018-11-27 12:31:52 -05:00
chris meyers
042c7ffe5b emit job status lifecycle in event replayer 2018-11-27 11:54:39 -05:00
John Mitchell
f25c6effa3 fix management job schedule edit link button 2018-11-27 11:35:13 -05:00
softwarefactory-project-zuul[bot]
46d303ceee Merge pull request #2805 from ryanpetrello/wcag
raise contrast on a few key page elements to pass WCAG contrast checks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 15:45:15 +00:00
Ryan Petrello
b1bd87bcd2 raise contrast on a few key page elements to pass WCAG contrast checks 2018-11-27 10:20:24 -05:00
softwarefactory-project-zuul[bot]
50b0a5a54d Merge pull request #2756 from wenottingham/logged-out-damn-spot
Add a message to the resulting login dialog when a user explicitly logs out.

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2018-11-27 15:13:45 +00:00
Jake McDermott
4f731017ea don't conditionally hide workflow viz templates list button 2018-11-26 16:21:44 -05:00
softwarefactory-project-zuul[bot]
9eb2c02e92 Merge pull request #2788 from AlanCoding/container_names
Set fixed container names

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-26 18:53:41 +00:00
softwarefactory-project-zuul[bot]
55e5432027 Merge pull request #2794 from shanemcd/devel
Add a note about `--force-with-lease` to contributing documentation.

Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
             https://github.com/marshmalien
2018-11-26 16:55:31 +00:00
Shane McDonald
a9ae4dc5a8 Clean up after yourselves, people! 2018-11-26 11:31:58 -05:00
Shane McDonald
0398b744a1 Add a note about --force-with-lease to contributing documentation. 2018-11-26 11:31:36 -05:00
AlanCoding
012511e4f0 prohibit relaunching sliced jobs with changed count 2018-11-26 10:54:19 -05:00
softwarefactory-project-zuul[bot]
4483d0320f Merge pull request #2791 from ryanpetrello/fix-iso-installs
only override django for FIPS in environments where Django is installed

Reviewed-by: Yanis Guenane
             https://github.com/Spredzy
2018-11-26 15:36:23 +00:00
Ryan Petrello
32e7ddd43a only override django for FIPS in environments where Django is installed
isolated awx installs don't have this tooling, and so they don't need
this specific monkey-patch
2018-11-26 09:17:48 -05:00
AlanCoding
0b32733dc8 set fixed container names 2018-11-26 08:26:57 -05:00
Christian Adams
d310c48988 Merge pull request #2758 from rooftopcellist/secure_current_user
make current_user cookie secure and httponly
2018-11-21 15:26:35 -05:00
softwarefactory-project-zuul[bot]
5a6eefaf2c Merge pull request #2780 from kialam/update-readme-install-exact
Update README to include saving exact dependencies using npm.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 19:12:50 +00:00
kialam
8c5a94fa64 Update readme to include saving exact dependencies using npm. 2018-11-21 13:35:56 -05:00
adamscmRH
05d988349c make current_user cookie secure and httponly 2018-11-21 10:36:35 -05:00
softwarefactory-project-zuul[bot]
bb1473f67f Merge pull request #2762 from kialam/fix-2554-converted-jts-from-sjt
UI WF results relaunch: handle any relaunch errors.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 15:01:17 +00:00
softwarefactory-project-zuul[bot]
79f483a66d Merge pull request #2772 from ansible/jakemcdermott-update
add package uninstall example to readme

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 03:11:45 +00:00
Jake McDermott
1b09a0230d Update README.md 2018-11-20 21:26:11 -05:00
kialam
f3344e9816 Fix failing tests. 2018-11-20 18:35:29 -05:00
softwarefactory-project-zuul[bot]
554e4d45aa Merge pull request #2763 from kialam/add-npm-precheck-script
Add precheck script.

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-11-20 21:12:58 +00:00
softwarefactory-project-zuul[bot]
bfe86cbc95 Merge pull request #2753 from jlmitch5/fixInstanceGroupLookup
fix instance group lookup in orgs scrolling behavior

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 20:05:14 +00:00
softwarefactory-project-zuul[bot]
29e4160d3e Merge pull request #2764 from ryanpetrello/dispatcher-sos
add dispatcher status to the sosreport

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-11-20 18:06:05 +00:00
Ryan Petrello
b4f906ceb1 add dispatcher status to the sosreport 2018-11-20 12:12:02 -05:00
kialam
e099fc58c7 Add precheck script. 2018-11-20 11:48:25 -05:00
softwarefactory-project-zuul[bot]
e342ef5cfa Merge pull request #2719 from AlanCoding/project_is_failed
Fail job run if project is failed

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 16:45:56 +00:00
John Mitchell
8997fca457 add sanitize 2018-11-20 11:41:58 -05:00
kialam
435ab4ad67 Handle any relaunch errors. 2018-11-20 11:25:36 -05:00
softwarefactory-project-zuul[bot]
d7a28dcea4 Merge pull request #2755 from kialam/fix-invalid-start-date
Scheduler: Start Date begins 1 day ahead.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 15:36:47 +00:00
softwarefactory-project-zuul[bot]
5f3024d395 Merge pull request #2703 from kialam/fix-2552-org-jt-list-websocket
Fix Organizations TemplateList websockets

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 15:01:05 +00:00
softwarefactory-project-zuul[bot]
3126480d1e Merge pull request #2754 from kdelee/rename-schema-change-detector
Rename schema job to be more clear about its purpose

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 14:22:08 +00:00
Elijah DeLee
ca84d312ce Rename schema job to be more clear about its purpose
The make target fails when it detects schema changes, not when the schema is invalid.

Also update CONTRIBUTING.md to include information about zuul jobs.
2018-11-20 07:42:10 -06:00
AlanCoding
b790b50a1a Fail job run if project is failed
provide message in traceback and explanation fields

add log messages for dependency spawns
2018-11-20 07:55:37 -05:00
softwarefactory-project-zuul[bot]
6df26eb7a3 Merge pull request #2750 from ryanpetrello/timeout-modal-remove-close-button
remove the x icon on the session timeout modal (it doesn't work)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 08:50:49 +00:00
softwarefactory-project-zuul[bot]
fccaebdc8e Merge pull request #2342 from ansible/workflow_inventory
Workflow level inventory

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 03:19:40 +00:00
Jake McDermott
45728dc1bb update workflow docs 2018-11-19 17:35:52 -05:00
AlanCoding
9cd8aa1667 further update of workflow docs for inventory feature 2018-11-19 17:35:39 -05:00
Jake McDermott
b74597f4dd fix bug when reverting non-default inventory prompts 2018-11-19 17:35:26 -05:00
Bill Nottingham
ce3d3c3490 Add a message to the resulting login dialog when a user explicitly logs out. 2018-11-19 17:16:55 -05:00
kialam
a9fe1ad9c1 Start Date begins 1 day ahead. 2018-11-19 16:20:46 -05:00
John Mitchell
22e7083d71 fix instance group lookup 2018-11-19 15:38:11 -05:00
Jake McDermott
951515da2f disable next and show warning when default workflow inventory is removed 2018-11-19 15:16:46 -05:00
Ryan Petrello
9e2f4cff08 remove the x icon on the session timeout modal (it doesn't work) 2018-11-19 14:37:49 -05:00
softwarefactory-project-zuul[bot]
0c1a4439ba Merge pull request #2745 from mabashian/2428-workflow-edit-patch
Use PATCH instead of PUT when updating a wfjt

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 19:23:26 +00:00
John Mitchell
c2a1603a56 Merge pull request #2746 from jlmitch5/navColorContrast
fix color contrast of nav
2018-11-19 13:50:04 -05:00
Jake McDermott
13e715aeb9 handle null inventory value on workflow launch 2018-11-19 12:53:21 -05:00
Jake McDermott
2bc75270e7 dry up org permissions test 2018-11-19 12:53:18 -05:00
Jake McDermott
fabe56088d fix workflow e2e tests again 2018-11-19 12:53:14 -05:00
Jake McDermott
0e3bf6db09 open workflow visualizer from form 2018-11-19 12:53:10 -05:00
Jake McDermott
c6a7d0859d add workflow jobs to inventory list status popup 2018-11-19 12:53:05 -05:00
Jake McDermott
fed00a18ad show workflow jobs on inventory completed jobs view 2018-11-19 12:53:02 -05:00
Jake McDermott
ecbdc55955 show related workflow counts on inventory deletion warning prompt 2018-11-19 12:52:57 -05:00
AlanCoding
bca9bcf6dd fix prompts contradiction: should be non-functional change 2018-11-19 12:52:54 -05:00
Jake McDermott
018a8e12de fix lookup message 2018-11-19 12:52:50 -05:00
AlanCoding
e0a28e32eb Tweak of error message wording for model-specific name 2018-11-19 12:52:47 -05:00
AlanCoding
c105885c7b Do not count template variables as prompted 2018-11-19 12:52:43 -05:00
Jake McDermott
89a0be64af fix bug with opening visualizer from list page 2018-11-19 12:52:38 -05:00
AlanCoding
c1d85f568c fix survey vars bug and inventory defaults display 2018-11-19 12:52:35 -05:00
Jake McDermott
75566bad39 fix workflow e2e tests 2018-11-19 12:52:32 -05:00
Jake McDermott
75c2d1eda1 add inventory help messages for workflow node edit 2018-11-19 12:52:29 -05:00
Jake McDermott
9a4667c6c7 add static messages to workflow inventory lookups 2018-11-19 12:52:26 -05:00
Jake McDermott
9917841585 open and close workflow visualizer from list 2018-11-19 12:52:23 -05:00
Jake McDermott
fbc3cd3758 redirect to workflow visualizer on workflow creation 2018-11-19 12:52:19 -05:00
Jake McDermott
d65687f14a add workflow inventory prompt to scheduler 2018-11-19 12:52:16 -05:00
Jake McDermott
4ea7511ae8 make workflow prompt inventory step optional 2018-11-19 12:52:12 -05:00
Jake McDermott
a8d22b9459 show correct ask_inventory state 2018-11-19 12:52:08 -05:00
Jake McDermott
f8453ffe68 accept inventory_id in workflow launch requests 2018-11-19 12:52:05 -05:00
Jake McDermott
38f43c147a fix exploding unit test 2018-11-19 12:52:01 -05:00
Jake McDermott
38fbcf8ee6 add missing api fields 2018-11-19 12:51:58 -05:00
Jake McDermott
2bd25b1fba add inventory prompt to wf editor 2018-11-19 12:51:54 -05:00
AlanCoding
7178fb83b0 migration number bumped again 2018-11-19 12:51:51 -05:00
Jake McDermott
2376013d49 add prompt on launch for workflow inventory 2018-11-19 12:51:47 -05:00
Jake McDermott
a94042def5 display inventory on workflow job details 2018-11-19 12:51:44 -05:00
Jake McDermott
2d2164a4ba add inventory lookup to workflow detail view 2018-11-19 12:51:41 -05:00
AlanCoding
3c980d373c bump migration number 2018-11-19 12:51:37 -05:00
AlanCoding
5b3ce1e999 add test for WFJT schedule inventory prompting 2018-11-19 12:51:33 -05:00
AlanCoding
6d4469ebbd handle inventory for WFJT editing RBAC 2018-11-19 12:51:29 -05:00
AlanCoding
eb58a6cc0e add test for launching with deleted inventory 2018-11-19 12:51:26 -05:00
AlanCoding
a60401abb9 fix bug with WFJT launch validation 2018-11-19 12:51:22 -05:00
AlanCoding
1203c8c0ee feature docs for workflow-level inventory 2018-11-19 12:51:18 -05:00
AlanCoding
0c52d17951 fix bug, handle RBAC, add test 2018-11-19 12:51:13 -05:00
AlanCoding
44fa3b18a9 Adjust prompt logic and views to accept workflow inventory 2018-11-19 12:50:57 -05:00
AlanCoding
33328c4ad7 initial model changes for workflow inventory 2018-11-19 12:50:29 -05:00
John Mitchell
11adcb9800 fix color contrast of nav 2018-11-19 12:35:55 -05:00
softwarefactory-project-zuul[bot]
932a1c6386 Merge pull request #2743 from ryanpetrello/survey_spec_stream
include survey_spec in activity stream

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 17:33:33 +00:00
softwarefactory-project-zuul[bot]
d1791fc48c Merge pull request #2585 from wenottingham/licensed-to-illegibility
Add a test that checks the included license files against included dependencies

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 17:22:49 +00:00
AlanCoding
5b274cfc2a include survey_spec in activity stream 2018-11-19 12:07:48 -05:00
mabashian
1d7d2820fd Use PATCH instead of PUT when updating a wfjt 2018-11-19 12:04:35 -05:00
Bill Nottingham
605c1355a8 Add updates to UI license grabber from jlmitch5. 2018-11-19 12:00:00 -05:00
Bill Nottingham
a6e00df041 Clean up included licenses such that tests pass.
Rename ui licenses to '.txt' for consistency.
Update bundled code as appropriate.
Remove dead licenses and dev-only UI licenses.
Add additional python licenses from Azure & related updates.
2018-11-19 12:00:00 -05:00
Bill Nottingham
67219e743f Add a test that we are including proper license files for all requirements. 2018-11-19 11:59:59 -05:00
softwarefactory-project-zuul[bot]
67273ff8c3 Merge pull request #2742 from jlmitch5/fixScrollTop
fix scroll top

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 16:56:27 +00:00
softwarefactory-project-zuul[bot]
a3bbe308a8 Merge pull request #2741 from matburt/fix_project_admin_add
Fix a bug that did not allow project_admins to create a project.

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-19 16:53:16 +00:00
kialam
f1b5bbb1f6 Add websocket listener to Org > JT list view. 2018-11-19 11:46:07 -05:00
Matthew Jones
7330102961 Remove a warning message for dispatcher pool for tests 2018-11-19 11:19:57 -05:00
Matthew Jones
61916b86b5 Fix a bug that did not allow project_admins to create a project.
This was a regression from previous functionality.
2018-11-19 11:05:48 -05:00
John Mitchell
35d5bde690 fix scroll top 2018-11-19 11:03:03 -05:00
softwarefactory-project-zuul[bot]
b17c477af7 Merge pull request #2737 from AlanCoding/death_to_groups
Implement deprecation of groups_with_active_failures

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 15:31:26 +00:00
softwarefactory-project-zuul[bot]
01d891cd6e Merge pull request #2738 from ryanpetrello/inv-update-as
don't send activity stream create for unregistered models

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 15:02:33 +00:00
Ryan Petrello
e36335f68c only send activity stream create for registered unified jobs
see https://github.com/ansible/awx/issues/2733
2018-11-19 09:44:12 -05:00
softwarefactory-project-zuul[bot]
39369c7721 Merge pull request #2702 from kdelee/schema_validation
Pre-Merge Schema validation

Reviewed-by: Matthew Jones <mat@matburt.net>
             https://github.com/matburt
2018-11-19 14:26:42 +00:00
softwarefactory-project-zuul[bot]
4fbc39991d Merge pull request #2730 from AlanCoding/slice_license
License check for slicing >1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 14:18:10 +00:00
softwarefactory-project-zuul[bot]
91075e8332 Merge pull request #2732 from AlanCoding/middle
Include migration middleware in timings and profiling

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 13:59:34 +00:00
AlanCoding
0ed50b380a Implement deprecation of groups_with_active_failures 2018-11-19 08:29:46 -05:00
AlanCoding
53716a4c5a include migration middleware in timings and profiling 2018-11-18 10:55:37 -05:00
AlanCoding
f30bbad07d License check for slicing >1 2018-11-17 22:48:46 -05:00
softwarefactory-project-zuul[bot]
b923efad37 Merge pull request #2717 from wenottingham/make-that-network-work
Add ncclient for use by networking modules.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 22:02:29 +00:00
softwarefactory-project-zuul[bot]
3b36372880 Merge pull request #2716 from ryanpetrello/insights-user-agent
add a user agent for requests to Insights

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 21:39:15 +00:00
Ryan Petrello
661cc896a9 add a user agent for requests to Insights 2018-11-16 16:25:08 -05:00
Bill Nottingham
e9c3623dfd specify a version 2018-11-16 15:34:42 -05:00
Bill Nottingham
c65b362841 Add ncclient for use by networking modules. 2018-11-16 15:21:04 -05:00
softwarefactory-project-zuul[bot]
6a0e11a233 Merge pull request #2713 from AlanCoding/system_env_vars
Minor cleanup of task environment vars

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 19:35:05 +00:00
AlanCoding
7417f9925f Minor cleanup of task environment vars 2018-11-16 13:28:42 -05:00
softwarefactory-project-zuul[bot]
2f669685d8 Merge pull request #2707 from ryanpetrello/min-max-workers
prevent the dispatcher from using a nonsensical max_workers value

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 15:41:42 +00:00
softwarefactory-project-zuul[bot]
86510029e1 Merge pull request #2692 from kialam/fix-2586-ldap-dropdown-fix
Fix LDAP and TACACS+ dropdowns

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 15:28:09 +00:00
Ryan Petrello
37234ca66e prevent the dispatcher from using a nonsensical max_workers value 2018-11-16 10:16:39 -05:00
Elijah DeLee
4ae1fdef05 Ignore differences in whitespace for schema validation 2018-11-16 09:47:33 -05:00
kialam
95e94a8ab5 Some styling changes; fix Server dropdown.
- Left-align first dropdown on GitHub and LDAP tabs.
- Add border to give some whitespace
2018-11-15 20:46:34 -05:00
kialam
ea35d9713a Fix empty dropdowns for both LDAP and TACACS tabs. 2018-11-15 20:46:34 -05:00
Elijah DeLee
949cf53b89 Use r in front of regex string to make flake8 happy
This means we should not escape the \ character in the same way
2018-11-15 17:29:27 -05:00
Elijah DeLee
a68e22b114 Add tox target to detect schema changes
Fetches the reference schema from a public bucket.
Still need to define a method for updating the reference schema on merge.
2018-11-15 16:25:13 -05:00
Elijah DeLee
d70cd113e1 Reduce duplicated logic for genschema target 2018-11-15 15:29:35 -05:00
Matthew Jones
f737fc066f Generate schema suitable for comparing for schema changes 2018-11-15 15:29:35 -05:00
softwarefactory-project-zuul[bot]
9b992c971e Merge pull request #2672 from AlanCoding/sliceanator
Fix bug with non-sliced JT job spawn

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 20:16:19 +00:00
softwarefactory-project-zuul[bot]
e0d59766e0 Merge pull request #2696 from AlanCoding/bulk_del_inv
Pre-delete bulk delete related, fix parallel request conflicts

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-15 19:58:49 +00:00
AlanCoding
a9d88f728d Pre-delete bulk delete related, fix parallel request conflicts 2018-11-15 11:39:48 -05:00
softwarefactory-project-zuul[bot]
e24d63ea9a Merge pull request #2665 from ryanpetrello/fips
add support for running awx w/ FIPS enabled

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 15:18:07 +00:00
softwarefactory-project-zuul[bot]
1833a5b78b Merge pull request #2667 from saito-hideki/issue/admin_with_docker-compose
Fixed issue where admin_user and password change are not reflected

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 14:24:43 +00:00
softwarefactory-project-zuul[bot]
4213a00548 Merge pull request #2686 from AlanCoding/fast_workflows
Add task manager rescheduling hooks, de-duplication, lifecycle tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 14:19:09 +00:00
softwarefactory-project-zuul[bot]
7345512785 Merge pull request #2690 from ryanpetrello/ldap-long-name
truncate user first/last name if it exceeds 30 chars on LDAP auth

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-14 22:29:43 +00:00
Ryan Petrello
d3dc126d45 truncate user first/last name if it exceeds 30 chars on LDAP auth 2018-11-14 15:51:43 -05:00
AlanCoding
758a488aee Add task manager rescheduling hooks, de-duplication, lifecycle tests 2018-11-14 11:31:34 -05:00
softwarefactory-project-zuul[bot]
c0c358b640 Merge pull request #2682 from ryanpetrello/job-m2m-activity-stream
include M2M labels and credentials in Job creation activity stream

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-14 16:05:30 +00:00
Ryan Petrello
49f4ed10ca include M2M labels and credentials in Job creation activity stream 2018-11-14 10:36:01 -05:00
softwarefactory-project-zuul[bot]
5e18eccd19 Merge pull request #2671 from wenottingham/you-want-to-change-things---well---you-can't
Fix tooltip referring to PROJECTS_ROOT.

Reviewed-by: Bill Nottingham
             https://github.com/wenottingham
2018-11-13 21:01:19 +00:00
softwarefactory-project-zuul[bot]
c4e9daca4e Merge pull request #2669 from paradegoat/issue-2370
Updated JT project-related error message

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-13 20:22:31 +00:00
AlanCoding
5443e10697 Fix bug with non-sliced JT job spawn 2018-11-13 15:20:40 -05:00
Ryan Petrello
a3f9c0b012 warn about FIPS mode if the Django version changes 2018-11-13 15:04:36 -05:00
Geoff Humphreys
8517053934 Updated JT project-related error message
Signed-off-by: Geoff Humphreys <humphreys.geoff@gmail.com>
2018-11-13 14:56:22 -05:00
Bill Nottingham
0506968d4f Fix tooltip referring to PROJECTS_ROOT.
This can't be changed in AWX settings; it needs to be done in settings.py
either on the filesystem, or overridden in a k8s/openshift configmap.
2018-11-13 12:14:24 -05:00
softwarefactory-project-zuul[bot]
3bb91b20d0 Merge pull request #2663 from AlanCoding/unified_jobs_flake
Get rid of star import in unified_jobs.py

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-13 15:37:04 +00:00
softwarefactory-project-zuul[bot]
268b1ff436 Merge pull request #2662 from wwitzel3/devel
move root views to a new file

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-13 15:33:23 +00:00
Hideki Saito
f16a72081a Fixed issue where admin_user and password change are not reflected
- Changing admin_user and admin_password had no effect when using docker-compose (#2666)
2018-11-13 18:21:18 +09:00
Ryan Petrello
cceac8d907 support PKCS8-formatted keys to enable FIPS compliance
see: https://access.redhat.com/solutions/1519083
2018-11-12 16:21:57 -05:00
adamscmRH
8d012de3e2 monkey-patch _digest for fips 2018-11-12 15:32:23 -05:00
AlanCoding
cee7ac9511 get rid of star import in unified_jobs.py 2018-11-12 13:38:58 -05:00
softwarefactory-project-zuul[bot]
36faaf4720 Merge pull request #2656 from iplaman/fix-postgres96
Fix default PostgreSQL version to 9.6

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-12 18:34:48 +00:00
Wayne Witzel III
33b8e7624b move root views to a new file 2018-11-12 12:36:16 -05:00
Idan Bidani
a213e01491 updating default PostgreSQL version to 9.6 2018-11-10 18:27:22 -05:00
softwarefactory-project-zuul[bot]
a00ed8e297 Merge pull request #2646 from jlmitch5/fixCredentialKindTranslation
Fix credential kind translation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-10 03:06:38 +00:00
softwarefactory-project-zuul[bot]
aeaebcd81a Merge pull request #2572 from AlanCoding/coalesce
Coalesce host and group Activity Stream deletion entries

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-09 21:04:07 +00:00
softwarefactory-project-zuul[bot]
b5849f3712 Merge pull request #2639 from ansible/enhancement-inv_summary
Update UJT with inv summary field and update schedule base route to include object to be scheduled

Reviewed-by: Chris Meyers
             https://github.com/chrismeyersfsu
2018-11-09 19:06:36 +00:00
AlanCoding
658f87953e coalesce data without setting 2018-11-09 14:00:45 -05:00
AlanCoding
5562e636ea Coalesce host and group A.S. deletion entries 2018-11-09 13:58:31 -05:00
John Mitchell
80fcdae50b update credential type usage to kind instead of name 2018-11-09 10:36:09 -05:00
softwarefactory-project-zuul[bot]
2c5f209996 Merge pull request #2631 from kialam/fix/2533-dashboard-hover
Fix Dashboard hover-over action so it shows the correct info

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-09 05:27:11 +00:00
softwarefactory-project-zuul[bot]
692b55311e Merge pull request #2645 from ryanpetrello/unbound
fix an unbound variable

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 20:40:57 +00:00
Ryan Petrello
10667fc855 fix an unbound variable
see: https://github.com/ansible/awx/issues/2642
2018-11-08 15:14:47 -05:00
softwarefactory-project-zuul[bot]
1fc33b551d Merge pull request #2643 from kialam/fix/2606
Fix DETAILS link in WF viz not working until after the job has run

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 20:14:45 +00:00
kialam
729256c3d1 Fix DETAILS link in WF viz not working until after the job has run
- Make an additional GET request when needed to surface the job type so that we can properly redirect the user to the detail page.
2018-11-08 13:23:32 -05:00
chris meyers
23e1feba96 fill in summary inv for sched only when needed
* If the schedule's inventory exists, the inventory summary will be
filled in by our generic summary filler. Otherwise, if the related
unified job has an inventory, fill in the inventory summary with that,
explicitly.
2018-11-08 12:43:47 -05:00
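
The fallback order reads roughly like this sketch (`summarize` stands in for the generic summary filler; the attribute names mirror AWX conventions, but the helper is invented):

```python
def schedule_inventory_summary(schedule, summarize):
    if schedule.inventory:
        # The schedule's own inventory wins; the generic filler handles it.
        return summarize(schedule.inventory)
    ujt = schedule.unified_job_template
    if ujt is not None and getattr(ujt, 'inventory', None):
        # Otherwise fall back to the inventory on the job being scheduled.
        return summarize(ujt.inventory)
    return None
```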
John Mitchell
e3614c3012 update to using new inventory id from summary fields of UJT if applicable 2018-11-08 12:22:02 -05:00
chris meyers
f37391397e add inventory to schedule summary fields
* Use the same logic that related inventory uses. If there is an
inventory that overrides the inventory on the unified job template,
summarize that field. Otherwise, use the inventory on the unified job
template being scheduled.
2018-11-08 11:47:31 -05:00
softwarefactory-project-zuul[bot]
6a8454f748 Merge pull request #2352 from ansible/workflows_squared
[Feature] Allow use of workflows inside of workflows

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 16:28:37 +00:00
softwarefactory-project-zuul[bot]
d1328c7625 Merge pull request #2044 from cdvv7788/AWX-2030
Add command to revoke tokens (https://github.com/ansible/awx/issues/2030)

Reviewed-by: Cristian Vargas
             https://github.com/cdvv7788
2018-11-08 15:15:44 +00:00
John Mitchell
d1cce109fb update schedule base route to include resource being scheduled 2018-11-08 09:41:06 -05:00
kialam
32bd8b6473 Fix Dashboard hover-over action so it shows the correct info.
- Refactor our Dashboard directive to display the dashboard graph's x-axis according to the new changes in NVD3 v1.8.1. See https://github.com/nvd3-community/nvd3/blob/gh-pages/examples/lineChart.html for reference.
2018-11-08 09:15:54 -05:00
Ryan Petrello
001bd4ca59 resolve a few token revocation issues, and add tests 2018-11-08 08:15:24 -05:00
softwarefactory-project-zuul[bot]
541b503e06 Merge pull request #2634 from wwitzel3/views-inventory
inventory views

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 12:31:14 +00:00
Cristian Vargas
093c29e315 Add command to revoke tokens
Signed-off-by: Cristian Vargas <cristian@swapps.co>
2018-11-08 07:28:21 -05:00
softwarefactory-project-zuul[bot]
eec7d7199b Merge pull request #2614 from stokkie90/devel
Fixed bug where all group variables are not included

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 12:27:30 +00:00
Rick Stokkingreef
7dbb862673 Fixed test cases 2018-11-08 12:07:07 +01:00
softwarefactory-project-zuul[bot]
be1422d021 Merge pull request #2629 from ansible/org-view-ui-tests
Org view ui tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 06:32:59 +00:00
Wayne Witzel III
459ac0e5d9 inventory views 2018-11-07 22:06:33 -05:00
Wayne Witzel III
16c9d043c0 Merge pull request #2344 from wwitzel3/views-organization
organization views
2018-11-07 21:33:14 -05:00
Wayne Witzel III
1b465c4ed9 remove duplicate BaseUsersList 2018-11-07 18:18:41 -05:00
Wayne Witzel III
198a0db808 move organization views to their own file 2018-11-07 18:18:41 -05:00
softwarefactory-project-zuul[bot]
91dda0a164 Merge pull request #2625 from abedwardsw/feature/vmware_groupby_custom_field_excludes
update to latest vmware_inventory.py with support for groupby_custom_field_excludes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 21:03:05 +00:00
softwarefactory-project-zuul[bot]
9341480209 Merge pull request #2627 from jakemcdermott/fix-form-cred-lookups
fix project and adhoc command form lookups when many credential types exist

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 20:43:36 +00:00
Adam Edwards
21877b3378 update to latest vmware_inventory.py with support for groupby_custom_field_excludes
e364d717cb/contrib/inventory/vmware_inventory.py

Signed-off-by: Adam Edwards <adam@middleware360.com>
2018-11-07 15:43:15 -05:00
softwarefactory-project-zuul[bot]
03169a96ef Merge pull request #2626 from jakemcdermott/fix-multicred
always recompile multicredential lists

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 20:27:20 +00:00
Marliana Lara
ebc3dbe7b6 Add source WF label to job details template
* Change label from "Parent WF" to "Source WF"
* Fix WF job result Firefox responsive style bugs
2018-11-07 13:22:41 -05:00
AlanCoding
1bed5d4af2 avoid nested on_commit use 2018-11-07 13:22:41 -05:00
AlanCoding
0783d86c6c adjust recursion error text 2018-11-07 13:22:41 -05:00
Marliana Lara
a3d5705cea Fix bug with workflow maker templates pagination and smart search 2018-11-07 13:22:40 -05:00
Marliana Lara
2ae8583a86 Fix Firefox labels bug 2018-11-07 13:22:40 -05:00
Marliana Lara
edda4bb265 Address PR review 2018-11-07 13:22:40 -05:00
AlanCoding
d068481aec link workflow job node based on job type, not UJT type 2018-11-07 13:22:40 -05:00
Marliana Lara
e20d8c8e81 Show workflow badge
Add Workflow tags

Hookup workflow details link

Add parent workflow and job explanation fields

Add workflow key icon to WF maker and WF results

Hookup wf prompting

Add wf key dropdown and hide wf info badge
2018-11-07 13:22:39 -05:00
Marliana Lara
f6cc351f7f Format workflow-chart directive code 2018-11-07 13:22:39 -05:00
Marliana Lara
c2d4887043 Include workflow jobs in workflow maker job templates list 2018-11-07 13:22:39 -05:00
AlanCoding
4428dbf1ff Allow use of role_level filter in UJT list 2018-11-07 13:22:39 -05:00
AlanCoding
e225489f43 workflows-in-workflows add docs and tests 2018-11-07 13:22:39 -05:00
AlanCoding
5169fe3484 safeguard against infinite loop if jobs have cycles 2018-11-07 13:22:38 -05:00
AlanCoding
01d1470544 workflow variables processing, recursion detection 2018-11-07 13:22:38 -05:00
AlanCoding
faa6ee47c5 allow use of workflows in workflows 2018-11-07 13:22:38 -05:00
softwarefactory-project-zuul[bot]
aa1d71148c Merge pull request #2583 from AlanCoding/superusers_manage
Do not block superusers with MANAGE_ORGANIZATION_AUTH setting

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 17:38:04 +00:00
Daniel Sami
e86ded6c68 removed namespace from user fixture 2018-11-07 09:02:00 -05:00
Daniel Sami
365bf4eb53 UI tests for org permission views 2018-11-07 09:01:23 -05:00
Jake McDermott
ceb9bfe486 fix adhoc command cred lookup when many credential types exist 2018-11-06 22:47:07 -05:00
Jake McDermott
0f85c867a0 fix project cred lookups when many credential types exist 2018-11-06 22:46:50 -05:00
Jake McDermott
f8a8186bd1 always recompile multicred lists 2018-11-06 20:58:08 -05:00
softwarefactory-project-zuul[bot]
33dfb6bf76 Merge pull request #2623 from ryanpetrello/this-is-the-end
properly validate cert data that happens to contain an END substring

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 21:14:26 +00:00
Ryan Petrello
28cd762dd7 properly validate cert data that happens to contain an END substring 2018-11-06 15:57:35 -05:00
softwarefactory-project-zuul[bot]
217cca47f5 Merge pull request #2620 from ansible/chrismeyersfsu-patch-1
Update saml.md

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 20:16:20 +00:00
softwarefactory-project-zuul[bot]
8faa5d8b7a Merge pull request #2591 from AlanCoding/no_more_ask
Implement deprecation of duplicated ask_ fields in job view

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 20:13:30 +00:00
softwarefactory-project-zuul[bot]
2c6711e183 Merge pull request #2618 from ryanpetrello/json-activity-stream-changes
send activity stream changes as raw JSON, not a JSON-ified string

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 19:45:19 +00:00
Chris Meyers
45328b6e6d Update saml.md 2018-11-06 14:45:10 -05:00
Ryan Petrello
1523feee91 send activity stream changes as raw JSON, not a JSON-ified string
see: https://github.com/ansible/awx/issues/2005
2018-11-06 14:28:57 -05:00
softwarefactory-project-zuul[bot]
856dc3645e Merge pull request #2605 from jakemcdermott/fix-2601
remove admin and member roles from organization->team permissions 

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 17:08:43 +00:00
Jake McDermott
5e4dd54112 remove admin and member roles from organization->team role assignment options 2018-11-06 11:52:19 -05:00
softwarefactory-project-zuul[bot]
5860689619 Merge pull request #2596 from jlmitch5/fixPermIssue
fix permission issue with regular user jt admins

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 16:40:58 +00:00
John Mitchell
da7834476b remove inadvertent scope variable that was added 2018-11-06 10:52:16 -05:00
John Mitchell
d5ba981515 remove inadvertent log statement 2018-11-06 10:50:15 -05:00
softwarefactory-project-zuul[bot]
afe07bd874 Merge pull request #2595 from ryanpetrello/fix-gce-json-creds
move from GCE_PEM_FILE_PATH to GCE_CREDENTIALS_FILE_PATH

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 15:09:11 +00:00
Rick Stokkingreef
f916bd7994 Fixed bug when all group vars are not included 2018-11-06 14:26:36 +01:00
softwarefactory-project-zuul[bot]
3fef7acaa8 Merge pull request #2457 from jakemcdermott/output-line-fixup
don't render blank output lines for some event types + adjust line wrapping

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 01:54:38 +00:00
Jake McDermott
95190c5509 remove unused constants 2018-11-05 20:13:00 -05:00
Jake McDermott
76e887f46d highlight entire row on hover 2018-11-05 20:12:52 -05:00
Jake McDermott
0c2b1b7747 don't compile html in real time 2018-11-05 20:12:44 -05:00
Jake McDermott
4c74c8c40c delete contents of slide array before reassigning 2018-11-05 20:12:37 -05:00
Jake McDermott
3a929919a3 enable expanded details for dynamic host events 2018-11-05 20:12:29 -05:00
Jake McDermott
c25af96c56 don't render events if stdout is zero-length string 2018-11-05 20:12:21 -05:00
Jake McDermott
f28f1e434d adjust output line wrapping 2018-11-05 20:12:12 -05:00
Jake McDermott
b3c5df193a don't render playbook_on_notify or runner_on_ok events if they have no stdout 2018-11-05 20:11:53 -05:00
John Mitchell
8645602b0a fix permission issue where regular users assigned jt admin could not add user jt roles they couldn't edit 2018-11-05 16:45:35 -05:00
Ryan Petrello
05156a5991 move from GCE_PEM_FILE_PATH to GCE_CREDENTIALS_FILE_PATH 2018-11-05 15:44:31 -05:00
softwarefactory-project-zuul[bot]
049d642df8 Merge pull request #1894 from AlanCoding/rm_sdonu
Remove unused project field

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-05 16:14:56 +00:00
AlanCoding
951ebf146a remove unused project field 2018-11-05 10:40:53 -05:00
AlanCoding
7a67e0f3d6 Implement deprecation of duplicated ask_ fields in job view 2018-11-05 10:38:55 -05:00
softwarefactory-project-zuul[bot]
37def8cf7c Merge pull request #2587 from jakemcdermott/fix-2563
remove admin and member roles from team->organizations role assignment options

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-05 15:13:18 +00:00
Jake McDermott
e4c28fed03 remove admin and member roles from team->organizations role assignment options 2018-11-04 22:59:03 -05:00
softwarefactory-project-zuul[bot]
f8b7259d7f Merge pull request #2543 from westfood/fix-helm-postgresql
Using new Helm parameters for PostgreSQL access.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-02 18:56:15 +00:00
AlanCoding
6ae1e156c8 do not block superusers with MANAGE_ORGANIZATION_AUTH setting 2018-11-02 14:13:05 -04:00
softwarefactory-project-zuul[bot]
9a055dbf78 Merge pull request #2550 from matburt/runner_on_start
Add support for runner_on_start

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-02 15:10:07 +00:00
Matthew Jones
80ac44565a Make sure we reference the actual hostname 2018-11-02 10:37:58 -04:00
softwarefactory-project-zuul[bot]
a338199198 Merge pull request #2577 from AlanCoding/fix_grandparents
Fix bug where grandparent groups were excluded

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-02 14:36:48 +00:00
AlanCoding
47fc0a759f fix bug where grandparent groups were excluded 2018-11-02 10:10:38 -04:00
softwarefactory-project-zuul[bot]
5eb4b35508 Merge pull request #2547 from AlanCoding/decorator
Get rid of decorator dependency

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 20:39:30 +00:00
softwarefactory-project-zuul[bot]
d93eedaedb Merge pull request #2571 from ryanpetrello/devel
Merge remote-tracking branch 'release_3.3.1' into devel

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 16:48:46 +00:00
Ryan Petrello
a748a272fb Merge remote-tracking branch 'tower/release_3.3.1' into devel 2018-11-01 12:07:02 -04:00
AlanCoding
d8d710a83d get rid of decorator dependency 2018-10-31 11:37:10 -04:00
Matthew Jones
673068464a Add support for runner_on_start
This will be available in ansible 2.8
2018-10-30 13:23:55 -04:00
westfood
694e494484 Using new Helm parameters for PostgreSQL access. 2018-10-28 11:55:36 +01:00
Wayne Witzel III
c8e208dea7 Merge pull request #3074 from wwitzel3/release_3.3.1
fix typo in length
2018-10-17 17:10:41 -04:00
Wayne Witzel III
f2cec03900 fix typo in length 2018-10-17 16:34:24 -04:00
Wayne Witzel III
0b4e0678e9 Merge pull request #3070 from wwitzel3/release_3.3.1
better error handling when over limit
2018-10-17 09:09:01 -04:00
Wayne Witzel III
6e3b2a5c2d better error handling when over limit 2018-10-15 16:07:14 -04:00
Ryan Petrello
3d378077d9 Merge pull request #3066 from saito-hideki/issue/3064
[3.3.1] Add files and output of commands to gather with sosreport
2018-10-15 12:28:08 -04:00
Ryan Petrello
716d440a76 Merge pull request #3068 from ryanpetrello/fix-3039
fix a typo on the JT add page that breaks the custom venv field
2018-10-15 11:40:33 -04:00
Ryan Petrello
9d81727d16 fix a typo on the JT add page that breaks the custom venv field 2018-10-15 11:19:58 -04:00
Hideki Saito
d5626a4f3e [3.3.1] Add files and output of commands to gather with sosreport
- Fixed issue #3064
2018-10-15 11:40:51 +09:00
Ryan Petrello
6073e8e3b6 Merge pull request #3062 from ryanpetrello/fix-3043
minor nit for https://github.com/ansible/tower/pull/3060
2018-10-12 16:56:23 -04:00
Ryan Petrello
867ff5da71 minor nit for https://github.com/ansible/tower/pull/3060 2018-10-12 16:17:14 -04:00
Ryan Petrello
c8b2ca7fed Merge pull request #3060 from ryanpetrello/fix-3043
properly handle AnsibleVaultEncryptedUnicode objects in the callback
2018-10-12 15:42:11 -04:00
Ryan Petrello
d4e3127fb4 properly handle AnsibleVaultEncryptedUnicode objects in the callback 2018-10-12 12:29:46 -04:00
Chris Meyers
503a47c509 Merge pull request #3054 from chrismeyersfsu/fix-ldap_posix_group_type
fix issue with ldap queries containing unicode
2018-10-12 09:14:09 -05:00
Ryan Petrello
71577bb00d Merge pull request #3052 from wwitzel3/bump-asgi_amqp
Use latest version of asgi_amqp
2018-10-10 16:07:57 -04:00
chris meyers
cfb58eb145 fix issue with ldap queries containing unicode 2018-10-10 12:32:27 -04:00
Wayne Witzel III
5994c35975 Use latest version of asgi_amqp 2018-10-10 11:33:11 -04:00
Michael Abashian
3aa07baf26 Merge pull request #3035 from mabashian/3010-extra-vars
Fixes bug where schedule extra vars were not being displayed in the edit form
2018-09-28 09:24:41 -04:00
Chris Meyers
f1c53fcd85 Merge pull request #3034 from chrismeyersfsu/fix-ldap_params
at migration time, validate ldap group type params
2018-09-28 08:53:42 -04:00
mabashian
6c98e6c3a0 Actually fix extra vars on edit schedule. This commit takes into account survey question answers which need to get pulled out of extra vars and displayed in the prompt. 2018-09-27 16:49:23 -04:00
mabashian
8aec4ed72e Fixes bug where schedule extra vars were not being displayed in the edit form 2018-09-27 16:30:10 -04:00
chris meyers
0a0cdc2e21 at migration time, validate ldap group type params
* Previously, we had logic in the API to ensure that ldap group type
params, when changed, align with the ldap group type class init
expectations. However, we did not have this logic in the migrations.
This PR adds the validation check to migrations.
2018-09-27 12:18:39 -04:00
Ryan Petrello
f0776d6838 Merge pull request #3026 from ryanpetrello/fix-3004
properly support deprecated `Authorization: Token xyz`
2018-09-24 15:33:56 -04:00
Ryan Petrello
9de63832ce properly support deprecated Authorization: Token xyz 2018-09-24 15:16:09 -04:00
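For illustration, the deprecated header form from this commit next to the OAuth2 `Bearer` form (a sketch; the token value is a placeholder, and the `Bearer` variant is an assumption based on AWX's OAuth2 support rather than on this commit):

```bash
# Deprecated, but still accepted per the commit above:
curl -k -H 'Authorization: Token mytokenvalue' https://localhost:8043/api/v2/me/
# OAuth2 bearer form (assumed):
curl -k -H 'Authorization: Bearer mytokenvalue' https://localhost:8043/api/v2/me/
```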
John Mitchell
70629ef7f3 Merge pull request #2997 from jlmitch5/fixPageSelector
fix filter page size selector
2018-09-13 10:42:50 -04:00
John Mitchell
1d8bb47726 fix filter page size selector 2018-09-12 17:31:10 -04:00
Matthew Jones
5e16c72d30 Merge pull request #2988 from mabashian/2982-wfjt-list-select
Fixes bug in wfjt node form where rows weren't remaining selected after being clicked
2018-09-12 14:00:32 -04:00
Matthew Jones
02f709f8d1 Merge pull request #2995 from jlmitch5/lodashFindUpdate
update syntax of lodash find call
2018-09-12 13:59:59 -04:00
Shane McDonald
90bd27f5a8 Whitespace fix
I’m not actually this pedantic, I just need something to tag.
2018-09-12 13:41:56 -04:00
John Mitchell
593ab90f92 update syntax of lodash find call 2018-09-12 10:54:17 -04:00
mabashian
27c06a7285 Fixes bug in wfjt node form where rows weren't remaining selected after being clicked 2018-09-11 16:34:02 -04:00
Ryan Petrello
b2c755ba76 Merge pull request #2980 from rooftopcellist/amend_changelog_networkui
rm network ui from changelog
2018-09-11 10:03:00 -04:00
Ryan Petrello
c88cab7d31 Merge pull request #2983 from ansible/deprecated_facts
deprecate fact endpoints
2018-09-11 10:02:25 -04:00
chris meyers
f82f4a9993 deprecate fact endpoints and commands 2018-09-07 17:46:33 -04:00
adamscmRH
5a6f1a342f rm network ui from changelog 2018-09-07 15:04:34 -04:00
415 changed files with 10755 additions and 9366 deletions

.gitignore
View File

@@ -1,3 +1,7 @@
# Ignore generated schema
swagger.json
schema.json
reference-schema.json
# Tags
.tags

View File

@@ -34,10 +34,11 @@ Have questions about this document or anything not covered here? Come chat with
## Things to know prior to submitting code
- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` rather than `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt (a combined sketch of this workflow follows this list).
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also saves time and effort if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
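A minimal end-to-end sketch of the workflow described in the items above (the commit message and branch name are placeholders):

```bash
# Sign off each commit so the DCO check passes
git commit --signoff -m "Fix the thing"

# Rebase onto devel rather than merging, so no merge commits land in the PR
git fetch origin
git rebase origin/devel

# Push safely when collaborating on a shared branch
git push --force-with-lease origin my-feature-branch
```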
## Setting up your development environment
@@ -49,7 +50,7 @@ The AWX development environment workflow and toolchain is based on Docker, and t
Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker themselves maintain a separate repo that tracks the latest releases more closely.
For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows)
respectively.
For Linux platforms, refer to the following from Docker:
@@ -137,21 +138,21 @@ Run the following to build the AWX UI:
```
### Running the environment
#### Start the containers
Start the development containers by running the following:
```bash
(host)$ make docker-compose
```
The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django and the front end build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:
```bash
# List running containers
(host)$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@@ -219,7 +220,7 @@ If you want to start and use the development environment, you'll first need to b
```
The above will do all the setup tasks, including running database migrations, so it may take a couple minutes.
Now you can start each service individually, or start all services in a pre-configured tmux session like so:
```bash
@@ -248,9 +249,9 @@ Before you can log into AWX, you need to create an admin user. With this user yo
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
@@ -276,7 +277,7 @@ in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
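For example, you can generate the schema locally with the `docker-compose-genschema` target shown in the Makefile changes below (a sketch; it assumes the development images are already built):

```bash
# Produces schema.json in the repo root; paste its contents into
# http://editor.swagger.io to browse the API documentation.
make docker-compose-genschema
```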
### Accessing the AWX web interface
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).
@@ -289,7 +290,7 @@ When necessary, remove any AWX containers and images by running the following:
```bash
(host)$ make docker-clean
```
## What should I work on?
For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).
@@ -331,6 +332,23 @@ Sometimes it might take us a while to fully review your PR. We try to keep the `
All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.
## PR Checks run by Zuul
Zuul jobs for awx are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.
Zuul runs the following checks that must pass:
1) `tox-awx-api-lint`
2) `tox-awx-ui-lint`
3) `tox-awx-api`
4) `tox-awx-ui`
5) `tox-awx-swagger`
Zuul runs the following checks that are non-voting (they may fail without blocking the PR, but serve to inform PR reviewers):
1) `tox-awx-detect-schema-change`
This check generates the schema and diffs it against a reference copy of the `devel` version of the schema.
Reviewers should inspect the `job-output.txt.gz` related to the check if there is a failure (grep for `diff -u -b` to find the beginning of the diff).
If the schema change is expected and makes sense in relation to the changes made by the PR, then you are good to go! If not, the schema changes should be fixed; it is up to reviewers to enforce this.
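When the check does fail, something along these lines will locate the relevant hunk in the build log (a sketch; the log file name is taken from the paragraph above, and its location is an assumption):

```bash
# Zuul attaches the build log as job-output.txt.gz; the schema diff
# starts at the `diff -u -b` invocation.
zgrep -n -A 20 'diff -u -b' job-output.txt.gz
```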
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issue, we ask that you please view our [Issues guide](./ISSUES.md).

View File

@@ -11,7 +11,6 @@ GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
IMAGE_REPOSITORY_AUTH ?=
IMAGE_REPOSITORY_BASE ?= https://gcr.io
VERSION := $(shell cat VERSION)
# NOTE: This defaults the container image version to the branch that's active
@@ -85,6 +84,11 @@ clean-venv:
clean-dist:
rm -rf dist
clean-schema:
rm -rf swagger.json
rm -rf schema.json
rm -rf reference-schema.json
# Remove temporary build files, compiled Python files.
clean: clean-ui clean-dist
rm -rf awx/public
@@ -204,7 +208,7 @@ init:
if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat >> /root/.ssh/authorized_keys'; \
$(MANAGEMENT_COMMAND) generate_isolated_key > /awx_devel/awx/main/expect/authorized_keys; \
fi;
# Refresh development environment after pulling new code.
@@ -338,11 +342,14 @@ pyflakes: reports
pylint: reports
@(set -o pipefail && $@ | reports/$@.report)
genschema: reports
$(MAKE) swagger PYTEST_ARGS="--genschema"
swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
check: flake8 pep8 # pyflakes pylint
@@ -541,11 +548,6 @@ docker-isolated:
docker start tools_awx_1
docker start tools_isolated_1
echo "__version__ = '`git describe --long | cut -d - -f 1-1`'" | docker exec -i tools_isolated_1 /bin/bash -c "cat > /venv/awx/lib/python2.7/site-packages/awx.py"
if [ "`docker exec -i -t tools_isolated_1 cat /root/.ssh/authorized_keys`" == "`docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub`" ]; then \
echo "SSH keys already copied to isolated instance"; \
else \
docker exec "tools_isolated_1" bash -c "mkdir -p /root/.ssh && rm -f /root/.ssh/authorized_keys && echo $$(docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"; \
fi
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
# Docker Compose Development environment
@@ -564,6 +566,16 @@ docker-compose-runtest:
docker-compose-build-swagger:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh swagger
docker-compose-genschema:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh genschema
mv swagger.json schema.json
docker-compose-detect-schema-change:
$(MAKE) docker-compose-genschema
curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
# Ignore differences in whitespace with -b
diff -u -b schema.json reference-schema.json
docker-compose-clean:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm -w /awx_devel --service-ports awx make clean
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose rm -sf
@@ -600,7 +612,6 @@ docker-compose-cluster-elk: docker-auth
minishift-dev:
ansible-playbook -i localhost, -e devtree_directory=$(CURDIR) tools/clusterdevel/start_minishift_dev.yml
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1

View File

@@ -21,6 +21,48 @@ except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django
from django.utils.encoding import force_bytes
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.base import schema
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
if HAS_DJANGO is True:
# This line exists to make sure we don't regress on FIPS support if we
# upgrade Django; if you're upgrading Django and see this error,
# update the version check below, and confirm that FIPS still works.
if django.__version__ != '1.11.16':
raise RuntimeError("Django version other than 1.11.16 detected {}. \
Subclassing BaseDatabaseSchemaEditor is known to work for Django 1.11.16 \
and may not work in newer Django versions.".format(django.__version__))
class FipsBaseDatabaseSchemaEditor(BaseDatabaseSchemaEditor):
@classmethod
def _digest(cls, *args):
"""
Generates a 32-bit digest of a set of arguments that can be used to
shorten identifying names.
"""
try:
h = hashlib.md5()
except ValueError:
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(force_bytes(arg))
return h.hexdigest()[:8]
schema.BaseDatabaseSchemaEditor = FipsBaseDatabaseSchemaEditor
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.
command_dir = os.path.join(management_dir, 'commands')

View File

@@ -25,7 +25,6 @@ from rest_framework.filters import BaseFilterBackend
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names
from awx.main.models.credential import CredentialType
from awx.main.models.rbac import RoleAncestorEntry
class V1CredentialFilterBackend(BaseFilterBackend):
@@ -347,12 +346,12 @@ class FieldLookupBackend(BaseFilterBackend):
else:
args.append(Q(**{k:v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_(
'Cannot apply role_level filter to this list because its model '
'does not use roles for access control.'))
args.append(
Q(pk__in=RoleAncestorEntry.objects.filter(
ancestor__in=request.user.roles.all(),
content_type_id=ContentType.objects.get_for_model(queryset.model).id,
role_field=role_name
).values_list('object_id').distinct())
Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name))
)
if or_filters:
q = Q()

View File

@@ -6,7 +6,7 @@ import inspect
import logging
import time
import six
import urllib
# Django
from django.conf import settings
@@ -56,7 +56,7 @@ __all__ = ['APIView', 'GenericAPIView', 'ListAPIView', 'SimpleListAPIView',
'ParentMixin',
'DeleteLastUnattachLabelMixin',
'SubListAttachDetachAPIView',
'CopyAPIView']
'CopyAPIView', 'BaseUsersList',]
logger = logging.getLogger('awx.api.generics')
analytics_logger = logging.getLogger('awx.analytics.performance')
@@ -92,8 +92,7 @@ class LoggedLoginView(auth_views.LoginView):
current_user = UserSerializer(self.request.user)
current_user = JSONRenderer().render(current_user.data)
current_user = urllib.quote('%s' % current_user, '')
ret.set_cookie('current_user', current_user)
ret.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
return ret
else:
ret.status_code = 401
@@ -552,9 +551,8 @@ class SubListDestroyAPIView(DestroyAPIView, SubListAPIView):
def perform_list_destroy(self, instance_list):
if self.check_sub_obj_permission:
# Check permissions for all before deleting, avoiding half-deleted lists
for instance in instance_list:
if self.has_delete_permission(instance):
if not self.has_delete_permission(instance):
raise PermissionDenied()
for instance in instance_list:
self.perform_destroy(instance, check_permission=False)
@@ -990,3 +988,22 @@ class CopyAPIView(GenericAPIView):
serializer = self._get_copy_return_serializer(new_obj)
headers = {'Location': new_obj.get_absolute_url(request=request)}
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class BaseUsersList(SubListCreateAttachDetachAPIView):
def post(self, request, *args, **kwargs):
ret = super(BaseUsersList, self).post( request, *args, **kwargs)
if ret.status_code != 201:
return ret
try:
if ret.data is not None and request.data.get('is_system_auditor', False):
# This is a faux-field that just maps to checking the system
# auditor role member list.. unfortunately this means we can't
# set it on creation, and thus needs to be set here.
user = User.objects.get(id=ret.data['id'])
user.is_system_auditor = request.data['is_system_auditor']
ret.data['is_system_auditor'] = request.data['is_system_auditor']
except AttributeError as exc:
print(exc)
pass
return ret

View File

@@ -61,7 +61,7 @@ from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.validators import vars_validate_or_raise
from awx.conf.license import feature_enabled
from awx.conf.license import feature_enabled, LicenseForbids
from awx.api.versioning import reverse, get_request_version
from awx.api.fields import (BooleanNullField, CharNullField, ChoiceNullField,
VerbatimField, DeprecatedCredentialField)
@@ -104,7 +104,7 @@ SUMMARIZABLE_FK_FIELDS = {
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'vault_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed', 'type'),
'job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job': DEFAULT_SUMMARY_FIELDS,
@@ -1311,16 +1311,12 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
'admin', 'update',
{'copy': 'organization.project_admin'}
]
scm_delete_on_next_update = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
class Meta:
model = Project
fields = ('*', 'organization', 'scm_delete_on_next_update', 'scm_update_on_launch',
fields = ('*', 'organization', 'scm_update_on_launch',
'scm_update_cache_timeout', 'scm_revision', 'custom_virtualenv',) + \
('last_update_failed', 'last_updated') # Backwards compatibility
read_only_fields = ('scm_delete_on_next_update',)
def get_related(self, obj):
res = super(ProjectSerializer, self).get_related(obj)
@@ -1503,6 +1499,12 @@ class InventorySerializer(BaseSerializerWithVariables):
'admin', 'adhoc',
{'copy': 'organization.inventory_admin'}
]
groups_with_active_failures = serializers.IntegerField(
read_only=True,
min_value=0,
help_text=_('This field has been deprecated and will be removed in a future release')
)
class Meta:
model = Inventory
@@ -1724,6 +1726,11 @@ class AnsibleFactsSerializer(BaseSerializer):
class GroupSerializer(BaseSerializerWithVariables):
capabilities_prefetch = ['inventory.admin', 'inventory.adhoc']
groups_with_active_failures = serializers.IntegerField(
read_only=True,
min_value=0,
help_text=_('This field has been deprecated and will be removed in a future release')
)
class Meta:
model = Group
@@ -3050,7 +3057,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job types 'run' and 'check' must have assigned a project.")})
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
raise serializers.ValidationError({'inventory': prompting_error_message})
@@ -3059,6 +3066,13 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
def validate_extra_vars(self, value):
return vars_validate_or_raise(value)
def validate_job_slice_count(self, value):
if value > 1 and not feature_enabled('workflows'):
raise LicenseForbids({'job_slice_count': [_(
"Job slicing is a workflows-based feature and your license does not allow use of workflows."
)]})
return value
def get_summary_fields(self, obj):
summary_fields = super(JobTemplateSerializer, self).get_summary_fields(obj)
all_creds = []
@@ -3103,19 +3117,46 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
return summary_fields
class JobTemplateWithSpecSerializer(JobTemplateSerializer):
'''
Used for activity stream entries.
'''
class Meta:
model = JobTemplate
fields = ('*', 'survey_spec')
class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
passwords_needed_to_start = serializers.ReadOnlyField()
ask_diff_mode_on_launch = serializers.ReadOnlyField()
ask_variables_on_launch = serializers.ReadOnlyField()
ask_limit_on_launch = serializers.ReadOnlyField()
ask_skip_tags_on_launch = serializers.ReadOnlyField()
ask_tags_on_launch = serializers.ReadOnlyField()
ask_job_type_on_launch = serializers.ReadOnlyField()
ask_verbosity_on_launch = serializers.ReadOnlyField()
ask_inventory_on_launch = serializers.ReadOnlyField()
ask_credential_on_launch = serializers.ReadOnlyField()
ask_diff_mode_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_variables_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_limit_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_skip_tags_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_tags_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_job_type_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_verbosity_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_inventory_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_credential_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
artifacts = serializers.SerializerMethodField()
class Meta:
@@ -3558,7 +3599,7 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'extra_vars', 'organization', 'survey_enabled', 'allow_simultaneous',
'ask_variables_on_launch',)
'ask_variables_on_launch', 'inventory', 'ask_inventory_on_launch',)
def get_related(self, obj):
res = super(WorkflowJobTemplateSerializer, self).get_related(obj)
@@ -3586,13 +3627,24 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
return vars_validate_or_raise(value)
class WorkflowJobTemplateWithSpecSerializer(WorkflowJobTemplateSerializer):
'''
Used for activity stream entries.
'''
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'survey_spec')
class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
class Meta:
model = WorkflowJob
fields = ('*', 'workflow_job_template', 'extra_vars', 'allow_simultaneous',
'job_template', 'is_sliced_job',
'-execution_node', '-event_processing_finished', '-controller_node',)
'-execution_node', '-event_processing_finished', '-controller_node',
'inventory',)
def get_related(self, obj):
res = super(WorkflowJobSerializer, self).get_related(obj)
@@ -3675,7 +3727,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
if obj is None:
return ret
if 'extra_data' in ret and obj.survey_passwords:
ret['extra_data'] = obj.display_extra_data()
ret['extra_data'] = obj.display_extra_vars()
return ret
def get_summary_fields(self, obj):
@@ -3816,9 +3868,6 @@ class WorkflowJobTemplateNodeSerializer(LaunchConfigurationBaseSerializer):
ujt_obj = attrs['unified_job_template']
elif self.instance:
ujt_obj = self.instance.unified_job_template
if isinstance(ujt_obj, (WorkflowJobTemplate)):
raise serializers.ValidationError({
"unified_job_template": _("Cannot nest a %s inside a WorkflowJobTemplate") % ujt_obj.__class__.__name__})
if 'credential' in deprecated_fields: # TODO: remove when v2 API is deprecated
cred = deprecated_fields['credential']
attrs['credential'] = cred
@@ -3867,7 +3916,8 @@ class WorkflowJobNodeSerializer(LaunchConfigurationBaseSerializer):
class Meta:
model = WorkflowJobNode
fields = ('*', 'credential', 'job', 'workflow_job', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',)
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',
'do_not_run',)
def get_related(self, obj):
res = super(WorkflowJobNodeSerializer, self).get_related(obj)
@@ -4369,37 +4419,63 @@ class JobLaunchSerializer(BaseSerializer):
class WorkflowJobLaunchSerializer(BaseSerializer):
can_start_without_user_input = serializers.BooleanField(read_only=True)
defaults = serializers.SerializerMethodField()
variables_needed_to_start = serializers.ReadOnlyField()
survey_enabled = serializers.SerializerMethodField()
extra_vars = VerbatimField(required=False, write_only=True)
inventory = serializers.PrimaryKeyRelatedField(
queryset=Inventory.objects.all(),
required=False, write_only=True
)
workflow_job_template_data = serializers.SerializerMethodField()
class Meta:
model = WorkflowJobTemplate
fields = ('can_start_without_user_input', 'extra_vars',
'survey_enabled', 'variables_needed_to_start',
fields = ('ask_inventory_on_launch', 'can_start_without_user_input', 'defaults', 'extra_vars',
'inventory', 'survey_enabled', 'variables_needed_to_start',
'node_templates_missing', 'node_prompts_rejected',
'workflow_job_template_data')
'workflow_job_template_data', 'survey_enabled')
read_only_fields = ('ask_inventory_on_launch',)
def get_survey_enabled(self, obj):
if obj:
return obj.survey_enabled and 'spec' in obj.survey_spec
return False
def get_defaults(self, obj):
defaults_dict = {}
for field_name in WorkflowJobTemplate.get_ask_mapping().keys():
if field_name == 'inventory':
defaults_dict[field_name] = dict(
name=getattrd(obj, '%s.name' % field_name, None),
id=getattrd(obj, '%s.pk' % field_name, None))
else:
defaults_dict[field_name] = getattr(obj, field_name)
return defaults_dict
def get_workflow_job_template_data(self, obj):
return dict(name=obj.name, id=obj.id, description=obj.description)
def validate(self, attrs):
obj = self.instance
template = self.instance
accepted, rejected, errors = obj._accept_or_ignore_job_kwargs(
_exclude_errors=['required'],
**attrs)
accepted, rejected, errors = template._accept_or_ignore_job_kwargs(**attrs)
self._ignored_fields = rejected
WFJT_extra_vars = obj.extra_vars
attrs = super(WorkflowJobLaunchSerializer, self).validate(attrs)
obj.extra_vars = WFJT_extra_vars
return attrs
if template.inventory and template.inventory.pending_deletion is True:
errors['inventory'] = _("The inventory associated with this Workflow is being deleted.")
elif 'inventory' in accepted and accepted['inventory'].pending_deletion:
errors['inventory'] = _("The provided inventory is being deleted.")
if errors:
raise serializers.ValidationError(errors)
WFJT_extra_vars = template.extra_vars
WFJT_inventory = template.inventory
super(WorkflowJobLaunchSerializer, self).validate(attrs)
template.extra_vars = WFJT_extra_vars
template.inventory = WFJT_inventory
return accepted
class NotificationTemplateSerializer(BaseSerializer):
@@ -4618,6 +4694,23 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
res['inventory'] = obj.unified_job_template.inventory.get_absolute_url(self.context.get('request'))
return res
def get_summary_fields(self, obj):
summary_fields = super(ScheduleSerializer, self).get_summary_fields(obj)
if 'inventory' in summary_fields:
return summary_fields
inventory = None
if obj.unified_job_template and getattr(obj.unified_job_template, 'inventory', None):
inventory = obj.unified_job_template.inventory
else:
return summary_fields
summary_fields['inventory'] = dict()
for field in SUMMARIZABLE_FK_FIELDS['inventory']:
summary_fields['inventory'][field] = getattr(inventory, field, None)
return summary_fields
def validate_unified_job_template(self, value):
if type(value) == InventorySource and value.source not in SCHEDULEABLE_PROVIDERS:
raise serializers.ValidationError(_('Inventory Source must be a cloud resource.'))

View File

@@ -8,8 +8,8 @@ import dateutil
import time
import socket
import sys
import logging
import requests
import functools
from base64 import b64encode
from collections import OrderedDict, Iterable
import six
@@ -18,14 +18,12 @@ import six
# Django
from django.conf import settings
from django.core.exceptions import FieldError, ObjectDoesNotExist
from django.db.models import Q, Count
from django.db.models import Q
from django.db import IntegrityError, transaction, connection
from django.shortcuts import get_object_or_404
from django.utils.encoding import smart_text
from django.utils.safestring import mark_safe
from django.utils.timezone import now
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt, ensure_csrf_cookie
from django.views.decorators.csrf import csrf_exempt
from django.template.loader import render_to_string
from django.http import HttpResponse
from django.contrib.contenttypes.models import ContentType
@@ -35,7 +33,7 @@ from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import PermissionDenied, ParseError
from rest_framework.parsers import FormParser
from rest_framework.permissions import AllowAny, IsAuthenticated, SAFE_METHODS
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
from rest_framework.settings import api_settings
from rest_framework.views import exception_handler
@@ -63,12 +61,11 @@ from wsgiref.util import FileWrapper
# AWX
from awx.main.tasks import send_notifications
from awx.main.access import get_user_queryset
from awx.main.ha import is_ha_environment
from awx.api.filters import V1CredentialFilterBackend
from awx.api.generics import get_view_name
from awx.api.generics import * # noqa
from awx.api.versioning import reverse, get_request_version, drf_reverse
from awx.conf.license import get_license, feature_enabled, feature_exists, LicenseForbids
from awx.api.versioning import reverse, get_request_version
from awx.conf.license import feature_enabled, feature_exists, LicenseForbids, get_license
from awx.main.models import * # noqa
from awx.main.utils import * # noqa
from awx.main.utils import (
@@ -91,7 +88,7 @@ from awx.api.renderers import * # noqa
from awx.api.serializers import * # noqa
from awx.api.metadata import RoleMetadata, JobTypeMetadata
from awx.main.constants import ACTIVE_STATES
from awx.main.scheduler.tasks import run_job_complete
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
SystemTrackingEnforcementMixin,
@@ -100,7 +97,53 @@ from awx.api.views.mixin import (
InstanceGroupMembershipMixin,
RelatedJobsPreventDeleteMixin,
OrganizationCountsMixin,
ControlledByScmMixin,
)
from awx.api.views.organization import ( # noqa
OrganizationList,
OrganizationDetail,
OrganizationInventoriesList,
OrganizationUsersList,
OrganizationAdminsList,
OrganizationProjectsList,
OrganizationWorkflowJobTemplatesList,
OrganizationTeamsList,
OrganizationActivityStreamList,
OrganizationNotificationTemplatesList,
OrganizationNotificationTemplatesAnyList,
OrganizationNotificationTemplatesErrorList,
OrganizationNotificationTemplatesSuccessList,
OrganizationInstanceGroupsList,
OrganizationAccessList,
OrganizationObjectRolesList,
)
from awx.api.views.inventory import ( # noqa
InventoryList,
InventoryDetail,
InventoryUpdateEventsList,
InventoryScriptList,
InventoryScriptDetail,
InventoryScriptObjectRolesList,
InventoryScriptCopy,
InventoryList,
InventoryDetail,
InventoryActivityStreamList,
InventoryInstanceGroupsList,
InventoryAccessList,
InventoryObjectRolesList,
InventoryJobTemplateList,
InventoryCopy,
)
from awx.api.views.root import ( # noqa
ApiRootView,
ApiOAuthAuthorizationRootView,
ApiVersionRootView,
ApiV1RootView,
ApiV2RootView,
ApiV1PingView,
ApiV1ConfigView,
)
logger = logging.getLogger('awx.api.views')
@@ -118,244 +161,6 @@ def api_exception_handler(exc, context):
return exception_handler(exc, context)
class ApiRootView(APIView):
permission_classes = (AllowAny,)
view_name = _('REST API')
versioning_class = None
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
''' List supported API versions '''
v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v1 = v1, v2 = v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
if feature_enabled('rebranding'):
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
return Response(data)
class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,)
view_name = _("API OAuth 2 Authorization Root")
versioning_class = None
swagger_topic = 'Authentication'
def get(self, request, format=None):
data = OrderedDict()
data['authorize'] = drf_reverse('api:authorize')
data['token'] = drf_reverse('api:token')
data['revoke_token'] = drf_reverse('api:revoke-token')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
def get(self, request, format=None):
''' List top level resources '''
data = OrderedDict()
data['ping'] = reverse('api:api_v1_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['config'] = reverse('api:api_v1_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
data['dashboard'] = reverse('api:dashboard_view', request=request)
data['organizations'] = reverse('api:organization_list', request=request)
data['users'] = reverse('api:user_list', request=request)
data['projects'] = reverse('api:project_list', request=request)
data['project_updates'] = reverse('api:project_update_list', request=request)
data['teams'] = reverse('api:team_list', request=request)
data['credentials'] = reverse('api:credential_list', request=request)
if get_request_version(request) > 1:
data['credential_types'] = reverse('api:credential_type_list', request=request)
data['applications'] = reverse('api:o_auth2_application_list', request=request)
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['inventory_scripts'] = reverse('api:inventory_script_list', request=request)
data['inventory_sources'] = reverse('api:inventory_source_list', request=request)
data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['job_events'] = reverse('api:job_event_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
data['system_job_templates'] = reverse('api:system_job_template_list', request=request)
data['system_jobs'] = reverse('api:system_job_list', request=request)
data['schedules'] = reverse('api:schedule_list', request=request)
data['roles'] = reverse('api:role_list', request=request)
data['notification_templates'] = reverse('api:notification_template_list', request=request)
data['notifications'] = reverse('api:notification_list', request=request)
data['labels'] = reverse('api:label_list', request=request)
data['unified_job_templates'] = reverse('api:unified_job_template_list', request=request)
data['unified_jobs'] = reverse('api:unified_job_list', request=request)
data['activity_stream'] = reverse('api:activity_stream_list', request=request)
data['workflow_job_templates'] = reverse('api:workflow_job_template_list', request=request)
data['workflow_jobs'] = reverse('api:workflow_job_list', request=request)
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
return Response(data)
class ApiV1RootView(ApiVersionRootView):
view_name = _('Version 1')
class ApiV2RootView(ApiVersionRootView):
view_name = _('Version 2')
class ApiV1PingView(APIView):
"""A simple view that reports very basic information about this
instance, which is acceptable to be public information.
"""
permission_classes = (AllowAny,)
authentication_classes = ()
view_name = _('Ping')
swagger_topic = 'System Configuration'
def get(self, request, format=None):
"""Return some basic information about this instance
Everything returned here should be considered public / insecure, as
this requires no auth and is intended for use by the installer process.
"""
response = {
'ha': is_ha_environment(),
'version': get_awx_version(),
'active_node': settings.CLUSTER_HOST_ID,
}
response['instances'] = []
for instance in Instance.objects.all():
response['instances'].append(dict(node=instance.hostname, heartbeat=instance.modified,
capacity=instance.capacity, version=instance.version))
response['instances'].sort()
response['instance_groups'] = []
for instance_group in InstanceGroup.objects.all():
response['instance_groups'].append(dict(name=instance_group.name,
capacity=instance_group.capacity,
instances=[x.hostname for x in instance_group.instances.all()]))
return Response(response)
class ApiV1ConfigView(APIView):
permission_classes = (IsAuthenticated,)
view_name = _('Configuration')
swagger_topic = 'System Configuration'
def check_permissions(self, request):
super(ApiV1ConfigView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
self.permission_denied(request) # Raises PermissionDenied exception.
def get(self, request, format=None):
'''Return various sitewide configuration settings'''
if request.user.is_superuser or request.user.is_system_auditor:
license_data = get_license(show_key=True)
else:
license_data = get_license(show_key=False)
if not license_data.get('valid_key', False):
license_data = {}
if license_data and 'features' in license_data and 'activity_streams' in license_data['features']:
# FIXME: Make the final setting value dependent on the feature?
license_data['features']['activity_streams'] &= settings.ACTIVITY_STREAM_ENABLED
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
version=get_awx_version(),
ansible_version=get_ansible_version(),
eula=render_to_string("eula.md") if license_data.get('license_type', 'UNLICENSED') != 'open' else '',
analytics_status=pendo_state
)
# If LDAP is enabled, user_ldap_fields will return a list of field
# names that are managed by LDAP and should be read-only for users with
# a non-empty ldap_dn attribute.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None) and feature_enabled('ldap'):
user_ldap_fields = ['username', 'password']
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
data['user_ldap_fields'] = user_ldap_fields
if request.user.is_superuser \
or request.user.is_system_auditor \
or Organization.accessible_objects(request.user, 'admin_role').exists() \
or Organization.accessible_objects(request.user, 'auditor_role').exists():
data.update(dict(
project_base_dir = settings.PROJECTS_ROOT,
project_local_paths = Project.get_local_path_choices(),
custom_virtualenvs = get_custom_venv_choices()
))
elif JobTemplate.accessible_objects(request.user, 'admin_role').exists():
data['custom_virtualenvs'] = get_custom_venv_choices()
return Response(data)
def post(self, request):
if not isinstance(request.data, dict):
return Response({"error": _("Invalid license data")}, status=status.HTTP_400_BAD_REQUEST)
if "eula_accepted" not in request.data:
return Response({"error": _("Missing 'eula_accepted' property")}, status=status.HTTP_400_BAD_REQUEST)
try:
eula_accepted = to_python_boolean(request.data["eula_accepted"])
except ValueError:
return Response({"error": _("'eula_accepted' value is invalid")}, status=status.HTTP_400_BAD_REQUEST)
if not eula_accepted:
return Response({"error": _("'eula_accepted' must be True")}, status=status.HTTP_400_BAD_REQUEST)
request.data.pop("eula_accepted")
try:
data_actual = json.dumps(request.data)
except Exception:
logger.info(smart_text(u"Invalid JSON submitted for license."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid JSON")}, status=status.HTTP_400_BAD_REQUEST)
try:
from awx.main.utils.common import get_licenser
license_data = json.loads(data_actual)
license_data_validated = get_licenser(**license_data).validate()
except Exception:
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid License")}, status=status.HTTP_400_BAD_REQUEST)
# If the license is valid, write it to the database.
if license_data_validated['valid_key']:
settings.LICENSE = license_data
settings.TOWER_URL_BASE = "{}://{}".format(request.scheme, request.get_host())
return Response(license_data_validated)
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid license")}, status=status.HTTP_400_BAD_REQUEST)
def delete(self, request):
try:
settings.LICENSE = {}
return Response(status=status.HTTP_204_NO_CONTENT)
except Exception:
# FIX: Log
return Response({"error": _("Failed to remove license (%s)") % has_error}, status=status.HTTP_400_BAD_REQUEST)
class DashboardView(APIView):
view_name = _("Dashboard")
@@ -744,213 +549,6 @@ class AuthView(APIView):
return Response(data)
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_queryset(self):
qs = Organization.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role')
qs = qs.prefetch_related('created_by', 'modified_by')
return qs
def create(self, request, *args, **kwargs):
"""Create a new organzation.
If there is already an organization and the license of this
instance does not permit multiple organizations, then raise
LicenseForbids.
"""
# Sanity check: If the multiple organizations feature is disallowed
# by the license, then we are only willing to create this organization
# if no organizations exist in the system.
if (not feature_enabled('multiple_organizations') and
self.model.objects.exists()):
raise LicenseForbids(_('Your license only permits a single '
'organization to exist.'))
# Okay, create the organization as usual.
return super(OrganizationList, self).create(request, *args, **kwargs)
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_serializer_context(self, *args, **kwargs):
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
if not hasattr(self, 'kwargs') or 'pk' not in self.kwargs:
return full_context
org_id = int(self.kwargs['pk'])
org_counts = {}
access_kwargs = {'accessor': self.request.user, 'role_field': 'read_role'}
direct_counts = Organization.objects.filter(id=org_id).annotate(
users=Count('member_role__members', distinct=True),
admins=Count('admin_role__members', distinct=True)
).values('users', 'admins')
if not direct_counts:
return full_context
org_counts = direct_counts[0]
org_counts['inventories'] = Inventory.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['teams'] = Team.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['projects'] = Project.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
project__organization__id=org_id).count()
full_context['related_field_counts'] = {}
full_context['related_field_counts'][org_id] = org_counts
return full_context
class OrganizationInventoriesList(SubListAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Organization
relationship = 'inventories'
class BaseUsersList(SubListCreateAttachDetachAPIView):
def post(self, request, *args, **kwargs):
ret = super(BaseUsersList, self).post( request, *args, **kwargs)
if ret.status_code != 201:
return ret
try:
if ret.data is not None and request.data.get('is_system_auditor', False):
# This is a faux-field that just maps to checking the system
# auditor role member list.. unfortunately this means we can't
# set it on creation, and thus needs to be set here.
user = User.objects.get(id=ret.data['id'])
user.is_system_auditor = request.data['is_system_auditor']
ret.data['is_system_auditor'] = request.data['is_system_auditor']
except AttributeError as exc:
print(exc)
pass
return ret
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'member_role.members'
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'admin_role.members'
class OrganizationProjectsList(SubListCreateAttachDetachAPIView):
model = Project
serializer_class = ProjectSerializer
parent_model = Organization
relationship = 'projects'
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):
model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
relationship = 'workflows'
parent_key = 'organization'
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
parent_model = Organization
relationship = 'teams'
parent_key = 'organization'
class OrganizationActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Organization
relationship = 'activitystream_set'
search_fields = ('changes',)
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates'
parent_key = 'organization'
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_any'
class OrganizationNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_error'
class OrganizationNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_success'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Organization
relationship = 'instance_groups'
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists's
parent_model = Organization
class OrganizationObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class TeamList(ListCreateAPIView):
model = Team
@@ -1253,18 +851,6 @@ class SystemJobEventsList(SubListAPIView):
return super(SystemJobEventsList, self).finalize_response(request, response, *args, **kwargs)
class InventoryUpdateEventsList(SubListAPIView):
model = InventoryUpdateEvent
serializer_class = InventoryUpdateEventSerializer
parent_model = InventoryUpdate
relationship = 'inventory_update_events'
view_name = _('Inventory Update Events List')
search_fields = ('stdout',)
def finalize_response(self, request, response, *args, **kwargs):
response['X-UI-Max-Events'] = settings.MAX_UI_JOB_EVENTS
return super(InventoryUpdateEventsList, self).finalize_response(request, response, *args, **kwargs)
class ProjectUpdateCancel(RetrieveAPIView):
@@ -1822,177 +1408,6 @@ class CredentialCopy(CopyAPIView):
copy_return_serializer_class = CredentialSerializer
class InventoryScriptList(ListCreateAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
class InventoryScriptDetail(RetrieveUpdateDestroyAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
can_delete = request.user.can_access(self.model, 'delete', instance)
if not can_delete:
raise PermissionDenied(_("Cannot delete inventory script."))
for inv_src in InventorySource.objects.filter(source_script=instance):
inv_src.source_script = None
inv_src.save()
return super(InventoryScriptDetail, self).destroy(request, *args, **kwargs)
class InventoryScriptObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = CustomInventoryScript
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryScriptCopy(CopyAPIView):
model = CustomInventoryScript
copy_return_serializer_class = CustomInventoryScriptSerializer
class InventoryList(ListCreateAPIView):
model = Inventory
serializer_class = InventorySerializer
def get_queryset(self):
qs = Inventory.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'read_role', 'update_role', 'use_role', 'adhoc_role')
qs = qs.prefetch_related('created_by', 'modified_by', 'organization')
return qs
class ControlledByScmMixin(object):
'''
Mixin that resets the SCM inventory commit hash
if anything that it manages changes.
'''
def _reset_inv_src_rev(self, obj):
if self.request.method in SAFE_METHODS or not obj:
return
project_following_sources = obj.inventory_sources.filter(
update_on_project_update=True, source='scm')
if project_following_sources:
# Allow inventory changes unrelated to variables
if self.model == Inventory and (
not self.request or not self.request.data or
parse_yaml_or_json(self.request.data.get('variables', '')) == parse_yaml_or_json(obj.variables)):
return
project_following_sources.update(scm_last_revision='')
def get_object(self):
obj = super(ControlledByScmMixin, self).get_object()
self._reset_inv_src_rev(obj)
return obj
def get_parent_object(self):
obj = super(ControlledByScmMixin, self).get_parent_object()
self._reset_inv_src_rev(obj)
return obj
class InventoryDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventoryDetailSerializer
def update(self, request, *args, **kwargs):
obj = self.get_object()
kind = self.request.data.get('kind') or kwargs.get('kind')
# Do not allow changes to an Inventory kind.
if kind is not None and obj.kind != kind:
return self.http_method_not_allowed(request, *args, **kwargs)
return super(InventoryDetail, self).update(request, *args, **kwargs)
def destroy(self, request, *args, **kwargs):
obj = self.get_object()
if not request.user.can_access(self.model, 'delete', obj):
raise PermissionDenied()
self.check_related_active_jobs(obj) # related jobs mixin
try:
obj.schedule_deletion(getattr(request.user, 'id', None))
return Response(status=status.HTTP_202_ACCEPTED)
except RuntimeError as e:
return Response(dict(error=_("{0}".format(e))), status=status.HTTP_400_BAD_REQUEST)
class InventoryActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Inventory
relationship = 'activitystream_set'
search_fields = ('changes',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(Q(inventory=parent) | Q(host__in=parent.hosts.all()) | Q(group__in=parent.groups.all()))
class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Inventory
relationship = 'instance_groups'
class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists
parent_model = Inventory
class InventoryObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryJobTemplateList(SubListAPIView):
model = JobTemplate
serializer_class = JobTemplateSerializer
parent_model = Inventory
relationship = 'jobtemplates'
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(inventory=parent)
class InventoryCopy(CopyAPIView):
model = Inventory
copy_return_serializer_class = InventorySerializer
class HostRelatedSearchMixin(object):
@property
@@ -2130,6 +1545,7 @@ class HostFactVersionsList(SystemTrackingEnforcementMixin, ParentMixin, ListAPIV
serializer_class = FactVersionSerializer
parent_model = Host
search_fields = ('facts',)
deprecated = True
def get_queryset(self):
from_spec = self.request.query_params.get('from', None)
@@ -2155,6 +1571,7 @@ class HostFactCompareView(SystemTrackingEnforcementMixin, SubDetailAPIView):
model = Fact
parent_model = Host
serializer_class = FactSerializer
deprecated = True
def retrieve(self, request, *args, **kwargs):
datetime_spec = request.query_params.get('datetime', None)
@@ -2180,7 +1597,15 @@ class HostInsights(GenericAPIView):
def _get_insights(self, url, username, password):
session = requests.Session()
session.auth = requests.auth.HTTPBasicAuth(username, password)
headers = {'Content-Type': 'application/json'}
license = get_license(show_key=False).get('license_type', 'UNLICENSED')
headers = {
'Content-Type': 'application/json',
'User-Agent': '{} {} ({})'.format(
'AWX' if license == 'open' else 'Red Hat Ansible Tower',
get_awx_version(),
license
)
}
return session.get(url, headers=headers, timeout=120)
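With this change every Insights request identifies the installation in its User-Agent, e.g. 'AWX 2.1.1 (open)' for an open license. A quick preview of the format (the version string below is a stand-in for get_awx_version()):

license = 'open'  # or e.g. 'enterprise'
user_agent = '{} {} ({})'.format(
    'AWX' if license == 'open' else 'Red Hat Ansible Tower',
    '2.1.1',  # stand-in for get_awx_version()
    license,
)
# -> 'AWX 2.1.1 (open)'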
def get_insights(self, url, username, password):
@@ -2643,6 +2068,14 @@ class InventorySourceHostsList(HostRelatedSearchMixin, SubListDestroyAPIView):
relationship = 'hosts'
check_sub_obj_permission = False
def perform_list_destroy(self, instance_list):
# The activity stream doesn't record disassociation here anyway,
# so there is no signals-related reason not to bulk-delete
Host.groups.through.objects.filter(
host__inventory_sources=self.get_parent_object()
).delete()
return super(InventorySourceHostsList, self).perform_list_destroy(instance_list)
class InventorySourceGroupsList(SubListDestroyAPIView):
@@ -2652,6 +2085,13 @@ class InventorySourceGroupsList(SubListDestroyAPIView):
relationship = 'groups'
check_sub_obj_permission = False
def perform_list_destroy(self, instance_list):
# Same rationale for bulk-deleting as with the hosts list above
Group.hosts.through.objects.filter(
group__inventory_sources=self.get_parent_object()
).delete()
return super(InventorySourceGroupsList, self).perform_list_destroy(instance_list)
class InventorySourceUpdatesList(SubListAPIView):
@@ -3515,35 +2955,29 @@ class WorkflowJobTemplateNodeChildrenBaseList(WorkflowsEnforcementMixin, Enforce
if created:
return None
workflow_nodes = parent.workflow_job_template.workflow_job_template_nodes.all().\
prefetch_related('success_nodes', 'failure_nodes', 'always_nodes')
graph = {}
for workflow_node in workflow_nodes:
graph[workflow_node.pk] = dict(node_object=workflow_node, metadata={'parent': None, 'traversed': False})
if parent.id == sub.id:
return {"Error": _("Cycle detected.")}
find = False
for node_type in ['success_nodes', 'failure_nodes', 'always_nodes']:
for workflow_node in workflow_nodes:
parent_node = graph[workflow_node.pk]
related_nodes = getattr(parent_node['node_object'], node_type).all()
for related_node in related_nodes:
sub_node = graph[related_node.pk]
sub_node['metadata']['parent'] = parent_node
if not find and parent == workflow_node and sub == related_node and self.relationship == node_type:
find = True
if not find:
sub_node = graph[sub.pk]
parent_node = graph[parent.pk]
if sub_node['metadata']['parent'] is not None:
return {"Error": _("Multiple parent relationship not allowed.")}
sub_node['metadata']['parent'] = parent_node
iter_node = sub_node
while iter_node is not None:
if iter_node['metadata']['traversed']:
return {"Error": _("Cycle detected.")}
iter_node['metadata']['traversed'] = True
iter_node = iter_node['metadata']['parent']
'''
Look for a parent->child connection in every relationship except the one being
added, because it is OK to re-add the same relationship
'''
relationships = ['success_nodes', 'failure_nodes', 'always_nodes']
relationships.remove(self.relationship)
qs = functools.reduce(lambda x, y: (x | y),
(Q(**{'{}__in'.format(rel): [sub.id]}) for rel in relationships))
if WorkflowJobTemplateNode.objects.filter(Q(pk=parent.id) & qs).exists():
return {"Error": _("Relationship not allowed.")}
parent_node_type_relationship = getattr(parent, self.relationship)
parent_node_type_relationship.add(sub)
graph = WorkflowDAG(parent.workflow_job_template)
if graph.has_cycle():
parent_node_type_relationship.remove(sub)
return {"Error": _("Cycle detected.")}
parent_node_type_relationship.remove(sub)
return None
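The functools.reduce above simply ORs together one Q filter per remaining relationship type; a minimal sketch of the same one-edge-between-two-nodes check in isolation (parent_id and child_id are hypothetical node pks):

import functools
from django.db.models import Q
from awx.main.models import WorkflowJobTemplateNode

parent_id, child_id = 1, 2  # hypothetical
relationships = ['failure_nodes', 'always_nodes']  # e.g. after removing 'success_nodes'
qs = functools.reduce(
    lambda x, y: x | y,
    (Q(**{'{}__in'.format(rel): [child_id]}) for rel in relationships),
)
# True -> the parent already reaches the child through another relationship type
WorkflowJobTemplateNode.objects.filter(Q(pk=parent_id) & qs).exists()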
@@ -3671,23 +3105,31 @@ class WorkflowJobTemplateLaunch(WorkflowsEnforcementMixin, RetrieveAPIView):
extra_vars.setdefault(v, u'')
if extra_vars:
data['extra_vars'] = extra_vars
if obj.ask_inventory_on_launch:
data['inventory'] = obj.inventory_id
else:
data.pop('inventory', None)
return data
def post(self, request, *args, **kwargs):
obj = self.get_object()
if 'inventory_id' in request.data:
request.data['inventory'] = request.data['inventory_id']
serializer = self.serializer_class(instance=obj, data=request.data)
if not serializer.is_valid():
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
prompted_fields, ignored_fields, errors = obj._accept_or_ignore_job_kwargs(**request.data)
if not request.user.can_access(JobLaunchConfig, 'add', serializer.validated_data, template=obj):
raise PermissionDenied()
new_job = obj.create_unified_job(**prompted_fields)
new_job = obj.create_unified_job(**serializer.validated_data)
new_job.signal_start()
data = OrderedDict()
data['workflow_job'] = new_job.id
data['ignored_fields'] = ignored_fields
data['ignored_fields'] = serializer._ignored_fields
data.update(WorkflowJobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job))
headers = {'Location': new_job.get_absolute_url(request)}
return Response(data, status=status.HTTP_201_CREATED, headers=headers)
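For reference, launching a workflow job template that prompts for inventory could look like this from the API client side (host, token, and ids are illustrative):

import requests

resp = requests.post(
    'https://awx.example.com/api/v2/workflow_job_templates/7/launch/',
    headers={'Authorization': 'Bearer <token>'},
    json={'inventory': 3, 'extra_vars': {'region': 'us-east-1'}},
)
resp.raise_for_status()
result = resp.json()
print(result['workflow_job'], result['ignored_fields'])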
@@ -3711,8 +3153,11 @@ class WorkflowJobRelaunch(WorkflowsEnforcementMixin, GenericAPIView):
def post(self, request, *args, **kwargs):
obj = self.get_object()
if obj.is_sliced_job and not obj.job_template_id:
raise ParseError(_('Cannot relaunch slice workflow job orphaned from job template.'))
if obj.is_sliced_job:
if not obj.job_template_id:
raise ParseError(_('Cannot relaunch slice workflow job orphaned from job template.'))
elif obj.job_template.job_slice_count != obj.workflow_nodes.count():
raise ParseError(_('Cannot relaunch sliced workflow job after slice count has changed.'))
new_workflow_job = obj.create_relaunch_workflow_job()
new_workflow_job.signal_start()
@@ -3849,8 +3294,7 @@ class WorkflowJobCancel(WorkflowsEnforcementMixin, RetrieveAPIView):
obj = self.get_object()
if obj.can_cancel:
obj.cancel()
# TODO: Figure out whether an immediate schedule is needed.
run_job_complete.delay(obj.id)
schedule_task_manager()
return Response(status=status.HTTP_202_ACCEPTED)
else:
return self.http_method_not_allowed(request, *args, **kwargs)
@@ -4175,6 +3619,11 @@ class JobRelaunch(RetrieveAPIView):
'Cannot relaunch because previous job had 0 {status_value} hosts.'
).format(status_value=retry_hosts)}, status=status.HTTP_400_BAD_REQUEST)
copy_kwargs['limit'] = ','.join(retry_host_list)
limit_length = len(copy_kwargs['limit'])
if limit_length > 1024:
return Response({'limit': _(
'Cannot relaunch because the limit length {limit_length} exceeds the max of {limit_max}.'
).format(limit_length=limit_length, limit_max=1024)}, status=status.HTTP_400_BAD_REQUEST)
new_job = obj.copy_unified_job(**copy_kwargs)
result = new_job.signal_start(**serializer.validated_data['credential_passwords'])
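Since the limit is just the retry hosts joined with commas, the new guard bounds relaunches by total host-name length; in miniature (host names hypothetical):

retry_host_list = ['web01.example.com', 'web02.example.com']
limit = ','.join(retry_host_list)
if len(limit) > 1024:
    raise ValueError('limit length {} exceeds the max of 1024'.format(len(limit)))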

awx/api/views/inventory.py Normal file

@@ -0,0 +1,211 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.conf import settings
from django.db.models import Q
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
# AWX
from awx.main.models import (
ActivityStream,
Inventory,
JobTemplate,
Role,
User,
InstanceGroup,
InventoryUpdateEvent,
InventoryUpdate,
InventorySource,
CustomInventoryScript,
)
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
CopyAPIView,
)
from awx.api.serializers import (
InventorySerializer,
ActivityStreamSerializer,
RoleSerializer,
InstanceGroupSerializer,
InventoryUpdateEventSerializer,
CustomInventoryScriptSerializer,
InventoryDetailSerializer,
JobTemplateSerializer,
)
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
RelatedJobsPreventDeleteMixin,
ControlledByScmMixin,
)
logger = logging.getLogger('awx.api.views.inventory')
class InventoryUpdateEventsList(SubListAPIView):
model = InventoryUpdateEvent
serializer_class = InventoryUpdateEventSerializer
parent_model = InventoryUpdate
relationship = 'inventory_update_events'
view_name = _('Inventory Update Events List')
search_fields = ('stdout',)
def finalize_response(self, request, response, *args, **kwargs):
response['X-UI-Max-Events'] = settings.MAX_UI_JOB_EVENTS
return super(InventoryUpdateEventsList, self).finalize_response(request, response, *args, **kwargs)
class InventoryScriptList(ListCreateAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
class InventoryScriptDetail(RetrieveUpdateDestroyAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
can_delete = request.user.can_access(self.model, 'delete', instance)
if not can_delete:
raise PermissionDenied(_("Cannot delete inventory script."))
for inv_src in InventorySource.objects.filter(source_script=instance):
inv_src.source_script = None
inv_src.save()
return super(InventoryScriptDetail, self).destroy(request, *args, **kwargs)
class InventoryScriptObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = CustomInventoryScript
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryScriptCopy(CopyAPIView):
model = CustomInventoryScript
copy_return_serializer_class = CustomInventoryScriptSerializer
class InventoryList(ListCreateAPIView):
model = Inventory
serializer_class = InventorySerializer
def get_queryset(self):
qs = Inventory.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'read_role', 'update_role', 'use_role', 'adhoc_role')
qs = qs.prefetch_related('created_by', 'modified_by', 'organization')
return qs
class InventoryDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventoryDetailSerializer
def update(self, request, *args, **kwargs):
obj = self.get_object()
kind = self.request.data.get('kind') or kwargs.get('kind')
# Do not allow changes to an Inventory kind.
if kind is not None and obj.kind != kind:
return self.http_method_not_allowed(request, *args, **kwargs)
return super(InventoryDetail, self).update(request, *args, **kwargs)
def destroy(self, request, *args, **kwargs):
obj = self.get_object()
if not request.user.can_access(self.model, 'delete', obj):
raise PermissionDenied()
self.check_related_active_jobs(obj) # related jobs mixin
try:
obj.schedule_deletion(getattr(request.user, 'id', None))
return Response(status=status.HTTP_202_ACCEPTED)
except RuntimeError as e:
return Response(dict(error=_("{0}".format(e))), status=status.HTTP_400_BAD_REQUEST)
class InventoryActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Inventory
relationship = 'activitystream_set'
search_fields = ('changes',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(Q(inventory=parent) | Q(host__in=parent.hosts.all()) | Q(group__in=parent.groups.all()))
class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Inventory
relationship = 'instance_groups'
class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists
parent_model = Inventory
class InventoryObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryJobTemplateList(SubListAPIView):
model = JobTemplate
serializer_class = JobTemplateSerializer
parent_model = Inventory
relationship = 'jobtemplates'
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(inventory=parent)
class InventoryCopy(CopyAPIView):
model = Inventory
copy_return_serializer_class = InventorySerializer


@@ -13,12 +13,16 @@ from django.shortcuts import get_object_or_404
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
from rest_framework.permissions import SAFE_METHODS
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.utils import get_object_or_400
from awx.main.utils import (
get_object_or_400,
parse_yaml_or_json,
)
from awx.main.models.ha import (
Instance,
InstanceGroup,
@@ -273,3 +277,33 @@ class OrganizationCountsMixin(object):
full_context['related_field_counts'] = count_context
return full_context
class ControlledByScmMixin(object):
'''
Mixin that resets the SCM inventory commit hash
if anything that it manages changes.
'''
def _reset_inv_src_rev(self, obj):
if self.request.method in SAFE_METHODS or not obj:
return
project_following_sources = obj.inventory_sources.filter(
update_on_project_update=True, source='scm')
if project_following_sources:
# Allow inventory changes unrelated to variables
if self.model == Inventory and (
not self.request or not self.request.data or
parse_yaml_or_json(self.request.data.get('variables', '')) == parse_yaml_or_json(obj.variables)):
return
project_following_sources.update(scm_last_revision='')
def get_object(self):
obj = super(ControlledByScmMixin, self).get_object()
self._reset_inv_src_rev(obj)
return obj
def get_parent_object(self):
obj = super(ControlledByScmMixin, self).get_parent_object()
self._reset_inv_src_rev(obj)
return obj
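Note the variables comparison is structural rather than textual, so merely reformatting inventory variables does not clear scm_last_revision; e.g., assuming both spellings parse to plain dicts:

from awx.main.utils import parse_yaml_or_json

# YAML and JSON spellings of the same data compare equal once parsed
assert parse_yaml_or_json('a: 1') == parse_yaml_or_json('{"a": 1}') == {'a': 1}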


@@ -0,0 +1,247 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.db.models import Count
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# AWX
from awx.conf.license import (
feature_enabled,
LicenseForbids,
)
from awx.main.models import (
ActivityStream,
Inventory,
Project,
JobTemplate,
WorkflowJobTemplate,
Organization,
NotificationTemplate,
Role,
User,
Team,
InstanceGroup,
)
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListCreateAttachDetachAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
BaseUsersList,
)
from awx.api.serializers import (
OrganizationSerializer,
InventorySerializer,
ProjectSerializer,
UserSerializer,
TeamSerializer,
ActivityStreamSerializer,
RoleSerializer,
NotificationTemplateSerializer,
WorkflowJobTemplateSerializer,
InstanceGroupSerializer,
)
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
RelatedJobsPreventDeleteMixin,
OrganizationCountsMixin,
)
logger = logging.getLogger('awx.api.views.organization')
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_queryset(self):
qs = Organization.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role')
qs = qs.prefetch_related('created_by', 'modified_by')
return qs
def create(self, request, *args, **kwargs):
"""Create a new organzation.
If there is already an organization and the license of this
instance does not permit multiple organizations, then raise
LicenseForbids.
"""
# Sanity check: If the multiple organizations feature is disallowed
# by the license, then we are only willing to create this organization
# if no organizations exist in the system.
if (not feature_enabled('multiple_organizations') and
self.model.objects.exists()):
raise LicenseForbids(_('Your license only permits a single '
'organization to exist.'))
# Okay, create the organization as usual.
return super(OrganizationList, self).create(request, *args, **kwargs)
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_serializer_context(self, *args, **kwargs):
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
if not hasattr(self, 'kwargs') or 'pk' not in self.kwargs:
return full_context
org_id = int(self.kwargs['pk'])
org_counts = {}
access_kwargs = {'accessor': self.request.user, 'role_field': 'read_role'}
direct_counts = Organization.objects.filter(id=org_id).annotate(
users=Count('member_role__members', distinct=True),
admins=Count('admin_role__members', distinct=True)
).values('users', 'admins')
if not direct_counts:
return full_context
org_counts = direct_counts[0]
org_counts['inventories'] = Inventory.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['teams'] = Team.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['projects'] = Project.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
project__organization__id=org_id).count()
full_context['related_field_counts'] = {}
full_context['related_field_counts'][org_id] = org_counts
return full_context
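The serializer context then carries one counts entry keyed by the organization's pk, shaped roughly like this (numbers illustrative):

related_field_counts = {
    42: {  # organization pk
        'users': 10, 'admins': 2, 'inventories': 3,
        'teams': 1, 'projects': 4, 'job_templates': 7,
    },
}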
class OrganizationInventoriesList(SubListAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Organization
relationship = 'inventories'
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'member_role.members'
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'admin_role.members'
class OrganizationProjectsList(SubListCreateAttachDetachAPIView):
model = Project
serializer_class = ProjectSerializer
parent_model = Organization
relationship = 'projects'
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):
model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
relationship = 'workflows'
parent_key = 'organization'
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
parent_model = Organization
relationship = 'teams'
parent_key = 'organization'
class OrganizationActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Organization
relationship = 'activitystream_set'
search_fields = ('changes',)
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates'
parent_key = 'organization'
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_any'
class OrganizationNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_error'
class OrganizationNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_success'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Organization
relationship = 'instance_groups'
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists
parent_model = Organization
class OrganizationObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)

awx/api/views/root.py Normal file

@@ -0,0 +1,278 @@
# Copyright (c) 2018 Ansible, Inc.
# All Rights Reserved.
import logging
import json
from collections import OrderedDict
from django.conf import settings
from django.utils.encoding import smart_text
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
from rest_framework import status
from awx.api.generics import APIView
from awx.main.ha import is_ha_environment
from awx.main.utils import (
get_awx_version,
get_ansible_version,
get_custom_venv_choices,
to_python_boolean,
)
from awx.api.versioning import reverse, get_request_version, drf_reverse
from awx.conf.license import get_license, feature_enabled
from awx.main.models import (
Project,
Organization,
Instance,
InstanceGroup,
JobTemplate,
)
logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView):
permission_classes = (AllowAny,)
view_name = _('REST API')
versioning_class = None
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
''' List supported API versions '''
v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v1 = v1, v2 = v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
if feature_enabled('rebranding'):
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
return Response(data)
class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,)
view_name = _("API OAuth 2 Authorization Root")
versioning_class = None
swagger_topic = 'Authentication'
def get(self, request, format=None):
data = OrderedDict()
data['authorize'] = drf_reverse('api:authorize')
data['token'] = drf_reverse('api:token')
data['revoke_token'] = drf_reverse('api:revoke-token')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
def get(self, request, format=None):
''' List top level resources '''
data = OrderedDict()
data['ping'] = reverse('api:api_v1_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['config'] = reverse('api:api_v1_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
data['dashboard'] = reverse('api:dashboard_view', request=request)
data['organizations'] = reverse('api:organization_list', request=request)
data['users'] = reverse('api:user_list', request=request)
data['projects'] = reverse('api:project_list', request=request)
data['project_updates'] = reverse('api:project_update_list', request=request)
data['teams'] = reverse('api:team_list', request=request)
data['credentials'] = reverse('api:credential_list', request=request)
if get_request_version(request) > 1:
data['credential_types'] = reverse('api:credential_type_list', request=request)
data['applications'] = reverse('api:o_auth2_application_list', request=request)
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['inventory_scripts'] = reverse('api:inventory_script_list', request=request)
data['inventory_sources'] = reverse('api:inventory_source_list', request=request)
data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['job_events'] = reverse('api:job_event_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
data['system_job_templates'] = reverse('api:system_job_template_list', request=request)
data['system_jobs'] = reverse('api:system_job_list', request=request)
data['schedules'] = reverse('api:schedule_list', request=request)
data['roles'] = reverse('api:role_list', request=request)
data['notification_templates'] = reverse('api:notification_template_list', request=request)
data['notifications'] = reverse('api:notification_list', request=request)
data['labels'] = reverse('api:label_list', request=request)
data['unified_job_templates'] = reverse('api:unified_job_template_list', request=request)
data['unified_jobs'] = reverse('api:unified_job_list', request=request)
data['activity_stream'] = reverse('api:activity_stream_list', request=request)
data['workflow_job_templates'] = reverse('api:workflow_job_template_list', request=request)
data['workflow_jobs'] = reverse('api:workflow_job_list', request=request)
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
return Response(data)
class ApiV1RootView(ApiVersionRootView):
view_name = _('Version 1')
class ApiV2RootView(ApiVersionRootView):
view_name = _('Version 2')
class ApiV1PingView(APIView):
"""A simple view that reports very basic information about this
instance, which is acceptable to be public information.
"""
permission_classes = (AllowAny,)
authentication_classes = ()
view_name = _('Ping')
swagger_topic = 'System Configuration'
def get(self, request, format=None):
"""Return some basic information about this instance
Everything returned here should be considered public / insecure, as
this requires no auth and is intended for use by the installer process.
"""
response = {
'ha': is_ha_environment(),
'version': get_awx_version(),
'active_node': settings.CLUSTER_HOST_ID,
}
response['instances'] = []
for instance in Instance.objects.all():
response['instances'].append(dict(node=instance.hostname, heartbeat=instance.modified,
capacity=instance.capacity, version=instance.version))
response['instances'].sort()
response['instance_groups'] = []
for instance_group in InstanceGroup.objects.all():
response['instance_groups'].append(dict(name=instance_group.name,
capacity=instance_group.capacity,
instances=[x.hostname for x in instance_group.instances.all()]))
return Response(response)
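An unauthenticated ping therefore returns a payload shaped roughly like this (values illustrative):

ping_response = {
    'ha': False,
    'version': '2.1.1',
    'active_node': 'awx-1',
    'instances': [
        {'node': 'awx-1', 'heartbeat': '2018-11-30T00:00:00Z',
         'capacity': 100, 'version': '2.1.1'},
    ],
    'instance_groups': [
        {'name': 'tower', 'capacity': 100, 'instances': ['awx-1']},
    ],
}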
class ApiV1ConfigView(APIView):
permission_classes = (IsAuthenticated,)
view_name = _('Configuration')
swagger_topic = 'System Configuration'
def check_permissions(self, request):
super(ApiV1ConfigView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
self.permission_denied(request) # Raises PermissionDenied exception.
def get(self, request, format=None):
'''Return various sitewide configuration settings'''
if request.user.is_superuser or request.user.is_system_auditor:
license_data = get_license(show_key=True)
else:
license_data = get_license(show_key=False)
if not license_data.get('valid_key', False):
license_data = {}
if license_data and 'features' in license_data and 'activity_streams' in license_data['features']:
# FIXME: Make the final setting value dependent on the feature?
license_data['features']['activity_streams'] &= settings.ACTIVITY_STREAM_ENABLED
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
version=get_awx_version(),
ansible_version=get_ansible_version(),
eula=render_to_string("eula.md") if license_data.get('license_type', 'UNLICENSED') != 'open' else '',
analytics_status=pendo_state
)
# If LDAP is enabled, user_ldap_fields will return a list of field
# names that are managed by LDAP and should be read-only for users with
# a non-empty ldap_dn attribute.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None) and feature_enabled('ldap'):
user_ldap_fields = ['username', 'password']
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
data['user_ldap_fields'] = user_ldap_fields
if request.user.is_superuser \
or request.user.is_system_auditor \
or Organization.accessible_objects(request.user, 'admin_role').exists() \
or Organization.accessible_objects(request.user, 'auditor_role').exists():
data.update(dict(
project_base_dir = settings.PROJECTS_ROOT,
project_local_paths = Project.get_local_path_choices(),
custom_virtualenvs = get_custom_venv_choices()
))
elif JobTemplate.accessible_objects(request.user, 'admin_role').exists():
data['custom_virtualenvs'] = get_custom_venv_choices()
return Response(data)
def post(self, request):
if not isinstance(request.data, dict):
return Response({"error": _("Invalid license data")}, status=status.HTTP_400_BAD_REQUEST)
if "eula_accepted" not in request.data:
return Response({"error": _("Missing 'eula_accepted' property")}, status=status.HTTP_400_BAD_REQUEST)
try:
eula_accepted = to_python_boolean(request.data["eula_accepted"])
except ValueError:
return Response({"error": _("'eula_accepted' value is invalid")}, status=status.HTTP_400_BAD_REQUEST)
if not eula_accepted:
return Response({"error": _("'eula_accepted' must be True")}, status=status.HTTP_400_BAD_REQUEST)
request.data.pop("eula_accepted")
try:
data_actual = json.dumps(request.data)
except Exception:
logger.info(smart_text(u"Invalid JSON submitted for license."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid JSON")}, status=status.HTTP_400_BAD_REQUEST)
try:
from awx.main.utils.common import get_licenser
license_data = json.loads(data_actual)
license_data_validated = get_licenser(**license_data).validate()
except Exception:
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid License")}, status=status.HTTP_400_BAD_REQUEST)
# If the license is valid, write it to the database.
if license_data_validated['valid_key']:
settings.LICENSE = license_data
settings.TOWER_URL_BASE = "{}://{}".format(request.scheme, request.get_host())
return Response(license_data_validated)
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid license")}, status=status.HTTP_400_BAD_REQUEST)
def delete(self, request):
try:
settings.LICENSE = {}
return Response(status=status.HTTP_204_NO_CONTENT)
except Exception:
# FIX: Log
return Response({"error": _("Failed to remove license.")}, status=status.HTTP_400_BAD_REQUEST)


@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
# AWX
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('conf', '0005_v330_rename_two_session_settings'),
]
operations = [
migrations.RunPython(fill_ldap_group_type_params),
]


@@ -0,0 +1,30 @@
import inspect
from django.conf import settings
from django.utils.timezone import now
def fill_ldap_group_type_params(apps, schema_editor):
group_type = settings.AUTH_LDAP_GROUP_TYPE
Setting = apps.get_model('conf', 'Setting')
group_type_params = {'name_attr': 'cn', 'member_attr': 'member'}
qs = Setting.objects.filter(key='AUTH_LDAP_GROUP_TYPE_PARAMS')
entry = None
if qs.exists():
entry = qs[0]
group_type_params = entry.value
else:
entry = Setting(key='AUTH_LDAP_GROUP_TYPE_PARAMS',
value=group_type_params,
created=now(),
modified=now())
init_attrs = set(inspect.getargspec(group_type.__init__).args[1:])
# Drop any params the configured group type's __init__ does not accept;
# copy the keys so we can delete from the dict while iterating (Python 3 safe).
for k in list(group_type_params.keys()):
if k not in init_attrs:
del group_type_params[k]
entry.value = group_type_params
entry.save()
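The net effect is to keep only kwargs that the configured group type's __init__ accepts; for a group type taking only name_attr (stand-in class below), member_attr would be dropped:

import inspect

class PosixGroupType(object):  # stand-in for the class behind settings.AUTH_LDAP_GROUP_TYPE
    def __init__(self, name_attr='cn'):
        self.name_attr = name_attr

params = {'name_attr': 'cn', 'member_attr': 'member'}
init_attrs = set(inspect.getargspec(PosixGroupType.__init__).args[1:])
params = {k: v for k, v in params.items() if k in init_attrs}
# -> {'name_attr': 'cn'}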


@@ -475,6 +475,15 @@ class BaseCallbackModule(CallbackBase):
with self.capture_event_data('runner_retry', **event_data):
super(BaseCallbackModule, self).v2_runner_retry(result)
def v2_runner_on_start(self, host, task):
event_data = dict(
host=host.get_name(),
task=task
)
with self.capture_event_data('runner_on_start', **event_data):
super(BaseCallbackModule, self).v2_runner_on_start(host, task)
class AWXDefaultCallbackModule(BaseCallbackModule, DefaultCallbackModule):

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -524,7 +524,7 @@ class UserAccess(BaseAccess):
# A user can be changed if they are themselves, or by org admins or
# superusers. Change permission implies changing only certain fields
# that a user should be able to edit for themselves.
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
return bool(self.user == obj or self.can_admin(obj, data))
@@ -577,7 +577,7 @@ class UserAccess(BaseAccess):
return False
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
# Reverse obj and sub_obj, defer to RoleAccess if this is a role assignment.
@@ -587,7 +587,7 @@ class UserAccess(BaseAccess):
return super(UserAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
if relationship == 'roles':
@@ -1157,13 +1157,10 @@ class TeamAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
"""Reverse obj and sub_obj, defer to RoleAccess if this is an assignment
of a resource role to the team."""
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# MANAGE_ORGANIZATION_AUTH setting checked in RoleAccess
if isinstance(sub_obj, Role):
if sub_obj.content_object is None:
raise PermissionDenied(_("The {} role cannot be assigned to a team").format(sub_obj.name))
elif isinstance(sub_obj.content_object, User):
raise PermissionDenied(_("The admin_role for a User cannot be assigned to a team"))
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1175,9 +1172,7 @@ class TeamAccess(BaseAccess):
*args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# MANAGE_ORGANIZATION_AUTH setting checked in RoleAccess
if isinstance(sub_obj, Role):
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1213,7 +1208,7 @@ class ProjectAccess(BaseAccess):
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.accessible_objects(self.user, 'project_admin_role').exists()
return (self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) and
self.check_related('credential', Credential, data, role_field='use_role'))
@@ -1840,8 +1835,10 @@ class WorkflowJobTemplateAccess(BaseAccess):
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
return self.check_related('organization', Organization, data, role_field='workflow_admin_role',
mandatory=True)
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and
self.check_related('inventory', Inventory, data, role_field='use_role')
)
def can_copy(self, obj):
if self.save_messages:
@@ -1895,8 +1892,11 @@ class WorkflowJobTemplateAccess(BaseAccess):
if self.user.is_superuser:
return True
return (self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.user in obj.admin_role)
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.check_related('inventory', Inventory, data, role_field='use_role', obj=obj) and
self.user in obj.admin_role
)
def can_delete(self, obj):
return self.user.is_superuser or self.user in obj.admin_role
@@ -1954,19 +1954,29 @@ class WorkflowJobAccess(BaseAccess):
if not template:
return False
# If job was launched by another user, it could have survey passwords
if obj.created_by_id != self.user.pk:
# Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
config = None
# Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
if self.save_messages:
self.messages['detail'] = _('Workflow Job was launched with unknown prompts.')
return False
if config is None or config.prompts_dict():
# Check access to the prompts, which may prevent relaunch
if config.prompts_dict():
if obj.created_by_id != self.user.pk:
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts provided by another user.')
return False
if not JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}):
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts you lack access to.')
return False
if config.has_unprompted(template):
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts no longer accepted.')
return False
# execute permission to WFJT is mandatory for any relaunch
return (self.user in template.execute_role)
@@ -2552,14 +2562,13 @@ class RoleAccess(BaseAccess):
# Unsupported for now
return False
def can_attach(self, obj, sub_obj, relationship, data,
skip_sub_obj_read_check=False):
return self.can_unattach(obj, sub_obj, relationship, data, skip_sub_obj_read_check)
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
return self.can_unattach(obj, sub_obj, relationship, *args, **kwargs)
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, data=None, skip_sub_obj_read_check=False):
if isinstance(obj.content_object, Team):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
if not skip_sub_obj_read_check and relationship in ['members', 'member_role.parents', 'parents']:


@@ -296,6 +296,9 @@ class AutoscalePool(WorkerPool):
# 5 workers per GB of total memory
self.max_workers = (total_memory_gb * 5)
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
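Under the 5-workers-per-GB rule a 4 GB host gets max_workers = 20; the max() clamp only matters on hosts so small that the product would fall below the floor. A worked example (the min_workers value here is assumed):

total_memory_gb = 4
min_workers = 4  # assumed floor; the real value comes from the pool's configuration
max_workers = max(min_workers, total_memory_gb * 5)
# -> 20; a host reporting total_memory_gb = 0 would be clamped up to 4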


@@ -11,6 +11,9 @@ logger = logging.getLogger('awx.main.dispatch')
def reap_job(j, status):
if UnifiedJob.objects.get(id=j.id).status not in ('running', 'waiting'):
# just in case, don't reap jobs that aren't running
return
j.status = status
j.start_args = '' # blank field to remove encrypted passwords
j.job_explanation += ' '.join((


@@ -7,6 +7,7 @@ import signal
from uuid import UUID
from Queue import Empty as QueueEmpty
from django import db
from kombu import Producer
from kombu.mixins import ConsumerMixin
@@ -128,6 +129,10 @@ class BaseWorker(object):
logger.error("Exception on worker {}, restarting: ".format(idx) + str(e))
continue
try:
for conn in db.connections.all():
# If the database connection has a hiccup during the prior message, close it
# so we can establish a new connection
conn.close_if_unusable_or_obsolete()
self.perform_work(body, *args)
finally:
if 'uuid' in body:
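Dropping unusable or obsolete connections before each message lets Django open a fresh one on the next ORM query instead of failing mid-save; the same defensive idiom in isolation:

from django import db

def refresh_db_connections():
    # Close connections that errored out or outlived CONN_MAX_AGE;
    # Django reconnects lazily on the next query.
    for conn in db.connections.all():
        conn.close_if_unusable_or_obsolete()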


@@ -1,7 +1,5 @@
import logging
import time
import os
import signal
import traceback
from django.conf import settings
@@ -110,8 +108,7 @@ class CallbackBrokerWorker(BaseWorker):
break
except (OperationalError, InterfaceError, InternalError):
if retries >= self.MAX_RETRIES:
logger.exception('Worker could not re-establish database connectivity, shutting down gracefully: Job {}'.format(job_identifier))
os.kill(os.getppid(), signal.SIGINT)
logger.exception('Worker could not re-establish database connectivity, giving up on event for Job {}'.format(job_identifier))
return
delay = 60 * retries
logger.exception('Database Error Saving Job Event, retry #{i} in {delay} seconds:'.format(
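The worker now retries the event save with a linearly growing delay and, after exhausting its retries, gives up on the single event instead of signaling the parent process to shut down. A condensed sketch of that loop (save_event and the retry cap are stand-ins):

import time

MAX_RETRIES = 3  # stand-in for self.MAX_RETRIES

def save_with_retries(save_event):
    for retries in range(1, MAX_RETRIES + 1):
        try:
            save_event()
            return True
        except Exception:
            if retries >= MAX_RETRIES:
                return False  # give up on this event; keep the worker alive
            time.sleep(60 * retries)  # 60s, then 120s, ...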


@@ -5,7 +5,6 @@ import sys
import traceback
import six
from django import db
from awx.main.tasks import dispatch_startup, inform_cluster_of_shutdown
@@ -75,10 +74,6 @@ class TaskWorker(BaseWorker):
'task': u'awx.main.tasks.RunProjectUpdate'
}
'''
for conn in db.connections.all():
# If the database connection has a hiccup during at task, close it
# so we can establish a new connection
conn.close_if_unusable_or_obsolete()
result = None
try:
result = self.run_callable(body)

awx/main/expect/.gitignore vendored Normal file

@@ -0,0 +1 @@
authorized_keys


@@ -3,6 +3,7 @@
# Python
import re
import sys
from dateutil.relativedelta import relativedelta
# Django
@@ -129,6 +130,7 @@ class Command(BaseCommand):
@transaction.atomic
def handle(self, *args, **options):
sys.stderr.write("This command has been deprecated and will be removed in a future release.\n")
if not feature_enabled('system_tracking'):
raise CommandError("The System Tracking feature is not enabled for your instance")
cleanup_facts = CleanupFacts()


@@ -2,7 +2,6 @@
# All Rights Reserved
import subprocess
import warnings
from django.db import transaction
from django.core.management.base import BaseCommand, CommandError
@@ -24,17 +23,10 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str,
help='Hostname used during provisioning')
parser.add_argument('--name', dest='name', type=str,
help='(PENDING DEPRECATION) Hostname used during provisioning')
@transaction.atomic
def handle(self, *args, **options):
# TODO: remove in 3.3
if options.get('name'):
warnings.warn("`--name` is deprecated in favor of `--hostname`, and will be removed in release 3.3.")
if options.get('hostname'):
raise CommandError("Cannot accept both --name and --hostname.")
options['hostname'] = options['name']
hostname = options.get('hostname')
if not hostname:
raise CommandError("--hostname is a required argument")


@@ -1,17 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
# Borrow from another AWX command
from awx.main.management.commands.deprovision_instance import Command as OtherCommand
# Python
import warnings
class Command(OtherCommand):
def handle(self, *args, **options):
# TODO: delete this entire file in 3.3
warnings.warn('This command is replaced with `deprovision_instance` and will '
'be removed in release 3.3.')
return super(Command, self).handle(*args, **options)


@@ -1,17 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
# Borrow from another AWX command
from awx.main.management.commands.provision_instance import Command as OtherCommand
# Python
import warnings
class Command(OtherCommand):
def handle(self, *args, **options):
# TODO: delete this entire file in 3.3
warnings.warn('This command is replaced with `provision_instance` and will '
'be removed in release 3.3.')
return super(Command, self).handle(*args, **options)


@@ -4,6 +4,7 @@
import sys
import time
import json
import random
from django.utils import timezone
from django.core.management.base import BaseCommand
@@ -26,7 +27,21 @@ from awx.api.serializers import (
)
class ReplayJobEvents():
class JobStatusLifeCycle():
def emit_job_status(self, job, status):
# {"status": "successful", "project_id": 13, "unified_job_id": 659, "group_name": "jobs"}
job.websocket_emit_status(status)
def determine_job_event_finish_status_index(self, job_event_count, random_seed):
if random_seed == 0:
return job_event_count - 1
random.seed(random_seed)
job_event_index = random.randint(0, job_event_count - 1)
return job_event_index
class ReplayJobEvents(JobStatusLifeCycle):
recording_start = None
replay_start = None
@@ -76,9 +91,10 @@ class ReplayJobEvents():
job_events = job.inventory_update_events.order_by('created')
elif type(job) is SystemJob:
job_events = job.system_job_events.order_by('created')
if job_events.count() == 0:
count = job_events.count()
if count == 0:
raise RuntimeError("No events for job id {}".format(job.id))
return job_events
return job_events, count
def get_serializer(self, job):
if type(job) is Job:
@@ -95,7 +111,7 @@ class ReplayJobEvents():
raise RuntimeError("Job is of type {} and replay is not yet supported.".format(type(job)))
sys.exit(1)
def run(self, job_id, speed=1.0, verbosity=0, skip_range=[]):
def run(self, job_id, speed=1.0, verbosity=0, skip_range=[], random_seed=0, final_status_delay=0, debug=False):
stats = {
'events_ontime': {
'total': 0,
@@ -119,17 +135,27 @@ class ReplayJobEvents():
}
try:
job = self.get_job(job_id)
job_events = self.get_job_events(job)
job_events, job_event_count = self.get_job_events(job)
serializer = self.get_serializer(job)
except RuntimeError as e:
print("{}".format(e.message))
sys.exit(1)
je_previous = None
self.emit_job_status(job, 'pending')
self.emit_job_status(job, 'waiting')
self.emit_job_status(job, 'running')
finish_status_index = self.determine_job_event_finish_status_index(job_event_count, random_seed)
for n, je_current in enumerate(job_events):
if je_current.counter in skip_range:
continue
if debug:
raw_input("{} of {}:".format(n, job_event_count))
if not je_previous:
stats['recording_start'] = je_current.created
self.start(je_current.created)
@@ -146,7 +172,7 @@ class ReplayJobEvents():
print("recording: next job in {} seconds".format(recording_diff))
if replay_offset >= 0:
replay_diff = recording_diff - replay_offset
if replay_diff > 0:
stats['events_ontime']['total'] += 1
if verbosity >= 3:
@@ -167,6 +193,11 @@ class ReplayJobEvents():
stats['events_total'] += 1
je_previous = je_current
if n == finish_status_index:
if final_status_delay != 0:
self.sleep(final_status_delay)
self.emit_job_status(job, job.status)
if stats['events_total'] > 2:
stats['replay_end'] = self.now()
stats['replay_duration'] = (stats['replay_end'] - stats['replay_start']).total_seconds()
@@ -206,16 +237,26 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--job_id', dest='job_id', type=int, metavar='j',
help='Id of the job to replay (job or adhoc)')
parser.add_argument('--speed', dest='speed', type=int, metavar='s',
parser.add_argument('--speed', dest='speed', type=float, metavar='s',
help='Speedup factor.')
parser.add_argument('--skip-range', dest='skip_range', type=str, metavar='k',
default='0:-1:1', help='Range of events to skip')
parser.add_argument('--random-seed', dest='random_seed', type=int, metavar='r',
default=0, help='Random number generator seed to use when determining job_event index to emit final job status')
parser.add_argument('--final-status-delay', dest='final_status_delay', type=float, metavar='f',
default=0, help='Delay between event and final status emit')
parser.add_argument('--debug', dest='debug', type=bool, metavar='d',
default=False, help='Enable step mode to control emission of job events one at a time.')
def handle(self, *args, **options):
job_id = options.get('job_id')
speed = options.get('speed') or 1
verbosity = options.get('verbosity') or 0
random_seed = options.get('random_seed')
final_status_delay = options.get('final_status_delay')
debug = options.get('debug')
skip = self._parse_slice_range(options.get('skip_range'))
replayer = ReplayJobEvents()
replayer.run(job_id, speed, verbosity, skip)
replayer.run(job_id, speed=speed, verbosity=verbosity, skip_range=skip, random_seed=random_seed,
final_status_delay=final_status_delay, debug=debug)
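With the new knobs, a replay that emits the final job status at a random event index and pauses before it could be driven like so (assuming the management command is registered as replay_job_events):

from django.core.management import call_command

# Option names match the dests declared in add_arguments() above
call_command('replay_job_events', job_id=42, speed=2.0,
             random_seed=3, final_status_delay=1.5, debug=False)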


@@ -0,0 +1,37 @@
# Django
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
# AWX
from awx.main.models.oauth import OAuth2AccessToken
from oauth2_provider.models import RefreshToken
def revoke_tokens(token_list):
for token in token_list:
token.revoke()
print('revoked {} {}'.format(token.__class__.__name__, token.token))
class Command(BaseCommand):
"""Command that revokes OAuth2 access tokens."""
help = 'Revokes OAuth2 access tokens. Use --all to revoke access and refresh tokens.'
def add_arguments(self, parser):
parser.add_argument('--user', dest='user', type=str, help='revoke OAuth2 tokens for a specific username')
parser.add_argument('--all', dest='all', action='store_true', help='revoke OAuth2 access tokens and refresh tokens')
def handle(self, *args, **options):
if not options['user']:
if options['all']:
revoke_tokens(RefreshToken.objects.filter(revoked=None))
revoke_tokens(OAuth2AccessToken.objects.all())
else:
try:
user = User.objects.get(username=options['user'])
except ObjectDoesNotExist:
raise CommandError('A user with that username does not exist.')
if options['all']:
revoke_tokens(RefreshToken.objects.filter(revoked=None).filter(user=user))
revoke_tokens(user.main_oauth2accesstoken.filter(user=user))
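Assuming this command is registered as revoke_oauth2_tokens, typical invocations would be:

from django.core.management import call_command

call_command('revoke_oauth2_tokens', user='alice')            # alice's access tokens only
call_command('revoke_oauth2_tokens', user='alice', all=True)  # plus her refresh tokens
call_command('revoke_oauth2_tokens', all=True)                # every access and refresh token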


@@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-05-18 17:49
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0051_v340_job_slicing'),
]
operations = [
migrations.RemoveField(
model_name='project',
name='scm_delete_on_next_update',
),
]


@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-09-27 19:50
from __future__ import unicode_literals
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0052_v340_remove_project_scm_delete_on_next_update'),
]
operations = [
migrations.AddField(
model_name='workflowjob',
name='char_prompts',
field=awx.main.fields.JSONField(blank=True, default={}),
),
migrations.AddField(
model_name='workflowjob',
name='inventory',
field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='workflowjobs', to='main.Inventory'),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_inventory_on_launch',
field=awx.main.fields.AskForField(default=False),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied to all job templates in workflow that prompt for inventory.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='workflowjobtemplates', to='main.Inventory'),
),
]


@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-09-28 14:23
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0053_v340_workflow_inventory'),
]
operations = [
migrations.AddField(
model_name='workflowjobnode',
name='do_not_run',
field=models.BooleanField(default=False, help_text='Indicates that a job will not be created when True. Workflow runtime semantics will mark this True if the node is in a path that will decidedly not run. A value of False means the node may not run.'),
),
]
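This field backs the DNR ("do not run") bookkeeping used by the scheduler changes later in this diff. A trivial, hypothetical query it enables (workflow_job is an assumed variable in scope):
# Nodes the task manager has already ruled out for one workflow job
from awx.main.models import WorkflowJobNode
WorkflowJobNode.objects.filter(workflow_job=workflow_job, do_not_run=True)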

View File

@@ -580,7 +580,7 @@ class CredentialType(CommonModelNameNotUnique):
if not self.injectors:
if self.managed_by_tower and credential.kind in dir(builtin_injectors):
injected_env = {}
getattr(builtin_injectors, credential.kind)(credential, injected_env)
getattr(builtin_injectors, credential.kind)(credential, injected_env, private_data_dir)
env.update(injected_env)
safe_env.update(build_safe_env(injected_env))
return

View File

@@ -1,20 +1,37 @@
import json
import os
import stat
import tempfile
from awx.main.utils import decrypt_field
from django.conf import settings
def aws(cred, env):
def aws(cred, env, private_data_dir):
env['AWS_ACCESS_KEY_ID'] = cred.username
env['AWS_SECRET_ACCESS_KEY'] = decrypt_field(cred, 'password')
if len(cred.security_token) > 0:
env['AWS_SECURITY_TOKEN'] = decrypt_field(cred, 'security_token')
def gce(cred, env):
def gce(cred, env, private_data_dir):
env['GCE_EMAIL'] = cred.username
env['GCE_PROJECT'] = cred.project
json_cred = {
'type': 'service_account',
'private_key': decrypt_field(cred, 'ssh_key_data'),
'client_email': cred.username,
'project_id': cred.project
}
handle, path = tempfile.mkstemp(dir=private_data_dir)
f = os.fdopen(handle, 'w')
json.dump(json_cred, f)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
env['GCE_CREDENTIALS_FILE_PATH'] = path
def azure_rm(cred, env):
def azure_rm(cred, env, private_data_dir):
if len(cred.client) and len(cred.tenant):
env['AZURE_CLIENT_ID'] = cred.client
env['AZURE_SECRET'] = decrypt_field(cred, 'secret')
@@ -28,7 +45,7 @@ def azure_rm(cred, env):
env['AZURE_CLOUD_ENVIRONMENT'] = cred.inputs['cloud_environment']
def vmware(cred, env):
def vmware(cred, env, private_data_dir):
env['VMWARE_USER'] = cred.username
env['VMWARE_PASSWORD'] = decrypt_field(cred, 'password')
env['VMWARE_HOST'] = cred.host
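Each builtin injector now takes the private data directory as a third argument so secrets can be written to disk (as gce does above) rather than passed only through the environment. A sketch of the calling convention, mirroring the CredentialType.inject_credential hunk earlier (cred and the directory path are stand-ins):
# Hypothetical call, matching getattr(builtin_injectors, credential.kind)(...)
injected_env = {}
gce(cred, injected_env, private_data_dir='/tmp/awx_private')
# injected_env now holds GCE_EMAIL, GCE_PROJECT and GCE_CREDENTIALS_FILE_PATH,
# the last pointing at a mode-0600 service-account JSON file inside the dir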

View File

@@ -294,7 +294,9 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
# Remove any empty groups
for group_name in list(data.keys()):
if not data.get(group_name, {}).get('hosts', []):
if group_name == 'all':
continue
if not (data.get(group_name, {}).get('hosts', []) or data.get(group_name, {}).get('children', [])):
data.pop(group_name)
if hostvars:
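The reworked condition keeps a group that has children even when it has no hosts, and never prunes the top-level 'all' group. An illustrative data shape (hypothetical values, not from the diff):
# 'web' survives pruning because it has children; 'stale' is dropped; 'all' is kept
data = {
    'all': {'hosts': []},
    'web': {'hosts': [], 'children': ['app']},
    'stale': {'hosts': []},
}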

View File

@@ -34,7 +34,7 @@ from awx.main.models.notifications import (
JobNotificationMixin,
)
from awx.main.utils import parse_yaml_or_json, getattr_dne
from awx.main.fields import ImplicitRoleField
from awx.main.fields import ImplicitRoleField, JSONField, AskForField
from awx.main.models.mixins import (
ResourceMixin,
SurveyJobTemplateMixin,
@@ -43,7 +43,6 @@ from awx.main.models.mixins import (
CustomVirtualEnvMixin,
RelatedJobsMixin,
)
from awx.main.fields import JSONField, AskForField
logger = logging.getLogger('awx.main.models.jobs')
@@ -315,7 +314,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
if self.inventory is None and not self.ask_inventory_on_launch:
validation_errors['inventory'] = [_("Job Template must provide 'inventory' or allow prompting for it."),]
if self.project is None:
validation_errors['project'] = [_("Job types 'run' and 'check' must have assigned a project."),]
validation_errors['project'] = [_("Job Templates must have a project assigned."),]
return validation_errors
@property
@@ -338,6 +337,9 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
kwargs['_parent_field_name'] = "job_template"
kwargs.setdefault('_eager_fields', {})
kwargs['_eager_fields']['is_sliced_job'] = True
elif prevent_slicing:
kwargs.setdefault('_eager_fields', {})
kwargs['_eager_fields'].setdefault('job_slice_count', 1)
job = super(JobTemplate, self).create_unified_job(**kwargs)
if slice_event:
try:
@@ -892,19 +894,19 @@ class NullablePromptPsuedoField(object):
instance.char_prompts[self.field_name] = value
class LaunchTimeConfig(BaseModel):
class LaunchTimeConfigBase(BaseModel):
'''
Common model for all objects that save details of a saved launch config
WFJT / WJ nodes, schedules, and job launch configs (not all implemented yet)
Needed as a separate class from LaunchTimeConfig because some models
use `extra_data` and some use `extra_vars`. We cannot change the API,
so we fake it in the model definitions:
- model defines extra_vars - use this class
- model needs to use extra_data - use LaunchTimeConfig
Use this for models which are SurveyMixins and UnifiedJobs or Templates
'''
class Meta:
abstract = True
# Prompting-related fields that have to be handled as special cases
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss'
)
inventory = models.ForeignKey(
'Inventory',
related_name='%(class)ss',
@@ -913,15 +915,6 @@ class LaunchTimeConfig(BaseModel):
default=None,
on_delete=models.SET_NULL,
)
extra_data = JSONField(
blank=True,
default={}
)
survey_passwords = prevent_search(JSONField(
blank=True,
default={},
editable=False,
))
# All standard fields are stored in this dictionary field
# This is a solution to the nullable CharField problem, specific to prompting
char_prompts = JSONField(
@@ -931,6 +924,7 @@ class LaunchTimeConfig(BaseModel):
def prompts_dict(self, display=False):
data = {}
# Some types may have different prompts, but always a subset of JT prompts
for prompt_name in JobTemplate.get_ask_mapping().keys():
try:
field = self._meta.get_field(prompt_name)
@@ -943,11 +937,11 @@ class LaunchTimeConfig(BaseModel):
if len(prompt_val) > 0:
data[prompt_name] = prompt_val
elif prompt_name == 'extra_vars':
if self.extra_data:
if self.extra_vars:
if display:
data[prompt_name] = self.display_extra_data()
data[prompt_name] = self.display_extra_vars()
else:
data[prompt_name] = self.extra_data
data[prompt_name] = self.extra_vars
if self.survey_passwords and not display:
data['survey_passwords'] = self.survey_passwords
else:
@@ -956,18 +950,21 @@ class LaunchTimeConfig(BaseModel):
data[prompt_name] = prompt_val
return data
def display_extra_data(self):
def display_extra_vars(self):
'''
Hides fields marked as passwords in survey.
'''
if self.survey_passwords:
extra_data = parse_yaml_or_json(self.extra_data).copy()
extra_vars = parse_yaml_or_json(self.extra_vars).copy()
for key, value in self.survey_passwords.items():
if key in extra_data:
extra_data[key] = value
return extra_data
if key in extra_vars:
extra_vars[key] = value
return extra_vars
else:
return self.extra_data
return self.extra_vars
def display_extra_data(self):
return self.display_extra_vars()
@property
def _credential(self):
@@ -991,7 +988,42 @@ class LaunchTimeConfig(BaseModel):
return None
class LaunchTimeConfig(LaunchTimeConfigBase):
'''
Common model for all objects that save details of a saved launch config
WFJT / WJ nodes, schedules, and job launch configs (not all implemented yet)
'''
class Meta:
abstract = True
# Special case prompting fields, even more special than the other ones
extra_data = JSONField(
blank=True,
default={}
)
survey_passwords = prevent_search(JSONField(
blank=True,
default={},
editable=False,
))
# Credentials needed for non-unified job / unified JT models
credentials = models.ManyToManyField(
'Credential',
related_name='%(class)ss'
)
@property
def extra_vars(self):
return self.extra_data
@extra_vars.setter
def extra_vars(self, extra_vars):
self.extra_data = extra_vars
for field_name in JobTemplate.get_ask_mapping().keys():
if field_name == 'extra_vars':
continue
try:
LaunchTimeConfig._meta.get_field(field_name)
except FieldDoesNotExist:
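The extra_vars property pair above is the promised aliasing: concrete models keep storing prompts in extra_data, but callers can read and write extra_vars interchangeably. A minimal sketch (WorkflowJobTemplateNode inherits LaunchTimeConfig via WorkflowNodeBase, per the workflow hunk later in this diff):
from awx.main.models import WorkflowJobTemplateNode
node = WorkflowJobTemplateNode()
node.extra_vars = {'region': 'us-east-1'}   # setter writes through to extra_data
assert node.extra_data == {'region': 'us-east-1'}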

View File

@@ -301,14 +301,22 @@ class SurveyJobTemplateMixin(models.Model):
accepted.update(extra_vars)
extra_vars = {}
if extra_vars:
# Prune the prompted variables for those identical to template
tmp_extra_vars = self.extra_vars_dict
for key in (set(tmp_extra_vars.keys()) & set(extra_vars.keys())):
if tmp_extra_vars[key] == extra_vars[key]:
extra_vars.pop(key)
if extra_vars:
# Leftover extra_vars, keys provided that are not allowed
rejected.update(extra_vars)
# ignored variables do not block manual launch
if 'prompts' not in _exclude_errors:
errors['extra_vars'] = [_('Variables {list_of_keys} are not allowed on launch. Check the Prompt on Launch setting '+
'on the Job Template to include Extra Variables.').format(
list_of_keys=', '.join(extra_vars.keys()))]
'on the {model_name} to include Extra Variables.').format(
list_of_keys=six.text_type(', ').join([six.text_type(key) for key in extra_vars.keys()]),
model_name=self._meta.verbose_name.title())]
return (accepted, rejected, errors)
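A small worked illustration of the pruning step, using plain dicts as stand-ins for the real fields:
template_vars = {'a': 1, 'b': 2}   # self.extra_vars_dict
prompted = {'a': 1, 'c': 3}        # extra_vars supplied at launch
for key in set(template_vars) & set(prompted):
    if template_vars[key] == prompted[key]:
        prompted.pop(key)
# prompted is now {'c': 3}; only genuinely new keys can be rejected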

View File

@@ -254,10 +254,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
on_delete=models.CASCADE,
related_name='projects',
)
scm_delete_on_next_update = models.BooleanField(
default=False,
editable=False,
)
scm_update_on_launch = models.BooleanField(
default=False,
help_text=_('Update the project when a job is launched that uses the project.'),
@@ -331,13 +327,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
# if it hasn't been specified, then we're just doing a normal save.
update_fields = kwargs.get('update_fields', [])
skip_update = bool(kwargs.pop('skip_update', False))
# Check if scm_type or scm_url changes.
if self.pk:
project_before = self.__class__.objects.get(pk=self.pk)
if project_before.scm_type != self.scm_type or project_before.scm_url != self.scm_url:
self.scm_delete_on_next_update = True
if 'scm_delete_on_next_update' not in update_fields:
update_fields.append('scm_delete_on_next_update')
# Create auto-generated local path if project uses SCM.
if self.pk and self.scm_type and not self.local_path.startswith('_'):
slug_name = slugify(six.text_type(self.name)).replace(u'-', u'_')
@@ -397,19 +386,6 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
def _can_update(self):
return bool(self.scm_type)
def _update_unified_job_kwargs(self, create_kwargs, kwargs):
'''
:param create_kwargs: keyword arguments to be updated and later used for creating the unified job.
:type create_kwargs: dict
:param kwargs: request parameters used to override unified job template fields with runtime values.
:type kwargs: dict
:return: modified create_kwargs.
:rtype: dict
'''
if self.scm_delete_on_next_update:
create_kwargs['scm_delete_on_update'] = True
return create_kwargs
def create_project_update(self, **kwargs):
return self.create_unified_job(**kwargs)
@@ -549,17 +525,6 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
def get_ui_url(self):
return urlparse.urljoin(settings.TOWER_URL_BASE, "/#/jobs/project/{}".format(self.pk))
def _update_parent_instance(self):
parent_instance = self._get_parent_instance()
if parent_instance and self.job_type == 'check':
update_fields = self._update_parent_instance_no_save(parent_instance)
if self.status in ('successful', 'failed', 'error', 'canceled'):
if not self.failed and parent_instance.scm_delete_on_next_update:
parent_instance.scm_delete_on_next_update = False
if 'scm_delete_on_next_update' not in update_fields:
update_fields.append('scm_delete_on_next_update')
parent_instance.save(update_fields=update_fields)
def cancel(self, job_explanation=None, is_chain=False):
res = super(ProjectUpdate, self).cancel(job_explanation=job_explanation, is_chain=is_chain)
if res and self.launch_type != 'sync':

View File

@@ -30,15 +30,21 @@ from rest_framework.exceptions import ParseError
from polymorphic.models import PolymorphicModel
# AWX
from awx.main.models.base import * # noqa
from awx.main.models.base import (
CommonModelNameNotUnique,
PasswordFieldsModel,
NotificationFieldsModel,
prevent_search
)
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
from awx.main.utils import (
encrypt_dict, decrypt_field, _inventory_updates,
copy_model_by_class, copy_m2m_relationships,
get_type_for_model, parse_yaml_or_json, getattr_dne
)
from awx.main.utils import polymorphic
from awx.main.utils import polymorphic, schedule_task_manager
from awx.main.constants import ACTIVE_STATES, CAN_CANCEL
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.consumers import emit_channel_notification
@@ -315,6 +321,7 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
Return notification_templates relevant to this Unified Job Template
'''
# NOTE: Derived classes should implement
from awx.main.models.notifications import NotificationTemplate
return NotificationTemplate.objects.none()
def create_unified_job(self, **kwargs):
@@ -343,8 +350,8 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
validated_kwargs = kwargs.copy()
if unallowed_fields:
if parent_field_name is None:
logger.warn(six.text_type('Fields {} are not allowed as overrides to spawn {} from {}.').format(
six.text_type(', ').join(unallowed_fields), unified_job, self
logger.warn(six.text_type('Fields {} are not allowed as overrides to spawn from {}.').format(
six.text_type(', ').join(unallowed_fields), self
))
for unallowed_field in unallowed_fields:
validated_kwargs.pop(unallowed_field)
@@ -368,7 +375,12 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
unified_job.survey_passwords = new_job_passwords
kwargs['survey_passwords'] = new_job_passwords # saved in config object for relaunch
unified_job.save()
from awx.main.signals import disable_activity_stream, activity_stream_create
with disable_activity_stream():
# Don't emit the activity stream record here for creation,
# because we haven't attached important M2M relations yet, like
# credentials and labels
unified_job.save()
# Labels and credentials copied here
if validated_kwargs.get('credentials'):
@@ -380,7 +392,6 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
validated_kwargs['credentials'] = [cred for cred in cred_dict.values()]
kwargs['credentials'] = validated_kwargs['credentials']
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
copy_m2m_relationships(self, unified_job, fields, kwargs=validated_kwargs)
@@ -391,6 +402,11 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
# Create record of provided prompts for relaunch and rescheduling
unified_job.create_config_from_prompts(kwargs, parent=self)
# manually issue the create activity stream entry _after_ M2M relations
# have been associated to the UJ
if unified_job.__class__ in activity_stream_registrar.models:
activity_stream_create(None, unified_job, True)
return unified_job
@classmethod
@@ -1245,8 +1261,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
self.update_fields(start_args=json.dumps(kwargs), status='pending')
self.websocket_emit_status("pending")
from awx.main.scheduler.tasks import run_job_launch
connection.on_commit(lambda: run_job_launch.delay(self.id))
schedule_task_manager()
# Each type of unified job has a different Task class; get the
# appropriate one.
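The net effect of the reordering in this hunk is that the activity-stream entry is emitted only after credentials and labels are attached. A compact sketch of the new sequence (unified_job is an assumed variable; the calls mirror the hunk above):
from awx.main.signals import disable_activity_stream, activity_stream_create
from awx.main.registrar import activity_stream_registrar
with disable_activity_stream():
    unified_job.save()   # suppressed: M2M relations are not attached yet
    # credentials and labels are copied here while the stream is disabled
if unified_job.__class__ in activity_stream_registrar.models:
    activity_stream_create(None, unified_job, True)   # emitted once, complete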

View File

@@ -24,14 +24,14 @@ from awx.main.models.rbac import (
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR
)
from awx.main.fields import ImplicitRoleField
from awx.main.fields import ImplicitRoleField, AskForField
from awx.main.models.mixins import (
ResourceMixin,
SurveyJobTemplateMixin,
SurveyJobMixin,
RelatedJobsMixin,
)
from awx.main.models.jobs import LaunchTimeConfig
from awx.main.models.jobs import LaunchTimeConfigBase, LaunchTimeConfig, JobTemplate
from awx.main.models.credential import Credential
from awx.main.redact import REPLACE_STR
from awx.main.fields import JSONField
@@ -82,7 +82,7 @@ class WorkflowNodeBase(CreatedModifiedModel, LaunchTimeConfig):
success_parents = getattr(self, '%ss_success' % self.__class__.__name__.lower()).all()
failure_parents = getattr(self, '%ss_failure' % self.__class__.__name__.lower()).all()
always_parents = getattr(self, '%ss_always' % self.__class__.__name__.lower()).all()
return success_parents | failure_parents | always_parents
return (success_parents | failure_parents | always_parents).order_by('id')
@classmethod
def _get_workflow_job_field_names(cls):
@@ -184,10 +184,26 @@ class WorkflowJobNode(WorkflowNodeBase):
default={},
editable=False,
)
do_not_run = models.BooleanField(
default=False,
help_text=_("Indidcates that a job will not be created when True. Workflow runtime "
"semantics will mark this True if the node is in a path that will "
"decidedly not be ran. A value of False means the node may not run."),
)
def get_absolute_url(self, request=None):
return reverse('api:workflow_job_node_detail', kwargs={'pk': self.pk}, request=request)
def prompts_dict(self, *args, **kwargs):
r = super(WorkflowJobNode, self).prompts_dict(*args, **kwargs)
# Explanation - WFJT extra_vars still break the pattern, so they are not
# put through prompts processing, but inventory is only accepted
# if the JT prompts for it, so it goes through this mechanism
if self.workflow_job and self.workflow_job.inventory_id:
# workflow job inventory takes precedence
r['inventory'] = self.workflow_job.inventory
return r
def get_job_kwargs(self):
'''
In advance of creating a new unified job as part of a workflow,
@@ -199,7 +215,14 @@ class WorkflowJobNode(WorkflowNodeBase):
data = {}
ujt_obj = self.unified_job_template
if ujt_obj is not None:
accepted_fields, ignored_fields, errors = ujt_obj._accept_or_ignore_job_kwargs(**self.prompts_dict())
# MERGE note: move this to prompts_dict method on node when merging
# with the workflow inventory branch
prompts_data = self.prompts_dict()
if isinstance(ujt_obj, WorkflowJobTemplate):
if self.workflow_job.extra_vars:
prompts_data.setdefault('extra_vars', {})
prompts_data['extra_vars'].update(self.workflow_job.extra_vars_dict)
accepted_fields, ignored_fields, errors = ujt_obj._accept_or_ignore_job_kwargs(**prompts_data)
if errors:
logger.info(_('Bad launch configuration starting template {template_pk} as part of '
'workflow {workflow_pk}. Errors:\n{error_text}').format(
@@ -241,13 +264,15 @@ class WorkflowJobNode(WorkflowNodeBase):
data['survey_passwords'] = password_dict
# process extra_vars
extra_vars = data.get('extra_vars', {})
if aa_dict:
functional_aa_dict = copy(aa_dict)
functional_aa_dict.pop('_ansible_no_log', None)
extra_vars.update(functional_aa_dict)
# Workflow Job extra_vars higher precedence than ancestor artifacts
if self.workflow_job and self.workflow_job.extra_vars:
extra_vars.update(self.workflow_job.extra_vars_dict)
if ujt_obj and isinstance(ujt_obj, (JobTemplate, WorkflowJobTemplate)):
if aa_dict:
functional_aa_dict = copy(aa_dict)
functional_aa_dict.pop('_ansible_no_log', None)
extra_vars.update(functional_aa_dict)
if ujt_obj and isinstance(ujt_obj, JobTemplate):
# Workflow Job extra_vars higher precedence than ancestor artifacts
if self.workflow_job and self.workflow_job.extra_vars:
extra_vars.update(self.workflow_job.extra_vars_dict)
if extra_vars:
data['extra_vars'] = extra_vars
# ensure that unified jobs created by WorkflowJobs are marked
@@ -282,7 +307,8 @@ class WorkflowJobOptions(BaseModel):
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in WorkflowJobOptions._meta.fields) | set(
['name', 'description', 'schedule', 'survey_passwords', 'labels']
# NOTE: if other prompts are added to WFJT, put fields in WJOptions, remove inventory
['name', 'description', 'schedule', 'survey_passwords', 'labels', 'inventory']
)
def _create_workflow_nodes(self, old_node_list, user=None):
@@ -334,6 +360,19 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
on_delete=models.SET_NULL,
related_name='workflows',
)
inventory = models.ForeignKey(
'Inventory',
related_name='%(class)ss',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
help_text=_('Inventory applied to all job templates in workflow that prompt for inventory.'),
)
ask_inventory_on_launch = AskForField(
blank=True,
default=False,
)
admin_role = ImplicitRoleField(parent_role=[
'singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
'organization.workflow_admin_role'
@@ -388,27 +427,45 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
workflow_job.copy_nodes_from_original(original=self)
return workflow_job
def _accept_or_ignore_job_kwargs(self, _exclude_errors=(), **kwargs):
def _accept_or_ignore_job_kwargs(self, **kwargs):
exclude_errors = kwargs.pop('_exclude_errors', [])
prompted_data = {}
rejected_data = {}
accepted_vars, rejected_vars, errors_dict = self.accept_or_ignore_variables(
kwargs.get('extra_vars', {}),
_exclude_errors=exclude_errors,
extra_passwords=kwargs.get('survey_passwords', {}))
if accepted_vars:
prompted_data['extra_vars'] = accepted_vars
if rejected_vars:
rejected_data['extra_vars'] = rejected_vars
errors_dict = {}
# WFJTs do not behave like JTs; they cannot accept inventory, credential, etc.
bad_kwargs = kwargs.copy()
bad_kwargs.pop('extra_vars', None)
bad_kwargs.pop('survey_passwords', None)
if bad_kwargs:
rejected_data.update(bad_kwargs)
for field in bad_kwargs:
errors_dict[field] = _('Field is not allowed for use in workflows.')
# Handle all the fields that have prompting rules
# NOTE: If WFJTs prompt for other things, this logic can be combined with jobs
for field_name, ask_field_name in self.get_ask_mapping().items():
if field_name == 'extra_vars':
accepted_vars, rejected_vars, vars_errors = self.accept_or_ignore_variables(
kwargs.get('extra_vars', {}),
_exclude_errors=exclude_errors,
extra_passwords=kwargs.get('survey_passwords', {}))
if accepted_vars:
prompted_data['extra_vars'] = accepted_vars
if rejected_vars:
rejected_data['extra_vars'] = rejected_vars
errors_dict.update(vars_errors)
continue
if field_name not in kwargs:
continue
new_value = kwargs[field_name]
old_value = getattr(self, field_name)
if new_value == old_value:
continue # no-op case: counted as neither accepted nor ignored
elif getattr(self, ask_field_name):
# accepted prompt
prompted_data[field_name] = new_value
else:
# unprompted - template is not configured to accept field on launch
rejected_data[field_name] = new_value
# Not considered an error for manual launch, to support old
# behavior of putting them in ignored_fields and launching anyway
if 'prompts' not in exclude_errors:
errors_dict[field_name] = _('Field is not configured to prompt on launch.')
return prompted_data, rejected_data, errors_dict
@@ -438,7 +495,7 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
return WorkflowJob.objects.filter(workflow_job_template=self)
class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin):
class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin, LaunchTimeConfigBase):
class Meta:
app_label = 'main'
ordering = ('id',)
@@ -505,6 +562,24 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
def task_impact(self):
return 0
def get_ancestor_workflows(self):
"""Returns a list of WFJTs that are indirect parents of this workflow job
say WFJTs are set up to spawn in order A->B->C, and this workflow job
came from C; then C is the direct parent and [B, A] will be returned from this.
"""
ancestors = []
wj_ids = set([self.pk])
wj = self.get_workflow_job()
while wj and wj.workflow_job_template_id:
if wj.pk in wj_ids:
logger.critical('Cycles detected in the workflow jobs graph, '
'this is not normal and suggests task manager degeneracy.')
break
wj_ids.add(wj.pk)
ancestors.append(wj.workflow_job_template)
wj = wj.get_workflow_job()
return ancestors
def get_notification_templates(self):
return self.workflow_job_template.notification_templates
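get_ancestor_workflows is what the TaskManager hunk later in this diff uses to refuse recursive workflow-in-workflow launches. A hypothetical sketch of that guard (workflow_job is a spawned WorkflowJob and spawn_node its originating node, both assumed variables):
ancestors = workflow_job.get_ancestor_workflows()
if spawn_node.unified_job_template in set(ancestors):
    raise RuntimeError('refusing recursive workflow-in-workflow spawn')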

View File

@@ -2,6 +2,7 @@
# All Rights Reserved.
# Python
import json
import logging
import os
@@ -12,10 +13,31 @@ from django.conf import settings
# Kombu
from kombu import Connection, Exchange, Producer
from kombu.serialization import registry
__all__ = ['CallbackQueueDispatcher']
# use a custom JSON serializer so we can properly handle !unsafe and !vault
# objects that may exist in events emitted by the callback plugin
# see: https://github.com/ansible/ansible/pull/38759
class AnsibleJSONEncoder(json.JSONEncoder):
def default(self, o):
if getattr(o, 'yaml_tag', None) == '!vault':
return o.data
return super(AnsibleJSONEncoder, self).default(o)
registry.register(
'json-ansible',
lambda obj: json.dumps(obj, cls=AnsibleJSONEncoder),
lambda obj: json.loads(obj),
content_type='application/json',
content_encoding='utf-8'
)
class CallbackQueueDispatcher(object):
def __init__(self):
@@ -41,7 +63,7 @@ class CallbackQueueDispatcher(object):
producer = Producer(self.connection)
producer.publish(obj,
serializer='json',
serializer='json-ansible',
compression='bzip2',
exchange=self.exchange,
declare=[self.exchange],
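A quick sketch of what the custom encoder handles: objects the callback plugin tags as !vault expose their ciphertext via .data, which the encoder serializes in place of the object. VaultedString below is a stand-in for Ansible's real tagged type, not an actual class:
import json
class VaultedString(object):
    yaml_tag = '!vault'
    def __init__(self, data):
        self.data = data
json.dumps({'msg': VaultedString('$ANSIBLE_VAULT;1.1;AES256;0123')},
           cls=AnsibleJSONEncoder)
# -> '{"msg": "$ANSIBLE_VAULT;1.1;AES256;0123"}'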

View File

@@ -1,11 +1,4 @@
from awx.main.models import (
Job,
AdHocCommand,
InventoryUpdate,
ProjectUpdate,
WorkflowJob,
)
from collections import deque
class SimpleDAG(object):
@@ -13,12 +6,51 @@ class SimpleDAG(object):
def __init__(self):
self.nodes = []
self.edges = []
self.root_nodes = set([])
r'''
Track node_obj->node index
dict where key is a full workflow node object or whatever we are
storing in ['node_object'], and value is an index into self.nodes
'''
self.node_obj_to_node_index = dict()
r'''
Track per-node from->to edges
i.e.
{
'success': {
1: [2, 3],
4: [2, 3],
},
'failed': {
1: [5],
}
}
'''
self.node_from_edges_by_label = dict()
r'''
Track per-node reverse relationship (child to parent)
i.e.
{
'success': {
2: [1, 4],
3: [1, 4],
},
'failed': {
5: [1],
}
}
'''
self.node_to_edges_by_label = dict()
def __contains__(self, obj):
for node in self.nodes:
if node['node_object'] == obj:
return True
if obj in self.node_obj_to_node_index:
return True
return False
def __len__(self):
@@ -27,98 +59,169 @@ class SimpleDAG(object):
def __iter__(self):
return self.nodes.__iter__()
def generate_graphviz_plot(self):
def short_string_obj(obj):
if type(obj) == Job:
type_str = "Job"
elif type(obj) == AdHocCommand:
type_str = "AdHocCommand"
elif type(obj) == InventoryUpdate:
type_str = "Inventory"
elif type(obj) == ProjectUpdate:
type_str = "Project"
elif type(obj) == WorkflowJob:
type_str = "Workflow"
else:
type_str = "Unknown"
type_str += "%s" % str(obj.id)
return type_str
def generate_graphviz_plot(self, file_name="/awx_devel/graph.gv"):
def run_status(obj):
dnr = "RUN"
status = "NA"
if hasattr(obj, 'job') and obj.job and hasattr(obj.job, 'status'):
status = obj.job.status
if hasattr(obj, 'do_not_run') and obj.do_not_run is True:
dnr = "DNR"
return "{}_{}_{}".format(dnr, status, obj.id)
doc = """
digraph g {
rankdir = LR
"""
for n in self.nodes:
obj = n['node_object']
status = "NA"
if hasattr(obj, 'job') and obj.job:
status = obj.job.status
color = 'black'
if status == 'successful':
color = 'green'
elif status == 'failed':
color = 'red'
elif obj.do_not_run is True:
color = 'gray'
doc += "%s [color = %s]\n" % (
short_string_obj(n['node_object']),
"red" if n['node_object'].status == 'running' else "black",
)
for from_node, to_node, label in self.edges:
doc += "%s -> %s [ label=\"%s\" ];\n" % (
short_string_obj(self.nodes[from_node]['node_object']),
short_string_obj(self.nodes[to_node]['node_object']),
label,
run_status(n['node_object']),
color
)
for label, edges in self.node_from_edges_by_label.items():
for from_node, to_nodes in edges.items():
for to_node in to_nodes:
doc += "%s -> %s [ label=\"%s\" ];\n" % (
run_status(self.nodes[from_node]['node_object']),
run_status(self.nodes[to_node]['node_object']),
label,
)
doc += "}\n"
gv_file = open('/tmp/graph.gv', 'w')
gv_file = open(file_name, 'w')
gv_file.write(doc)
gv_file.close()
def add_node(self, obj, metadata=None):
if self.find_ord(obj) is None:
self.nodes.append(dict(node_object=obj, metadata=metadata))
'''
Assume node is a root node until a child is added
'''
node_index = len(self.nodes)
self.root_nodes.add(node_index)
self.node_obj_to_node_index[obj] = node_index
entry = dict(node_object=obj, metadata=metadata)
self.nodes.append(entry)
def add_edge(self, from_obj, to_obj, label=None):
def add_edge(self, from_obj, to_obj, label):
from_obj_ord = self.find_ord(from_obj)
to_obj_ord = self.find_ord(to_obj)
if from_obj_ord is None or to_obj_ord is None:
raise LookupError("Object not found")
self.edges.append((from_obj_ord, to_obj_ord, label))
def add_edges(self, edgelist):
for edge_pair in edgelist:
self.add_edge(edge_pair[0], edge_pair[1], edge_pair[2])
'''
To node is no longer a root node
'''
self.root_nodes.discard(to_obj_ord)
if from_obj_ord is None and to_obj_ord is None:
raise LookupError("From object {} and to object not found".format(from_obj, to_obj))
elif from_obj_ord is None:
raise LookupError("From object not found {}".format(from_obj))
elif to_obj_ord is None:
raise LookupError("To object not found {}".format(to_obj))
self.node_from_edges_by_label.setdefault(label, dict()) \
.setdefault(from_obj_ord, [])
self.node_to_edges_by_label.setdefault(label, dict()) \
.setdefault(to_obj_ord, [])
self.node_from_edges_by_label[label][from_obj_ord].append(to_obj_ord)
self.node_to_edges_by_label[label][to_obj_ord].append(from_obj_ord)
def find_ord(self, obj):
for idx in range(len(self.nodes)):
if obj == self.nodes[idx]['node_object']:
return idx
return None
return self.node_obj_to_node_index.get(obj, None)
def _get_dependencies_by_label(self, node_index, label):
return [self.nodes[index] for index in
self.node_from_edges_by_label.get(label, {})
.get(node_index, [])]
def get_dependencies(self, obj, label=None):
antecedents = []
this_ord = self.find_ord(obj)
for node, dep, lbl in self.edges:
if label:
if node == this_ord and lbl == label:
antecedents.append(self.nodes[dep])
else:
if node == this_ord:
antecedents.append(self.nodes[dep])
return antecedents
if label:
return self._get_dependencies_by_label(this_ord, label)
nodes = []
for lbl in self.node_from_edges_by_label.keys():
nodes.extend(self._get_dependencies_by_label(this_ord, lbl))
return nodes
def _get_dependents_by_label(self, node_index, label):
return [self.nodes[index] for index in
self.node_to_edges_by_label.get(label, {})
.get(node_index, [])]
def get_dependents(self, obj, label=None):
decendents = []
this_ord = self.find_ord(obj)
for node, dep, lbl in self.edges:
if label:
if dep == this_ord and lbl == label:
decendents.append(self.nodes[node])
else:
if dep == this_ord:
decendents.append(self.nodes[node])
return decendents
def get_leaf_nodes(self):
leafs = []
for n in self.nodes:
if len(self.get_dependencies(n['node_object'])) < 1:
leafs.append(n)
return leafs
if label:
return self._get_dependents_by_label(this_ord, label)
nodes = []
for lbl in self.node_to_edges_by_label.keys():
nodes.extend(self._get_dependents_by_label(this_ord, lbl))
return nodes
def get_root_nodes(self):
roots = []
for n in self.nodes:
if len(self.get_dependents(n['node_object'])) < 1:
roots.append(n)
return roots
return [self.nodes[index] for index in self.root_nodes]
def has_cycle(self):
node_objs = [node['node_object'] for node in self.get_root_nodes()]
node_objs_visited = set([])
path = set([])
stack = node_objs
res = False
if len(self.nodes) != 0 and len(node_objs) == 0:
return True
while stack:
node_obj = stack.pop()
children = [node['node_object'] for node in self.get_dependencies(node_obj)]
children_to_add = [child for child in children if child not in node_objs_visited]
if children_to_add:
if node_obj in path:
res = True
break
path.add(node_obj)
stack.append(node_obj)
stack.extend(children_to_add)
else:
node_objs_visited.add(node_obj)
path.discard(node_obj)
return res
def sort_nodes_topological(self):
nodes_sorted = deque()
obj_ids_processed = set([])
def visit(node):
obj = node['node_object']
if obj.id in obj_ids_processed:
return
for child in self.get_dependencies(obj):
visit(child)
obj_ids_processed.add(obj.id)
nodes_sorted.appendleft(node)
for node in self.nodes:
obj = node['node_object']
if obj.id in obj_ids_processed:
continue
visit(node)
return nodes_sorted
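A minimal end-to-end sketch of the bookkeeping documented above; plain objects stand in for workflow nodes:
dag = SimpleDAG()
a, b, c = object(), object(), object()
for n in (a, b, c):
    dag.add_node(n)
dag.add_edge(a, b, 'success_nodes')
dag.add_edge(a, c, 'failure_nodes')
assert dag.get_root_nodes()[0]['node_object'] is a
assert [n['node_object'] for n in dag.get_dependencies(a, 'success_nodes')] == [b]
assert not dag.has_cycle()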

View File

@@ -1,4 +1,13 @@
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_text
# AWX
from awx.main.models import (
WorkflowJobTemplateNode,
WorkflowJobNode,
)
from awx.main.scheduler.dag_simple import SimpleDAG
@@ -9,44 +18,84 @@ class WorkflowDAG(SimpleDAG):
if workflow_job:
self._init_graph(workflow_job)
def _init_graph(self, workflow_job):
node_qs = workflow_job.workflow_job_nodes
workflow_nodes = node_qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes').all()
for workflow_node in workflow_nodes:
def _init_graph(self, workflow_job_or_jt):
if hasattr(workflow_job_or_jt, 'workflow_job_template_nodes'):
vals = ['from_workflowjobtemplatenode_id', 'to_workflowjobtemplatenode_id']
filters = {
'from_workflowjobtemplatenode__workflow_job_template_id': workflow_job_or_jt.id
}
workflow_nodes = workflow_job_or_jt.workflow_job_template_nodes
success_nodes = WorkflowJobTemplateNode.success_nodes.through.objects.filter(**filters).values_list(*vals)
failure_nodes = WorkflowJobTemplateNode.failure_nodes.through.objects.filter(**filters).values_list(*vals)
always_nodes = WorkflowJobTemplateNode.always_nodes.through.objects.filter(**filters).values_list(*vals)
elif hasattr(workflow_job_or_jt, 'workflow_job_nodes'):
vals = ['from_workflowjobnode_id', 'to_workflowjobnode_id']
filters = {
'from_workflowjobnode__workflow_job_id': workflow_job_or_jt.id
}
workflow_nodes = workflow_job_or_jt.workflow_job_nodes
success_nodes = WorkflowJobNode.success_nodes.through.objects.filter(**filters).values_list(*vals)
failure_nodes = WorkflowJobNode.failure_nodes.through.objects.filter(**filters).values_list(*vals)
always_nodes = WorkflowJobNode.always_nodes.through.objects.filter(**filters).values_list(*vals)
else:
raise RuntimeError("Unexpected object {} {}".format(type(workflow_job_or_jt), workflow_job_or_jt))
wfn_by_id = dict()
for workflow_node in workflow_nodes.all():
wfn_by_id[workflow_node.id] = workflow_node
self.add_node(workflow_node)
for node_type in ['success_nodes', 'failure_nodes', 'always_nodes']:
for workflow_node in workflow_nodes:
related_nodes = getattr(workflow_node, node_type).all()
for related_node in related_nodes:
self.add_edge(workflow_node, related_node, node_type)
for edge in success_nodes:
self.add_edge(wfn_by_id[edge[0]], wfn_by_id[edge[1]], 'success_nodes')
for edge in failure_nodes:
self.add_edge(wfn_by_id[edge[0]], wfn_by_id[edge[1]], 'failure_nodes')
for edge in always_nodes:
self.add_edge(wfn_by_id[edge[0]], wfn_by_id[edge[1]], 'always_nodes')
def _are_relevant_parents_finished(self, node):
obj = node['node_object']
parent_nodes = [p['node_object'] for p in self.get_dependents(obj)]
for p in parent_nodes:
if p.do_not_run is True:
continue
elif p.unified_job_template is None:
continue
# do_not_run is False, so the node might still run a job and thus block children
elif not p.job:
return False
# Node decidedly got a job; check if job is done
elif p.job and p.job.status not in ['successful', 'failed', 'error', 'canceled']:
return False
return True
def bfs_nodes_to_run(self):
root_nodes = self.get_root_nodes()
nodes = root_nodes
nodes = self.get_root_nodes()
nodes_found = []
node_ids_visited = set()
for index, n in enumerate(nodes):
obj = n['node_object']
job = obj.job
if not job:
nodes_found.append(n)
# Job is about to run or is running. Hold our horses and wait for
# the job to finish. We can't proceed down the graph path until we
# have the job result.
elif job.status not in ['failed', 'successful']:
if obj.id in node_ids_visited:
continue
elif job.status == 'failed':
children_failed = self.get_dependencies(obj, 'failure_nodes')
children_always = self.get_dependencies(obj, 'always_nodes')
children_all = children_failed + children_always
nodes.extend(children_all)
elif job.status == 'successful':
children_success = self.get_dependencies(obj, 'success_nodes')
children_always = self.get_dependencies(obj, 'always_nodes')
children_all = children_success + children_always
nodes.extend(children_all)
node_ids_visited.add(obj.id)
if obj.do_not_run is True:
continue
if obj.job:
if obj.job.status in ['failed', 'error', 'canceled']:
nodes.extend(self.get_dependencies(obj, 'failure_nodes') +
self.get_dependencies(obj, 'always_nodes'))
elif obj.job.status == 'successful':
nodes.extend(self.get_dependencies(obj, 'success_nodes') +
self.get_dependencies(obj, 'always_nodes'))
elif obj.unified_job_template is None:
nodes.extend(self.get_dependencies(obj, 'failure_nodes') +
self.get_dependencies(obj, 'always_nodes'))
else:
if self._are_relevant_parents_finished(n):
nodes_found.append(n)
return [n['node_object'] for n in nodes_found]
def cancel_node_jobs(self):
@@ -63,40 +112,113 @@ class WorkflowDAG(SimpleDAG):
return cancel_finished
def is_workflow_done(self):
root_nodes = self.get_root_nodes()
nodes = root_nodes
is_failed = False
for node in self.nodes:
obj = node['node_object']
if obj.do_not_run is False and not obj.job and obj.unified_job_template:
return False
elif obj.job and obj.job.status not in ['successful', 'failed', 'canceled', 'error']:
return False
return True
for index, n in enumerate(nodes):
obj = n['node_object']
job = obj.job
def has_workflow_failed(self):
failed_nodes = []
res = False
failed_path_nodes_id_status = []
failed_unified_job_template_node_ids = []
if obj.unified_job_template is None:
is_failed = True
continue
elif not job:
return False, False
for node in self.nodes:
obj = node['node_object']
if obj.do_not_run is False and obj.unified_job_template is None:
failed_nodes.append(node)
elif obj.job and obj.job.status in ['failed', 'canceled', 'error']:
failed_nodes.append(node)
children_success = self.get_dependencies(obj, 'success_nodes')
children_failed = self.get_dependencies(obj, 'failure_nodes')
children_always = self.get_dependencies(obj, 'always_nodes')
if not is_failed and job.status != 'successful':
children_all = children_success + children_failed + children_always
for child in children_all:
if child['node_object'].job:
break
for node in failed_nodes:
obj = node['node_object']
if (len(self.get_dependencies(obj, 'failure_nodes')) +
len(self.get_dependencies(obj, 'always_nodes'))) == 0:
if obj.unified_job_template is None:
res = True
failed_unified_job_template_node_ids.append(str(obj.id))
else:
is_failed = True if children_all else job.status in ['failed', 'canceled', 'error']
res = True
failed_path_nodes_id_status.append((str(obj.id), obj.job.status))
if job.status in ['canceled', 'error']:
continue
elif job.status == 'failed':
nodes.extend(children_failed + children_always)
elif job.status == 'successful':
nodes.extend(children_success + children_always)
if res is True:
s = _("No error handle path for workflow job node(s) [{node_status}] workflow job "
"node(s) missing unified job template and error handle path [{no_ufjt}].")
parms = {
'node_status': '',
'no_ufjt': '',
}
if len(failed_path_nodes_id_status) > 0:
parms['node_status'] = ",".join(["({},{})".format(id, status) for id, status in failed_path_nodes_id_status])
if len(failed_unified_job_template_node_ids) > 0:
parms['no_ufjt'] = ",".join(failed_unified_job_template_node_ids)
return True, smart_text(s.format(**parms))
return False, None
r'''
Determine whether every node's do_not_run state has been decided.
Nodes that are do_not_run False may become do_not_run True in the future.
We know a do_not_run False node will NOT be marked do_not_run True if there
is a job run for that node.
:param workflow_nodes: list of workflow_nodes
Return a boolean
'''
def _are_all_nodes_dnr_decided(self, workflow_nodes):
for n in workflow_nodes:
if n.do_not_run is False and not n.job and n.unified_job_template:
return False
return True
r'''
Determine if a node (1) is ready to be marked do_not_run and (2) should
be marked do_not_run.
:param node: SimpleDAG internal node
:param parent_nodes: list of workflow_nodes
Return a boolean
'''
def _should_mark_node_dnr(self, node, parent_nodes):
for p in parent_nodes:
if p.do_not_run is True:
pass
elif p.job:
if p.job.status == 'successful':
if node in (self.get_dependencies(p, 'success_nodes') +
self.get_dependencies(p, 'always_nodes')):
return False
elif p.job.status in ['failed', 'error', 'canceled']:
if node in (self.get_dependencies(p, 'failure_nodes') +
self.get_dependencies(p, 'always_nodes')):
return False
else:
return False
elif p.do_not_run is False and p.unified_job_template is None:
if node in (self.get_dependencies(p, 'failure_nodes') +
self.get_dependencies(p, 'always_nodes')):
return False
else:
# Job is about to run or is running. Hold our horses and wait for
# the job to finish. We can't proceed down the graph path until we
# have the job result.
return False, False
return True, is_failed
return False
return True
def mark_dnr_nodes(self):
root_nodes = self.get_root_nodes()
nodes_marked_do_not_run = []
for node in self.sort_nodes_topological():
obj = node['node_object']
if obj.do_not_run is False and not obj.job and node not in root_nodes:
parent_nodes = [p['node_object'] for p in self.get_dependents(obj)]
if self._are_all_nodes_dnr_decided(parent_nodes):
if self._should_mark_node_dnr(node, parent_nodes):
obj.do_not_run = True
nodes_marked_do_not_run.append(node)
return [n['node_object'] for n in nodes_marked_do_not_run]
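These helpers are consumed together each scheduler cycle; a sketch of the flow, mirroring the TaskManager hunk that follows (workflow_job is an assumed variable):
dag = WorkflowDAG(workflow_job)
for node in dag.mark_dnr_nodes():
    node.save(update_fields=['do_not_run'])
if dag.is_workflow_done():
    has_failed, reason = dag.has_workflow_failed()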

View File

@@ -26,10 +26,11 @@ from awx.main.models import (
ProjectUpdate,
SystemJob,
WorkflowJob,
WorkflowJobTemplate
)
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.main.utils.pglock import advisory_lock
from awx.main.utils import get_type_for_model
from awx.main.utils import get_type_for_model, task_manager_bulk_reschedule, schedule_task_manager
from awx.main.signals import disable_activity_stream
from awx.main.scheduler.dependency_graph import DependencyGraph
from awx.main.utils import decrypt_field
@@ -120,7 +121,26 @@ class TaskManager():
spawn_node.job = job
spawn_node.save()
logger.info('Spawned %s in %s for node %s', job.log_format, workflow_job.log_format, spawn_node.pk)
if job._resources_sufficient_for_launch():
can_start = True
if isinstance(spawn_node.unified_job_template, WorkflowJobTemplate):
workflow_ancestors = job.get_ancestor_workflows()
if spawn_node.unified_job_template in set(workflow_ancestors):
can_start = False
logger.info('Refusing to start recursive workflow-in-workflow id={}, wfjt={}, ancestors={}'.format(
job.id, spawn_node.unified_job_template.pk, [wa.pk for wa in workflow_ancestors]))
display_list = [spawn_node.unified_job_template] + workflow_ancestors
job.job_explanation = _(
"Workflow Job spawned from workflow could not start because it "
"would result in recursion (spawn order, most recent first: {})"
).format(six.text_type(', ').join([six.text_type('<{}>').format(tmp) for tmp in display_list]))
else:
logger.debug('Starting workflow-in-workflow id={}, wfjt={}, ancestors={}'.format(
job.id, spawn_node.unified_job_template.pk, [wa.pk for wa in workflow_ancestors]))
if not job._resources_sufficient_for_launch():
can_start = False
job.job_explanation = _("Job spawned from workflow could not start because it "
"was missing a related resource such as project or inventory")
if can_start:
if workflow_job.start_args:
start_args = json.loads(decrypt_field(workflow_job, 'start_args'))
else:
@@ -129,14 +149,10 @@ class TaskManager():
if not can_start:
job.job_explanation = _("Job spawned from workflow could not start because it "
"was not in the right state or required manual credentials")
else:
can_start = False
job.job_explanation = _("Job spawned from workflow could not start because it "
"was missing a related resource such as project or inventory")
if not can_start:
job.status = 'failed'
job.save(update_fields=['status', 'job_explanation'])
connection.on_commit(lambda: job.websocket_emit_status('failed'))
job.websocket_emit_status('failed')
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
#emit_websocket_notification('/socket.io/jobs', '', dict(id=))
@@ -145,7 +161,9 @@ class TaskManager():
result = []
for workflow_job in workflow_jobs:
dag = WorkflowDAG(workflow_job)
status_changed = False
if workflow_job.cancel_flag:
workflow_job.workflow_nodes.filter(do_not_run=False, job__isnull=True).update(do_not_run=True)
logger.debug('Canceling spawned jobs of %s due to cancel flag.', workflow_job.log_format)
cancel_finished = dag.cancel_node_jobs()
if cancel_finished:
@@ -153,19 +171,31 @@ class TaskManager():
workflow_job.status = 'canceled'
workflow_job.start_args = '' # blank field to remove encrypted passwords
workflow_job.save(update_fields=['status', 'start_args'])
connection.on_commit(lambda: workflow_job.websocket_emit_status(workflow_job.status))
status_changed = True
else:
is_done, has_failed = dag.is_workflow_done()
workflow_nodes = dag.mark_dnr_nodes()
for n in workflow_nodes:
n.save(update_fields=['do_not_run'])
is_done = dag.is_workflow_done()
if not is_done:
continue
has_failed, reason = dag.has_workflow_failed()
logger.info('Marking %s as %s.', workflow_job.log_format, 'failed' if has_failed else 'successful')
result.append(workflow_job.id)
new_status = 'failed' if has_failed else 'successful'
logger.debug(six.text_type("Transitioning {} to {} status.").format(workflow_job.log_format, new_status))
update_fields = ['status', 'start_args']
workflow_job.status = new_status
if reason:
logger.info(reason)
workflow_job.job_explanation = "No error handling paths found, marking workflow as failed"
update_fields.append('job_explanation')
workflow_job.start_args = '' # blank field to remove encrypted passwords
workflow_job.save(update_fields=['status', 'start_args'])
connection.on_commit(lambda: workflow_job.websocket_emit_status(workflow_job.status))
workflow_job.save(update_fields=update_fields)
status_changed = True
if status_changed:
workflow_job.websocket_emit_status(workflow_job.status)
if workflow_job.spawned_by_workflow:
schedule_task_manager()
return result
def get_dependent_jobs_for_inv_and_proj_update(self, job_obj):
@@ -205,6 +235,7 @@ class TaskManager():
if type(task) is WorkflowJob:
task.status = 'running'
logger.info('Transitioning %s to running status.', task.log_format)
schedule_task_manager()
elif not task.supports_isolation() and rampart_group.controller_id:
# non-Ansible jobs on isolated instances run on controller
task.instance_group = rampart_group.controller
@@ -231,7 +262,6 @@ class TaskManager():
self.consume_capacity(task, rampart_group.name)
def post_commit():
task.websocket_emit_status(task.status)
if task.status != 'failed' and type(task) is not WorkflowJob:
task_cls = task._get_task_class()
task_cls.apply_async(
@@ -250,6 +280,7 @@ class TaskManager():
}],
)
task.websocket_emit_status(task.status) # adds to on_commit
connection.on_commit(post_commit)
def process_running_tasks(self, running_tasks):
@@ -263,6 +294,11 @@ class TaskManager():
project_task.created = task.created - timedelta(seconds=1)
project_task.status = 'pending'
project_task.save()
logger.info(
'Spawned {} as dependency of {}'.format(
project_task.log_format, task.log_format
)
)
return project_task
def create_inventory_update(self, task, inventory_source_task):
@@ -272,6 +308,11 @@ class TaskManager():
inventory_task.created = task.created - timedelta(seconds=2)
inventory_task.status = 'pending'
inventory_task.save()
logger.info(
'Spawned {} as dependency of {}'.format(
inventory_task.log_format, task.log_format
)
)
# inventory_sources = self.get_inventory_source_tasks([task])
# self.process_inventory_sources(inventory_sources)
return inventory_task
@@ -540,7 +581,8 @@ class TaskManager():
return
logger.debug("Starting Scheduler")
finished_wfjs = self._schedule()
with task_manager_bulk_reschedule():
finished_wfjs = self._schedule()
# Operations whose queries rely on modifications made during the atomic scheduling session
for wfj in WorkflowJob.objects.filter(id__in=finished_wfjs):

View File

@@ -9,16 +9,6 @@ from awx.main.dispatch.publish import task
logger = logging.getLogger('awx.main.scheduler')
@task()
def run_job_launch(job_id):
TaskManager().schedule()
@task()
def run_job_complete(job_id):
TaskManager().schedule()
@task()
def run_task_manager():
logger.debug("Running Tower task manager.")

View File

@@ -396,7 +396,7 @@ model_serializer_mapping = {
Credential: CredentialSerializer,
Team: TeamSerializer,
Project: ProjectSerializer,
JobTemplate: JobTemplateSerializer,
JobTemplate: JobTemplateWithSpecSerializer,
Job: JobSerializer,
AdHocCommand: AdHocCommandSerializer,
NotificationTemplate: NotificationTemplateSerializer,
@@ -404,7 +404,7 @@ model_serializer_mapping = {
CredentialType: CredentialTypeSerializer,
Schedule: ScheduleSerializer,
Label: LabelSerializer,
WorkflowJobTemplate: WorkflowJobTemplateSerializer,
WorkflowJobTemplate: WorkflowJobTemplateWithSpecSerializer,
WorkflowJobTemplateNode: WorkflowJobTemplateNodeSerializer,
WorkflowJob: WorkflowJobSerializer,
OAuth2AccessToken: OAuth2TokenSerializer,
@@ -425,6 +425,11 @@ def activity_stream_create(sender, instance, created, **kwargs):
changes = model_to_dict(instance, model_serializer_mapping)
# Special case where Job survey password variables need to be hidden
if type(instance) == Job:
changes['credentials'] = [
six.text_type('{} ({})').format(c.name, c.id)
for c in instance.credentials.iterator()
]
changes['labels'] = [l.name for l in instance.labels.iterator()]
if 'extra_vars' in changes:
changes['extra_vars'] = instance.display_extra_vars()
if type(instance) == OAuth2AccessToken:
@@ -487,12 +492,21 @@ def activity_stream_delete(sender, instance, **kwargs):
# If we trigger this handler there we may fall into db-integrity-related race conditions.
# So we add flag verification to prevent normal signal handling. This function will be
# explicitly called with flag on in Inventory.schedule_deletion.
if isinstance(instance, Inventory) and not kwargs.get('inventory_delete_flag', False):
return
changes = {}
if isinstance(instance, Inventory):
if not kwargs.get('inventory_delete_flag', False):
return
# Add additional data about child hosts / groups that will be deleted
changes['coalesced_data'] = {
'hosts_deleted': instance.hosts.count(),
'groups_deleted': instance.groups.count()
}
elif isinstance(instance, (Host, Group)) and instance.inventory.pending_deletion:
return # accounted for by inventory entry, above
_type = type(instance)
if getattr(_type, '_deferred', False):
return
changes = model_to_dict(instance)
changes.update(model_to_dict(instance, model_serializer_mapping))
object1 = camelcase_to_underscore(instance.__class__.__name__)
if type(instance) == OAuth2AccessToken:
changes['token'] = CENSOR_VALUE

View File

@@ -56,7 +56,7 @@ from awx.main.dispatch import get_local_queuename, reaper
from awx.main.utils import (get_ansible_version, get_ssh_version, decrypt_field, update_scm_url,
check_proot_installed, build_proot_temp_dir, get_licenser,
wrap_args_with_proot, OutputEventFilter, OutputVerboseFilter, ignore_inventory_computed_fields,
ignore_inventory_group_removal, extract_ansible_vars)
ignore_inventory_group_removal, extract_ansible_vars, schedule_task_manager)
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
@@ -493,8 +493,7 @@ def handle_work_success(task_actual):
if not instance:
return
from awx.main.scheduler.tasks import run_job_complete
run_job_complete.delay(instance.id)
schedule_task_manager()
@task()
@@ -533,8 +532,7 @@ def handle_work_error(task_id, *args, **kwargs):
# what the job complete message handler does then we may want to send a
# completion event for each job here.
if first_instance:
from awx.main.scheduler.tasks import run_job_complete
run_job_complete.delay(first_instance.id)
schedule_task_manager()
pass
@@ -1210,8 +1208,6 @@ class RunJob(BaseTask):
if job.project:
env['PROJECT_REVISION'] = job.project.scm_revision
env['ANSIBLE_RETRY_FILES_ENABLED'] = "False"
env['ANSIBLE_INVENTORY_ENABLED'] = 'script'
env['ANSIBLE_INVENTORY_UNPARSED_FAILED'] = 'True'
env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
if not kwargs.get('isolated'):
env['ANSIBLE_CALLBACK_PLUGINS'] = plugin_path
@@ -1226,15 +1222,10 @@ class RunJob(BaseTask):
os.mkdir(cp_dir, 0o700)
env['ANSIBLE_SSH_CONTROL_PATH_DIR'] = cp_dir
# Allow the inventory script to include host variables inline via ['_meta']['hostvars'].
env['INVENTORY_HOSTVARS'] = str(True)
# Set environment variables for cloud credentials.
cred_files = kwargs.get('private_data_files', {}).get('credentials', {})
for cloud_cred in job.cloud_credentials:
if cloud_cred and cloud_cred.kind == 'gce':
env['GCE_PEM_FILE_PATH'] = cred_files.get(cloud_cred, '')
elif cloud_cred and cloud_cred.kind == 'openstack':
if cloud_cred and cloud_cred.kind == 'openstack':
env['OS_CLIENT_CONFIG_FILE'] = cred_files.get(cloud_cred, '')
for network_cred in job.network_credentials:
@@ -1390,6 +1381,12 @@ class RunJob(BaseTask):
if job.is_isolated() is True:
pu_ig = pu_ig.controller
pu_en = settings.CLUSTER_HOST_ID
if job.project.status in ('error', 'failed'):
msg = _(
'The project revision for this job template is unknown due to a failed update.'
)
job = self.update_model(job.pk, status='failed', job_explanation=msg)
raise RuntimeError(msg)
local_project_sync = job.project.create_project_update(
_eager_fields=dict(
launch_type="sync",
@@ -1416,6 +1413,11 @@ class RunJob(BaseTask):
def final_run_hook(self, job, status, **kwargs):
super(RunJob, self).final_run_hook(job, status, **kwargs)
if 'private_data_dir' not in kwargs:
# If there's no private data dir, that means we didn't get into the
# actual `run()` call; this _usually_ means something failed in
# the pre_run_hook method
return
if job.use_fact_cache:
job.finish_job_fact_cache(
kwargs['private_data_dir'],
@@ -1805,10 +1807,6 @@ class RunInventoryUpdate(BaseTask):
"""
private_data = {'credentials': {}}
credential = inventory_update.get_cloud_credential()
# If this is GCE, return the RSA key
if inventory_update.source == 'gce':
private_data['credentials'][credential] = decrypt_field(credential, 'ssh_key_data')
return private_data
if inventory_update.source == 'openstack':
openstack_auth = dict(auth_url=credential.host,
@@ -2041,7 +2039,6 @@ class RunInventoryUpdate(BaseTask):
'ec2': 'EC2_INI_PATH',
'vmware': 'VMWARE_INI_PATH',
'azure_rm': 'AZURE_INI_PATH',
'gce': 'GCE_PEM_FILE_PATH',
'openstack': 'OS_CLIENT_CONFIG_FILE',
'satellite6': 'FOREMAN_INI_PATH',
'cloudforms': 'CLOUDFORMS_INI_PATH'

View File

@@ -15,6 +15,12 @@ from awx.main.tests.factories import (
)
def pytest_addoption(parser):
parser.addoption(
"--genschema", action="store_true", default=False, help="execute schema validator"
)
def pytest_configure(config):
import sys
sys._called_from_test = True

View File

@@ -49,7 +49,8 @@ class TestSwaggerGeneration():
data.update(response.accepted_renderer.get_customizations() or {})
data['host'] = None
data['modified'] = datetime.datetime.utcnow().isoformat()
if not pytest.config.getoption("--genschema"):
data['modified'] = datetime.datetime.utcnow().isoformat()
data['schemes'] = ['https']
data['consumes'] = ['application/json']
@@ -121,7 +122,7 @@ class TestSwaggerGeneration():
pattern = pattern.replace('{id}', '[0-9]+')
pattern = pattern.replace(r'{category_slug}', r'[a-zA-Z0-9\-]+')
for path, result in swagger_autogen.items():
if re.match('^{}$'.format(pattern), path):
if re.match(r'^{}$'.format(pattern), path):
for key, value in result.items():
method, status_code = key
content_type, resp, request_data = value
@@ -139,11 +140,14 @@ class TestSwaggerGeneration():
for param in node[method].get('parameters'):
if param['in'] == 'body':
node[method]['parameters'].remove(param)
node[method].setdefault('parameters', []).append({
'name': 'data',
'in': 'body',
'schema': {'example': request_data},
})
if pytest.config.getoption("--genschema"):
pytest.skip("In schema generator skipping swagger generator", allow_module_level=True)
else:
node[method].setdefault('parameters', []).append({
'name': 'data',
'in': 'body',
'schema': {'example': request_data},
})
# Build response examples
if resp:
@@ -164,8 +168,13 @@ class TestSwaggerGeneration():
# replace ISO dates w/ the same value so we don't generate
# needless diffs
data = re.sub(
'[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]+Z',
'2018-02-01T08:00:00.000000Z',
r'[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]+Z',
r'2018-02-01T08:00:00.000000Z',
data
)
data = re.sub(
r'''(\s+"client_id": ")([a-zA-Z0-9]{40})("\,\s*)''',
r'\1xxxx\3',
data
)
f.write(data)

View File

@@ -561,11 +561,12 @@ def test_callback_accept_prompted_extra_var(mocker, survey_spec_factory, job_tem
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user, expect=201, format='json')
assert UnifiedJobTemplate.create_unified_job.called
assert UnifiedJobTemplate.create_unified_job.call_args == ({
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {
'extra_vars': {'survey_var': 4, 'job_launch_var': 3},
'_eager_fields': {'launch_type': 'callback'},
'limit': 'single-host'},
)
'limit': 'single-host'
}
mock_job.signal_start.assert_called_once()
@@ -587,10 +588,11 @@ def test_callback_ignore_unprompted_extra_var(mocker, survey_spec_factory, job_t
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user, expect=201, format='json')
assert UnifiedJobTemplate.create_unified_job.called
assert UnifiedJobTemplate.create_unified_job.call_args == ({
'_eager_fields': {'launch_type': 'callback'},
'limit': 'single-host'},
)
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {
'limit': 'single-host'
}
mock_job.signal_start.assert_called_once()
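Both callback tests above now inspect only the keyword arguments of the mocked create_unified_job call and drop _eager_fields, since it is internal bookkeeping. A minimal sketch of why call_args[1] yields those kwargs, using only the mock library these tests already import:

import mock

m = mock.MagicMock()
m(extra_vars={'survey_var': 4}, limit='single-host',
  _eager_fields={'launch_type': 'callback'})

# call_args is a (positional_args, keyword_args) pair
kwargs = m.call_args[1]
kwargs.pop('_eager_fields', None)  # internal field, excluded from the assertion
assert kwargs == {'extra_vars': {'survey_var': 4}, 'limit': 'single-host'}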

View File

@@ -6,7 +6,7 @@ import pytest
# AWX
from awx.api.serializers import JobTemplateSerializer
from awx.api.versioning import reverse
from awx.main.models import Job, JobTemplate, CredentialType
from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplate
from awx.main.migrations import _save_password_keys as save_password_keys
# Django
@@ -519,6 +519,24 @@ def test_launch_with_pending_deletion_inventory(get, post, organization_factory,
assert resp.data['inventory'] == ['The inventory associated with this Job Template is being deleted.']
@pytest.mark.django_db
def test_launch_with_pending_deletion_inventory_workflow(get, post, organization, inventory, admin_user):
wfjt = WorkflowJobTemplate.objects.create(
name='wfjt',
organization=organization,
inventory=inventory
)
inventory.pending_deletion = True
inventory.save()
resp = post(
url=reverse('api:workflow_job_template_launch', kwargs={'pk': wfjt.pk}),
user=admin_user, expect=400
)
assert resp.data['inventory'] == ['The inventory associated with this Workflow is being deleted.']
@pytest.mark.django_db
def test_launch_with_extra_credentials(get, post, organization_factory,
job_template_factory, machine_credential,

View File

@@ -34,6 +34,30 @@ def test_wfjt_schedule_accepted(post, workflow_job_template, admin_user):
post(url, {'name': 'test sch', 'rrule': RRULE_EXAMPLE}, admin_user, expect=201)
@pytest.mark.django_db
def test_wfjt_unprompted_inventory_rejected(post, workflow_job_template, inventory, admin_user):
r = post(
url=reverse('api:workflow_job_template_schedules_list', kwargs={'pk': workflow_job_template.id}),
data={'name': 'test sch', 'rrule': RRULE_EXAMPLE, 'inventory': inventory.pk},
user=admin_user,
expect=400
)
assert r.data['inventory'] == ['Field is not configured to prompt on launch.']
@pytest.mark.django_db
def test_wfjt_unprompted_inventory_accepted(post, workflow_job_template, inventory, admin_user):
workflow_job_template.ask_inventory_on_launch = True
workflow_job_template.save()
r = post(
url=reverse('api:workflow_job_template_schedules_list', kwargs={'pk': workflow_job_template.id}),
data={'name': 'test sch', 'rrule': RRULE_EXAMPLE, 'inventory': inventory.pk},
user=admin_user,
expect=201
)
assert Schedule.objects.get(pk=r.data['id']).inventory == inventory
@pytest.mark.django_db
def test_valid_survey_answer(post, admin_user, project, inventory, survey_spec_factory):
job_template = JobTemplate.objects.create(

View File

@@ -13,6 +13,7 @@ def test_empty_inventory(post, get, admin_user, organization, group_factory):
inventory.save()
resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': inventory.pk}), admin_user)
jdata = json.loads(resp.content)
jdata.pop('all')
assert inventory.hosts.count() == 0
assert jdata == {}

View File

@@ -361,6 +361,30 @@ def test_isolated_key_flag_readonly(get, patch, delete, admin):
assert settings.AWX_ISOLATED_KEY_GENERATION is True
@pytest.mark.django_db
@pytest.mark.parametrize('headers', [True, False])
def test_saml_x509cert_validation(patch, get, admin, headers):
cert = "MIIEogIBAAKCAQEA1T4za6qBbHxFpN5f9eFvA74MFjrsjcp1uvzOaE23AYKMDEJghJ6dqQ7GwHLNIeIeumqDFmODauIzrgSDJTT5+NG30Rr+rRi0zDkrkBAj/AtA+SaVhbzqB6ZSd7LaMly9XAc+82OKlNpuWS9hPmFaSShzDTXRu5RRyvm4NDCAOGDu5hyVR2pV/ffKDNfNkChnqzvRRW9laQcVmliZhlTGn7nPZ+JbjpwEy0nwW+4zoAiEvwnT52N4xTqIcYOnXtGiaf13dh7FkUfYmS0tzF3+h8QRKwtIm4y+sq84R/kr79/0t5aRUpJynNrECajzmArpL4IjXKTPIyUpTKirJgGnCwIDAQABAoIBAC6bbbm2hpsjfkVOpUKkhxMWUqX5MwK6oYjBAIwjkEAwPFPhnh7eXC87H42oidVCCt1LsmMOVQbjcdAzBEb5kTkk/Twi3k8O+1U3maHfJT5NZ2INYNjeNXh+jb/Dw5UGWAzpOIUR2JQ4Oa4cgPCVbppW0O6uOKz6+fWXJv+hKiUoBCC0TiY52iseHJdUOaKNxYRD2IyIzCAxFSd5tZRaARIYDsugXp3E/TdbsVWA7bmjIBOXq+SquTrlB8x7j3B7+Pi09nAJ2U/uV4PHE+/2Fl009ywfmqancvnhwnz+GQ5jjP+gTfghJfbO+Z6M346rS0Vw+osrPgfyudNHlCswHOECgYEA/Cfq25gDP07wo6+wYWbx6LIzj/SSZy/Ux9P8zghQfoZiPoaq7BQBPAzwLNt7JWST8U11LZA8/wo6ch+HSTMk+m5ieVuru2cHxTDqeNlh94eCrNwPJ5ayA5U6LxAuSCTAzp+rv6KQUx1JcKSEHuh+nRYTKvUDE6iA6YtPLO96lLUCgYEA2H5rOPX2M4w1Q9zjol77lplbPRdczXNd0PIzhy8Z2ID65qvmr1nxBG4f2H96ykW8CKLXNvSXreNZ1BhOXc/3Hv+3mm46iitB33gDX4mlV4Jyo/w5IWhUKRyoW6qXquFFsScxRzTrx/9M+aZeRRLdsBk27HavFEg6jrbQ0SleZL8CgYAaM6Op8d/UgkVrHOR9Go9kmK/W85kK8+NuaE7Ksf57R0eKK8AzC9kc/lMuthfTyOG+n0ff1i8gaVWtai1Ko+/hvfqplacAsDIUgYK70AroB8LCZ5ODj5sr2CPVpB7LDFakod7c6O2KVW6+L7oy5AHUHOkc+5y4PDg5DGrLxo68SQKBgAlGoWF3aG0c/MtDk51JZI43U+lyLs++ua5SMlMAeaMFI7rucpvgxqrh7Qthqukvw7a7A22fXUBeFWM5B2KNnpD9c+hyAKAa6l+gzMQzKZpuRGsyS2BbEAAS8kO7M3Rm4o2MmFfstI2FKs8nibJ79HOvIONQ0n+T+K5Utu2/UAQRAoGAFB4fiIyQ0nYzCf18Z4Wvi/qeIOW+UoBonIN3y1h4wruBywINHxFMHx4aVImJ6R09hoJ9D3Mxli3xF/8JIjfTG5fBSGrGnuofl14d/XtRDXbT2uhVXrIkeLL/ojODwwEx0VhxIRUEjPTvEl6AFSRRcBp3KKzQ/cu7ENDY6GTlOUI=" # noqa
if headers:
cert = '-----BEGIN CERTIFICATE-----\n' + cert + '\n-----END CERTIFICATE-----'
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'saml'})
resp = patch(url, user=admin, data={
'SOCIAL_AUTH_SAML_ENABLED_IDPS': {
"okta": {
"attr_last_name": "LastName",
"attr_username": "login",
"entity_id": "http://www.okta.com/abc123",
"attr_user_permanent_id": "login",
"url": "https://example.okta.com/app/abc123/xyz123/sso/saml",
"attr_email": "Email",
"x509cert": cert,
"attr_first_name": "FirstName"
}
}
})
assert resp.status_code == 200
@pytest.mark.django_db
def test_default_broker_url():
url = parse_url(settings.BROKER_URL)

View File

@@ -1,4 +1,5 @@
import pytest
import json
from awx.api.versioning import reverse
@@ -75,6 +76,52 @@ def test_node_accepts_prompted_fields(inventory, project, workflow_job_template,
user=admin_user, expect=201)
@pytest.mark.django_db
class TestExclusiveRelationshipEnforcement():
@pytest.fixture
def n1(self, workflow_job_template):
return WorkflowJobTemplateNode.objects.create(workflow_job_template=workflow_job_template)
@pytest.fixture
def n2(self, workflow_job_template):
return WorkflowJobTemplateNode.objects.create(workflow_job_template=workflow_job_template)
def generate_url(self, relationship, id):
return reverse('api:workflow_job_template_node_{}_nodes_list'.format(relationship),
kwargs={'pk': id})
relationship_permutations = [
['success', 'failure', 'always'],
['success', 'always', 'failure'],
['failure', 'always', 'success'],
['failure', 'success', 'always'],
['always', 'success', 'failure'],
['always', 'failure', 'success'],
]
@pytest.mark.parametrize("relationships", relationship_permutations, ids=["-".join(item) for item in relationship_permutations])
def test_multi_connections_same_parent_disallowed(self, post, admin_user, n1, n2, relationships):
for index, relationship in enumerate(relationships):
r = post(self.generate_url(relationship, n1.id),
data={'associate': True, 'id': n2.id},
user=admin_user,
expect=204 if index == 0 else 400)
if index != 0:
assert {'Error': 'Relationship not allowed.'} == json.loads(r.content)
@pytest.mark.parametrize("relationship", ['success', 'failure', 'always'])
def test_existing_relationship_allowed(self, post, admin_user, n1, n2, relationship):
post(self.generate_url(relationship, n1.id),
data={'associate': True, 'id': n2.id},
user=admin_user,
expect=204)
post(self.generate_url(relationship, n1.id),
data={'associate': True, 'id': n2.id},
user=admin_user,
expect=204)
@pytest.mark.django_db
class TestNodeCredentials:
'''

View File

@@ -0,0 +1,79 @@
# Python
import datetime
import pytest
import string
import random
import StringIO
# Django
from django.core.management import call_command
from django.core.management.base import CommandError
# AWX
from awx.main.models import RefreshToken
from awx.main.models.oauth import OAuth2AccessToken
from awx.api.versioning import reverse
@pytest.mark.django_db
class TestOAuth2RevokeCommand:
def test_non_existing_user(self):
out = StringIO.StringIO()
fake_username = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(6))
arg = '--user=' + fake_username
with pytest.raises(CommandError) as excinfo:
call_command('revoke_oauth2_tokens', arg, stdout=out)
assert 'A user with that username does not exist' in excinfo.value.message
out.close()
def test_revoke_all_access_tokens(self, post, admin, alice):
url = reverse('api:o_auth2_token_list')
for user in (admin, alice):
post(
url,
{'description': 'test token', 'scope': 'read'},
user
)
assert OAuth2AccessToken.objects.count() == 2
call_command('revoke_oauth2_tokens')
assert OAuth2AccessToken.objects.count() == 0
def test_revoke_access_token_for_user(self, post, admin, alice):
url = reverse('api:o_auth2_token_list')
post(
url,
{'description': 'test token', 'scope': 'read'},
alice
)
assert OAuth2AccessToken.objects.count() == 1
call_command('revoke_oauth2_tokens', '--user=admin')
assert OAuth2AccessToken.objects.count() == 1
call_command('revoke_oauth2_tokens', '--user=alice')
assert OAuth2AccessToken.objects.count() == 0
def test_revoke_all_refresh_tokens(self, post, admin, oauth_application):
url = reverse('api:o_auth2_token_list')
post(
url,
{
'description': 'test token for',
'scope': 'read',
'application': oauth_application.pk
},
admin
)
assert OAuth2AccessToken.objects.count() == 1
assert RefreshToken.objects.count() == 1
call_command('revoke_oauth2_tokens')
assert OAuth2AccessToken.objects.count() == 0
assert RefreshToken.objects.count() == 1
for r in RefreshToken.objects.all():
assert r.revoked is None
call_command('revoke_oauth2_tokens', '--all')
assert RefreshToken.objects.count() == 1
for r in RefreshToken.objects.all():
assert r.revoked is not None
assert isinstance(r.revoked, datetime.datetime)
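Taken together, these tests pin down the command's contract. A hedged usage sketch, with the flags exactly as exercised above:

from django.core.management import call_command

call_command('revoke_oauth2_tokens')                  # revoke every access token
call_command('revoke_oauth2_tokens', '--user=alice')  # only alice's access tokens
call_command('revoke_oauth2_tokens', '--all')         # also revoke refresh tokens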

View File

@@ -15,9 +15,9 @@ from awx.main.models import (
)
# other AWX
from awx.main.utils import model_to_dict
from awx.main.utils import model_to_dict, model_instance_diff
from awx.main.utils.common import get_allowed_fields
from awx.api.serializers import InventorySourceSerializer
from awx.main.signals import model_serializer_mapping
# Django
from django.contrib.auth.models import AnonymousUser
@@ -26,11 +26,6 @@ from django.contrib.auth.models import AnonymousUser
from crum import impersonate
model_serializer_mapping = {
InventorySource: InventorySourceSerializer
}
class TestImplicitRolesOmitted:
'''
Test that there is exactly 1 "create" entry in the activity stream for
@@ -220,8 +215,38 @@ def test_modified_not_allowed_field(somecloud_type):
activity_stream_registrar, but did not add its serializer to
the model->serializer mapping.
'''
from awx.main.signals import model_serializer_mapping
from awx.main.registrar import activity_stream_registrar
for Model in activity_stream_registrar.models:
assert 'modified' not in get_allowed_fields(Model(), model_serializer_mapping), Model
@pytest.mark.django_db
def test_survey_spec_create_entry(job_template, survey_spec_factory):
start_count = job_template.activitystream_set.count()
job_template.survey_spec = survey_spec_factory('foo')
job_template.save()
assert job_template.activitystream_set.count() == start_count + 1
@pytest.mark.django_db
def test_survey_create_diff(job_template, survey_spec_factory):
old = JobTemplate.objects.get(pk=job_template.pk)
job_template.survey_spec = survey_spec_factory('foo')
before, after = model_instance_diff(old, job_template, model_serializer_mapping)['survey_spec']
assert before == '{}'
assert json.loads(after) == survey_spec_factory('foo')
@pytest.mark.django_db
def test_saved_passwords_hidden_activity(workflow_job_template, job_template_with_survey_passwords):
node_with_passwords = workflow_job_template.workflow_nodes.create(
unified_job_template=job_template_with_survey_passwords,
extra_data={'bbbb': '$encrypted$fooooo'},
survey_passwords={'bbbb': '$encrypted$'}
)
node_with_passwords.delete()
entry = ActivityStream.objects.order_by('timestamp').last()
changes = json.loads(entry.changes)
assert 'survey_passwords' not in changes
assert json.loads(changes['extra_data'])['bbbb'] == '$encrypted$'

View File

@@ -38,6 +38,41 @@ class TestInventoryScript:
'remote_tower_id': host.id
}
def test_all_group(self, inventory):
inventory.groups.create(name='all', variables={'a1': 'a1'})
# make sure we return a1 details in output
data = inventory.get_script_data()
assert 'all' in data
assert data['all'] == {
'hosts': [],
'children': [],
'vars': {
'a1': 'a1'
}
}
def test_grandparent_group(self, inventory):
g1 = inventory.groups.create(name='g1', variables={'v1': 'v1'})
g2 = inventory.groups.create(name='g2', variables={'v2': 'v2'})
h1 = inventory.hosts.create(name='h1')
# h1 becomes indirect member of g1 group
g1.children.add(g2)
g2.hosts.add(h1)
# make sure we return g1 details in output
data = inventory.get_script_data(hostvars=1)
assert 'g1' in data
assert 'g2' in data
assert data['g1'] == {
'hosts': [],
'children': ['g2'],
'vars': {'v1': 'v1'}
}
assert data['g2'] == {
'hosts': ['h1'],
'children': [],
'vars': {'v2': 'v2'}
}
def test_slice_subset(self, inventory):
for i in range(3):
inventory.hosts.create(name='host{}'.format(i))
@@ -63,7 +98,9 @@ class TestInventoryScript:
}
if i < 2:
expected_data['contains_two_hosts'] = {'hosts': ['host{}'.format(i)], 'children': [], 'vars': {}}
assert inventory.get_script_data(slice_number=i + 1, slice_count=3) == expected_data
data = inventory.get_script_data(slice_number=i + 1, slice_count=3)
data.pop('all')
assert data == expected_data
@pytest.mark.django_db

View File

@@ -18,6 +18,18 @@ def test_awx_virtualenv_from_settings(inventory, project, machine_credential):
assert job.ansible_virtualenv_path == '/venv/ansible'
@pytest.mark.django_db
def test_prevent_slicing():
jt = JobTemplate.objects.create(
name='foo',
job_slice_count=4
)
job = jt.create_unified_job(_prevent_slicing=True)
assert job.job_slice_count == 1
assert job.job_slice_number == 0
assert isinstance(job, Job)
@pytest.mark.django_db
def test_awx_custom_virtualenv(inventory, project, machine_credential):
jt = JobTemplate.objects.create(

View File

@@ -3,11 +3,17 @@
import pytest
# AWX
from awx.main.models.workflow import WorkflowJob, WorkflowJobNode, WorkflowJobTemplateNode, WorkflowJobTemplate
from awx.main.models.workflow import (
WorkflowJob,
WorkflowJobNode,
WorkflowJobTemplateNode,
WorkflowJobTemplate,
)
from awx.main.models.jobs import JobTemplate, Job
from awx.main.models.projects import ProjectUpdate
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.api.versioning import reverse
from awx.api.views import WorkflowJobTemplateNodeSuccessNodesList
# Django
from django.test import TransactionTestCase
@@ -58,46 +64,100 @@ class TestWorkflowDAGFunctional(TransactionTestCase):
def test_workflow_done(self):
wfj = self.workflow_job(states=['failed', None, None, 'successful', None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
assert 3 == len(dag.mark_dnr_nodes())
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertFalse(has_failed)
assert reason is None
# verify that relaunched WFJ fails if a JT leaf is deleted
for jt in JobTemplate.objects.all():
jt.delete()
relaunched = wfj.create_relaunch_workflow_job()
dag = WorkflowDAG(workflow_job=relaunched)
is_done, has_failed = dag.is_workflow_done()
self.assertTrue(is_done)
self.assertTrue(has_failed)
def test_workflow_fails_for_unfinished_node(self):
wfj = self.workflow_job(states=['error', None, None, None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
assert "Workflow job node {} related unified job template missing".format(wfj.workflow_nodes.all()[0].id)
def test_workflow_fails_for_no_error_handler(self):
wfj = self.workflow_job(states=['successful', 'failed', None, None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
def test_workflow_fails_leaf(self):
wfj = self.workflow_job(states=['successful', 'successful', 'failed', None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed = dag.has_workflow_failed()
self.assertTrue(is_done)
self.assertTrue(has_failed)
def test_workflow_not_finished(self):
wfj = self.workflow_job(states=['new', None, None, None, None])
dag = WorkflowDAG(workflow_job=wfj)
is_done, has_failed = dag.is_workflow_done()
dag.mark_dnr_nodes()
is_done = dag.is_workflow_done()
has_failed, reason = dag.has_workflow_failed()
self.assertFalse(is_done)
self.assertFalse(has_failed)
assert reason is None
@pytest.mark.django_db
class TestWorkflowDNR():
@pytest.fixture
def workflow_job_fn(self):
def fn(states=['new', 'new', 'new', 'new', 'new', 'new']):
r"""
Workflow topology:
node[0]
/ |
s f
/ |
node[1] node[3]
/ |
s f
/ |
node[2] node[4]
\ |
s f
\ |
node[5]
"""
wfj = WorkflowJob.objects.create()
jt = JobTemplate.objects.create(name='test-jt')
nodes = [WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=jt) for i in range(0, 6)]
for node, state in zip(nodes, states):
if state:
node.job = jt.create_job()
node.job.status = state
node.job.save()
node.save()
nodes[0].success_nodes.add(nodes[1])
nodes[1].success_nodes.add(nodes[2])
nodes[0].failure_nodes.add(nodes[3])
nodes[3].failure_nodes.add(nodes[4])
nodes[2].success_nodes.add(nodes[5])
nodes[4].failure_nodes.add(nodes[5])
return wfj, nodes
return fn
def test_workflow_dnr_because_parent(self, workflow_job_fn):
wfj, nodes = workflow_job_fn(states=['successful', None, None, None, None, None,])
dag = WorkflowDAG(workflow_job=wfj)
workflow_nodes = dag.mark_dnr_nodes()
assert 2 == len(workflow_nodes)
assert nodes[3] in workflow_nodes
assert nodes[4] in workflow_nodes
@pytest.mark.django_db
@@ -126,7 +186,7 @@ class TestWorkflowJob:
assert nodes[0].failure_nodes.filter(id=nodes[3].id).exists()
assert nodes[3].failure_nodes.filter(id=nodes[4].id).exists()
def test_inherit_ancestor_artifacts_from_job(self, project, mocker):
def test_inherit_ancestor_artifacts_from_job(self, job_template, mocker):
"""
Assure that nodes along the line of execution inherit artifacts
from both jobs ran, and from the accumulation of old jobs
@@ -137,13 +197,13 @@ class TestWorkflowJob:
# Workflow job nodes
job_node = WorkflowJobNode.objects.create(workflow_job=wfj, job=job,
ancestor_artifacts={'a': 42})
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj)
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=job_template)
# Connect old job -> new job
mocker.patch.object(queued_node, 'get_parent_nodes', lambda: [job_node])
assert queued_node.get_job_kwargs()['extra_vars'] == {'a': 42, 'b': 43}
assert queued_node.ancestor_artifacts == {'a': 42, 'b': 43}
def test_inherit_ancestor_artifacts_from_project_update(self, project, mocker):
def test_inherit_ancestor_artifacts_from_project_update(self, project, job_template, mocker):
"""
Test that the existence of a project update (no artifacts) does
not break the flow of ancestor_artifacts
@@ -154,7 +214,7 @@ class TestWorkflowJob:
# Workflow job nodes
project_node = WorkflowJobNode.objects.create(workflow_job=wfj, job=update,
ancestor_artifacts={'a': 42, 'b': 43})
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj)
queued_node = WorkflowJobNode.objects.create(workflow_job=wfj, unified_job_template=job_template)
# Connect project update -> new job
mocker.patch.object(queued_node, 'get_parent_nodes', lambda: [project_node])
assert queued_node.get_job_kwargs()['extra_vars'] == {'a': 42, 'b': 43}
@@ -186,18 +246,12 @@ class TestWorkflowJobTemplate:
assert parent_qs[0] == wfjt.workflow_job_template_nodes.all()[1]
def test_topology_validator(self, wfjt):
from awx.api.views import WorkflowJobTemplateNodeChildrenBaseList
test_view = WorkflowJobTemplateNodeChildrenBaseList()
test_view = WorkflowJobTemplateNodeSuccessNodesList()
nodes = wfjt.workflow_job_template_nodes.all()
node_assoc = WorkflowJobTemplateNode.objects.create(workflow_job_template=wfjt)
nodes[2].always_nodes.add(node_assoc)
# test cycle validation
assert test_view.is_valid_relation(node_assoc, nodes[0]) == {'Error': 'Cycle detected.'}
# test multi-ancestor validation
assert test_view.is_valid_relation(node_assoc, nodes[1]) == {'Error': 'Multiple parent relationship not allowed.'}
# test mutex validation
test_view.relationship = 'failure_nodes'
print(nodes[0].success_nodes.get(id=nodes[1].id).failure_nodes.get(id=nodes[2].id))
assert test_view.is_valid_relation(nodes[2], nodes[0]) == {'Error': 'Cycle detected.'}
def test_always_success_failure_creation(self, wfjt, admin, get):
wfjt_node = wfjt.workflow_job_template_nodes.all()[1]
node = WorkflowJobTemplateNode.objects.create(workflow_job_template=wfjt)
@@ -215,3 +269,55 @@ class TestWorkflowJobTemplate:
wfjt2.validate_unique()
wfjt2 = WorkflowJobTemplate(name='foo', organization=None)
wfjt2.validate_unique()
@pytest.mark.django_db
def test_workflow_ancestors(organization):
# Spawn order of templates grandparent -> parent -> child
# create child WFJT and workflow job
child = WorkflowJobTemplate.objects.create(organization=organization, name='child')
child_job = WorkflowJob.objects.create(
workflow_job_template=child,
launch_type='workflow'
)
# create parent WFJT and workflow job, and link it up
parent = WorkflowJobTemplate.objects.create(organization=organization, name='parent')
parent_job = WorkflowJob.objects.create(
workflow_job_template=parent,
launch_type='workflow'
)
WorkflowJobNode.objects.create(
workflow_job=parent_job,
unified_job_template=child,
job=child_job
)
# create grandparent WFJT and workflow job and link it up
grandparent = WorkflowJobTemplate.objects.create(organization=organization, name='grandparent')
grandparent_job = WorkflowJob.objects.create(
workflow_job_template=grandparent,
launch_type='schedule'
)
WorkflowJobNode.objects.create(
workflow_job=grandparent_job,
unified_job_template=parent,
job=parent_job
)
# ancestors method gives the list of ancestor WFJTs, nearest first
assert child_job.get_ancestor_workflows() == [parent, grandparent]
@pytest.mark.django_db
def test_workflow_ancestors_recursion_prevention(organization):
# This is toxic database data; the test verifies it does not create an infinite loop
wfjt = WorkflowJobTemplate.objects.create(organization=organization, name='child')
wfj = WorkflowJob.objects.create(
workflow_job_template=wfjt,
launch_type='workflow'
)
WorkflowJobNode.objects.create(
workflow_job=wfj,
unified_job_template=wfjt,
job=wfj # well, this is a problem
)
# mostly, we just care that this assertion finishes in finite time
assert wfj.get_ancestor_workflows() == []

View File

@@ -5,6 +5,7 @@ from datetime import timedelta
from awx.main.scheduler import TaskManager
from awx.main.utils import encrypt_field
from awx.main.models import WorkflowJobTemplate, JobTemplate
@pytest.mark.django_db
@@ -21,6 +22,95 @@ def test_single_job_scheduler_launch(default_instance_group, job_template_factor
TaskManager.start_task.assert_called_once_with(j, default_instance_group, [], instance)
@pytest.mark.django_db
class TestJobLifeCycle:
def run_tm(self, tm, expect_channel=None, expect_schedule=None, expect_commit=None):
"""Test helper method that takes parameters to assert against
expect_channel - list of expected websocket emit channel message calls
expect_schedule - list of expected calls to reschedule itself
expect_commit - list of expected on_commit calls
If any of these are None, then the assertion is not made.
"""
if expect_schedule and len(expect_schedule) > 1:
raise RuntimeError('Task manager should reschedule itself one time, at most.')
with mock.patch('awx.main.models.unified_jobs.UnifiedJob.websocket_emit_status') as mock_channel:
with mock.patch('awx.main.utils.common._schedule_task_manager') as tm_sch:
# Jobs are ultimately submitted in the on_commit hook, but that hook will not
# actually run here, because it waits for the outer transaction, which in
# this case is the test itself
with mock.patch('django.db.connection.on_commit') as mock_commit:
tm.schedule()
if expect_channel is not None:
assert mock_channel.mock_calls == expect_channel
if expect_schedule is not None:
assert tm_sch.mock_calls == expect_schedule
if expect_commit is not None:
assert mock_commit.mock_calls == expect_commit
def test_task_manager_workflow_rescheduling(self, job_template_factory, inventory, project, default_instance_group):
jt = JobTemplate.objects.create(
allow_simultaneous=True,
inventory=inventory,
project=project,
playbook='helloworld.yml'
)
wfjt = WorkflowJobTemplate.objects.create(name='foo')
for i in range(2):
wfjt.workflow_nodes.create(
unified_job_template=jt
)
wj = wfjt.create_unified_job()
assert wj.workflow_nodes.count() == 2
wj.signal_start()
tm = TaskManager()
# Transitions workflow job to running
# needs to re-schedule so it spawns jobs next round
self.run_tm(tm, [mock.call('running')], [mock.call()])
# Spawns jobs
# needs re-schedule to submit jobs next round
self.run_tm(tm, [mock.call('pending'), mock.call('pending')], [mock.call()])
assert jt.jobs.count() == 2 # task manager spawned jobs
# Submits jobs
# intermission - jobs will run and reschedule TM when finished
self.run_tm(tm, [mock.call('waiting'), mock.call('waiting')], [])
# I am the job runner
for job in jt.jobs.all():
job.status = 'successful'
job.save()
# Finishes workflow
# no further action is necessary, so rescheduling should not happen
self.run_tm(tm, [mock.call('successful')], [])
def test_task_manager_workflow_workflow_rescheduling(self):
wfjts = [WorkflowJobTemplate.objects.create(name='foo')]
for i in range(5):
wfjt = WorkflowJobTemplate.objects.create(name='foo{}'.format(i))
wfjts[-1].workflow_nodes.create(
unified_job_template=wfjt
)
wfjts.append(wfjt)
wj = wfjts[0].create_unified_job()
wj.signal_start()
tm = TaskManager()
while wfjts[0].status != 'successful':
wfjts[1].refresh_from_db()
if wfjts[1].status == 'successful':
# final run, no more work to do
self.run_tm(tm, expect_schedule=[])
else:
self.run_tm(tm, expect_schedule=[mock.call()])
wfjts[0].refresh_from_db()
@pytest.mark.django_db
def test_single_jt_multi_job_launch_blocks_last(default_instance_group, job_template_factory, mocker):
instance = default_instance_group.instances.all()[0]

View File

@@ -14,6 +14,65 @@ from rest_framework import serializers
EXAMPLE_PRIVATE_KEY = '-----BEGIN PRIVATE KEY-----\nxyz==\n-----END PRIVATE KEY-----'
EXAMPLE_ENCRYPTED_PRIVATE_KEY = '-----BEGIN PRIVATE KEY-----\nProc-Type: 4,ENCRYPTED\nxyz==\n-----END PRIVATE KEY-----'
PKCS8_PRIVATE_KEY = '''-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQD0uyqyUHELQ25B
8lNBu/ZfVx8fPFT6jvAUscxfWLqsZCJrR8BWadXMa/0ALMaUuZbZ8Ug27jztOSO8
w8hJ6dqHaQ2gfbwsfbF6XHaetap0OoAFtnaiULSvljOkoWG+WSyfvJ73ZwEP3KzW
0JbNX24zGFdTFzX1W+8BbLpEIw3XiP9iYPtu0uit6VradMrt2Kdu+VKlQzbG1+89
g70IyFkvopynnWAkA+YXNo08dxOzmci7/G0Cp1Lwh4IAH++HbE2E4odWm5zoCaT7
gcZzKuZs/kkDHaS9O5VjsWGrZ+mp3NgeABbFRP0jDhCtS8QRa94RC6mobtnYoRd7
C1Iz3cdjAgMBAAECggEAb5p9BZUegBrviH5YDmWHnIHP7QAn5p1RibZtM1v0wRHn
ClJNuXqJJ7BlT3Ob2Y3q55ebLYWmXi4NCJOl3mMZJ2A2eSZtrkJhsaHB7G1+/oMB
B9nmLu4r/9i4005PEy16ZpvvSHZ+KvwhC9NSufRXflCO3hL7JdmXXGh3ZwQvV0a7
mP1RIQKIcLynPBTbTH1w30Znj2M4bSjUlsLbOYhwg2YQxa1qKuCtata5qdAVbgny
JYPruBhcHLPGvC0FBcd8zoYWLvQ52hcXNxrl0iN1KY7zIEYmU+3gbuBIoVl2Qo/p
zmH01bo9h9p5DdkjQ6MdjvrOX8aT93S1g9y8WqtoXQKBgQD7E2+RZ/XNIFts9cqG
2S7aywIydkgEmaOJl1fzebutJPPQXJDpQZtEenr+CG7KsRPf8nJ3jc/4OHIsnHYD
WBgXLQz0QWEgXwTRicXsxsARzHKV2Lb8IsXK5vfia+i9fxZV3WwkKVXOmTJHcVl1
XD5zfbAlrQ4r+Uo618zgpchsBQKBgQD5h+A+PX+3PdUPNkHdCltMwaSsXjBcYYoF
uZGR4v8jRQguGD5h8Eyk/cS3VVryYRKiYJCvaPFXTzN6GAsQoSnMW+37GKsbL+oK
5JYoSiCY6BpaJO3uo/UwvitV8EjHdaArb5oBjx1yiobRqhVJ+iH1PKxgnQFI5RgO
4AhnnYMqRwKBgQDUX+VQXlp5LzSGXwX3uH+8jFmIa6qRUZAWU1EO3tqUI5ykk5fz
5g27B8s/U8y7YLuKA581Z1wR/1T8TUA5peuCtxWtChxo8Fa4E0y68ocGxyPpgk2N
yq/56BKnkFVm7Lfs24WctOYjAkyYR9W+ws8Ei71SsSY6pfxW97ESGMkGLQKBgAlW
ABnUCzc75QDQst4mSQwyIosgawbJz3QvYTboG0uihY/T8GGRsAxsQjPpyaFP6HaS
zlcBwiXWHMLwq1lP7lRrDBhc7+nwfP0zWDrhqx6NcI722sAW+lF8i/qHJvHvgLKf
Vk/AnwVuEWU+y9UcurCGOJzUwvuLNr83upjF1+Z5AoGAP91XiBCorJLRJaryi6zt
iCjRxoVsrN6NvAh+MQ1yfAopO4RhxEXM/uUOBkulNhlnp+evSxUwDnFNOWzsZVn9
B6yXdJ9BTWXFX7YhEkosRZCXnNWX4Dz+DGU/yvSHQR/JYj8mRav98TmJU6lK6Vw/
YukmWPxNB+x4Ym3RNPrLpU4=
-----END PRIVATE KEY-----'''
PKCS8_ENCRYPTED_PRIVATE_KEY = '''-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFHzBJBgkqhkiG9w0BBQ0wPDAbBgkqhkiG9w0BBQwwDgQIC4E/DX+33rACAggA
MB0GCWCGSAFlAwQBAgQQbeAsQdsEKoztekP5JXmHFASCBNAmNAMGSnycmN4sYleT
NS9r/ph9v58dv0/hzbE6TCt/i6nmA/D8mtuYB8gm30E/DOuN/dnL3z2gpyvr478P
FjoRnueuwMdLcfEpzEXotJdc7vmUsSjTFq99oh84JHdCfWSRtxkDu64dwp3GPC9+
f1qqg6o4/bPkjni+bCMgq9vgr4K+vuaKzaJqUTEQFuT3CirDGoWGpfRDtDoBmlg8
8esEXoA6RD2DNv6fQrOu9Q4Fc0YkzcoIfY6EJxu+f75LF/NUVpmeJ8QDjj6VFVuX
35ChPYolhBSC/MHBHAVVrn17FAdpLkiz7hIR7KBIg2nuu8oUnPMzDff/CeehYzNb
OH12P9zaHZa3DZHuu27oI6yUdgs8HYNLtBzXH/DbyAeW9alg1Ofber5DO62ieL3E
LqBd4R7qqDSTQmiA6B8LkVIrFrIOqn+nWoM9gHhIrTI409A/oTbpen87sZ4MIQk4
Vjw/A/D5OYhnjOEVgMXrNpKzFfRJPdKh8LYjAaytsLKZk/NOWKpBOcIPhBG/agmx
CX2NE2tpwNo+uWSOG6qTqc8xiQFDsQmbz9YEuux13J3Hg5gVMOJQNMvYpxgFD156
Z82QBMdrY1tRIA91kW97UDj6OEAyz8HnmL+rCiRLGJXKUnZsSET+VHs9+uhBggX8
GxliP35pYlmdejqGWHjiYlGF2+WKrd5axx/m1DcfZdXSaF1IdLKafnNXzUZbOnOM
7RbKHDhBKr/vkBV1SGYgDLNn4hflFzhdI65AKxO2KankzaWxF09/0kRZlmxm+tZX
8r0fHe9IO1KQR/52Kfg1vAQdt2KiyAziw5+tcqQT28knSDboNKpD2Du8BAoH9xG7
0Ca57oBHh/VGzM/niJBjI4EMOPZKuRJsxZF7wOOO6NTh/XFf3LpzsR1y3qoXN4cR
n+/jLUO/3kSGsqso6DT9C0o1pTrnORaJb4aF05jljFx9LYiQUOoLujp8cVW7XxQB
pTgJEFxTN5YA//cwYu3GOJ1AggSeF/WkHCDfCTpTfnO/WTZ0oc+nNyC1lBVfcZ67
GCH8COsfmhusrYiJUN6vYZIr4MfylVg53PUKYbLKYad9bIIaYYuu3MP4CtKDWHvk
8q+GzpjVUCPwjjsea56RMav+xDPvmgIayDptae26Fv+mRPcwqORYMFNtVRG6DUXo
+lrWlaDlkfyfZlQ6sK5c1cJNI8pSPocP/c9TBhP+xFROiWxvMOxhM7DmDl8rhAxU
ttZSukCg7n38AFsUqg5eLLq9sT+P6VmX8d3YflPBIkvNgK7nKUTwgrpbuADo07b0
sVlAY/9SmtHvOCibxphvPYUOhwWo97PzzAsdVGz/xRvH8mzI/Iftbc1U2C2La8FJ
xjaAFwWK/CjQSwnCB8raWo9FUavV6xdb2K0G4VBVDvZO9EJBzX0m6EqQx3XMZf1s
crP0Dp9Ee66vVOlj+XnyyTkUADSYHr8/42Aohv96fJEMjy5gbBl4QQm2QKzAkq9n
lrHvQpCxPixUUAEI0ZL1Y74hcMecnfbpGibrUvSp+cyDCOG92KKxLXEgVYCbXHZu
bOlOanZF3vC6I9dUC2d8I5B87b2K+y57OkWpmS3zxCEpsBqQmn8Te50DnlkPJPBj
GLqbpJyX2r3p/Rmo6mLY71SqpA==
-----END ENCRYPTED PRIVATE KEY-----'''
@pytest.mark.django_db
def test_default_cred_types():
@@ -89,6 +148,10 @@ def test_credential_creation(organization_factory):
[EXAMPLE_PRIVATE_KEY, 'super-secret', False], # unencrypted key, unlock pass
[EXAMPLE_ENCRYPTED_PRIVATE_KEY, 'super-secret', True], # encrypted key, unlock pass
[EXAMPLE_ENCRYPTED_PRIVATE_KEY, None, False], # encrypted key, no unlock pass
[PKCS8_ENCRYPTED_PRIVATE_KEY, 'passme', True], # encrypted PKCS8 key, unlock pass
[PKCS8_ENCRYPTED_PRIVATE_KEY, None, False], # encrypted PKCS8 key, no unlock pass
[PKCS8_PRIVATE_KEY, None, True], # unencrypted PKCS8 key, no unlock pass
[PKCS8_PRIVATE_KEY, 'passme', False], # unencrypted PKCS8 key, unlock pass
[None, None, True], # no key, no unlock pass
[None, 'super-secret', False], # no key, unlock pass
['INVALID-KEY-DATA', None, False], # invalid key data

View File

@@ -0,0 +1,122 @@
import glob
import json
import os
from django.conf import settings
from pip._internal.req import parse_requirements
def test_python_and_js_licenses():
def index_licenses(path):
# Check for GPL (forbidden) and LGPL (need to ship source)
# This is not meant to be an exhaustive check.
def check_license(license_file):
with open(license_file) as f:
data = f.read()
is_lgpl = 'GNU LESSER GENERAL PUBLIC LICENSE' in data.upper()
# The LGPL refers to the GPL in-text
# Case-sensitive for GPL to match license text and not PSF license reference
is_gpl = 'GNU GENERAL PUBLIC LICENSE' in data and not is_lgpl
return (is_gpl, is_lgpl)
def find_embedded_source_version(path, name):
for entry in os.listdir(path):
# Check both '-' and '_' variants of the filename, since Python package names normalize between them
for fname in [name, name.replace('-','_')]:
if entry.startswith(fname) and entry.endswith('.tar.gz'):
entry = entry[:-7]
(n, v) = entry.rsplit('-',1)
return v
return None
list = {}
for txt_file in glob.glob('%s/*.txt' % path):
filename = txt_file.split('/')[-1]
name = filename[:-4].lower()
(is_gpl, is_lgpl) = check_license(txt_file)
list[name] = {
'name': name,
'filename': filename,
'gpl': is_gpl,
'source_required': (is_gpl or is_lgpl),
'source_version': find_embedded_source_version(path, name)
}
return list
def read_api_requirements(path):
ret = {}
for req_file in ['requirements.txt', 'requirements_ansible.txt', 'requirements_git.txt', 'requirements_ansible_git.txt']:
fname = '%s/%s' % (path, req_file)
for reqt in parse_requirements(fname, session=''):
name = reqt.name
version = str(reqt.specifier)
if version.startswith('=='):
version=version[2:]
if reqt.link:
(name, version) = reqt.link.filename.split('@',1)
if name.endswith('.git'):
name = name[:-4]
ret[name] = { 'name': name, 'version': version}
return ret
def read_ui_requirements(path):
def json_deps(jsondata):
ret = {}
deps = jsondata.get('dependencies',{})
for key in deps.keys():
key = key.lower()
devonly = deps[key].get('dev',False)
if not devonly:
if key not in ret.keys():
depname = key.replace('/','-')
ret[depname] = {
'name': depname,
'version': deps[key]['version']
}
ret.update(json_deps(deps[key]))
return ret
with open('%s/package-lock.json' % path) as f:
jsondata = json.load(f)
return json_deps(jsondata)
def remediate_licenses_and_requirements(licenses, requirements):
errors = []
items = licenses.keys()
items.sort()
for item in items:
if item not in requirements.keys() and item != 'awx':
errors.append(" license file %s does not correspond to an existing requirement; it should be removed." % (licenses[item]['filename'],))
continue
# uWSGI has a linking exception
if licenses[item]['gpl'] and item != 'uwsgi':
errors.append(" license for %s is GPL. This software cannot be used." % (item,))
if licenses[item]['source_required']:
version = requirements[item]['version']
if version != licenses[item]['source_version']:
errors.append(" embedded source for %s is %s instead of the required version %s" % (item, licenses[item]['source_version'], version))
elif licenses[item]['source_version']:
errors.append(" embedded source version %s for %s is included despite not being needed" % (licenses[item]['source_version'],item))
items = requirements.keys()
items.sort()
for item in items:
if item not in licenses.keys():
errors.append(" license for requirement %s is missing" %(item,))
return errors
base_dir = settings.BASE_DIR
api_licenses = index_licenses('%s/../docs/licenses' % base_dir)
ui_licenses = index_licenses('%s/../docs/licenses/ui' % base_dir)
api_requirements = read_api_requirements('%s/../requirements' % base_dir)
ui_requirements = read_ui_requirements('%s/ui' % base_dir)
errors = []
errors += remediate_licenses_and_requirements(ui_licenses, ui_requirements)
errors += remediate_licenses_and_requirements(api_licenses, api_requirements)
if errors:
raise Exception('Included licenses not consistent with requirements:\n%s' %
'\n'.join(errors))

View File

@@ -0,0 +1,25 @@
import pytest
from awx.main.access import (
ProjectAccess,
)
@pytest.mark.django_db
@pytest.mark.parametrize("role", ["admin_role", "project_admin_role"])
def test_access_admin(role, organization, project, user):
a = user('admin', False)
project.organization = organization
role = getattr(organization, role)
role.members.add(a)
access = ProjectAccess(a)
assert access.can_read(project)
assert access.can_add(None)
assert access.can_add({'organization': organization.id})
assert access.can_change(project, None)
assert access.can_change(project, {'organization': organization.id})
assert access.can_admin(project, None)
assert access.can_admin(project, {'organization': organization.id})
assert access.can_delete(project)

View File

@@ -1,8 +1,9 @@
import pytest
import mock
from django.test import TransactionTestCase
from awx.main.access import UserAccess
from awx.main.access import UserAccess, RoleAccess, TeamAccess
from awx.main.models import User, Organization, Inventory
@@ -59,6 +60,57 @@ def test_user_queryset(user):
assert qs.count() == 1
@pytest.mark.django_db
@pytest.mark.parametrize('ext_auth,superuser,expect', [
(True, True, True),
(False, True, True), # your setting can't touch me, I'm superuser
(True, False, True), # org admin, managing my peeps
(False, False, False), # setting blocks org admin
], ids=['superuser', 'superuser-off', 'org', 'org-off'])
def test_manage_org_auth_setting(ext_auth, superuser, expect, organization, rando, user, team):
u = user('foo-user', is_superuser=superuser)
if not superuser:
organization.admin_role.members.add(u)
with mock.patch('awx.main.access.settings') as settings_mock:
settings_mock.MANAGE_ORGANIZATION_AUTH = ext_auth
assert [
# use via /api/v2/users/N/roles/
UserAccess(u).can_attach(rando, organization.admin_role, 'roles'),
UserAccess(u).can_attach(rando, team.admin_role, 'roles'),
# use via /api/v2/roles/N/users/
RoleAccess(u).can_attach(organization.admin_role, rando, 'members'),
RoleAccess(u).can_attach(team.admin_role, rando, 'members')
] == [expect for i in range(4)]
assert [
# use via /api/v2/users/N/roles/
UserAccess(u).can_unattach(rando, organization.admin_role, 'roles'),
UserAccess(u).can_unattach(rando, team.admin_role, 'roles'),
# use via /api/v2/roles/N/users/
RoleAccess(u).can_unattach(organization.admin_role, rando, 'members'),
RoleAccess(u).can_unattach(team.admin_role, rando, 'members')
] == [expect for i in range(4)]
@pytest.mark.django_db
@pytest.mark.parametrize('ext_auth', [True, False])
def test_team_org_resource_role(ext_auth, organization, rando, org_admin, team):
with mock.patch('awx.main.access.settings') as settings_mock:
settings_mock.MANAGE_ORGANIZATION_AUTH = ext_auth
assert [
# use via /api/v2/teams/N/roles/
TeamAccess(org_admin).can_attach(team, organization.workflow_admin_role, 'roles'),
# use via /api/v2/roles/teams/
RoleAccess(org_admin).can_attach(organization.workflow_admin_role, team, 'member_role.parents')
] == [True for i in range(2)]
assert [
# use via /api/v2/teams/N/roles/
TeamAccess(org_admin).can_unattach(team, organization.workflow_admin_role, 'roles'),
# use via /api/v2/roles/teams/
RoleAccess(org_admin).can_unattach(organization.workflow_admin_role, team, 'member_role.parents')
] == [True for i in range(2)]
@pytest.mark.django_db
def test_user_accessible_objects(user, organization):
'''

View File

@@ -149,6 +149,20 @@ class TestWorkflowJobAccess:
wfjt.execute_role.members.add(alice)
assert not WorkflowJobAccess(rando).can_start(workflow_job)
def test_relaunch_inventory_access(self, workflow_job, inventory, rando):
wfjt = workflow_job.workflow_job_template
wfjt.execute_role.members.add(rando)
assert rando in wfjt.execute_role
workflow_job.created_by = rando
workflow_job.inventory = inventory
workflow_job.save()
wfjt.ask_inventory_on_launch = True
wfjt.save()
JobLaunchConfig.objects.create(job=workflow_job, inventory=inventory)
assert not WorkflowJobAccess(rando).can_start(workflow_job)
inventory.use_role.members.add(rando)
assert WorkflowJobAccess(rando).can_start(workflow_job)
@pytest.mark.django_db
class TestWFJTCopyAccess:

View File

@@ -1,4 +1,6 @@
# -*- coding: utf-8 -*-
import re
import mock
import pytest
import requests
@@ -218,6 +220,14 @@ class TestHostInsights():
assert resp.data['error'] == 'The Insights Credential for "inventory_name_here" was not found.'
assert resp.status_code == 404
def test_get_insights_user_agent(self, patch_parent, mocker):
with mock.patch.object(requests.Session, 'get') as get:
HostInsights()._get_insights('https://example.org', 'joe', 'example')
assert get.call_count == 1
args, kwargs = get.call_args_list[0]
assert args == ('https://example.org',)
assert re.match(r'AWX [^\s]+ \(open\)', kwargs['headers']['User-Agent'])
class TestSurveySpecValidation:

View File

@@ -55,8 +55,9 @@ class TestReplayJobEvents():
r.get_serializer = lambda self: mock_serializer_fn
r.get_job = mocker.MagicMock(return_value=Job(id=3))
r.sleep = mocker.MagicMock()
r.get_job_events = lambda self: job_events
r.get_job_events = lambda self: (job_events, len(job_events))
r.replay_offset = lambda *args, **kwarg: 0
r.emit_job_status = lambda job, status: True
return r
@mock.patch('awx.main.management.commands.replay_job_events.emit_channel_notification', lambda *a, **kw: None)

View File

@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
import tempfile
import json
import yaml
@@ -10,7 +11,9 @@ from awx.main.models import (
Job,
JobTemplate,
JobLaunchConfig,
WorkflowJobTemplate
WorkflowJobTemplate,
Project,
Inventory
)
from awx.main.utils.safe_yaml import SafeLoader
@@ -305,3 +308,49 @@ class TestWorkflowSurveys:
)
assert wfjt.variables_needed_to_start == ['question2']
assert not wfjt.can_start_without_user_input()
@pytest.mark.django_db
@pytest.mark.parametrize('provided_vars,valid', [
({'tmpl_var': 'bar'}, True), # same as template, not counted as prompts
({'tmpl_var': 'bar2'}, False), # different value from template, not okay
({'tmpl_var': 'bar', 'a': 2}, False), # extra key, not okay
({'tmpl_var': 'bar', False: 2}, False), # Falsy key
({'tmpl_var': 'bar', u'🐉': u'🐉'}, False), # dragons
])
class TestExtraVarsNoPrompt:
def process_vars_and_assert(self, tmpl, provided_vars, valid):
prompted_fields, ignored_fields, errors = tmpl._accept_or_ignore_job_kwargs(
extra_vars=provided_vars
)
if valid:
assert not ignored_fields
assert not errors
else:
assert ignored_fields
assert errors
def test_jt_extra_vars_counting(self, provided_vars, valid):
jt = JobTemplate(
name='foo',
extra_vars={'tmpl_var': 'bar'},
project=Project(),
project_id=42,
playbook='helloworld.yml',
inventory=Inventory(),
inventory_id=42
)
prompted_fields, ignored_fields, errors = jt._accept_or_ignore_job_kwargs(
extra_vars=provided_vars
)
self.process_vars_and_assert(jt, provided_vars, valid)
def test_wfjt_extra_vars_counting(self, provided_vars, valid):
wfjt = WorkflowJobTemplate(
name='foo',
extra_vars={'tmpl_var': 'bar'}
)
prompted_fields, ignored_fields, errors = wfjt._accept_or_ignore_job_kwargs(
extra_vars=provided_vars
)
self.process_vars_and_assert(wfjt, provided_vars, valid)

View File

@@ -236,4 +236,4 @@ class TestWorkflowJobNodeJobKWARGS:
def test_get_ask_mapping_integrity():
assert WorkflowJobTemplate.get_ask_mapping().keys() == ['extra_vars']
assert WorkflowJobTemplate.get_ask_mapping().keys() == ['extra_vars', 'inventory']

View File

@@ -0,0 +1,42 @@
import pytest
from awx.main.scheduler.dag_simple import SimpleDAG
@pytest.fixture
def node_generator():
def fn():
return object()
return fn
@pytest.fixture
def simple_cycle_1(node_generator):
g = SimpleDAG()
nodes = [node_generator() for i in range(4)]
map(lambda n: g.add_node(n), nodes)
r'''
0
/\
/ \
. .
1---.2
. |
| |
-----|
.
3
'''
g.add_edge(nodes[0], nodes[1], "success_nodes")
g.add_edge(nodes[0], nodes[2], "success_nodes")
g.add_edge(nodes[2], nodes[3], "success_nodes")
g.add_edge(nodes[2], nodes[1], "success_nodes")
g.add_edge(nodes[1], nodes[2], "success_nodes")
return (g, nodes)
def test_has_cycle(simple_cycle_1):
(g, nodes) = simple_cycle_1
assert g.has_cycle() is True

View File

@@ -0,0 +1,318 @@
import pytest
import uuid
import os
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_text
from awx.main.scheduler.dag_workflow import WorkflowDAG
class Job():
def __init__(self, status='successful'):
self.status = status
class WorkflowNode(object):
def __init__(self, id=None, job=None, do_not_run=False, unified_job_template=None):
self.id = id if id is not None else uuid.uuid4()
self.job = job
self.do_not_run = do_not_run
self.unified_job_template = unified_job_template
@pytest.fixture
def wf_node_generator(mocker):
pytest.count = 0
def fn(**kwargs):
wfn = WorkflowNode(id=pytest.count, unified_job_template=object(), **kwargs)
pytest.count += 1
return wfn
return fn
@pytest.fixture
def workflow_dag_1(wf_node_generator):
g = WorkflowDAG()
nodes = [wf_node_generator() for i in range(4)]
map(lambda n: g.add_node(n), nodes)
r'''
0
/\
S / \
/ \
1 |
| |
F | | S
| |
3 |
\ |
F \ |
\/
2
'''
g.add_edge(nodes[0], nodes[1], "success_nodes")
g.add_edge(nodes[0], nodes[2], "success_nodes")
g.add_edge(nodes[1], nodes[3], "failure_nodes")
g.add_edge(nodes[3], nodes[2], "failure_nodes")
return (g, nodes)
class TestWorkflowDAG():
@pytest.fixture
def workflow_dag_root_children(self, wf_node_generator):
g = WorkflowDAG()
wf_root_nodes = [wf_node_generator() for i in range(0, 10)]
wf_leaf_nodes = [wf_node_generator() for i in range(0, 10)]
map(lambda n: g.add_node(n), wf_root_nodes + wf_leaf_nodes)
'''
Pair up a root node with a single child via an edge
R1 R2 ... Rx
| | |
| | |
C1 C2 Cx
'''
map(lambda (i, n): g.add_edge(wf_root_nodes[i], n, 'label'), enumerate(wf_leaf_nodes))
return (g, wf_root_nodes, wf_leaf_nodes)
def test_get_root_nodes(self, workflow_dag_root_children):
(g, wf_root_nodes, ignore) = workflow_dag_root_children
assert set([n.id for n in wf_root_nodes]) == set([n['node_object'].id for n in g.get_root_nodes()])
class TestDNR():
def test_mark_dnr_nodes(self, workflow_dag_1):
(g, nodes) = workflow_dag_1
r'''
S0
/\
S / \
/ \
1 |
| |
F | | S
| |
3 |
\ |
F \ |
\/
2
'''
nodes[0].job = Job(status='successful')
do_not_run_nodes = g.mark_dnr_nodes()
assert 0 == len(do_not_run_nodes)
r'''
S0
/\
S / \
/ \
S1 |
| |
F | | S
| |
DNR 3 |
\ |
F \ |
\/
2
'''
nodes[1].job = Job(status='successful')
do_not_run_nodes = g.mark_dnr_nodes()
assert 1 == len(do_not_run_nodes)
assert nodes[3] == do_not_run_nodes[0]
class TestIsWorkflowDone():
@pytest.fixture
def workflow_dag_2(self, workflow_dag_1):
(g, nodes) = workflow_dag_1
r'''
S0
/\
S / \
/ \
S1 |
| |
F | | S
| |
DNR 3 |
\ |
F \ |
\/
W2
'''
nodes[0].job = Job(status='successful')
g.mark_dnr_nodes()
nodes[1].job = Job(status='successful')
g.mark_dnr_nodes()
nodes[2].job = Job(status='waiting')
return (g, nodes)
@pytest.fixture
def workflow_dag_failed(self, workflow_dag_1):
(g, nodes) = workflow_dag_1
r'''
S0
/\
S / \
/ \
S1 |
| |
F | | S
| |
DNR 3 |
\ |
F \ |
\/
F2
'''
nodes[0].job = Job(status='successful')
g.mark_dnr_nodes()
nodes[1].job = Job(status='successful')
g.mark_dnr_nodes()
nodes[2].job = Job(status='failed')
return (g, nodes)
@pytest.fixture
def workflow_dag_canceled(self, wf_node_generator):
g = WorkflowDAG()
nodes = [wf_node_generator() for i in range(1)]
map(lambda n: g.add_node(n), nodes)
r'''
F0
'''
nodes[0].job = Job(status='canceled')
return (g, nodes)
@pytest.fixture
def workflow_dag_failure(self, workflow_dag_canceled):
(g, nodes) = workflow_dag_canceled
nodes[0].job.status = 'failed'
return (g, nodes)
def test_done(self, workflow_dag_2):
g = workflow_dag_2[0]
assert g.is_workflow_done() is False
def test_workflow_done_and_failed(self, workflow_dag_failed):
(g, nodes) = workflow_dag_failed
assert g.is_workflow_done() is True
assert g.has_workflow_failed() == \
(True, smart_text(_("No error handle path for workflow job node(s) [({},{})] workflow job node(s)"
" missing unified job template and error handle path [].").format(nodes[2].id, nodes[2].job.status)))
def test_is_workflow_done_no_unified_job_tempalte_end(self, workflow_dag_failed):
(g, nodes) = workflow_dag_failed
nodes[2].unified_job_template = None
assert g.is_workflow_done() is True
assert g.has_workflow_failed() == \
(True, smart_text(_("No error handle path for workflow job node(s) [] workflow job node(s) missing"
" unified job template and error handle path [{}].").format(nodes[2].id)))
def test_is_workflow_done_no_unified_job_tempalte_begin(self, workflow_dag_1):
(g, nodes) = workflow_dag_1
nodes[0].unified_job_template = None
g.mark_dnr_nodes()
assert g.is_workflow_done() is True
assert g.has_workflow_failed() == \
(True, smart_text(_("No error handle path for workflow job node(s) [] workflow job node(s) missing"
" unified job template and error handle path [{}].").format(nodes[0].id)))
def test_canceled_should_fail(self, workflow_dag_canceled):
(g, nodes) = workflow_dag_canceled
assert g.has_workflow_failed() == \
(True, smart_text(_("No error handle path for workflow job node(s) [({},{})] workflow job node(s)"
" missing unified job template and error handle path [].").format(nodes[0].id, nodes[0].job.status)))
def test_failure_should_fail(self, workflow_dag_failure):
(g, nodes) = workflow_dag_failure
assert g.has_workflow_failed() == \
(True, smart_text(_("No error handle path for workflow job node(s) [({},{})] workflow job node(s)"
" missing unified job template and error handle path [].").format(nodes[0].id, nodes[0].job.status)))
class TestBFSNodesToRun():
@pytest.fixture
def workflow_dag_canceled(self, wf_node_generator):
g = WorkflowDAG()
nodes = [wf_node_generator() for i in range(4)]
map(lambda n: g.add_node(n), nodes)
r'''
C0
/ | \
F / A| \ S
/ | \
1 2 3
'''
g.add_edge(nodes[0], nodes[1], "failure_nodes")
g.add_edge(nodes[0], nodes[2], "always_nodes")
g.add_edge(nodes[0], nodes[3], "success_nodes")
nodes[0].job = Job(status='canceled')
return (g, nodes)
def test_cancel_still_runs_children(self, workflow_dag_canceled):
(g, nodes) = workflow_dag_canceled
g.mark_dnr_nodes()
assert set([nodes[1], nodes[2]]) == set(g.bfs_nodes_to_run())
@pytest.mark.skip(reason="Run manually to re-generate doc images")
class TestDocsExample():
@pytest.fixture
def complex_dag(self, wf_node_generator):
g = WorkflowDAG()
nodes = [wf_node_generator() for i in range(10)]
map(lambda n: g.add_node(n), nodes)
g.add_edge(nodes[0], nodes[1], "failure_nodes")
g.add_edge(nodes[0], nodes[2], "success_nodes")
g.add_edge(nodes[0], nodes[3], "always_nodes")
g.add_edge(nodes[1], nodes[4], "success_nodes")
g.add_edge(nodes[1], nodes[5], "failure_nodes")
g.add_edge(nodes[2], nodes[6], "failure_nodes")
g.add_edge(nodes[3], nodes[6], "success_nodes")
g.add_edge(nodes[4], nodes[6], "always_nodes")
g.add_edge(nodes[6], nodes[7], "always_nodes")
g.add_edge(nodes[6], nodes[8], "success_nodes")
g.add_edge(nodes[6], nodes[9], "failure_nodes")
return (g, nodes)
def test_dnr_step(self, complex_dag):
(g, nodes) = complex_dag
base_dir = '/awx_devel'
g.generate_graphviz_plot(file_name=os.path.join(base_dir, "workflow_step0.gv"))
nodes[0].job = Job(status='successful')
g.mark_dnr_nodes()
g.generate_graphviz_plot(file_name=os.path.join(base_dir, "workflow_step1.gv"))
nodes[2].job = Job(status='successful')
nodes[3].job = Job(status='successful')
g.mark_dnr_nodes()
g.generate_graphviz_plot(file_name=os.path.join(base_dir, "workflow_step2.gv"))
nodes[6].job = Job(status='failed')
g.mark_dnr_nodes()
g.generate_graphviz_plot(file_name=os.path.join(base_dir, "workflow_step3.gv"))
nodes[7].job = Job(status='successful')
nodes[9].job = Job(status='successful')
g.mark_dnr_nodes()
g.generate_graphviz_plot(file_name=os.path.join(base_dir, "workflow_step4.gv"))
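TestDocsExample is skipped by default and exists only to regenerate the workflow documentation images. As a hedged sketch, the emitted .gv files can be rendered to PNGs with the Graphviz dot binary (assuming it is installed):

import subprocess

# render workflow_step0.gv .. workflow_step4.gv written by the test above
for step in range(5):
    src = '/awx_devel/workflow_step{}.gv'.format(step)
    subprocess.check_call(['dot', '-Tpng', src, '-o', src.replace('.gv', '.png')])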

View File

@@ -1032,10 +1032,11 @@ class TestJobCredentials(TestJobExecution):
def run_pexpect_side_effect(*args, **kwargs):
args, cwd, env, stdout = args
assert env['GCE_EMAIL'] == 'bob'
assert env['GCE_PROJECT'] == 'some-project'
ssh_key_data = env['GCE_PEM_FILE_PATH']
assert open(ssh_key_data, 'rb').read() == self.EXAMPLE_PRIVATE_KEY
json_data = json.load(open(env['GCE_CREDENTIALS_FILE_PATH'], 'rb'))
assert json_data['type'] == 'service_account'
assert json_data['private_key'] == self.EXAMPLE_PRIVATE_KEY
assert json_data['client_email'] == 'bob'
assert json_data['project_id'] == 'some-project'
return ['successful', 0]
self.run_pexpect.side_effect = run_pexpect_side_effect
@@ -2048,11 +2049,12 @@ class TestInventoryUpdateCredentials(TestJobExecution):
def run_pexpect_side_effect(*args, **kwargs):
args, cwd, env, stdout = args
assert env['GCE_EMAIL'] == 'bob'
assert env['GCE_PROJECT'] == 'some-project'
assert env['GCE_ZONE'] == expected_gce_zone
ssh_key_data = env['GCE_PEM_FILE_PATH']
assert open(ssh_key_data, 'rb').read() == self.EXAMPLE_PRIVATE_KEY
json_data = json.load(open(env['GCE_CREDENTIALS_FILE_PATH'], 'rb'))
assert json_data['type'] == 'service_account'
assert json_data['private_key'] == self.EXAMPLE_PRIVATE_KEY
assert json_data['client_email'] == 'bob'
assert json_data['project_id'] == 'some-project'
config = ConfigParser.ConfigParser()
config.read(env['GCE_INI_PATH'])

View File

@@ -154,12 +154,11 @@ def test_memoize_delete(memoized_function, mock_cache):
def test_memoize_parameter_error():
@common.memoize(cache_key='foo', track_function=True)
def fn():
return
with pytest.raises(common.IllegalArgumentError):
fn()
@common.memoize(cache_key='foo', track_function=True)
def fn():
return
def test_extract_ansible_vars():

View File

@@ -18,14 +18,11 @@ import contextlib
import tempfile
import six
import psutil
from functools import reduce
from functools import reduce, wraps
from StringIO import StringIO
from decimal import Decimal
# Decorator
from decorator import decorator
# Django
from django.core.exceptions import ObjectDoesNotExist
from django.db import DatabaseError
@@ -52,7 +49,8 @@ __all__ = ['get_object_or_400', 'get_object_or_403', 'camelcase_to_underscore',
'extract_ansible_vars', 'get_search_fields', 'get_system_task_capacity', 'get_cpu_capacity', 'get_mem_capacity',
'wrap_args_with_proot', 'build_proot_temp_dir', 'check_proot_installed', 'model_to_dict',
'model_instance_diff', 'timestamp_apiformat', 'parse_yaml_or_json', 'RequireDebugTrueOrTest',
'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError', 'get_custom_venv_choices', 'get_external_account']
'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError', 'get_custom_venv_choices', 'get_external_account',
'task_manager_bulk_reschedule', 'schedule_task_manager']
def get_object_or_400(klass, *args, **kwargs):
@@ -136,31 +134,35 @@ def memoize(ttl=60, cache_key=None, track_function=False):
'''
Decorator to wrap a function and cache its result.
'''
if cache_key and track_function:
raise IllegalArgumentError("Can not specify cache_key when track_function is True")
cache = get_memoize_cache()
def _memoizer(f, *args, **kwargs):
if cache_key and track_function:
raise IllegalArgumentError("Can not specify cache_key when track_function is True")
if track_function:
cache_dict_key = slugify('%r %r' % (args, kwargs))
key = slugify("%s" % f.__name__)
cache_dict = cache.get(key) or dict()
if cache_dict_key not in cache_dict:
value = f(*args, **kwargs)
cache_dict[cache_dict_key] = value
cache.set(key, cache_dict, ttl)
def memoize_decorator(f):
@wraps(f)
def _memoizer(*args, **kwargs):
if track_function:
cache_dict_key = slugify('%r %r' % (args, kwargs))
key = slugify("%s" % f.__name__)
cache_dict = cache.get(key) or dict()
if cache_dict_key not in cache_dict:
value = f(*args, **kwargs)
cache_dict[cache_dict_key] = value
cache.set(key, cache_dict, ttl)
else:
value = cache_dict[cache_dict_key]
else:
value = cache_dict[cache_dict_key]
else:
key = cache_key or slugify('%s %r %r' % (f.__name__, args, kwargs))
value = cache.get(key)
if value is None:
value = f(*args, **kwargs)
cache.set(key, value, ttl)
key = cache_key or slugify('%s %r %r' % (f.__name__, args, kwargs))
value = cache.get(key)
if value is None:
value = f(*args, **kwargs)
cache.set(key, value, ttl)
return value
return decorator(_memoizer)
return value
return _memoizer
return memoize_decorator
def memoize_delete(function_name):
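The rewrite above drops the decorator library in favor of a plain closure wrapped with functools.wraps, and the cache_key/track_function sanity check appears to move to decoration time, so misuse fails when the decorator is applied rather than on first call (the updated test_memoize_parameter_error earlier in this changeset matches that reading). A minimal standalone sketch of the same shape, with a dict standing in for the Django cache and the ttl handling elided:

from functools import wraps

_fake_cache = {}  # stand-in for Django's cache backend

def memoize(ttl=60, cache_key=None, track_function=False):
    if cache_key and track_function:
        # invalid combination rejected at decoration time
        raise ValueError("Can not specify cache_key when track_function is True")

    def memoize_decorator(f):
        @wraps(f)  # keeps f.__name__ intact for cache key generation
        def _memoizer(*args, **kwargs):
            key = cache_key or '%s %r %r' % (f.__name__, args, kwargs)
            if key not in _fake_cache:
                _fake_cache[key] = f(*args, **kwargs)
            return _fake_cache[key]
        return _memoizer
    return memoize_decorator

@memoize()
def add(a, b):
    return a + b

assert add(1, 2) == 3  # computed once...
assert add(1, 2) == 3  # ...then served from the cache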
@@ -726,6 +728,7 @@ def get_system_task_capacity(scale=Decimal(1.0), cpu_capacity=None, mem_capacity
_inventory_updates = threading.local()
_task_manager = threading.local()
@contextlib.contextmanager
@@ -741,6 +744,37 @@ def ignore_inventory_computed_fields():
_inventory_updates.is_updating = previous_value
def _schedule_task_manager():
from awx.main.scheduler.tasks import run_task_manager
from django.db import connection
# runs right away if not in transaction
connection.on_commit(lambda: run_task_manager.delay())
@contextlib.contextmanager
def task_manager_bulk_reschedule():
"""Context manager to avoid submitting task multiple times.
"""
try:
previous_flag = getattr(_task_manager, 'bulk_reschedule', False)
previous_value = getattr(_task_manager, 'needs_scheduling', False)
_task_manager.bulk_reschedule = True
_task_manager.needs_scheduling = False
yield
finally:
_task_manager.bulk_reschedule = previous_flag
if _task_manager.needs_scheduling:
_schedule_task_manager()
_task_manager.needs_scheduling = previous_value
def schedule_task_manager():
if getattr(_task_manager, 'bulk_reschedule', False):
_task_manager.needs_scheduling = True
return
_schedule_task_manager()
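Taken together, `schedule_task_manager` and `task_manager_bulk_reschedule` coalesce any number of scheduling requests made inside the context into a single submission on exit. A rough self-contained sketch of that coalescing pattern (the real code submits `run_task_manager.delay()` via `connection.on_commit`; here a list stands in for the dispatcher):

```python
# Self-contained sketch of the coalescing pattern above: many schedule
# requests inside the context collapse into one submission.
import contextlib
import threading

_state = threading.local()
submissions = []  # stands in for run_task_manager.delay() / connection.on_commit

def schedule():
    if getattr(_state, 'bulk', False):
        _state.needed = True   # defer; the context manager submits once on exit
        return
    submissions.append('run_task_manager')

@contextlib.contextmanager
def bulk_reschedule():
    prev_bulk = getattr(_state, 'bulk', False)
    prev_need = getattr(_state, 'needed', False)
    _state.bulk, _state.needed = True, False
    try:
        yield
    finally:
        _state.bulk = prev_bulk
        if _state.needed:
            submissions.append('run_task_manager')
        _state.needed = prev_need

with bulk_reschedule():
    schedule(); schedule(); schedule()
assert submissions == ['run_task_manager']   # three requests, one submission

schedule()
assert len(submissions) == 2                 # outside the context: immediate
```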
@contextlib.contextmanager
def ignore_inventory_group_removal():
'''



@@ -31,6 +31,10 @@ class LogstashFormatter(LogstashFormatterVersion1):
to the logging receiver
'''
if kind == 'activity_stream':
try:
raw_data['changes'] = json.loads(raw_data.get('changes', '{}'))
except Exception:
pass # best effort here, if it's not valid JSON, then meh
return raw_data
elif kind == 'system_tracking':
data = copy(raw_data['ansible_facts'])
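The `activity_stream` branch above decodes the `changes` payload on a best-effort basis, leaving it untouched if it is not valid JSON. A small standalone sketch of that behavior:

```python
# Best-effort parse mirroring the activity_stream branch: invalid JSON
# in 'changes' is left untouched rather than raising.
import json

def parse_changes(raw_data):
    try:
        raw_data['changes'] = json.loads(raw_data.get('changes', '{}'))
    except Exception:
        pass  # keep the original string if it is not valid JSON
    return raw_data

assert parse_changes({'changes': '{"a": 1}'})['changes'] == {'a': 1}
assert parse_changes({'changes': 'not-json'})['changes'] == 'not-json'
```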


@@ -76,7 +76,7 @@ def validate_pem(data, min_keys=0, max_keys=None, min_certs=0, max_certs=None):
if pem_obj_type.endswith('PRIVATE KEY'):
key_count += 1
pem_obj_info['type'] = 'PRIVATE KEY'
key_type = pem_obj_type.replace('PRIVATE KEY', '').strip()
key_type = pem_obj_type.replace('ENCRYPTED PRIVATE KEY', '').replace('PRIVATE KEY', '').strip()
try:
pem_obj_info['key_type'] = private_key_types[key_type]
except KeyError:
@@ -118,6 +118,8 @@ def validate_pem(data, min_keys=0, max_keys=None, min_certs=0, max_certs=None):
# length field, followed by the ciphername -- if ciphername is anything
# other than 'none' the key is encrypted.
pem_obj_info['key_enc'] = not bool(pem_obj_info['bin'].startswith('openssh-key-v1\x00\x00\x00\x00\x04none'))
elif match.group('type') == 'ENCRYPTED PRIVATE KEY':
pem_obj_info['key_enc'] = True
elif pem_obj_info.get('key_type', ''):
pem_obj_info['key_enc'] = bool('ENCRYPTED' in pem_obj_info['data'])
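The new branch can mark a key as encrypted from the block type alone, since PKCS#8 `ENCRYPTED PRIVATE KEY` blocks are encrypted by definition, alongside the existing OpenSSH binary-header and legacy-PEM checks. A condensed sketch of the three detection paths (function and argument names here are illustrative, not the module's API):

```python
# Condensed sketch of the three encryption checks added/kept above.
def is_encrypted(pem_type, body=b'', data=''):
    if pem_type == 'ENCRYPTED PRIVATE KEY':
        return True                                # PKCS#8: encrypted by definition
    if pem_type == 'OPENSSH PRIVATE KEY':
        # cipher name lives in the binary header; 'none' means unencrypted
        return not body.startswith(b'openssh-key-v1\x00\x00\x00\x00\x04none')
    return 'ENCRYPTED' in data                     # legacy PEM header text

assert is_encrypted('ENCRYPTED PRIVATE KEY')
assert not is_encrypted('OPENSSH PRIVATE KEY', body=b'openssh-key-v1\x00\x00\x00\x00\x04none')
assert is_encrypted('RSA PRIVATE KEY', data='Proc-Type: 4,ENCRYPTED')
```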
@@ -168,9 +170,9 @@ def validate_certificate(data):
Validate that data contains one or more certificates. Adds BEGIN/END lines
if necessary.
"""
if 'BEGIN' not in data:
if 'BEGIN ' not in data:
data = "-----BEGIN CERTIFICATE-----\n{}".format(data)
if 'END' not in data:
if 'END ' not in data:
data = "{}\n-----END CERTIFICATE-----\n".format(data)
return validate_pem(data, max_keys=0, min_certs=1)
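The switch to `'BEGIN '` and `'END '` (with trailing spaces) presumably guards against false positives: a headerless base64 body can contain the bare substring `BEGIN`, but never `BEGIN ` followed by a space, since base64 output has no spaces. A sketch of the wrapping step alone (the real function then hands the result to `validate_pem`):

```python
# Add BEGIN/END certificate headers only when they are genuinely missing.
def wrap_cert(data):
    if 'BEGIN ' not in data:
        data = "-----BEGIN CERTIFICATE-----\n{}".format(data)
    if 'END ' not in data:
        data = "{}\n-----END CERTIFICATE-----\n".format(data)
    return data

bare = "MIIBEGINAxyz"            # headerless body that happens to contain 'BEGIN'
wrapped = wrap_cert(bare)
assert wrapped.startswith('-----BEGIN CERTIFICATE-----')

full = "-----BEGIN CERTIFICATE-----\nMIIB\n-----END CERTIFICATE-----"
assert wrap_cert(full) == full   # already-wrapped input is left alone
```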


@@ -9,7 +9,7 @@
# TODO:
# * more jq examples
# * optional folder heriarchy
# * optional folder hierarchy
"""
$ jq '._meta.hostvars[].config' data.json | head
@@ -38,9 +38,8 @@ import sys
import uuid
from time import time
import six
from jinja2 import Environment
from six import integer_types, string_types
from six import integer_types, PY3
from six.moves import configparser
try:
@@ -99,6 +98,7 @@ class VMWareInventory(object):
host_filters = []
skip_keys = []
groupby_patterns = []
groupby_custom_field_excludes = []
safe_types = [bool, str, float, None] + list(integer_types)
iter_types = [dict, list]
@@ -230,10 +230,11 @@ class VMWareInventory(object):
'groupby_patterns': '{{ guest.guestid }},{{ "templates" if config.template else "guests"}}',
'lower_var_keys': True,
'custom_field_group_prefix': 'vmware_tag_',
'groupby_custom_field_excludes': '',
'groupby_custom_field': False}
}
if six.PY3:
if PY3:
config = configparser.ConfigParser()
else:
config = configparser.SafeConfigParser()
@@ -304,8 +305,12 @@ class VMWareInventory(object):
groupby_pattern += "}}"
self.groupby_patterns.append(groupby_pattern)
self.debugl('groupby patterns are %s' % self.groupby_patterns)
temp_groupby_custom_field_excludes = config.get('vmware', 'groupby_custom_field_excludes')
self.groupby_custom_field_excludes = [x.strip('"') for x in [y.strip("'") for y in temp_groupby_custom_field_excludes.split(",")]]
self.debugl('groupby exclude strings are %s' % self.groupby_custom_field_excludes)
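`groupby_custom_field_excludes` is read as a comma-separated string whose entries may be wrapped in single or double quotes. A standalone sketch of that parsing, with a hypothetical config value:

```python
# Split on commas, then peel single and then double quotes, mirroring
# the nested comprehensions above.
raw = "'env',\"owner\",ticket"   # hypothetical groupby_custom_field_excludes value
excludes = [x.strip('"') for x in [y.strip("'") for y in raw.split(",")]]
assert excludes == ['env', 'owner', 'ticket']
```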
# Special feature to disable the brute force serialization of the
# virtulmachine objects. The key name for these properties does not
# virtual machine objects. The key name for these properties does not
# matter because the values are just items for a larger list.
if config.has_section('properties'):
self.guest_props = []
@@ -397,7 +402,7 @@ class VMWareInventory(object):
cfm = content.customFieldsManager
if cfm is not None and cfm.field:
for f in cfm.field:
if f.managedObjectType == vim.VirtualMachine:
if not f.managedObjectType or f.managedObjectType == vim.VirtualMachine:
self.custom_fields[f.key] = f.name
self.debugl('%d custom fields collected' % len(self.custom_fields))
except vmodl.RuntimeFault as exc:
@@ -494,16 +499,15 @@ class VMWareInventory(object):
for k, v in inventory['_meta']['hostvars'].items():
if 'customvalue' in v:
for tv in v['customvalue']:
if not isinstance(tv['value'], string_types):
continue
newkey = None
field_name = self.custom_fields[tv['key']] if tv['key'] in self.custom_fields else tv['key']
if field_name in self.groupby_custom_field_excludes:
continue
values = []
keylist = map(lambda x: x.strip(), tv['value'].split(','))
for kl in keylist:
try:
newkey = self.config.get('vmware', 'custom_field_group_prefix') + str(field_name) + '_' + kl
newkey = "%s%s_%s" % (self.config.get('vmware', 'custom_field_group_prefix'), str(field_name), kl)
newkey = newkey.strip()
except Exception as e:
self.debugl(e)
@@ -521,7 +525,6 @@ class VMWareInventory(object):
def create_template_mapping(self, inventory, pattern, dtype='string'):
''' Return a hash of uuid to templated string from pattern '''
mapping = {}
for k, v in inventory['_meta']['hostvars'].items():
t = self.env.from_string(pattern)
@@ -557,7 +560,15 @@ class VMWareInventory(object):
if '.' not in prop:
# props without periods are direct attributes of the parent
rdata[key] = getattr(vm, prop)
vm_property = getattr(vm, prop)
if isinstance(vm_property, vim.CustomFieldsManager.Value.Array):
temp_vm_property = []
for vm_prop in vm_property:
temp_vm_property.append({'key': vm_prop.key,
'value': vm_prop.value})
rdata[key] = temp_vm_property
else:
rdata[key] = vm_property
else:
# props with periods are subkeys of parent attributes
parts = prop.split('.')


@@ -250,8 +250,8 @@ TEMPLATES = [
]
MIDDLEWARE_CLASSES = ( # NOQA
'awx.main.middleware.MigrationRanCheckMiddleware',
'awx.main.middleware.TimingMiddleware',
'awx.main.middleware.MigrationRanCheckMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
@@ -606,6 +606,9 @@ ANSIBLE_PARAMIKO_RECORD_HOST_KEYS = False
# output
ANSIBLE_FORCE_COLOR = True
# If tmp generated inventory parsing fails (error state), fail playbook fast
ANSIBLE_INVENTORY_UNPARSED_FAILED = True
# Additional environment variables to be passed to the ansible subprocesses
AWX_TASK_ENV = {}
@@ -948,7 +951,7 @@ TOWER_ADMIN_ALERTS = True
# Note: This setting may be overridden by database settings.
TOWER_URL_BASE = "https://towerhost"
INSIGHTS_URL_BASE = "https://access.redhat.com"
INSIGHTS_URL_BASE = "https://example.org"
TOWER_SETTINGS_MANIFEST = {}


@@ -361,6 +361,16 @@ def on_populate_user(sender, **kwargs):
# checking membership.
ldap_user._get_groups().get_group_dns()
# If the LDAP user has a first or last name > $maxlen chars, truncate it
for field in ('first_name', 'last_name'):
max_len = User._meta.get_field(field).max_length
field_len = len(getattr(user, field))
if field_len > max_len:
setattr(user, field, getattr(user, field)[:max_len])
logger.warn(six.text_type(
'LDAP user {} has {} > max {} characters'
).format(user.username, field, max_len))
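The guard simply clips each name to the column width before the user row is saved. A compact illustration, with a hypothetical 30-character limit standing in for the model field's `max_length`:

```python
# Truncate an over-long LDAP attribute to the database column limit.
max_len = 30                 # hypothetical; the real value comes from User._meta
first_name = 'A' * 40        # e.g. an LDAP givenName longer than the column
if len(first_name) > max_len:
    first_name = first_name[:max_len]
assert len(first_name) == max_len
```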
# Update organization membership based on group memberships.
org_map = getattr(backend.settings, 'ORGANIZATION_MAP', {})
for org_name, org_opts in org_map.items():


@@ -13,6 +13,7 @@ from django.views.generic.base import RedirectView
from django.utils.encoding import smart_text
from awx.api.serializers import UserSerializer
from rest_framework.renderers import JSONRenderer
from django.conf import settings
logger = logging.getLogger('awx.sso.views')
@@ -45,7 +46,7 @@ class CompleteView(BaseRedirectView):
current_user = UserSerializer(self.request.user)
current_user = JSONRenderer().render(current_user.data)
current_user = urllib.quote('%s' % current_user, '')
response.set_cookie('current_user', current_user)
response.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
return response
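The `or None` maps a disabled `SESSION_COOKIE_SECURE` to `None`, so the cookie's Secure attribute stays unset rather than carrying an explicit `False`. The mapping, in brief:

```python
# SESSION_COOKIE_SECURE -> value passed as secure= to set_cookie()
for setting in (True, False):
    print(setting, '->', setting or None)   # True -> True, False -> None
```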


@@ -31,7 +31,8 @@ module.exports = {
$: true,
_: true,
codemirror: true,
jsyaml: true
jsyaml: true,
crypto: true
},
rules: {
'arrow-parens': 'off',


@@ -34,7 +34,9 @@
"describe": false,
"moment": false,
"spyOn": false,
"jasmine": false
"jasmine": false,
"dagre": false,
"crypto": false
},
"strict": false,
"quotmark": false,


@@ -2,7 +2,7 @@
## Requirements
- node.js 8.x LTS
- npm 5.x LTS
- npm >=5.10
- bzip2, gcc-c++, git, make
## Development
@@ -67,6 +67,23 @@ npm install --prefix awx/ui --save prod-package@1.23
git add awx/ui/package.json awx/ui/package-lock.json
```
## Adding exact dependencies
```shell
# add an exact development or build dependency
npm install --prefix awx/ui --save-dev --save-exact dev-package@1.2.3
# add an exact production dependency
npm install --prefix awx/ui --save --save-exact prod-package@1.23
```
## Removing dependencies
```shell
# remove a development or build dependency
npm uninstall --prefix awx/ui --save-dev dev-package
# remove a production dependency
npm uninstall --prefix awx/ui --save prod-package
```
## Building for Production
```shell
# built files are placed in awx/ui/static


@@ -50,7 +50,9 @@ export default {
const searchParam = _.assign($stateParams.job_search, {
or__job__inventory: inventoryId,
or__adhoccommand__inventory: inventoryId,
or__inventoryupdate__inventory_source__inventory: inventoryId });
or__inventoryupdate__inventory_source__inventory: inventoryId,
or__workflowjob__inventory: inventoryId,
});
const searchPath = GetBasePath('unified_jobs');


@@ -1,7 +1,7 @@
@import 'host-event/_index';
.at-Stdout {
&-menuTop {
color: @at-gray-848992;
color: @at-gray-646972;
border: 1px solid @at-gray-b7;
border-top-left-radius: 4px;
border-top-right-radius: 4px;
@@ -72,14 +72,22 @@
&-row {
display: flex;
&:hover {
background-color: white;
}
&:hover div {
background-color: white;
}
}
&-row--clickable {
cursor: pointer;
}
&-toggle {
background-color: @at-gray-eb;
color: @at-gray-848992;
color: @at-gray-646972;
display: flex;
flex: 0 0 30px;
font-size: 18px;
@@ -112,12 +120,6 @@
.at-mixin-event();
}
&-event--host {
.at-mixin-event();
cursor: pointer;
}
&-time {
padding-right: 2ch;
font-size: 12px;
@@ -175,7 +177,6 @@
}
.at-mixin-event() {
flex: 1;
padding: 0 10px;
white-space: pre-wrap;
word-break: break-all;
@@ -203,7 +204,7 @@
color: @login-notice-text;
border-radius: @at-border-radius;
border: 1px solid @at-gray-b7;
// color: @at-gray-848992;
// color: @at-gray-646972;
padding: 6px @at-padding-input 6px @at-padding-input;
}
@@ -333,7 +334,7 @@
.JobResults-container {
display: grid;
grid-gap: 20px;
grid-template-columns: minmax(300px, 1fr) minmax(500px, 2fr);
grid-template-columns: minmax(400px, 1fr) minmax(500px, 2fr);
grid-template-rows: minmax(500px, ~"calc(100vh - 130px)");
.at-Panel {
@@ -456,5 +457,6 @@
.JobResults-container {
display: flex;
flex-direction: column;
min-width: 400px;
}
}


@@ -16,7 +16,7 @@ export const JOB_STATUS_FINISHED = JOB_STATUS_COMPLETE.concat(JOB_STATUS_INCOMPL
export const OUTPUT_ANSI_COLORMAP = {
0: '#000',
1: '#A00',
2: '#0A0',
2: '#080',
3: '#F0AD4E',
4: '#00A',
5: '#A0A',


@@ -120,10 +120,12 @@ function getSourceWorkflowJobDetails () {
return null;
}
const label = strings.get('labels.SOURCE_WORKFLOW_JOB');
const value = sourceWorkflowJob.name;
const link = `/#/workflows/${sourceWorkflowJob.id}`;
const tooltip = strings.get('tooltips.SOURCE_WORKFLOW_JOB');
return { link, tooltip };
return { label, value, link, tooltip };
}
function getSliceJobDetails () {


@@ -281,6 +281,19 @@
</div>
</div>
<!-- SOURCE WORKFLOW JOB DETAIL -->
<div class="JobResults-resultRow" ng-if="vm.sourceWorkflowJob">
<label class="JobResults-resultRowLabel">{{ vm.sourceWorkflowJob.label }}</label>
<div class="JobResults-resultRowText">
<a href="{{ vm.sourceWorkflowJob.link }}"
aw-tool-tip="{{ vm.sourceWorkflowJob.tooltip }}"
data-placement="top">
{{ vm.sourceWorkflowJob.value }}
</a>
</div>
</div>
<!-- EXTRA VARIABLES DETAIL -->
<at-code-mirror
class="JobResults-resultRow"


@@ -6,7 +6,6 @@ import {
OUTPUT_PAGE_SIZE,
} from './constants';
let $compile;
let $q;
let $scope;
let $state;
@@ -97,6 +96,7 @@ function firstRange () {
.then(() => render.pushFront(results));
})
.finally(() => {
render.compile();
scroll.resume();
lockFollow = false;
});
@@ -124,6 +124,7 @@ function nextRange () {
.then(() => render.pushFront(results));
})
.finally(() => {
render.compile();
scroll.resume();
lockFrames = false;
@@ -162,6 +163,7 @@ function previousRange () {
return $q.resolve();
})
.finally(() => {
render.compile();
scroll.resume();
lockFrames = false;
@@ -189,6 +191,7 @@ function lastRange () {
return $q.resolve();
})
.finally(() => {
render.compile();
scroll.resume();
return $q.resolve();
@@ -280,6 +283,7 @@ function firstPage () {
.then(() => render.pushFront(results));
})
.finally(() => {
render.compile();
scroll.resume();
return $q.resolve();
@@ -309,6 +313,7 @@ function lastPage () {
return $q.resolve();
})
.finally(() => {
render.compile();
scroll.resume();
return $q.resolve();
@@ -330,6 +335,7 @@ function nextPage () {
.then(() => render.pushFront(results));
})
.finally(() => {
render.compile();
scroll.resume();
});
}
@@ -363,6 +369,7 @@ function previousPage () {
return $q.resolve();
})
.finally(() => {
render.compile();
scroll.resume();
return $q.resolve();
@@ -546,10 +553,6 @@ function toggleTaskCollapse (uuid) {
render.records[uuid].isCollapsed = !isCollapsed;
}
function compile (html) {
return $compile(html)($scope);
}
function showHostDetails (id, uuid) {
$state.go('output.host-event.json', { eventId: id, taskUuid: uuid });
}
@@ -599,7 +602,7 @@ function showMissingEvents (uuid) {
delete render.records[uuid];
}
})
.then(() => render.compile(elements))
.then(() => render.compile())
.then(() => lines);
});
}
@@ -709,7 +712,6 @@ function clear () {
}
function OutputIndexController (
_$compile_,
_$q_,
_$scope_,
_$state_,
@@ -727,7 +729,6 @@ function OutputIndexController (
const { isPanelExpanded, _debug } = $stateParams;
const isProcessingFinished = !_debug && _resource_.model.get('event_processing_finished');
$compile = _$compile_;
$q = _$q_;
$scope = _$scope_;
$state = _$state_;
@@ -765,7 +766,7 @@ function OutputIndexController (
vm.debug = _debug;
render.requestAnimationFrame(() => {
render.init({ compile, toggles: vm.toggleLineEnabled });
render.init($scope, { toggles: vm.toggleLineEnabled });
status.init(resource);
page.init(resource.events);
@@ -815,6 +816,7 @@ function OutputIndexController (
status.sync();
scroll.unlock();
scroll.unhide();
render.compile();
}
});
@@ -850,7 +852,6 @@ function OutputIndexController (
}
OutputIndexController.$inject = [
'$compile',
'$q',
'$scope',
'$state',


@@ -74,6 +74,7 @@ function OutputStrings (BaseString) {
SKIP_TAGS: t.s('Skip Tags'),
SOURCE: t.s('Source'),
SOURCE_CREDENTIAL: t.s('Source Credential'),
SOURCE_WORKFLOW_JOB: t.s('Source Workflow'),
STARTED: t.s('Started'),
STATUS: t.s('Status'),
VERBOSITY: t.s('Verbosity'),


@@ -3,7 +3,6 @@ import Entities from 'html-entities';
import {
EVENT_START_PLAY,
EVENT_START_PLAYBOOK,
EVENT_STATS_PLAY,
EVENT_START_TASK,
OUTPUT_ANSI_COLORMAP,
@@ -34,9 +33,13 @@ const pattern = [
const re = new RegExp(pattern);
const hasAnsi = input => re.test(input);
function JobRenderService ($q, $sce, $window) {
this.init = ({ compile, toggles }) => {
this.hooks = { compile };
let $scope;
function JobRenderService ($q, $compile, $sce, $window) {
this.init = (_$scope_, { toggles }) => {
$scope = _$scope_;
this.setScope();
this.el = $(OUTPUT_ELEMENT_TBODY);
this.parent = null;
@@ -209,7 +212,7 @@ function JobRenderService ($q, $sce, $window) {
const lines = stdout.split('\r\n');
const record = this.createRecord(event, lines);
if (event.event === EVENT_START_PLAYBOOK) {
if (lines.length === 1 && lines[0] === '') {
return { html: '', count: 0 };
}
@@ -260,17 +263,17 @@ function JobRenderService ($q, $sce, $window) {
return this.records[event.counter];
}
let isHost = false;
if (typeof event.host === 'number') {
isHost = true;
let isClickable = false;
if (typeof event.host === 'number' || event.event_data && event.event_data.res) {
isClickable = true;
} else if (event.type === 'project_update_event' &&
event.event !== 'runner_on_skipped' &&
event.event_data.host) {
isHost = true;
isClickable = true;
}
const record = {
isHost,
isClickable,
id: event.id,
line: event.start_line + 1,
name: event.event,
@@ -344,6 +347,7 @@ function JobRenderService ($q, $sce, $window) {
let tdToggle = '';
let tdEvent = '';
let classList = '';
let directives = '';
if (record.isMissing) {
return `<div id="${record.uuid}" class="at-Stdout-row">
@@ -370,10 +374,6 @@ function JobRenderService ($q, $sce, $window) {
tdToggle = `<div class="at-Stdout-toggle" ng-click="vm.toggleCollapse('${id}')"><i class="fa ${icon} can-toggle"></i></div>`;
}
if (record.isHost) {
tdEvent = `<div class="at-Stdout-event--host" ng-click="vm.showHostDetails('${record.id}', '${record.uuid}')"><span ng-non-bindable>${content}</span></div>`;
}
if (record.time && record.line === ln) {
timestamp = `<span>${record.time}</span>`;
}
@@ -401,11 +401,16 @@ function JobRenderService ($q, $sce, $window) {
}
}
if (record && record.isClickable) {
classList += ' at-Stdout-row--clickable';
directives = `ng-click="vm.showHostDetails('${record.id}', '${record.uuid}')"`;
}
return `
<div id="${id}" class="at-Stdout-row ${classList}">
<div id="${id}" class="at-Stdout-row ${classList}" ${directives}>
${tdToggle}
<div class="at-Stdout-line">${ln}</div>
${tdEvent}
<div class="at-Stdout-event"><span ng-non-bindable>${content}</span></div>
<div class="at-Stdout-time">${timestamp}</div>
</div>`;
};
@@ -435,8 +440,16 @@ function JobRenderService ($q, $sce, $window) {
});
});
this.compile = content => {
this.hooks.compile(content);
this.setScope = () => {
if (this.scope) this.scope.$destroy();
delete this.scope;
this.scope = $scope.$new();
};
this.compile = () => {
this.setScope();
$compile(this.el)(this.scope);
return this.requestAnimationFrame();
};
@@ -472,10 +485,7 @@ function JobRenderService ($q, $sce, $window) {
const result = this.prependEventGroup(events);
const html = this.trustHtml(result.html);
const newElements = angular.element(html);
return this.requestAnimationFrame(() => this.el.prepend(newElements))
.then(() => this.compile(newElements))
return this.requestAnimationFrame(() => this.el.prepend(html))
.then(() => result.lines);
};
@@ -487,10 +497,7 @@ function JobRenderService ($q, $sce, $window) {
const result = this.appendEventGroup(events);
const html = this.trustHtml(result.html);
const newElements = angular.element(html);
return this.requestAnimationFrame(() => this.el.append(newElements))
.then(() => this.compile(newElements))
return this.requestAnimationFrame(() => this.el.append(html))
.then(() => result.lines);
};
@@ -601,6 +608,6 @@ function JobRenderService ($q, $sce, $window) {
this.getCapacity = () => OUTPUT_EVENT_LIMIT - (this.getTailCounter() - this.getHeadCounter());
}
JobRenderService.$inject = ['$q', '$sce', '$window'];
JobRenderService.$inject = ['$q', '$compile', '$sce', '$window'];
export default JobRenderService;


@@ -114,6 +114,9 @@ function SlidingWindowService ($q) {
}
}
this.buffer.events.length = 0;
delete this.buffer.events;
this.buffer.events = frames;
this.buffer.min = min;
this.buffer.max = max;


@@ -8,6 +8,15 @@ const templatesListTemplate = require('~features/templates/templatesList.view.ht
export default {
url: "/:organization_id/job_templates",
name: 'organizations.job_templates',
data: {
activityStream: true,
activityStreamTarget: 'template',
socket: {
"groups": {
"jobs": ["status_changed"]
}
}
},
params: {
template_search: {
dynamic: true,


@@ -12,6 +12,7 @@ function TemplatesStrings (BaseString) {
PANEL_TITLE: t.s('TEMPLATES'),
ADD_DD_JT_LABEL: t.s('Job Template'),
ADD_DD_WF_LABEL: t.s('Workflow Template'),
OPEN_WORKFLOW_VISUALIZER: t.s('Click here to open the workflow visualizer'),
ROW_ITEM_LABEL_ACTIVITY: t.s('Activity'),
ROW_ITEM_LABEL_INVENTORY: t.s('Inventory'),
ROW_ITEM_LABEL_PROJECT: t.s('Project'),
@@ -102,22 +103,35 @@ function TemplatesStrings (BaseString) {
ALWAYS: t.s('Always'),
PROJECT_SYNC: t.s('Project Sync'),
INVENTORY_SYNC: t.s('Inventory Sync'),
WORKFLOW: t.s('Workflow'),
WARNING: t.s('Warning'),
TOTAL_TEMPLATES: t.s('TOTAL TEMPLATES'),
TOTAL_NODES: t.s('TOTAL NODES'),
ADD_A_TEMPLATE: t.s('ADD A TEMPLATE'),
EDIT_TEMPLATE: t.s('EDIT TEMPLATE'),
JOBS: t.s('JOBS'),
PLEASE_CLICK_THE_START_BUTTON: t.s('Please click the start button to build your workflow.'),
PLEASE_HOVER_OVER_A_TEMPLATE: t.s('Please hover over a template for additional options.'),
EDIT_LINK_TOOLTIP: t.s('Click to edit link'),
VIEW_LINK_TOOLTIP: t.s('Click to view link'),
RUN: t.s('RUN'),
CHECK: t.s('CHECK'),
SELECT: t.s('SELECT'),
DELETED: t.s('DELETED'),
START: t.s('START'),
DETAILS: t.s('DETAILS'),
TITLE: t.s('WORKFLOW VISUALIZER')
}
TITLE: t.s('WORKFLOW VISUALIZER'),
INVENTORY_WILL_OVERRIDE: t.s('The inventory of this node will be overridden by the parent workflow inventory.'),
INVENTORY_WILL_NOT_OVERRIDE: t.s('The inventory of this node will not be overridden by the parent workflow inventory.'),
INVENTORY_PROMPT_WILL_OVERRIDE: t.s('The inventory of this node will be overridden if a parent workflow inventory is provided at launch.'),
INVENTORY_PROMPT_WILL_NOT_OVERRIDE: t.s('The inventory of this node will not be overridden if a parent workflow inventory is provided at launch.'),
ADD_LINK: t.s('ADD LINK'),
EDIT_LINK: t.s('EDIT LINK'),
VIEW_LINK: t.s('VIEW LINK'),
NEW_LINK: t.s('Please click on an available node to form a new link.'),
UNLINK: t.s('UNLINK'),
READ_ONLY_PROMPT_VALUES: t.s('The following promptable values were provided when this node was created:'),
READ_ONLY_NO_PROMPT_VALUES: t.s('No promptable values were provided when this node was created.')
};
}
TemplatesStrings.$inject = ['BaseStringService'];


@@ -101,6 +101,14 @@ function ListTemplatesController(
vm.isPortalMode = $state.includes('portalMode');
vm.openWorkflowVisualizer = template => {
const name = 'templates.editWorkflowJobTemplate.workflowMaker';
const params = { workflow_job_template_id: template.id };
const options = { reload: true };
$state.go(name, params, options);
};
vm.deleteTemplate = template => {
if (!template) {
Alert(strings.get('error.DELETE'), strings.get('alert.MISSING_PARAMETER'));


@@ -93,6 +93,11 @@
ng-show="!vm.isPortalMode && template.summary_fields.user_capabilities.copy"
tooltip="{{:: vm.strings.get('listActions.COPY', vm.getType(template)) }}">
</at-row-action>
<at-row-action icon="fa-sitemap" ng-click="vm.openWorkflowVisualizer(template)"
ng-show="!vm.isPortalMode"
ng-if="template.type === 'workflow_job_template'"
tooltip="{{:: vm.strings.get('list.OPEN_WORKFLOW_VISUALIZER') }}">
</at-row-action>
<at-row-action icon="fa-trash" ng-click="vm.deleteTemplate(template)"
ng-show="!vm.isPortalMode && template.summary_fields.user_capabilities.delete"
tooltip="{{:: vm.strings.get('listActions.DELETE', vm.getType(template)) }}">


@@ -2307,6 +2307,10 @@ input[disabled].ui-spinner-input {
background: @default-bg;
}
.ui-dialog.no-close button.close {
display: none;
}
html { height: 100%; }
body {

Some files were not shown because too many files have changed in this diff.