Compare commits

...

759 Commits
7.0.0 ... 8.0.0

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
3277d3afe0 Merge pull request #5050 from ryanpetrello/release-8.0.0
Bump VERSION to 8.0.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-21 17:26:33 +00:00
Ryan Petrello
45136b6503 Bump VERSION to 8.0.0 2019-10-21 12:29:24 -04:00
Ryan Petrello
e9af6af97c Merge pull request #5047 from ryanpetrello/devel
merge a variety of downstream bug fixes
2019-10-21 12:27:54 -04:00
Ryan Petrello
f86d647571 Merge branch 'hardening' into devel 2019-10-21 12:09:27 -04:00
Ryan Petrello
f02357ca16 Merge pull request #3871 from mabashian/3865-add-buttons
Revert 6282b5bacb
2019-10-21 12:08:52 -04:00
mabashian
e64b087e9f Revert 6282b5bacb 2019-10-21 11:55:12 -04:00
Ryan Petrello
5bb7e69a4d Merge pull request #3864 from ansible/container-launch-error
when isolated or container jobs fail to launch, set job status to error
2019-10-21 11:18:04 -04:00
Ryan Petrello
a8aed53c10 when isolated or container jobs fail to launch, set job status to error
a status of error makes more sense, because failed generally points to
an issue with the playbook itself, while error is more generally used
for reporting issues internal to Tower

see: https://github.com/ansible/awx/issues/4909
2019-10-21 11:02:31 -04:00
Jake McDermott
b19539069c Merge pull request #3863 from jakemcdermott/fix-3578-part-3
Set omitted runner_on_start event line lengths to 0
2019-10-21 10:37:09 -04:00
Jake McDermott
312cf13777 Set omitted runner event line lengths to 0
runner_on_start events have zero-length strings for their stdout
fields. We don't want to display these in the ui so we omit them.
Although the stdout field is an empty string, it still has a recorded
line length of 1 that we must account for. Since we're not rendering
the blank line, we must also go back and set the event record's line
length to 0 in order to avoid deleting too many lines when we pop or
shift events off of the view while scrolling.
2019-10-19 19:18:45 -04:00
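The bookkeeping this commit describes can be sketched as follows (a hypothetical Python model of the UI's JavaScript logic; the event shape and field names are assumptions, not the actual AWX code):

```python
# Hypothetical model of the fix above: events with empty stdout are not
# rendered, so their recorded line count must be zeroed, or the view
# would delete too many lines when events are popped or shifted off
# while scrolling.

def normalize_event_line_counts(events):
    """Zero the recorded line count of any event with empty stdout."""
    for event in events:
        if event["stdout"] == "":
            event["line_count"] = 0
    return events

def total_rendered_lines(events):
    """Number of lines the view actually renders for these events."""
    return sum(event["line_count"] for event in events)
```

With this normalization, an omitted `runner_on_start` event contributes zero lines to the view's accounting instead of one.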
Jake McDermott
c6033399d0 Fix off-by-one errors 2019-10-19 18:58:42 -04:00
Ryan Petrello
85f118c17d Merge pull request #3852 from ansible/pod-reaper
implement a simple periodic pod reaper for container groups
2019-10-18 16:50:30 -04:00
softwarefactory-project-zuul[bot]
0de805ac67 Merge pull request #5035 from keithjgrant/4617-inventory-lookup-error-message
Add error messages for InventorySelect

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-18 19:46:29 +00:00
Ryan Petrello
c7426fbff4 Merge pull request #3861 from wenottingham/where-did-you-come-from
Log the remote IP for logged in users
2019-10-18 15:16:34 -04:00
softwarefactory-project-zuul[bot]
3cbd52a56e Merge pull request #5040 from keithjgrant/4976-job-list-status-icons
Add status icon to job list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-18 19:12:14 +00:00
Jake McDermott
97a635ef49 Merge pull request #3844 from jakemcdermott/fix-3578-part-1
Always disable search when processing events
2019-10-18 15:07:41 -04:00
Keith Grant
155ed75f15 update jt Inventory field error message 2019-10-18 11:38:01 -07:00
Bill Nottingham
a664c5eabe Log the remote IP for logged in users 2019-10-18 14:28:10 -04:00
Keith Grant
8b23c6e19a add job id to jobs list 2019-10-18 10:44:39 -07:00
Keith Grant
a5d9bbb1e6 add status icon to job list 2019-10-18 10:08:59 -07:00
softwarefactory-project-zuul[bot]
c262df0dfe Merge pull request #5009 from wanderboessenkool/rabbit-healthcheck-cpu-usage
Change 'rabbitmqctl status' to a wget | grep to save CPU

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-18 16:48:39 +00:00
Jake McDermott
3f113129a9 Merge pull request #3851 from jakemcdermott/fix-3578-part-2
Get the last two pages of events on page load
2019-10-18 12:46:19 -04:00
Keith Grant
df7e034b96 fix blur behavior/error messages in JT form 2019-10-18 08:40:48 -07:00
Ryan Petrello
bd8b3a4f74 Merge pull request #3856 from ansible/revert-3842-callback-receiver-status
Revert "add support for `awx-manage run_callback_receiver --status`"
2019-10-18 10:10:58 -04:00
Ryan Petrello
d01088d33e Revert "add support for awx-manage run_callback_receiver --status" 2019-10-18 09:49:02 -04:00
Ryan Petrello
0012602b30 Merge pull request #3853 from ansible/fix-upgrades
properly migrate the CyberArk AIM type to its new name
2019-10-18 07:55:20 -04:00
Wander Boessenkool
8ecc1f37f0 Move python healthcheck script from probes to configMap 2019-10-18 10:15:21 +02:00
Ryan Petrello
0ab44e70f9 properly migrate the CyberArk AIM type to its new name 2019-10-17 22:40:33 -04:00
Jake McDermott
95c9e8e068 Always disable search when processing events
When jobs are still processing events, the UI uses numerical ranges
based on job_event.counter instead of page numbers. We can't apply
search filters in this state because then there would be no way to
distinguish between events that are missing due to being filtered out
by search and events that are missing because they're still being
processed.

The UI must be able to distinguish between the two types of missing
events because their absence is presented differently. Events that are
filtered out by a search query have no visual representation, while
events that are missing due to event processing or other causes are
displayed as clickable "..." segments.
2019-10-17 18:37:33 -04:00
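The rule in this commit message amounts to a simple predicate. A Python sketch (the field name is an assumption; the real check lives in the UI code):

```python
def search_allowed(job):
    """Search filters are only safe once event processing is finished.

    While events are still arriving, a missing counter could mean either
    "filtered out by search" or "not yet processed", and the UI renders
    those two cases differently (nothing vs. a clickable "..." segment),
    so filtering stays disabled until processing completes.
    """
    return bool(job.get("event_processing_finished"))
```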
Wander Boessenkool
c49e64e62c Make HTTPConnection import python 2,3 agnostic 2019-10-17 23:36:33 +02:00
Wander Boessenkool
00c9d756e8 Move installtime hardcoded rabbitmq credentials to environment variables for healthcheck 2019-10-17 23:23:29 +02:00
Ryan Petrello
16812542f8 implement a simple periodic pod reaper for container groups
see: https://github.com/ansible/awx/issues/4911
2019-10-17 17:06:36 -04:00
Ryan Petrello
0bcd1db239 Merge pull request #3850 from ansible/container-groups-cleanup-adhoc
clean up pods for all k8s execution, not just playbook runs
2019-10-17 16:53:23 -04:00
Bill Nottingham
9edbcdc7b0 Merge pull request #3848 from wenottingham/have-never-read-or-watched-the-help
Adjust description/help text for profiling features.
2019-10-17 16:48:12 -04:00
Wander Boessenkool
9ab58e9757 Change healthcheck from wget and grep to python with httplib 2019-10-17 22:25:20 +02:00
Jake McDermott
1fae3534a1 Get the last two pages of events on page load
When the page loads, we want to retrieve and initially display enough
content for the scrollbar to show. If the very last page doesn't
have enough content for the scrollbar to show, the user won't be able
to scroll up to see more job history. To avoid this scenario, we always
fetch the last _two_ pages when loading the view.
2019-10-17 16:19:19 -04:00
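The page-selection rule described above can be sketched like this (hypothetical names and page size; the real logic is in the UI code):

```python
def initial_pages(total_events, page_size=50):
    """Pages fetched on load: the last page, plus the one before it, so
    a short final page still yields enough rows for the scrollbar."""
    last = max(1, -(-total_events // page_size))  # ceiling division
    return [last] if last == 1 else [last - 1, last]
```

A job with 120 events at 50 per page loads pages 2 and 3, so the view always starts with enough content to scroll.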
Graham Mainwaring
a038f9fd78 Merge pull request #3845 from ghjm/gather_analytics_dry_run
Add a --dry-run option to gather analytics locally, even if analytics is disabled in settings.
2019-10-17 16:17:18 -04:00
Ryan Petrello
ff1e1b2010 Merge pull request #3849 from ansible/k8s-native-execution-node
properly set execution_node for project and inv updates run "in k8s"
2019-10-17 16:06:16 -04:00
Wander Boessenkool
d6134fb194 Change /bin/ash to /bin/sh as requested by @shanecmd 2019-10-17 21:37:51 +02:00
Ryan Petrello
570ffad52b clean up pods for all k8s execution, not just playbook runs
see: https://github.com/ansible/awx/issues/4908
2019-10-17 15:29:44 -04:00
Ryan Petrello
1cf02e1e17 properly set execution_node for project and inv updates run "in k8s"
see: https://github.com/ansible/awx/issues/4907
2019-10-17 15:15:24 -04:00
Bill Nottingham
2f350cfda7 Adjust description/help text for profiling features.
Note that data is merely for sosreport collection for now, and
warn against increasing collection frequency.
2019-10-17 15:04:21 -04:00
Keith Grant
8e2622d117 add error messages for InventorySelect 2019-10-17 10:55:37 -07:00
Graham Mainwaring
7dd241fcff Add a --dry-run option to gather analytics locally, even if analytics is disabled in settings. 2019-10-17 13:54:13 -04:00
Ryan Petrello
c6a28756f2 Merge pull request #3841 from ansible/fix-5028
fix a 500 error when creating/editing notification templates
2019-10-17 12:06:36 -04:00
Ryan Petrello
94eb1aacb8 Merge pull request #3842 from ansible/callback-receiver-status
add support for `awx-manage run_callback_receiver --status`
2019-10-17 11:46:27 -04:00
Ryan Petrello
ffb1707e74 add support for awx-manage run_callback_receiver --status 2019-10-17 11:10:27 -04:00
Ryan Petrello
5e797a5ad5 Merge pull request #3840 from ansible/fix-5029
fix a minor bug in the notification templates UI
2019-10-17 10:33:34 -04:00
Ryan Petrello
4c92e0af77 fix a 500 error when creating/editing notification templates
see: https://github.com/ansible/awx/issues/5028
2019-10-17 08:53:01 -04:00
Ryan Petrello
24b9a6a38d fix a minor bug in the notification templates UI
see: https://github.com/ansible/awx/issues/5029
2019-10-17 08:43:04 -04:00
softwarefactory-project-zuul[bot]
857683e548 Merge pull request #4917 from jakemcdermott/ui-next-ci
Tune webpack config and add Dockerfile

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-16 22:29:58 +00:00
softwarefactory-project-zuul[bot]
4bf96362cc Merge pull request #5018 from ryanpetrello/cli-deprecation-warnings
warn about endpoint deprecation in the CLI

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-16 20:37:29 +00:00
Ryan Petrello
5001d3158d Merge pull request #3837 from ansible/rename-cyberark-aim
rename the CyberArk AIM credential type
2019-10-16 16:12:58 -04:00
Ryan Petrello
ce5bb9197e rename the CyberArk AIM credential type
see: https://github.com/ansible/awx/issues/4400
2019-10-16 15:58:35 -04:00
Ryan Petrello
309e89e0f0 Merge pull request #3813 from matburt/fix_smart_inventory_impact
Change host counting for task impact
2019-10-16 15:56:50 -04:00
Ryan Petrello
e27dbfcc0b Merge pull request #3836 from wenottingham/eeeeeeeeEEEEEEEEEEEEEEEEeeeeeeeeeenum
Remove removal requirement that isn't actually in the requirements
2019-10-16 15:35:31 -04:00
Bill Nottingham
7df448a348 Remove removal requirement that isn't actually in the requirements 2019-10-16 15:34:33 -04:00
Bill Nottingham
e220e9d8d7 Merge pull request #3834 from wenottingham/seriously-cut-it-out-google
Blacklist rsa even more.
2019-10-16 15:30:01 -04:00
Ryan Petrello
c8a29bac66 warn about endpoint deprecation in the CLI 2019-10-16 15:26:59 -04:00
Bill Nottingham
11d39bd8cc Blacklist rsa even more. 2019-10-16 15:17:19 -04:00
softwarefactory-project-zuul[bot]
1376b8a149 Merge pull request #5020 from ryanpetrello/devel
merge a variety of downstream bug fixes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-16 18:13:02 +00:00
Ryan Petrello
750208b2da Merge pull request #3831 from rooftopcellist/use_system_ca_for_collection
Make Analytics collections verify with system trusted CA
2019-10-16 13:55:07 -04:00
Christian Adams
9d81b00772 have analytics collections verify with system trusted CA list 2019-10-16 13:32:06 -04:00
Ryan Petrello
c7be94c2f2 Merge branch 'hardening' into devel 2019-10-16 13:15:20 -04:00
Ryan Petrello
1adf5ee51d Merge pull request #3805 from beeankha/cli_approval_notification_support
Enable Approval Notification Support for CLI
2019-10-16 13:09:22 -04:00
Ryan Petrello
da998fb196 Merge pull request #3828 from AlanCoding/deprecate_script
API deprecation of inventory script views
2019-10-16 11:27:41 -04:00
Ryan Petrello
b559860c78 Merge pull request #3804 from jbradberry/cli-no-truncate
Do not truncate job event list stdout when called from the CLI
2019-10-16 10:36:29 -04:00
Ryan Petrello
a31e2bdac1 Merge pull request #3829 from ansible/tz-fix
fix a tz parsing bug
2019-10-16 10:31:44 -04:00
beeankha
62e4ebb85d Minor change to README, plus a rebase. 2019-10-16 09:50:00 -04:00
beeankha
aa4f5ccca9 Add blank line (flake8) 2019-10-16 09:50:00 -04:00
beeankha
fdddba18be Update code to be compatible with py2 2019-10-16 09:50:00 -04:00
beeankha
ad89c5eea7 Enable approval notification support for CLI 2019-10-16 09:50:00 -04:00
Jake McDermott
e15bb4de44 Merge pull request #3827 from jakemcdermott/fix_flake8_error
Fix flake8 error
2019-10-16 09:47:57 -04:00
Ryan Petrello
5f2e1c9705 fix a tz parsing bug 2019-10-16 09:46:18 -04:00
Jake McDermott
8c5b0cbd57 Merge pull request #3815 from jakemcdermott/fix-3790
Allow navigation to previous launch prompt tabs
2019-10-16 09:36:54 -04:00
Jake McDermott
73272e338b Merge pull request #3809 from AlexSCorey/4944-4943-TOCBugs
Sys Aud can see CG forms, Adds correct CG form link, Disables CodeMirror
2019-10-16 09:35:02 -04:00
AlanCoding
86ef81cebf API deprecation of inventory script views 2019-10-16 09:34:21 -04:00
Jake McDermott
cd18ec408c Remove unused variable 2019-10-16 09:30:43 -04:00
Ryan Petrello
90c5efa336 Merge pull request #3824 from AlanCoding/one_less_redirect
Avoid unnecessary OPTIONS redirect
2019-10-16 09:17:03 -04:00
Alex Corey
4134d0b516 Updates PR and Addresses Console Error
This commit improves conditional rendering of the container groups form for
System Auditors. It also removes an unused ng-class condition in the IG list.
2019-10-16 09:15:44 -04:00
AlanCoding
2123092bdc Avoid unnecessary OPTIONS redirect 2019-10-16 09:08:22 -04:00
softwarefactory-project-zuul[bot]
ba7b53b38e Merge pull request #4992 from keithjgrant/4817-react-router-upgrade
4817 react router upgrade

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-15 20:29:16 +00:00
Keith Grant
cac5417916 delete commented code 2019-10-15 12:56:39 -07:00
Keith Grant
b318f24490 don't skip JobDetail test now that async act works 2019-10-15 12:55:26 -07:00
Bill Nottingham
c2743d8678 Merge pull request #3802 from wenottingham/oh-noes-not-again
Check the user's ansible.cfg for role/collection paths.
2019-10-15 14:21:18 -04:00
Bill Nottingham
60ca843b71 Use logger.exception instead of logger.warning. 2019-10-15 14:20:09 -04:00
Keith Grant
766f863655 update ProjectDetails tests with memoryHistory 2019-10-15 11:11:44 -07:00
Seth Foster
e85fe6a3b7 Merge pull request #3807 from fosterseth/release_3.6.0
Allow oauth2 settings to be set in the ui and api
2019-10-15 13:40:25 -04:00
Keith Grant
0b190c2d0d fix login/logout redirect behavior 2019-10-15 10:25:40 -07:00
Keith Grant
c7d73c4583 lint fixes 2019-10-15 10:25:40 -07:00
Keith Grant
7ad2c03480 clean up 'act()' warnings in tests 2019-10-15 10:25:40 -07:00
Keith Grant
9e44fea7b5 bump react to latest patch version 2019-10-15 10:25:40 -07:00
Keith Grant
baf5bbc53a finish updating tests for upgraded react-router 2019-10-15 10:25:40 -07:00
Keith Grant
20c24eb275 WIP upgrade react & react-router 2019-10-15 10:25:40 -07:00
Alex Corey
5cf84ddb60 Sys Aud can see CG forms, Adds correct CG form link, Disables CodeMirror
This allows the System Auditor to see the container groups form in a disabled state.
If the pod_spec_override has been changed, that field will be open when the page renders
but it will be disabled. It also greys out all CodeMirror text area fields for the System Auditor.
It adds the correct URL for the Container Groups message bar to inform users of possible
pitfalls associated with that feature.
2019-10-15 10:47:04 -04:00
Jake McDermott
85781d0bc1 Allow navigation to previous launch prompt tabs 2019-10-14 17:07:50 -04:00
softwarefactory-project-zuul[bot]
cbed525547 Merge pull request #4965 from marshmalien/project-detail
Add project detail and unit tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-14 18:30:30 +00:00
Michael Abashian
31c14b005c Merge pull request #3816 from ansible/4945-wfjt-responsivity-styles
Fix workflow results detail panel responsive style
2019-10-14 14:28:38 -04:00
Michael Abashian
7574541037 Merge pull request #3814 from ansible/4846-wfjt-notifications-placeholder
Style empty list placeholder text inline
2019-10-14 14:28:03 -04:00
Marliana Lara
e5184e0ed1 Fix workflow results detail panel responsive style 2019-10-14 13:50:59 -04:00
Marliana Lara
6282b5bacb Style empty list placeholder text inline 2019-10-14 13:11:31 -04:00
Wander Boessenkool
038fd9271d Properly escape quotes 2019-10-14 17:53:28 +02:00
Seth Foster
8e26e4edd5 Allow oauth2 settings to be set in the ui and api
Oauth2 settings were initialized early in the awx import stage, and
those settings were not modifiable. This change allows oauth2 to check
for settings in django.conf settings, which are dynamically updated
through api calls at runtime. As a result, oauth2 settings will match
the values in django.conf settings at any point in time.
2019-10-14 11:38:20 -04:00
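The import-time vs. call-time distinction this commit describes can be illustrated in a few lines (a generic Python sketch, not the actual AWX or django-oauth-toolkit code; the setting name is an assumption):

```python
class Settings:
    """Stand-in for django.conf.settings, which AWX updates at runtime
    through API calls."""
    OAUTH2_ACCESS_TOKEN_EXPIRE_SECONDS = 31536000

settings = Settings()

# Anti-pattern: captured once at import time; later API changes are lost.
FROZEN_EXPIRY = settings.OAUTH2_ACCESS_TOKEN_EXPIRE_SECONDS

def current_expiry():
    # Read on every call, so the value always matches django.conf
    # settings at that point in time.
    return settings.OAUTH2_ACCESS_TOKEN_EXPIRE_SECONDS
```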
softwarefactory-project-zuul[bot]
027e79b7f5 Merge pull request #5006 from keithjgrant/4550-searchbar-no-results
Retain search bar when zero results found

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-14 15:31:55 +00:00
Jeff Bradberry
cf89108edf Force the CLI to use no_truncate for the monitor calls 2019-10-14 11:21:11 -04:00
Matthew
e06bf9f87e Change host counting for task impact
Go through the job -> inventory module linkage to calculate the hosts for a more accurate view
of the number of hosts that could be impacted. This also adds a bailout that sets the host
count to the forks value, rather than assuming an unrealistically low number, in the case where
we can't determine the actual number of hosts because the inventory is missing
2019-10-14 10:41:21 -04:00
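The bailout described in this message, in hypothetical form (names are assumptions and the surrounding task-impact arithmetic is omitted):

```python
def count_hosts(inventory_host_count, forks):
    """Use the linked inventory's host count when it is available;
    otherwise bail out to the configured forks value instead of
    assuming an unrealistically low number."""
    if inventory_host_count is None:  # inventory missing
        return forks
    return inventory_host_count
```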
Wander Boessenkool
e87055095c Change 'rabbitmqctl status' to a wget | grep
- This reduces CPU usage from 250 millicores at idle to 25 millicores at idle
- The default rabbitmq user needs administrator privileges
2019-10-14 14:53:53 +02:00
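A healthcheck in this spirit might look like the following Python sketch, hitting the RabbitMQ management API's `/api/healthchecks/node` endpoint (host, port, and credentials here are placeholder assumptions; later commits in this range move the check to a Python/httplib script and source the credentials from environment variables):

```python
import base64
import http.client
import json

def basic_auth_header(user, password):
    """Build the Basic auth header the management API expects."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def is_healthy(body):
    """The management API answers {"status": "ok"} when the node is up."""
    try:
        return json.loads(body).get("status") == "ok"
    except ValueError:
        return False

def check_node(host="localhost", port=15672, user="guest", password="guest"):
    """One cheap HTTP round-trip instead of a full `rabbitmqctl status`."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", "/api/healthchecks/node",
                     headers=basic_auth_header(user, password))
        return is_healthy(conn.getresponse().read().decode())
    finally:
        conn.close()
```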
Jeff Bradberry
e672e68a02 Allow the job event list views to take a no_truncate GET param 2019-10-11 17:18:36 -04:00
Graham Mainwaring
263c44a09b Merge pull request #3808 from ghjm/workflow_approved_by
Add approved_by field to workflow approvals
2019-10-11 17:00:54 -04:00
Graham Mainwaring
08839e1381 Add approved_by field to workflow approvals 2019-10-11 16:57:13 -04:00
Keith Grant
faffbc3e65 retain search bar when zero results found 2019-10-11 12:11:41 -07:00
Alan Rominger
8e296bbf8c Merge pull request #3796 from AlanCoding/inventory_fq_36
use fully qualified inventory plugin name
2019-10-11 06:56:13 -04:00
Jake McDermott
03d59e1616 Tune webpack config and add Dockerfile
Add Dockerfile for running containerized dev server. Update webpack
config to make dev server available over exposed docker port.
2019-10-10 17:37:27 -04:00
Jeff Bradberry
9efa7b84df Depend on a serializer context variable no_truncate
to decide whether to turn off the ANSI control sequence-aware
truncation, instead of needing inappropriate awareness of the details
of the view that invoked the serializer.  This will also allow us to
have views that can more flexibly turn off the truncation under other
circumstances.
2019-10-10 16:08:17 -04:00
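The context-variable pattern described above, sketched without Django REST Framework (class and field names and the truncation size are assumptions; the real utility is ANSI control-sequence aware):

```python
def truncate_stdout(stdout, size):
    """Minimal stand-in for the reusable truncation utility."""
    if len(stdout) <= size:
        return stdout
    return stdout[:size] + "\u2026"

class JobEventSerializer:
    def __init__(self, context=None):
        self.context = context or {}

    def get_stdout(self, event):
        # The view opts out by setting context["no_truncate"]; the
        # serializer no longer inspects which view invoked it.
        if self.context.get("no_truncate"):
            return event["stdout"]
        return truncate_stdout(event["stdout"], 1024)
```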
Ryan Petrello
8e1f7695b1 Merge pull request #3803 from ansible/back-to-pending
Prevent pods from failing if the reason is because of a resource quota
2019-10-10 16:08:08 -04:00
Ryan Petrello
d6adab576f Merge pull request #3800 from ansible/update-vmware-inv-script
update to latest vmware_inventory.py
2019-10-10 16:07:31 -04:00
Jeff Bradberry
a803cedd7c Break out a new reusable truncate_stdout utility function 2019-10-10 16:07:08 -04:00
Ryan Petrello
d5bdf554f1 fix a programming error when k8s pods fail to launch 2019-10-10 15:54:00 -04:00
Shane McDonald
8f75382b81 Implement retry logic for container group pod launches 2019-10-10 15:53:56 -04:00
Shane McDonald
b93164e1ed Prevent pods from failing if the reason is because of a resource quota
Signed-off-by: Shane McDonald <me@shanemcd.com>
2019-10-10 15:53:50 -04:00
Bill Nottingham
31bdde00c9 Check the user's ansible.cfg for role/collection paths.
There's no other way to add our new paths reliably without breaking things.
2019-10-10 15:09:26 -04:00
Ryan Petrello
ed52e8348b Merge pull request #3799 from ansible/log-audit-note
add a note about settings.LOG_AGGREGATOR_AUDIT usage
2019-10-10 11:32:34 -04:00
Ryan Petrello
008fe42b4d update to latest vmware_inventory.py
06c7b87613/contrib/inventory/vmware_inventory.py
2019-10-10 10:49:15 -04:00
Ryan Petrello
d9dbbe6748 add a note about settings.LOG_AGGREGATOR_AUDIT usage
see: https://github.com/ansible/awx/pull/4872#issuecomment-540133448
2019-10-10 10:44:23 -04:00
AlanCoding
16ebfe3a63 use fully qualified inventory plugin name 2019-10-10 08:51:18 -04:00
softwarefactory-project-zuul[bot]
08df2cad68 Merge pull request #4974 from jakemcdermott/switch-default-job-panel
Switch default job panel to output

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2019-10-10 04:24:04 +00:00
softwarefactory-project-zuul[bot]
ca039f5338 Merge pull request #4957 from jakemcdermott/webhooks-labels
Use consistent wording for JT options

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-10 02:44:16 +00:00
Marliana Lara
4b83bda306 Wrap phrase for translation and update test 2019-10-09 22:24:50 -04:00
Marliana Lara
7fc4e8d20a Add project detail and unit tests 2019-10-09 22:24:49 -04:00
softwarefactory-project-zuul[bot]
4c697ae477 Merge pull request #4905 from rooftopcellist/collection_frequency
Update Frequency of Collection for Automation Analytics

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-10 02:22:51 +00:00
Christian Adams
844b8a803f update frequency of collection for automation analytics 2019-10-09 21:10:48 -04:00
Ryan Petrello
7fe32ab607 Merge pull request #4868 from rebeccahhh/survey-spec-default
Sanitize newlines out of survey option inputs
2019-10-09 21:09:33 -04:00
Ryan Petrello
cc27c95187 Merge pull request #4872 from ryanpetrello/log-aggregration-auditing
add a settings flag for writing all external logs to disk
2019-10-09 21:09:19 -04:00
softwarefactory-project-zuul[bot]
9745bfbdb4 Merge pull request #4931 from austlane/devel
Requirements.txt: Upgrade bundled pyvmomi to 6.7.3

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-10 00:20:50 +00:00
Austin
d6708b2b59 Upgrade bundled pyvmomi to 6.7.3
Fixes issues with vmware_guest_facts expecting version 6.7.1 or greater (JSON support)
2019-10-09 14:45:00 -04:00
Jake McDermott
c202574ae3 Switch default job panel to output 2019-10-09 14:31:44 -04:00
Rebeccah
7efacb69aa added in parsing for multiple choice and multiselect, which either takes a string, splits it up, and then eliminates any extra newlines, or just accepts a list. Extra newlines are sanitized out.
Signed-off-by: Rebeccah <rhunter@redhat.com>
2019-10-09 13:59:32 -04:00
Ryan Petrello
a076e84a33 add a settings flag for writing all external logs to disk 2019-10-09 13:54:54 -04:00
Jake McDermott
bc6648b518 Use consistent wording for JT options 2019-10-09 13:22:40 -04:00
softwarefactory-project-zuul[bot]
ff67d65065 Merge pull request #4932 from jakemcdermott/combine-lint-and-test
Add targets for combined lint and test

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 15:50:22 +00:00
softwarefactory-project-zuul[bot]
aab8495998 Merge pull request #4939 from AlanCoding/no_warning
Get rid of warning in collections install

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 15:19:28 +00:00
softwarefactory-project-zuul[bot]
0f1a92bd51 Merge pull request #4589 from AlanCoding/mah_galaxy
Allow use of user-specified private galaxy server with fallback

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 15:01:12 +00:00
softwarefactory-project-zuul[bot]
f10dc16014 Merge pull request #4921 from ryanpetrello/ryan-broke-it
fix a bug that breaks inventory update stdout when used in a workflow

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 14:50:52 +00:00
softwarefactory-project-zuul[bot]
bbc4ec48b9 Merge pull request #4933 from beeankha/org_notification_fix
Org-Level Notification Fix + Show URL 

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 14:25:43 +00:00
softwarefactory-project-zuul[bot]
fff8664219 Merge pull request #4941 from mabashian/3727-output-scroll
Removes restriction on scrolling for output fewer than 50 lines

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 13:46:46 +00:00
softwarefactory-project-zuul[bot]
bc8f5ad015 Merge pull request #4929 from mabashian/ui-next-inventories
Adds basic inventory list and scaffolding for inv/smart inv details and related tabs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 13:46:43 +00:00
softwarefactory-project-zuul[bot]
9fade47bbf Merge pull request #4937 from Spredzy/fix_svn_project_sync
project_update: Make subversion module honor locale

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-09 13:40:26 +00:00
softwarefactory-project-zuul[bot]
5ad922a861 Merge pull request #4938 from jakemcdermott/fix-missing-iso
Use summary_fields.controller_id to tell if job is isolated

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-08 20:18:03 +00:00
mabashian
f34bd632d8 Adds unit test coverage for add button rbac on several lists 2019-10-08 16:10:55 -04:00
mabashian
d239d55d2a Failed delete string pluralization 2019-10-08 15:15:13 -04:00
mabashian
d9ad906167 Adds basic inventory list and scaffolding for inv/smart inv details+related tabs 2019-10-08 15:09:32 -04:00
AlanCoding
d40ab38745 Get rid of warning in collections install 2019-10-08 14:19:52 -04:00
mabashian
8acd4376d9 Removes restriction on scrolling for output fewer than 50 lines 2019-10-08 14:15:25 -04:00
Jake McDermott
a7a194296c Use summary_fields.controller_id to tell if job is isolated 2019-10-08 12:08:23 -04:00
Yanis Guenane
166635ac79 project_update: Make subversion module honor locale
Currently the subversion module does not honor the system-configured locale,
as the module itself overrides it to `C`.

This commit makes the module honor the `LANG` locale during deployment,
allowing project updates with repos that contain UTF-8 characters.

Closes: https://github.com/ansible/awx/issues/4936
Signed-off-by: Yanis Guenane <yguenane@redhat.com>
2019-10-08 17:54:53 +02:00
AlanCoding
f10296b1b7 Revert "bump required version to 2.10"
This reverts commit e0e9c8321b.
2019-10-08 11:20:57 -04:00
AlanCoding
e0e9c8321b bump required version to 2.10 2019-10-08 11:00:57 -04:00
softwarefactory-project-zuul[bot]
52b145dbf6 Merge pull request #4930 from mabashian/4889-number-input-scroll
Prevent usage of mousewheel on spinner elements

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-08 14:25:07 +00:00
AlanCoding
0594bdf650 Add more galaxy server param validation 2019-10-07 20:03:40 -04:00
softwarefactory-project-zuul[bot]
ba9758ccc7 Merge pull request #4915 from jakemcdermott/fix-template-load-error
Gracefully handle missing template summary fields 

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-07 19:35:40 +00:00
beeankha
02b13fd4ae Enable notifications to send at org level, ...
... and list the URL in body of approval notification messages.
2019-10-07 15:34:09 -04:00
AlanCoding
06c62c4861 update docs for galaxy auth URL material 2019-10-07 14:52:10 -04:00
Jake McDermott
132555485c Add targets for combined lint and test
Reduce the total number of simultaneous zuul jobs.
2019-10-07 14:49:18 -04:00
softwarefactory-project-zuul[bot]
f1a9e68985 Merge pull request #4928 from saito-hideki/issue/4749
Fixed missing user-id in URL links when adding users to Team and Org …

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-07 18:32:19 +00:00
AlanCoding
c09039e963 Add setting for auth_url
Also adjust public galaxy URL setting to
allow using only the primary Galaxy server

Include auth_url in token exclusivity validation
2019-10-07 14:02:43 -04:00
AlanCoding
85c99cc38a Redact env vars for Galaxy token or password 2019-10-07 14:02:43 -04:00
AlanCoding
576ff1007e Describe usage of primary galaxy server in docs 2019-10-07 14:02:43 -04:00
AlanCoding
922e779a86 Rename private to primary in galaxy settings
use a setting for the public galaxy URL
2019-10-07 14:02:43 -04:00
AlanCoding
8bda048e6d validate galaxy server settings
involves some changes to the redact code
2019-10-07 14:02:42 -04:00
AlanCoding
093bf6877b Finish adding settings to UI 2019-10-07 14:02:42 -04:00
AlanCoding
d59d8562db Avoid redacting Galaxy URLs 2019-10-07 14:02:42 -04:00
AlanCoding
c566c332f9 Initial env var implementation of private galaxy server 2019-10-07 14:02:42 -04:00
mabashian
cb4a3a799e Prevent usage of mousewheel on spinner elements 2019-10-07 11:28:51 -04:00
softwarefactory-project-zuul[bot]
5a5b46aea0 Merge pull request #4903 from jakemcdermott/add-output-for-all-job-types
Make job output panel work with all job types

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-07 15:17:53 +00:00
Jake McDermott
85909c4264 Gracefully handle missing summary fields
Not all templates have `modified_by`, `created_by`, or other summary_fields.
Avoid a page load error by only referencing these fields if they exist.
2019-10-07 10:57:56 -04:00
softwarefactory-project-zuul[bot]
9d2c877143 Merge pull request #4913 from jakemcdermott/list-item-date-formatting
Add basic date formatter

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-07 14:27:39 +00:00
Hideki Saito
8e94c0686a Fixed missing user-id in URL links when adding users to Team and Org in modal
- Fixed issue #4749

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-10-07 09:47:29 +00:00
softwarefactory-project-zuul[bot]
52447f59c1 Merge pull request #4924 from ryanpetrello/runner-1-4-2
pin to runner==1.4.2

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-05 18:55:36 +00:00
Ryan Petrello
04eed02428 pin to runner==1.4.2 2019-10-04 17:11:34 -04:00
Ryan Petrello
b45b9333e1 Merge pull request #4716 from jladdjr/perf_stats
Enable collection of performance stats
2019-10-04 17:09:30 -04:00
softwarefactory-project-zuul[bot]
62659aefc2 Merge pull request #4189 from shanemcd/toc
Container Groups

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-04 20:19:12 +00:00
softwarefactory-project-zuul[bot]
a52ccd1086 Merge pull request #4859 from mabashian/ui-next-projects
Add projects list and scaffolding for project details+tabs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-04 19:54:15 +00:00
Jim Ladd
bd9a196ef9 bump ansible-runner to 1.4.1 2019-10-04 12:48:29 -07:00
Ryan Petrello
64b04e6347 bump ansible-runner to 1.4.0 2019-10-04 12:48:29 -07:00
Jim Ladd
15e70d2173 Add tests for resource profiling 2019-10-04 12:48:29 -07:00
Keith Grant
b981f3eed6 add resource profiling toggle in jobs settings 2019-10-04 12:48:29 -07:00
Jim Ladd
2c1c2f452d Add libcgroup-tools to dev env (provides cgcreate, cgexec, etc) 2019-10-04 12:48:29 -07:00
Jim Ladd
cf1c9a0559 Add awx settings for resource profiling 2019-10-04 12:48:29 -07:00
Jim Ladd
ed3f49a69d Add support for calling runner with perf stats 2019-10-04 12:48:29 -07:00
Alex Corey
74b398f920 Add Tech Preview notice to Container Group UI
and some refactoring
2019-10-04 15:17:57 -04:00
Ryan Petrello
ae0c9ead40 fix a bug that breaks inventory update stdout when used in a workflow
see: https://github.com/ansible/awx/issues/4920
related: https://github.com/ansible/awx/pull/4731
2019-10-04 15:17:39 -04:00
mabashian
d6a0f929a8 Fix merge conflict fallout 2019-10-04 15:10:02 -04:00
softwarefactory-project-zuul[bot]
f5358f748e Merge pull request #4919 from ryanpetrello/jupyter-minus-minus
remove jupyter from supervisor in the dev env

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-04 19:07:47 +00:00
mabashian
c9e889ca82 Removes residual references to org from project list components. Assign IDs to project and jt related tabs after array has been finalized. 2019-10-04 15:06:06 -04:00
mabashian
f502fbfad6 Put project related tabs in the correct order 2019-10-04 15:06:06 -04:00
mabashian
b8fe3f648e Add projects list and scaffolding for project details+tabs 2019-10-04 15:06:06 -04:00
softwarefactory-project-zuul[bot]
8d3ecf708b Merge pull request #4856 from AlexSCorey/4247-ResponsveTemplateList
Makes template list Responsive

Reviewed-by: Alex Corey <Alex.swansboro@gmail.com>
             https://github.com/AlexSCorey
2019-10-04 18:50:24 +00:00
Alex Corey
9289ade1ec IG List responsiveness 2019-10-04 13:21:28 -04:00
Alex Corey
958c8a4177 Fixes Credential Type Issue, ExtraVars Toggle Issue, Job Results Inert Link 2019-10-04 13:21:28 -04:00
Shane McDonald
59413e0a8f Change default pod spec in OPTIONS request to json 2019-10-04 13:21:28 -04:00
Mat Wilson
ad1e7c46c3 add k8s cred type to awxkit 2019-10-04 13:21:28 -04:00
Alex Corey
8fabb1f10d Show override toggle as off when pod_spec matches default 2019-10-04 13:21:28 -04:00
Alex Corey
895c71f62c removes instances tab from CGs 2019-10-04 13:21:27 -04:00
Alex Corey
32a57e9a97 add default pod spec to edit 2019-10-04 13:21:27 -04:00
Alex Corey
584777e21e Adds Tabs to CGs 2019-10-04 13:21:27 -04:00
Jake McDermott
61a756c59d add is_containerized to ig serializer 2019-10-04 13:21:27 -04:00
Jake McDermott
b547a8c3ca link to container group from job runs 2019-10-04 13:21:27 -04:00
Alex Corey
007f33c186 Add Container Group Form Render and cred type modal pops up
modal render proper cred

WIP

modal rendering properly

card for edit CG renders, no fields

add code mirror and some list styling

address PR issues
2019-10-04 13:21:27 -04:00
Shane McDonald
aab1cd68b0 Fix InstanceGroup summary fields 2019-10-04 13:21:24 -04:00
Shane McDonald
92cc9a9213 Create separate Make target for cleaning API-related artifacts
My workflow for running tests is now:

```
$ docker exec -ti tools_awx_1 make clean-api awx-link test
```
2019-10-04 13:21:23 -04:00
Shane McDonald
b9c675e3a2 API documentation for container groups 2019-10-04 13:21:23 -04:00
Shane McDonald
bd5003ca98 Task manager / scheduler Kubernetes integration 2019-10-04 13:21:21 -04:00
Jake McDermott
d3b0edf75a Apply date formatter to lists and details 2019-10-04 13:17:15 -04:00
Jake McDermott
9421781cc7 Add basic date formatter 2019-10-04 13:16:42 -04:00
Shane McDonald
a9059edc65 Allow associating a credential with an instance group 2019-10-04 12:54:31 -04:00
Shane McDonald
7850e3a835 Ignore unison and emacs temporary files 2019-10-04 12:54:31 -04:00
Ryan Petrello
34d02011db remove jupyter from supervisor in the dev env
if you use this tool, just run `make jupyter`
2019-10-04 11:53:26 -04:00
softwarefactory-project-zuul[bot]
353692a0ba Merge pull request #4896 from mabashian/ui_next-notifs
Refactor notifications list and add it to JT details

Reviewed-by: Michael Abashian
             https://github.com/mabashian
2019-10-04 14:50:35 +00:00
mabashian
90451e551d Add notifications to the breadcrumb config for templates 2019-10-04 09:48:38 -04:00
mabashian
2457926f0a Refactor notifications list to be more generic. Hook notifications tab up on JT details. 2019-10-03 09:48:38 -04:00
softwarefactory-project-zuul[bot]
cf27ac295a Merge pull request #4893 from wenottingham/we-will-not-present-you-for-confirmation
Remove text about confirming launch-time passwords.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-04 10:40:56 +00:00
Jim Ladd
7e40673dd0 Add docs for perf data collection 2019-10-03 23:54:00 -07:00
softwarefactory-project-zuul[bot]
daa6f35d02 Merge pull request #4914 from jladdjr/increase_instance_version_length
Increase instance version length

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-04 05:37:40 +00:00
Jim Ladd
cdcf2fa4c2 Increase instance version length 2019-10-03 21:24:07 -04:00
Jake McDermott
275765b8fc Refactor language utility
Move the language helper out of RootProvider and into a utilities
module so that it can be more easily reused where needed. In some
cases we want the full language code, so that logic has been moved
into a separate function.
2019-10-03 20:26:47 -04:00
softwarefactory-project-zuul[bot]
2786395808 Merge pull request #4436 from jbradberry/webhook-receivers
Webhook receivers

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 22:23:22 +00:00
Jake McDermott
731982c736 Build correct job_event url for different job types 2019-10-03 16:27:02 -04:00
softwarefactory-project-zuul[bot]
d2214acd6d Merge pull request #4895 from rooftopcellist/rm_cruft
Remove unneeded debug statement

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 20:07:44 +00:00
Bill Nottingham
9d593f0715 Remove text about confirming launch-time passwords.
We haven't done confirmation on these for a long time.
2019-10-03 15:22:15 -04:00
softwarefactory-project-zuul[bot]
9e778b24c7 Merge pull request #4890 from rooftopcellist/refresh_expiry
Add RefreshToken Expiration setting in UI

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 19:13:58 +00:00
softwarefactory-project-zuul[bot]
bdd28bcb3b Merge pull request #4899 from ryanpetrello/cli-improved-name-lookups
attempt to properly map more foreign keys to named lookups

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 18:52:57 +00:00
Christian Adams
19a6c70858 remove cruft leftover from the postgresql upgrade 2019-10-03 14:43:56 -04:00
softwarefactory-project-zuul[bot]
393474f33d Merge pull request #4701 from AlanCoding/awx_modules_merge
Migrate Ansible tower modules to collection maintained alongside AWX

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 18:25:55 +00:00
softwarefactory-project-zuul[bot]
b41e6394c5 Merge pull request #4898 from mabashian/upgrade-handlebars-uglify
Bumps handlebars and uglify-js deps

Reviewed-by: Michael Abashian
             https://github.com/mabashian
2019-10-03 17:35:24 +00:00
Christian Adams
f36f10a702 add RefreshToken Expiration setting in UI 2019-10-03 13:13:31 -04:00
Ryan Petrello
fccd6a2286 attempt to properly map more foreign keys to named lookups
this is imperfect, but it's at least an improvement until we can come up
with a better solution

in order to really do this right, the API itself probably needs to grow
some more metadata that allows us to specify *actual* `type`s that
relate to API resources

see: https://github.com/ansible/awx/issues/4874
2019-10-03 12:59:06 -04:00
mabashian
ea2312259f Bumps handlebars and uglify-js deps 2019-10-03 12:54:56 -04:00
softwarefactory-project-zuul[bot]
0e2b7767f5 Merge pull request #4900 from mabashian/fix-snapshot
Fix broken notif list snapshot

Reviewed-by: awxbot
             https://github.com/awxbot
2019-10-03 16:52:56 +00:00
mabashian
82505cd43a Fix broken notif list snapshot 2019-10-03 12:19:50 -04:00
softwarefactory-project-zuul[bot]
5b17ce5729 Merge pull request #4886 from fosterseth/fix-4710-clearexpiredtokens
Set oauth2 refresh token expiration setting

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 16:19:10 +00:00
softwarefactory-project-zuul[bot]
82b313c767 Merge pull request #4871 from rooftopcellist/check_db
Add awx-manage command to check db connection

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 15:16:32 +00:00
softwarefactory-project-zuul[bot]
fec67a3545 Merge pull request #4888 from ryanpetrello/cli-send-receive-note
cli: add a note about send/receive

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 14:17:08 +00:00
softwarefactory-project-zuul[bot]
202af079eb Merge pull request #4849 from mabashian/3779-license-error
Show error body when license application fails

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 12:57:23 +00:00
softwarefactory-project-zuul[bot]
f78d7637a4 Merge pull request #4853 from mabashian/4778-workflow-text-overlap
Prevent text overlap on workflow nodes when an approval node is deleted

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-03 12:48:56 +00:00
softwarefactory-project-zuul[bot]
71bd257191 Merge pull request #4875 from keithjgrant/4684-jt-form-cleanup
Job Template form cleanup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 22:24:57 +00:00
Keith Grant
82064eb4dc fix prop type error in LabelSelect 2019-10-02 14:17:14 -07:00
Seth Foster
bbd625f3aa update help_text to include information about REFRESH_TOKEN_EXPIRE_SECONDS 2019-10-02 17:16:01 -04:00
softwarefactory-project-zuul[bot]
3725ccb43b Merge pull request #4865 from mabashian/3607-settings
Fix settings page rendering when some fields are set manually in file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 19:43:18 +00:00
Seth Foster
8b22c86b10 Register default settings for OAUTH2_PROVIDER app
Grab AUTHORIZATION_CODE_EXPIRE_SECONDS from oauth2_settings
rather than hard code.

Add REFRESH_TOKEN_EXPIRE_SECONDS to valid_key_names
in OAuth2ProviderField class
2019-10-02 15:29:45 -04:00
softwarefactory-project-zuul[bot]
e6a5d18ebe Merge pull request #4885 from wenottingham/minor-typo
Apply some minor copy edits to awx docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 18:35:32 +00:00
softwarefactory-project-zuul[bot]
eca191f7f5 Merge pull request #4839 from rebeccahhh/devel
added in ability to delete a user if they are part of your organization

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 18:35:27 +00:00
Ryan Petrello
35fe127891 cli: add a note about send/receive 2019-10-02 14:06:56 -04:00
Seth Foster
db1ad2de95 Set REFRESH_TOKEN_EXPIRE_SECONDS
- Set OAUTH2 REFRESH_TOKEN_EXPIRE_SECONDS to 1 month
  (2628000 seconds)
- If not set, awx-manage cleartokens, or cleanup_tokens,
  will not work properly
- Once cleartokens is run, this setting is the amount of
  time after an access token expires that we keep its
  refresh token in the database
2019-10-02 13:51:46 -04:00
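The retention rule described in the commit above can be sketched as a django-oauth-toolkit configuration fragment; the dict key names follow django-oauth-toolkit's `OAUTH2_PROVIDER` settings, but the surrounding AWX wiring is an assumption here, not the project's actual settings file.

```python
# Hypothetical sketch of the setting described above (not AWX's actual
# settings module). OAUTH2_PROVIDER is django-oauth-toolkit's settings dict;
# REFRESH_TOKEN_EXPIRE_SECONDS controls how long a refresh token is kept
# after its access token expires, so `awx-manage cleartokens` can prune it.
OAUTH2_PROVIDER = {
    'REFRESH_TOKEN_EXPIRE_SECONDS': 2628000,  # ~1 month, per the commit message
}
```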
Bill Nottingham
ac12a9cfe1 Apply some minor copy edits 2019-10-02 13:46:10 -04:00
softwarefactory-project-zuul[bot]
dacda644ac Merge pull request #4882 from jbradberry/isolated-playbooks-with-spaces
Quote the playbook passed to runner from the isolated manager

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 17:40:20 +00:00
softwarefactory-project-zuul[bot]
cfa407e001 Merge pull request #4843 from lunarthegrey/patch-1
Fix issue #3705

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 17:38:13 +00:00
softwarefactory-project-zuul[bot]
329630ce2a Merge pull request #4869 from mabashian/4192-enter-textarea
Unbind keydown listeners when Alert modals are closed.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 15:11:21 +00:00
Jeff Bradberry
1122d28a1b Quote the playbook passed to runner from the isolated manager 2019-10-02 11:10:19 -04:00
softwarefactory-project-zuul[bot]
a9b299cd98 Merge pull request #4881 from ryanpetrello/cli-ssh-example
cli: warn users if they specify a missing file with @

Reviewed-by: Yanis Guenane
             https://github.com/Spredzy
2019-10-02 15:00:51 +00:00
Ryan Petrello
6c1488ed00 cli: warn users if they specify a missing file with @ 2019-10-02 10:28:04 -04:00
softwarefactory-project-zuul[bot]
1f62d223a2 Merge pull request #4880 from ryanpetrello/cli-install-rst
template CLI install documentation into a separate file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-02 14:04:05 +00:00
Ryan Petrello
da23c4e949 template CLI install documentation into a separate file 2019-10-02 09:41:11 -04:00
Keith Grant
6d00d43273 prettier 2019-10-01 15:20:51 -07:00
Keith Grant
77b68e0eb7 use getAddedAndRemoved for saving instance groups 2019-10-01 14:37:42 -07:00
softwarefactory-project-zuul[bot]
945d100302 Merge pull request #4836 from fosterseth/fix-4334-active-user-removed
check if User exists before saving UserSessionMembership

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-01 19:59:11 +00:00
Christian Adams
c0fd70f189 add mgmt cmd to check db connection 2019-10-01 15:40:43 -04:00
Keith Grant
ba4e79fd3a update JT form tests 2019-10-01 11:03:36 -07:00
AlanCoding
db0bd471c3 rename playbook vars to have collection_ 2019-10-01 13:45:07 -04:00
mabashian
616fe285fa Unbind keydown listeners when Alert modals are closed.
This fixes a bug where attempting to hit enter in any sort of textarea would be ignored if the user had previously encountered an Alert modal while navigating throughout the application.
2019-10-01 12:38:43 -04:00
Jake McDermott
b4b2cf76f6 Refactor job secondary label assignment 2019-10-01 10:33:47 -04:00
mabashian
4aeda635ff Checks to make sure that OAUTH2_PROVIDER key is returned by api in settings options before attempting to use it. This fixes a bug where setting ACCESS_TOKEN_EXPIRE_SECONDS and AUTHORIZATION_CODE_EXPIRE_SECONDS manually in a file was causing the settings page to render improperly. 2019-10-01 10:17:33 -04:00
softwarefactory-project-zuul[bot]
7e8c00ee24 Merge pull request #4864 from ryanpetrello/dont-stop-the-beat
warn loudly if celerybeat encounters AMQP connection issues

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-01 13:56:54 +00:00
Ryan Petrello
27c4e35ee4 warn loudly if celerybeat encounters AMQP connection issues
related: https://github.com/ansible/awx/pull/4857
2019-10-01 09:32:22 -04:00
softwarefactory-project-zuul[bot]
80a17987ff Merge pull request #4854 from ryanpetrello/cli-login-formatting
cli: make `awx login` respect the -f flag

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-10-01 01:14:15 +00:00
softwarefactory-project-zuul[bot]
10a6a29a07 Merge pull request #4857 from ryanpetrello/kombu-dns
make kombu DNS failures louder in the logs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-30 21:35:29 +00:00
Ryan Petrello
b80eafe4a1 make kombu DNS failures louder in the logs 2019-09-30 16:48:09 -04:00
Alex Corey
6c443a0a6a fix lint error 2019-09-30 16:38:51 -04:00
Alex Corey
55378c635e Makes template list responsive 2019-09-30 16:28:24 -04:00
Ryan Petrello
a4047e414f cli: make awx login respect the -f flag
see: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/awx-project/ZAlhpLMBzVw/fUSqujoWBQAJ
2019-09-30 15:38:08 -04:00
Jeff Bradberry
d549877ebd Check for the existence of a UnifiedJobTemplate with the same webhook GUID
instead of trying (incorrectly) to be specific about the JT/WFJT type.
2019-09-30 15:26:49 -04:00
Rebeccah
28a119ca96 re-worked unit test into 3 separate unit tests, one for orphans, one for group members, and one for multi-group members 2019-09-30 15:07:19 -04:00
Rebeccah
758529d7dd added in unit test for org admin deleting user 2019-09-30 15:07:19 -04:00
Rebeccah
075d1a2521 removed superuser check since can_admin already checks that, and also added allow orphans so admins can delete orphaned users 2019-09-30 15:07:19 -04:00
Rebeccah
69924c9544 added in ability to delete a user if they are part of your organization 2019-09-30 15:07:19 -04:00
softwarefactory-project-zuul[bot]
b858001c8f Merge pull request #4851 from ryanpetrello/fix-host-key-checking
improve host key checking configurability

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-30 18:38:05 +00:00
Ryan Petrello
82be87566f improve host key checking configurability
see: https://github.com/ansible/tower/issues/3737
2019-09-30 14:13:07 -04:00
mabashian
52b8b7676a Prevent text overlap on workflow nodes when an approval node is deleted 2019-09-30 13:38:46 -04:00
Jeff Bradberry
204c05aa3b Change the webhook post-back payload to use the job UI url 2019-09-30 13:32:23 -04:00
Jeff Bradberry
ac34b24868 Post the job or workflow url to the webhook service as part of the status 2019-09-30 13:32:23 -04:00
Jeff Bradberry
ffe89820e3 Return to using ContentType.kind
which is _not_ the `kind` attribute being deprecated.
2019-09-30 13:32:23 -04:00
Jeff Bradberry
062c4908c9 Modify the webhook debounce logic
to check if we've already previously run a job with the same webhook
GUID plus template id.  This will allow organizations to write
multiple JT/WFJTs to handle the same set of webhook events.
2019-09-30 13:32:23 -04:00
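The debounce rule described in the commit above can be sketched as follows, under the assumption that previously handled events are tracked as (webhook GUID, template id) pairs; `seen` is a hypothetical lookup set, not AWX's actual storage.

```python
# Minimal sketch of the webhook debounce check described above.
# `seen` stands in for whatever store records already-handled events;
# the function name and signature are illustrative, not AWX's code.
def should_launch(webhook_guid, template_id, seen):
    # Launch unless this exact template has already handled this GUID.
    # A different JT/WFJT may still react to the same webhook event.
    return (webhook_guid, template_id) not in seen
```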
Jeff Bradberry
b6b70e55fb Address a variety of small review issues 2019-09-30 13:32:23 -04:00
Jeff Bradberry
6aa6471b7c Add help_text to the new fields 2019-09-30 13:32:23 -04:00
Jeff Bradberry
e14d4ddec6 Add a doc template to the webhook key API view 2019-09-30 13:32:23 -04:00
Jake McDermott
84dcda0a61 Use job launch_type field to detect webhook jobs
We have a launch type field for categorizing the different ways jobs
can be launched. This updates the UI to use this field when checking
if a job was launched by a webhook.
2019-09-30 13:32:23 -04:00
Jeff Bradberry
df24f5d28f Add a new launch_type of 'webhook' 2019-09-30 13:32:23 -04:00
Jeff Bradberry
fea7f914d2 Avoid the use of CredentialType.kind 2019-09-30 13:32:23 -04:00
Elijah DeLee
d4c8167b1b add arguments to awxkit for webhooks on jt or wfjt 2019-09-30 13:32:22 -04:00
Jeff Bradberry
a4873d97d8 Raise a validation error if a credential is set while the service is not 2019-09-30 13:32:22 -04:00
Jeff Bradberry
efe4ea6575 Fix the webhook receiver url for workflow jobs 2019-09-30 13:32:22 -04:00
Jeff Bradberry
b415c31b4f Fix problems with posting to Gitlab's API 2019-09-30 13:32:22 -04:00
Jeff Bradberry
e91462d085 Update the Webhook Credential help text tooltip
to make it more apparent to the user that this is an optional part of
the feature, and that failure to add a webhook credential will disable
status post-backs.
2019-09-30 13:32:22 -04:00
Jake McDermott
e85ff83be6 Apply 403 alert fixes for Workflows, too 2019-09-30 13:32:22 -04:00
Jake McDermott
d500c1bb40 Don't alert user of 403 errors for webhook key 2019-09-30 13:32:22 -04:00
Jeff Bradberry
885841caea Drop Bitbucket support
since only the Bitbucket Server product supports signed payloads;
bitbucket.org does not, and we require signed payloads.
2019-09-30 13:32:22 -04:00
Jake McDermott
f7396cf81a Always include selected webhook service in creation requests 2019-09-30 13:32:22 -04:00
Jeff Bradberry
286da3a7eb Posting webhook status now works 2019-09-30 13:32:21 -04:00
Jeff Bradberry
40b03eb6ef Enable the call to update_webhook_status
by calling it directly within send_notification_templates.  Also,
update the context field in the payload to be either 'ansible/awx' or
'ansible/tower', depending on which is being used.
2019-09-30 13:32:21 -04:00
Jeff Bradberry
c76c531b7a Provide a payload for the webhook status post-back 2019-09-30 13:32:21 -04:00
Jake McDermott
75d3359b6f make label consistent with help text 2019-09-30 13:32:21 -04:00
Jeff Bradberry
4ad5054222 Add logic to post the job status for webhooks back to the service
under some circumstances.
2019-09-30 13:32:21 -04:00
Jeff Bradberry
aa34984d7c Fix the git ref extractor for Gitlab pull requests 2019-09-30 13:32:21 -04:00
Jake McDermott
08594682a4 stub options request in workflow add unit test 2019-09-30 13:32:21 -04:00
Jeff Bradberry
d73abda5d1 Update the webhook receiver git ref extractor logic
to deal with the null-ref case, and to deal correctly with Github push events.
2019-09-30 13:32:21 -04:00
Jake McDermott
3bc91f123e add trailing '/' to webhook urls 2019-09-30 13:32:21 -04:00
Jake McDermott
41ba5c0968 add webhook fields to workflow unit test mock 2019-09-30 13:32:21 -04:00
Jeff Bradberry
e8e3a601b2 Pull out a git ref for each event type where we might care 2019-09-30 13:32:21 -04:00
Jake McDermott
b96c03e456 represent webhooks on job lists 2019-09-30 13:32:21 -04:00
Jake McDermott
5e9448a854 always show launched by webhook details if there's a webhook guid 2019-09-30 13:32:21 -04:00
Jake McDermott
6b17e86f30 add launched-by-webhook details to job runs 2019-09-30 13:32:21 -04:00
Jake McDermott
00337990db add webhook fields to workflows 2019-09-30 13:32:21 -04:00
Jake McDermott
1a33ae61a7 use key icon for webhook cred 2019-09-30 13:32:21 -04:00
Jake McDermott
5f7bfaa20a support server-side webhook key generation 2019-09-30 13:32:21 -04:00
Jeff Bradberry
178a2c7c49 Disable the authentication classes for the webhook receivers
One of them was consuming the body of the posts.  We do still need to
have an extraneous `request.body` expression, though now in
WebhookReceiverBase.post, since the `request.data` expression in the
logging also consumes the request body.
2019-09-30 13:32:20 -04:00
Jeff Bradberry
58e5f02129 Expose the new webhook fields in the job and workflow serializers 2019-09-30 13:32:20 -04:00
Jeff Bradberry
dd6c97ed87 Include a message in the webhook response 2019-09-30 13:32:20 -04:00
Jeff Bradberry
7aa424b210 Make sure that the new webhook fields are populated when firing off a job
Also, added a temporary hacky workaround for the fact that something
in our request/response stack for APIView is consuming the request
contents in an unfriendly way, preventing the `.body` @property from
working.
2019-09-30 13:32:20 -04:00
Jake McDermott
e0a363beb8 issue network calls for setting and getting webhook key 2019-09-30 13:32:20 -04:00
Jake McDermott
48eb502161 wip 2019-09-30 13:32:20 -04:00
Jake McDermott
151de89c26 add webhook credential field 2019-09-30 13:32:20 -04:00
Jake McDermott
f5c151d5c4 add webhook url field 2019-09-30 13:32:20 -04:00
Jake McDermott
17b34b1e36 add webhook service field 2019-09-30 13:32:20 -04:00
Jeff Bradberry
ee1d118752 Add the webhook receiver url to the related urls in the serializers 2019-09-30 13:32:20 -04:00
Jeff Bradberry
245931f603 Debounce when multiple copies of the same webhook event come in 2019-09-30 13:26:04 -04:00
Jeff Bradberry
095aa77857 Create a new model mixin for Job and WorkflowJob webhook fields 2019-09-30 13:26:04 -04:00
Jeff Bradberry
bb1397a3d4 Validate the webhook credential
- we should allow a null credential, so that the admin can choose to configure not posting back status changes of the triggered job
- the credential must be of the new 'token' kind
- if we do configure a credential, its type must match the selected SCM service
2019-09-30 13:26:04 -04:00
Jeff Bradberry
5848f0360a Update test_default_cred_types to include the new personal access token types 2019-09-30 13:26:04 -04:00
Jeff Bradberry
83fc2187cc Fix the summary fields for webhook_credential 2019-09-30 13:26:04 -04:00
Jeff Bradberry
4dba9916dc Add a new set of personal access token credential types 2019-09-30 13:26:03 -04:00
Jeff Bradberry
8836ed44ce Construct an ID for Gitlab webhooks
by taking the SHA1 of the body of the webhook request.
2019-09-30 13:26:03 -04:00
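The ID construction described in the commit above can be sketched directly with the standard library; the helper name is hypothetical, not AWX's actual function.

```python
import hashlib

def gitlab_webhook_id(body: bytes) -> str:
    # Sketch of the approach above: Gitlab sends no unique delivery ID,
    # so a stable event ID is derived from the SHA-1 of the raw request
    # body. The function name is illustrative only.
    return hashlib.sha1(body).hexdigest()
```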
Jeff Bradberry
992c414737 Launch a Job or WorkflowJob based on the incoming webhook 2019-09-30 13:26:03 -04:00
Jeff Bradberry
66a8186995 Get the webhook receiver views to work at least minimally 2019-09-30 13:26:03 -04:00
Jeff Bradberry
fa15696ffe Remove some dead comments 2019-09-30 13:26:03 -04:00
Jeff Bradberry
82a0dc0024 Cycle or unset the webhook key if the webhook service changes
Also, tests.
2019-09-30 13:26:03 -04:00
Jeff Bradberry
d4b20b7340 Update tests to use the expect keyword argument for get() and post() 2019-09-30 13:26:03 -04:00
Jeff Bradberry
c0ad5a7768 Expose the webhook_service and webhook_credential fields in the serializer
webhook_credential specifically as a summary field.
2019-09-30 13:26:03 -04:00
Jeff Bradberry
d9ac291115 Add some RBAC oriented tests for the webhook secret key view 2019-09-30 13:26:03 -04:00
Jeff Bradberry
6b86cf6e86 Revert to using the explicit dispatch to the appropriate model
since passing the model class at url include time doesn't work.
2019-09-30 13:26:03 -04:00
Jeff Bradberry
771ef275d4 Include a check for the webhook_key related resource url
in the tests for JTs and WFJTs.
2019-09-30 13:26:03 -04:00
Jeff Bradberry
2310413dc0 Fix problem with the tests by dynamically setting the view model
instead of using a model @property or lookup method.
2019-09-30 13:26:03 -04:00
Jeff Bradberry
edb9d6b16c Add the related link to the webhook secrets view to the serializers 2019-09-30 13:26:03 -04:00
Jeff Bradberry
7973a18103 Switch to using a permission class for the webhook secret key view
This view is now behaving as expected for superuser, org admin, JT
admin, JT exec, and org member roles.
2019-09-30 13:23:27 -04:00
Jeff Bradberry
747a2283d6 Attempt to get the RBAC right on the webhook secret key view 2019-09-30 13:23:27 -04:00
Jeff Bradberry
9d269d59d6 Add an api view for obtaining and rotating the webhook key 2019-09-30 13:23:27 -04:00
Jeff Bradberry
b0c530402f Move the webhook url include from the top level urlconf to the JT/WFJT urlconfs 2019-09-30 13:23:26 -04:00
Jeff Bradberry
50a54c9214 Forbid access to the webhook receiver views if webhook_key is not set 2019-09-30 13:23:26 -04:00
Jeff Bradberry
8f97dbf781 Hook in the webhook receiver views into the urlconf 2019-09-30 13:23:26 -04:00
Jeff Bradberry
a7a99ed141 Beginnings of the API views for the webhook receivers 2019-09-30 13:23:26 -04:00
Jeff Bradberry
d6116490c6 Add the webhook-specific fields to JobTemplate and WorkflowJobTemplate 2019-09-30 13:23:26 -04:00
softwarefactory-project-zuul[bot]
ff8e896b0f Merge pull request #4850 from wenottingham/to-the-cloud!
Adjust help message; we're no longer using the insights client

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-30 16:46:22 +00:00
Bill Nottingham
fc70d8b321 Adjust help message; we're no longer using the insights client 2019-09-30 12:17:46 -04:00
mabashian
a61306580a Show error body when license application fails 2019-09-30 12:09:24 -04:00
Lunar
afe38b8e68 Change to ~/.awx 2019-09-30 10:58:37 -05:00
softwarefactory-project-zuul[bot]
505dcf9dd2 Merge pull request #4657 from beeankha/wf_approval_notification
[WIP] Notification Support for Workflow Approval Nodes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-30 14:54:59 +00:00
softwarefactory-project-zuul[bot]
1d2123a4f9 Merge pull request #4845 from ryanpetrello/cli-fix-ujt-allow
cli: fix `awx unified_job_templates`

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-30 14:10:14 +00:00
Ryan Petrello
4adf9bab67 cli: fix awx unified_job_templates
this endpoint doesn't return an HTTP Allow header at all (because you
can't really do anything other than list templates)
2019-09-30 09:32:58 -04:00
Lunar
adac87adf2 Fix issue #3705
/tmp should not be used; it gets wiped and causes issues, as noted in issue #3705.
2019-09-29 23:20:09 -05:00
AlanCoding
7dd8e35e8c Use namespaced doc fragment, cleanup
doc fragment will now be at awx.awx.auth,
changed from just tower, which was sourced from core

remove Makefile things no longer needed
2019-09-27 23:09:39 -04:00
Keith Grant
554a63d8fc write LabelSelect tests 2019-09-27 15:30:42 -07:00
Keith Grant
da149d931c rework MultiSelect into controlled input; refactoring 2019-09-27 15:06:15 -07:00
mabashian
3182197287 Makes notification toggles more responsive on smaller screens. 2019-09-27 15:48:00 -04:00
beeankha
9ed4e1682d Remove redundant whitespace 2019-09-27 15:48:00 -04:00
beeankha
5aa6a94710 Enable approval notifications to show up at...
...workflow jobs notifications endpoint
2019-09-27 15:48:00 -04:00
beeankha
96689f45c8 Update approval notification message 2019-09-27 15:48:00 -04:00
beeankha
ce6a276e1f Update migration file 2019-09-27 15:48:00 -04:00
beeankha
8eb1484129 Update migration file, change status syntax 2019-09-27 15:48:00 -04:00
beeankha
1ddf9fd1ed Fix up models, clean up code re: PR comments 2019-09-27 15:48:00 -04:00
beeankha
17a8e08d93 Add unit tests for approval notifications 2019-09-27 15:48:00 -04:00
beeankha
f835c8650b Enable org-level approval notifications to work. 2019-09-27 15:48:00 -04:00
beeankha
aa5a4d42c7 Enable email notifications to work,
...and customize default messages
2019-09-27 15:48:00 -04:00
beeankha
57fd6b7280 Set default messages for approval notifications 2019-09-27 15:48:00 -04:00
mabashian
7eb7aad491 Adds approval toggles to wf and org notif lists 2019-09-27 15:48:00 -04:00
beeankha
e2b8adcd09 Remove notification endpoint from approvals list 2019-09-27 15:48:00 -04:00
beeankha
13450fdbf9 Set up approval notifications to send 2019-09-27 15:48:00 -04:00
beeankha
6be2d84adb Add endpoints for approval node notifications
...and also add a migration file.
2019-09-27 15:48:00 -04:00
softwarefactory-project-zuul[bot]
d2a5af44de Merge pull request #4792 from mabashian/relaunch-jobs
Add relaunch to Jobs list and Job Details views

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 19:18:48 +00:00
AlanCoding
75bb7aae14 update references to collection folder 2019-09-27 14:29:04 -04:00
AlanCoding
98619c5e23 rename awx modules folder to collection 2019-09-27 14:29:04 -04:00
AlanCoding
35afa37417 Rename to collection, add license, galaxy build 2019-09-27 14:29:03 -04:00
AlanCoding
2f0f692f4a Integrate Ansible core tower modules content into AWX
This commit includes all the changes involved in
converting the old Ansible Tower modules from commits
in Ansible core into the AWX collection that replaces it.
Also includes work needed to integrate it into the
AWX processes like tests, docs, and the Makefile.

Apply changes from content_collector tool

Add integrated module tests
  operate via run_module fixture
  add makefile target for them

Add flake8 target and fix flake8 errors

Update README

Make consolidated target for testing modules
2019-09-27 14:29:03 -04:00
AlanCoding
5271c993ac Move commit for migration of Ansible core tower modules 2019-09-27 14:29:03 -04:00
AlanCoding
38112bae22 Use to_native for error messages, fix docs typo 2019-09-27 14:29:03 -04:00
Andrey Klychkov
30a6efdb93 fix typos in web_infrastructure modules (#62202) 2019-09-27 14:29:02 -04:00
Alan Rominger
bffc1bfdd4 Allow tower inventory plugin to accept integer inventory_id (#61338) 2019-09-27 14:29:02 -04:00
Alicia Cozine
a7bf31d423 clarifies how ASK works for Tower credentials (#59050)
* clarifies how ASK works for credentials
2019-09-27 14:29:02 -04:00
Pilou
1ae1011ccb tower_role: ensure alias of "validate_certs" parameter is handled (#57518)
* tower_role: ensure alias of validate_certs is handled

* tower modules: remove tower_verify_ssl alias too

Error was:

    Failed to update role: The Tower server claims it was sent a bad request.
    GET https://tower/api/v2/projects/22/object_roles/
    Params: [('tower_verify_ssl', False), ('role_field', 'admin_role')]
    Data: None
    Response: {"detail": "Role has no field named 'tower_verify_ssl'"}

Full traceback:

    File "/tmp/ansible_tower_role_payload_7_2p0X/__main__.py", line 145, in main
      result = role.grant(**params)
    File "/usr/local/lib/python2.7/dist-packages/tower_cli/resources/role.py", line 365, in grant
      return self.role_write(fail_on_found=fail_on_found, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/tower_cli/resources/role.py", line 242, in role_write
      fail_on_multiple_results=True, **data)
    File "/usr/local/lib/python2.7/dist-packages/tower_cli/models/base.py", line 301, in read
      r = client.get(url, params=params)
    File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 546, in get
      return self.request('GET', url, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/tower_cli/api.py", line 299, in request
      kwargs.get('data', None), r.content.decode('utf8'))
2019-09-27 14:29:02 -04:00
Alberto Murillo
83183cd7ce tower_workflow_template: Add missing options (#56891)
The following options can be set on a workflow template but the
functionallity to do so from this module was missing.

inventory
ask_variables_on_launch
ask_inventory_on_launch

Fixes #49728

Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
2019-09-27 14:29:02 -04:00
Pilou
934d7d62ef fix tower_credential example: lookup are run on controller (#57516)
- don't advise using lookups to fetch paths on managed nodes
- fix example:
    - use slurp instead of the 'file' lookup
    - remove extraneous brackets
2019-09-27 14:29:01 -04:00
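The corrected pattern from this commit can be sketched as follows (the credential names and paths are illustrative, not taken from the actual diff): because lookups execute on the controller, a key file that lives on a managed node has to be fetched with slurp first.

```yaml
# Hypothetical sketch: lookups run on the controller, so a key that
# lives on a managed node must be read with slurp, not a 'file' lookup.
- name: Read the private key from the managed node
  slurp:
    src: /home/user/.ssh/id_rsa
  register: key_file

- name: Create the credential from the slurped, base64-decoded contents
  tower_credential:
    name: remote-host-cred
    organization: Default
    kind: ssh
    ssh_key_data: "{{ key_file.content | b64decode }}"
```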
Hideki Saito
5dce6258e6 tower_user: Fix to create a system auditor properly (#54585)
- Fixed issue #54446: tower_user cannot create an auditor user

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-09-27 14:29:01 -04:00
Pilou
87f6065a05 tower_credential: ssh_key_data isn't a path anymore (#57113) 2019-09-27 14:29:01 -04:00
stoned
4ca0d8c72a Add missing roles to tower_role module (#56182)
* Add missing roles to tower_role module

* Placate 'ansible-test sanity --test pep8'
2019-09-27 14:29:01 -04:00
Hideki Saito
ba8bd25da2 Fixed wrong variable specification format in examples (#55252)
Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-09-27 14:29:00 -04:00
Hideki Saito
df0bd0797c Fix handling of inventory and credential options for tower_job_launch (#54967)
- Fixed issue #25017,#37567
- Add example for prompt on launch
- Add integration test for prompt on launch

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-09-27 14:29:00 -04:00
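A minimal sketch of the prompt-on-launch usage this commit documents (the template, inventory, and credential names are hypothetical): values supplied here satisfy a job template configured to prompt for them at launch.

```yaml
# Hypothetical sketch: pass inventory and credential at launch time for
# a job template that has "prompt on launch" enabled for those fields.
- name: Launch a job template that prompts on launch
  tower_job_launch:
    job_template: my-job-template
    inventory: staging-inventory
    credential: machine-credential
  register: job
```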
James Cassell
3aa7ee8d17 standardize TLS connection properties (#54315)
* openstack: standardize tls params

* tower: tower_verify_ssl->validate_certs

* docker: use standard tls config params

- cacert_path -> ca_cert
- cert_path -> client_cert
- key_path -> client_key
- tls_verify -> validate_certs

* k8s: standardize tls connection params

- verify_ssl -> validate_certs
- ssl_ca_cert -> ca_cert
- cert_file -> client_cert
- key_file -> client_key

* ingate: verify_ssl -> validate_certs

* manageiq: standardize tls params

- verify_ssl -> validate_certs
- ca_bundle_path -> ca_cert

* mysql: standardize tls params

- ssl_ca -> ca_cert
- ssl_cert -> client_cert
- ssl_key -> client_key

* nios: ssl_verify -> validate_certs

* postgresql: ssl_rootcert -> ca_cert

* rabbitmq: standardize tls params

- cacert -> ca_cert
- cert -> client_cert
- key -> client_key

* rackspace: verify_ssl -> validate_certs

* vca: verify_certs -> validate_certs

* kubevirt_cdi_upload: upload_host_verify_ssl -> upload_host_validate_certs

* lxd: standardize tls params

- key_file -> client_key
- cert_file -> client_cert

* get_certificate: ca_certs -> ca_cert

* get_certificate.py: clarify one or more certs in a file

Co-Authored-By: jamescassell <code@james.cassell.me>

* zabbix: tls_issuer -> ca_cert

* bigip_device_auth_ldap: standardize tls params

- ssl_check_peer -> validate_certs
- ssl_client_cert -> client_cert
- ssl_client_key -> client_key
- ssl_ca_cert -> ca_cert

* vdirect: vdirect_validate_certs -> validate_certs

* mqtt: standardize tls params

- ca_certs -> ca_cert
- certfile -> client_cert
- keyfile -> client_key

* pulp_repo: standardize tls params

remove `importer_ssl` prefix

* rhn_register: sslcacert -> ca_cert

* yum_repository: standardize tls params

The fix for yum_repository is not straightforward since this module is
only a thin wrapper for the underlying commands and config.  In this
case, we add the new values as aliases, keeping the old as primary,
only due to the internal structure of the module.

Aliases added:
- sslcacert -> ca_cert
- sslclientcert -> client_cert
- sslclientkey -> client_key
- sslverify -> validate_certs

* gitlab_hook: enable_ssl_verification -> hook_validate_certs

* Adjust arguments for docker_swarm inventory plugin.

* foreman callback: standardize tls params

- ssl_cert -> client_cert
- ssl_key -> client_key

* grafana_annotations: validate_grafana_certs -> validate_certs

* nrdp callback: validate_nrdp_certs -> validate_certs

* kubectl connection: standardize tls params

- kubectl_cert_file -> client_cert
- kubectl_key_file -> client_key
- kubectl_ssl_ca_cert -> ca_cert
- kubectl_verify_ssl -> validate_certs

* oc connection: standardize tls params

- oc_cert_file -> client_cert
- oc_key_file -> client_key
- oc_ssl_ca_cert -> ca_cert
- oc_verify_ssl -> validate_certs

* psrp connection: cert_trust_path -> ca_cert

TODO: cert_validation -> validate_certs (multi-valued vs bool)

* k8s inventory: standardize tls params

- cert_file -> client_cert
- key_file -> client_key
- ca_cert -> ca_cert
- verify_ssl -> validate_certs

* openshift inventory: standardize tls params

- cert_file -> client_cert
- key_file -> client_key
- ca_cert -> ca_cert
- verify_ssl -> validate_certs

* tower inventory: verify_ssl -> validate_certs

* hashi_vault lookup: cacert -> ca_cert

* k8s lookup: standardize tls params

- cert_file -> client_cert
- key_file -> client_key
- ca_cert -> ca_cert
- verify_ssl -> validate_certs

* laps_password lookup: cacert_file -> ca_cert

* changelog for TLS parameter standardization
2019-09-27 14:29:00 -04:00
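For the tower modules specifically, the rename boils down to the following usage change (a sketch; the project name and URL are illustrative):

```yaml
# Hypothetical sketch of the tower_verify_ssl -> validate_certs rename.
- name: Create a project, skipping certificate verification (old spelling)
  tower_project:
    name: demo-project
    organization: Default
    scm_type: git
    scm_url: https://example.com/demo.git
    tower_verify_ssl: false   # deprecated alias

- name: Same call with the standardized parameter name
  tower_project:
    name: demo-project
    organization: Default
    scm_type: git
    scm_url: https://example.com/demo.git
    validate_certs: false
```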
Pilou
aeaab41120 tower_settings: "get" isn't implemented, "value" parameter is required (#54028)
* tower_settings doc: 'get' isn't implemented

* tower_settings: fix typo in argument_spec
2019-09-27 14:29:00 -04:00
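Since "get" is not implemented and "value" is required, usage is set-only; a hedged sketch (the setting name and value are illustrative):

```yaml
# Hypothetical sketch: tower_settings only writes values; both name and
# value must be supplied.
- name: Set a Tower configuration value
  tower_settings:
    name: AWX_PROOT_BASE_PATH
    value: /opt/tmp
```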
Abhijeet Kasurde
1eb61ba5ce tower_credential: Add parameter vault_id (#53400)
vault_id allows the user to specify a vault identifier as per the Tower UI.

Fixes: #45644

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2019-09-27 14:28:59 -04:00
Alan Rominger
91d0c47120 Add option for tower inventory to give general metadata (#52747) 2019-09-27 14:28:59 -04:00
Abhijeet Kasurde
b96b69360f tower: Handle AuthError (#53377)
Handle AuthError raised when user provides incorrect password
for Tower admin user.

Fixes: #50535

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2019-09-27 14:28:59 -04:00
David Medberry
b034295c99 Update tower_credential.py (#51469)
Fixed a misspelled word and poor capitalization (in the same sentence of the doc string)

+label: docsite_pr
2019-09-27 14:28:59 -04:00
Pilou
c6e47a0a16 tower modules: check that 'verify_ssl' defined in ~/.tower_cli.cfg isn't ignored (#50687)
* Check that verify_ssl defined in tower_cli.cfg isn't ignored

* Avoid overriding the verify_ssl value defined in tower_cli.cfg

By default, the tower-cli library verifies SSL certificates. But a
verify_ssl false value defined in config files read by default by the
tower-cli library (for example /etc/tower/tower_cli.cfg) was ignored,
because it was overridden by the tower_verify_ssl parameter's default value.

* fix a typo in comment
2019-09-27 14:28:58 -04:00
Jordan Borean
ca782a495d Final round of moving modules to new import error msg (#51852)
* Final round of moving modules to new import error msg

* readd URL to jenkins install guide

* fix unit tests
2019-09-27 14:28:58 -04:00
Dennis Lerch
71c625bd83 add diff_mode_enabled to documentation
option 'diff_mode_enabled' is not mentioned in documentation

+label: docsite_pr
2019-09-27 14:28:58 -04:00
Samuel Carpentier
5c2bc09f9d New module: tower_notification (#50512)
* New module: tower_notification

* Fix CI check failures

* Add integration tests and extend examples

* Add missing required field for deletion tests and examples

* Add missing required field for deletion tests and examples

* Set port type to int

* Add missing field for Slack notification

* Add missing field types for IRC notification

* Update module documentation

* Correct field name and type for IRC notification

* Uniformize 'targets' field

* Uniformize 'targets' field
2019-09-27 14:28:57 -04:00
jainnikhil30
58c4ae6a00 Add scm_update_cache_timeout, job_timeout and custom_virtualenv to tower_project (#51330)
* adding scm_update_cache_timeout and job_timeout to tower_project module

* add support for cache_timeout, job_timeout and custom_venv for tower_project module

* add the version_added
2019-09-27 14:28:57 -04:00
John Westcott IV
37509af868 Adding TowerCLI receive module (#51023) 2019-09-27 14:28:57 -04:00
John Westcott IV
41f649b5a2 Adding tower_workflow_launch (#42701) 2019-09-27 14:28:57 -04:00
John Westcott IV
5ef995cd7d Adding TowerCLI send module (#37843) 2019-09-27 14:28:56 -04:00
jainnikhil30
a32981242b Fixing exception import for tower modules (#50447)
* fixing the exception import from tower modules

* Adding tests for checking tower modules are failing with correct msg

* fixed failing tests

* fixed failing test in tower_team
2019-09-27 14:28:56 -04:00
Alicia Cozine
c821996051 fix docs for tower modules (#50710) 2019-09-27 14:28:56 -04:00
Andrea Tartaglia
a37a18c0bf Added organization in the scm_credential get (#49884)
* Added organization in the scm_credential get

* Fallback looking for cred in project org

* Tests project with multi org credential

* Fixed CI issue

* Added changelog fragment
2019-09-27 14:28:56 -04:00
Dag Wieers
b11374157d Convert to reduced list of known types (#50010) 2019-09-27 14:28:55 -04:00
Abhijeet Kasurde
2d6743635e E325 Removal - Part II (#49196)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2019-09-27 14:28:55 -04:00
seag-rvc
590341ed7d Update tower_host.py (#49186)
<!--- Your description here -->
Current example does not show how to declare variables
+label: docsite_pr
2019-09-27 14:28:55 -04:00
Hugh Ma
ae980b9a82 Add survey_spec parameter to module. (#48182)
* Add survey_spec parameter to module.
Fixes #48011

* Removed trailing white space. Added integration test.
2019-09-27 14:28:55 -04:00
Sean Cavanaugh
94b557d8aa adding additional example and clarification (#47224)
The kind: parameter is missing from the original example (which won't actually work). The organization needs to be listed as required (if you don't list it, you will get an error). Also adding an additional example using the tower_username and tower_password method.
2019-09-27 14:28:54 -04:00
Sloane Hertel
5dc9ca222f add more consistent extension matching for inventory plugins (#46786)
* Add consistent extension matching for inventory plugins that support YAML configuration files

* Document extension matching expectations
2019-09-27 14:28:54 -04:00
Yanis Guenane
79166209ee inventory/tower: authors -> author so docs can pick it up (#45936) 2019-09-27 14:28:54 -04:00
Dag Wieers
45a8992254 Docs: Avoid use of 'default: null' (#45795)
Various modules document the default 'null' value, but it causes None to
be shown in the documentation explicitly.
2019-09-27 14:28:54 -04:00
Meecr0b
def043c383 tower_credential: expect ssh_key_data to be a string instead of path (#45158)
* expect ssh_key_data to be a string instead of path

ssh_key_data should be a string filled with the private key
the old behavior can be achieved with a lookup

Fixes #45119

* clarifies ssh_key_data description, adds newline
2019-09-27 14:28:53 -04:00
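A sketch of the usage after this change (the credential name and key path are hypothetical): ssh_key_data now takes the key contents themselves, and the old path-based behavior is recovered with a lookup on the controller.

```yaml
# Hypothetical sketch: pass the key text itself, not a path. The lookup
# runs on the controller and inlines the file contents.
- name: Create a machine credential from key contents
  tower_credential:
    name: my-machine-cred
    organization: Default
    kind: ssh
    ssh_key_data: "{{ lookup('file', '/home/user/.ssh/id_rsa') }}"
```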
Adrien Fleury
7b6dc13078 New module : tower_credential_type (#37243)
* Add new module *tower_credential_type*

* Add support for credential type creation and/or deletion
* Add test coverage for tower_credential_type
2019-09-27 14:28:53 -04:00
Adrien Fleury
ee2709a898 New module tower_workflow_template. (#37520)
* Add new module *tower_workflow_template*

Manage Tower workflows and their schemas.
2019-09-27 14:28:53 -04:00
jainnikhil30
d9d56b4b50 Add 'tower_settings' module for managing Ansible Tower Settings (#43933)
* add the tower_settings module
2019-09-27 14:28:52 -04:00
Yunfan Zhang
7580f9c2b9 Added Ansible Tower inventory plugin. (#41816)
Signed-off-by: Yunfan Zhang <yz322@duke.edu>
2019-09-27 14:28:52 -04:00
curry9999
d4f4983a89 To improve readability, we added a line feed. (#43801)
* 	modified:   google/gcp_compute_backend_bucket.py
	modified:   google/gcp_compute_backend_service.py
	modified:   google/gcp_compute_forwarding_rule.py
	modified:   google/gcp_compute_global_forwarding_rule.py
	modified:   google/gcp_compute_image.py
	modified:   google/gcp_compute_instance.py
	modified:   google/gcp_compute_instance_group.py
	modified:   google/gcp_compute_instance_group_manager.py
	modified:   google/gcp_compute_instance_template.py
	modified:   google/gcp_compute_route.py
	modified:   google/gcp_compute_subnetwork.py
	modified:   google/gcp_compute_target_http_proxy.py
	modified:   google/gcp_compute_target_https_proxy.py
	modified:   google/gcp_compute_target_ssl_proxy.py
	modified:   google/gcp_compute_target_tcp_proxy.py
	modified:   google/gcp_compute_url_map.py
	modified:   google/gcp_container_node_pool.py
	modified:   google/gcp_dns_resource_record_set.py
	modified:   google/gcp_pubsub_subscription.py
	modified:   google/gcp_storage_bucket_access_control.py

* 	modified:   lib/ansible/modules/cloud/amazon/aws_ses_identity.py
	modified:   lib/ansible/modules/cloud/amazon/route53_facts.py
	modified:   lib/ansible/modules/cloud/cloudscale/cloudscale_server.py
	modified:   lib/ansible/modules/network/aos/_aos_logical_device.py
	modified:   lib/ansible/modules/network/aos/_aos_rack_type.py
	modified:   lib/ansible/modules/network/aos/_aos_template.py
	modified:   lib/ansible/modules/network/cumulus/nclu.py
	modified:   lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py
	modified:   lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_wait.py
2019-09-27 14:28:52 -04:00
Will Thames
24cedcc560 Allow tower_inventory_sources params to be False (#43749)
Ensure that params set to False are respected, rather than ignored.
2019-09-27 14:28:52 -04:00
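A sketch of what the fix guarantees (the source names are illustrative): an explicit false is honored and sent to Tower rather than silently dropped as "unset".

```yaml
# Hypothetical sketch: update_on_launch: false must be respected, not
# discarded by a truthiness check on the module parameters.
- name: Configure an SCM inventory source with updates disabled
  tower_inventory_source:
    name: my-source
    inventory: my-inventory
    source: scm
    update_on_launch: false
```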
Pilou
64014faf02 Tower modules: move HAS_TOWER_CLI check in module_utils and minor improvements (#39809)
* tower_* modules: move HAS_TOWER_CLI in TowerModule

This change also allows defining other common parameters such as
mutually_exclusive.

* tower_*: config file can not be used with auth params

* tower module_utils: remove useless call to expanduser

'path' type: expanduser & expandvars are automatically called
2019-09-27 14:28:51 -04:00
David Moreau Simard
9b228d7d2d Properly detect credentials for tower_project
It seemed like it was mostly the wrong variables being looked at, making
it so a git repository could not be created without a credential.
2019-09-27 14:28:51 -04:00
Pierre-Louis Bonicoli
1349449e1e tower_project: manual projects don't require creds 2019-09-27 14:28:51 -04:00
Abhijeet Kasurde
185e9a09e0 Correct typo from 'Valut' to 'Vault' (#41574)
Correct typo from 'Valut' to 'Vault'

+label: docsite_pr
2019-09-27 14:28:51 -04:00
jaevans
623e0f7cc9 Add support for Tower Smart inventories (#41458)
* Support Smart Inventories

Add kind and host_filter fields and pass through to tower_cli.

* Add documentation for new Smart Inventories options

* Add missing description header for host_filter documentation

* Add version added tags to new options

* Bumped version_added to 2.7
2019-09-27 14:28:50 -04:00
Andrew J Huffman
8569bf71af Updating tower_job_template.py (#38821)
* Updating tower_job_template.py

* tower_job_template: Update parameter version_added to 2.7

* Ensure that unset credentials aren't passed

Passing empty strings for unset credentials causes ValueErrors as
the API expects an integer. Don't pass unset credentials
2019-09-27 14:28:50 -04:00
Adrien Fleury
6471481d02 Module: Tower inventory source module (#37110)
* tower_inventory_source: Add support for the inventory source via ansible-tower-cli.

* Add test coverage for tower_inventory_source.

* Update version_added to 2.7
2019-09-27 14:28:50 -04:00
Pierre Roux
139703aafb Fix tower_* modules **params kwargs (#40137)
* Add cleaning function to handle **params

The cleaning function is only added to tower modules which pass a `**params`
argument as an unpacked dictionary to the tower-cli method calls.

Fix #39745

* Remove previous code added only for tower_role

In 872a7b4, the `update_resources` function was modified so that it would clear unwanted
parameters. However, this behaviour is desired for other modules too, which are
modified in another commit (see tower_clean_params).
2019-09-27 14:28:50 -04:00
Ryan Petrello
65aeb2b68a add some Tower module integration tests (and fix a bug or two) (#37421)
* add additional test coverage for tower modules

* add test coverage for the tower_credential module

* add test coverage for the tower_user module

* fix a bug in py3 for tower_credential when ssh_key_data is specified

* add test coverage for tower_host, tower_label, and tower_project

* add test coverage for tower_inventory and tower_job_template

* add more test coverage for tower modules

- tower_job_launch
- tower_job_list
- tower_job_wait
- tower_job_cancel

* add a check mode/version assertion for tower module integration tests

* add test coverage for the tower_role module

* add test coverage for the tower_group module

* add more integration test edge cases for various tower modules

* give the job_wait module more time before failing

* randomize passwords in the tower_user and tower_group tests
2019-09-27 14:28:49 -04:00
Dag Wieers
0b26297177 Clean up module documentation (#36909)
* Clean up module documentation

This PR includes:
- Removal of `default: None` (and variations)
- Removal of `required: false`
- Fixing booleans and `type: bool` where required

* Fix remaining (new) validation issues
2019-09-27 14:28:49 -04:00
Pilou
3e8de196ac ansible_tower modules doc: fix typos, use formatting functions (#37414)
* fix typos

* use formatting functions

* use 'job template' instead of 'job_template'

* acronyms: use uppercase

* become_enabled param is about privilege escalation
2019-09-27 14:28:49 -04:00
Ryan Petrello
b14323c985 properly pass /api/v1/ credential fields for older Towers (#36917) 2019-09-27 14:28:48 -04:00
Ryan Petrello
2e04969f17 properly detect the absence of credential_type in older tower-cli (#36908) 2019-09-27 14:28:48 -04:00
Ryan Petrello
2edca4f357 tower cred: support credential kind/type for /api/v1/ and /api/v2/ (#36662)
older versions of Tower (3.1) don't have a concept of CredentialTypes
(this was introduced in Tower 3.2).  This change detects older versions
of pre-3.2 tower-cli that *only* support the deprecated `kind`
attribute.
2019-09-27 14:28:48 -04:00
Ryan Petrello
d192297987 tower cred: update kind options in documentation 2019-09-27 14:28:48 -04:00
Ryan Petrello
0bb7e06761 tower cred: filter user name lookup by the proper key 2019-09-27 14:28:47 -04:00
Ryan Petrello
4213ca1424 tower cred: implement credential /api/v1/ kind compatibility 2019-09-27 14:28:47 -04:00
Thierry Bouvet
3db3430129 Fix credentials for Tower API V2 2019-09-27 14:28:47 -04:00
Toshio Kuratomi
f7592f6ae7 Port arg specs from type='str' to type='path' 2019-09-27 14:28:47 -04:00
Pilou
ac82751dfa ansible_tower: fix broken import, reuse tower_argument_spec and documentation fragment (#29115)
* module_utils/ansible_tower: fix broken import

* tower_*: use tower_argument_spec & doc fragment

* tower doc fragment: Ansible requires Python 2.6+

* tower_job_wait: fix broken import (Py3 compat)
2019-09-27 14:28:46 -04:00
ethackal
f46db65bad Fixes verify_ssl option when False in ansible_tower module util (#30308)
* Fixes verify_ssl option when False in ansible_tower module util

* fixed comparison to None per PEP-8 standards
2019-09-27 14:28:46 -04:00
Toshio Kuratomi
d0ffc3f626 Update metadata to 1.1 2019-09-27 14:28:46 -04:00
Toshio Kuratomi
e9ec80d86b Remove wildcard imports
Made the following changes:

* Removed wildcard imports
* Replaced long form of GPL header with short form
* Removed get_exception usage
* Added from __future__ boilerplate
  * Adjust division operator to // where necessary

For the following files:

* web_infrastructure modules
* system modules
* linode, lxc, lxd, atomic, cloudscale, dimensiondata, ovh, packet,
  profitbricks, pubnub, smartos, softlayer, univention modules
* compat dirs (disabled as it's used intentionally)
2019-09-27 14:28:46 -04:00
Christopher Galtenberg
56723c3203 Improve help text for extra-vars requiring @ for filename
(cherry picked from commit 1b34de89ee1d75cb7f616b5a34cd5043bf7dfd2b)
2019-09-27 14:28:45 -04:00
Abhijeet Kasurde
fdbafe42ab Fix spelling mistakes (comments only) (#25564)
Original Author : klemens <ka7@github.com>

Taking over previous PR as per
https://github.com/ansible/ansible/pull/23644#issuecomment-307334525

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2019-09-27 14:28:45 -04:00
Bill Nottingham
7c0554bf7b Fix handling of extra_vars_path parameter. (#25272)
tower-cli process_extra_vars takes a list.
2019-09-27 14:28:45 -04:00
Dag Wieers
49df11d478 Collated PEP8 fixes (#25293)
- Make PEP8 compliant
2019-09-27 14:28:45 -04:00
James Labocki
5ef7003395 Fix indentation for register module in example (#25274) 2019-09-27 14:28:44 -04:00
Kevin Clark
640e528fdc adds privilege escalation method for pmrun(Unix Privilege Manager 6.0) 2019-09-27 14:28:44 -04:00
Lee Shakespeare
06e09550af Lookup credential id and pass in credential rather than scm_credential (#24624)
* Lookup credential id and pass in credential rather than scm_credential

* Change the exception handling to catch missing credentials

* Make error messages for not found lookups more useful
2019-09-27 14:28:44 -04:00
Lee Shakespeare
74c7c7b532 Tower user remove organization (#24544)
* Remove organization field from the tower_user module re: issue #24510

* Fix trailing spaces.

* Fixes for Shippable errors, pep8

* Remove a random inserted space.
2019-09-27 14:28:44 -04:00
Abhijeet Kasurde
62bc1a8662 Pep8 fixes for web_infra/ansible_tower (#24479)
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2019-09-27 14:28:43 -04:00
Andrea Tartaglia
9469bbc06f fixed RETURN docs for modules (#24011)
* fixed RETURN docs for remaining modules

* updated proxysql_mysql_users 'sample' to yaml dict

* fixed whitespace errors
2019-09-27 14:28:43 -04:00
Toshio Kuratomi
7cdde96c3c New metadata 1.0 (#22587)
Changes to the metadata format were approved here:
https://github.com/ansible/proposals/issues/54
* Update documentation to the new metadata format
* Changes to metadata-tool to account for new metadata
  * Add GPL license header
  * Add upgrade subcommand to upgrade metadata version
  * Change default metadata to the new format
  * Fix exclusion of non-modules from the metadata report
* Fix ansible-doc for new module metadata
* Exclude metadata version from ansible-doc output
* Fix website docs generation for the new metadata
* Update metadata schema in validate-modules test
* Update the metadata in all modules to the new version
2019-09-27 14:28:43 -04:00
John R Barker
bc2a63c415 Fix invalid fields in module DOCUMENTATION (#22297)
* fix module doc fields

* More module docs corrections

* More module docs corrections

* More module docs corrections

* More module docs corrections

* correct aliases

* Review comments

* Must quote ':'

* More authors

* Use suboptions:

* restore type: bool

* type should be in the same place

* More tidyups

* authors

* Use suboptions

* revert

* remove duplicate author

* More issues post rebase
2019-09-27 14:28:43 -04:00
Wayne Witzel III
9c6c9c3708 Ansible Tower job_wait module (#22160)
* Ansible Tower job_wait module

* clean up documentation and update code comment
2019-09-27 14:28:42 -04:00
Wayne Witzel III
6a2e3d2915 Ansible Tower job cancel module (#22161)
* Ansible Tower job cancel module

* fix interpreter line
2019-09-27 14:28:42 -04:00
Wayne Witzel III
85977be23c Ansible Tower job list module (#22164) 2019-09-27 14:28:42 -04:00
Wayne Witzel III
3855393cd3 Ansible Tower job_launch module (#22148)
* Ansible Tower job_launch module

* Added RETURN documentation and fixed import locations

* remove superfluous required attributes, make tags a list, and fix some typos

* only join tags if they are actually a list

* use isinstance instead of type, cleanup imports
2019-09-27 14:28:42 -04:00
Wayne Witzel III
bd6e5c2529 add Ansible Tower role module (#21592)
* add Ansible Tower role module

* remove owner as choice from role parameter
2019-09-27 14:28:41 -04:00
Wayne Witzel III
8a5914affd add Tower JobTemplate module (#21681)
* add Tower JobTemplate module

* add host_config_key and remove defaults from required parameters
2019-09-27 14:28:41 -04:00
Wayne Witzel III
979adfd16c add Ansible Tower team module (#21593) 2019-09-27 14:28:41 -04:00
John R Barker
5f2381e9ad Correct example 2019-09-27 14:28:41 -04:00
Wayne Witzel III
99027e4b30 Add Tower Project module (#21479) 2019-09-27 14:28:40 -04:00
Wayne Witzel III
150c0a9fdc Add Tower Group module (#21480) 2019-09-27 14:28:40 -04:00
Wayne Witzel III
2c825b792f Add Tower Host module (#21482) 2019-09-27 14:28:40 -04:00
Wayne Witzel III
2f9b0733bb Add Tower Inventory module (#21483) 2019-09-27 14:28:40 -04:00
Wayne Witzel III
962668389a Add Tower Label module (#21485) 2019-09-27 14:28:39 -04:00
Wayne Witzel III
0699e44b53 Ansible Tower user and credential module (#21020)
* rename tower config module parameters to avoid conflicts

* add Ansible Tower user module

* add Ansible Tower credential module

* remove errant hash from interpreter line

* friendlier error messages

* Update tower_verify_ssl defaults and module examples

* Update tower_verify_ssl default documentation

* Tower expects satellite6 not foreman
2019-09-27 14:28:39 -04:00
Matt Martz
788a2e5fc8 Update validate-modules (#20932)
* Update validate-modules

* Validates ANSIBLE_METADATA
* Ensures imports happen after documentation vars
* Some pep8 cleanup

* Clean up some left over unneeded code

* Update modules for new module guidelines and validate-modules checks

* Update imports for ec2_vpc_route_table and ec2_vpc_nat_gateway
2019-09-27 14:28:39 -04:00
Brian Coca
16e6b3f148 updated friendlier description 2019-09-27 14:28:39 -04:00
Wayne Witzel III
6140308675 Ansible Tower organization module (#20355)
* add Ansible Tower organization module

* skip Python 2.4 check for ansible_tower module

* make spec and doc match, extract tower auth helper method

* added auth params at module level

* support check mode

* extract check mode check to ansible_tower utils, add utils to 2.4 skip

* update interpreter shebang

* remove colon from docs

* no log for password, verify_ssl default to true
2019-09-27 14:28:35 -04:00
mabashian
518a25430d Adjust unit test after adding conditional render to modals in launch button component 2019-09-27 13:47:13 -04:00
Seth Foster
b6ffde75ef check expired sessions only if User exists
- Indent rest of code into the conditional
  that checks for expired sessions of that User
- If user doesn't exist, no need to check
2019-09-27 12:31:10 -04:00
softwarefactory-project-zuul[bot]
8d44ab55f1 Merge pull request #4837 from ryanpetrello/faster-dev-iso
lower the isolated poll interval in development

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 16:21:22 +00:00
Keith Grant
6e9804b713 Rework saving labels 2019-09-27 08:53:52 -07:00
Seth Foster
94fa745859 removed duplicate import of User 2019-09-27 11:47:44 -04:00
softwarefactory-project-zuul[bot]
eacc7b8eb0 Merge pull request #4827 from ryanpetrello/fix-docker-compose-cluster
fix broken `docker-compose-cluster` config

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 15:35:18 +00:00
Ryan Petrello
73c3b8849b lower the isolated poll interval in development
(waiting 30s is annoying)
2019-09-27 11:26:10 -04:00
mabashian
bb474b0797 Only render the launch modals if errors are present. This addresses a local unit test failure. 2019-09-27 11:05:23 -04:00
Seth Foster
c94ebba0b3 Saving user session checks if User exists
- Check that model User object exists with id=user_id
  before attempting to save to database
- UserSessionMembership saves to the database using
  foreign key, User
- However, User with matching id might not exist if
  browser sends request with stale cookies
- Change made in regards to issue #4334
2019-09-27 10:52:53 -04:00
mabashian
af90a78df5 Extends LaunchButton component to include support for relaunching. Adds relaunch button to jobs list and job detail view(s). 2019-09-27 10:37:46 -04:00
softwarefactory-project-zuul[bot]
de68de7f9a Merge pull request #4735 from AlanCoding/come_as_you_are
Copy git submodules as-is to avoid auth and path errors

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 13:49:16 +00:00
softwarefactory-project-zuul[bot]
04ba7aaf89 Merge pull request #4835 from ryanpetrello/plural-credentials
Update the "Credential" label on JT forms to say "Credentials"

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 13:16:50 +00:00
AlanCoding
454f76c066 Copy git submodules as-is to avoid auth and path errors 2019-09-27 09:03:47 -04:00
softwarefactory-project-zuul[bot]
9b281bbc8a Merge pull request #4829 from ryanpetrello/fix-4294
Fix error with rejoining node to cluster after lost connection to postgres

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 12:50:25 +00:00
Ryan Petrello
870b76dc59 Update the "Credential" label on JT forms to say "Credentials"
see: https://github.com/ansible/awx/issues/4831
2019-09-27 08:44:04 -04:00
Buymov Ivan
f2676064fd Fix error with rejoining node to cluster after lost connection to postgres 2019-09-27 01:17:27 -04:00
Ryan Petrello
4b62f4845a fix broken docker-compose-cluster config 2019-09-27 00:37:29 -04:00
softwarefactory-project-zuul[bot]
bc6edf7af3 Merge pull request #3583 from kumy/patch-1
Use variables to set rabbitmq host and port

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-27 01:18:20 +00:00
kumy
3dd69a06e7 Use variables to set rabbitmq host and port 2019-09-26 20:53:55 -04:00
softwarefactory-project-zuul[bot]
cb8c9567b0 Merge pull request #4807 from ryanpetrello/cred-plugins-with-promptable-passphrase
fix a bug that prevents launch-time passphrases w/ cred plugins

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 21:40:58 +00:00
Ryan Petrello
d30d51d72c fix a bug that prevents launch-time passphrases w/ cred plugins
with the advent of credential plugins there's no way for us to *actually
know* the RSA key value at the time the credential is _created_, because
the order of operations is:

1.  Create the credential with a specified passphrase
2.  Associate a new dynamic inventory source pointed at some third party
    provider (hashi, cyberark, etc...)

this commit removes the code that warns you about an extraneous
passphrase (if you don't specify a private key)

additionally, the code for determining whether or not a credential
_requires_ a password/phrase at launch time has been updated to test
private key validity based on the *actual* value from the third party
provider

see: https://github.com/ansible/awx/issues/4791
2019-09-26 17:14:25 -04:00
softwarefactory-project-zuul[bot]
693e588a25 Merge pull request #4659 from bcoca/nicer_error
better error message on missing runner

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 20:52:33 +00:00
softwarefactory-project-zuul[bot]
0f42782feb Merge pull request #4804 from AlexSCorey/4616-4766-JTAddBtn-ToolBarCheckBox
ToolBar checkbox checks, JT Add Button closes and Test Clean up

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 20:41:27 +00:00
Alex Corey
90cac2ec35 fix lint errors 2019-09-26 16:11:34 -04:00
softwarefactory-project-zuul[bot]
8343552dfc Merge pull request #4822 from jbradberry/team-grant-permissions-visibility
Change the visibility of the Grant Permission button on the team edit page

Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
             https://github.com/marshmalien
2019-09-26 20:02:56 +00:00
softwarefactory-project-zuul[bot]
c229e586da Merge pull request #4825 from jbradberry/prevent-notification-config-search
Prevent search on the NotificationTemplate.notification_configuration field

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 19:40:26 +00:00
softwarefactory-project-zuul[bot]
778b306208 Merge pull request #4824 from rooftopcellist/scl_in_containers
Add needed scl enables for community container installs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 19:40:21 +00:00
softwarefactory-project-zuul[bot]
415592219c Merge pull request #4823 from ryanpetrello/even-more-pendo
allow *.pendo.io as an img-src in our Content Security Policy

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 19:40:16 +00:00
softwarefactory-project-zuul[bot]
3a7756393e Merge pull request #4816 from marshmalien/add-missing-job-detail-fields
Add missing job detail fields

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-26 19:40:10 +00:00
Jeff Bradberry
1a7148dc80 Prevent search on the NotificationTemplate.notification_configuration field 2019-09-26 14:37:04 -04:00
Alex Corey
d42ffd7353 Removes unused fnc and unnecessary props adds dom node to Empty State
this.node was used for the add button for both the empty list and the list with data.
This was done to reduce complexity in handleAddToggle() and I don't think it
will cause bugs because those two elements are not rendered at the same time.
2019-09-26 13:32:00 -04:00
Christian Adams
9f8d975a19 revert to get needed scl enables for community container installs 2019-09-26 13:24:26 -04:00
Ryan Petrello
955bb4a44c allow *.pendo.io as an img-src in our Content Security Policy 2019-09-26 13:12:54 -04:00
Keith Grant
71511b66ac fix JobTemplate tests 2019-09-26 10:01:34 -07:00
Marliana Lara
e97fc54deb Add class to StatusIcon wrapper and fix merge conflicts 2019-09-26 11:50:38 -04:00
Jeff Bradberry
ee27313b42 Change the visibility of the Grant Permission button on the team edit page
In the case where MANAGE_ORGANIZATION_AUTH = False, an Org admin is
still supposed to have the capability of adding resource roles to a
team.  This was in fact still doable directly in the API, or via the
organization edit page.
2019-09-26 11:28:25 -04:00
Keith Grant
439727f1bd extract new LabelSelect component from JobTemplateForm 2019-09-25 15:18:30 -07:00
Marliana Lara
76325eefd3 Expand job detail tests to verify more fields
* Update job detail tests to use large mock job data source
* Move mock job data source into a shared file
* Update OrgAccess snapshot due to DetailList style change
2019-09-25 16:39:05 -04:00
Marliana Lara
4b8a06801c Add missing job detail fields and status icons 2019-09-25 16:35:29 -04:00
Alex Corey
38b506bb94 Removes isPlain prop from DropdownItem 2019-09-25 10:28:57 -04:00
Keith Grant
61f6e3c4d2 Refactor to PlaybookSelect component 2019-09-24 14:49:15 -07:00
softwarefactory-project-zuul[bot]
640e5391f3 Merge pull request #4783 from AlexSCorey/4222-JobResultsDelete
Adds delete button to job details and handle delete errors

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-24 21:48:22 +00:00
Alex Corey
1316ace475 Fixes translation omissions 2019-09-24 16:34:09 -04:00
softwarefactory-project-zuul[bot]
3282caf629 Merge pull request #4770 from AlexSCorey/Pluralization
Requires individual components to pluralize

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-24 15:55:11 +00:00
Alex Corey
b3b53a8ce4 Pluralized modal titles and empty state strings
All itemNames used in empty state messages, and delete modal titles
need to be plural and capitalized.  Because of this change we no
longer need the ucFirst() in utils/strings.js.
2019-09-24 11:21:54 -04:00
Alex Corey
8dd4379bf2 Adds proper translation, removes ucFirst() 2019-09-24 11:21:54 -04:00
Alex Corey
b79c686336 requires individual components to pluralize 2019-09-24 11:21:54 -04:00
softwarefactory-project-zuul[bot]
d08c601690 Merge pull request #4803 from ryanpetrello/more-pendo-fixes
correct CSP header to allow all pendo.io traffic

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-24 02:35:13 +00:00
softwarefactory-project-zuul[bot]
86f8d648cc Merge pull request #4787 from keithjgrant/qs-util-cleanup
QS util cleanup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-23 23:03:22 +00:00
softwarefactory-project-zuul[bot]
34bdb6d1c3 Merge pull request #4808 from rooftopcellist/awxkit_docs_link
add link to awxkit docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-23 20:08:47 +00:00
Christian Adams
9548c8ae19 add link to awxkit docs 2019-09-23 15:38:51 -04:00
softwarefactory-project-zuul[bot]
e147869d75 Merge pull request #4789 from wenottingham/lacking-in-insights
Rename Automation Insights to Automation Analytics.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-23 18:46:41 +00:00
Keith Grant
362339d89c add FieldTooltip component, some JobTemplateForm cleanup 2019-09-23 11:00:13 -07:00
softwarefactory-project-zuul[bot]
ebbcefd7df Merge pull request #4806 from beeankha/url_update
Update URL

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-23 17:42:01 +00:00
beeankha
6a9940c027 Update URL 2019-09-23 13:17:26 -04:00
softwarefactory-project-zuul[bot]
8c755dd316 Merge pull request #4795 from wenottingham/sanitization
Add some sanitizations

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-23 14:58:41 +00:00
softwarefactory-project-zuul[bot]
fcfd59ebe2 Merge pull request #4767 from beeankha/awx_doc_edits
[WIP] Edit AWX Docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-23 14:30:12 +00:00
Alex Corey
ce1d9793ce ToolBar checkbox checks and JT Add Button closes and Test Clean up
The Select-All checkbox in the DataList toolbar will be checked when the
user clicks on it. Also, the JT Add button closes when the user clicks
elsewhere on the page, as well as when the user clicks the button a second time.
I also cleaned up some tests in the DataListToolBar file.
2019-09-23 09:49:38 -04:00
beeankha
f35ad41e17 Fix misc. errors and typos 2019-09-23 09:46:54 -04:00
Ryan Petrello
d52aa11422 correct CSP header to allow all pendo.io traffic 2019-09-23 09:15:55 -04:00
softwarefactory-project-zuul[bot]
c628a54c79 Merge pull request #4796 from wenottingham/get-our-finest-attributes
Fix SAML login when only certain attributes are set.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-21 15:51:43 +00:00
Keith Grant
5d840af223 fix typo, clarify test names 2019-09-20 13:45:22 -07:00
Keith Grant
1f149bb086 refactor encodeNoneDefaultQueryString 2019-09-20 13:41:30 -07:00
Keith Grant
508535be66 rework removeParams 2019-09-20 13:41:30 -07:00
Keith Grant
cb69cac62d split qs addParams into mergeParams and replaceParams for clarity 2019-09-20 13:41:30 -07:00
Keith Grant
3ea4a32940 qs cleanup, more tests 2019-09-20 13:41:30 -07:00
Keith Grant
d6f93737c4 improve qs test coverage, minor cleanup 2019-09-20 13:41:30 -07:00
Keith Grant
eb0c4fd4d4 refactor parseQueryString 2019-09-20 13:41:30 -07:00
Bill Nottingham
36571a1275 Fix SAML login when only certain attributes are set.
The user may not set all of saml_{attr,admin_attr,auditor_attr},
so don't assume they all exist.
2019-09-20 15:28:38 -04:00
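The fix above boils down to not assuming every SAML attribute mapping exists. A minimal sketch of that defensive pattern, with illustrative names rather than AWX's actual settings object:

```python
# Hypothetical SAML settings where only one of the three attribute
# mappings is configured; the admin and auditor mappings are unset.
saml_config = {"saml_attr": "username"}

def resolve_attrs(config):
    # .get() returns None for missing keys instead of raising KeyError,
    # so the login flow can simply skip mappings the user never set.
    return {
        key: config.get(key)
        for key in ("saml_attr", "saml_admin_attr", "saml_auditor_attr")
    }

attrs = resolve_attrs(saml_config)
```

With this shape, downstream code checks `if attrs["saml_admin_attr"] is not None:` before using a mapping, instead of crashing on a partially configured login.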
softwarefactory-project-zuul[bot]
b2b475d1a6 Merge pull request #4742 from jlmitch5/searchv2
ui_next search updates

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-20 18:28:52 +00:00
Bill Nottingham
01177f3632 Add some sanitizations (from @jakemcdermott, @mabashian) 2019-09-20 13:58:12 -04:00
John Mitchell
ac530e1328 utilize far instead of function for search type 2019-09-20 13:49:00 -04:00
John Mitchell
9605d8049d rewrite search key type check to be a var instead of a function 2019-09-20 13:49:00 -04:00
John Mitchell
d2e335c7c5 revert restriction on defaultParams 2019-09-20 13:49:00 -04:00
John Mitchell
678fba1ffb fix tests with search updates 2019-09-20 13:48:59 -04:00
John Mitchell
34e1b8be1d make default params only accept page, page_size and order_by 2019-09-20 13:48:59 -04:00
John Mitchell
86029934ad selectively add icontains to only text-based searches 2019-09-20 13:48:59 -04:00
softwarefactory-project-zuul[bot]
ec3edb07e8 Merge pull request #4779 from omgjlk/proper-vault-headers
Use proper headers to auth with Vault

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-20 16:30:56 +00:00
softwarefactory-project-zuul[bot]
fcc61a5752 Merge pull request #4777 from jlmitch5/styleCleanupUiNext
assorted style cleanup in UI next

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-20 16:00:27 +00:00
beeankha
860715d088 More AWX docs edits 2019-09-20 11:32:10 -04:00
beeankha
e2be392f31 Edit AWX docs 2019-09-20 11:32:10 -04:00
mabashian
f49d566f17 Update failed snapshot 2019-09-20 11:05:18 -04:00
mabashian
83e413b0bf Fix prettier failures 2019-09-20 11:05:18 -04:00
John Mitchell
e68349b6b5 assorted style cleanup in UI next
- round all corners of combo fields
- make sure required asterisk is always before help popover ?
- bug: fix ? popover from opening lookups
- fix spacing of detail toggle for http error
2019-09-20 11:05:18 -04:00
softwarefactory-project-zuul[bot]
14cc203945 Merge pull request #4784 from fosterseth/fix-3646-ldapserverfielduri
fix for LDAPServerURIField error if number present in top-level-domain

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-20 15:00:46 +00:00
Seth Foster
ca5de6378a Fix LDAPServerURIField number in domain
- Bug: API error if LDAPServerURIField contains a number in the top level domain
- Add custom regex in LDAPServerURIField class that is passed to django
  URLValidator
- The custom regex allows for numbers to be present in the top level domain
- Unit tests check that valid URIs pass through URLValidator, and that
  invalid URIs raise the correct exception
- Related to issue #3646
2019-09-20 10:36:47 -04:00
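The bug described above can be sketched with two simplified patterns (these are illustrations, not Django's actual URLValidator regex): a letters-only top-level-domain pattern rejects a host whose TLD contains a digit, while a patched pattern that also permits digits accepts it.

```python
import re

# Simplified TLD checks for illustration only.
letters_only_tld = re.compile(r"\.[a-z]{2,63}$")     # digit in TLD fails
digits_allowed_tld = re.compile(r"\.[a-z0-9]{2,63}$")  # digit in TLD passes

host = "ldap.example.c0m"  # hypothetical host with a digit in its TLD
rejected = letters_only_tld.search(host) is None
accepted = digits_allowed_tld.search(host) is not None
```

Passing a custom regex like the second one to Django's URLValidator is the approach the commit describes for letting such LDAP server URIs validate.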
Alex Corey
1ebe91cbf7 Adds delete button to job details and handle delete errors
After successful deletion of a job the user is navigated back to the Jobs list.
If the job is not successfully deleted, an alert modal pops up and, on close,
the user remains on the Job Details page.
2019-09-19 17:34:18 -04:00
softwarefactory-project-zuul[bot]
154cda7501 Merge pull request #4782 from mabashian/inventory-lookup
Changes InventoriesLookup to InventoryLookup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-19 21:27:50 +00:00
Bill Nottingham
badba581fd Rename Automation Insights to Automation Analytics.
Fix user-facing code, don't worry about settings names.
2019-09-19 17:00:10 -04:00
mabashian
a6a50f0eb1 Changes InventoriesLookup to InventoryLookup and removes pluralization of lookup headers for Inventory and Project since only one can be selected. 2019-09-19 11:13:55 -04:00
softwarefactory-project-zuul[bot]
b00fc29cdc Merge pull request #4762 from shanemcd/devel
Fix cluster dev env

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-19 13:45:43 +00:00
softwarefactory-project-zuul[bot]
13f628f73d Merge pull request #4776 from mabashian/3924-icons
Fix 3rd party auth icon rendering in login modal

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-18 23:39:25 +00:00
Jesse Keating
e1bdbeaa5c Restore new style headers
This leads to having both the new style header and the old compatibility
header. Best of both worlds!
2019-09-18 13:27:55 -07:00
Jesse Keating
b3c264bf21 Use proper headers to auth with Vault
Reading the examples at
https://learn.hashicorp.com/vault/getting-started/apis shows the need to
use the `X-Vault-Token` header instead of `Authorization`. Without this
header, the Vault server would return a 400 status with an error message
of "missing client token". With this change AWX is now able to interface
with the HashiCorp backend.
2019-09-18 12:26:47 -07:00
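A minimal sketch of the header change the commit describes, using only the standard library; the URL and token are hypothetical placeholders and no request is actually sent.

```python
import urllib.request

def build_vault_request(url, token):
    # Vault expects the client token in the X-Vault-Token header; sending
    # it only via an Authorization header produces the 400 "missing client
    # token" error mentioned above.
    return urllib.request.Request(url, headers={"X-Vault-Token": token})

# Hypothetical values for illustration only.
req = build_vault_request(
    "https://vault.example.com/v1/secret/data/awx", "s.hypothetical-token"
)
```

`urllib.request` normalizes header names to capitalized form internally, but the header is sent as given, which is what the Vault server checks.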
mabashian
53992d41d5 Re-generate fontcustom .woff file. This should fix microsoft and google icon rendering on login modal. 2019-09-18 13:50:15 -04:00
softwarefactory-project-zuul[bot]
686d4fe26f Merge pull request #4769 from mabashian/4554-mgt-jobs-column
Fix management job column width issues

Reviewed-by: Michael Abashian
             https://github.com/mabashian
2019-09-17 18:35:47 +00:00
mabashian
774a4e32cc Fix management job column width issues 2019-09-17 13:45:20 -04:00
Shane McDonald
22441d280e Fix pg password in cluster dev env 2019-09-17 08:45:19 -04:00
softwarefactory-project-zuul[bot]
fe850c247f Merge pull request #4755 from rooftopcellist/fix_awx_pass
pass correct awx-dev password on startup

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 23:28:01 +00:00
softwarefactory-project-zuul[bot]
c651029cdb Merge pull request #4756 from AlexSCorey/4744-JobSchedulers
fixes grid so action buttons stay in view

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 22:06:58 +00:00
Christian Adams
54d50d71ab pass correct awx-dev password on startup 2019-09-16 17:25:15 -04:00
Alex Corey
ef4f1df9bb fixes grid so action buttons stay in view 2019-09-16 16:27:10 -04:00
softwarefactory-project-zuul[bot]
b5fa1606bd Merge pull request #4369 from AlanCoding/workflow_limit
Implementation of WFJT limit & SCM_branch prompting

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 20:17:59 +00:00
softwarefactory-project-zuul[bot]
3139bc9248 Merge pull request #4723 from rebeccahhh/devel
expose schedule name for scheduled workflow node job

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 20:17:54 +00:00
softwarefactory-project-zuul[bot]
a6c0793695 Merge pull request #4753 from AlexSCorey/4423-LoggingRevertNoSave
allows user to save form after reverting

Reviewed-by: Alex Corey <Alex.swansboro@gmail.com>
             https://github.com/AlexSCorey
2019-09-16 19:26:27 +00:00
softwarefactory-project-zuul[bot]
5a24e223b7 Merge pull request #4743 from AlexSCorey/4523-InertWFJTLink
allows all WFJT tabs to properly route to workflow visualizer

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 19:26:20 +00:00
AlanCoding
e3c1189f56 bump migration 2019-09-16 14:51:57 -04:00
AlanCoding
fdf9dd733b bump migration 2019-09-16 14:51:56 -04:00
Jake McDermott
9697e1befb comment fixup 2019-09-16 14:51:56 -04:00
Jake McDermott
84a8559ea0 psuedo -> pseudo 2019-09-16 14:51:56 -04:00
Jake McDermott
8ac8fb9016 add more details to workflow limit help text 2019-09-16 14:51:56 -04:00
AlanCoding
01bb32ebb0 Deal with limit prompting in factory 2019-09-16 14:51:56 -04:00
AlanCoding
711c240baf Consistently give WJ extra_vars as text 2019-09-16 14:51:56 -04:00
AlanCoding
291528d823 adjust UI unit tests again
bump migration

bump migration again
2019-09-16 14:51:56 -04:00
AlanCoding
1406ea3026 Fix missing places for ask_limit and ask_scm_branch 2019-09-16 14:51:55 -04:00
AlanCoding
e8581f6892 Implement WFJT prompting for limit & scm_branch
add feature to UI and awxkit

restructure some details of create_unified_job
  for workflows to allow use of char_prompts
  hidden field
avoid conflict with sliced jobs in char_prompts copy logic

update developer docs

update migration reference

bump migration
2019-09-16 14:51:53 -04:00
Rebeccah
fbf182de28 removed reference to workflow job in parent schedule naming as it is unnecessary 2019-09-16 14:22:37 -04:00
Rebeccah
5fab9e418b added functional test to test new schedule functionality 2019-09-16 14:22:37 -04:00
Rebeccah
d39b931377 added in logic to check if the parent workflow has a schedule and consolidated a couple of import statements 2019-09-16 14:22:37 -04:00
Alex Corey
9028afab07 allows user to save form after reverting 2019-09-16 12:55:31 -04:00
softwarefactory-project-zuul[bot]
d3b413c125 Merge pull request #4752 from shanemcd/drop-pg-scl
Stop using PG SCL in dev env

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 16:40:11 +00:00
softwarefactory-project-zuul[bot]
ab1db04164 Merge pull request #4734 from rooftopcellist/awx-dev
Revert DB passwords for container installs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-16 16:09:15 +00:00
Shane McDonald
3b89e894db Stop using PG SCL in dev env 2019-09-16 11:41:13 -04:00
Christian Adams
bdbbb2a4a2 Fix authentication bug with container installs
- update awx-dev db password where needed
2019-09-15 19:52:41 -04:00
Alex Corey
9e8d0758c8 allows all WFJT tabs to properly route to workflow visualizer 2019-09-13 15:21:54 -04:00
softwarefactory-project-zuul[bot]
9f0657e19a Merge pull request #4714 from keithjgrant/4521-noti-template-type-switch
fix bugs when switching NTs to Webhook type

Reviewed-by: Keith Grant
             https://github.com/keithjgrant
2019-09-13 18:20:13 +00:00
softwarefactory-project-zuul[bot]
6bc09028ca Merge pull request #4741 from wenottingham/there-are-no-limits
Tweak license partial to properly show 'unlimited' for the value we consider unlimited.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-13 17:36:04 +00:00
softwarefactory-project-zuul[bot]
e1c7cd7e9f Merge pull request #4740 from ryanpetrello/pycharm-port
change the default port range for the sdb debugging tool

Reviewed-by: Seth Foster
             https://github.com/fosterseth
2019-09-13 17:32:15 +00:00
softwarefactory-project-zuul[bot]
806648af89 Merge pull request #4611 from ryanpetrello/license-updates
update trial license enforcement logic

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-13 17:27:01 +00:00
softwarefactory-project-zuul[bot]
3e3940efd5 Merge pull request #4733 from keithjgrant/4612-lookup-param-bugs
Prevent Lookup search filters from affecting other Lookups on page

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-13 17:01:18 +00:00
Ryan Petrello
662033db44 fix some awxkit flake8 failures 2019-09-13 12:14:33 -04:00
Elijah DeLee
eba69142f1 add subscriptions endpoint to awxkit 2019-09-13 12:14:33 -04:00
mabashian
8a29276a7d Fix whitespace 2019-09-13 12:14:32 -04:00
mabashian
611f163289 Make sure that the license page under settings has creds available 2019-09-13 12:14:31 -04:00
mabashian
113622c05e Moves initial system settings rest call out to resolve 2019-09-13 12:14:31 -04:00
mabashian
608567795d Add support for looking up and selecting licenses on license page 2019-09-13 12:14:30 -04:00
Ryan Petrello
29fd399b06 introduce a new API for generating licenses from candlepin creds 2019-09-13 12:14:29 -04:00
mabashian
04f458c007 Cleanup inline styles 2019-09-13 12:14:29 -04:00
mabashian
38a7d62745 Add support for custom error messaging on license page 2019-09-13 12:14:28 -04:00
mabashian
53925f5e98 Tweaks to the ux for saved creds 2019-09-13 12:14:27 -04:00
mabashian
2474a3a2ea Pull creds into license form if available via settings 2019-09-13 12:14:27 -04:00
mabashian
900fcbf87e Add rh username and pass to license form 2019-09-13 12:14:26 -04:00
Ryan Petrello
846e67ee6a update trial license enforcement logic 2019-09-13 12:14:25 -04:00
Bill Nottingham
98daee4823 Show unlimited licenses as unlimited.
Don't show 'hosts remaining' for unlimited licenses.
2019-09-13 12:07:35 -04:00
Ryan Petrello
5ed97e0f65 change the default port range for the sdb debugging tool
the current range conflicts with a port used by the PyCharm editor
2019-09-13 11:55:43 -04:00
Keith Grant
e854b179e4 add test for Lookup clearing QS params 2019-09-13 08:48:17 -07:00
Keith Grant
26e320582a namespace qs params for each lookup separately 2019-09-13 08:48:17 -07:00
Keith Grant
ee864b2df3 clear Lookup query params when lookup is closed 2019-09-13 08:48:17 -07:00
Keith Grant
e309ad25e4 de-lint 2019-09-13 08:45:19 -07:00
Keith Grant
55244834a3 fix bugs when switching NTs to Webhook type 2019-09-13 08:45:19 -07:00
softwarefactory-project-zuul[bot]
4e1fbb3e91 Merge pull request #4731 from ryanpetrello/job-event-contains-workflow
record the parent workflow ID for job events that originate from a WFJT

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-13 15:40:25 +00:00
softwarefactory-project-zuul[bot]
fe43bab174 Merge pull request #4727 from jakemcdermott/fix-4726
show extra vars on workflow template schedules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-13 14:33:44 +00:00
softwarefactory-project-zuul[bot]
ed692018cd Merge pull request #4732 from ryanpetrello/cli-migrating
cli: show a better error if AWX is migrating

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-13 13:50:14 +00:00
Jake McDermott
311860e027 show extra vars on workflow template schedules
When the workflow job template prompts for extra vars, we show the
extra vars input field on the scheduler.
2019-09-13 09:12:15 -04:00
Ryan Petrello
b5225bd80d record the parent WFJ ID for job events that originate from a WFJT 2019-09-13 08:28:32 -04:00
softwarefactory-project-zuul[bot]
60fc952716 Merge pull request #4730 from shanemcd/fix-entrypoint-args
Consolidate scl enable calls

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 23:10:47 +00:00
Ryan Petrello
9d93cf8021 cli: show a better error if AWX is migrating
see: https://github.com/ansible/awx/issues/4721
2019-09-12 17:16:16 -04:00
Shane McDonald
fe0db4e329 Consolidate scl enable calls 2019-09-12 15:43:09 -04:00
Shane McDonald
3c0a0e1f4a Fix arg parsing in entrypoint 2019-09-12 15:12:04 -04:00
softwarefactory-project-zuul[bot]
70057bc0f2 Merge pull request #4058 from rooftopcellist/pg10
Postgres 10 Upgrade

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 18:58:34 +00:00
Shane McDonald
b2e8e3cc3d Add scl enable back to start_tests script
I think we need to specify the entrypoint in the pod definition too
2019-09-12 14:03:08 -04:00
Shane McDonald
804ec0dde9 Handle SCL enable in one place 2019-09-12 13:54:32 -04:00
softwarefactory-project-zuul[bot]
3e6131c509 Merge pull request #4724 from dsesami/flake-fixes
flake fixes for e2e

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 17:48:27 +00:00
Shane McDonald
e7bb5ac3e4 Fix tests 2019-09-12 13:41:41 -04:00
Daniel Sami
f2ccce3478 flake fixes for e2e 2019-09-12 13:05:35 -04:00
Shane McDonald
036567817e Implement local docker pg upgrade 2019-09-12 12:52:43 -04:00
Christian Adams
ec1e93cc69 Upgrade to postgres 10.6
- use awx-python in shebang in dev env
  - scl enable where needed for rhel7 & container installs
  - use scram-sha-256 pg user hashing by default
  - ensure psycopg2 is using the correct PG_CONFIG at build time for the right libpq version
2019-09-12 12:52:43 -04:00
Jose OrPa
04ab736f09 #3778 Upgrading postgresql to v10 2019-09-12 12:52:42 -04:00
softwarefactory-project-zuul[bot]
ffc6e2218e Merge pull request #4627 from rooftopcellist/upload_directly
Upload directly

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 14:40:53 +00:00
softwarefactory-project-zuul[bot]
253e0765bd Merge pull request #4719 from ryanpetrello/runner-on-start
add metadata for new runner_on_start events

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 14:06:07 +00:00
softwarefactory-project-zuul[bot]
2d78534223 Merge pull request #4674 from AlanCoding/clean_galaxy_out
Use project verbosity without color for galaxy commands

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 13:54:27 +00:00
softwarefactory-project-zuul[bot]
34645523fd Merge pull request #4718 from Spredzy/proper-args
e2e/test: NPMRC_FILE is a build-arg not an environment variable

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 13:40:50 +00:00
Ryan Petrello
90c7514303 add metadata for new runner_on_start events
see: https://github.com/ansible/awx/issues/4129
2019-09-12 09:05:54 -04:00
Yanis Guenane
3c22f99234 e2e/test: NPMRC_FILE is a build-arg not an environment variable 2019-09-12 14:39:28 +02:00
softwarefactory-project-zuul[bot]
d8fbf1e21a Merge pull request #4706 from ryanpetrello/faster-dashboard
optimize dashboard performance for larger UnifiedJob counts

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 10:51:42 +00:00
softwarefactory-project-zuul[bot]
168e03ea0e Merge pull request #4715 from ansible/cli-more-readme
cli: add instructions for using awx -h

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-12 01:37:05 +00:00
Ryan Petrello
3eecda4edc cli: add instructions for using awx -h 2019-09-11 20:43:20 -04:00
Ryan Petrello
c0d9600b66 WIP: optimize dashboard performance for larger UnifiedJob counts 2019-09-11 20:37:27 -04:00
softwarefactory-project-zuul[bot]
087b68aa65 Merge pull request #4713 from ryanpetrello/cli-readme
add a bit more detail to the awx CLI README

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2019-09-11 21:12:45 +00:00
Ryan Petrello
83ee4fb289 add a bit more detail to the awx CLI README 2019-09-11 16:24:13 -04:00
softwarefactory-project-zuul[bot]
51a724451c Merge pull request #4677 from marshmalien/4430-host-details-modal
[ui_next] Add host details modal

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-11 18:09:52 +00:00
Christian Adams
6309c0a426 Upload using RH cred settings 2019-09-11 13:24:09 -04:00
softwarefactory-project-zuul[bot]
14ef06854d Merge pull request #4711 from Spredzy/pass-npmrc
e2e/cluster: Allow one to pass a npmrc config file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-11 15:49:22 +00:00
Marliana Lara
ad2e58cd43 Align the top of the modal to a fixed distance from the top of the
browser
2019-09-11 11:48:17 -04:00
Yanis Guenane
2ea280bbaf Dockerfile: Allow one to pass a npmrc file 2019-09-11 17:08:33 +02:00
softwarefactory-project-zuul[bot]
788c5d3741 Merge pull request #4709 from dsesami/flake-wait
Added a longer timeout for spinny specifically

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-11 14:39:05 +00:00
Yanis Guenane
0e5abb5fa3 Revert "add npm registry arg"
This reverts commit 844f3fde72.
2019-09-11 16:12:50 +02:00
Daniel Sami
5dbcafc392 Added a longer timeout for spinny specifically 2019-09-11 08:59:20 -04:00
Marliana Lara
e53c979344 Show changed status when the status is both ok and changed 2019-09-10 17:09:25 -04:00
Marliana Lara
2527a78874 Combine host and job status icons into one generic status icon 2019-09-10 17:06:53 -04:00
Marliana Lara
871f2cf9c5 Add host event modal tests and code cleanup 2019-09-10 16:29:25 -04:00
softwarefactory-project-zuul[bot]
f3dc4abe37 Merge pull request #4702 from AlexSCorey/4446-TokenDelete
Tokens Properly reload after successful deletion

Reviewed-by: Alex Corey <Alex.swansboro@gmail.com>
             https://github.com/AlexSCorey
2019-09-10 20:28:35 +00:00
softwarefactory-project-zuul[bot]
7646185e2c Merge pull request #4704 from keithjgrant/4522-save-nt-after-fixing-form-errors
Fix Notification Template form validation of headers field

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 19:56:33 +00:00
softwarefactory-project-zuul[bot]
a9a1c6eb6d Merge pull request #4683 from dsesami/npm-registry-arg
Add npm registry arg

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 19:56:25 +00:00
softwarefactory-project-zuul[bot]
4b1706401f Merge pull request #4686 from mabashian/ui_next-ux-improvements
Various UX improvements

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 19:44:22 +00:00
Keith Grant
8192b79b1f fix NT validation of headers after saving 2019-09-10 10:45:19 -07:00
softwarefactory-project-zuul[bot]
8868fa6416 Merge pull request #4689 from mabashian/search-tag-key
Display search key along with value in tag

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 17:28:03 +00:00
softwarefactory-project-zuul[bot]
71215d3d03 Merge pull request #4690 from mabashian/npm-audit
Upgrades lodash and angular-moment

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2019-09-10 17:20:05 +00:00
softwarefactory-project-zuul[bot]
5193209ca7 Merge pull request #4699 from mabashian/4549-not-found
Don't render the 404 not found page until the content's finished loading

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 17:14:32 +00:00
softwarefactory-project-zuul[bot]
d73dd02e5b Merge pull request #4628 from saito-hideki/issue/4528
Fix condition to show prompt button on the schedule add/edit form

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 16:59:03 +00:00
Alex Corey
e23c1477da Properly reloads after successful deletion 2019-09-10 12:13:41 -04:00
softwarefactory-project-zuul[bot]
19d6941034 Merge pull request #4697 from ryanpetrello/cli-human-uniqueness
cli: fix a minor bug in uniqueness rule detection

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 15:01:34 +00:00
mabashian
d4b2cacb3e Don't render the 404 not found page until the content's finished loading 2019-09-10 10:30:17 -04:00
softwarefactory-project-zuul[bot]
2f7476c804 Merge pull request #4692 from ryanpetrello/cli-detail-options
cli: make "detail" actions actually respect Allow: headers

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 14:18:12 +00:00
Ryan Petrello
39ee60a913 cli: fix a minor bug in uniqueness rule detection 2019-09-10 10:17:09 -04:00
mabashian
da43b9b84c Prettify FilterTags 2019-09-10 10:08:42 -04:00
softwarefactory-project-zuul[bot]
eaf3a28d57 Merge pull request #4695 from ryanpetrello/oauth-token-exclusive-lock
move the oauth2 last_used update to the bottom of the request

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-10 12:55:15 +00:00
Ryan Petrello
14f8ef4f44 move the oauth2 last_used update to the bottom of the request
this avoids acquiring a row lock (and holding it for the duration of
transactions which include a very slow query)
2019-09-10 08:15:13 -04:00
Ryan Petrello
ee47e98c50 cli: make "detail" actions actually respect Allow: headers 2019-09-09 21:11:15 -04:00
softwarefactory-project-zuul[bot]
edb7ddb9ae Merge pull request #4687 from AlexSCorey/4685-LookUpPluralization
Adds proper pluralization to Look Up modal empty state

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-09 21:23:32 +00:00
mabashian
f69f43e3ba Upgrades lodash and angular-moment 2019-09-09 16:31:11 -04:00
softwarefactory-project-zuul[bot]
d9a7859a05 Merge pull request #4630 from ryanpetrello/cli-roles
cli: add support for granting and revoking roles from users/teams

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-09 20:19:12 +00:00
softwarefactory-project-zuul[bot]
ff32a5286e Merge pull request #4688 from ryanpetrello/human-format-null
fix a human format bug for settings

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-09 20:08:59 +00:00
mabashian
629e2e89b9 Fix linting error 2019-09-09 15:50:41 -04:00
mabashian
ea015190de Display search key along with value in tag 2019-09-09 15:41:39 -04:00
Ryan Petrello
6762702868 cli: add support for granting and revoking roles from users/teams 2019-09-09 15:27:16 -04:00
Ryan Petrello
3e97608914 fix a human format bug for settings 2019-09-09 15:25:04 -04:00
Alex Corey
a95394b135 Adds proper pluralization to Look Up modal empty state 2019-09-09 14:55:39 -04:00
softwarefactory-project-zuul[bot]
276b577103 Merge pull request #4648 from ryanpetrello/format-metrics-human
cli: add special code for formatting metrics and settings with -f human

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-09 18:52:29 +00:00
softwarefactory-project-zuul[bot]
b1666f2692 Merge pull request #4653 from ryanpetrello/attach-detach-by-name
cli: add support for attach/detaching creds/notifications via name

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2019-09-09 18:52:22 +00:00
mabashian
1f8f4e184b Various UX improvements 2019-09-09 12:54:31 -04:00
Daniel Sami
844f3fde72 add npm registry arg 2019-09-09 11:56:10 -04:00
softwarefactory-project-zuul[bot]
a3a5db1c44 Merge pull request #4678 from wenottingham/cloudy-with-a-chance-of-updates
Update assorted cloud module requirements.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-08 19:42:45 +00:00
softwarefactory-project-zuul[bot]
28630cb7fa Merge pull request #4650 from keithjgrant/4431-jt-form-advanced-fields
4431 jt form advanced fields

Reviewed-by: Keith Grant
             https://github.com/keithjgrant
2019-09-06 23:02:08 +00:00
Keith Grant
820605b0ca fix JT form tests 2019-09-06 14:29:00 -07:00
softwarefactory-project-zuul[bot]
bc79093102 Merge pull request #4679 from ryanpetrello/fix-doc-819
fix up a minor Swagger doc bug

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2019-09-06 21:15:34 +00:00
Ryan Petrello
eb2fc80114 fix up a minor Swagger doc bug 2019-09-06 16:28:06 -04:00
Bill Nottingham
eb2de51f86 add licenses for new azure deps 2019-09-06 15:55:16 -04:00
Bill Nottingham
7aa94a9bb5 Update assorted cloud module requirements.
Update boto3 for ec2_transit_gateway module.
Update Azure requirements to sync with Ansible 2.9 branch.
2019-09-06 15:30:08 -04:00
Keith Grant
be6f5e18ae fixing JT Form tests post-rebase 2019-09-06 11:35:24 -07:00
Keith Grant
9777b79818 hide overflow in ExpandingContainer while opening 2019-09-06 09:59:58 -07:00
Marliana Lara
25aa9bc43e Change empty state text 2019-09-06 12:52:44 -04:00
Marliana Lara
a79de2b4ed Fix existing test failures 2019-09-06 12:44:11 -04:00
Marliana Lara
7480baf256 Add host event modal 2019-09-06 12:43:53 -04:00
Marliana Lara
5babab7af4 Add host status icon and pull status styles into separate file 2019-09-06 12:43:17 -04:00
Keith Grant
4e73f4b778 show all advanced JT fields on edit form 2019-09-06 09:14:19 -07:00
softwarefactory-project-zuul[bot]
4f05955724 Merge pull request #4671 from gamuniz/fix_204_insights_api
fixed insights api 204 errors

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-06 16:03:46 +00:00
AlanCoding
ebc369bfef Use project verbosity without color for galaxy commands 2019-09-06 11:02:20 -04:00
Ben Thomasson
10d53637ad Changes uploader to use the insights api directly 2019-09-06 10:58:35 -04:00
Gabe Muniz
23a7151278 fixed insights api 204 errors 2019-09-06 09:27:17 -04:00
Keith Grant
0254cf3567 wire-in instance groups to JT form 2019-09-05 16:47:34 -07:00
softwarefactory-project-zuul[bot]
f91a02a9e4 Merge pull request #4661 from ansible/apurva_wait_for_job_fix
awxkit: don't suppress error when job does not exist

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-05 21:37:15 +00:00
Keith Grant
93b794eaa7 JT Form fixes after rebase 2019-09-05 13:45:30 -07:00
softwarefactory-project-zuul[bot]
927c99a2f4 Merge pull request #4667 from AlanCoding/update_fix
fix project sync revision bug

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-05 20:29:10 +00:00
softwarefactory-project-zuul[bot]
b4759de30d Merge pull request #4529 from wenottingham/the-pirates-are-bringing-r-bac
Fix display of indirect access permissions.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-05 17:40:44 +00:00
Apurva Bakshi
11a71f5ffa fix attribute error 2019-09-05 13:16:39 -04:00
Ryan Petrello
064f871fff add special code for formatting metrics and settings with -f human
see: https://github.com/ansible/awx/issues/4566
2019-09-05 11:55:23 -04:00
softwarefactory-project-zuul[bot]
ffbce2611a Merge pull request #4656 from wenottingham/collection-fix
Revert 9b95cc27c4.

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2019-09-05 13:50:18 +00:00
softwarefactory-project-zuul[bot]
8b9ddb5922 Merge pull request #4663 from ryanpetrello/fix-3741
fix a 500 error that can occur when a WorkflowApproval's node is deleted

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2019-09-05 13:41:06 +00:00
AlanCoding
c65b77a20a fix project sync revision bug 2019-09-05 08:49:52 -04:00
Ryan Petrello
b7e8044d69 fix a 500 error that can occur when a WorkflowApproval's node is deleted 2019-09-04 21:12:22 -04:00
Keith Grant
8b1ca12d8f update JobTemplateForm tests 2019-09-04 15:54:30 -07:00
Keith Grant
4f546be87a add JT form callback fields 2019-09-04 14:35:39 -07:00
Keith Grant
e6475f21f6 flush out JT form; upgrade enzyme; add CheckboxField 2019-09-04 14:34:57 -07:00
Keith Grant
218348412b create TagMultiSelect component; cleanup MultiSelect 2019-09-04 14:22:08 -07:00
Bill Nottingham
db64717551 Merge pull request #2 from AlanCoding/collection-fix-fix
Fix collection precedence bug, add new to left of list
2019-09-04 17:01:30 -04:00
Keith Grant
9edc686ab5 add generic onChange prop to MultiSelect 2019-09-04 13:58:32 -07:00
softwarefactory-project-zuul[bot]
c819a78a4b Merge pull request #4647 from rebeccahhh/devel
Added 'rescued' and 'ignored' fields into all_hosts dictionary update

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2019-09-04 20:48:04 +00:00
AlanCoding
11146a071f Fix collection precedence bug, add new to left of list 2019-09-04 15:32:03 -04:00
Keith Grant
4d31d83e1e add instance groups to JT form 2019-09-04 11:22:02 -07:00
Keith Grant
3a9a884bbc add omitProps helper 2019-09-04 11:19:05 -07:00
Keith Grant
8a31be6ffe refactor lookup components 2019-09-04 11:19:05 -07:00
Keith Grant
6fd86fed65 add jt advanced fields 2019-09-04 10:45:34 -07:00
Brian Coca
9d48ba4243 better error message on missing runner 2019-09-04 13:16:55 -04:00
Keith Grant
80bdb1a67a add CollapsibleSection & ExpandingContainer components 2019-09-04 09:55:24 -07:00
Bill Nottingham
24c16b1c58 Fix to enable system paths along with requirements override, from @alancoding. 2019-09-04 11:43:48 -04:00
Bill Nottingham
0fb7c859f4 Revert 9b95cc27c4.
We do not want to create a new setting for an on-Tower-host global
collection path at this time.
2019-09-04 11:43:33 -04:00
Ryan Petrello
1e5bcca0b9 cli: add support for attach/detaching creds/notifications via name 2019-09-03 21:51:39 -04:00
Rebeccah
daba25f107 added 'ignored' and 'rescued' into all_hosts dictionary update 2019-09-03 18:58:29 -04:00
Hideki Saito
9d2441789e Fix condition to show prompt button on the schedule add/edit form
It hides the prompt button on the survey form when the prompt is extra vars only.

- Fixed issue #4528

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-09-03 02:15:14 +00:00
Bill Nottingham
b4f6b380fd Show a tooltip for indirect permissions to show where they come from. 2019-08-30 16:28:39 -04:00
Bill Nottingham
444f024bb0 Fix display of indirect access permissions.
For indirect roles, we need to actually show the derived roles, not the
details of the role that gives us the derived roles. This means that
we can get multiple derived roles from a single indirect role, so
we have to expand the list.
2019-08-20 20:28:53 -04:00
580 changed files with 23109 additions and 9795 deletions

.gitignore

@@ -133,4 +133,12 @@ awx/lib/site-packages
venv/*
use_dev_supervisor.txt
# Ansible module tests
awx_collection_test_venv/
awx_collection/*.tar.gz
awx_collection/galaxy.yml
.idea/*
*.unison.tmp
*.#


@@ -156,8 +156,8 @@ If you start a second terminal session, you can take a look at the running conta
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aa4a75d6d77b gcr.io/ansible-tower-engineering/awx_devel:devel "/tini -- /bin/sh ..." 23 seconds ago Up 15 seconds 0.0.0.0:5555->5555/tcp, 0.0.0.0:6899-6999->6899-6999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 22/tcp, 0.0.0.0:8080->8080/tcp tools_awx_1
e4c0afeb548c postgres:9.6 "docker-entrypoint..." 26 seconds ago Up 23 seconds 5432/tcp tools_postgres_1
aa4a75d6d77b gcr.io/ansible-tower-engineering/awx_devel:devel "/tini -- /bin/sh ..." 23 seconds ago Up 15 seconds 0.0.0.0:5555->5555/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 22/tcp, 0.0.0.0:8080->8080/tcp tools_awx_1
e4c0afeb548c postgres:10 "docker-entrypoint..." 26 seconds ago Up 23 seconds 5432/tcp tools_postgres_1
0089699d5afd tools_logstash "/docker-entrypoin..." 26 seconds ago Up 25 seconds tools_logstash_1
4d4ff0ced266 memcached:alpine "docker-entrypoint..." 26 seconds ago Up 25 seconds 0.0.0.0:11211->11211/tcp tools_memcached_1
92842acd64cd rabbitmq:3-management "docker-entrypoint..." 26 seconds ago Up 24 seconds 4369/tcp, 5671-5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp tools_rabbitmq_1


@@ -193,7 +193,7 @@ $ eval $(minishift docker-env)
By default, AWX will deploy a PostgreSQL pod inside of your cluster. You will need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html) which is named `postgresql` by default, and can be overridden by setting the `openshift_pg_pvc_name` variable. For testing and demo purposes, you may set `openshift_pg_emptydir=yes`.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_database`, and `pg_port` with the connection information. When setting `pg_hostname` the installer will assume you have configured the database in that location and will not launch the postgresql pod.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information. When setting `pg_hostname` the installer will assume you have configured the database in that location and will not launch the postgresql pod.
### Start the build
@@ -503,7 +503,7 @@ If you wish to tag and push built images to a Docker registry, set the following
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_database`, and `pg_port` with the connection information.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information.
### Start the build


@@ -18,6 +18,7 @@ COMPOSE_TAG ?= $(GIT_BRANCH)
COMPOSE_HOST ?= $(shell hostname)
VENV_BASE ?= /venv
COLLECTION_VENV ?= /awx_devel/awx_collection_test_venv
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db
@@ -99,20 +100,22 @@ clean-languages:
find . -type f -regex ".*\.mo$$" -delete
# Remove temporary build files, compiled Python files.
clean: clean-ui clean-dist
clean: clean-ui clean-api clean-dist
rm -rf awx/public
rm -rf awx/lib/site-packages
rm -rf awx/job_status
rm -rf awx/job_output
rm -rf reports
rm -f awx/awx_test.sqlite3*
rm -rf requirements/vendor
rm -rf tmp
rm -rf $(I18N_FLAG_FILE)
mkdir tmp
clean-api:
rm -rf build $(NAME)-$(VERSION) *.egg-info
find . -type f -regex ".*\.py[co]$$" -delete
find . -type d -name "__pycache__" -delete
rm -f awx/awx_test.sqlite3*
rm -rf requirements/vendor
# convenience target to assert environment variables are defined
guard-%:
@@ -186,7 +189,7 @@ requirements_awx: virtualenv_awx
cat requirements/requirements.txt requirements/requirements_git.txt | $(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) --ignore-installed -r /dev/stdin ; \
fi
echo "include-system-site-packages = true" >> $(VENV_BASE)/awx/lib/python$(PYTHON_VERSION)/pyvenv.cfg
#$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt
$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt
requirements_awx_dev:
$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt
@@ -375,6 +378,31 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py2,py3
awx-manage check_migrations --dry-run --check -n 'vNNN_missing_migration_file'
prepare_collection_venv:
rm -rf $(COLLECTION_VENV)
mkdir $(COLLECTION_VENV)
ln -s /usr/lib/python2.7/site-packages/ansible $(COLLECTION_VENV)/ansible
$(VENV_BASE)/awx/bin/pip install --target=$(COLLECTION_VENV) git+https://github.com/ansible/tower-cli.git
COLLECTION_TEST_DIRS ?= awx_collection/test/awx
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
test_collection:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONPATH=$(COLLECTION_VENV):/awx_devel/awx_collection:$PYTHONPATH py.test $(COLLECTION_TEST_DIRS)
flake8_collection:
flake8 awx_collection/ # Different settings, in main exclude list
test_collection_all: prepare_collection_venv test_collection flake8_collection
build_collection:
ansible-playbook -i localhost, awx_collection/template_galaxy.yml -e collection_package=$(COLLECTION_PACKAGE) -e collection_namespace=$(COLLECTION_NAMESPACE) -e collection_version=$(VERSION)
ansible-galaxy collection build awx_collection --output-path=awx_collection
test_unit:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -516,6 +544,12 @@ jshint: $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) run --prefix awx/ui jshint
$(NPM_BIN) run --prefix awx/ui lint
ui-zuul-lint-and-test: $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) run --prefix awx/ui jshint
$(NPM_BIN) run --prefix awx/ui lint
$(NPM_BIN) --prefix awx/ui run test:ci
$(NPM_BIN) --prefix awx/ui run unit
# END UI TASKS
# --------------------------------------
@@ -531,6 +565,12 @@ ui-next-test:
$(NPM_BIN) --prefix awx/ui_next install
$(NPM_BIN) run --prefix awx/ui_next test
ui-next-zuul-lint-and-test:
$(NPM_BIN) --prefix awx/ui_next install
$(NPM_BIN) run --prefix awx/ui_next lint
$(NPM_BIN) run --prefix awx/ui_next prettier-check
$(NPM_BIN) run --prefix awx/ui_next test
# END UI NEXT TASKS
# --------------------------------------
@@ -648,7 +688,7 @@ clean-elk:
docker rm tools_kibana_1
psql-container:
docker run -it --net tools_default --rm postgres:9.6 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
docker run -it --net tools_default --rm postgres:10 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
VERSION:
@echo "awx: $(VERSION)"


@@ -1 +1 @@
7.0.0
8.0.0


@@ -82,6 +82,16 @@ def find_commands(management_dir):
return commands
def oauth2_getattribute(self, attr):
# Custom method to override
# oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__
from django.conf import settings
val = settings.OAUTH2_PROVIDER.get(attr)
if val is None:
val = object.__getattribute__(self, attr)
return val
def prepare_env():
# Update the default settings environment variable based on current mode.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings.%s' % MODE)
@@ -93,6 +103,12 @@ def prepare_env():
# Monkeypatch Django find_commands to also work with .pyc files.
import django.core.management
django.core.management.find_commands = find_commands
# Monkeypatch Oauth2 toolkit settings class to check for settings
# in django.conf settings each time, not just once during import
import oauth2_provider.settings
oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__ = oauth2_getattribute
# Use the AWX_TEST_DATABASE_* environment variables to specify the test
# database settings to use when management command is run as an external
# program via unit tests.
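The hunk above replaces `OAuth2ProviderSettings.__getattribute__` so the toolkit re-reads `django.conf.settings.OAUTH2_PROVIDER` on every attribute access instead of caching values once at import. A minimal standalone sketch of the same pattern (class and dict names here are illustrative stand-ins, not AWX's):

```python
class ProviderSettings:
    """Library-style settings object that would normally freeze values at import."""
    ACCESS_TOKEN_EXPIRE_SECONDS = 36000  # built-in default

live_overrides = {}  # stands in for django.conf.settings.OAUTH2_PROVIDER

def live_getattribute(self, attr):
    # Check the live settings dict first; fall back to the cached default,
    # mirroring the oauth2_getattribute monkeypatch in the diff.
    val = live_overrides.get(attr)
    if val is None:
        val = object.__getattribute__(self, attr)
    return val

# Monkeypatch the class, as prepare_env() does for OAuth2ProviderSettings.
ProviderSettings.__getattribute__ = live_getattribute

s = ProviderSettings()
print(s.ACCESS_TOKEN_EXPIRE_SECONDS)  # 36000, the default
live_overrides['ACCESS_TOKEN_EXPIRE_SECONDS'] = 600
print(s.ACCESS_TOKEN_EXPIRE_SECONDS)  # 600, picked up without re-import
```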


@@ -38,12 +38,15 @@ register(
'OAUTH2_PROVIDER',
field_class=OAuth2ProviderField,
default={'ACCESS_TOKEN_EXPIRE_SECONDS': oauth2_settings.ACCESS_TOKEN_EXPIRE_SECONDS,
'AUTHORIZATION_CODE_EXPIRE_SECONDS': 600},
'AUTHORIZATION_CODE_EXPIRE_SECONDS': oauth2_settings.AUTHORIZATION_CODE_EXPIRE_SECONDS,
'REFRESH_TOKEN_EXPIRE_SECONDS': oauth2_settings.REFRESH_TOKEN_EXPIRE_SECONDS},
label=_('OAuth 2 Timeout Settings'),
help_text=_('Dictionary for customizing OAuth 2 timeouts, available items are '
'`ACCESS_TOKEN_EXPIRE_SECONDS`, the duration of access tokens in the number '
'of seconds, and `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
'authorization codes in the number of seconds.'),
'of seconds, `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
'authorization codes in the number of seconds, and `REFRESH_TOKEN_EXPIRE_SECONDS`, '
'the duration of refresh tokens, after expired access tokens, '
'in the number of seconds.'),
category=_('Authentication'),
category_slug='authentication',
)


@@ -80,7 +80,7 @@ class OAuth2ProviderField(fields.DictField):
default_error_messages = {
'invalid_key_names': _('Invalid key names: {invalid_key_names}'),
}
valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS'}
valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS', 'REFRESH_TOKEN_EXPIRE_SECONDS'}
child = fields.IntegerField(min_value=1)
def to_internal_value(self, data):
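The field above whitelists the keys accepted in the `OAUTH2_PROVIDER` dict, and the hunk adds `REFRESH_TOKEN_EXPIRE_SECONDS` to that set. A rough dependency-free sketch of the same validation (the function name is made up; DRF's error wiring is simplified to `ValueError`):

```python
VALID_KEY_NAMES = {
    'ACCESS_TOKEN_EXPIRE_SECONDS',
    'AUTHORIZATION_CODE_EXPIRE_SECONDS',
    'REFRESH_TOKEN_EXPIRE_SECONDS',
}

def validate_oauth2_timeouts(data: dict) -> dict:
    # Reject unknown keys up front, like DRF's 'invalid_key_names' error.
    invalid = set(data) - VALID_KEY_NAMES
    if invalid:
        raise ValueError('Invalid key names: {}'.format(', '.join(sorted(invalid))))
    # Each value must be a positive integer (child = IntegerField(min_value=1)).
    for key, value in data.items():
        if not isinstance(value, int) or value < 1:
            raise ValueError('{} must be an integer >= 1'.format(key))
    return data
```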


@@ -126,7 +126,7 @@ class FieldLookupBackend(BaseFilterBackend):
'''
RESERVED_NAMES = ('page', 'page_size', 'format', 'order', 'order_by',
'search', 'type', 'host_filter')
'search', 'type', 'host_filter', 'count_disabled', 'no_truncate')
SUPPORTED_LOOKUPS = ('exact', 'iexact', 'contains', 'icontains',
'startswith', 'istartswith', 'endswith', 'iendswith',


@@ -92,7 +92,7 @@ class LoggedLoginView(auth_views.LoginView):
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
if request.user.is_authenticated:
logger.info(smart_text(u"User {} logged in.".format(self.request.user.username)))
logger.info(smart_text(u"User {} logged in from {}".format(self.request.user.username,request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true')
current_user = UserSerializer(self.request.user)
current_user = smart_text(JSONRenderer().render(current_user.data))
@@ -205,6 +205,9 @@ class APIView(views.APIView):
response['X-API-Query-Count'] = len(q_times)
response['X-API-Query-Time'] = '%0.3fs' % sum(q_times)
if getattr(self, 'deprecated', False):
response['Warning'] = '299 awx "This resource has been deprecated and will be removed in a future release."' # noqa
return response
def get_authenticate_header(self, request):
@@ -489,9 +492,12 @@ class SubListAPIView(ParentMixin, ListAPIView):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model).distinct()
sublist_qs = getattrd(parent, self.relationship).distinct()
sublist_qs = self.get_sublist_queryset(parent)
return qs & sublist_qs
def get_sublist_queryset(self, parent):
return getattrd(parent, self.relationship).distinct()
class DestroyAPIView(generics.DestroyAPIView):


@@ -20,8 +20,9 @@ from rest_framework.fields import JSONField as DRFJSONField
from rest_framework.request import clone_request
# AWX
from awx.main.fields import JSONField
from awx.main.fields import JSONField, ImplicitRoleField
from awx.main.models import InventorySource, NotificationTemplate
from awx.main.scheduler.kubernetes import PodManager
class Metadata(metadata.SimpleMetadata):
@@ -200,6 +201,9 @@ class Metadata(metadata.SimpleMetadata):
if not isinstance(meta, dict):
continue
if field == "pod_spec_override":
meta['default'] = PodManager().pod_definition
# Add type choices if available from the serializer.
if field == 'type' and hasattr(serializer, 'get_type_choices'):
meta['choices'] = serializer.get_type_choices()
@@ -252,6 +256,16 @@ class Metadata(metadata.SimpleMetadata):
if getattr(view, 'related_search_fields', None):
metadata['related_search_fields'] = view.related_search_fields
# include role names in metadata
roles = []
model = getattr(view, 'model', None)
if model:
for field in model._meta.get_fields():
if type(field) is ImplicitRoleField:
roles.append(field.name)
if len(roles) > 0:
metadata['object_roles'] = roles
from rest_framework import generics
if isinstance(view, generics.ListAPIView) and hasattr(view, 'paginator'):
metadata['max_page_size'] = view.paginator.max_page_size
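The metadata hunk above walks the model's fields and reports the name of every `ImplicitRoleField` under `object_roles`. The gathering step amounts to the following sketch (the `Fake*` classes are stand-ins for Django model internals):

```python
class ImplicitRoleField:
    """Stand-in for awx.main.fields.ImplicitRoleField."""
    def __init__(self, name):
        self.name = name

class FakeMeta:
    def __init__(self, fields):
        self._fields = fields
    def get_fields(self):
        return self._fields

class FakeModel:
    _meta = FakeMeta([
        ImplicitRoleField('admin_role'),
        ImplicitRoleField('execute_role'),
        object(),  # an ordinary field: skipped
    ])

def collect_object_roles(model):
    roles = []
    for field in model._meta.get_fields():
        # `type(...) is` (not isinstance) matches the diff exactly,
        # so subclasses of ImplicitRoleField would not be reported.
        if type(field) is ImplicitRoleField:
            roles.append(field.name)
    return roles

print(collect_object_roles(FakeModel))  # ['admin_role', 'execute_role']
```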


@@ -3,14 +3,28 @@
# Django REST Framework
from django.conf import settings
from django.core.paginator import Paginator as DjangoPaginator
from rest_framework import pagination
from rest_framework.response import Response
from rest_framework.utils.urls import replace_query_param
class DisabledPaginator(DjangoPaginator):
@property
def num_pages(self):
return 1
@property
def count(self):
return 200
class Pagination(pagination.PageNumberPagination):
page_size_query_param = 'page_size'
max_page_size = settings.MAX_PAGE_SIZE
count_disabled = False
def get_next_link(self):
if not self.page.has_next():
@@ -39,3 +53,17 @@ class Pagination(pagination.PageNumberPagination):
for pl in context['page_links']]
return context
def paginate_queryset(self, queryset, request, **kwargs):
self.count_disabled = 'count_disabled' in request.query_params
try:
if self.count_disabled:
self.django_paginator_class = DisabledPaginator
return super(Pagination, self).paginate_queryset(queryset, request, **kwargs)
finally:
self.django_paginator_class = DjangoPaginator
def get_paginated_response(self, data):
if self.count_disabled:
return Response({'results': data})
return super(Pagination, self).get_paginated_response(data)
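The pagination hunk above introduces a `count_disabled` query flag: when present, a `DisabledPaginator` is swapped in whose `num_pages`/`count` are hard-coded, so Django never issues the expensive `COUNT(*)`, and the response body omits the count. The core trick, lifted out of Django into a self-contained sketch:

```python
class Paginator:
    """Minimal stand-in for django.core.paginator.Paginator."""
    def __init__(self, object_list, per_page):
        self.object_list = object_list
        self.per_page = per_page

    @property
    def count(self):
        return len(self.object_list)  # in Django this issues COUNT(*)

    @property
    def num_pages(self):
        return max(1, -(-self.count // self.per_page))  # ceiling division

class DisabledPaginator(Paginator):
    # Report fixed values so the real count is never computed,
    # mirroring the DisabledPaginator in the diff.
    @property
    def num_pages(self):
        return 1

    @property
    def count(self):
        return 200

def paginate(object_list, per_page, count_disabled=False):
    cls = DisabledPaginator if count_disabled else Paginator
    p = cls(object_list, per_page)
    return {'count': p.count, 'num_pages': p.num_pages}

print(paginate(range(1000), 25))                       # real count: 1000
print(paginate(range(1000), 25, count_disabled=True))  # fixed: 200 / 1 page
```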


@@ -249,3 +249,8 @@ class InstanceGroupTowerPermission(ModelAccessPermission):
if request.method == 'DELETE' and obj.name == "tower":
return False
return super(InstanceGroupTowerPermission, self).has_object_permission(request, view, obj)
class WebhookKeyPermission(permissions.BasePermission):
def has_object_permission(self, request, view, obj):
return request.user.can_access(view.model, 'admin', obj, request.data)


@@ -45,7 +45,6 @@ from polymorphic.models import PolymorphicModel
from awx.main.access import get_user_capabilities
from awx.main.constants import (
SCHEDULEABLE_PROVIDERS,
ANSI_SGR_PATTERN,
ACTIVE_STATES,
CENSOR_VALUE,
)
@@ -70,7 +69,8 @@ from awx.main.utils import (
get_type_for_model, get_model_for_type,
camelcase_to_underscore, getattrd, parse_yaml_or_json,
has_model_field_prefetched, extract_ansible_vars, encrypt_dict,
prefetch_page_capabilities, get_external_account)
prefetch_page_capabilities, get_external_account, truncate_stdout,
)
from awx.main.utils.filters import SmartFilter
from awx.main.redact import UriCleaner, REPLACE_STR
@@ -116,7 +116,7 @@ SUMMARIZABLE_FK_FIELDS = {
'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'kubernetes', 'credential_type_id'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed', 'type'),
'job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job_template': DEFAULT_SUMMARY_FIELDS,
@@ -135,10 +135,12 @@ SUMMARIZABLE_FK_FIELDS = {
'source_script': ('name', 'description'),
'role': ('id', 'role_field'),
'notification_template': DEFAULT_SUMMARY_FIELDS,
'instance_group': {'id', 'name', 'controller_id'},
'instance_group': ('id', 'name', 'controller_id', 'is_containerized'),
'insights_credential': DEFAULT_SUMMARY_FIELDS,
'source_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'target_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'webhook_credential': DEFAULT_SUMMARY_FIELDS,
'approved_or_denied_by': ('id', 'username', 'first_name', 'last_name'),
}
@@ -1261,6 +1263,7 @@ class OrganizationSerializer(BaseSerializer):
notification_templates_started = self.reverse('api:organization_notification_templates_started_list', kwargs={'pk': obj.pk}),
notification_templates_success = self.reverse('api:organization_notification_templates_success_list', kwargs={'pk': obj.pk}),
notification_templates_error = self.reverse('api:organization_notification_templates_error_list', kwargs={'pk': obj.pk}),
notification_templates_approvals = self.reverse('api:organization_notification_templates_approvals_list', kwargs={'pk': obj.pk}),
object_roles = self.reverse('api:organization_object_roles_list', kwargs={'pk': obj.pk}),
access_list = self.reverse('api:organization_access_list', kwargs={'pk': obj.pk}),
instance_groups = self.reverse('api:organization_instance_groups_list', kwargs={'pk': obj.pk}),
@@ -2513,7 +2516,7 @@ class CredentialSerializer(BaseSerializer):
class Meta:
model = Credential
fields = ('*', 'organization', 'credential_type', 'inputs', 'kind', 'cloud')
fields = ('*', 'organization', 'credential_type', 'inputs', 'kind', 'cloud', 'kubernetes')
extra_kwargs = {
'credential_type': {
'label': _('Credential Type'),
@@ -2825,6 +2828,25 @@ class JobTemplateMixin(object):
d['recent_jobs'] = self._recent_jobs(obj)
return d
def validate(self, attrs):
webhook_service = attrs.get('webhook_service', getattr(self.instance, 'webhook_service', None))
webhook_credential = attrs.get('webhook_credential', getattr(self.instance, 'webhook_credential', None))
if webhook_credential:
if webhook_credential.credential_type.kind != 'token':
raise serializers.ValidationError({
'webhook_credential': _("Must be a Personal Access Token."),
})
msg = {'webhook_credential': _("Must match the selected webhook service.")}
if webhook_service:
if webhook_credential.credential_type.namespace != '{}_token'.format(webhook_service):
raise serializers.ValidationError(msg)
else:
raise serializers.ValidationError(msg)
return super().validate(attrs)
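The `validate` hook above enforces two rules: the webhook credential must be a Personal Access Token (`kind == 'token'`), and its type namespace must match the selected service (`github_token` for `github`, and so on). A dependency-free sketch of the same checks (the tiny credential classes are stand-ins for AWX models):

```python
class CredentialType:
    def __init__(self, kind, namespace):
        self.kind = kind
        self.namespace = namespace

class Credential:
    def __init__(self, kind, namespace):
        self.credential_type = CredentialType(kind, namespace)

def validate_webhook(webhook_service, webhook_credential):
    if webhook_credential is None:
        return
    if webhook_credential.credential_type.kind != 'token':
        raise ValueError('webhook_credential: Must be a Personal Access Token.')
    # The credential's namespace must be '<service>_token' for the chosen service,
    # and a credential without a service is also rejected, as in the diff.
    if (not webhook_service or
            webhook_credential.credential_type.namespace != '{}_token'.format(webhook_service)):
        raise ValueError('webhook_credential: Must match the selected webhook service.')

validate_webhook('github', Credential('token', 'github_token'))  # passes silently
```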
class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobOptionsSerializer):
show_capabilities = ['start', 'schedule', 'copy', 'edit', 'delete']
@@ -2837,30 +2859,39 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
class Meta:
model = JobTemplate
fields = ('*', 'host_config_key', 'ask_scm_branch_on_launch', 'ask_diff_mode_on_launch', 'ask_variables_on_launch',
'ask_limit_on_launch', 'ask_tags_on_launch',
'ask_skip_tags_on_launch', 'ask_job_type_on_launch', 'ask_verbosity_on_launch', 'ask_inventory_on_launch',
'ask_credential_on_launch', 'survey_enabled', 'become_enabled', 'diff_mode',
'allow_simultaneous', 'custom_virtualenv', 'job_slice_count')
fields = (
'*', 'host_config_key', 'ask_scm_branch_on_launch', 'ask_diff_mode_on_launch',
'ask_variables_on_launch', 'ask_limit_on_launch', 'ask_tags_on_launch',
'ask_skip_tags_on_launch', 'ask_job_type_on_launch', 'ask_verbosity_on_launch',
'ask_inventory_on_launch', 'ask_credential_on_launch', 'survey_enabled',
'become_enabled', 'diff_mode', 'allow_simultaneous', 'custom_virtualenv',
'job_slice_count', 'webhook_service', 'webhook_credential',
)
def get_related(self, obj):
res = super(JobTemplateSerializer, self).get_related(obj)
res.update(dict(
jobs = self.reverse('api:job_template_jobs_list', kwargs={'pk': obj.pk}),
schedules = self.reverse('api:job_template_schedules_list', kwargs={'pk': obj.pk}),
activity_stream = self.reverse('api:job_template_activity_stream_list', kwargs={'pk': obj.pk}),
launch = self.reverse('api:job_template_launch', kwargs={'pk': obj.pk}),
notification_templates_started = self.reverse('api:job_template_notification_templates_started_list', kwargs={'pk': obj.pk}),
notification_templates_success = self.reverse('api:job_template_notification_templates_success_list', kwargs={'pk': obj.pk}),
notification_templates_error = self.reverse('api:job_template_notification_templates_error_list', kwargs={'pk': obj.pk}),
access_list = self.reverse('api:job_template_access_list', kwargs={'pk': obj.pk}),
survey_spec = self.reverse('api:job_template_survey_spec', kwargs={'pk': obj.pk}),
labels = self.reverse('api:job_template_label_list', kwargs={'pk': obj.pk}),
object_roles = self.reverse('api:job_template_object_roles_list', kwargs={'pk': obj.pk}),
instance_groups = self.reverse('api:job_template_instance_groups_list', kwargs={'pk': obj.pk}),
slice_workflow_jobs = self.reverse('api:job_template_slice_workflow_jobs_list', kwargs={'pk': obj.pk}),
copy = self.reverse('api:job_template_copy', kwargs={'pk': obj.pk}),
))
res.update(
jobs=self.reverse('api:job_template_jobs_list', kwargs={'pk': obj.pk}),
schedules=self.reverse('api:job_template_schedules_list', kwargs={'pk': obj.pk}),
activity_stream=self.reverse('api:job_template_activity_stream_list', kwargs={'pk': obj.pk}),
launch=self.reverse('api:job_template_launch', kwargs={'pk': obj.pk}),
webhook_key=self.reverse('api:webhook_key', kwargs={'model_kwarg': 'job_templates', 'pk': obj.pk}),
webhook_receiver=(
self.reverse('api:webhook_receiver_{}'.format(obj.webhook_service),
kwargs={'model_kwarg': 'job_templates', 'pk': obj.pk})
if obj.webhook_service else ''
),
notification_templates_started=self.reverse('api:job_template_notification_templates_started_list', kwargs={'pk': obj.pk}),
notification_templates_success=self.reverse('api:job_template_notification_templates_success_list', kwargs={'pk': obj.pk}),
notification_templates_error=self.reverse('api:job_template_notification_templates_error_list', kwargs={'pk': obj.pk}),
access_list=self.reverse('api:job_template_access_list', kwargs={'pk': obj.pk}),
survey_spec=self.reverse('api:job_template_survey_spec', kwargs={'pk': obj.pk}),
labels=self.reverse('api:job_template_label_list', kwargs={'pk': obj.pk}),
object_roles=self.reverse('api:job_template_object_roles_list', kwargs={'pk': obj.pk}),
instance_groups=self.reverse('api:job_template_instance_groups_list', kwargs={'pk': obj.pk}),
slice_workflow_jobs=self.reverse('api:job_template_slice_workflow_jobs_list', kwargs={'pk': obj.pk}),
copy=self.reverse('api:job_template_copy', kwargs={'pk': obj.pk}),
)
if obj.host_config_key:
res['callback'] = self.reverse('api:job_template_callback', kwargs={'pk': obj.pk})
return res
@@ -2888,7 +2919,6 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
def validate_extra_vars(self, value):
return vars_validate_or_raise(value)
def get_summary_fields(self, obj):
summary_fields = super(JobTemplateSerializer, self).get_summary_fields(obj)
all_creds = []
@@ -2929,9 +2959,11 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
class Meta:
model = Job
fields = ('*', 'job_template', 'passwords_needed_to_start',
'allow_simultaneous', 'artifacts', 'scm_revision',
'instance_group', 'diff_mode', 'job_slice_number', 'job_slice_count')
fields = (
'*', 'job_template', 'passwords_needed_to_start', 'allow_simultaneous',
'artifacts', 'scm_revision', 'instance_group', 'diff_mode', 'job_slice_number',
'job_slice_count', 'webhook_service', 'webhook_credential', 'webhook_guid',
)
def get_related(self, obj):
res = super(JobSerializer, self).get_related(obj)
@@ -3314,29 +3346,42 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
'admin', 'execute',
{'copy': 'organization.workflow_admin'}
]
limit = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
scm_branch = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'extra_vars', 'organization', 'survey_enabled', 'allow_simultaneous',
'ask_variables_on_launch', 'inventory', 'ask_inventory_on_launch',)
fields = (
'*', 'extra_vars', 'organization', 'survey_enabled', 'allow_simultaneous',
'ask_variables_on_launch', 'inventory', 'limit', 'scm_branch',
'ask_inventory_on_launch', 'ask_scm_branch_on_launch', 'ask_limit_on_launch',
'webhook_service', 'webhook_credential',
)
def get_related(self, obj):
res = super(WorkflowJobTemplateSerializer, self).get_related(obj)
res.update(dict(
res.update(
workflow_jobs = self.reverse('api:workflow_job_template_jobs_list', kwargs={'pk': obj.pk}),
schedules = self.reverse('api:workflow_job_template_schedules_list', kwargs={'pk': obj.pk}),
launch = self.reverse('api:workflow_job_template_launch', kwargs={'pk': obj.pk}),
webhook_key=self.reverse('api:webhook_key', kwargs={'model_kwarg': 'workflow_job_templates', 'pk': obj.pk}),
webhook_receiver=(
self.reverse('api:webhook_receiver_{}'.format(obj.webhook_service),
kwargs={'model_kwarg': 'workflow_job_templates', 'pk': obj.pk})
if obj.webhook_service else ''
),
workflow_nodes = self.reverse('api:workflow_job_template_workflow_nodes_list', kwargs={'pk': obj.pk}),
labels = self.reverse('api:workflow_job_template_label_list', kwargs={'pk': obj.pk}),
activity_stream = self.reverse('api:workflow_job_template_activity_stream_list', kwargs={'pk': obj.pk}),
notification_templates_started = self.reverse('api:workflow_job_template_notification_templates_started_list', kwargs={'pk': obj.pk}),
notification_templates_success = self.reverse('api:workflow_job_template_notification_templates_success_list', kwargs={'pk': obj.pk}),
notification_templates_error = self.reverse('api:workflow_job_template_notification_templates_error_list', kwargs={'pk': obj.pk}),
notification_templates_approvals = self.reverse('api:workflow_job_template_notification_templates_approvals_list', kwargs={'pk': obj.pk}),
access_list = self.reverse('api:workflow_job_template_access_list', kwargs={'pk': obj.pk}),
object_roles = self.reverse('api:workflow_job_template_object_roles_list', kwargs={'pk': obj.pk}),
survey_spec = self.reverse('api:workflow_job_template_survey_spec', kwargs={'pk': obj.pk}),
copy = self.reverse('api:workflow_job_template_copy', kwargs={'pk': obj.pk}),
))
)
if obj.organization:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization.pk})
return res
@@ -3344,6 +3389,22 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
def validate_extra_vars(self, value):
return vars_validate_or_raise(value)
def validate(self, attrs):
attrs = super(WorkflowJobTemplateSerializer, self).validate(attrs)
# process char_prompts, these are not direct fields on the model
mock_obj = self.Meta.model()
for field_name in ('scm_branch', 'limit'):
if field_name in attrs:
setattr(mock_obj, field_name, attrs[field_name])
attrs.pop(field_name)
# Model `.save` needs the container dict, not the pseudo fields
if mock_obj.char_prompts:
attrs['char_prompts'] = mock_obj.char_prompts
return attrs
class WorkflowJobTemplateWithSpecSerializer(WorkflowJobTemplateSerializer):
'''
@@ -3356,13 +3417,16 @@ class WorkflowJobTemplateWithSpecSerializer(WorkflowJobTemplateSerializer):
class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
limit = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
scm_branch = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
class Meta:
model = WorkflowJob
fields = ('*', 'workflow_job_template', 'extra_vars', 'allow_simultaneous',
'job_template', 'is_sliced_job',
'-execution_node', '-event_processing_finished', '-controller_node',
'inventory',)
fields = (
'*', 'workflow_job_template', 'extra_vars', 'allow_simultaneous', 'job_template',
'is_sliced_job', '-execution_node', '-event_processing_finished', '-controller_node',
'inventory', 'limit', 'scm_branch', 'webhook_service', 'webhook_credential', 'webhook_guid',
)
def get_related(self, obj):
res = super(WorkflowJobSerializer, self).get_related(obj)
@@ -3438,6 +3502,8 @@ class WorkflowApprovalSerializer(UnifiedJobSerializer):
kwargs={'pk': obj.workflow_approval_template.pk})
res['approve'] = self.reverse('api:workflow_approval_approve', kwargs={'pk': obj.pk})
res['deny'] = self.reverse('api:workflow_approval_deny', kwargs={'pk': obj.pk})
if obj.approved_or_denied_by:
res['approved_or_denied_by'] = self.reverse('api:user_detail', kwargs={'pk': obj.approved_or_denied_by.pk})
return res
@@ -3469,7 +3535,7 @@ class WorkflowApprovalTemplateSerializer(UnifiedJobTemplateSerializer):
if 'last_job' in res:
del res['last_job']
res.update(dict(jobs = self.reverse('api:workflow_approval_template_jobs_list', kwargs={'pk': obj.pk}),))
res.update(jobs = self.reverse('api:workflow_approval_template_jobs_list', kwargs={'pk': obj.pk}))
return res
@@ -3596,7 +3662,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
if errors:
raise serializers.ValidationError(errors)
# Model `.save` needs the container dict, not the psuedo fields
# Model `.save` needs the container dict, not the pseudo fields
if mock_obj.char_prompts:
attrs['char_prompts'] = mock_obj.char_prompts
@@ -3788,25 +3854,17 @@ class JobEventSerializer(BaseSerializer):
return d
def to_representation(self, obj):
ret = super(JobEventSerializer, self).to_representation(obj)
# Show full stdout for event detail view, truncate only for list view.
if hasattr(self.context.get('view', None), 'retrieve'):
return ret
data = super(JobEventSerializer, self).to_representation(obj)
# Show full stdout for playbook_on_* events.
if obj and obj.event.startswith('playbook_on'):
return ret
return data
# If the view logic says not to truncate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False):
return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
if max_bytes > 0 and 'stdout' in ret and len(ret['stdout']) >= max_bytes:
ret['stdout'] = ret['stdout'][:(max_bytes - 1)] + u'\u2026'
set_count = 0
reset_count = 0
for m in ANSI_SGR_PATTERN.finditer(ret['stdout']):
if m.string[m.start():m.end()] == u'\u001b[0m':
reset_count += 1
else:
set_count += 1
ret['stdout'] += u'\u001b[0m' * (set_count - reset_count)
return ret
if 'stdout' in data:
data['stdout'] = truncate_stdout(data['stdout'], max_bytes)
return data
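The removed inline truncation logic now lives in a shared `truncate_stdout` helper. A standalone sketch of what that helper plausibly does, reconstructed from the deleted code (the helper's exact location and signature are assumptions):

```python
import re

# Matches ANSI SGR (color) escape sequences such as "\x1b[31m" or the reset "\x1b[0m".
ANSI_SGR_PATTERN = re.compile(r'\x1b\[[0-9;]*m')

def truncate_stdout(stdout, size):
    """Truncate stdout to at most `size` characters, appending an ellipsis
    and enough SGR reset codes to close any color left open by the cut."""
    if size <= 0 or len(stdout) < size:
        return stdout
    stdout = stdout[:size - 1] + '\u2026'
    set_count = reset_count = 0
    for m in ANSI_SGR_PATTERN.finditer(stdout):
        if m.group() == '\x1b[0m':
            reset_count += 1
        else:
            set_count += 1
    return stdout + '\x1b[0m' * (set_count - reset_count)
```

Centralizing this removes the duplicated set/reset counting that both event serializers previously carried inline.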
class JobEventWebSocketSerializer(JobEventSerializer):
@@ -3901,22 +3959,14 @@ class AdHocCommandEventSerializer(BaseSerializer):
return res
def to_representation(self, obj):
ret = super(AdHocCommandEventSerializer, self).to_representation(obj)
# Show full stdout for event detail view, truncate only for list view.
if hasattr(self.context.get('view', None), 'retrieve'):
return ret
data = super(AdHocCommandEventSerializer, self).to_representation(obj)
# If the view logic says not to truncate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False):
return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
if max_bytes > 0 and 'stdout' in ret and len(ret['stdout']) >= max_bytes:
ret['stdout'] = ret['stdout'][:(max_bytes - 1)] + u'\u2026'
set_count = 0
reset_count = 0
for m in ANSI_SGR_PATTERN.finditer(ret['stdout']):
if m.string[m.start():m.end()] == u'\u001b[0m':
reset_count += 1
else:
set_count += 1
ret['stdout'] += u'\u001b[0m' * (set_count - reset_count)
return ret
if 'stdout' in data:
data['stdout'] = truncate_stdout(data['stdout'], max_bytes)
return data
class AdHocCommandEventWebSocketSerializer(AdHocCommandEventSerializer):
@@ -4180,12 +4230,16 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
queryset=Inventory.objects.all(),
required=False, write_only=True
)
limit = serializers.CharField(required=False, write_only=True, allow_blank=True)
scm_branch = serializers.CharField(required=False, write_only=True, allow_blank=True)
workflow_job_template_data = serializers.SerializerMethodField()
class Meta:
model = WorkflowJobTemplate
fields = ('ask_inventory_on_launch', 'can_start_without_user_input', 'defaults', 'extra_vars',
'inventory', 'survey_enabled', 'variables_needed_to_start',
fields = ('ask_inventory_on_launch', 'ask_limit_on_launch', 'ask_scm_branch_on_launch',
'can_start_without_user_input', 'defaults', 'extra_vars',
'inventory', 'limit', 'scm_branch',
'survey_enabled', 'variables_needed_to_start',
'node_templates_missing', 'node_prompts_rejected',
'workflow_job_template_data', 'survey_enabled', 'ask_variables_on_launch')
read_only_fields = ('ask_inventory_on_launch', 'ask_variables_on_launch')
@@ -4225,9 +4279,14 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
WFJT_extra_vars = template.extra_vars
WFJT_inventory = template.inventory
WFJT_limit = template.limit
WFJT_scm_branch = template.scm_branch
super(WorkflowJobLaunchSerializer, self).validate(attrs)
template.extra_vars = WFJT_extra_vars
template.inventory = WFJT_inventory
template.limit = WFJT_limit
template.scm_branch = WFJT_scm_branch
return accepted
@@ -4347,6 +4406,8 @@ class NotificationTemplateSerializer(BaseSerializer):
for event in messages:
if not messages[event]:
continue
if not isinstance(messages[event], dict):
continue
body = messages[event].get('body', {})
if body:
try:
@@ -4658,6 +4719,11 @@ class InstanceGroupSerializer(BaseSerializer):
'Isolated groups have a designated controller group.'),
read_only=True
)
is_containerized = serializers.BooleanField(
help_text=_('Indicates whether instances in this group are containerized. '
'Containerized groups have a designated OpenShift or Kubernetes cluster.'),
read_only=True
)
# NOTE: help_text is duplicated from field definitions, no obvious way of
# both defining field details here and also getting the field's help_text
policy_instance_percentage = serializers.IntegerField(
@@ -4683,8 +4749,9 @@ class InstanceGroupSerializer(BaseSerializer):
fields = ("id", "type", "url", "related", "name", "created", "modified",
"capacity", "committed_capacity", "consumed_capacity",
"percent_capacity_remaining", "jobs_running", "jobs_total",
"instances", "controller", "is_controller", "is_isolated",
"policy_instance_percentage", "policy_instance_minimum", "policy_instance_list")
"instances", "controller", "is_controller", "is_isolated", "is_containerized", "credential",
"policy_instance_percentage", "policy_instance_minimum", "policy_instance_list",
"pod_spec_override", "summary_fields")
def get_related(self, obj):
res = super(InstanceGroupSerializer, self).get_related(obj)
@@ -4692,6 +4759,9 @@ class InstanceGroupSerializer(BaseSerializer):
res['instances'] = self.reverse('api:instance_group_instance_list', kwargs={'pk': obj.pk})
if obj.controller_id:
res['controller'] = self.reverse('api:instance_group_detail', kwargs={'pk': obj.controller_id})
if obj.credential:
res['credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.credential_id})
return res
def validate_policy_instance_list(self, value):
@@ -4711,6 +4781,11 @@ class InstanceGroupSerializer(BaseSerializer):
raise serializers.ValidationError(_('tower instance group name may not be changed.'))
return value
def validate_credential(self, value):
if value and not value.kubernetes:
raise serializers.ValidationError(_('Only Kubernetes credentials can be associated with an Instance Group'))
return value
def get_capacity_dict(self):
# Store capacity values (globally computed) in the context
if 'capacity_map' not in self.context:


@@ -1,6 +1,6 @@
{% ifmeth GET %}
# List Roles for a Team:
{% ifmeth GET %}
Make a GET request to this resource to retrieve a list of roles associated with the selected team.
{% include "api/_list_common.md" %}


@@ -1,6 +1,6 @@
{% ifmeth GET %}
# List Roles for a User:
{% ifmeth GET %}
Make a GET request to this resource to retrieve a list of roles associated with the selected user.
{% include "api/_list_common.md" %}


@@ -0,0 +1,12 @@
# Webhook Secret Key:
Make a GET request to this resource to obtain the secret key for a job
template or workflow job template configured to be triggered by
webhook events. The response will include the following fields:
* `webhook_key`: Secret key that needs to be copied and added to the
webhook configuration of the service this template will be receiving
webhook events from (string, read-only)
Make an empty POST request to this resource to generate a new
replacement `webhook_key`.


@@ -1,7 +1,7 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.conf.urls import url
from django.conf.urls import include, url
from awx.api.views import (
JobTemplateList,
@@ -45,6 +45,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/object_roles/$', JobTemplateObjectRolesList.as_view(), name='job_template_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/labels/$', JobTemplateLabelList.as_view(), name='job_template_label_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', JobTemplateCopy.as_view(), name='job_template_copy'),
url(r'^(?P<pk>[0-9]+)/', include('awx.api.urls.webhooks'), {'model_kwarg': 'job_templates'}),
]
__all__ = ['urls']


@@ -18,6 +18,7 @@ from awx.api.views import (
OrganizationNotificationTemplatesErrorList,
OrganizationNotificationTemplatesStartedList,
OrganizationNotificationTemplatesSuccessList,
OrganizationNotificationTemplatesApprovalList,
OrganizationInstanceGroupsList,
OrganizationObjectRolesList,
OrganizationAccessList,
@@ -43,6 +44,8 @@ urls = [
name='organization_notification_templates_error_list'),
url(r'^(?P<pk>[0-9]+)/notification_templates_success/$', OrganizationNotificationTemplatesSuccessList.as_view(),
name='organization_notification_templates_success_list'),
url(r'^(?P<pk>[0-9]+)/notification_templates_approvals/$', OrganizationNotificationTemplatesApprovalList.as_view(),
name='organization_notification_templates_approvals_list'),
url(r'^(?P<pk>[0-9]+)/instance_groups/$', OrganizationInstanceGroupsList.as_view(), name='organization_instance_groups_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', OrganizationObjectRolesList.as_view(), name='organization_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/access_list/$', OrganizationAccessList.as_view(), name='organization_access_list'),


@@ -14,6 +14,7 @@ from awx.api.views import (
ApiV2RootView,
ApiV2PingView,
ApiV2ConfigView,
ApiV2SubscriptionView,
AuthView,
UserMeList,
DashboardView,
@@ -94,6 +95,7 @@ v2_urls = [
url(r'^metrics/$', MetricsView.as_view(), name='metrics_view'),
url(r'^ping/$', ApiV2PingView.as_view(), name='api_v2_ping_view'),
url(r'^config/$', ApiV2ConfigView.as_view(), name='api_v2_config_view'),
url(r'^config/subscriptions/$', ApiV2SubscriptionView.as_view(), name='api_v2_subscription_view'),
url(r'^auth/$', AuthView.as_view()),
url(r'^me/$', UserMeList.as_view(), name='user_me_list'),
url(r'^dashboard/$', DashboardView.as_view(), name='dashboard_view'),

awx/api/urls/webhooks.py (new file, 14 lines)

@@ -0,0 +1,14 @@
from django.conf.urls import url
from awx.api.views import (
WebhookKeyView,
GithubWebhookReceiver,
GitlabWebhookReceiver,
)
urlpatterns = [
url(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
url(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),
url(r'^gitlab/$', GitlabWebhookReceiver.as_view(), name='webhook_receiver_gitlab'),
]


@@ -1,7 +1,7 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.conf.urls import url
from django.conf.urls import include, url
from awx.api.views import (
WorkflowJobTemplateList,
@@ -16,6 +16,7 @@ from awx.api.views import (
WorkflowJobTemplateNotificationTemplatesErrorList,
WorkflowJobTemplateNotificationTemplatesStartedList,
WorkflowJobTemplateNotificationTemplatesSuccessList,
WorkflowJobTemplateNotificationTemplatesApprovalList,
WorkflowJobTemplateAccessList,
WorkflowJobTemplateObjectRolesList,
WorkflowJobTemplateLabelList,
@@ -38,9 +39,12 @@ urls = [
name='workflow_job_template_notification_templates_error_list'),
url(r'^(?P<pk>[0-9]+)/notification_templates_success/$', WorkflowJobTemplateNotificationTemplatesSuccessList.as_view(),
name='workflow_job_template_notification_templates_success_list'),
url(r'^(?P<pk>[0-9]+)/notification_templates_approvals/$', WorkflowJobTemplateNotificationTemplatesApprovalList.as_view(),
name='workflow_job_template_notification_templates_approvals_list'),
url(r'^(?P<pk>[0-9]+)/access_list/$', WorkflowJobTemplateAccessList.as_view(), name='workflow_job_template_access_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', WorkflowJobTemplateObjectRolesList.as_view(), name='workflow_job_template_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/labels/$', WorkflowJobTemplateLabelList.as_view(), name='workflow_job_template_label_list'),
url(r'^(?P<pk>[0-9]+)/', include('awx.api.urls.webhooks'), {'model_kwarg': 'workflow_job_templates'}),
]
__all__ = ['urls']


@@ -119,6 +119,7 @@ from awx.api.views.organization import ( # noqa
OrganizationNotificationTemplatesErrorList,
OrganizationNotificationTemplatesStartedList,
OrganizationNotificationTemplatesSuccessList,
OrganizationNotificationTemplatesApprovalList,
OrganizationInstanceGroupsList,
OrganizationAccessList,
OrganizationObjectRolesList,
@@ -147,6 +148,12 @@ from awx.api.views.root import ( # noqa
ApiV2RootView,
ApiV2PingView,
ApiV2ConfigView,
ApiV2SubscriptionView,
)
from awx.api.views.webhooks import ( # noqa
WebhookKeyView,
GithubWebhookReceiver,
GitlabWebhookReceiver,
)
@@ -245,13 +252,6 @@ class DashboardView(APIView):
'total': hg_projects.count(),
'failed': hg_failed_projects.count()}
user_jobs = get_user_queryset(request.user, models.Job)
user_failed_jobs = user_jobs.filter(failed=True)
data['jobs'] = {'url': reverse('api:job_list', request=request),
'failure_url': reverse('api:job_list', request=request) + "?failed=True",
'total': user_jobs.count(),
'failed': user_failed_jobs.count()}
user_list = get_user_queryset(request.user, models.User)
team_list = get_user_queryset(request.user, models.Team)
credential_list = get_user_queryset(request.user, models.Credential)
@@ -2568,10 +2568,34 @@ class JobTemplateSurveySpec(GenericAPIView):
return Response(dict(error=_(
"The {min_or_max} limit in survey question {idx} expected to be integer."
).format(min_or_max=key, **context)))
if qtype in ['multiplechoice', 'multiselect'] and 'choices' not in survey_item:
return Response(dict(error=_(
"Survey question {idx} of type {survey_item[type]} must specify choices.".format(**context)
)))
# If it's a multiselect or multiple choice, it must have choices listed.
# Choices and defaults must come in as strings separated by \n characters.
if qtype == 'multiselect' or qtype == 'multiplechoice':
if 'choices' in survey_item:
if isinstance(survey_item['choices'], str):
survey_item['choices'] = '\n'.join(choice for choice in survey_item['choices'].splitlines() if choice.strip() != '')
else:
return Response(dict(error=_(
"Survey question {idx} of type {survey_item[type]} must specify choices.".format(**context)
)))
# If there is a default string, split it out, removing extra \n characters.
# Note: There can still be extra newline characters added in the API, these are sanitized out using .strip()
if 'default' in survey_item:
if isinstance(survey_item['default'], str):
survey_item['default'] = '\n'.join(choice for choice in survey_item['default'].splitlines() if choice.strip() != '')
list_of_defaults = survey_item['default'].splitlines()
else:
list_of_defaults = survey_item['default']
if qtype == 'multiplechoice':
# Multiplechoice types should only have 1 default.
if len(list_of_defaults) > 1:
return Response(dict(error=_(
"Multiple Choice (Single Select) can only have one default value.".format(**context)
)))
if any(item not in survey_item['choices'] for item in list_of_defaults):
return Response(dict(error=_(
"Default choice must be answered from the choices listed.".format(**context)
)))
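The newline handling above can be exercised in isolation; a minimal sketch with a hypothetical helper name, mirroring how `choices` and `default` strings are sanitized:

```python
def sanitize_multichoice(survey_item):
    # Choices and defaults arrive as newline-separated strings; drop the
    # blank entries that stray "\n" characters would otherwise leave behind.
    for key in ('choices', 'default'):
        if isinstance(survey_item.get(key), str):
            survey_item[key] = '\n'.join(
                line for line in survey_item[key].splitlines() if line.strip() != '')
    return survey_item
```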
# Process encryption substitution
if ("default" in survey_item and isinstance(survey_item['default'], str) and
@@ -3117,6 +3141,17 @@ class WorkflowJobTemplateCopy(CopyAPIView):
data.update(messages)
return Response(data)
def _build_create_dict(self, obj):
"""Special processing of fields managed by char_prompts
"""
r = super(WorkflowJobTemplateCopy, self)._build_create_dict(obj)
field_names = set(f.name for f in obj._meta.get_fields())
for field_name, ask_field_name in obj.get_ask_mapping().items():
if field_name in r and field_name not in field_names:
r.setdefault('char_prompts', {})
r['char_prompts'][field_name] = r.pop(field_name)
return r
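`_build_create_dict` folds prompted fields that are not real model columns into the `char_prompts` container; the same move can be sketched standalone (hypothetical helper name, simplified inputs):

```python
def pack_char_prompts(create_data, model_field_names, ask_mapping):
    # Prompted values such as "limit" or "scm_branch" are not columns on
    # WorkflowJobTemplate; fold them into the char_prompts dict instead.
    for field_name in ask_mapping:
        if field_name in create_data and field_name not in model_field_names:
            create_data.setdefault('char_prompts', {})
            create_data['char_prompts'][field_name] = create_data.pop(field_name)
    return create_data
```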
@staticmethod
def deep_copy_permission_check_func(user, new_objs):
for obj in new_objs:
@@ -3145,7 +3180,6 @@ class WorkflowJobTemplateLabelList(JobTemplateLabelList):
class WorkflowJobTemplateLaunch(RetrieveAPIView):
model = models.WorkflowJobTemplate
obj_permission_type = 'start'
serializer_class = serializers.WorkflowJobLaunchSerializer
@@ -3162,10 +3196,15 @@ class WorkflowJobTemplateLaunch(RetrieveAPIView):
extra_vars.setdefault(v, u'')
if extra_vars:
data['extra_vars'] = extra_vars
if obj.ask_inventory_on_launch:
data['inventory'] = obj.inventory_id
else:
data.pop('inventory', None)
modified_ask_mapping = models.WorkflowJobTemplate.get_ask_mapping()
modified_ask_mapping.pop('extra_vars')
for field_name, ask_field_name in obj.get_ask_mapping().items():
if not getattr(obj, ask_field_name):
data.pop(field_name, None)
elif field_name == 'inventory':
data[field_name] = getattrd(obj, "%s.%s" % (field_name, 'id'), None)
else:
data[field_name] = getattr(obj, field_name)
return data
def post(self, request, *args, **kwargs):
@@ -3279,6 +3318,11 @@ class WorkflowJobTemplateNotificationTemplatesSuccessList(WorkflowJobTemplateNot
relationship = 'notification_templates_success'
class WorkflowJobTemplateNotificationTemplatesApprovalList(WorkflowJobTemplateNotificationTemplatesAnyList):
relationship = 'notification_templates_approvals'
class WorkflowJobTemplateAccessList(ResourceAccessList):
model = models.User # needs to be User for AccessList's
@@ -3364,6 +3408,11 @@ class WorkflowJobNotificationsList(SubListAPIView):
relationship = 'notifications'
search_fields = ('subject', 'notification_type', 'body',)
def get_sublist_queryset(self, parent):
return self.model.objects.filter(Q(unifiedjob_notifications=parent) |
Q(unifiedjob_notifications__unified_job_node__workflow_job=parent,
unifiedjob_notifications__workflowapproval__isnull=False)).distinct()
class WorkflowJobActivityStreamList(SubListAPIView):
@@ -3719,12 +3768,23 @@ class JobEventList(ListAPIView):
serializer_class = serializers.JobEventSerializer
search_fields = ('stdout',)
def get_serializer_context(self):
context = super().get_serializer_context()
if self.request.query_params.get('no_truncate'):
context.update(no_truncate=True)
return context
class JobEventDetail(RetrieveAPIView):
model = models.JobEvent
serializer_class = serializers.JobEventSerializer
def get_serializer_context(self):
context = super().get_serializer_context()
context.update(no_truncate=True)
return context
class JobEventChildrenList(SubListAPIView):
@@ -3953,12 +4013,23 @@ class AdHocCommandEventList(ListAPIView):
serializer_class = serializers.AdHocCommandEventSerializer
search_fields = ('stdout',)
def get_serializer_context(self):
context = super().get_serializer_context()
if self.request.query_params.get('no_truncate'):
context.update(no_truncate=True)
return context
class AdHocCommandEventDetail(RetrieveAPIView):
model = models.AdHocCommandEvent
serializer_class = serializers.AdHocCommandEventSerializer
def get_serializer_context(self):
context = super().get_serializer_context()
context.update(no_truncate=True)
return context
class BaseAdHocCommandEventsList(SubListAPIView):


@@ -70,12 +70,16 @@ class InventoryUpdateEventsList(SubListAPIView):
class InventoryScriptList(ListCreateAPIView):
deprecated = True
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
class InventoryScriptDetail(RetrieveUpdateDestroyAPIView):
deprecated = True
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
@@ -92,6 +96,8 @@ class InventoryScriptDetail(RetrieveUpdateDestroyAPIView):
class InventoryScriptObjectRolesList(SubListAPIView):
deprecated = True
model = Role
serializer_class = RoleSerializer
parent_model = CustomInventoryScript
@@ -105,6 +111,8 @@ class InventoryScriptObjectRolesList(SubListAPIView):
class InventoryScriptCopy(CopyAPIView):
deprecated = True
model = CustomInventoryScript
copy_return_serializer_class = CustomInventoryScriptSerializer


@@ -195,6 +195,11 @@ class OrganizationNotificationTemplatesSuccessList(OrganizationNotificationTempl
relationship = 'notification_templates_success'
class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_approvals'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup


@@ -17,6 +17,8 @@ from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
from rest_framework import status
import requests
from awx.api.generics import APIView
from awx.main.ha import is_ha_environment
from awx.main.utils import (
@@ -169,6 +171,45 @@ class ApiV2PingView(APIView):
return Response(response)
class ApiV2SubscriptionView(APIView):
permission_classes = (IsAuthenticated,)
name = _('Configuration')
swagger_topic = 'System Configuration'
def check_permissions(self, request):
super(ApiV2SubscriptionView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
self.permission_denied(request) # Raises PermissionDenied exception.
def post(self, request):
from awx.main.utils.common import get_licenser
data = request.data.copy()
if data.get('rh_password') == '$encrypted$':
data['rh_password'] = settings.REDHAT_PASSWORD
try:
user, pw = data.get('rh_username'), data.get('rh_password')
validated = get_licenser().validate_rh(user, pw)
if user:
settings.REDHAT_USERNAME = data['rh_username']
if pw:
settings.REDHAT_PASSWORD = data['rh_password']
except Exception as exc:
msg = _("Invalid License")
if (
isinstance(exc, requests.exceptions.HTTPError) and
getattr(getattr(exc, 'response', None), 'status_code', None) == 401
):
msg = _("The provided credentials are invalid (HTTP 401).")
if isinstance(exc, (ValueError, OSError)) and exc.args:
msg = exc.args[0]
logger.exception(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": msg}, status=status.HTTP_400_BAD_REQUEST)
return Response(validated)
class ApiV2ConfigView(APIView):
permission_classes = (IsAuthenticated,)

awx/api/views/webhooks.py (new file, 247 lines)

@@ -0,0 +1,247 @@
from hashlib import sha1
import hmac
import json
import logging
import urllib.parse
from django.utils.encoding import force_bytes
from django.utils.translation import ugettext_lazy as _
from django.views.decorators.csrf import csrf_exempt
from rest_framework import status
from rest_framework.exceptions import PermissionDenied
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from awx.api import serializers
from awx.api.generics import APIView, GenericAPIView
from awx.api.permissions import WebhookKeyPermission
from awx.main.models import Job, JobTemplate, WorkflowJob, WorkflowJobTemplate
logger = logging.getLogger('awx.api.views.webhooks')
class WebhookKeyView(GenericAPIView):
serializer_class = serializers.EmptySerializer
permission_classes = (WebhookKeyPermission,)
def get_queryset(self):
qs_models = {
'job_templates': JobTemplate,
'workflow_job_templates': WorkflowJobTemplate,
}
self.model = qs_models.get(self.kwargs['model_kwarg'])
return super().get_queryset()
def get(self, request, *args, **kwargs):
obj = self.get_object()
return Response({'webhook_key': obj.webhook_key})
def post(self, request, *args, **kwargs):
obj = self.get_object()
obj.rotate_webhook_key()
obj.save(update_fields=['webhook_key'])
return Response({'webhook_key': obj.webhook_key}, status=status.HTTP_201_CREATED)
class WebhookReceiverBase(APIView):
lookup_url_kwarg = None
lookup_field = 'pk'
permission_classes = (AllowAny,)
authentication_classes = ()
ref_keys = {}
def get_queryset(self):
qs_models = {
'job_templates': JobTemplate,
'workflow_job_templates': WorkflowJobTemplate,
}
model = qs_models.get(self.kwargs['model_kwarg'])
if model is None:
raise PermissionDenied
return model.objects.filter(webhook_service=self.service).exclude(webhook_key='')
def get_object(self):
queryset = self.get_queryset()
lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field
filter_kwargs = {self.lookup_field: self.kwargs[lookup_url_kwarg]}
obj = queryset.filter(**filter_kwargs).first()
if obj is None:
raise PermissionDenied
return obj
def get_event_type(self):
raise NotImplementedError
def get_event_guid(self):
raise NotImplementedError
def get_event_status_api(self):
raise NotImplementedError
def get_event_ref(self):
key = self.ref_keys.get(self.get_event_type(), '')
value = self.request.data
for element in key.split('.'):
try:
if element.isdigit():
value = value[int(element)]
else:
value = (value or {}).get(element)
except Exception:
value = None
if value == '0000000000000000000000000000000000000000': # a deleted ref
value = None
return value
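The dotted-path walk in `get_event_ref` can be reproduced as a small standalone function (hypothetical name) for clarity:

```python
def dig(data, key):
    # Walk a dotted path such as "pull_request.head.sha" through nested
    # dicts and lists; numeric path elements index into lists.
    value = data
    for element in key.split('.'):
        try:
            if element.isdigit():
                value = value[int(element)]
            else:
                value = (value or {}).get(element)
        except Exception:
            value = None
    return value
```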
def get_signature(self):
raise NotImplementedError
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
mac = hmac.new(force_bytes(obj.webhook_key), msg=force_bytes(self.request.body), digestmod=sha1)
logger.debug("header signature: %s", self.get_signature())
logger.debug("calculated signature: %s", force_bytes(mac.hexdigest()))
if not hmac.compare_digest(force_bytes(mac.hexdigest()), self.get_signature()):
raise PermissionDenied
@csrf_exempt
def post(self, request, *args, **kwargs):
# Ensure that the full contents of the request are captured for multiple uses.
request.body
logger.debug(
"headers: {}\n"
"data: {}\n".format(request.headers, request.data)
)
obj = self.get_object()
self.check_signature(obj)
event_type = self.get_event_type()
event_guid = self.get_event_guid()
event_ref = self.get_event_ref()
status_api = self.get_event_status_api()
kwargs = {
'unified_job_template_id': obj.id,
'webhook_service': obj.webhook_service,
'webhook_guid': event_guid,
}
if WorkflowJob.objects.filter(**kwargs).exists() or Job.objects.filter(**kwargs).exists():
# Short circuit if this webhook has already been received and acted upon.
logger.debug("Webhook previously received, returning without action.")
return Response({'message': _("Webhook previously received, aborting.")},
status=status.HTTP_202_ACCEPTED)
kwargs = {
'_eager_fields': {
'launch_type': 'webhook',
'webhook_service': obj.webhook_service,
'webhook_credential': obj.webhook_credential,
'webhook_guid': event_guid,
},
'extra_vars': json.dumps({
'tower_webhook_event_type': event_type,
'tower_webhook_event_guid': event_guid,
'tower_webhook_event_ref': event_ref,
'tower_webhook_status_api': status_api,
'tower_webhook_payload': request.data,
})
}
new_job = obj.create_unified_job(**kwargs)
new_job.signal_start()
return Response({'message': "Job queued."}, status=status.HTTP_202_ACCEPTED)
class GithubWebhookReceiver(WebhookReceiverBase):
service = 'github'
ref_keys = {
'pull_request': 'pull_request.head.sha',
'pull_request_review': 'pull_request.head.sha',
'pull_request_review_comment': 'pull_request.head.sha',
'push': 'after',
'release': 'release.tag_name',
'commit_comment': 'comment.commit_id',
'create': 'ref',
'page_build': 'build.commit',
}
def get_event_type(self):
return self.request.META.get('HTTP_X_GITHUB_EVENT')
def get_event_guid(self):
return self.request.META.get('HTTP_X_GITHUB_DELIVERY')
def get_event_status_api(self):
if self.get_event_type() != 'pull_request':
return
return self.request.data.get('pull_request', {}).get('statuses_url')
def get_signature(self):
header_sig = self.request.META.get('HTTP_X_HUB_SIGNATURE')
if not header_sig:
logger.debug("Expected signature missing from header key HTTP_X_HUB_SIGNATURE")
raise PermissionDenied
hash_alg, signature = header_sig.split('=')
if hash_alg != 'sha1':
logger.debug("Unsupported signature type, expected: sha1, received: {}".format(hash_alg))
raise PermissionDenied
return force_bytes(signature)
class GitlabWebhookReceiver(WebhookReceiverBase):
service = 'gitlab'
ref_keys = {
'Push Hook': 'checkout_sha',
'Tag Push Hook': 'checkout_sha',
'Merge Request Hook': 'object_attributes.last_commit.id',
}
def get_event_type(self):
return self.request.META.get('HTTP_X_GITLAB_EVENT')
def get_event_guid(self):
# GitLab does not provide a unique identifier on events, so construct one.
h = sha1()
h.update(force_bytes(self.request.body))
return h.hexdigest()
def get_event_status_api(self):
if self.get_event_type() != 'Merge Request Hook':
return
project = self.request.data.get('project', {})
repo_url = project.get('web_url')
if not repo_url:
return
parsed = urllib.parse.urlparse(repo_url)
return "{}://{}/api/v4/projects/{}/statuses/{}".format(
parsed.scheme, parsed.netloc, project['id'], self.get_event_ref())
def get_signature(self):
return force_bytes(self.request.META.get('HTTP_X_GITLAB_TOKEN') or '')
def check_signature(self, obj):
if not obj.webhook_key:
raise PermissionDenied
# GitLab only returns the secret token, not an hmac hash. Use
# the hmac `compare_digest` helper function to prevent timing
# analysis by attackers.
if not hmac.compare_digest(force_bytes(obj.webhook_key), self.get_signature()):
raise PermissionDenied
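On the sending side, GitHub computes the `X-Hub-Signature` header as an HMAC-SHA1 of the raw request body keyed with the shared webhook secret. A minimal sketch of the verification these receivers perform (the standalone function name and shape are illustrative, not AWX's API):

```python
import hmac
from hashlib import sha1

def github_signature_valid(webhook_key: bytes, body: bytes, header_sig: str) -> bool:
    # The header arrives as "sha1=<hex digest>"; anything else is rejected,
    # mirroring get_signature() above.
    hash_alg, _, signature = header_sig.partition('=')
    if hash_alg != 'sha1':
        return False
    expected = hmac.new(webhook_key, body, sha1).hexdigest()
    # compare_digest runs in constant time, preventing timing analysis.
    return hmac.compare_digest(expected, signature)
```

GitLab, by contrast, sends the raw secret in `X-Gitlab-Token` rather than an HMAC, which is why the GitLab `check_signature()` compares the token directly with `compare_digest` instead of recomputing a digest.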

View File

@@ -10,8 +10,8 @@ from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.fields import ( # noqa
BooleanField, CharField, ChoiceField, DictField, EmailField, IntegerField,
ListField, NullBooleanField
BooleanField, CharField, ChoiceField, DictField, EmailField,
IntegerField, ListField, NullBooleanField
)
logger = logging.getLogger('awx.conf.fields')
@@ -121,11 +121,14 @@ class URLField(CharField):
def __init__(self, **kwargs):
schemes = kwargs.pop('schemes', None)
regex = kwargs.pop('regex', None)
self.allow_plain_hostname = kwargs.pop('allow_plain_hostname', False)
super(URLField, self).__init__(**kwargs)
validator_kwargs = dict(message=_('Enter a valid URL'))
if schemes is not None:
validator_kwargs['schemes'] = schemes
if regex is not None:
validator_kwargs['regex'] = regex
self.validators.append(URLValidator(**validator_kwargs))
def to_representation(self, value):

View File

@@ -317,10 +317,19 @@ class BaseAccess(object):
validation_info['time_remaining'] = 99999999
validation_info['grace_period_remaining'] = 99999999
report_violation = lambda message: logger.error(message)
if (
validation_info.get('trial', False) is True or
validation_info['instance_count'] == 10 # basic 10 license
):
def report_violation(message):
raise PermissionDenied(message)
if check_expiration and validation_info.get('time_remaining', None) is None:
raise PermissionDenied(_("License is missing."))
if check_expiration and validation_info.get("grace_period_remaining") <= 0:
raise PermissionDenied(_("License has expired."))
elif check_expiration and validation_info.get("grace_period_remaining") <= 0:
report_violation(_("License has expired."))
free_instances = validation_info.get('free_instances', 0)
available_instances = validation_info.get('available_instances', 0)
@@ -328,11 +337,11 @@ class BaseAccess(object):
if add_host_name:
host_exists = Host.objects.filter(name=add_host_name).exists()
if not host_exists and free_instances == 0:
raise PermissionDenied(_("License count of %s instances has been reached.") % available_instances)
report_violation(_("License count of %s instances has been reached.") % available_instances)
elif not host_exists and free_instances < 0:
raise PermissionDenied(_("License count of %s instances has been exceeded.") % available_instances)
report_violation(_("License count of %s instances has been exceeded.") % available_instances)
elif not add_host_name and free_instances < 0:
raise PermissionDenied(_("Host count exceeds available instances."))
report_violation(_("Host count exceeds available instances."))
def check_org_host_limit(self, data, add_host_name=None):
validation_info = get_licenser().validate()
@@ -652,7 +661,7 @@ class UserAccess(BaseAccess):
if obj.is_superuser and super_users.count() == 1:
# cannot delete the last active superuser
return False
if self.user.is_superuser:
if self.can_admin(obj, None, allow_orphans=True):
return True
return False

View File

@@ -5,10 +5,9 @@ import os
import os.path
import tempfile
import shutil
import subprocess
import requests
from django.conf import settings
from django.utils.encoding import smart_str
from django.utils.timezone import now, timedelta
from rest_framework.exceptions import PermissionDenied
@@ -81,17 +80,16 @@ def gather(dest=None, module=None, collection_type='scheduled'):
last_run = state.last_run
logger.debug("Last analytics run was: {}".format(last_run))
max_interval = now() - timedelta(days=7)
max_interval = now() - timedelta(weeks=4)
if last_run < max_interval or not last_run:
last_run = max_interval
if _valid_license() is False:
logger.exception("Invalid License provided, or No License Provided")
return "Error: Invalid License provided, or No License Provided"
if not settings.INSIGHTS_TRACKING_STATE:
logger.error("Insights analytics not enabled")
if collection_type != 'dry-run' and not settings.INSIGHTS_TRACKING_STATE:
logger.error("Automation Analytics not enabled. Use --dry-run to gather locally without sending.")
return
if module is None:
@@ -146,30 +144,39 @@ def gather(dest=None, module=None, collection_type='scheduled'):
def ship(path):
"""
Ship gathered metrics via the Insights agent
Ship gathered metrics to the Insights API
"""
if not path:
logger.error('Automation Analytics TAR not found')
return
if "Error:" in str(path):
return
try:
agent = 'insights-client'
if shutil.which(agent) is None:
logger.error('could not find {} on PATH'.format(agent))
return
logger.debug('shipping analytics file: {}'.format(path))
try:
cmd = [
agent, '--payload', path, '--content-type', settings.INSIGHTS_AGENT_MIME
]
output = smart_str(subprocess.check_output(cmd, timeout=60 * 5))
logger.debug(output)
# reset the `last_run` when data is shipped
run_now = now()
state = TowerAnalyticsState.get_solo()
state.last_run = run_now
state.save()
except subprocess.CalledProcessError:
logger.exception('{} failure:'.format(cmd))
except subprocess.TimeoutExpired:
logger.exception('{} timeout:'.format(cmd))
url = getattr(settings, 'AUTOMATION_ANALYTICS_URL', None)
if not url:
logger.error('AUTOMATION_ANALYTICS_URL is not set')
return
rh_user = getattr(settings, 'REDHAT_USERNAME', None)
rh_password = getattr(settings, 'REDHAT_PASSWORD', None)
if not rh_user:
return logger.error('REDHAT_USERNAME is not set')
if not rh_password:
return logger.error('REDHAT_PASSWORD is not set')
with open(path, 'rb') as f:
files = {'file': (os.path.basename(path), f, settings.INSIGHTS_AGENT_MIME)}
response = requests.post(url,
files=files,
verify="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",
auth=(rh_user, rh_password),
timeout=(31, 31))
if response.status_code != 202:
return logger.exception('Upload failed with status {}, {}'.format(response.status_code,
response.text))
run_now = now()
state = TowerAnalyticsState.get_solo()
state.last_run = run_now
state.save()
finally:
# cleanup tar.gz
os.remove(path)
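The rewritten `ship()` posts the tarball straight to the Automation Analytics API with HTTP basic auth and treats only a 202 Accepted as success. A sketch of how the `requests.post()` arguments are assembled (a pure helper for illustration; names mirror the code above, and nothing here touches the network):

```python
import os

def analytics_post_kwargs(path, url, rh_user, rh_password, fileobj, mime):
    """Sketch of the keyword arguments ship() hands to requests.post().
    fileobj is the open tarball; mime stands in for settings.INSIGHTS_AGENT_MIME."""
    if not (url and rh_user and rh_password):
        # ship() logs and bails out when any of these settings is missing.
        raise ValueError('AUTOMATION_ANALYTICS_URL, REDHAT_USERNAME and '
                         'REDHAT_PASSWORD must all be set')
    return dict(
        # The tarball is uploaded as a multipart form field named 'file'.
        files={'file': (os.path.basename(path), fileobj, mime)},
        auth=(rh_user, rh_password),
        timeout=(31, 31),  # separate (connect, read) timeouts in seconds
    )
```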

View File

@@ -2,12 +2,14 @@
import json
import logging
import os
from distutils.version import LooseVersion as Version
# Django
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework import serializers
from rest_framework.fields import FloatField
# Tower
from awx.conf import fields, register, register_validate
@@ -153,7 +155,7 @@ register(
register(
'AUTOMATION_ANALYTICS_URL',
field_class=fields.URLField,
default='https://cloud.redhat.com',
default='https://example.com',
schemes=('http', 'https'),
allow_plain_hostname=True, # Allow hostname only without TLD.
label=_('Automation Analytics upload URL.'),
@@ -298,6 +300,16 @@ register(
category_slug='jobs',
)
register(
'AWX_ISOLATED_HOST_KEY_CHECKING',
field_class=fields.BooleanField,
label=_('Isolated host key checking'),
help_text=_('When set to True, AWX will enforce strict host key checking for communication with isolated nodes.'),
category=_('Jobs'),
category_slug='jobs',
default=False
)
register(
'AWX_ISOLATED_KEY_GENERATION',
field_class=fields.BooleanField,
@@ -335,6 +347,53 @@ register(
category_slug='jobs',
)
register(
'AWX_RESOURCE_PROFILING_ENABLED',
field_class=fields.BooleanField,
default=False,
label=_('Enable detailed resource profiling on all playbook runs'),
help_text=_('If set, detailed resource profiling data will be collected on all jobs. '
'This data can be gathered with `sosreport`.'), # noqa
category=_('Jobs'),
category_slug='jobs',
)
register(
'AWX_RESOURCE_PROFILING_CPU_POLL_INTERVAL',
field_class=FloatField,
default='0.25',
label=_('Interval (in seconds) between polls for cpu usage.'),
help_text=_('Interval (in seconds) between polls for cpu usage. '
'Setting this lower than the default will affect playbook performance.'),
category=_('Jobs'),
category_slug='jobs',
required=False,
)
register(
'AWX_RESOURCE_PROFILING_MEMORY_POLL_INTERVAL',
field_class=FloatField,
default='0.25',
label=_('Interval (in seconds) between polls for memory usage.'),
help_text=_('Interval (in seconds) between polls for memory usage. '
'Setting this lower than the default will affect playbook performance.'),
category=_('Jobs'),
category_slug='jobs',
required=False,
)
register(
'AWX_RESOURCE_PROFILING_PID_POLL_INTERVAL',
field_class=FloatField,
default='0.25',
label=_('Interval (in seconds) between polls for PID count.'),
help_text=_('Interval (in seconds) between polls for PID count. '
'Setting this lower than the default will affect playbook performance.'),
category=_('Jobs'),
category_slug='jobs',
required=False,
)
register(
'AWX_TASK_ENV',
field_class=fields.KeyValueField,
@@ -350,12 +409,21 @@ register(
'INSIGHTS_TRACKING_STATE',
field_class=fields.BooleanField,
default=False,
label=_('Gather data for Automation Insights'),
help_text=_('Enables Tower to gather data on automation and send it to Red Hat Insights.'),
label=_('Gather data for Automation Analytics'),
help_text=_('Enables Tower to gather data on automation and send it to Red Hat.'),
category=_('System'),
category_slug='system',
)
register(
'PROJECT_UPDATE_VVV',
field_class=fields.BooleanField,
label=_('Run Project Updates With Higher Verbosity'),
help_text=_('Adds the CLI -vvv flag to ansible-playbook runs of project_update.yml used for project updates.'),
category=_('Jobs'),
category_slug='jobs',
)
register(
'AWX_ROLES_ENABLED',
field_class=fields.BooleanField,
@@ -376,6 +444,75 @@ register(
category_slug='jobs',
)
register(
'PRIMARY_GALAXY_URL',
field_class=fields.URLField,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server URL'),
help_text=_(
'For organizations that run their own Galaxy service, this gives the option to specify a '
'host as the primary galaxy server. Requirements will be downloaded from the primary if the '
'specific role or collection is available there. If the content is not available in the primary, '
'or if this field is left blank, it will default to galaxy.ansible.com.'
),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_USERNAME',
field_class=fields.CharField,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server Username'),
help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The username to use for basic authentication against the Galaxy instance; '
'this is mutually exclusive with PRIMARY_GALAXY_TOKEN.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_PASSWORD',
field_class=fields.CharField,
encrypted=True,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server Password'),
help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The password to use for basic authentication against the Galaxy instance; '
'this is mutually exclusive with PRIMARY_GALAXY_TOKEN.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_TOKEN',
field_class=fields.CharField,
encrypted=True,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server Token'),
help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The token to use for connecting with the Galaxy instance; '
'this is mutually exclusive with corresponding username and password settings.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_AUTH_URL',
field_class=fields.CharField,
required=False,
allow_blank=True,
label=_('Primary Galaxy Authentication URL'),
help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The token_endpoint of a Keycloak server.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'STDOUT_MAX_BYTES_DISPLAY',
field_class=fields.IntegerField,
@@ -616,6 +753,16 @@ register(
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_AUDIT',
field_class=fields.BooleanField,
allow_null=True,
default=False,
label=_('Enable external log aggregation auditing'),
help_text=_('When enabled, all external logs emitted by Tower will also be written to /var/log/tower/external.log. This is an experimental setting intended to be used for debugging external log aggregation issues (and may be subject to change in the future).'), # noqa
category=_('Logging'),
category_slug='logging',
)
register(
@@ -646,4 +793,75 @@ def logging_validate(serializer, attrs):
return attrs
def galaxy_validate(serializer, attrs):
"""Ansible Galaxy config options have mutual exclusivity rules, these rules
are enforced here on serializer validation so that users will not be able
to save settings which obviously break all project updates.
"""
prefix = 'PRIMARY_GALAXY_'
from awx.main.constants import GALAXY_SERVER_FIELDS
if not any('{}{}'.format(prefix, subfield.upper()) in attrs for subfield in GALAXY_SERVER_FIELDS):
return attrs
def _new_value(setting_name):
if setting_name in attrs:
return attrs[setting_name]
elif not serializer.instance:
return ''
return getattr(serializer.instance, setting_name, '')
galaxy_data = {}
for subfield in GALAXY_SERVER_FIELDS:
galaxy_data[subfield] = _new_value('{}{}'.format(prefix, subfield.upper()))
errors = {}
if not galaxy_data['url']:
for k, v in galaxy_data.items():
if v:
setting_name = '{}{}'.format(prefix, k.upper())
errors.setdefault(setting_name, [])
errors[setting_name].append(_(
'Cannot provide field if PRIMARY_GALAXY_URL is not set.'
))
for k in GALAXY_SERVER_FIELDS:
if galaxy_data[k]:
setting_name = '{}{}'.format(prefix, k.upper())
if (not serializer.instance) or (not getattr(serializer.instance, setting_name, '')):
# new auth is applied, so check if compatible with version
from awx.main.utils import get_ansible_version
current_version = get_ansible_version()
min_version = '2.9'
if Version(current_version) < Version(min_version):
errors.setdefault(setting_name, [])
errors[setting_name].append(_(
'Galaxy server settings are not available until Ansible {min_version}, '
'you are running {current_version}.'
).format(min_version=min_version, current_version=current_version))
if (galaxy_data['password'] or galaxy_data['username']) and (galaxy_data['token'] or galaxy_data['auth_url']):
for k in ('password', 'username', 'token', 'auth_url'):
setting_name = '{}{}'.format(prefix, k.upper())
if setting_name in attrs:
errors.setdefault(setting_name, [])
errors[setting_name].append(_(
'Setting Galaxy token and authentication URL is mutually exclusive with username and password.'
))
if bool(galaxy_data['username']) != bool(galaxy_data['password']):
msg = _('If authenticating via username and password, both must be provided.')
for k in ('username', 'password'):
setting_name = '{}{}'.format(prefix, k.upper())
errors.setdefault(setting_name, [])
errors[setting_name].append(msg)
if bool(galaxy_data['token']) != bool(galaxy_data['auth_url']):
msg = _('If authenticating via token, both token and authentication URL must be provided.')
for k in ('token', 'auth_url'):
setting_name = '{}{}'.format(prefix, k.upper())
errors.setdefault(setting_name, [])
errors[setting_name].append(msg)
if errors:
raise serializers.ValidationError(errors)
return attrs
register_validate('logging', logging_validate)
register_validate('jobs', galaxy_validate)
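The mutual-exclusivity rules in `galaxy_validate()` boil down to a handful of checks, where `bool(x) != bool(y)` acts as an exclusive-or that catches "one provided without the other". A condensed sketch that returns messages instead of raising `ValidationError` (the function name and message wording are illustrative):

```python
def galaxy_field_errors(url, username, password, token, auth_url):
    """Condensed form of the mutual-exclusivity rules in galaxy_validate()."""
    errors = []
    # Auth fields make no sense without a server URL to apply them to.
    if not url and any([username, password, token, auth_url]):
        errors.append('Cannot provide field if PRIMARY_GALAXY_URL is not set.')
    # Basic auth and token auth are mutually exclusive schemes.
    if (username or password) and (token or auth_url):
        errors.append('Token and authentication URL are mutually exclusive '
                      'with username and password.')
    # XOR: exactly one of a required pair was provided.
    if bool(username) != bool(password):
        errors.append('If authenticating via username and password, '
                      'both must be provided.')
    if bool(token) != bool(auth_url):
        errors.append('If authenticating via token, both token and '
                      'authentication URL must be provided.')
    return errors
```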

View File

@@ -51,3 +51,7 @@ LOGGER_BLACKLIST = (
# loggers that may be called getting logging settings
'awx.conf'
)
# these correspond to both AWX and Ansible settings to keep naming consistent
# for instance, settings.PRIMARY_GALAXY_AUTH_URL vs env var ANSIBLE_GALAXY_SERVER_FOO_AUTH_URL
GALAXY_SERVER_FIELDS = ('url', 'username', 'password', 'token', 'auth_url')

View File

@@ -101,7 +101,7 @@ def aim_backend(**kwargs):
aim_plugin = CredentialPlugin(
'CyberArk AIM Secret Lookup',
'CyberArk AIM Central Credential Provider Lookup',
inputs=aim_inputs,
backend=aim_backend
)

View File

@@ -103,6 +103,8 @@ def kv_backend(**kwargs):
sess = requests.Session()
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of HashiCorp Vault
sess.headers['X-Vault-Token'] = token
if api_version == 'v2':
if kwargs.get('secret_version'):
@@ -158,6 +160,8 @@ def ssh_backend(**kwargs):
sess = requests.Session()
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of HashiCorp Vault
sess.headers['X-Vault-Token'] = token
# https://www.vaultproject.io/api/secret/ssh/index.html#sign-ssh-key
request_url = '/'.join([url, secret_path, 'sign', role]).rstrip('/')
resp = sess.post(request_url, **request_kwargs)
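Both backends now send the token twice: newer Vault versions honor the standard `Authorization: Bearer` scheme, while older installs only read `X-Vault-Token`. A sketch of the resulting header set (a plain dict instead of a `requests.Session`, for illustration):

```python
def vault_headers(token):
    # Newer Vault versions accept the Bearer scheme; the X-Vault-Token
    # header keeps older installs working with the same session.
    return {
        'Authorization': 'Bearer {}'.format(token),
        'X-Vault-Token': token,
    }
```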

View File

@@ -33,7 +33,11 @@ def reap(instance=None, status='failed', excluded_uuids=[]):
'''
Reap all jobs in waiting|running for this instance.
'''
me = instance or Instance.objects.me()
me = instance
if me is None:
(changed, me) = Instance.objects.get_or_register()
if changed:
logger.info("Registered tower node '{}'".format(me.hostname))
now = tz_now()
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
jobs = UnifiedJob.objects.filter(

View File

@@ -11,7 +11,9 @@ from django.conf import settings
import ansible_runner
import awx
from awx.main.utils import get_system_task_capacity
from awx.main.utils import (
get_system_task_capacity
)
from awx.main.queue import CallbackQueueDispatcher
logger = logging.getLogger('awx.isolated.manager')
@@ -29,7 +31,7 @@ def set_pythonpath(venv_libdir, env):
class IsolatedManager(object):
def __init__(self, cancelled_callback=None, check_callback=None):
def __init__(self, cancelled_callback=None, check_callback=None, pod_manager=None):
"""
:param cancelled_callback: a callable - which returns `True` or `False`
- signifying if the job has been prematurely
@@ -40,11 +42,29 @@ class IsolatedManager(object):
self.idle_timeout = max(60, 2 * settings.AWX_ISOLATED_CONNECTION_TIMEOUT)
self.started_at = None
self.captured_command_artifact = False
self.instance = None
self.pod_manager = pod_manager
def build_inventory(self, hosts):
if self.instance and self.instance.is_containerized:
inventory = {'all': {'hosts': {}}}
for host in hosts:
inventory['all']['hosts'][host] = {
"ansible_connection": "kubectl",
"ansible_kubectl_config": self.pod_manager.kube_config
}
else:
inventory = '\n'.join([
'{} ansible_ssh_user={}'.format(host, settings.AWX_ISOLATED_USERNAME)
for host in hosts
])
return inventory
def build_runner_params(self, hosts, verbosity=1):
env = dict(os.environ.items())
env['ANSIBLE_RETRY_FILES_ENABLED'] = 'False'
env['ANSIBLE_HOST_KEY_CHECKING'] = 'False'
env['ANSIBLE_HOST_KEY_CHECKING'] = str(settings.AWX_ISOLATED_HOST_KEY_CHECKING)
env['ANSIBLE_LIBRARY'] = os.path.join(os.path.dirname(awx.__file__), 'plugins', 'isolated')
set_pythonpath(os.path.join(settings.ANSIBLE_VENV_PATH, 'lib'), env)
@@ -69,17 +89,12 @@ class IsolatedManager(object):
else:
playbook_logger.info(runner_obj.stdout.read())
inventory = '\n'.join([
'{} ansible_ssh_user={}'.format(host, settings.AWX_ISOLATED_USERNAME)
for host in hosts
])
return {
'project_dir': os.path.abspath(os.path.join(
os.path.dirname(awx.__file__),
'playbooks'
)),
'inventory': inventory,
'inventory': self.build_inventory(hosts),
'envvars': env,
'finished_callback': finished_callback,
'verbosity': verbosity,
@@ -153,6 +168,12 @@ class IsolatedManager(object):
runner_obj = self.run_management_playbook('run_isolated.yml',
self.private_data_dir,
extravars=extravars)
if runner_obj.status == 'failed':
self.instance.result_traceback = runner_obj.stdout.read()
self.instance.save(update_fields=['result_traceback'])
return 'error', runner_obj.rc
return runner_obj.status, runner_obj.rc
def check(self, interval=None):
@@ -175,6 +196,7 @@ class IsolatedManager(object):
rc = None
last_check = time.time()
dispatcher = CallbackQueueDispatcher()
while status == 'failed':
canceled = self.cancelled_callback() if self.cancelled_callback else False
if not canceled and time.time() - last_check < interval:
@@ -279,7 +301,6 @@ class IsolatedManager(object):
def cleanup(self):
# If the job failed for any reason, make a last-ditch effort at cleanup
extravars = {
'private_data_dir': self.private_data_dir,
'cleanup_dirs': [
@@ -393,6 +414,7 @@ class IsolatedManager(object):
[instance.execution_node],
verbosity=min(5, self.instance.verbosity)
)
status, rc = self.dispatch(playbook, module, module_args)
if status == 'successful':
status, rc = self.check()
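`build_inventory()` above produces two different inventory shapes depending on whether the job is containerized: a dict wired to the `kubectl` connection plugin, or classic INI-style host lines for SSH. A standalone sketch (`kube_config` and `ssh_user` stand in for `pod_manager.kube_config` and `settings.AWX_ISOLATED_USERNAME`):

```python
def build_inventory(hosts, containerized=False,
                    kube_config='/etc/tower/kubeconfig', ssh_user='awx'):
    """Sketch of IsolatedManager.build_inventory(); default paths/users
    here are placeholders, not AWX defaults."""
    if containerized:
        # Containerized jobs talk to pods via the kubectl connection plugin.
        return {'all': {'hosts': {
            host: {'ansible_connection': 'kubectl',
                   'ansible_kubectl_config': kube_config}
            for host in hosts}}}
    # Isolated nodes are reached over SSH with a fixed user.
    return '\n'.join('{} ansible_ssh_user={}'.format(host, ssh_user)
                     for host in hosts)
```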

View File

@@ -0,0 +1,17 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.core.management.base import BaseCommand
from django.db import connection
class Command(BaseCommand):
"""Checks connection to the database, and prints out connection info if not connected"""
def handle(self, *args, **options):
with connection.cursor() as cursor:
cursor.execute("SELECT version()")
version = str(cursor.fetchone()[0])
return "Database Version: {}".format(version)

View File

@@ -11,8 +11,10 @@ class Command(BaseCommand):
help = 'Gather AWX analytics data'
def add_arguments(self, parser):
parser.add_argument('--dry-run', dest='dry-run', action='store_true',
help='Gather analytics without shipping. Works even if analytics are disabled in settings.')
parser.add_argument('--ship', dest='ship', action='store_true',
help='Enable to ship metrics via insights-client')
help='Enable to ship metrics to the Red Hat Cloud')
def init_logging(self):
self.logger = logging.getLogger('awx.main.analytics')
@@ -23,9 +25,14 @@ class Command(BaseCommand):
self.logger.propagate = False
def handle(self, *args, **options):
tgz = gather(collection_type='manual')
self.init_logging()
opt_ship = options.get('ship')
opt_dry_run = options.get('dry-run')
if opt_ship and opt_dry_run:
self.logger.error('--ship and --dry-run cannot be used at the same time.')
return
tgz = gather(collection_type='manual' if not opt_dry_run else 'dry-run')
if tgz:
self.logger.debug(tgz)
if options.get('ship'):
if opt_ship:
ship(tgz)

View File

@@ -919,7 +919,8 @@ class Command(BaseCommand):
new_count = Host.objects.active_count()
if time_remaining <= 0 and not license_info.get('demo', False):
logger.error(LICENSE_EXPIRED_MESSAGE)
raise CommandError("License has expired!")
if license_info.get('trial', False) is True:
raise CommandError("License has expired!")
# special check for tower-type inventory sources
# but only if running the plugin
TOWER_SOURCE_FILES = ['tower.yml', 'tower.yaml']
@@ -936,7 +937,11 @@ class Command(BaseCommand):
logger.error(DEMO_LICENSE_MESSAGE % d)
else:
logger.error(LICENSE_MESSAGE % d)
raise CommandError('License count exceeded!')
if (
license_info.get('trial', False) is True or
license_info['instance_count'] == 10 # basic 10 license
):
raise CommandError('License count exceeded!')
def check_org_host_limit(self):
license_info = get_licenser().validate()

View File

@@ -33,6 +33,7 @@ class Command(BaseCommand):
]):
ssh_key = settings.AWX_ISOLATED_PRIVATE_KEY
env = dict(os.environ.items())
env['ANSIBLE_HOST_KEY_CHECKING'] = str(settings.AWX_ISOLATED_HOST_KEY_CHECKING)
set_pythonpath(os.path.join(settings.ANSIBLE_VENV_PATH, 'lib'), env)
res = ansible_runner.interface.run(
private_data_dir=path,

View File

@@ -221,8 +221,9 @@ class InstanceGroupManager(models.Manager):
elif t.status == 'running':
# Subtract capacity from all groups that contain the instance
if t.execution_node not in instance_ig_mapping:
logger.warning('Detected %s running inside lost instance, '
'may still be waiting for reaper.', t.log_format)
if not t.is_containerized:
logger.warning('Detected %s running inside lost instance, '
'may still be waiting for reaper.', t.log_format)
if t.instance_group:
impacted_groups = [t.instance_group.name]
else:

View File

@@ -0,0 +1,28 @@
# Generated by Django 2.2.4 on 2019-09-10 21:30
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0087_v360_update_credential_injector_help_text'),
]
operations = [
migrations.AlterField(
model_name='unifiedjob',
name='finished',
field=models.DateTimeField(db_index=True, default=None, editable=False, help_text='The date and time the job finished execution.', null=True),
),
migrations.AlterField(
model_name='unifiedjob',
name='launch_type',
field=models.CharField(choices=[('manual', 'Manual'), ('relaunch', 'Relaunch'), ('callback', 'Callback'), ('scheduled', 'Scheduled'), ('dependency', 'Dependency'), ('workflow', 'Workflow'), ('sync', 'Sync'), ('scm', 'SCM Update')], db_index=True, default='manual', editable=False, max_length=20),
),
migrations.AlterField(
model_name='unifiedjob',
name='created',
field=models.DateTimeField(db_index=True, default=None, editable=False),
),
]

View File

@@ -0,0 +1,23 @@
# Generated by Django 2.2.4 on 2019-09-12 13:05
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0088_v360_dashboard_optimizations'),
]
operations = [
migrations.AlterField(
model_name='jobevent',
name='event',
field=models.CharField(choices=[('runner_on_failed', 'Host Failed'), ('runner_on_start', 'Host Started'), ('runner_on_ok', 'Host OK'), ('runner_on_error', 'Host Failure'), ('runner_on_skipped', 'Host Skipped'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_no_hosts', 'No Hosts Remaining'), ('runner_on_async_poll', 'Host Polling'), ('runner_on_async_ok', 'Host Async OK'), ('runner_on_async_failed', 'Host Async Failure'), ('runner_item_on_ok', 'Item OK'), ('runner_item_on_failed', 'Item Failed'), ('runner_item_on_skipped', 'Item Skipped'), ('runner_retry', 'Host Retry'), ('runner_on_file_diff', 'File Difference'), ('playbook_on_start', 'Playbook Started'), ('playbook_on_notify', 'Running Handlers'), ('playbook_on_include', 'Including File'), ('playbook_on_no_hosts_matched', 'No Hosts Matched'), ('playbook_on_no_hosts_remaining', 'No Hosts Remaining'), ('playbook_on_task_start', 'Task Started'), ('playbook_on_vars_prompt', 'Variables Prompted'), ('playbook_on_setup', 'Gathering Facts'), ('playbook_on_import_for_host', 'internal: on Import for Host'), ('playbook_on_not_import_for_host', 'internal: on Not Import for Host'), ('playbook_on_play_start', 'Play Started'), ('playbook_on_stats', 'Playbook Complete'), ('debug', 'Debug'), ('verbose', 'Verbose'), ('deprecated', 'Deprecated'), ('warning', 'Warning'), ('system_warning', 'System Warning'), ('error', 'Error')], max_length=100),
),
migrations.AlterField(
model_name='projectupdateevent',
name='event',
field=models.CharField(choices=[('runner_on_failed', 'Host Failed'), ('runner_on_start', 'Host Started'), ('runner_on_ok', 'Host OK'), ('runner_on_error', 'Host Failure'), ('runner_on_skipped', 'Host Skipped'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_no_hosts', 'No Hosts Remaining'), ('runner_on_async_poll', 'Host Polling'), ('runner_on_async_ok', 'Host Async OK'), ('runner_on_async_failed', 'Host Async Failure'), ('runner_item_on_ok', 'Item OK'), ('runner_item_on_failed', 'Item Failed'), ('runner_item_on_skipped', 'Item Skipped'), ('runner_retry', 'Host Retry'), ('runner_on_file_diff', 'File Difference'), ('playbook_on_start', 'Playbook Started'), ('playbook_on_notify', 'Running Handlers'), ('playbook_on_include', 'Including File'), ('playbook_on_no_hosts_matched', 'No Hosts Matched'), ('playbook_on_no_hosts_remaining', 'No Hosts Remaining'), ('playbook_on_task_start', 'Task Started'), ('playbook_on_vars_prompt', 'Variables Prompted'), ('playbook_on_setup', 'Gathering Facts'), ('playbook_on_import_for_host', 'internal: on Import for Host'), ('playbook_on_not_import_for_host', 'internal: on Not Import for Host'), ('playbook_on_play_start', 'Play Started'), ('playbook_on_stats', 'Playbook Complete'), ('debug', 'Debug'), ('verbose', 'Verbose'), ('deprecated', 'Deprecated'), ('warning', 'Warning'), ('system_warning', 'System Warning'), ('error', 'Error')], max_length=100),
),
]

View File

@@ -0,0 +1,59 @@
# Generated by Django 2.2.2 on 2019-07-23 17:56
import awx.main.fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0089_v360_new_job_event_types'),
]
operations = [
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_limit_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_scm_branch_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='char_prompts',
field=awx.main.fields.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='joblaunchconfig',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied as a prompt, assuming job template prompts for inventory', null=True, on_delete=models.deletion.SET_NULL, related_name='joblaunchconfigs', to='main.Inventory'),
),
migrations.AlterField(
model_name='schedule',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied as a prompt, assuming job template prompts for inventory', null=True, on_delete=models.deletion.SET_NULL, related_name='schedules', to='main.Inventory'),
),
migrations.AlterField(
model_name='workflowjob',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied as a prompt, assuming job template prompts for inventory', null=True, on_delete=models.deletion.SET_NULL, related_name='workflowjobs', to='main.Inventory'),
),
migrations.AlterField(
model_name='workflowjobnode',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied as a prompt, assuming job template prompts for inventory', null=True, on_delete=models.deletion.SET_NULL, related_name='workflowjobnodes', to='main.Inventory'),
),
migrations.AlterField(
model_name='workflowjobtemplate',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied as a prompt, assuming job template prompts for inventory', null=True, on_delete=models.deletion.SET_NULL, related_name='workflowjobtemplates', to='main.Inventory'),
),
migrations.AlterField(
model_name='workflowjobtemplatenode',
name='inventory',
field=models.ForeignKey(blank=True, default=None, help_text='Inventory applied as a prompt, assuming job template prompts for inventory', null=True, on_delete=models.deletion.SET_NULL, related_name='workflowjobtemplatenodes', to='main.Inventory'),
),
]

View File

@@ -0,0 +1,28 @@
# Generated by Django 2.2.4 on 2019-09-11 13:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0090_v360_WFJT_prompts'),
]
operations = [
migrations.AddField(
model_name='organization',
name='notification_templates_approvals',
field=models.ManyToManyField(blank=True, related_name='organization_notification_templates_for_approvals', to='main.NotificationTemplate'),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='notification_templates_approvals',
field=models.ManyToManyField(blank=True, related_name='workflowjobtemplate_notification_templates_for_approvals', to='main.NotificationTemplate'),
),
migrations.AlterField(
model_name='workflowjobnode',
name='do_not_run',
field=models.BooleanField(default=False, help_text='Indicates that a job will not be created when True. Workflow runtime semantics will mark this True if the node is in a path that will decidedly not be run. A value of False means the node may not run.'),
),
]

View File

@@ -0,0 +1,49 @@
# Generated by Django 2.2.4 on 2019-09-12 14:49
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0091_v360_approval_node_notifications'),
]
operations = [
migrations.AddField(
model_name='jobtemplate',
name='webhook_credential',
field=models.ForeignKey(blank=True, help_text='Personal Access Token for posting back the status to the service API', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='jobtemplates', to='main.Credential'),
),
migrations.AddField(
model_name='jobtemplate',
name='webhook_key',
field=models.CharField(blank=True, help_text='Shared secret that the webhook service will use to sign requests', max_length=64),
),
migrations.AddField(
model_name='jobtemplate',
name='webhook_service',
field=models.CharField(blank=True, choices=[('github', 'GitHub'), ('gitlab', 'GitLab')], help_text='Service that webhook requests will be accepted from', max_length=16),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='webhook_credential',
field=models.ForeignKey(blank=True, help_text='Personal Access Token for posting back the status to the service API', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='workflowjobtemplates', to='main.Credential'),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='webhook_key',
field=models.CharField(blank=True, help_text='Shared secret that the webhook service will use to sign requests', max_length=64),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='webhook_service',
field=models.CharField(blank=True, choices=[('github', 'GitHub'), ('gitlab', 'GitLab')], help_text='Service that webhook requests will be accepted from', max_length=16),
),
migrations.AlterField(
model_name='unifiedjob',
name='launch_type',
field=models.CharField(choices=[('manual', 'Manual'), ('relaunch', 'Relaunch'), ('callback', 'Callback'), ('scheduled', 'Scheduled'), ('dependency', 'Dependency'), ('workflow', 'Workflow'), ('webhook', 'Webhook'), ('sync', 'Sync'), ('scm', 'SCM Update')], db_index=True, default='manual', editable=False, max_length=20),
),
]


@@ -0,0 +1,27 @@
# Generated by Django 2.2.4 on 2019-09-12 14:50
from django.db import migrations, models
from awx.main.models import CredentialType
from awx.main.utils.common import set_current_apps
def setup_tower_managed_defaults(apps, schema_editor):
set_current_apps(apps)
CredentialType.setup_tower_managed_defaults()
class Migration(migrations.Migration):
dependencies = [
('main', '0092_v360_webhook_mixin'),
]
operations = [
migrations.AlterField(
model_name='credentialtype',
name='kind',
field=models.CharField(choices=[('ssh', 'Machine'), ('vault', 'Vault'), ('net', 'Network'), ('scm', 'Source Control'), ('cloud', 'Cloud'), ('token', 'Personal Access Token'), ('insights', 'Insights'), ('external', 'External')], max_length=32),
),
migrations.RunPython(setup_tower_managed_defaults),
]


@@ -0,0 +1,44 @@
# Generated by Django 2.2.4 on 2019-09-12 14:52
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0093_v360_personal_access_tokens'),
]
operations = [
migrations.AddField(
model_name='job',
name='webhook_credential',
field=models.ForeignKey(blank=True, help_text='Personal Access Token for posting back the status to the service API', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='jobs', to='main.Credential'),
),
migrations.AddField(
model_name='job',
name='webhook_guid',
field=models.CharField(blank=True, help_text='Unique identifier of the event that triggered this webhook', max_length=128),
),
migrations.AddField(
model_name='job',
name='webhook_service',
field=models.CharField(blank=True, choices=[('github', 'GitHub'), ('gitlab', 'GitLab')], help_text='Service that webhook requests will be accepted from', max_length=16),
),
migrations.AddField(
model_name='workflowjob',
name='webhook_credential',
field=models.ForeignKey(blank=True, help_text='Personal Access Token for posting back the status to the service API', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='workflowjobs', to='main.Credential'),
),
migrations.AddField(
model_name='workflowjob',
name='webhook_guid',
field=models.CharField(blank=True, help_text='Unique identifier of the event that triggered this webhook', max_length=128),
),
migrations.AddField(
model_name='workflowjob',
name='webhook_service',
field=models.CharField(blank=True, choices=[('github', 'GitHub'), ('gitlab', 'GitLab')], help_text='Service that webhook requests will be accepted from', max_length=16),
),
]


@@ -0,0 +1,18 @@
# Generated by Django 2.2.4 on 2019-10-04 00:50
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0094_v360_webhook_mixin2'),
]
operations = [
migrations.AlterField(
model_name='instance',
name='version',
field=models.CharField(blank=True, max_length=120),
),
]


@@ -0,0 +1,38 @@
# Generated by Django 2.2.4 on 2019-09-16 23:50
from django.db import migrations, models
import django.db.models.deletion
from awx.main.models import CredentialType
from awx.main.utils.common import set_current_apps
def create_new_credential_types(apps, schema_editor):
set_current_apps(apps)
CredentialType.setup_tower_managed_defaults()
class Migration(migrations.Migration):
dependencies = [
('main', '0095_v360_increase_instance_version_length'),
]
operations = [
migrations.AddField(
model_name='instancegroup',
name='credential',
field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='instancegroups', to='main.Credential'),
),
migrations.AddField(
model_name='instancegroup',
name='pod_spec_override',
field=models.TextField(blank=True, default=''),
),
migrations.AlterField(
model_name='credentialtype',
name='kind',
field=models.CharField(choices=[('ssh', 'Machine'), ('vault', 'Vault'), ('net', 'Network'), ('scm', 'Source Control'), ('cloud', 'Cloud'), ('token', 'Personal Access Token'), ('insights', 'Insights'), ('external', 'External'), ('kubernetes', 'Kubernetes')], max_length=32),
),
migrations.RunPython(create_new_credential_types)
]


@@ -0,0 +1,21 @@
# Generated by Django 2.2.4 on 2019-10-11 15:40
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('main', '0096_v360_container_groups'),
]
operations = [
migrations.AddField(
model_name='workflowapproval',
name='approved_or_denied_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{'class': 'workflowapproval', 'model_name': 'workflowapproval', 'app_label': 'main'}(class)s_approved+", to=settings.AUTH_USER_MODEL),
),
]


@@ -0,0 +1,31 @@
# Generated by Django 2.2.4 on 2019-10-16 19:51
from django.db import migrations
from awx.main.models import CredentialType
def update_cyberark_aim_name(apps, schema_editor):
CredentialType.setup_tower_managed_defaults()
aim_types = apps.get_model('main', 'CredentialType').objects.filter(
namespace='aim'
).order_by('id')
if aim_types.count() == 2:
original, renamed = aim_types.all()
apps.get_model('main', 'Credential').objects.filter(
credential_type_id=original.id
).update(
credential_type_id=renamed.id
)
original.delete()
class Migration(migrations.Migration):
dependencies = [
('main', '0097_v360_workflowapproval_approved_or_denied_by'),
]
operations = [
migrations.RunPython(update_cyberark_aim_name)
]


@@ -150,6 +150,14 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
def supports_isolation(cls):
return True
@property
def is_containerized(self):
return bool(self.instance_group and self.instance_group.is_containerized)
@property
def can_run_containerized(self):
return True
def get_absolute_url(self, request=None):
return reverse('api:ad_hoc_command_detail', kwargs={'pk': self.pk}, request=request)


@@ -64,7 +64,7 @@ def build_safe_env(env):
for k, v in safe_env.items():
if k == 'AWS_ACCESS_KEY_ID':
continue
elif k.startswith('ANSIBLE_') and not k.startswith('ANSIBLE_NET'):
elif k.startswith('ANSIBLE_') and not k.startswith('ANSIBLE_NET') and not k.startswith('ANSIBLE_GALAXY_SERVER'):
continue
elif hidden_re.search(k):
safe_env[k] = HIDDEN_PASSWORD
@@ -135,6 +135,10 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
def cloud(self):
return self.credential_type.kind == 'cloud'
@property
def kubernetes(self):
return self.credential_type.kind == 'kubernetes'
def get_absolute_url(self, request=None):
return reverse('api:credential_detail', kwargs={'pk': self.pk}, request=request)
@@ -151,7 +155,7 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
@property
def has_encrypted_ssh_key_data(self):
try:
ssh_key_data = decrypt_field(self, 'ssh_key_data')
ssh_key_data = self.get_input('ssh_key_data')
except AttributeError:
return False
@@ -322,8 +326,10 @@ class CredentialType(CommonModelNameNotUnique):
('net', _('Network')),
('scm', _('Source Control')),
('cloud', _('Cloud')),
('token', _('Personal Access Token')),
('insights', _('Insights')),
('external', _('External')),
('kubernetes', _('Kubernetes')),
)
kind = models.CharField(
@@ -633,9 +639,6 @@ ManagedCredentialType(
'secret': True,
'ask_at_runtime': True
}],
'dependencies': {
'ssh_key_unlock': ['ssh_key_data'],
}
}
)
@@ -667,9 +670,6 @@ ManagedCredentialType(
'type': 'string',
'secret': True
}],
'dependencies': {
'ssh_key_unlock': ['ssh_key_data'],
}
}
)
@@ -738,7 +738,6 @@ ManagedCredentialType(
'secret': True,
}],
'dependencies': {
'ssh_key_unlock': ['ssh_key_data'],
'authorize_password': ['authorize'],
},
'required': ['username'],
@@ -975,6 +974,40 @@ ManagedCredentialType(
}
)
ManagedCredentialType(
namespace='github_token',
kind='token',
name=ugettext_noop('GitHub Personal Access Token'),
managed_by_tower=True,
inputs={
'fields': [{
'id': 'token',
'label': ugettext_noop('Token'),
'type': 'string',
'secret': True,
'help_text': ugettext_noop('This token needs to come from your profile settings in GitHub')
}],
'required': ['token'],
},
)
ManagedCredentialType(
namespace='gitlab_token',
kind='token',
name=ugettext_noop('GitLab Personal Access Token'),
managed_by_tower=True,
inputs={
'fields': [{
'id': 'token',
'label': ugettext_noop('Token'),
'type': 'string',
'secret': True,
'help_text': ugettext_noop('This token needs to come from your profile settings in GitLab')
}],
'required': ['token'],
},
)
ManagedCredentialType(
namespace='insights',
kind='insights',
@@ -1090,6 +1123,38 @@ ManagedCredentialType(
)
ManagedCredentialType(
namespace='kubernetes_bearer_token',
kind='kubernetes',
name=ugettext_noop('OpenShift or Kubernetes API Bearer Token'),
inputs={
'fields': [{
'id': 'host',
'label': ugettext_noop('OpenShift or Kubernetes API Endpoint'),
'type': 'string',
'help_text': ugettext_noop('The OpenShift or Kubernetes API Endpoint to authenticate with.')
},{
'id': 'bearer_token',
'label': ugettext_noop('API authentication bearer token.'),
'type': 'string',
'secret': True,
},{
'id': 'verify_ssl',
'label': ugettext_noop('Verify SSL'),
'type': 'boolean',
'default': True,
},{
'id': 'ssl_ca_cert',
'label': ugettext_noop('Certificate Authority data'),
'type': 'string',
'secret': True,
'multiline': True,
}],
'required': ['host', 'bearer_token'],
}
)
class CredentialInputSource(PrimordialModel):
class Meta:


@@ -83,6 +83,7 @@ class BasePlaybookEvent(CreatedModifiedModel):
# - runner_on*
# - playbook_on_task_start (once for each task within a play)
# - runner_on_failed
# - runner_on_start
# - runner_on_ok
# - runner_on_error (not used for v2)
# - runner_on_skipped
@@ -102,6 +103,7 @@ class BasePlaybookEvent(CreatedModifiedModel):
EVENT_TYPES = [
# (level, event, verbose name, failed)
(3, 'runner_on_failed', _('Host Failed'), True),
(3, 'runner_on_start', _('Host Started'), False),
(3, 'runner_on_ok', _('Host OK'), False),
(3, 'runner_on_error', _('Host Failure'), True),
(3, 'runner_on_skipped', _('Host Skipped'), False),
@@ -322,7 +324,10 @@ class BasePlaybookEvent(CreatedModifiedModel):
kwargs.pop('created', None)
sanitize_event_keys(kwargs, cls.VALID_KEYS)
workflow_job_id = kwargs.pop('workflow_job_id', None)
job_event = cls.objects.create(**kwargs)
if workflow_job_id:
setattr(job_event, 'workflow_job_id', workflow_job_id)
analytics_logger.info('Event data saved.', extra=dict(python_objects=dict(job_event=job_event)))
return job_event
@@ -394,7 +399,7 @@ class JobEvent(BasePlaybookEvent):
An event/message logged from the callback when running a job.
'''
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id']
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id', 'workflow_job_id']
class Meta:
app_label = 'main'
@@ -528,7 +533,7 @@ class JobEvent(BasePlaybookEvent):
class ProjectUpdateEvent(BasePlaybookEvent):
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['project_update_id']
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['project_update_id', 'workflow_job_id']
class Meta:
app_label = 'main'
@@ -614,6 +619,7 @@ class BaseCommandEvent(CreatedModifiedModel):
kwargs.pop('created', None)
sanitize_event_keys(kwargs, cls.VALID_KEYS)
kwargs.pop('workflow_job_id', None)
event = cls.objects.create(**kwargs)
if isinstance(event, AdHocCommandEvent):
analytics_logger.info(
@@ -637,7 +643,7 @@ class BaseCommandEvent(CreatedModifiedModel):
class AdHocCommandEvent(BaseCommandEvent):
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['ad_hoc_command_id', 'event']
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['ad_hoc_command_id', 'event', 'workflow_job_id']
class Meta:
app_label = 'main'
@@ -745,7 +751,7 @@ class AdHocCommandEvent(BaseCommandEvent):
class InventoryUpdateEvent(BaseCommandEvent):
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['inventory_update_id']
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['inventory_update_id', 'workflow_job_id']
class Meta:
app_label = 'main'


@@ -18,7 +18,7 @@ from awx import __version__ as awx_application_version
from awx.api.versioning import reverse
from awx.main.managers import InstanceManager, InstanceGroupManager
from awx.main.fields import JSONField
from awx.main.models.base import BaseModel, HasEditsMixin
from awx.main.models.base import BaseModel, HasEditsMixin, prevent_search
from awx.main.models.unified_jobs import UnifiedJob
from awx.main.utils import get_cpu_capacity, get_mem_capacity, get_system_task_capacity
from awx.main.models.mixins import RelatedJobsMixin
@@ -59,7 +59,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
null=True,
editable=False,
)
version = models.CharField(max_length=24, blank=True)
version = models.CharField(max_length=120, blank=True)
capacity = models.PositiveIntegerField(
default=100,
editable=False,
@@ -176,6 +176,18 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
null=True,
on_delete=models.CASCADE
)
credential = models.ForeignKey(
'Credential',
related_name='%(class)ss',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
)
pod_spec_override = prevent_search(models.TextField(
blank=True,
default='',
))
policy_instance_percentage = models.IntegerField(
default=0,
help_text=_("Percentage of Instances to automatically assign to this group")
@@ -218,6 +230,10 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
def is_isolated(self):
return bool(self.controller)
@property
def is_containerized(self):
return bool(self.credential and self.credential.kubernetes)
'''
RelatedJobsMixin
'''
@@ -271,7 +287,8 @@ def schedule_policy_task():
@receiver(post_save, sender=InstanceGroup)
def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs):
if created or instance.has_policy_changes():
schedule_policy_task()
if not instance.is_containerized:
schedule_policy_task()
@receiver(post_save, sender=Instance)
@@ -282,7 +299,8 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
@receiver(post_delete, sender=InstanceGroup)
def on_instance_group_deleted(sender, instance, using, **kwargs):
schedule_policy_task()
if not instance.is_containerized:
schedule_policy_task()
@receiver(post_delete, sender=Instance)


@@ -1501,7 +1501,7 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in InventorySourceOptions._meta.fields) | set(
['name', 'description', 'schedule', 'credentials', 'inventory']
['name', 'description', 'credentials', 'inventory']
)
def save(self, *args, **kwargs):


@@ -39,7 +39,7 @@ from awx.main.models.notifications import (
NotificationTemplate,
JobNotificationMixin,
)
from awx.main.utils import parse_yaml_or_json, getattr_dne
from awx.main.utils import parse_yaml_or_json, getattr_dne, NullablePromptPseudoField
from awx.main.fields import ImplicitRoleField, JSONField, AskForField
from awx.main.models.mixins import (
ResourceMixin,
@@ -48,6 +48,8 @@ from awx.main.models.mixins import (
TaskManagerJobMixin,
CustomVirtualEnvMixin,
RelatedJobsMixin,
WebhookMixin,
WebhookTemplateMixin,
)
@@ -187,7 +189,7 @@ class JobOptions(BaseModel):
return needed
class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, ResourceMixin, CustomVirtualEnvMixin, RelatedJobsMixin):
class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, ResourceMixin, CustomVirtualEnvMixin, RelatedJobsMixin, WebhookTemplateMixin):
'''
A job template is a reusable job definition for applying a project (with
playbook) to an inventory source with a given credential.
@@ -271,7 +273,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in JobOptions._meta.fields) | set(
['name', 'description', 'schedule', 'survey_passwords', 'labels', 'credentials',
['name', 'description', 'survey_passwords', 'labels', 'credentials',
'job_slice_number', 'job_slice_count']
)
@@ -484,7 +486,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
return UnifiedJob.objects.filter(unified_job_template=self)
class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskManagerJobMixin, CustomVirtualEnvMixin):
class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskManagerJobMixin, CustomVirtualEnvMixin, WebhookMixin):
'''
A job applies a project (with playbook) to an inventory source with a given
credential. It represents a single invocation of ansible-playbook with the
@@ -627,15 +629,17 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
@property
def task_impact(self):
# NOTE: We sorta have to assume the host count matches and that forks default to 5
from awx.main.models.inventory import Host
if self.launch_type == 'callback':
count_hosts = 2
else:
count_hosts = Host.objects.filter(inventory__jobs__pk=self.pk).count()
if self.job_slice_count > 1:
# Integer division intentional
count_hosts = (count_hosts + self.job_slice_count - self.job_slice_number) // self.job_slice_count
# If for some reason we can't count the hosts then lets assume the impact as forks
if self.inventory is not None:
count_hosts = self.inventory.hosts.count()
if self.job_slice_count > 1:
# Integer division intentional
count_hosts = (count_hosts + self.job_slice_count - self.job_slice_number) // self.job_slice_count
else:
count_hosts = 5 if self.forks == 0 else self.forks
return min(count_hosts, 5 if self.forks == 0 else self.forks) + 1
@property
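The reworked `task_impact` above derives a job's weight from the inventory's host count when an inventory is set (falling back to the forks value otherwise), splits it across job slices, and caps it by forks. A standalone sketch of that arithmetic — an illustrative simplification, not the actual `Job` method — assuming Ansible's default of 5 forks when `forks == 0`:

```python
def task_impact(host_count, forks=0, slice_count=1, slice_number=1):
    """Sketch of the reworked Job.task_impact arithmetic (hypothetical helper)."""
    if slice_count > 1:
        # Integer division intentional: each slice handles its share of hosts.
        host_count = (host_count + slice_count - slice_number) // slice_count
    # Cap by forks (default 5 when forks == 0), plus one for job overhead.
    return min(host_count, 5 if forks == 0 else forks) + 1
```

With 100 hosts and default forks the impact is capped at 6 rather than scaling with the inventory size, which was the point of the change.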
@@ -666,6 +670,14 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
def processed_hosts(self):
return self._get_hosts(job_host_summaries__processed__gt=0)
@property
def ignored_hosts(self):
return self._get_hosts(job_host_summaries__ignored__gt=0)
@property
def rescued_hosts(self):
return self._get_hosts(job_host_summaries__rescued__gt=0)
def notification_data(self, block=5):
data = super(Job, self).notification_data()
all_hosts = {}
@@ -684,7 +696,9 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
failures=h.failures,
ok=h.ok,
processed=h.processed,
skipped=h.skipped) # TODO: update with rescued, ignored (see https://github.com/ansible/awx/issues/4394)
skipped=h.skipped,
rescued=h.rescued,
ignored=h.ignored)
data.update(dict(inventory=self.inventory.name if self.inventory else None,
project=self.project.name if self.project else None,
playbook=self.playbook,
@@ -706,6 +720,14 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
return "$hidden due to Ansible no_log flag$"
return artifacts
@property
def can_run_containerized(self):
return any([ig for ig in self.preferred_instance_groups if ig.is_containerized])
@property
def is_containerized(self):
return bool(self.instance_group and self.instance_group.is_containerized)
@property
def preferred_instance_groups(self):
if self.project is not None and self.project.organization is not None:
@@ -829,25 +851,6 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
host.save()
# Add on aliases for the non-related-model fields
class NullablePromptPsuedoField(object):
"""
Interface for psuedo-property stored in `char_prompts` dict
Used in LaunchTimeConfig and submodels
"""
def __init__(self, field_name):
self.field_name = field_name
def __get__(self, instance, type=None):
return instance.char_prompts.get(self.field_name, None)
def __set__(self, instance, value):
if value in (None, {}):
instance.char_prompts.pop(self.field_name, None)
else:
instance.char_prompts[self.field_name] = value
class LaunchTimeConfigBase(BaseModel):
'''
Needed as separate class from LaunchTimeConfig because some models
@@ -868,6 +871,7 @@ class LaunchTimeConfigBase(BaseModel):
null=True,
default=None,
on_delete=models.SET_NULL,
help_text=_('Inventory applied as a prompt, assuming job template prompts for inventory')
)
# All standard fields are stored in this dictionary field
# This is a solution to the nullable CharField problem, specific to prompting
@@ -904,21 +908,14 @@ class LaunchTimeConfigBase(BaseModel):
data[prompt_name] = prompt_val
return data
def display_extra_vars(self):
'''
Hides fields marked as passwords in survey.
'''
if self.survey_passwords:
extra_vars = parse_yaml_or_json(self.extra_vars).copy()
for key, value in self.survey_passwords.items():
if key in extra_vars:
extra_vars[key] = value
return extra_vars
else:
return self.extra_vars
def display_extra_data(self):
return self.display_extra_vars()
for field_name in JobTemplate.get_ask_mapping().keys():
if field_name == 'extra_vars':
continue
try:
LaunchTimeConfigBase._meta.get_field(field_name)
except FieldDoesNotExist:
setattr(LaunchTimeConfigBase, field_name, NullablePromptPseudoField(field_name))
class LaunchTimeConfig(LaunchTimeConfigBase):
@@ -953,14 +950,21 @@ class LaunchTimeConfig(LaunchTimeConfigBase):
def extra_vars(self, extra_vars):
self.extra_data = extra_vars
def display_extra_vars(self):
'''
Hides fields marked as passwords in survey.
'''
if hasattr(self, 'survey_passwords') and self.survey_passwords:
extra_vars = parse_yaml_or_json(self.extra_vars).copy()
for key, value in self.survey_passwords.items():
if key in extra_vars:
extra_vars[key] = value
return extra_vars
else:
return self.extra_vars
for field_name in JobTemplate.get_ask_mapping().keys():
if field_name == 'extra_vars':
continue
try:
LaunchTimeConfig._meta.get_field(field_name)
except FieldDoesNotExist:
setattr(LaunchTimeConfig, field_name, NullablePromptPsuedoField(field_name))
def display_extra_data(self):
return self.display_extra_vars()
class JobLaunchConfig(LaunchTimeConfig):


@@ -1,31 +1,37 @@
# Python
import os
import json
from copy import copy, deepcopy
import json
import logging
import os
import requests
# Django
from django.apps import apps
from django.conf import settings
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.auth.models import User # noqa
from django.utils.translation import ugettext_lazy as _
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import ValidationError
from django.db import models
from django.db.models.query import QuerySet
from django.utils.crypto import get_random_string
from django.utils.translation import ugettext_lazy as _
# AWX
from awx.main.models.base import prevent_search
from awx.main.models.rbac import (
Role, RoleAncestorEntry, get_roles_on_resource
)
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser
from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted
from awx.main.utils.polymorphic import build_polymorphic_ctypes_map
from awx.main.fields import JSONField, AskForField
from awx.main.constants import ACTIVE_STATES
logger = logging.getLogger('awx.main.models.mixins')
__all__ = ['ResourceMixin', 'SurveyJobTemplateMixin', 'SurveyJobMixin',
'TaskManagerUnifiedJobMixin', 'TaskManagerJobMixin', 'TaskManagerProjectUpdateMixin',
'TaskManagerInventoryUpdateMixin', 'CustomVirtualEnvMixin']
@@ -247,7 +253,7 @@ class SurveyJobTemplateMixin(models.Model):
else:
choice_list = copy(survey_element['choices'])
if isinstance(choice_list, str):
choice_list = choice_list.split('\n')
choice_list = [choice for choice in choice_list.splitlines() if choice.strip() != '']
for val in data[survey_element['variable']]:
if val not in choice_list:
errors.append("Value %s for '%s' expected to be one of %s." % (val, survey_element['variable'],
@@ -255,7 +261,7 @@ class SurveyJobTemplateMixin(models.Model):
elif survey_element['type'] == 'multiplechoice':
choice_list = copy(survey_element['choices'])
if isinstance(choice_list, str):
choice_list = choice_list.split('\n')
choice_list = [choice for choice in choice_list.splitlines() if choice.strip() != '']
if survey_element['variable'] in data:
if data[survey_element['variable']] not in choice_list:
errors.append("Value %s for '%s' expected to be one of %s." % (data[survey_element['variable']],
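The switch from `split('\n')` to `splitlines()` plus a blank-line filter, applied to both choice types above, also copes with Windows line endings and trailing newlines in survey choice lists, e.g.:

```python
raw = "red\r\nblue\n\ngreen\n"

# Old behavior: '\r' survives inside a choice and empty entries appear.
assert raw.split('\n') == ['red\r', 'blue', '', 'green', '']

# New behavior: each choice is clean and blank lines are dropped.
choices = [c for c in raw.splitlines() if c.strip() != '']
assert choices == ['red', 'blue', 'green']
```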
@@ -483,3 +489,139 @@ class RelatedJobsMixin(object):
raise RuntimeError("Programmer error. Expected _get_active_jobs() to return a QuerySet.")
return [dict(id=t[0], type=mapping[t[1]]) for t in jobs.values_list('id', 'polymorphic_ctype_id')]
class WebhookTemplateMixin(models.Model):
class Meta:
abstract = True
SERVICES = [
('github', "GitHub"),
('gitlab', "GitLab"),
]
webhook_service = models.CharField(
max_length=16,
choices=SERVICES,
blank=True,
help_text=_('Service that webhook requests will be accepted from')
)
webhook_key = prevent_search(models.CharField(
max_length=64,
blank=True,
help_text=_('Shared secret that the webhook service will use to sign requests')
))
webhook_credential = models.ForeignKey(
'Credential',
blank=True,
null=True,
on_delete=models.SET_NULL,
related_name='%(class)ss',
help_text=_('Personal Access Token for posting back the status to the service API')
)
def rotate_webhook_key(self):
self.webhook_key = get_random_string(length=50)
def save(self, *args, **kwargs):
update_fields = kwargs.get('update_fields')
if not self.pk or self._values_have_edits({'webhook_service': self.webhook_service}):
if self.webhook_service:
self.rotate_webhook_key()
else:
self.webhook_key = ''
if update_fields and 'webhook_service' in update_fields:
update_fields.add('webhook_key')
super().save(*args, **kwargs)
class WebhookMixin(models.Model):
class Meta:
abstract = True
SERVICES = WebhookTemplateMixin.SERVICES
webhook_service = models.CharField(
max_length=16,
choices=SERVICES,
blank=True,
help_text=_('Service that webhook requests will be accepted from')
)
webhook_credential = models.ForeignKey(
'Credential',
blank=True,
null=True,
on_delete=models.SET_NULL,
related_name='%(class)ss',
help_text=_('Personal Access Token for posting back the status to the service API')
)
webhook_guid = models.CharField(
blank=True,
max_length=128,
help_text=_('Unique identifier of the event that triggered this webhook')
)
def update_webhook_status(self, status):
if not self.webhook_credential:
logger.debug("No credential configured to post back webhook status, skipping.")
return
status_api = self.extra_vars_dict.get('tower_webhook_status_api')
if not status_api:
logger.debug("Webhook event did not have a status API endpoint associated, skipping.")
return
service_header = {
'github': ('Authorization', 'token {}'),
'gitlab': ('PRIVATE-TOKEN', '{}'),
}
service_statuses = {
'github': {
'pending': 'pending',
'successful': 'success',
'failed': 'failure',
'canceled': 'failure', # GitHub doesn't have a 'canceled' status :(
'error': 'error',
},
'gitlab': {
'pending': 'pending',
'running': 'running',
'successful': 'success',
'failed': 'failed',
'error': 'failed', # GitLab doesn't have an 'error' status distinct from 'failed' :(
'canceled': 'canceled',
},
}
statuses = service_statuses[self.webhook_service]
if status not in statuses:
logger.debug("Skipping webhook job status change: '{}'".format(status))
return
try:
license_type = get_licenser().validate().get('license_type')
data = {
'state': statuses[status],
'context': 'ansible/awx' if license_type == 'open' else 'ansible/tower',
'target_url': self.get_ui_url(),
}
k, v = service_header[self.webhook_service]
headers = {
k: v.format(self.webhook_credential.get_input('token')),
'Content-Type': 'application/json'
}
response = requests.post(status_api, data=json.dumps(data), headers=headers, timeout=30)
except Exception:
logger.exception("Posting webhook status caused an error.")
return
if response.status_code < 400:
logger.debug("Webhook status update sent.")
else:
logger.error(
"Posting webhook status failed, code: {}\n"
"{}\n"
"Payload sent: {}".format(response.status_code, response.text, json.dumps(data))
)


@@ -17,7 +17,7 @@ from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError
# AWX
from awx.api.versioning import reverse
from awx.main.models.base import CommonModelNameNotUnique, CreatedModifiedModel
from awx.main.models.base import CommonModelNameNotUnique, CreatedModifiedModel, prevent_search
from awx.main.utils import encrypt_field, decrypt_field, set_environ
from awx.main.notifications.email_backend import CustomEmailBackend
from awx.main.notifications.slack_backend import SlackBackend
@@ -70,7 +70,7 @@ class NotificationTemplate(CommonModelNameNotUnique):
choices=NOTIFICATION_TYPE_CHOICES,
)
notification_configuration = JSONField(blank=False)
notification_configuration = prevent_search(JSONField(blank=False))
def default_messages():
return {'started': None, 'success': None, 'error': None}


@@ -3,7 +3,7 @@ import re
# Django
from django.core.validators import RegexValidator
from django.db import models
from django.db import models, connection
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
from django.conf import settings
@@ -121,7 +121,7 @@ class OAuth2AccessToken(AbstractAccessToken):
valid = super(OAuth2AccessToken, self).is_valid(scopes)
if valid:
self.last_used = now()
self.save(update_fields=['last_used'])
connection.on_commit(lambda: self.save(update_fields=['last_used']))
return valid
def save(self, *args, **kwargs):


@@ -51,6 +51,11 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
default=0,
help_text=_('Maximum number of hosts allowed to be managed by this organization.'),
)
notification_templates_approvals = models.ManyToManyField(
"NotificationTemplate",
blank=True,
related_name='%(class)s_notification_templates_for_approvals'
)
admin_role = ImplicitRoleField(
parent_role='singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,


@@ -329,7 +329,7 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in ProjectOptions._meta.fields) | set(
['name', 'description', 'schedule']
['name', 'description']
)
def save(self, *args, **kwargs):


@@ -119,10 +119,11 @@ class Schedule(PrimordialModel, LaunchTimeConfig):
tzinfo = r._dtstart.tzinfo
if tzinfo is utc:
return 'UTC'
fname = tzinfo._filename
for zone in all_zones:
if fname.endswith(zone):
return zone
fname = getattr(tzinfo, '_filename', None)
if fname:
for zone in all_zones:
if fname.endswith(zone):
return zone
logger.warn('Could not detect valid zoneinfo for {}'.format(self.rrule))
return ''
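The `getattr` fallback above matters because only file-based tzinfo objects (such as dateutil's `tzfile`) carry a `_filename` attribute; other tzinfo implementations previously raised `AttributeError` here. A minimal demonstration with a fixed-offset tzinfo from the standard library:

```python
from datetime import timezone, timedelta

tzinfo = timezone(timedelta(hours=-5))  # fixed-offset tzinfo, no zoneinfo file behind it
fname = getattr(tzinfo, '_filename', None)
assert fname is None  # falls through to the warning and empty-string return instead of crashing
```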

View File

@@ -42,9 +42,9 @@ from awx.main.utils import (
camelcase_to_underscore, get_model_for_type,
encrypt_dict, decrypt_field, _inventory_updates,
copy_model_by_class, copy_m2m_relationships,
get_type_for_model, parse_yaml_or_json, getattr_dne
get_type_for_model, parse_yaml_or_json, getattr_dne,
polymorphic, schedule_task_manager
)
from awx.main.utils import polymorphic, schedule_task_manager
from awx.main.constants import ACTIVE_STATES, CAN_CANCEL
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.consumers import emit_channel_notification
@@ -532,6 +532,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
('scheduled', _('Scheduled')), # Job was started from a schedule.
('dependency', _('Dependency')), # Job was started as a dependency of another job.
('workflow', _('Workflow')), # Job was started from a workflow job.
('webhook', _('Webhook')), # Job was started from a webhook event.
('sync', _('Sync')), # Job was started from a project sync.
('scm', _('SCM Update')) # Job was created as an Inventory SCM sync.
]
@@ -559,11 +560,17 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
related_name='%(class)s_unified_jobs',
on_delete=polymorphic.SET_NULL,
)
created = models.DateTimeField(
default=None,
editable=False,
db_index=True, # add an index, this is a commonly queried field
)
launch_type = models.CharField(
max_length=20,
choices=LAUNCH_TYPE_CHOICES,
default='manual',
editable=False,
db_index=True
)
schedule = models.ForeignKey( # Which schedule entry was responsible for starting this job.
'Schedule',
@@ -621,6 +628,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
default=None,
editable=False,
help_text=_("The date and time the job finished execution."),
db_index=True,
)
elapsed = models.DecimalField(
max_digits=12,
@@ -706,6 +714,10 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
def supports_isolation(cls):
return False
@property
def can_run_containerized(self):
return False
def _get_parent_field_name(self):
return 'unified_job_template' # Override in subclasses.
@@ -1199,6 +1211,8 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
def websocket_emit_status(self, status):
connection.on_commit(lambda: self._websocket_emit_status(status))
if hasattr(self, 'update_webhook_status'):
connection.on_commit(lambda: self.update_webhook_status(status))
def notification_data(self):
return dict(id=self.id,
@@ -1379,9 +1393,13 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
wj = self.get_workflow_job()
if wj:
schedule = getattr_dne(wj, 'schedule')
for name in ('awx', 'tower'):
r['{}_workflow_job_id'.format(name)] = wj.pk
r['{}_workflow_job_name'.format(name)] = wj.name
if schedule:
r['{}_parent_job_schedule_id'.format(name)] = schedule.pk
r['{}_parent_job_schedule_name'.format(name)] = schedule.name
if not created_by:
schedule = getattr_dne(self, 'schedule')
@@ -1411,3 +1429,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
def is_isolated(self):
return bool(self.controller_node)
@property
def is_containerized(self):
return False

View File

@@ -3,14 +3,19 @@
# Python
import logging
from copy import copy
from urllib.parse import urljoin
# Django
from django.db import models
from django.db import connection, models
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from django.core.exceptions import ObjectDoesNotExist
#from django import settings as tower_settings
# Django-CRUM
from crum import get_current_user
# AWX
from awx.api.versioning import reverse
from awx.main.models import (prevent_search, accepts_json, UnifiedJobTemplate,
@@ -19,7 +24,7 @@ from awx.main.models.notifications import (
NotificationTemplate,
JobNotificationMixin
)
from awx.main.models.base import BaseModel, CreatedModifiedModel, VarsDictProperty
from awx.main.models.base import CreatedModifiedModel, VarsDictProperty
from awx.main.models.rbac import (
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR
@@ -30,6 +35,8 @@ from awx.main.models.mixins import (
SurveyJobTemplateMixin,
SurveyJobMixin,
RelatedJobsMixin,
WebhookMixin,
WebhookTemplateMixin,
)
from awx.main.models.jobs import LaunchTimeConfigBase, LaunchTimeConfig, JobTemplate
from awx.main.models.credential import Credential
@@ -38,9 +45,6 @@ from awx.main.fields import JSONField
from awx.main.utils import schedule_task_manager
from copy import copy
from urllib.parse import urljoin
__all__ = ['WorkflowJobTemplate', 'WorkflowJob', 'WorkflowJobOptions', 'WorkflowJobNode',
'WorkflowJobTemplateNode', 'WorkflowApprovalTemplate', 'WorkflowApproval']
@@ -196,7 +200,7 @@ class WorkflowJobNode(WorkflowNodeBase):
)
do_not_run = models.BooleanField(
default=False,
help_text=_("Indidcates that a job will not be created when True. Workflow runtime "
help_text=_("Indicates that a job will not be created when True. Workflow runtime "
"semantics will mark this True if the node is in a path that will "
"decidedly not be ran. A value of False means the node may not run."),
)
@@ -207,11 +211,14 @@ class WorkflowJobNode(WorkflowNodeBase):
def prompts_dict(self, *args, **kwargs):
r = super(WorkflowJobNode, self).prompts_dict(*args, **kwargs)
# Explanation - WFJT extra_vars still break pattern, so they are not
# put through prompts processing, but inventory is only accepted
# put through prompts processing, but inventory and others are only accepted
# if JT prompts for it, so it goes through this mechanism
if self.workflow_job and self.workflow_job.inventory_id:
# workflow job inventory takes precedence
r['inventory'] = self.workflow_job.inventory
if self.workflow_job:
if self.workflow_job.inventory_id:
# workflow job inventory takes precedence
r['inventory'] = self.workflow_job.inventory
if self.workflow_job.char_prompts:
r.update(self.workflow_job.char_prompts)
return r
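The rewritten `prompts_dict` above lets the workflow job's own inventory and char prompts override whatever the node supplied. A standalone sketch of that precedence (the helper function and field names here are illustrative, not AWX's API):

```python
# Workflow-job-level values win over node-level prompt values.
def merged_prompts(node_prompts, wf_inventory=None, wf_char_prompts=None):
    r = dict(node_prompts)
    if wf_inventory is not None:
        r['inventory'] = wf_inventory   # workflow job inventory takes precedence
    if wf_char_prompts:
        r.update(wf_char_prompts)       # char prompts overlay node values
    return r

r = merged_prompts({'inventory': 'node-inv', 'limit': 'all'},
                   wf_inventory='wf-inv',
                   wf_char_prompts={'limit': 'web'})
assert r == {'inventory': 'wf-inv', 'limit': 'web'}
```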
def get_job_kwargs(self):
@@ -298,7 +305,7 @@ class WorkflowJobNode(WorkflowNodeBase):
return data
class WorkflowJobOptions(BaseModel):
class WorkflowJobOptions(LaunchTimeConfigBase):
class Meta:
abstract = True
@@ -318,10 +325,11 @@ class WorkflowJobOptions(BaseModel):
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in WorkflowJobOptions._meta.fields) | set(
# NOTE: if other prompts are added to WFJT, put fields in WJOptions, remove inventory
['name', 'description', 'schedule', 'survey_passwords', 'labels', 'inventory']
r = set(f.name for f in WorkflowJobOptions._meta.fields) | set(
['name', 'description', 'survey_passwords', 'labels', 'limit', 'scm_branch']
)
r.remove('char_prompts') # needed due to copying launch config to launch config
return r
def _create_workflow_nodes(self, old_node_list, user=None):
node_links = {}
@@ -355,7 +363,7 @@ class WorkflowJobOptions(BaseModel):
return new_workflow_job
class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTemplateMixin, ResourceMixin, RelatedJobsMixin):
class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTemplateMixin, ResourceMixin, RelatedJobsMixin, WebhookTemplateMixin):
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name', 'organization')]
FIELDS_TO_PRESERVE_AT_COPY = [
@@ -372,19 +380,24 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
on_delete=models.SET_NULL,
related_name='workflows',
)
inventory = models.ForeignKey(
'Inventory',
related_name='%(class)ss',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
help_text=_('Inventory applied to all job templates in workflow that prompt for inventory.'),
)
ask_inventory_on_launch = AskForField(
blank=True,
default=False,
)
ask_limit_on_launch = AskForField(
blank=True,
default=False,
)
ask_scm_branch_on_launch = AskForField(
blank=True,
default=False,
)
notification_templates_approvals = models.ManyToManyField(
"NotificationTemplate",
blank=True,
related_name='%(class)s_notification_templates_for_approvals'
)
admin_role = ImplicitRoleField(parent_role=[
'singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
'organization.workflow_admin_role'
@@ -438,9 +451,22 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
.filter(unifiedjobtemplate_notification_templates_for_started__in=[self]))
success_notification_templates = list(base_notification_templates
.filter(unifiedjobtemplate_notification_templates_for_success__in=[self]))
approval_notification_templates = list(base_notification_templates
.filter(workflowjobtemplate_notification_templates_for_approvals__in=[self]))
# Get Organization NotificationTemplates
if self.organization is not None:
error_notification_templates = set(error_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_errors=self.organization)))
started_notification_templates = set(started_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_started=self.organization)))
success_notification_templates = set(success_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_success=self.organization)))
approval_notification_templates = set(approval_notification_templates + list(base_notification_templates.filter(
organization_notification_templates_for_approvals=self.organization)))
return dict(error=list(error_notification_templates),
started=list(started_notification_templates),
success=list(success_notification_templates))
success=list(success_notification_templates),
approvals=list(approval_notification_templates))
def create_unified_job(self, **kwargs):
workflow_job = super(WorkflowJobTemplate, self).create_unified_job(**kwargs)
@@ -515,7 +541,7 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
return WorkflowJob.objects.filter(workflow_job_template=self)
class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin, LaunchTimeConfigBase):
class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin, WebhookMixin):
class Meta:
app_label = 'main'
ordering = ('id',)
@@ -646,7 +672,7 @@ class WorkflowApprovalTemplate(UnifiedJobTemplate):
return self.workflowjobtemplatenodes.first().workflow_job_template
class WorkflowApproval(UnifiedJob):
class WorkflowApproval(UnifiedJob, JobNotificationMixin):
class Meta:
app_label = 'main'
@@ -667,6 +693,14 @@ class WorkflowApproval(UnifiedJob):
default=False,
help_text=_("Shows when an approval node (with a timeout assigned to it) has timed out.")
)
approved_or_denied_by = models.ForeignKey(
'auth.User',
related_name='%(class)s_approved+',
default=None,
null=True,
editable=False,
on_delete=models.SET_NULL,
)
@classmethod
@@ -680,26 +714,78 @@ class WorkflowApproval(UnifiedJob):
def event_class(self):
return None
def get_ui_url(self):
return urljoin(settings.TOWER_URL_BASE, '/#/workflows/{}'.format(self.workflow_job.id))
def _get_parent_field_name(self):
return 'workflow_approval_template'
def approve(self, request=None):
self.status = 'successful'
self.approved_or_denied_by = get_current_user()
self.save()
self.send_approval_notification('approved')
self.websocket_emit_status(self.status)
schedule_task_manager()
return reverse('api:workflow_approval_approve', kwargs={'pk': self.pk}, request=request)
def deny(self, request=None):
self.status = 'failed'
self.approved_or_denied_by = get_current_user()
self.save()
self.send_approval_notification('denied')
self.websocket_emit_status(self.status)
schedule_task_manager()
return reverse('api:workflow_approval_deny', kwargs={'pk': self.pk}, request=request)
def signal_start(self, **kwargs):
can_start = super(WorkflowApproval, self).signal_start(**kwargs)
self.send_approval_notification('running')
return can_start
def send_approval_notification(self, approval_status):
from awx.main.tasks import send_notifications # avoid circular import
if self.workflow_job_template is None:
return
for nt in self.workflow_job_template.notification_templates["approvals"]:
try:
(notification_subject, notification_body) = self.build_approval_notification_message(nt, approval_status)
except Exception:
raise NotImplementedError("build_approval_notification_message() does not exist")
# Use kwargs to force late-binding
# https://stackoverflow.com/a/3431699/10669572
def send_it(local_nt=nt, local_subject=notification_subject, local_body=notification_body):
def _func():
send_notifications.delay([local_nt.generate_notification(local_subject, local_body).id],
job_id=self.id)
return _func
connection.on_commit(send_it())
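The `send_it` helper above binds `nt`, the subject, and the body as default arguments to force late binding, per the linked Stack Overflow answer. Without that, every closure created in the loop would see only the final loop values. A minimal illustration of the pitfall and the fix:

```python
# Pitfall: closures created in a loop share the loop variable, so all of
# them observe its final value when called later.
callbacks_late = [lambda: i for i in range(3)]
assert [f() for f in callbacks_late] == [2, 2, 2]

# Fix: a default argument is evaluated when the lambda is *created*,
# freezing the current value per iteration (the local_nt=nt trick above).
callbacks_bound = [lambda local_i=i: local_i for i in range(3)]
assert [f() for f in callbacks_bound] == [0, 1, 2]
```

This matters here because the notification callbacks are queued via `connection.on_commit` and run long after the loop has finished.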
def build_approval_notification_message(self, nt, approval_status):
subject = []
workflow_url = urljoin(settings.TOWER_URL_BASE, '/#/workflows/{}'.format(self.workflow_job.id))
subject.append(('The approval node "{}"').format(self.workflow_approval_template.name))
if approval_status == 'running':
subject.append(('needs review. This node can be viewed at: {}').format(workflow_url))
if approval_status == 'approved':
subject.append(('was approved. {}').format(workflow_url))
if approval_status == 'timed_out':
subject.append(('has timed out. {}').format(workflow_url))
elif approval_status == 'denied':
subject.append(('was denied. {}').format(workflow_url))
subject = " ".join(subject)
body = self.notification_data()
body['body'] = subject
return subject, body
@property
def workflow_job_template(self):
return self.unified_job_node.workflow_job.unified_job_template
try:
return self.unified_job_node.workflow_job.unified_job_template
except ObjectDoesNotExist:
return None
@property
def workflow_job(self):

View File

@@ -1,6 +1,8 @@
import re
import urllib.parse as urlparse
from django.conf import settings
REPLACE_STR = '$encrypted$'
@@ -10,14 +12,22 @@ class UriCleaner(object):
@staticmethod
def remove_sensitive(cleartext):
if settings.PRIMARY_GALAXY_URL:
exclude_list = [settings.PRIMARY_GALAXY_URL] + [server['url'] for server in settings.FALLBACK_GALAXY_SERVERS]
else:
exclude_list = [server['url'] for server in settings.FALLBACK_GALAXY_SERVERS]
redactedtext = cleartext
text_index = 0
while True:
match = UriCleaner.SENSITIVE_URI_PATTERN.search(redactedtext, text_index)
if not match:
break
uri_str = match.group(1)
# Do not redact items from the exclude list
if any(uri_str.startswith(exclude_uri) for exclude_uri in exclude_list):
text_index = match.start() + len(uri_str)
continue
try:
uri_str = match.group(1)
# May raise a ValueError if invalid URI for one reason or another
o = urlparse.urlsplit(uri_str)
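The exclude-list change above advances a search index past any allowlisted match instead of redacting it, so configured Galaxy URLs survive while other sensitive URIs are masked. A simplified re-implementation of that scan-and-skip loop (the regex, replacement token handling, and URLs here are examples, not AWX's exact `SENSITIVE_URI_PATTERN`):

```python
import re

URL = re.compile(r'(https://\S+)')
exclude_list = ['https://galaxy.ansible.com']

def redact(text):
    idx = 0
    while True:
        m = URL.search(text, idx)
        if not m:
            return text
        uri = m.group(1)
        if any(uri.startswith(x) for x in exclude_list):
            # Allowlisted: skip it and keep scanning after its end.
            idx = m.start() + len(uri)
            continue
        # Otherwise mask the match and resume after the replacement.
        text = text[:m.start()] + '$encrypted$' + text[m.end():]
        idx = m.start() + len('$encrypted$')

assert redact('see https://secret.example/y') == 'see $encrypted$'
```

Moving `text_index` forward on the skip path is what prevents the loop from re-matching the same excluded URL forever.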

View File

@@ -1,4 +1,6 @@
# Copyright (c) 2017 Ansible, Inc.
#
from awx.main.scheduler.task_manager import TaskManager # noqa
from .task_manager import TaskManager
__all__ = ['TaskManager']

View File

@@ -0,0 +1,183 @@
import collections
import os
import stat
import time
import yaml
import tempfile
import logging
from base64 import b64encode
from django.conf import settings
from kubernetes import client, config
from django.utils.functional import cached_property
from awx.main.utils.common import parse_yaml_or_json
logger = logging.getLogger('awx.main.scheduler')
class PodManager(object):
def __init__(self, task=None):
self.task = task
def deploy(self):
if not self.credential.kubernetes:
raise RuntimeError('Pod deployment cannot occur without a Kubernetes credential')
self.kube_api.create_namespaced_pod(body=self.pod_definition,
namespace=self.namespace,
_request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
num_retries = settings.AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES
for retry_attempt in range(num_retries - 1):
logger.debug(f"Checking for pod {self.pod_name}. Attempt {retry_attempt + 1} of {num_retries}")
pod = self.kube_api.read_namespaced_pod(name=self.pod_name,
namespace=self.namespace,
_request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
if pod.status.phase != 'Pending':
break
else:
logger.debug(f"Pod {self.pod_name} is Pending.")
time.sleep(settings.AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY)
continue
if pod.status.phase == 'Running':
logger.debug(f"Pod {self.pod_name} is online.")
return pod
else:
logger.warn(f"Pod {self.pod_name} did not start. Status is {pod.status.phase}.")
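The `deploy` loop above re-reads the pod's status up to a configured number of times, breaking out as soon as it leaves `Pending`. A stripped-down sketch of that poll-until-running shape (names and the fake status sequence are illustrative, not the Kubernetes client API):

```python
# Re-poll a status callable until it leaves 'Pending' or the retry
# budget runs out; the caller inspects the final phase.
def wait_for_pod(read_status, num_retries=5):
    phase = 'Pending'
    for attempt in range(num_retries - 1):
        phase = read_status()
        if phase != 'Pending':
            break  # Running, Failed, etc. -- stop polling either way
    return phase

statuses = iter(['Pending', 'Pending', 'Running'])
assert wait_for_pod(lambda: next(statuses)) == 'Running'
```

In the real method a sleep between polls (`AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY`) spaces out the API reads, and a non-`Running` final phase is logged as a warning.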
@classmethod
def list_active_jobs(self, instance_group):
task = collections.namedtuple('Task', 'id instance_group')(
id='',
instance_group=instance_group
)
pm = PodManager(task)
try:
for pod in pm.kube_api.list_namespaced_pod(
pm.namespace,
label_selector='ansible-awx={}'.format(settings.INSTALL_UUID)
).to_dict().get('items', []):
job = pod['metadata'].get('labels', {}).get('ansible-awx-job-id')
if job:
try:
yield int(job)
except ValueError:
pass
except Exception:
logger.exception('Failed to list pods for container group {}'.format(instance_group))
def delete(self):
return self.kube_api.delete_namespaced_pod(name=self.pod_name,
namespace=self.namespace,
_request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
@property
def namespace(self):
return self.pod_definition['metadata']['namespace']
@property
def credential(self):
return self.task.instance_group.credential
@cached_property
def kube_config(self):
return generate_tmp_kube_config(self.credential, self.namespace)
@cached_property
def kube_api(self):
my_client = config.new_client_from_config(config_file=self.kube_config)
return client.CoreV1Api(api_client=my_client)
@property
def pod_name(self):
return f"awx-job-{self.task.id}"
@property
def pod_definition(self):
default_pod_spec = {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": settings.AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE
},
"spec": {
"containers": [{
"image": settings.AWX_CONTAINER_GROUP_DEFAULT_IMAGE,
"tty": True,
"stdin": True,
"imagePullPolicy": "Always",
"args": [
'sleep', 'infinity'
]
}]
}
}
pod_spec_override = {}
if self.task and self.task.instance_group.pod_spec_override:
pod_spec_override = parse_yaml_or_json(
self.task.instance_group.pod_spec_override)
pod_spec = {**default_pod_spec, **pod_spec_override}
if self.task:
pod_spec['metadata']['name'] = self.pod_name
pod_spec['metadata']['labels'] = {
'ansible-awx': settings.INSTALL_UUID,
'ansible-awx-job-id': str(self.task.id)
}
pod_spec['spec']['containers'][0]['name'] = self.pod_name
return pod_spec
def generate_tmp_kube_config(credential, namespace):
host_input = credential.get_input('host')
config = {
"apiVersion": "v1",
"kind": "Config",
"preferences": {},
"clusters": [
{
"name": host_input,
"cluster": {
"server": host_input
}
}
],
"users": [
{
"name": host_input,
"user": {
"token": credential.get_input('bearer_token')
}
}
],
"contexts": [
{
"name": host_input,
"context": {
"cluster": host_input,
"user": host_input,
"namespace": namespace
}
}
],
"current-context": host_input
}
if credential.get_input('verify_ssl'):
config["clusters"][0]["cluster"]["certificate-authority-data"] = b64encode(
credential.get_input('ssl_ca_cert').encode() # encode to bytes
).decode() # decode the base64 data into a str
else:
config["clusters"][0]["cluster"]["insecure-skip-tls-verify"] = True
fd, path = tempfile.mkstemp(prefix='kubeconfig')
with open(path, 'wb') as temp:
temp.write(yaml.dump(config).encode())
temp.flush()
os.chmod(temp.name, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
return path
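One detail of `pod_definition` above worth noting: `{**default_pod_spec, **pod_spec_override}` is a *shallow* merge, so a top-level key in the override replaces the default's entire nested value rather than merging into it. A small demonstration (the dicts are simplified from the pod spec above):

```python
default_spec = {'metadata': {'namespace': 'awx'}, 'spec': {'containers': []}}
override = {'metadata': {'name': 'awx-job-1'}}

merged = {**default_spec, **override}

# The override's 'metadata' wins wholesale -- the default namespace is gone.
assert merged['metadata'] == {'name': 'awx-job-1'}
assert merged['spec'] == {'containers': []}  # untouched keys survive
```

This is why the code re-sets `pod_spec['metadata']['name']` and labels after the merge rather than relying on the defaults surviving an override.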

View File

@@ -251,6 +251,20 @@ class TaskManager():
task.controller_node = controller_node
logger.debug('Submitting isolated {} to queue {} controlled by {}.'.format(
task.log_format, task.execution_node, controller_node))
elif rampart_group.is_containerized:
task.instance_group = rampart_group
if not task.supports_isolation():
# project updates and inventory updates don't *actually* run in pods,
# so just pick *any* non-isolated, non-containerized host and use it
for group in InstanceGroup.objects.all():
if group.is_containerized or group.controller_id:
continue
match = group.find_largest_idle_instance()
if match:
task.execution_node = match.hostname
logger.debug('Submitting containerized {} to queue {}.'.format(
task.log_format, task.execution_node))
break
else:
task.instance_group = rampart_group
if instance is not None:
@@ -447,7 +461,7 @@ class TaskManager():
for rampart_group in preferred_instance_groups:
if idle_instance_that_fits is None:
idle_instance_that_fits = rampart_group.find_largest_idle_instance()
if self.get_remaining_capacity(rampart_group.name) <= 0:
if not rampart_group.is_containerized and self.get_remaining_capacity(rampart_group.name) <= 0:
logger.debug("Skipping group {} capacity <= 0".format(rampart_group.name))
continue
@@ -456,10 +470,11 @@ class TaskManager():
logger.debug("Starting dependent {} in group {} instance {}".format(
task.log_format, rampart_group.name, execution_instance.hostname))
elif not execution_instance and idle_instance_that_fits:
execution_instance = idle_instance_that_fits
logger.debug("Starting dependent {} in group {} on idle instance {}".format(
task.log_format, rampart_group.name, execution_instance.hostname))
if execution_instance:
if not rampart_group.is_containerized:
execution_instance = idle_instance_that_fits
logger.debug("Starting dependent {} in group {} on idle instance {}".format(
task.log_format, rampart_group.name, execution_instance.hostname))
if execution_instance or rampart_group.is_containerized:
self.graph[rampart_group.name]['graph'].add_job(task)
tasks_to_fail = [t for t in dependency_tasks if t != task]
tasks_to_fail += [dependent_task]
@@ -492,10 +507,16 @@ class TaskManager():
self.start_task(task, None, task.get_jobs_fail_chain(), None)
continue
for rampart_group in preferred_instance_groups:
if task.can_run_containerized and rampart_group.is_containerized:
self.graph[rampart_group.name]['graph'].add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), None)
found_acceptable_queue = True
break
if idle_instance_that_fits is None:
idle_instance_that_fits = rampart_group.find_largest_idle_instance()
remaining_capacity = self.get_remaining_capacity(rampart_group.name)
if remaining_capacity <= 0:
if not rampart_group.is_containerized and self.get_remaining_capacity(rampart_group.name) <= 0:
logger.debug("Skipping group {}, remaining_capacity {} <= 0".format(
rampart_group.name, remaining_capacity))
continue
@@ -505,10 +526,11 @@ class TaskManager():
logger.debug("Starting {} in group {} instance {} (remaining_capacity={})".format(
task.log_format, rampart_group.name, execution_instance.hostname, remaining_capacity))
elif not execution_instance and idle_instance_that_fits:
execution_instance = idle_instance_that_fits
logger.debug("Starting {} in group {} instance {} (remaining_capacity={})".format(
task.log_format, rampart_group.name, execution_instance.hostname, remaining_capacity))
if execution_instance:
if not rampart_group.is_containerized:
execution_instance = idle_instance_that_fits
logger.debug("Starting {} in group {} instance {} (remaining_capacity={})".format(
task.log_format, rampart_group.name, execution_instance.hostname, remaining_capacity))
if execution_instance or rampart_group.is_containerized:
self.graph[rampart_group.name]['graph'].add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), execution_instance)
found_acceptable_queue = True
@@ -533,6 +555,7 @@ class TaskManager():
logger.warn(timeout_message)
task.timed_out = True
task.status = 'failed'
task.send_approval_notification('timed_out')
task.websocket_emit_status(task.status)
task.job_explanation = timeout_message
task.save(update_fields=['status', 'job_explanation', 'timed_out'])

View File

@@ -684,16 +684,18 @@ def save_user_session_membership(sender, **kwargs):
return
if UserSessionMembership.objects.filter(user=user_id, session=session).exists():
return
UserSessionMembership(user_id=user_id, session=session, created=timezone.now()).save()
expired = UserSessionMembership.get_memberships_over_limit(user_id)
for membership in expired:
Session.objects.filter(session_key__in=[membership.session_id]).delete()
membership.delete()
if len(expired):
consumers.emit_channel_notification(
'control-limit_reached_{}'.format(user_id),
dict(group_name='control', reason='limit_reached')
)
# check if user_id from session has an id match in User before saving
if User.objects.filter(id=int(user_id)).exists():
UserSessionMembership(user_id=user_id, session=session, created=timezone.now()).save()
expired = UserSessionMembership.get_memberships_over_limit(user_id)
for membership in expired:
Session.objects.filter(session_key__in=[membership.session_id]).delete()
membership.delete()
if len(expired):
consumers.emit_channel_notification(
'control-limit_reached_{}'.format(user_id),
dict(group_name='control', reason='limit_reached')
)
@receiver(post_save, sender=OAuth2AccessToken)

View File

@@ -40,6 +40,9 @@ from django.utils.translation import ugettext_lazy as _
from django.core.cache import cache
from django.core.exceptions import ObjectDoesNotExist
# Kubernetes
from kubernetes.client.rest import ApiException
# Django-CRUM
from crum import impersonate
@@ -52,7 +55,7 @@ import ansible_runner
# AWX
from awx import __version__ as awx_application_version
from awx.main.constants import CLOUD_PROVIDERS, PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV
from awx.main.constants import CLOUD_PROVIDERS, PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV, GALAXY_SERVER_FIELDS
from awx.main.access import access_registry
from awx.main.models import (
Schedule, TowerScheduleState, Instance, InstanceGroup,
@@ -73,6 +76,7 @@ from awx.main.utils import (get_ssh_version, update_scm_url,
ignore_inventory_computed_fields,
ignore_inventory_group_removal, extract_ansible_vars, schedule_task_manager,
get_awx_version)
from awx.main.utils.ansible import read_ansible_config
from awx.main.utils.common import get_ansible_version, _get_ansible_version, get_custom_venv_choices
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
@@ -251,6 +255,9 @@ def apply_cluster_membership_policies():
# On a differential basis, apply instances to non-isolated groups
with transaction.atomic():
for g in actual_groups:
if g.obj.is_containerized:
logger.debug('Skipping containerized group {} for policy calculation'.format(g.obj.name))
continue
instances_to_add = set(g.instances) - set(g.prior_instances)
instances_to_remove = set(g.prior_instances) - set(g.instances)
if instances_to_add:
@@ -323,7 +330,7 @@ def send_notifications(notification_list, job_id=None):
notification.status = "successful"
notification.notifications_sent = sent
except Exception as e:
logger.error("Send Notification Failed {}".format(e))
logger.exception("Send Notification Failed {}".format(e))
notification.status = "failed"
notification.error = smart_str(e)
update_fields.append('error')
@@ -451,6 +458,25 @@ def cluster_node_heartbeat():
logger.exception('Error marking {} as lost'.format(other_inst.hostname))
@task(queue=get_local_queuename)
def awx_k8s_reaper():
from awx.main.scheduler.kubernetes import PodManager # prevent circular import
for group in InstanceGroup.objects.filter(credential__isnull=False).iterator():
if group.is_containerized:
logger.debug("Checking for orphaned k8s pods for {}.".format(group))
for job in UnifiedJob.objects.filter(
pk__in=list(PodManager.list_active_jobs(group))
).exclude(status__in=ACTIVE_STATES):
logger.debug('{} is no longer active, reaping orphaned k8s pod'.format(job.log_format))
try:
PodManager(job).delete()
except Exception:
logger.exception("Failed to delete orphaned pod {} from {}".format(
job.log_format, group
))
@task(queue=get_local_queuename)
def awx_isolated_heartbeat():
local_hostname = settings.CLUSTER_HOST_ID
@@ -704,6 +730,7 @@ class BaseTask(object):
def __init__(self):
self.cleanup_paths = []
self.parent_workflow_job_id = None
def update_model(self, pk, _attempt=0, **updates):
"""Reload the model instance from the database and update the
@@ -876,12 +903,8 @@ class BaseTask(object):
show_paths = self.proot_show_paths + local_paths + \
settings.AWX_PROOT_SHOW_PATHS
# Help the user out by including the collections path inside the bubblewrap environment
if getattr(settings, 'AWX_ANSIBLE_COLLECTIONS_PATHS', []):
show_paths.extend(settings.AWX_ANSIBLE_COLLECTIONS_PATHS)
pi_path = settings.AWX_PROOT_BASE_PATH
if not self.instance.is_isolated():
if not self.instance.is_isolated() and not self.instance.is_containerized:
pi_path = tempfile.mkdtemp(
prefix='ansible_runner_pi_',
dir=settings.AWX_PROOT_BASE_PATH
@@ -908,6 +931,31 @@ class BaseTask(object):
process_isolation_params['process_isolation_ro_paths'].append(instance.ansible_virtualenv_path)
return process_isolation_params
def build_params_resource_profiling(self, instance, private_data_dir):
resource_profiling_params = {}
if self.should_use_resource_profiling(instance):
cpu_poll_interval = settings.AWX_RESOURCE_PROFILING_CPU_POLL_INTERVAL
mem_poll_interval = settings.AWX_RESOURCE_PROFILING_MEMORY_POLL_INTERVAL
pid_poll_interval = settings.AWX_RESOURCE_PROFILING_PID_POLL_INTERVAL
results_dir = os.path.join(private_data_dir, 'artifacts/playbook_profiling')
if not os.path.isdir(results_dir):
os.makedirs(results_dir, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
logger.debug('Collected the following resource profiling intervals: cpu: {} mem: {} pid: {}'
.format(cpu_poll_interval, mem_poll_interval, pid_poll_interval))
resource_profiling_params.update({'resource_profiling': True,
'resource_profiling_base_cgroup': 'ansible-runner',
'resource_profiling_cpu_poll_interval': cpu_poll_interval,
'resource_profiling_memory_poll_interval': mem_poll_interval,
'resource_profiling_pid_poll_interval': pid_poll_interval,
'resource_profiling_results_dir': results_dir})
else:
logger.debug('Resource profiling not enabled for task')
return resource_profiling_params
def _write_extra_vars_file(self, private_data_dir, vars, safe_dict={}):
env_path = os.path.join(private_data_dir, 'env')
try:
@@ -966,13 +1014,14 @@ class BaseTask(object):
if self.should_use_proot(instance):
env['PROOT_TMP_DIR'] = settings.AWX_PROOT_BASE_PATH
env['AWX_PRIVATE_DATA_DIR'] = private_data_dir
if 'ANSIBLE_COLLECTIONS_PATHS' in env:
env['ANSIBLE_COLLECTIONS_PATHS'] += os.pathsep + os.pathsep.join(settings.AWX_ANSIBLE_COLLECTIONS_PATHS)
else:
env['ANSIBLE_COLLECTIONS_PATHS'] = os.pathsep.join(settings.AWX_ANSIBLE_COLLECTIONS_PATHS)
return env
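The collections-path change above appends to `ANSIBLE_COLLECTIONS_PATHS` when it is already set and creates it otherwise, joining entries with the platform's path separator. A self-contained sketch of that append-or-create pattern (the helper name is invented for illustration):

```python
import os

def add_collection_paths(env, paths):
    joined = os.pathsep.join(paths)
    if 'ANSIBLE_COLLECTIONS_PATHS' in env:
        # Preserve anything already configured, then append ours.
        env['ANSIBLE_COLLECTIONS_PATHS'] += os.pathsep + joined
    else:
        env['ANSIBLE_COLLECTIONS_PATHS'] = joined
    return env

env = add_collection_paths({'ANSIBLE_COLLECTIONS_PATHS': '/x'}, ['/a', '/b'])
assert env['ANSIBLE_COLLECTIONS_PATHS'] == os.pathsep.join(['/x', '/a', '/b'])
```

Using `os.pathsep` (`:` on POSIX, `;` on Windows) keeps the variable parseable by Ansible on either platform.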
def should_use_resource_profiling(self, job):
'''
Return whether this task should use resource profiling
'''
return False
def should_use_proot(self, instance):
'''
Return whether this task should use proot.
@@ -1057,6 +1106,19 @@ class BaseTask(object):
'''
Hook for any steps to run after job/task is marked as complete.
'''
job_profiling_dir = os.path.join(private_data_dir, 'artifacts/playbook_profiling')
awx_profiling_dir = '/var/log/tower/playbook_profiling/'
if not os.path.exists(awx_profiling_dir):
os.mkdir(awx_profiling_dir)
if os.path.isdir(job_profiling_dir):
shutil.copytree(job_profiling_dir, os.path.join(awx_profiling_dir, str(instance.pk)))
if instance.is_containerized:
from awx.main.scheduler.kubernetes import PodManager # prevent circular import
pm = PodManager(instance)
logger.debug(f"Deleting pod {pm.pod_name}")
pm.delete()
def event_handler(self, event_data):
#
@@ -1078,6 +1140,8 @@ class BaseTask(object):
if event_data.get(self.event_data_key, None):
if self.event_data_key != 'job_id':
event_data.pop('parent_uuid', None)
if self.parent_workflow_job_id:
event_data['workflow_job_id'] = self.parent_workflow_job_id
should_write_event = False
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
@@ -1149,6 +1213,18 @@ class BaseTask(object):
'''
Run the job/task and capture its output.
'''
self.instance = self.model.objects.get(pk=pk)
containerized = self.instance.is_containerized
pod_manager = None
if containerized:
# Here we are trying to launch a pod before transitioning the job into a running
# state. For some scenarios (like waiting for resources to become available) we do this
# rather than marking the job as error or failed. This is not always desirable. Cases
# such as invalid authentication should surface as an error.
pod_manager = self.deploy_container_group_pod(self.instance)
if not pod_manager:
return
# self.instance because of the update_model pattern and when it's used in callback handlers
self.instance = self.update_model(pk, status='running',
start_args='') # blank field to remove encrypted passwords
@@ -1167,6 +1243,11 @@ class BaseTask(object):
private_data_dir = None
isolated_manager_instance = None
# store a reference to the parent workflow job (if any) so we can include
# it in event data JSON
if self.instance.spawned_by_workflow:
self.parent_workflow_job_id = self.instance.get_workflow_job().id
try:
isolated = self.instance.is_isolated()
self.instance.send_notification_templates("running")
@@ -1202,6 +1283,8 @@ class BaseTask(object):
self.build_extra_vars_file(self.instance, private_data_dir)
args = self.build_args(self.instance, private_data_dir, passwords)
cwd = self.build_cwd(self.instance, private_data_dir)
resource_profiling_params = self.build_params_resource_profiling(self.instance,
private_data_dir)
process_isolation_params = self.build_params_process_isolation(self.instance,
private_data_dir,
cwd)
@@ -1241,9 +1324,14 @@ class BaseTask(object):
'pexpect_timeout': getattr(settings, 'PEXPECT_TIMEOUT', 5),
'suppress_ansible_output': True,
**process_isolation_params,
**resource_profiling_params,
},
}
if containerized:
# We don't want HOME passed through to container groups.
params['envvars'].pop('HOME')
if isinstance(self.instance, AdHocCommand):
params['module'] = self.build_module_name(self.instance)
params['module_args'] = self.build_module_args(self.instance)
@@ -1262,7 +1350,7 @@ class BaseTask(object):
if not params[v]:
del params[v]
if self.instance.is_isolated() is True:
if self.instance.is_isolated() or containerized:
module_args = None
if 'module_args' in params:
# if it's adhoc, copy the module args
@@ -1273,10 +1361,12 @@ class BaseTask(object):
params.pop('inventory'),
os.path.join(private_data_dir, 'inventory')
)
ansible_runner.utils.dump_artifacts(params)
isolated_manager_instance = isolated_manager.IsolatedManager(
cancelled_callback=lambda: self.update_model(self.instance.pk).cancel_flag,
check_callback=self.check_handler,
pod_manager=pod_manager
)
status, rc = isolated_manager_instance.run(self.instance,
private_data_dir,
@@ -1330,6 +1420,42 @@ class BaseTask(object):
raise AwxTaskError.TaskError(self.instance, rc)
def deploy_container_group_pod(self, task):
from awx.main.scheduler.kubernetes import PodManager # Avoid circular import
pod_manager = PodManager(self.instance)
self.cleanup_paths.append(pod_manager.kube_config)
try:
log_name = task.log_format
logger.debug(f"Launching pod for {log_name}.")
pod_manager.deploy()
except (ApiException, Exception) as exc:
if isinstance(exc, ApiException) and exc.status == 403:
try:
if 'exceeded quota' in json.loads(exc.body)['message']:
# If the k8s cluster does not have capacity, we move the
# job back into pending and wait until the next run of
# the task manager. This does not exactly play well with
# our current instance group precedence logic, since it

# will just sit here forever if kubernetes returns this
# error.
logger.warn(exc.body)
logger.warn(f"Could not launch pod for {log_name}. Exceeded quota.")
self.update_model(task.pk, status='pending')
return
except Exception:
logger.exception(f"Unable to handle response from Kubernetes API for {log_name}.")
logger.exception(f"Error when launching pod for {log_name}")
self.update_model(task.pk, status='error', result_traceback=traceback.format_exc())
return
self.update_model(task.pk, execution_node=pod_manager.pod_name)
return pod_manager
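The 403 handling above keys off the JSON body of the Kubernetes API error: an "exceeded quota" message means "requeue as pending", anything else is a hard error. A standalone sketch of that check, exercised with a fabricated error body rather than a live `ApiException`:

```python
import json

def is_quota_error(status, body):
    # Mirrors the check in deploy_container_group_pod: a 403 whose JSON
    # message mentions an exceeded quota means "retry later", not "error".
    if status != 403:
        return False
    try:
        return 'exceeded quota' in json.loads(body)['message']
    except Exception:
        # Unparseable or unexpected body: treat as a non-quota failure.
        return False

fake_body = json.dumps(
    {'message': 'pods "x" is forbidden: exceeded quota: compute-resources'})
# is_quota_error(403, fake_body) -> True; any other status or body -> False
```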
@task()
class RunJob(BaseTask):
'''
@@ -1474,13 +1600,23 @@ class RunJob(BaseTask):
if authorize:
env['ANSIBLE_NET_AUTH_PASS'] = network_cred.get_input('authorize_password', default='')
for env_key, folder in (
('ANSIBLE_COLLECTIONS_PATHS', 'requirements_collections'),
('ANSIBLE_ROLES_PATH', 'requirements_roles')):
paths = []
path_vars = (
('ANSIBLE_COLLECTIONS_PATHS', 'collections_paths', 'requirements_collections', '~/.ansible/collections:/usr/share/ansible/collections'),
('ANSIBLE_ROLES_PATH', 'roles_path', 'requirements_roles', '~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles'))
config_values = read_ansible_config(job.project.get_project_path(), list(map(lambda x: x[1], path_vars)))
for env_key, config_setting, folder, default in path_vars:
paths = default.split(':')
if env_key in env:
paths.append(env[env_key])
paths.append(os.path.join(private_data_dir, folder))
for path in env[env_key].split(':'):
if path not in paths:
paths = [env[env_key]] + paths
elif config_setting in config_values:
for path in config_values[config_setting].split(':'):
if path not in paths:
paths = [config_values[config_setting]] + paths
paths = [os.path.join(private_data_dir, folder)] + paths
env[env_key] = os.pathsep.join(paths)
return env
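The loop above assembles each path variable so the job's requirements folder wins, followed by paths already set in the environment or in `ansible.cfg`, then the stock Ansible defaults. A simplified model of that precedence (the per-path dedup details differ slightly, and the paths are illustrative):

```python
import os

def build_search_path(private_dir, folder, env_value, default):
    # Highest precedence: the per-job requirements folder; then paths already
    # present in the environment; then the stock Ansible defaults.
    paths = default.split(':')
    if env_value:
        for path in env_value.split(':'):
            if path not in paths:
                paths = [path] + paths
    paths = [os.path.join(private_dir, folder)] + paths
    return os.pathsep.join(paths)

result = build_search_path(
    '/tmp/awx_123', 'requirements_roles', '/custom/roles',
    '~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles')
# '/tmp/awx_123/requirements_roles' sorts first, '/custom/roles' second
```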
@@ -1595,10 +1731,18 @@ class RunJob(BaseTask):
d[r'Vault password \({}\):\s*?$'.format(vault_id)] = k
return d
def should_use_resource_profiling(self, job):
'''
Return whether this task should use resource profiling
'''
return settings.AWX_RESOURCE_PROFILING_ENABLED
def should_use_proot(self, job):
'''
Return whether this task should use proot.
'''
if job.is_containerized:
return False
return getattr(settings, 'AWX_PROOT_ENABLED', False)
def pre_run_hook(self, job, private_data_dir):
@@ -1659,6 +1803,7 @@ class RunJob(BaseTask):
if job.is_isolated() is True:
pu_ig = pu_ig.controller
pu_en = settings.CLUSTER_HOST_ID
sync_metafields = dict(
launch_type="sync",
job_type='run',
@@ -1696,29 +1841,11 @@ class RunJob(BaseTask):
# up-to-date with project, job is running project current version
if job_revision:
job = self.update_model(job.pk, scm_revision=job_revision)
# copy the project directory
runner_project_folder = os.path.join(private_data_dir, 'project')
if job.project.scm_type == 'git':
git_repo = git.Repo(project_path)
if not os.path.exists(runner_project_folder):
os.mkdir(runner_project_folder)
tmp_branch_name = 'awx_internal/{}'.format(uuid4())
# always clone based on specific job revision
if not job.scm_revision:
raise RuntimeError('Unexpectedly could not determine a revision to run from project.')
source_branch = git_repo.create_head(tmp_branch_name, job.scm_revision)
# git clone must take file:// syntax for source repo or else options like depth will be ignored
source_as_uri = Path(project_path).as_uri()
git.Repo.clone_from(
source_as_uri, runner_project_folder, branch=source_branch,
depth=1, single_branch=True, # shallow, do not copy full history
recursive=True # include submodules
# Project update does not copy the folder, so copy here
RunProjectUpdate.make_local_copy(
project_path, os.path.join(private_data_dir, 'project'),
job.project.scm_type, job_revision
)
# force option is necessary because remote refs are not counted, although no information is lost
git_repo.delete_head(tmp_branch_name, force=True)
else:
copy_tree(project_path, runner_project_folder)
if job.inventory.kind == 'smart':
# cache smart inventory memberships so that the host_filter query is not
@@ -1737,8 +1864,9 @@ class RunJob(BaseTask):
os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'),
fact_modification_times,
)
if isolated_manager_instance:
if isolated_manager_instance and not job.is_containerized:
isolated_manager_instance.cleanup()
try:
inventory = job.inventory
except Inventory.DoesNotExist:
@@ -1830,6 +1958,24 @@ class RunProjectUpdate(BaseTask):
env['TMP'] = settings.AWX_PROOT_BASE_PATH
env['PROJECT_UPDATE_ID'] = str(project_update.pk)
env['ANSIBLE_CALLBACK_PLUGINS'] = self.get_path_to('..', 'plugins', 'callback')
env['ANSIBLE_GALAXY_IGNORE'] = True
# Set up the fallback server, which is the normal Ansible Galaxy by default
galaxy_servers = list(settings.FALLBACK_GALAXY_SERVERS)
# If private galaxy URL is non-blank, that means this feature is enabled
if settings.PRIMARY_GALAXY_URL:
galaxy_servers = [{'id': 'primary_galaxy'}] + galaxy_servers
for key in GALAXY_SERVER_FIELDS:
value = getattr(settings, 'PRIMARY_GALAXY_{}'.format(key.upper()))
if value:
galaxy_servers[0][key] = value
for server in galaxy_servers:
for key in GALAXY_SERVER_FIELDS:
if not server.get(key):
continue
env_key = ('ANSIBLE_GALAXY_SERVER_{}_{}'.format(server.get('id', 'unnamed'), key)).upper()
env[env_key] = server[key]
# now set the precedence of galaxy servers
env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join([server.get('id', 'unnamed') for server in galaxy_servers])
return env
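The loop above flattens each Galaxy server definition into `ANSIBLE_GALAXY_SERVER_<ID>_<KEY>` variables and records precedence in `ANSIBLE_GALAXY_SERVER_LIST`. A minimal sketch with hypothetical server entries (`GALAXY_FIELDS` stands in for the `GALAXY_SERVER_FIELDS` constant, whose exact members are assumed here):

```python
GALAXY_FIELDS = ('url', 'username', 'password', 'token')  # assumed field names

def galaxy_env(servers):
    env = {}
    for server in servers:
        for key in GALAXY_FIELDS:
            if not server.get(key):
                continue
            # e.g. ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_URL
            env_key = 'ANSIBLE_GALAXY_SERVER_{}_{}'.format(
                server.get('id', 'unnamed'), key).upper()
            env[env_key] = server[key]
    # precedence: servers are consulted in list order
    env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join(
        s.get('id', 'unnamed') for s in servers)
    return env

env = galaxy_env([
    {'id': 'primary_galaxy', 'url': 'https://galaxy.example.com/', 'token': 'abc'},
    {'id': 'galaxy', 'url': 'https://galaxy.ansible.com/'},
])
# env['ANSIBLE_GALAXY_SERVER_LIST'] == 'primary_galaxy,galaxy'
```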
def _build_scm_url_extra_vars(self, project_update):
@@ -1895,8 +2041,8 @@ class RunProjectUpdate(BaseTask):
extra_vars.update(extra_vars_new)
scm_branch = project_update.scm_branch
branch_override = bool(project_update.scm_branch != project_update.project.scm_branch)
if project_update.job_type == 'run' and scm_branch and (not branch_override):
branch_override = bool(scm_branch and project_update.scm_branch != project_update.project.scm_branch)
if project_update.job_type == 'run' and (not branch_override):
scm_branch = project_update.project.scm_revision
elif not scm_branch:
scm_branch = {'hg': 'tip'}.get(project_update.scm_type, 'HEAD')
@@ -2064,15 +2210,51 @@ class RunProjectUpdate(BaseTask):
git_repo = git.Repo(project_path)
self.original_branch = git_repo.active_branch
@staticmethod
def make_local_copy(project_path, destination_folder, scm_type, scm_revision):
if scm_type == 'git':
git_repo = git.Repo(project_path)
if not os.path.exists(destination_folder):
os.mkdir(destination_folder, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
tmp_branch_name = 'awx_internal/{}'.format(uuid4())
# always clone based on specific job revision
if not scm_revision:
raise RuntimeError('Unexpectedly could not determine a revision to run from project.')
source_branch = git_repo.create_head(tmp_branch_name, scm_revision)
# git clone must take file:// syntax for source repo or else options like depth will be ignored
source_as_uri = Path(project_path).as_uri()
git.Repo.clone_from(
source_as_uri, destination_folder, branch=source_branch,
depth=1, single_branch=True, # shallow, do not copy full history
)
# submodules are cloned in a loop because shallow copies from local HEADs are ideal
# and no `git clone` submodule options are compatible with the minimum git requirements
for submodule in git_repo.submodules:
subrepo_path = os.path.abspath(os.path.join(project_path, submodule.path))
subrepo_destination_folder = os.path.abspath(os.path.join(destination_folder, submodule.path))
subrepo_uri = Path(subrepo_path).as_uri()
git.Repo.clone_from(subrepo_uri, subrepo_destination_folder, depth=1, single_branch=True)
# force option is necessary because remote refs are not counted, although no information is lost
git_repo.delete_head(tmp_branch_name, force=True)
else:
copy_tree(project_path, destination_folder)
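`make_local_copy` above clones from a `file://` URI because, as its comment notes, git silently ignores options like `depth` for plain local paths. The conversion itself is just `pathlib`; a minimal sketch (the project path is illustrative):

```python
from pathlib import Path

def local_repo_uri(project_path):
    # git ignores --depth and similar options for plain local paths, so the
    # source repo is converted to a file:// URI before cloning, as in
    # make_local_copy above. project_path must be absolute.
    return Path(project_path).as_uri()

uri = local_repo_uri('/var/lib/awx/projects/_6__demo')
# On POSIX: 'file:///var/lib/awx/projects/_6__demo'
```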
def post_run_hook(self, instance, status):
if self.original_branch:
# for git project syncs, non-default branches can be problematic
# restore to branch the repo was on before this run
try:
self.original_branch.checkout()
except Exception:
# this could have failed due to dirty tree, but difficult to predict all cases
logger.exception('Failed to restore project repo to prior state after {}'.format(instance.log_format))
if self.job_private_data_dir:
# copy project folder before resetting to default branch
# because some git-tree-specific resources (like submodules) might matter
self.make_local_copy(
instance.get_project_path(check_if_exists=False), os.path.join(self.job_private_data_dir, 'project'),
instance.scm_type, self.playbook_new_revision
)
if self.original_branch:
# for git project syncs, non-default branches can be problematic
# restore to branch the repo was on before this run
try:
self.original_branch.checkout()
except Exception:
# this could have failed due to dirty tree, but difficult to predict all cases
logger.exception('Failed to restore project repo to prior state after {}'.format(instance.log_format))
self.release_lock(instance)
p = instance.project
if self.playbook_new_revision:
@@ -2234,7 +2416,7 @@ class RunInventoryUpdate(BaseTask):
getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper()),])
# Add arguments for the source inventory script
args.append('--source')
args.append(self.psuedo_build_inventory(inventory_update, private_data_dir))
args.append(self.pseudo_build_inventory(inventory_update, private_data_dir))
if src == 'custom':
args.append("--custom")
args.append('-v%d' % inventory_update.verbosity)
@@ -2245,7 +2427,7 @@ class RunInventoryUpdate(BaseTask):
def build_inventory(self, inventory_update, private_data_dir):
return None # what runner expects in order to not deal with inventory
def psuedo_build_inventory(self, inventory_update, private_data_dir):
def pseudo_build_inventory(self, inventory_update, private_data_dir):
"""Inventory imports are ran through a management command
we pass the inventory in args to that command, so this is not considered
to be "Ansible" inventory (by runner) even though it is
@@ -2518,6 +2700,8 @@ class RunAdHocCommand(BaseTask):
'''
Return whether this task should use proot.
'''
if ad_hoc_command.is_containerized:
return False
return getattr(settings, 'AWX_PROOT_ENABLED', False)
def final_run_hook(self, adhoc_job, status, private_data_dir, fact_modification_times, isolated_manager_instance=None):


@@ -154,12 +154,12 @@ def mk_job_template(name, job_type='run',
organization=None, inventory=None,
credential=None, network_credential=None,
cloud_credential=None, persisted=True, extra_vars='',
project=None, spec=None):
project=None, spec=None, webhook_service=''):
if extra_vars:
extra_vars = json.dumps(extra_vars)
jt = JobTemplate(name=name, job_type=job_type, extra_vars=extra_vars,
playbook='helloworld.yml')
webhook_service=webhook_service, playbook='helloworld.yml')
jt.inventory = inventory
if jt.inventory is None:
@@ -200,11 +200,13 @@ def mk_workflow_job(status='new', workflow_job_template=None, extra_vars={},
return job
def mk_workflow_job_template(name, extra_vars='', spec=None, organization=None, persisted=True):
def mk_workflow_job_template(name, extra_vars='', spec=None, organization=None, persisted=True,
webhook_service=''):
if extra_vars:
extra_vars = json.dumps(extra_vars)
wfjt = WorkflowJobTemplate(name=name, extra_vars=extra_vars, organization=organization)
wfjt = WorkflowJobTemplate(name=name, extra_vars=extra_vars, organization=organization,
webhook_service=webhook_service)
wfjt.survey_spec = spec
if wfjt.survey_spec:


@@ -197,7 +197,7 @@ def create_survey_spec(variables=None, default_type='integer', required=True, mi
#
def create_job_template(name, roles=None, persisted=True, **kwargs):
def create_job_template(name, roles=None, persisted=True, webhook_service='', **kwargs):
Objects = generate_objects(["job_template", "jobs",
"organization",
"inventory",
@@ -252,11 +252,10 @@ def create_job_template(name, roles=None, persisted=True, **kwargs):
else:
spec = None
jt = mk_job_template(name, project=proj,
inventory=inv, credential=cred,
jt = mk_job_template(name, project=proj, inventory=inv, credential=cred,
network_credential=net_cred, cloud_credential=cloud_cred,
job_type=job_type, spec=spec, extra_vars=extra_vars,
persisted=persisted)
persisted=persisted, webhook_service=webhook_service)
if 'jobs' in kwargs:
for i in kwargs['jobs']:
@@ -401,7 +400,7 @@ def generate_workflow_job_template_nodes(workflow_job_template,
# TODO: Implement survey and jobs
def create_workflow_job_template(name, organization=None, persisted=True, **kwargs):
def create_workflow_job_template(name, organization=None, persisted=True, webhook_service='', **kwargs):
Objects = generate_objects(["workflow_job_template",
"workflow_job_template_nodes",
"survey",], kwargs)
@@ -418,7 +417,8 @@ def create_workflow_job_template(name, organization=None, persisted=True, **kwar
organization=organization,
spec=spec,
extra_vars=extra_vars,
persisted=persisted)
persisted=persisted,
webhook_service=webhook_service)


@@ -496,9 +496,6 @@ def test_falsey_field_data(get, post, organization, admin, field_value):
@pytest.mark.django_db
@pytest.mark.parametrize('kind, extraneous', [
['ssh', 'ssh_key_unlock'],
['scm', 'ssh_key_unlock'],
['net', 'ssh_key_unlock'],
['net', 'authorize_password'],
])
def test_field_dependencies(get, post, organization, admin, kind, extraneous):


@@ -127,3 +127,53 @@ def test_post_wfjt_running_notification(get, post, admin, notification_template,
response = get(url, admin)
assert response.status_code == 200
assert len(response.data['results']) == 1
@pytest.mark.django_db
def test_search_on_notification_configuration_is_prevented(get, admin):
url = reverse('api:notification_template_list')
response = get(url, {'notification_configuration__regex': 'ABCDEF'}, admin)
assert response.status_code == 403
assert response.data == {"detail": "Filtering on notification_configuration is not allowed."}
@pytest.mark.django_db
def test_get_wfjt_approval_notification(get, admin, workflow_job_template):
url = reverse('api:workflow_job_template_notification_templates_approvals_list', kwargs={'pk': workflow_job_template.pk})
response = get(url, admin)
assert response.status_code == 200
assert len(response.data['results']) == 0
@pytest.mark.django_db
def test_post_wfjt_approval_notification(get, post, admin, notification_template, workflow_job_template):
url = reverse('api:workflow_job_template_notification_templates_approvals_list', kwargs={'pk': workflow_job_template.pk})
response = post(url,
dict(id=notification_template.id,
associate=True),
admin)
assert response.status_code == 204
response = get(url, admin)
assert response.status_code == 200
assert len(response.data['results']) == 1
@pytest.mark.django_db
def test_get_org_approval_notification(get, admin, organization):
url = reverse('api:organization_notification_templates_approvals_list', kwargs={'pk': organization.pk})
response = get(url, admin)
assert response.status_code == 200
assert len(response.data['results']) == 0
@pytest.mark.django_db
def test_post_org_approval_notification(get, post, admin, notification_template, organization):
url = reverse('api:organization_notification_templates_approvals_list', kwargs={'pk': organization.pk})
response = post(url,
dict(id=notification_template.id,
associate=True),
admin)
assert response.status_code == 204
response = get(url, admin)
assert response.status_code == 200
assert len(response.data['results']) == 1


@@ -0,0 +1,260 @@
import pytest
from awx.api.versioning import reverse
from awx.main.models.mixins import WebhookTemplateMixin
from awx.main.models.credential import Credential, CredentialType
@pytest.mark.django_db
@pytest.mark.parametrize(
"user_role, expect", [
('superuser', 200),
('org admin', 200),
('jt admin', 200),
('jt execute', 403),
('org member', 403),
]
)
def test_get_webhook_key_jt(organization_factory, job_template_factory, get, user_role, expect):
objs = organization_factory("org", superusers=['admin'], users=['user'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
if user_role == 'superuser':
user = objs.superusers.admin
else:
user = objs.users.user
grant_obj = objs.organization if user_role.startswith('org') else jt
getattr(grant_obj, '{}_role'.format(user_role.split()[1])).members.add(user)
url = reverse('api:webhook_key', kwargs={'model_kwarg': 'job_templates', 'pk': jt.pk})
response = get(url, user=user, expect=expect)
if expect < 400:
assert response.data == {'webhook_key': ''}
@pytest.mark.django_db
@pytest.mark.parametrize(
"user_role, expect", [
('superuser', 200),
('org admin', 200),
('jt admin', 200),
('jt execute', 403),
('org member', 403),
]
)
def test_get_webhook_key_wfjt(organization_factory, workflow_job_template_factory, get, user_role, expect):
objs = organization_factory("org", superusers=['admin'], users=['user'])
wfjt = workflow_job_template_factory("wfjt", organization=objs.organization).workflow_job_template
if user_role == 'superuser':
user = objs.superusers.admin
else:
user = objs.users.user
grant_obj = objs.organization if user_role.startswith('org') else wfjt
getattr(grant_obj, '{}_role'.format(user_role.split()[1])).members.add(user)
url = reverse('api:webhook_key', kwargs={'model_kwarg': 'workflow_job_templates', 'pk': wfjt.pk})
response = get(url, user=user, expect=expect)
if expect < 400:
assert response.data == {'webhook_key': ''}
@pytest.mark.django_db
@pytest.mark.parametrize(
"user_role, expect", [
('superuser', 201),
('org admin', 201),
('jt admin', 201),
('jt execute', 403),
('org member', 403),
]
)
def test_post_webhook_key_jt(organization_factory, job_template_factory, post, user_role, expect):
objs = organization_factory("org", superusers=['admin'], users=['user'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
if user_role == 'superuser':
user = objs.superusers.admin
else:
user = objs.users.user
grant_obj = objs.organization if user_role.startswith('org') else jt
getattr(grant_obj, '{}_role'.format(user_role.split()[1])).members.add(user)
url = reverse('api:webhook_key', kwargs={'model_kwarg': 'job_templates', 'pk': jt.pk})
response = post(url, {}, user=user, expect=expect)
if expect < 400:
assert bool(response.data.get('webhook_key'))
@pytest.mark.django_db
@pytest.mark.parametrize(
"user_role, expect", [
('superuser', 201),
('org admin', 201),
('jt admin', 201),
('jt execute', 403),
('org member', 403),
]
)
def test_post_webhook_key_wfjt(organization_factory, workflow_job_template_factory, post, user_role, expect):
objs = organization_factory("org", superusers=['admin'], users=['user'])
wfjt = workflow_job_template_factory("wfjt", organization=objs.organization).workflow_job_template
if user_role == 'superuser':
user = objs.superusers.admin
else:
user = objs.users.user
grant_obj = objs.organization if user_role.startswith('org') else wfjt
getattr(grant_obj, '{}_role'.format(user_role.split()[1])).members.add(user)
url = reverse('api:webhook_key', kwargs={'model_kwarg': 'workflow_job_templates', 'pk': wfjt.pk})
response = post(url, {}, user=user, expect=expect)
if expect < 400:
assert bool(response.data.get('webhook_key'))
@pytest.mark.django_db
@pytest.mark.parametrize(
"service", [s for s, _ in WebhookTemplateMixin.SERVICES]
)
def test_set_webhook_service(organization_factory, job_template_factory, patch, service):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
admin = objs.superusers.admin
assert (jt.webhook_service, jt.webhook_key) == ('', '')
url = reverse('api:job_template_detail', kwargs={'pk': jt.pk})
patch(url, {'webhook_service': service}, user=admin, expect=200)
jt.refresh_from_db()
assert jt.webhook_service == service
assert jt.webhook_key != ''
@pytest.mark.django_db
@pytest.mark.parametrize(
"service", [s for s, _ in WebhookTemplateMixin.SERVICES]
)
def test_unset_webhook_service(organization_factory, job_template_factory, patch, service):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization, webhook_service=service,
inventory='test_inv', project='test_proj').job_template
admin = objs.superusers.admin
assert jt.webhook_service == service
assert jt.webhook_key != ''
url = reverse('api:job_template_detail', kwargs={'pk': jt.pk})
patch(url, {'webhook_service': ''}, user=admin, expect=200)
jt.refresh_from_db()
assert (jt.webhook_service, jt.webhook_key) == ('', '')
@pytest.mark.django_db
@pytest.mark.parametrize(
"service", [s for s, _ in WebhookTemplateMixin.SERVICES]
)
def test_set_webhook_credential(organization_factory, job_template_factory, patch, service):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization, webhook_service=service,
inventory='test_inv', project='test_proj').job_template
admin = objs.superusers.admin
assert jt.webhook_service == service
assert jt.webhook_key != ''
cred_type = CredentialType.defaults['{}_token'.format(service)]()
cred_type.save()
cred = Credential.objects.create(credential_type=cred_type, name='test-cred',
inputs={'token': 'secret'})
url = reverse('api:job_template_detail', kwargs={'pk': jt.pk})
patch(url, {'webhook_credential': cred.pk}, user=admin, expect=200)
jt.refresh_from_db()
assert jt.webhook_service == service
assert jt.webhook_key != ''
assert jt.webhook_credential == cred
@pytest.mark.django_db
@pytest.mark.parametrize(
"service,token", [
(s, WebhookTemplateMixin.SERVICES[i - 1][0]) for i, (s, _) in enumerate(WebhookTemplateMixin.SERVICES)
]
)
def test_set_wrong_service_webhook_credential(organization_factory, job_template_factory, patch, service, token):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization, webhook_service=service,
inventory='test_inv', project='test_proj').job_template
admin = objs.superusers.admin
assert jt.webhook_service == service
assert jt.webhook_key != ''
cred_type = CredentialType.defaults['{}_token'.format(token)]()
cred_type.save()
cred = Credential.objects.create(credential_type=cred_type, name='test-cred',
inputs={'token': 'secret'})
url = reverse('api:job_template_detail', kwargs={'pk': jt.pk})
response = patch(url, {'webhook_credential': cred.pk}, user=admin, expect=400)
jt.refresh_from_db()
assert jt.webhook_service == service
assert jt.webhook_key != ''
assert jt.webhook_credential is None
assert response.data == {'webhook_credential': ["Must match the selected webhook service."]}
@pytest.mark.django_db
@pytest.mark.parametrize(
"service", [s for s, _ in WebhookTemplateMixin.SERVICES]
)
def test_set_webhook_credential_without_service(organization_factory, job_template_factory, patch, service):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
admin = objs.superusers.admin
assert jt.webhook_service == ''
assert jt.webhook_key == ''
cred_type = CredentialType.defaults['{}_token'.format(service)]()
cred_type.save()
cred = Credential.objects.create(credential_type=cred_type, name='test-cred',
inputs={'token': 'secret'})
url = reverse('api:job_template_detail', kwargs={'pk': jt.pk})
response = patch(url, {'webhook_credential': cred.pk}, user=admin, expect=400)
jt.refresh_from_db()
assert jt.webhook_service == ''
assert jt.webhook_key == ''
assert jt.webhook_credential is None
assert response.data == {'webhook_credential': ["Must match the selected webhook service."]}
@pytest.mark.django_db
@pytest.mark.parametrize(
"service", [s for s, _ in WebhookTemplateMixin.SERVICES]
)
def test_unset_webhook_service_with_credential(organization_factory, job_template_factory, patch, service):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization, webhook_service=service,
inventory='test_inv', project='test_proj').job_template
admin = objs.superusers.admin
assert jt.webhook_service == service
assert jt.webhook_key != ''
cred_type = CredentialType.defaults['{}_token'.format(service)]()
cred_type.save()
cred = Credential.objects.create(credential_type=cred_type, name='test-cred',
inputs={'token': 'secret'})
jt.webhook_credential = cred
jt.save()
url = reverse('api:job_template_detail', kwargs={'pk': jt.pk})
response = patch(url, {'webhook_service': ''}, user=admin, expect=400)
jt.refresh_from_db()
assert jt.webhook_service == service
assert jt.webhook_key != ''
assert jt.webhook_credential == cred
assert response.data == {'webhook_credential': ["Must match the selected webhook service."]}


@@ -203,6 +203,13 @@ def organization(instance):
return Organization.objects.create(name="test-org", description="test-org-desc")
@pytest.fixture
def credentialtype_kube():
kube = CredentialType.defaults['kubernetes_bearer_token']()
kube.save()
return kube
@pytest.fixture
def credentialtype_ssh():
ssh = CredentialType.defaults['ssh']()
@@ -336,6 +343,12 @@ def other_external_credential(credentialtype_external):
inputs={'url': 'http://testhost.com', 'token': 'secret2'})
@pytest.fixture
def kube_credential(credentialtype_kube):
return Credential.objects.create(credential_type=credentialtype_kube, name='kube-cred',
inputs={'host': 'my.cluster', 'bearer_token': 'my-token', 'verify_ssl': False})
@pytest.fixture
def inventory(organization):
return organization.inventories.create(name="test-inv")


@@ -147,6 +147,39 @@ class TestMetaVars:
assert data['awx_schedule_id'] == schedule.pk
assert 'awx_user_name' not in data
def test_scheduled_workflow_job_node_metavars(self, workflow_job_template):
schedule = Schedule.objects.create(
name='job-schedule',
rrule='DTSTART:20171129T155939z\nFREQ=MONTHLY',
unified_job_template=workflow_job_template
)
workflow_job = WorkflowJob.objects.create(
name='workflow-job',
workflow_job_template=workflow_job_template,
schedule=schedule
)
job = Job.objects.create(
launch_type='workflow'
)
workflow_job.workflow_nodes.create(job=job)
assert job.awx_meta_vars() == {
'awx_job_id': job.id,
'tower_job_id': job.id,
'awx_job_launch_type': 'workflow',
'tower_job_launch_type': 'workflow',
'awx_workflow_job_name': 'workflow-job',
'tower_workflow_job_name': 'workflow-job',
'awx_workflow_job_id': workflow_job.id,
'tower_workflow_job_id': workflow_job.id,
'awx_parent_job_schedule_id': schedule.id,
'tower_parent_job_schedule_id': schedule.id,
'awx_parent_job_schedule_name': 'job-schedule',
'tower_parent_job_schedule_name': 'job-schedule',
}
@pytest.mark.django_db
def test_event_processing_not_finished():


@@ -1,6 +1,8 @@
# Python
import pytest
from unittest import mock
import json
# AWX
from awx.main.models.workflow import (
@@ -248,7 +250,6 @@ class TestWorkflowJobTemplate:
test_view = WorkflowJobTemplateNodeSuccessNodesList()
nodes = wfjt.workflow_job_template_nodes.all()
# test cycle validation
print(nodes[0].success_nodes.get(id=nodes[1].id).failure_nodes.get(id=nodes[2].id))
assert test_view.is_valid_relation(nodes[2], nodes[0]) == {'Error': 'Cycle detected.'}
def test_always_success_failure_creation(self, wfjt, admin, get):
@@ -270,6 +271,103 @@ class TestWorkflowJobTemplate:
wfjt2.validate_unique()
@pytest.mark.django_db
class TestWorkflowJobTemplatePrompts:
"""These are tests for prompts that live on the workflow job template model
not the node, prompts apply for entire workflow
"""
@pytest.fixture
def wfjt_prompts(self):
return WorkflowJobTemplate.objects.create(
ask_inventory_on_launch=True,
ask_variables_on_launch=True,
ask_limit_on_launch=True,
ask_scm_branch_on_launch=True
)
@pytest.fixture
def prompts_data(self, inventory):
return dict(
inventory=inventory,
extra_vars={'foo': 'bar'},
limit='webservers',
scm_branch='release-3.3'
)
def test_apply_workflow_job_prompts(self, workflow_job_template, wfjt_prompts, prompts_data, inventory):
# null or empty fields used
workflow_job = workflow_job_template.create_unified_job()
assert workflow_job.limit is None
assert workflow_job.inventory is None
assert workflow_job.scm_branch is None
# fields from prompts used
workflow_job = workflow_job_template.create_unified_job(**prompts_data)
assert json.loads(workflow_job.extra_vars) == {'foo': 'bar'}
assert workflow_job.limit == 'webservers'
assert workflow_job.inventory == inventory
assert workflow_job.scm_branch == 'release-3.3'
# non-null fields from WFJT used
workflow_job_template.inventory = inventory
workflow_job_template.limit = 'fooo'
workflow_job_template.scm_branch = 'bar'
workflow_job = workflow_job_template.create_unified_job()
assert workflow_job.limit == 'fooo'
assert workflow_job.inventory == inventory
assert workflow_job.scm_branch == 'bar'
@pytest.mark.django_db
def test_process_workflow_job_prompts(self, inventory, workflow_job_template, wfjt_prompts, prompts_data):
accepted, rejected, errors = workflow_job_template._accept_or_ignore_job_kwargs(**prompts_data)
assert accepted == {}
assert rejected == prompts_data
assert errors
accepted, rejected, errors = wfjt_prompts._accept_or_ignore_job_kwargs(**prompts_data)
assert accepted == prompts_data
assert rejected == {}
assert not errors
@pytest.mark.django_db
def test_set_all_the_prompts(self, post, organization, inventory, org_admin):
r = post(
url = reverse('api:workflow_job_template_list'),
data = dict(
name='My new workflow',
organization=organization.id,
inventory=inventory.id,
limit='foooo',
ask_limit_on_launch=True,
scm_branch='bar',
ask_scm_branch_on_launch=True
),
user = org_admin,
expect = 201
)
wfjt = WorkflowJobTemplate.objects.get(id=r.data['id'])
assert wfjt.char_prompts == {
'limit': 'foooo', 'scm_branch': 'bar'
}
assert wfjt.ask_scm_branch_on_launch is True
assert wfjt.ask_limit_on_launch is True
launch_url = r.data['related']['launch']
with mock.patch('awx.main.queue.CallbackQueueDispatcher.dispatch', lambda self, obj: None):
r = post(
url = launch_url,
data = dict(
scm_branch = 'prompt_branch',
limit = 'prompt_limit'
),
user = org_admin,
expect=201
)
assert r.data['limit'] == 'prompt_limit'
assert r.data['scm_branch'] == 'prompt_branch'
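The precedence the tests above exercise — a launch-time prompt only wins when the field is promptable, otherwise the template's saved value (possibly null) is used — can be condensed into a sketch; `resolve_prompt` is a hypothetical helper for illustration, not AWX API:

```python
def resolve_prompt(ask_on_launch, passed, saved):
    """Pick the effective value for one promptable field.

    A launch-time value only counts when the template allows prompting
    for that field; otherwise the template's saved value (or None) wins.
    """
    if ask_on_launch and passed is not None:
        return passed
    return saved

# Mirrors the asserts above: prompts beat saved values, saved values
# beat null, and non-promptable fields reject launch-time input.
assert resolve_prompt(True, 'prompt_limit', 'fooo') == 'prompt_limit'
assert resolve_prompt(True, None, 'fooo') == 'fooo'
assert resolve_prompt(False, 'prompt_limit', None) is None
```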
@pytest.mark.django_db
def test_workflow_ancestors(organization):
# Spawn order of templates grandparent -> parent -> child


@@ -0,0 +1,56 @@
import subprocess
import yaml
import base64
from unittest import mock # noqa
import pytest
from awx.main.scheduler.kubernetes import PodManager
from awx.main.utils import (
create_temporary_fifo,
)
@pytest.fixture
def containerized_job(default_instance_group, kube_credential, job_template_factory):
default_instance_group.credential = kube_credential
default_instance_group.save()
objects = job_template_factory('jt', organization='org1', project='proj',
inventory='inv', credential='cred',
jobs=['my_job'])
jt = objects.job_template
jt.instance_groups.add(default_instance_group)
j1 = objects.jobs['my_job']
j1.instance_group = default_instance_group
j1.status = 'pending'
j1.save()
return j1
@pytest.mark.django_db
def test_containerized_job(containerized_job):
assert containerized_job.is_containerized
assert containerized_job.instance_group.is_containerized
assert containerized_job.instance_group.credential.kubernetes
@pytest.mark.django_db
def test_kubectl_ssl_verification(containerized_job):
cred = containerized_job.instance_group.credential
cred.inputs['verify_ssl'] = True
key_material = subprocess.run('openssl genrsa 2> /dev/null',
shell=True, check=True,
stdout=subprocess.PIPE)
key = create_temporary_fifo(key_material.stdout)
cmd = f"""
openssl req -x509 -sha256 -new -nodes \
-key {key} -subj '/C=US/ST=North Carolina/L=Durham/O=Ansible/OU=AWX Development/CN=awx.localhost'
"""
cert = subprocess.run(cmd.strip(), shell=True, check=True, stdout=subprocess.PIPE)
cred.inputs['ssl_ca_cert'] = cert.stdout
cred.save()
pm = PodManager(containerized_job)
config = yaml.load(open(pm.kube_config), Loader=yaml.FullLoader)
ca_data = config['clusters'][0]['cluster']['certificate-authority-data']
assert cert.stdout == base64.b64decode(ca_data.encode())
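The round trip asserted above — PEM bytes in, base64 `certificate-authority-data` out of the generated kubeconfig — can be shown without invoking openssl. The PEM content here is a placeholder, not a real certificate:

```python
import base64

# Placeholder PEM bytes standing in for the openssl-generated cert above
pem = b'-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----\n'

# kubeconfig stores the CA inline as base64-encoded text
ca_data = base64.b64encode(pem).decode()
cluster = {'cluster': {'certificate-authority-data': ca_data}}

# What the test verifies: decoding the kubeconfig field recovers the
# exact bytes that were written to the credential.
decoded = base64.b64decode(cluster['cluster']['certificate-authority-data'].encode())
```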


@@ -82,9 +82,12 @@ def test_default_cred_types():
'cloudforms',
'conjur',
'gce',
'github_token',
'gitlab_token',
'hashivault_kv',
'hashivault_ssh',
'insights',
'kubernetes_bearer_token',
'net',
'openstack',
'rhv',
@@ -137,7 +140,6 @@ def test_credential_creation(organization_factory):
[PKCS8_PRIVATE_KEY, None, True], # unencrypted PKCS8 key, no unlock pass
[PKCS8_PRIVATE_KEY, 'passme', False], # unencrypted PKCS8 key, unlock pass
[None, None, True], # no key, no unlock pass
[None, 'super-secret', False], # no key, unlock pass
['INVALID-KEY-DATA', None, False], # invalid key data
[EXAMPLE_PRIVATE_KEY.replace('=', '\u003d'), None, True], # automatically fix JSON-encoded GCE keys
])


@@ -262,7 +262,6 @@ def test_inventory_update_injected_content(this_kind, script_or_plugin, inventor
"""
private_data_dir = envvars.pop('AWX_PRIVATE_DATA_DIR')
assert envvars.pop('ANSIBLE_INVENTORY_ENABLED') == ('auto' if use_plugin else 'script')
assert envvars.pop('ANSIBLE_COLLECTIONS_PATHS') == os.pathsep.join(settings.AWX_ANSIBLE_COLLECTIONS_PATHS)
set_files = bool(os.getenv("MAKE_INVENTORY_REFERENCE_FILES", 'false').lower()[0] not in ['f', '0'])
env, content = read_content(private_data_dir, envvars, inventory_update)
base_dir = os.path.join(DATA, script_or_plugin)


@@ -2,7 +2,7 @@ import pytest
from unittest import mock
import json
from awx.main.models import Job, Instance
from awx.main.models import Job, Instance, JobHostSummary
from awx.main.tasks import cluster_node_heartbeat
from django.test.utils import override_settings
@@ -47,6 +47,24 @@ def test_job_notification_data(inventory, machine_credential, project):
assert json.loads(notification_data['extra_vars'])['SSN'] == encrypted_str
@pytest.mark.django_db
def test_job_notification_host_data(inventory, machine_credential, project, job_template, host):
job = Job.objects.create(
job_template=job_template, inventory=inventory, name='hi world', project=project
)
JobHostSummary.objects.create(job=job, host=host, changed=1, dark=2, failures=3, ok=4, processed=3, skipped=2, rescued=1, ignored=0)
assert job.notification_data()['hosts'] == {'single-host':
{'failed': True,
'changed': 1,
'dark': 2,
'failures': 3,
'ok': 4,
'processed': 3,
'skipped': 2,
'rescued': 1,
'ignored': 0}}
@pytest.mark.django_db
class TestLaunchConfig:


@@ -150,3 +150,24 @@ def test_org_admin_edit_sys_auditor(org_admin, alice, organization):
organization.member_role.members.add(alice)
access = UserAccess(org_admin)
assert not access.can_change(obj=alice, data=dict(is_system_auditor='true'))
@pytest.mark.django_db
def test_org_admin_can_delete_orphan(org_admin, alice):
access = UserAccess(org_admin)
assert access.can_delete(alice)
@pytest.mark.django_db
def test_org_admin_can_delete_group_member(org_admin, org_member):
access = UserAccess(org_admin)
assert access.can_delete(org_member)
@pytest.mark.django_db
def test_org_admin_cannot_delete_member_attached_to_other_group(org_admin, org_member):
other_org = Organization.objects.create(name="other-org", description="other-org-desc")
access = UserAccess(org_admin)
other_org.member_role.members.add(org_member)
assert not access.can_delete(org_member)


@@ -29,6 +29,7 @@ def job_template(mocker):
mock_jt.pk = 5
mock_jt.host_config_key = '9283920492'
mock_jt.validation_errors = mock_JT_resource_data
mock_jt.webhook_service = ''
return mock_jt
@@ -50,6 +51,7 @@ class TestJobTemplateSerializerGetRelated():
'schedules',
'activity_stream',
'launch',
'webhook_key',
'notification_templates_started',
'notification_templates_success',
'notification_templates_error',


@@ -32,6 +32,7 @@ class TestWorkflowJobTemplateSerializerGetRelated():
'workflow_jobs',
'launch',
'workflow_nodes',
'webhook_key',
])
def test_get_related(self, mocker, test_get_related, workflow_job_template, related_resource_name):
test_get_related(WorkflowJobTemplateSerializer,


@@ -3,7 +3,7 @@ import pytest
from awx.main.models.jobs import JobTemplate
from awx.main.models import Inventory, CredentialType, Credential, Project
from awx.main.models.workflow import (
WorkflowJobTemplate, WorkflowJobTemplateNode, WorkflowJobOptions,
WorkflowJobTemplate, WorkflowJobTemplateNode,
WorkflowJob, WorkflowJobNode
)
from unittest import mock
@@ -33,11 +33,11 @@ class TestWorkflowJobInheritNodesMixin():
def test__create_workflow_job_nodes(self, mocker, job_template_nodes):
workflow_job_node_create = mocker.patch('awx.main.models.WorkflowJobTemplateNode.create_workflow_job_node')
mixin = WorkflowJobOptions()
mixin._create_workflow_nodes(job_template_nodes)
workflow_job = WorkflowJob()
workflow_job._create_workflow_nodes(job_template_nodes)
for job_template_node in job_template_nodes:
workflow_job_node_create.assert_any_call(workflow_job=mixin)
workflow_job_node_create.assert_any_call(workflow_job=workflow_job)
class TestMapWorkflowJobNodes():
@pytest.fixture
@@ -236,4 +236,4 @@ class TestWorkflowJobNodeJobKWARGS:
def test_get_ask_mapping_integrity():
assert list(WorkflowJobTemplate.get_ask_mapping().keys()) == ['extra_vars', 'inventory']
assert list(WorkflowJobTemplate.get_ask_mapping().keys()) == ['extra_vars', 'inventory', 'limit', 'scm_branch']


@@ -0,0 +1,55 @@
import pytest
from unittest import mock
from django.conf import settings
from awx.main.models import (
InstanceGroup,
Job,
JobTemplate,
Project,
Inventory,
)
from awx.main.scheduler.kubernetes import PodManager
@pytest.fixture
def container_group():
instance_group = mock.Mock(InstanceGroup(name='container-group'))
return instance_group
@pytest.fixture
def job(container_group):
return Job(pk=1,
id=1,
project=Project(),
instance_group=container_group,
inventory=Inventory(),
job_template=JobTemplate(id=1, name='foo'))
def test_default_pod_spec(job):
default_image = PodManager(job).pod_definition['spec']['containers'][0]['image']
assert default_image == settings.AWX_CONTAINER_GROUP_DEFAULT_IMAGE
def test_custom_pod_spec(job):
job.instance_group.pod_spec_override = """
spec:
containers:
- image: my-custom-image
"""
custom_image = PodManager(job).pod_definition['spec']['containers'][0]['image']
assert custom_image == 'my-custom-image'
def test_pod_manager_namespace_property(job):
pm = PodManager(job)
assert pm.namespace == settings.AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE
job.instance_group.pod_spec_override = """
metadata:
namespace: my-namespace
"""
assert PodManager(job).namespace == 'my-namespace'
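The `pod_spec_override` behavior exercised above amounts to overlaying a user-supplied spec onto the default pod definition. A minimal sketch of such a merge — an illustration under assumed semantics, not the actual `PodManager` implementation (note the test shows list values, like `containers`, replaced wholesale rather than merged):

```python
def deep_merge(base, override):
    """Recursively overlay `override` onto `base`: dicts merge
    key-by-key, while lists and scalars are replaced wholesale."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Illustrative default; the real image comes from
# settings.AWX_CONTAINER_GROUP_DEFAULT_IMAGE.
default_pod = {
    'metadata': {'namespace': 'default'},
    'spec': {'containers': [{'image': 'default-image'}]},
}
override = {'metadata': {'namespace': 'my-namespace'}}
merged = deep_merge(default_pod, override)
```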


@@ -11,6 +11,7 @@ class FakeObject(object):
class Job(FakeObject):
task_impact = 43
is_containerized = False
def log_format(self):
return 'job 382 (fake)'


@@ -130,6 +130,8 @@ def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker
('VMWARE_PASSWORD', 'SECRET'),
('API_SECRET', 'SECRET'),
('CALLBACK_CONNECTION', 'amqp://tower:password@localhost:5672/tower'),
('ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_PASSWORD', 'SECRET'),
('ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_TOKEN', 'SECRET'),
])
def test_safe_env_filtering(key, value):
assert build_safe_env({key: value})[key] == tasks.HIDDEN_PASSWORD
@@ -366,6 +368,7 @@ class TestGenericRun():
task = tasks.RunJob()
task.update_model = mock.Mock(return_value=job)
task.model.objects.get = mock.Mock(return_value=job)
task.build_private_data_files = mock.Mock(side_effect=OSError())
with mock.patch('awx.main.tasks.copy_tree'):
@@ -385,6 +388,7 @@ class TestGenericRun():
task = tasks.RunJob()
task.update_model = mock.Mock(wraps=update_model_wrapper)
task.model.objects.get = mock.Mock(return_value=job)
task.build_private_data_files = mock.Mock()
with mock.patch('awx.main.tasks.copy_tree'):
@@ -444,7 +448,6 @@ class TestGenericRun():
settings.AWX_PROOT_HIDE_PATHS = ['/AWX_PROOT_HIDE_PATHS1', '/AWX_PROOT_HIDE_PATHS2']
settings.ANSIBLE_VENV_PATH = '/ANSIBLE_VENV_PATH'
settings.AWX_VENV_PATH = '/AWX_VENV_PATH'
settings.AWX_ANSIBLE_COLLECTIONS_PATHS = ['/AWX_COLLECTION_PATH1', '/AWX_COLLECTION_PATH2']
process_isolation_params = task.build_params_process_isolation(job, private_data_dir, cwd)
assert True is process_isolation_params['process_isolation']
@@ -454,10 +457,6 @@ class TestGenericRun():
"The per-job private data dir should be in the list of directories the user can see."
assert cwd in process_isolation_params['process_isolation_show_paths'], \
"The current working directory should be in the list of directories the user can see."
assert '/AWX_COLLECTION_PATH1' in process_isolation_params['process_isolation_show_paths'], \
"AWX global collection directory 1 of 2 should get added to the list of directories the user can see."
assert '/AWX_COLLECTION_PATH2' in process_isolation_params['process_isolation_show_paths'], \
"AWX global collection directory 2 of 2 should get added to the list of directories the user can see."
for p in [settings.AWX_PROOT_BASE_PATH,
'/etc/tower',
@@ -474,6 +473,36 @@ class TestGenericRun():
assert '/AWX_VENV_PATH' in process_isolation_params['process_isolation_ro_paths']
assert 2 == len(process_isolation_params['process_isolation_ro_paths'])
@mock.patch('os.makedirs')
def test_build_params_resource_profiling(self, os_makedirs):
job = Job(project=Project(), inventory=Inventory())
task = tasks.RunJob()
task.should_use_resource_profiling = lambda job: True
task.instance = job
resource_profiling_params = task.build_params_resource_profiling(task.instance, '/fake_private_data_dir')
assert resource_profiling_params['resource_profiling'] is True
assert resource_profiling_params['resource_profiling_base_cgroup'] == 'ansible-runner'
assert resource_profiling_params['resource_profiling_cpu_poll_interval'] == '0.25'
assert resource_profiling_params['resource_profiling_memory_poll_interval'] == '0.25'
assert resource_profiling_params['resource_profiling_pid_poll_interval'] == '0.25'
assert resource_profiling_params['resource_profiling_results_dir'] == '/fake_private_data_dir/artifacts/playbook_profiling'
@pytest.mark.parametrize("scenario, profiling_enabled", [
('global_setting', True),
('default', False)])
def test_should_use_resource_profiling(self, scenario, profiling_enabled, settings):
job = Job(project=Project(), inventory=Inventory())
task = tasks.RunJob()
task.instance = job
if scenario == 'global_setting':
settings.AWX_RESOURCE_PROFILING_ENABLED = True
assert task.should_use_resource_profiling(task.instance) == profiling_enabled
def test_created_by_extra_vars(self):
job = Job(created_by=User(pk=123, username='angry-spud'))
@@ -517,20 +546,6 @@ class TestGenericRun():
env = task.build_env(job, private_data_dir)
assert env['FOO'] == 'BAR'
def test_awx_task_env_respects_ansible_collections_paths(self, patch_Job, private_data_dir):
job = Job(project=Project(), inventory=Inventory())
task = tasks.RunJob()
task._write_extra_vars_file = mock.Mock()
with mock.patch('awx.main.tasks.settings.AWX_ANSIBLE_COLLECTIONS_PATHS', ['/AWX_COLLECTION_PATH']):
with mock.patch('awx.main.tasks.settings.AWX_TASK_ENV', {'ANSIBLE_COLLECTIONS_PATHS': '/MY_COLLECTION1:/MY_COLLECTION2'}):
env = task.build_env(job, private_data_dir)
used_paths = env['ANSIBLE_COLLECTIONS_PATHS'].split(':')
assert used_paths[-1].endswith('/requirements_collections')
used_paths.pop()
assert used_paths == ['/MY_COLLECTION1', '/MY_COLLECTION2', '/AWX_COLLECTION_PATH']
def test_valid_custom_virtualenv(self, patch_Job, private_data_dir):
job = Job(project=Project(), inventory=Inventory())
@@ -565,6 +580,7 @@ class TestAdhocRun(TestJobExecution):
task = tasks.RunAdHocCommand()
task.update_model = mock.Mock(wraps=adhoc_update_model_wrapper)
task.model.objects.get = mock.Mock(return_value=adhoc_job)
task.build_inventory = mock.Mock()
with pytest.raises(Exception):


@@ -5,11 +5,15 @@
import codecs
import re
import os
import logging
from itertools import islice
from configparser import ConfigParser
# Django
from django.utils.encoding import smart_str
logger = logging.getLogger('awx.main.utils.ansible')
__all__ = ['skip_directory', 'could_be_playbook', 'could_be_inventory']
@@ -97,3 +101,20 @@ def could_be_inventory(project_path, dir_path, filename):
except IOError:
return None
return inventory_rel_path
def read_ansible_config(project_path, variables_of_interest):
fnames = ['/etc/ansible/ansible.cfg']
if project_path:
fnames.insert(0, os.path.join(project_path, 'ansible.cfg'))
values = {}
try:
parser = ConfigParser()
parser.read(fnames)
if 'defaults' in parser:
for var in variables_of_interest:
if var in parser['defaults']:
values[var] = parser['defaults'][var]
except Exception:
logger.exception('Failed to read ansible configuration(s) {}'.format(fnames))
return values
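The lookup above can be exercised in isolation; this self-contained copy swaps the logger for a bare `pass`, and the config key is illustrative:

```python
import os
import tempfile
from configparser import ConfigParser

def read_ansible_config(project_path, variables_of_interest):
    # Files are read in this order; configparser lets later files
    # override earlier ones, and read() silently skips missing files.
    fnames = ['/etc/ansible/ansible.cfg']
    if project_path:
        fnames.insert(0, os.path.join(project_path, 'ansible.cfg'))
    values = {}
    try:
        parser = ConfigParser()
        parser.read(fnames)
        if 'defaults' in parser:
            for var in variables_of_interest:
                if var in parser['defaults']:
                    values[var] = parser['defaults'][var]
    except Exception:
        pass  # the real helper logs the failure and returns what it has
    return values

with tempfile.TemporaryDirectory() as project:
    with open(os.path.join(project, 'ansible.cfg'), 'w') as f:
        f.write('[defaults]\ncollections_paths = ./collections\n')
    found = read_ansible_config(project, ['collections_paths'])
```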


@@ -19,9 +19,14 @@ from functools import reduce, wraps
from decimal import Decimal
# Django
from django.core.exceptions import ObjectDoesNotExist
from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
from django.utils.translation import ugettext_lazy as _
from django.utils.functional import cached_property
from django.db.models.fields.related import ForeignObjectRel, ManyToManyField
from django.db.models.fields.related_descriptors import (
ForwardManyToOneDescriptor,
ManyToManyDescriptor
)
from django.db.models.query import QuerySet
from django.db.models import Q
@@ -33,18 +38,22 @@ from django.apps import apps
logger = logging.getLogger('awx.main.utils')
__all__ = ['get_object_or_400', 'camelcase_to_underscore', 'underscore_to_camelcase', 'memoize', 'memoize_delete',
'get_ansible_version', 'get_ssh_version', 'get_licenser', 'get_awx_version', 'update_scm_url',
'get_type_for_model', 'get_model_for_type', 'copy_model_by_class', 'region_sorting',
'copy_m2m_relationships', 'prefetch_page_capabilities', 'to_python_boolean',
'ignore_inventory_computed_fields', 'ignore_inventory_group_removal',
'_inventory_updates', 'get_pk_from_dict', 'getattrd', 'getattr_dne', 'NoDefaultProvided',
'get_current_apps', 'set_current_apps',
'extract_ansible_vars', 'get_search_fields', 'get_system_task_capacity', 'get_cpu_capacity', 'get_mem_capacity',
'wrap_args_with_proot', 'build_proot_temp_dir', 'check_proot_installed', 'model_to_dict',
'model_instance_diff', 'parse_yaml_or_json', 'RequireDebugTrueOrTest',
'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError', 'get_custom_venv_choices', 'get_external_account',
'task_manager_bulk_reschedule', 'schedule_task_manager', 'classproperty', 'create_temporary_fifo']
__all__ = [
'get_object_or_400', 'camelcase_to_underscore', 'underscore_to_camelcase', 'memoize',
'memoize_delete', 'get_ansible_version', 'get_ssh_version', 'get_licenser',
'get_awx_version', 'update_scm_url', 'get_type_for_model', 'get_model_for_type',
'copy_model_by_class', 'region_sorting', 'copy_m2m_relationships',
'prefetch_page_capabilities', 'to_python_boolean', 'ignore_inventory_computed_fields',
'ignore_inventory_group_removal', '_inventory_updates', 'get_pk_from_dict', 'getattrd',
'getattr_dne', 'NoDefaultProvided', 'get_current_apps', 'set_current_apps',
'extract_ansible_vars', 'get_search_fields', 'get_system_task_capacity',
'get_cpu_capacity', 'get_mem_capacity', 'wrap_args_with_proot', 'build_proot_temp_dir',
'check_proot_installed', 'model_to_dict', 'NullablePromptPseudoField',
'model_instance_diff', 'parse_yaml_or_json', 'RequireDebugTrueOrTest',
'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError',
'get_custom_venv_choices', 'get_external_account', 'task_manager_bulk_reschedule',
'schedule_task_manager', 'classproperty', 'create_temporary_fifo', 'truncate_stdout',
]
def get_object_or_400(klass, *args, **kwargs):
@@ -435,6 +444,39 @@ def model_to_dict(obj, serializer_mapping=None):
return attr_d
class CharPromptDescriptor:
"""Class used for identifying nullable launch config fields from class
ex. Schedule.limit
"""
def __init__(self, field):
self.field = field
class NullablePromptPseudoField:
"""
Interface for pseudo-property stored in `char_prompts` dict
Used in LaunchTimeConfig and submodels, defined here to avoid circular imports
"""
def __init__(self, field_name):
self.field_name = field_name
@cached_property
def field_descriptor(self):
return CharPromptDescriptor(self)
def __get__(self, instance, type=None):
if instance is None:
# for inspection on class itself
return self.field_descriptor
return instance.char_prompts.get(self.field_name, None)
def __set__(self, instance, value):
if value in (None, {}):
instance.char_prompts.pop(self.field_name, None)
else:
instance.char_prompts[self.field_name] = value
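A toy model showing how the pseudo-field above proxies reads and writes into the `char_prompts` dict; `FakeLaunchConfig` is an illustrative stand-in for `LaunchTimeConfig` submodels, and `field_descriptor` is inlined here rather than cached:

```python
# Trimmed copies of the classes above, usable outside Django
class CharPromptDescriptor:
    def __init__(self, field):
        self.field = field

class NullablePromptPseudoField:
    def __init__(self, field_name):
        self.field_name = field_name

    def __get__(self, instance, type=None):
        if instance is None:
            # class-level access, for inspection (e.g. Schedule.limit)
            return CharPromptDescriptor(self)
        return instance.char_prompts.get(self.field_name, None)

    def __set__(self, instance, value):
        if value in (None, {}):
            instance.char_prompts.pop(self.field_name, None)
        else:
            instance.char_prompts[self.field_name] = value

class FakeLaunchConfig:  # hypothetical stand-in for a launch config model
    limit = NullablePromptPseudoField('limit')

    def __init__(self):
        self.char_prompts = {}

cfg = FakeLaunchConfig()
cfg.limit = 'webservers'   # stored in the dict, not as an attribute
cfg.limit = None           # popped again; reads fall back to None
```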
def copy_model_by_class(obj1, Class2, fields, kwargs):
'''
Creates a new unsaved object of type Class2 using the fields from obj1
@@ -442,9 +484,10 @@ def copy_model_by_class(obj1, Class2, fields, kwargs):
'''
create_kwargs = {}
for field_name in fields:
# Foreign keys can be specified as field_name or field_name_id.
id_field_name = '%s_id' % field_name
if hasattr(obj1, id_field_name):
descriptor = getattr(Class2, field_name)
if isinstance(descriptor, ForwardManyToOneDescriptor): # ForeignKey
# Foreign keys can be specified as field_name or field_name_id.
id_field_name = '%s_id' % field_name
if field_name in kwargs:
value = kwargs[field_name]
elif id_field_name in kwargs:
@@ -454,15 +497,29 @@ def copy_model_by_class(obj1, Class2, fields, kwargs):
if hasattr(value, 'id'):
value = value.id
create_kwargs[id_field_name] = value
elif isinstance(descriptor, CharPromptDescriptor):
# difficult case of copying one launch config to another launch config
new_val = None
if field_name in kwargs:
new_val = kwargs[field_name]
elif hasattr(obj1, 'char_prompts'):
if field_name in obj1.char_prompts:
new_val = obj1.char_prompts[field_name]
elif hasattr(obj1, field_name):
# extremely rare case where a template spawns a launch config - sliced jobs
new_val = getattr(obj1, field_name)
if new_val is not None:
create_kwargs.setdefault('char_prompts', {})
create_kwargs['char_prompts'][field_name] = new_val
elif isinstance(descriptor, ManyToManyDescriptor):
continue # not copied in this method
elif field_name in kwargs:
if field_name == 'extra_vars' and isinstance(kwargs[field_name], dict):
create_kwargs[field_name] = json.dumps(kwargs['extra_vars'])
elif not isinstance(Class2._meta.get_field(field_name), (ForeignObjectRel, ManyToManyField)):
create_kwargs[field_name] = kwargs[field_name]
elif hasattr(obj1, field_name):
field_obj = obj1._meta.get_field(field_name)
if not isinstance(field_obj, ManyToManyField):
create_kwargs[field_name] = getattr(obj1, field_name)
create_kwargs[field_name] = getattr(obj1, field_name)
# Apply class-specific extra processing for origination of unified jobs
if hasattr(obj1, '_update_unified_job_kwargs') and obj1.__class__ != Class2:
@@ -481,7 +538,10 @@ def copy_m2m_relationships(obj1, obj2, fields, kwargs=None):
'''
for field_name in fields:
if hasattr(obj1, field_name):
field_obj = obj1._meta.get_field(field_name)
try:
field_obj = obj1._meta.get_field(field_name)
except FieldDoesNotExist:
continue
if isinstance(field_obj, ManyToManyField):
# Many to Many can be specified as field_name
src_field_value = getattr(obj1, field_name)
@@ -1032,3 +1092,19 @@ def create_temporary_fifo(data):
).start()
return path
def truncate_stdout(stdout, size):
from awx.main.constants import ANSI_SGR_PATTERN
if size <= 0 or len(stdout) <= size:
return stdout
stdout = stdout[:(size - 1)] + u'\u2026'
set_count, reset_count = 0, 0
for m in ANSI_SGR_PATTERN.finditer(stdout):
if m.group() == u'\u001b[0m':
reset_count += 1
else:
set_count += 1
return stdout + u'\u001b[0m' * (set_count - reset_count)
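The ANSI-balancing logic above can be sketched in isolation; `SGR` here is an assumed regex for `ESC[...m` sequences standing in for `awx.main.constants.ANSI_SGR_PATTERN`:

```python
import re

# Assumed stand-in for awx.main.constants.ANSI_SGR_PATTERN
SGR = re.compile(r'\x1b\[[0-9;]*m')

def truncate_stdout(stdout, size):
    """Truncate to `size` characters (ellipsis included) and close any
    ANSI color sequence the cut left open, so later output is unaffected."""
    if size <= 0 or len(stdout) <= size:
        return stdout
    stdout = stdout[:size - 1] + u'\u2026'
    set_count, reset_count = 0, 0
    for m in SGR.finditer(stdout):
        if m.group() == u'\u001b[0m':
            reset_count += 1
        else:
            set_count += 1
    return stdout + u'\u001b[0m' * (set_count - reset_count)
```

For example, `truncate_stdout('\x1b[31mred text here', 9)` keeps the opening red sequence, so the balancing step appends a matching reset.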


@@ -184,6 +184,8 @@ class LogstashFormatter(LogstashFormatterBase):
data_for_log[key] = 'Exception `{}` producing field'.format(e)
data_for_log['event_display'] = job_event.get_event_display2()
if hasattr(job_event, 'workflow_job_id'):
data_for_log['workflow_job_id'] = job_event.workflow_job_id
elif kind == 'system_tracking':
data.pop('ansible_python_version', None)


@@ -294,6 +294,18 @@ class AWXProxyHandler(logging.Handler):
super(AWXProxyHandler, self).__init__(**kwargs)
self._handler = None
self._old_kwargs = {}
self._auditor = logging.handlers.RotatingFileHandler(
filename='/var/log/tower/external.log',
maxBytes=1024 * 1024 * 50, # 50 MB
backupCount=5,
)
class WritableLogstashFormatter(LogstashFormatter):
@classmethod
def serialize(cls, message):
return json.dumps(message)
self._auditor.setFormatter(WritableLogstashFormatter())
def get_handler_class(self, protocol):
return HANDLER_MAPPING.get(protocol, AWXNullHandler)
@@ -327,6 +339,9 @@ class AWXProxyHandler(logging.Handler):
def emit(self, record):
if AWXProxyHandler.thread_local.enabled:
actual_handler = self.get_handler()
if settings.LOG_AGGREGATOR_AUDIT:
self._auditor.setLevel(settings.LOG_AGGREGATOR_LEVEL)
self._auditor.emit(record)
return actual_handler.emit(record)
def perform_test(self, custom_settings):


@@ -89,7 +89,9 @@ class ActionModule(ActionBase):
playbook_url = '{}/api/remediations/v1/remediations/{}/playbook'.format(
insights_url, item['id'])
res = session.get(playbook_url, timeout=120)
if res.status_code != 200:
if res.status_code == 204:
continue
elif res.status_code != 200:
result['failed'] = True
result['msg'] = (
'Expected {} to return a status code of 200 but returned status '


@@ -20,12 +20,42 @@
mode: pull
delete: yes
recursive: yes
when: ansible_kubectl_config is not defined
- name: Copy daemon log from the isolated host
synchronize:
src: "{{src}}/daemon.log"
dest: "{{src}}/daemon.log"
mode: pull
when: ansible_kubectl_config is not defined
- name: Copy artifacts from pod
synchronize:
src: "{{src}}/artifacts/"
dest: "{{src}}/artifacts/"
mode: pull
delete: yes
recursive: yes
set_remote_user: no
rsync_opts:
- "--rsh=$RSH"
environment:
RSH: "oc rsh --config={{ ansible_kubectl_config }}"
delegate_to: localhost
when: ansible_kubectl_config is defined
- name: Copy daemon log from pod
synchronize:
src: "{{src}}/daemon.log"
dest: "{{src}}/daemon.log"
mode: pull
set_remote_user: no
rsync_opts:
- "--rsh=$RSH"
environment:
RSH: "oc rsh --config={{ ansible_kubectl_config }}"
delegate_to: localhost
when: ansible_kubectl_config is defined
- name: Fail if previous check determined that process is not alive.
fail:


@@ -75,6 +75,8 @@
force: "{{scm_clean}}"
username: "{{scm_username|default(omit)}}"
password: "{{scm_password|default(omit)}}"
environment:
LC_ALL: 'en_US.UTF-8'
register: svn_result
- name: Set the svn repository version
@@ -126,12 +128,14 @@
register: doesRequirementsExist
- name: fetch galaxy roles from requirements.yml
command: ansible-galaxy install -r requirements.yml -p {{roles_destination|quote}}
command: ansible-galaxy install -r requirements.yml -p {{roles_destination|quote}}{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}/roles"
register: galaxy_result
when: doesRequirementsExist.stat.exists
changed_when: "'was installed successfully' in galaxy_result.stdout"
environment:
ANSIBLE_FORCE_COLOR: False
when: roles_enabled|bool
delegate_to: localhost
@@ -142,12 +146,15 @@
register: doesCollectionRequirementsExist
- name: fetch galaxy collections from collections/requirements.yml
command: ansible-galaxy collection install -r requirements.yml -p {{collections_destination|quote}}
command: ansible-galaxy collection install -r requirements.yml -p {{collections_destination|quote}}{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}/collections"
register: galaxy_collection_result
when: doesCollectionRequirementsExist.stat.exists
changed_when: "'Installing ' in galaxy_collection_result.stdout"
environment:
ANSIBLE_FORCE_COLOR: False
ANSIBLE_COLLECTIONS_PATHS: "{{ collections_destination }}"
when: collections_enabled|bool
delegate_to: localhost


@@ -11,12 +11,25 @@
secret: "{{ lookup('pipe', 'cat ' + src + '/env/ssh_key') }}"
tasks:
- name: synchronize job environment with isolated host
synchronize:
copy_links: true
src: "{{src}}"
dest: "{{dest}}"
copy_links: yes
src: "{{ src }}"
dest: "{{ dest }}"
when: ansible_kubectl_config is not defined
- name: synchronize job environment with remote job container
synchronize:
copy_links: yes
src: "{{ src }}"
dest: "{{ dest }}"
set_remote_user: no
rsync_opts:
- "--rsh=$RSH"
environment:
RSH: "oc rsh --config={{ ansible_kubectl_config }}"
delegate_to: localhost
when: ansible_kubectl_config is defined
- local_action: stat path="{{src}}/env/ssh_key"
register: key
@@ -26,7 +39,7 @@
when: key.stat.exists
- name: spawn the playbook
command: "ansible-runner start {{src}} -p {{playbook}} -i {{ident}}"
command: "ansible-runner start {{src}} -p '{{playbook}}' -i {{ident}}"
when: playbook is defined
- name: spawn the adhoc command


@@ -39,8 +39,9 @@ import uuid
from time import time
from jinja2 import Environment
from six import integer_types, PY3
from six.moves import configparser
from ansible.module_utils.six import integer_types, PY3
from ansible.module_utils.six.moves import configparser
try:
import argparse
@@ -152,7 +153,7 @@ class VMWareInventory(object):
try:
text = str(text)
except UnicodeEncodeError:
text = text.encode('ascii', 'ignore')
text = text.encode('utf-8')
print('%s %s' % (datetime.datetime.now(), text))
def show(self):
@@ -186,14 +187,14 @@ class VMWareInventory(object):
def write_to_cache(self, data):
''' Dump inventory to json file '''
with open(self.cache_path_cache, 'wb') as f:
f.write(json.dumps(data))
with open(self.cache_path_cache, 'w') as f:
f.write(json.dumps(data, indent=2))
def get_inventory_from_cache(self):
''' Read in jsonified inventory '''
jdata = None
with open(self.cache_path_cache, 'rb') as f:
with open(self.cache_path_cache, 'r') as f:
jdata = f.read()
return json.loads(jdata)
@@ -343,10 +344,22 @@ class VMWareInventory(object):
'pwd': self.password,
'port': int(self.port)}
if hasattr(ssl, 'SSLContext') and not self.validate_certs:
if self.validate_certs and hasattr(ssl, 'SSLContext'):
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True
kwargs['sslContext'] = context
elif self.validate_certs and not hasattr(ssl, 'SSLContext'):
sys.exit('pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
elif not self.validate_certs and hasattr(ssl, 'SSLContext'):
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.verify_mode = ssl.CERT_NONE
context.check_hostname = False
kwargs['sslContext'] = context
elif not self.validate_certs and not hasattr(ssl, 'SSLContext'):
# Python 2.7.9 < or RHEL/CentOS 7.4 <
pass
return self._get_instances(kwargs)
@@ -390,7 +403,7 @@ class VMWareInventory(object):
instances = [x for x in instances if x.name == self.args.host]
instance_tuples = []
for instance in sorted(instances):
for instance in instances:
if self.guest_props:
ifacts = self.facts_from_proplist(instance)
else:
@@ -614,7 +627,14 @@ class VMWareInventory(object):
lastref = lastref[x]
else:
lastref[x] = val
if self.args.debug:
self.debugl("For %s" % vm.name)
for key in list(rdata.keys()):
if isinstance(rdata[key], dict):
for ikey in list(rdata[key].keys()):
self.debugl("Property '%s.%s' has value '%s'" % (key, ikey, rdata[key][ikey]))
else:
self.debugl("Property '%s' has value '%s'" % (key, rdata[key]))
return rdata
def facts_from_vobj(self, vobj, level=0):
@@ -685,7 +705,7 @@ class VMWareInventory(object):
if vobj.isalnum():
rdata = vobj
else:
rdata = vobj.decode('ascii', 'ignore')
rdata = vobj.encode('utf-8').decode('utf-8')
elif issubclass(type(vobj), bool) or isinstance(vobj, bool):
rdata = vobj
elif issubclass(type(vobj), integer_types) or isinstance(vobj, integer_types):

Some files were not shown because too many files have changed in this diff Show More