Compare commits

459 Commits

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
0a1ecd4fe3 Merge pull request #8248 from ryanpetrello/15.0.0-bump
bump version to 15.0.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-30 13:07:08 +00:00
softwarefactory-project-zuul[bot]
e7a5d4c5d8 Merge pull request #8267 from mabashian/8252-jt-tabs-reload
Reset error/result only after the next request has resolved to prevent render flickering

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-30 12:24:08 +00:00
Ryan Petrello
2e371dd2ea more 15.0.0 changelog 2020-09-29 17:38:42 -04:00
Ryan Petrello
98b24cd2d8 Bump version to 15.0.0 2020-09-29 17:36:32 -04:00
softwarefactory-project-zuul[bot]
abc6a84210 Merge pull request #8260 from ryanpetrello/drf-upgrade
update to the latest Django Rest Framework

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 20:59:45 +00:00
softwarefactory-project-zuul[bot]
a9cfae70ff Merge pull request #8041 from mabashian/7680-inv-pending-delete
Adds support for pending deletion on inventory list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 20:07:17 +00:00
softwarefactory-project-zuul[bot]
f47812845e Merge pull request #8229 from nixocio/ui_issue_7410
Make filter a bit more consistent across the UI

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 19:53:52 +00:00
mabashian
e13b16bf1c Add erroneously removed exhaustive dep comment 2020-09-29 15:42:11 -04:00
nixocio
aa69b925ad Make filter a bit more consistent across the UI
Add `description`, `created_by` and `modified_by` when those fields are
available.

See: https://github.com/ansible/awx/issues/7410
2020-09-29 15:23:28 -04:00
mabashian
ae1d27255b Add delete error handling on inventory detail view 2020-09-29 15:05:35 -04:00
mabashian
f672cee3a0 Reset error/result only after the next request has resolved to prevent render flickering 2020-09-29 13:11:06 -04:00
softwarefactory-project-zuul[bot]
820d4d292e Merge pull request #8253 from beeankha/edit_approval_node_bugfix
Fix Approval Node Edit Permissions

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 16:45:07 +00:00
softwarefactory-project-zuul[bot]
70dfe9a1f2 Merge pull request #8265 from Lodenk/typofix
fixed typo in the word example

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 16:36:58 +00:00
softwarefactory-project-zuul[bot]
6567fab1c8 Merge pull request #8251 from ryanpetrello/fix-vault-password-prompt-bug
fix a bug that can break password prompting in certain scenarios

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2020-09-29 16:35:55 +00:00
beeankha
f584c1cc47 Fix Approval Node Edit Permissions 2020-09-29 12:14:12 -04:00
Patrick
359682022f fixed typo in the word example 2020-09-29 12:00:37 -04:00
softwarefactory-project-zuul[bot]
f39015156b Merge pull request #8228 from john-westcott-iv/tower_ad_hoc_module
Adding ad hoc command modules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 14:59:52 +00:00
Ryan Petrello
089b0503bb update to the latest Django Rest Framework 2020-09-29 10:25:07 -04:00
softwarefactory-project-zuul[bot]
2019f808b9 Merge pull request #8254 from RULCSoft/fix-typos
Fix a few typos in awx/ui

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 14:04:51 +00:00
mabashian
3f1434f0f5 Only attempt to display string error messages in ErrorDetail 2020-09-29 09:59:39 -04:00
softwarefactory-project-zuul[bot]
7f7864fe2b Merge pull request #8259 from rooftopcellist/gettext_translations
Include Gettext in dev container image for translation automation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-29 13:48:28 +00:00
Ryan Petrello
c52054951d fix a bug that can break password prompting in certain scenarios
see: https://github.com/ansible/awx/issues/8202
2020-09-29 09:34:38 -04:00
mabashian
6942b4d5b6 Change deleteTeams to deleteInventories 2020-09-29 09:29:18 -04:00
mabashian
10110643ed Flatten out decision tree when an inventory websocket message is processed 2020-09-29 09:09:52 -04:00
mabashian
40e4ba43ef Copy the query params so that we don't add id__in to them. This fixes a bug where a newly copied row would not show up on the list if a websocket message for an inventory source sync had come through beforehand because id__in had been added to the query params. 2020-09-29 09:09:52 -04:00
mabashian
6681ffa8df Add default module_name to adhoc details step test to get rid of logged console error 2020-09-29 09:09:52 -04:00
mabashian
d84615f64b Rename onLoading/onDoneLoading props to onCopyStart and onCopyFinish. Wrap the functions being passed in as those props in useCallback to keep them hooks safe. 2020-09-29 09:09:52 -04:00
mabashian
4b566e9388 Adds support for pending deletion on inventory list 2020-09-29 09:09:52 -04:00
Christian M. Adams
e7b5f311b5 Include Gettext in dev container image for translation automation 2020-09-29 08:53:18 -04:00
Jorge Vallecillo
b335f698e4 Fix a few typos in awx/ui 2020-09-28 19:00:25 -06:00
softwarefactory-project-zuul[bot]
d6201d9eb6 Merge pull request #8224 from tchellomello/import_db
Ability to import standard pgdump into OpenShift

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-28 17:24:35 +00:00
softwarefactory-project-zuul[bot]
d6f0e16b4d Merge pull request #8242 from wenottingham/certifi-ably-unbundled
Replace certifi with an alternate version

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-28 16:10:30 +00:00
softwarefactory-project-zuul[bot]
0c7bfa543b Merge pull request #8001 from velzend/allow_skipping_provision_instance_and_register_queue
allow skipping provision instance and register queue

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-28 15:20:35 +00:00
softwarefactory-project-zuul[bot]
36d4f255a3 Merge pull request #8236 from ryanpetrello/more-callback-cleanup
refactor some callback receiver code

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-28 15:20:29 +00:00
softwarefactory-project-zuul[bot]
30fd418cc9 Merge pull request #8220 from mabashian/fix-padding-pol-fields
Fix padding on field labels with prompt on launch checkboxes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-28 13:42:37 +00:00
softwarefactory-project-zuul[bot]
24e9484f55 Merge pull request #7338 from mabashian/cred-plugin-test-button
Hook up Test button on Metadata step in credential plugin wizard

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2020-09-25 19:52:44 +00:00
Bill Nottingham
85b694410b Adjust included licenses 2020-09-25 15:51:55 -04:00
Bill Nottingham
d0ba59735c Replace certifi with an alternate version
This version just uses the system cert store.
2020-09-25 14:39:16 -04:00
beeankha
b34c1f4c79 Update integration tests 2020-09-25 13:23:34 -04:00
Ryan Petrello
baad765179 refactor some callback receiver code
the bigint migration removed the foreign key constraints for:

- host_id
- job_id (and projectupdate_id, etc...)

because of this, we don't really need to check explicitly for a host_id
IntegrityError anymore (because it won't occur)

additionally, while it's possible to insert an event with a mismatched
job_id now (for example, you can totally start a long-running job, and
delete the job record in the background using the ORM or psql), doing
so results in DoesNotExist errors in the code that handles the
playbook_on_stats events
2020-09-25 13:12:42 -04:00
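Editor's note: since the bigint migration dropped those foreign keys, event-handling code has to tolerate a parent job that disappeared mid-run rather than rely on an IntegrityError. A minimal sketch of that guard, assuming Django ORM conventions (`Job` stands in for the real AWX model; the handler and follow-up names are illustrative):

```python
import logging

from django.core.exceptions import ObjectDoesNotExist

logger = logging.getLogger(__name__)

def handle_playbook_on_stats(event_data):
    # With the FK constraint gone, the job row may have been deleted
    # (via the ORM or psql) while its events were still arriving.
    try:
        job = Job.objects.get(pk=event_data["job_id"])
    except ObjectDoesNotExist:
        logger.warning("dropping stats event for deleted job %s",
                       event_data["job_id"])
        return
    save_host_summaries(job, event_data)  # hypothetical follow-up step
```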
softwarefactory-project-zuul[bot]
b4d6270eab Merge pull request #8232 from mabashian/8219-extra-GET-requests
Fix extra GET requests on Notif Template/Container Groups forms

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-25 13:37:43 +00:00
John Westcott IV
842e490ba6 Removing needs devel ad hoc entry 2020-09-25 08:51:28 -04:00
John Westcott IV
5b10482256 Fixing linting 2020-09-25 08:49:28 -04:00
John Westcott IV
baf3b617cb Initial commit of ad hoc module 2020-09-25 08:49:28 -04:00
softwarefactory-project-zuul[bot]
acc0ba570e Merge pull request #8205 from john-westcott-iv/tower_application_continuation
Tower application continuation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-25 00:24:01 +00:00
mabashian
56ed2c6afa Autopopulate credential on container group form and organization on notification template form 2020-09-24 17:26:33 -04:00
mabashian
24a4236232 Wrap onChange functions passed to lookups in useCallback since these functions are eventually passed to a hook and used in a dependency array. 2020-09-24 17:07:58 -04:00
softwarefactory-project-zuul[bot]
ce65ed0ac6 Merge pull request #8191 from ryanpetrello/callback-directly-to-redis
remove multiprocessing.Queue usage from the callback receiver

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-24 18:21:49 +00:00
Ryan Petrello
cd0b9de7b9 remove multiprocessing.Queue usage from the callback receiver
instead, just have each worker connect directly to redis
this has a few benefits:

- it's simpler to explain and debug
- back pressure on the queue keeps messages around in redis (which is
  observable, and survives the restart of Python processes)
- it's likely notably more performant at high loads
2020-09-24 13:53:58 -04:00
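Editor's note: a minimal sketch of the worker side of that change, assuming a Redis list serves as the queue (the actual key name and payload framing are AWX internals not shown in the commit):

```python
import json

import redis  # pip install redis

def process_event(event):
    print("handling", event.get("event"))  # stand-in for the real handler

def worker_loop(queue_name="callback_tasks"):  # queue name assumed
    conn = redis.Redis()  # each worker process opens its own connection
    while True:
        _key, raw = conn.blpop(queue_name)  # blocks; backlog stays in Redis
        process_event(json.loads(raw))
```

Because pending messages sit in a Redis list instead of a `multiprocessing.Queue`, the backlog can be inspected with `LLEN` and survives a restart of the Python workers.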
John Westcott IV
a9ea2523c9 Fixing linting/doc issues 2020-09-24 13:33:35 -04:00
softwarefactory-project-zuul[bot]
d97f80df43 Merge pull request #8221 from beeankha/job_timeout_notification_bugfix
Enable "On Fail" Notifications to Send Upon Timeout of Job Templates

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-24 13:53:04 +00:00
Marcelo Moreira de Mello
f1b8a63d91 Ability to import standard pgdump into OpenShift 2020-09-23 22:33:57 -04:00
beeankha
c855ce95aa Fix JT timeout notification bug 2020-09-23 15:45:11 -04:00
mabashian
b714a0dc7e Fix padding on field labels with prompt on launch checkboxes 2020-09-23 15:27:31 -04:00
softwarefactory-project-zuul[bot]
aac17b9d2c Merge pull request #8206 from ryanpetrello/more-bulk-update-last-job
change host -> last_job_id bulk update query to avoid locking issues

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-23 15:18:42 +00:00
softwarefactory-project-zuul[bot]
d7ca49ce4a Merge pull request #8193 from ryanpetrello/ws-broadcast-limited
add a few additional optimizations to the callback receiver

Reviewed-by: Shane McDonald <me@shanemcd.com>
             https://github.com/shanemcd
2020-09-23 14:58:41 +00:00
John Westcott IV
4a4e62e035 Removing tower_application from needs_development in completeness test 2020-09-23 09:54:06 -04:00
softwarefactory-project-zuul[bot]
e5f5ad198a Merge pull request #8212 from branic/add_postgres_custom_root_ca
Add custom root ca certificate via configmap

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-23 13:41:13 +00:00
softwarefactory-project-zuul[bot]
ee3f835ea9 Merge pull request #8213 from ryanpetrello/galaxy-ignore-certs-ui
expose GALAXY_IGNORE_CERTS in the job settings UI

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-23 13:15:28 +00:00
John Westcott IV
cb1ba9e3a4 Fixing zuul issues 2020-09-23 09:03:03 -04:00
Ryan Petrello
1f0cd8df71 expose GALAXY_IGNORE_CERTS in the job settings UI 2020-09-23 08:36:52 -04:00
Brant Evans
512da5a01c Add custom root ca certificate via configmap
Signed-off-by: Brant Evans <bevans@redhat.com>
2020-09-22 16:42:39 -07:00
Ryan Petrello
89ff8e1f3e change host -> last_job_id bulk update query to avoid locking issues
see: https://github.com/ansible/awx/issues/8145
2020-09-22 16:04:28 -04:00
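Editor's note: the commit message doesn't include the query itself, so this is a generic illustration only (not the actual AWX fix). Deadlocks in this kind of bulk update typically come from concurrent transactions locking overlapping host rows in different orders; a single set-based UPDATE over a stable key order is one common mitigation:

```python
# Illustrative only -- hosts_touched_by() is a hypothetical helper. One
# set-based UPDATE touches each row once, instead of N per-row save()
# calls that hold row locks across a longer transaction and can
# interleave with a concurrent job's updates.
host_ids = sorted(h.pk for h in hosts_touched_by(job))
Host.objects.filter(pk__in=host_ids).update(last_job=job)
```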
softwarefactory-project-zuul[bot]
3184bccb33 Merge pull request #8204 from marshmalien/words-matter-pt2
Replace google oauth2 setting with more inclusive language

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-22 19:43:40 +00:00
John Westcott IV
c5df37777b Adding tests and fixing minor module bugs 2020-09-22 15:22:35 -04:00
Marliana Lara
0732b047b5 Replace setting term with inclusive language 2020-09-22 15:09:55 -04:00
Geoffrey Bachelot
1c729518a5 Delete deprecated username parameter 2020-09-22 14:54:35 -04:00
Geoffrey Bachelot
5a374585de delete deprecated parameters and add missing skip_authorization 2020-09-22 14:54:35 -04:00
Geoffrey Bachelot
b9d2e431a6 Create tower_application module 2020-09-22 14:54:35 -04:00
Ryan Petrello
b370e8389e add a few additional optimizations to the callback receiver 2020-09-22 08:51:01 -04:00
Jake McDermott
b6afc085a7 Reload on stats when live updates are disabled 2020-09-21 20:42:44 -04:00
Ryan Petrello
bed2dea04d don't broadcast ws:// events when UI_LIVE_UPDATES_ENABLED is False 2020-09-21 20:42:39 -04:00
softwarefactory-project-zuul[bot]
31cd36b768 Merge pull request #8104 from mabashian/4254-auto-pop-lookup
Auto populate various required lookups on various forms

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-21 21:37:02 +00:00
softwarefactory-project-zuul[bot]
dc492d0cfd Merge pull request #8179 from nixocio/ui_fix_undefined_variable
Fix issue with undefined variable

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-21 18:28:15 +00:00
softwarefactory-project-zuul[bot]
9a8580144c Merge pull request #8189 from kdelee/analytics_logging
Need log level info to show up with new settings

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-21 17:52:15 +00:00
mabashian
10ae6c9042 Cleans up console error thrown from this test. Also fixes the last test, as it wasn't actually testing the desired behavior. 2020-09-21 13:24:23 -04:00
mabashian
8bee409a4a Auto populate various required lookups on various forms 2020-09-21 13:24:23 -04:00
Elijah DeLee
7c7d15a8be Need log level info to show up with new settings 2020-09-21 11:22:43 -04:00
softwarefactory-project-zuul[bot]
9eb8ac620f Merge pull request #8180 from jakemcdermott/deps-autofix
Upgrade lingui-cli to 2.9.2

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2020-09-21 15:08:19 +00:00
softwarefactory-project-zuul[bot]
44d1e15ef4 Merge pull request #8177 from ryanpetrello/rrule-bug
fix an rrule bug that causes improper HOURLY/MINUTELY calculation

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-18 20:02:10 +00:00
softwarefactory-project-zuul[bot]
0c81b83080 Merge pull request #8078 from nixocio/ui_issue_8073
Add Container Group details

Reviewed-by: Kersom
             https://github.com/nixocio
2020-09-18 19:15:14 +00:00
softwarefactory-project-zuul[bot]
a2408892a8 Merge pull request #8152 from AlanCoding/import_fix
Fix AWX collection import test interference with Default organization

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-18 19:05:22 +00:00
softwarefactory-project-zuul[bot]
deab7395f2 Merge pull request #8150 from keithjgrant/7878-notification-websockets
Update status after sending test notification

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-18 17:52:38 +00:00
Keith Grant
3ed05e9d9b fix NotificationTemplateItem test 2020-09-18 10:20:17 -07:00
Jake McDermott
34579226ef Upgrade lingui-cli to 2.9.2 2020-09-18 13:01:04 -04:00
nixocio
9d0b37e96c Fix issue with undefined variable
Fix issue with undefined variable when data related to `me` is not
available.
2020-09-18 12:30:53 -04:00
softwarefactory-project-zuul[bot]
256123dc9d Merge pull request #8168 from keithjgrant/3321-session-management
Logout session if config returns 401

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2020-09-18 16:08:06 +00:00
Ryan Petrello
bf1d93168b fix an rrule bug that causes improper HOURLY/MINUTELY calculation
see: https://github.com/ansible/awx/issues/8071
2020-09-18 12:06:07 -04:00
softwarefactory-project-zuul[bot]
39497fa502 Merge pull request #8178 from ryanpetrello/callback-status-sos
report callback receiver status in the sosreport

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-18 15:15:21 +00:00
Ryan Petrello
a1ddbd760d report callback receiver status in the sosreport 2020-09-18 10:49:50 -04:00
softwarefactory-project-zuul[bot]
c17bb36bcd Merge pull request #8167 from ryanpetrello/callback-cleanup
Add support for a `--status` flag to the callback receiver (and improve our approach to stats collection in general)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-18 13:58:30 +00:00
Keith Grant
43d339d1cd fix test 2020-09-17 13:46:43 -07:00
softwarefactory-project-zuul[bot]
56c5a39087 Merge pull request #8159 from john-westcott-iv/lookup_plugin_fix
Fixing issue with lookup plugin not able to load host, user, etc

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-17 20:03:15 +00:00
Ryan Petrello
57f8e48894 make --status more robust for dispatcher, and add support for receiver
make the --status flag work by fetching a periodically recorded snapshot
of internal process state; additionally, update the callback receiver to
*also* record these statistics so we can gain more insight into any
performance issues
2020-09-17 15:33:37 -04:00
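Editor's note: a rough sketch of that snapshot mechanism; the file location, interval, and stats fields are assumptions, not AWX's:

```python
import json
import threading
import time

STATUS_FILE = "/tmp/callback_receiver_status.json"  # assumed location

def record_statistics(stats, interval=5):
    """Periodically serialize internal counters so --status can read them."""
    def loop():
        while True:
            with open(STATUS_FILE, "w") as f:
                json.dump(stats, f)
            time.sleep(interval)
    threading.Thread(target=loop, daemon=True).start()

def read_status():
    """What a --status flag would print: the latest recorded snapshot."""
    with open(STATUS_FILE) as f:
        return json.load(f)
```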
Keith Grant
cde8cb57da allow app skeleton to display while config is loading 2020-09-17 12:31:13 -07:00
softwarefactory-project-zuul[bot]
969f75778c Merge pull request #8166 from wenottingham/log-log-log
Fix analytics logging

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-17 18:42:29 +00:00
Bill Nottingham
18c27437b7 Fix analytics logging
The analytics change PR adjusted the logging for awx.analytics,
which solved the issue, but should have used the targeted awx.main.analytics.

Also flip a couple of loggers to use the regular awx.analytics (awx analytics)
logger instead of awx.main.analytics (the automation analytics task system).
2020-09-17 13:39:14 -04:00
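Editor's note: in logging terms the distinction is just the logger namespace; a minimal illustration of the two trees named above:

```python
import logging

# the Automation Analytics task system (gathering/shipping code):
task_logger = logging.getLogger("awx.main.analytics")

# the regular "awx analytics" namespace (external log aggregation):
external_logger = logging.getLogger("awx.analytics")
```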
John Westcott IV
baa00bd582 Truthy strikes again 2020-09-17 07:51:01 -04:00
Keith Grant
6c4f9364ee kick back to login page if config gets 401 response 2020-09-16 15:35:02 -07:00
John Westcott IV
ee3a90d193 Adding example of specifying connection info 2020-09-16 14:52:41 -04:00
John Westcott IV
24bdbd8c58 Fixing issue with lookup plugin not able to load host, user, etc 2020-09-16 14:41:04 -04:00
Ryan Petrello
0df6409244 remove task state tracking from the callback receiver
we don't have support for displaying these stats anyways, so there's
no point in using resources tracking them, especially for high-volume
installs
2020-09-16 13:40:42 -04:00
softwarefactory-project-zuul[bot]
aceb8229ba Merge pull request #8153 from AlanCoding/delete_things
Remove out of date collection testing tools

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-16 15:05:56 +00:00
Alan Rominger
e177432b8f Do not expect failure in tower_import test 2020-09-16 10:34:11 -04:00
softwarefactory-project-zuul[bot]
1860a2f71d Merge pull request #8087 from AlanCoding/update_secrets
Add new option update_secrets to allow lazy or strict updating

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-16 03:19:55 +00:00
Alan Rominger
05fb47dece Remove out of date collection testing tools 2020-09-15 22:59:12 -04:00
Alan Rominger
3aec1a115d fix wording 2020-09-15 22:38:37 -04:00
Alan Rominger
2f1a9a28ea streamline credential test 2020-09-15 22:38:37 -04:00
Alan Rominger
362d6a3204 Add new option update_secrets to allow lazy or strict updating 2020-09-15 22:38:37 -04:00
Nicolas Payart
8e97214309 Add option update_password (always, on_create) to tower_user module 2020-09-15 22:37:50 -04:00
Alan Rominger
48f30c5106 Fix AWX collection import test interference with Default organization 2020-09-15 22:03:37 -04:00
Keith Grant
a10f52c70e poll for notification status after sending test 2020-09-15 15:37:44 -07:00
softwarefactory-project-zuul[bot]
ef3a497c42 Merge pull request #8147 from ryanpetrello/py-path
move an optional import for awxkit

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-15 19:32:26 +00:00
softwarefactory-project-zuul[bot]
4b72630087 Merge pull request #8136 from neoaggelos/awxkit-import-yaml-loader
Support `!import` and `!include` in `awx import -f yaml` command

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-15 18:56:06 +00:00
softwarefactory-project-zuul[bot]
432e167930 Merge pull request #8105 from keithjgrant/7877-notification-custom-messages
Notification Detail: show custom messages

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-15 18:53:41 +00:00
Ryan Petrello
1a533a2a23 move an optional import for awxkit
I'm not sure that this function is actually in use anywhere anymore, but
it shouldn't be a top-level import because it represents an optional
dependency.
2020-09-15 14:50:01 -04:00
Keith Grant
fa0abc0dd8 notification templates: fix un-select all 2020-09-15 10:56:55 -07:00
softwarefactory-project-zuul[bot]
b0875965db Merge pull request #8138 from moreiramarti/devel
K8s ServiceAccount variabilization

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-15 14:35:08 +00:00
nixocio
d3d3fe8892 Add Container Group details
Add Container Group details.

See: https://github.com/ansible/awx/issues/8073
2020-09-15 09:11:34 -04:00
Keith Grant
236ae6c5b6 handle null messages.workflow_approval some more 2020-09-14 16:05:03 -07:00
Keith Grant
b27d9b680a handle null messages.workflow_approval 2020-09-14 15:17:39 -07:00
softwarefactory-project-zuul[bot]
22bff7adec Merge pull request #8141 from kdelee/fix_analytics_tests
Print one targz per line (analytics gather)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-14 22:04:23 +00:00
softwarefactory-project-zuul[bot]
a90bb36b72 Merge pull request #8055 from mabashian/8052-workflow-viz-rbac
Fixes some rbac issues in the workflow toolbar and start screen

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-14 21:14:12 +00:00
Elijah DeLee
d8e4ac773b Print one tarball per line
Printing out a Python-like list is hard for tests to process;
better to print out one tarball per line
2020-09-14 17:00:31 -04:00
softwarefactory-project-zuul[bot]
1a581a79ea Merge pull request #8070 from nixocio/ui_add_edit_container_groups
Add/Edit Container Groups

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-14 20:48:05 +00:00
softwarefactory-project-zuul[bot]
99da5770a7 Merge pull request #8139 from ansible/jakemcdermott-remove-self-closing-tag-tags
Remove self-closing tags for tag component

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-14 16:59:55 +00:00
Martinho Moreira
8d5914b3f1 K8s ServiceAccount variabilization 2020-09-14 17:37:45 +02:00
Jake McDermott
c6aeb755a4 Remove self-closing tags for tag component 2020-09-14 11:28:52 -04:00
softwarefactory-project-zuul[bot]
9d66b41e84 Merge pull request #7991 from bbayszczak/hashivault_auth_path_in_inputs
hashivault_kv auth_path moved from metadata to inputs

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2020-09-14 15:28:07 +00:00
Aggelos Kolaitis
b8cf644959 Add name to StringIO object to fix failing test 2020-09-13 17:41:53 +03:00
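Editor's note: that StringIO fix is a common test trick. Code inspecting a file object's `.name` fails on a bare `io.StringIO`, which has no such attribute by default, but one can simply be attached:

```python
import io

fake = io.StringIO("a: 1\n")
fake.name = "export.yml"  # StringIO accepts arbitrary attribute assignment
assert fake.name == "export.yml"
```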
Aggelos Kolaitis
9918b2581c Support !import and !include in awx import -f yaml command 2020-09-13 17:01:12 +03:00
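Editor's note: a sketch of how `!include`/`!import` tags can be wired into PyYAML, assuming each tag's scalar is a path to another YAML document to embed (awxkit's actual loader may differ in the details):

```python
import yaml

class AwxImportLoader(yaml.SafeLoader):
    """SafeLoader subclass so the custom tags don't leak globally."""

def _load_other_document(loader, node):
    path = loader.construct_scalar(node)
    with open(path) as f:
        return yaml.load(f, Loader=AwxImportLoader)

AwxImportLoader.add_constructor("!include", _load_other_document)
AwxImportLoader.add_constructor("!import", _load_other_document)

with open("export.yml") as f:
    data = yaml.load(f, Loader=AwxImportLoader)
```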
softwarefactory-project-zuul[bot]
b69fad83b1 Merge pull request #7709 from wenottingham/so-many-del-toros
Adjust analytics gathering

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 21:41:05 +00:00
softwarefactory-project-zuul[bot]
424bf94a15 Merge pull request #8101 from ansible/jakemcdermott-expand-nav
Start with navigation expanded

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 20:16:02 +00:00
softwarefactory-project-zuul[bot]
9d4bad559f Merge pull request #8102 from jakemcdermott/remove-dead-projects-code
Remove dead projects code

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 20:09:25 +00:00
softwarefactory-project-zuul[bot]
1e9fb6b640 Merge pull request #8133 from fosterseth/update_task_manager_system_docs
Update task manager docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 18:50:10 +00:00
mabashian
ad1c4b1586 Add unique ID to cred field external plugin button(s) 2020-09-11 14:17:35 -04:00
softwarefactory-project-zuul[bot]
412a294461 Merge pull request #7940 from mabashian/6616-workflow-results-sockets
Update job status and workflow node job status based on websocket events

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 17:31:47 +00:00
Seth Foster
3606b4e334 these rules no longer apply as of PRs 5519 and 5489 2020-09-11 12:59:18 -04:00
softwarefactory-project-zuul[bot]
0c5aaa2872 Merge pull request #8132 from wenottingham/halfway-there
Make ansible venv psutil match awx venv version

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 16:52:23 +00:00
Bill Nottingham
f1c59477c0 Make ansible venv psutil match awx venv version 2020-09-11 12:02:00 -04:00
softwarefactory-project-zuul[bot]
0a871c6107 Merge pull request #8119 from soomsoom/unarchive-rpm-pacakges
Adding unzip dnf package

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 15:57:38 +00:00
softwarefactory-project-zuul[bot]
61c4c5292c Merge pull request #8125 from jakemcdermott/fix-api-urls
Fix malformed urls

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 15:53:39 +00:00
softwarefactory-project-zuul[bot]
88c691fd6e Merge pull request #7993 from mabashian/7364-jobs-list-cancel
Adds cancel button to jobs list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-11 15:07:30 +00:00
mabashian
68426daaff Fix residual error from merge conflict 2020-09-11 10:50:27 -04:00
mabashian
d654af77cf Moves cred plugin test button down into wizard footer 2020-09-11 10:50:27 -04:00
mabashian
5cfca7896f Adds proptypes to CredentialPluginTestAlert component 2020-09-11 10:50:27 -04:00
mabashian
9f8691dbea Add the credential id and metadata from the form to the dependency array of useCallback when testing cred plugin configuration. This allows us to remove the variables passed in to the request. 2020-09-11 10:50:27 -04:00
mabashian
62054bbfc8 Pulls CredentialPlugins out of CredentialFormFields and into the root of the shared dir 2020-09-11 10:50:27 -04:00
mabashian
aed96de195 Fix merge conflict 2020-09-11 10:50:27 -04:00
mabashian
67f46b4d7e Hook up Test button on Metadata step in credential plugin wizard 2020-09-11 10:50:27 -04:00
mabashian
8fab4559b9 Add data-job-status attr to all StatusIcons so that automated tests can determine whether or not a status has been updated via websockets. 2020-09-11 09:27:44 -04:00
mabashian
45ca9976f3 Fix merge conflict error 2020-09-11 09:17:42 -04:00
mabashian
5b56bda0bb Fix tests after changing node data structure 2020-09-11 09:17:42 -04:00
mabashian
c318a17590 Refresh nodes after workflow has finished running so that we can display all job info for relevant nodes. 2020-09-11 09:17:42 -04:00
mabashian
e28c9bb3c4 Remove unnecessary constant variable 2020-09-11 09:17:42 -04:00
mabashian
c209c98e3f Update job for job detail/workflow details based on websockets. Re-fetch job after job finishes running to display all available info. 2020-09-11 09:17:42 -04:00
mabashian
af77116f1e Convert Job.jsx to functional component in preparation for socket hook usage 2020-09-11 09:17:42 -04:00
mabashian
328e503f5b Update workflow node job status based on websocket messages 2020-09-11 09:17:42 -04:00
mabashian
5d4ef86db7 Rename inv/inv source ws functions 2020-09-11 09:17:42 -04:00
mabashian
3e99e94b8c Redirect user to visualizer page after successful workflow creation 2020-09-11 09:17:42 -04:00
Bill Nottingham
13802fcf2b Don't return error messages for license errors
Just log the exception and return None.
2020-09-10 21:19:07 -04:00
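Editor's note: the pattern from that message, as a small sketch (the validator is a hypothetical stand-in):

```python
import logging

logger = logging.getLogger(__name__)

def _validate_license():
    raise ValueError("license expired")  # stand-in for the real check

def get_license_status():
    try:
        return _validate_license()
    except Exception:
        logger.exception("license validation failed")
        return None  # log the exception; callers get None, not the message
```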
Jake McDermott
b2ad75a1b7 Fix malformed urls 2020-09-10 18:31:49 -04:00
mabashian
130a43f5c4 Simplify kebab modal open logic 2020-09-10 15:32:30 -04:00
mabashian
0fc6affe85 Refactor kebab modal tracking logic in delete/cancel buttons 2020-09-10 15:32:29 -04:00
mabashian
c8a07309ee Fix typo 2020-09-10 15:32:29 -04:00
mabashian
2b60759edc Fix bug where delete/cancel buttons in kebab would not actually make delete/cancel requests 2020-09-10 15:32:29 -04:00
mabashian
225f57fefd Remove use of i18n.plural in favor of low level i18n._. 2020-09-10 15:32:29 -04:00
mabashian
bae10718d5 Remove RelatedAPI model in favor of JobsAPI. Adds cancel method to JobsAPI and uses that on the jobs list. 2020-09-10 15:32:29 -04:00
mabashian
f27b541396 Adds cancel button to jobs list toolbar 2020-09-10 15:32:29 -04:00
mabashian
6889128571 Peel axios instantiation out into its own file so that it can be used in the RelatedAPI model 2020-09-10 15:32:29 -04:00
softwarefactory-project-zuul[bot]
4b51c71220 Merge pull request #8109 from AlanCoding/hack_null_org
Hack to delete orphaned organizations, consolidate get_one methods

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-10 18:15:45 +00:00
Jake McDermott
59ecefb1c5 Start with navigation expanded 2020-09-10 13:48:32 -04:00
Jake McDermott
c3f9993e18 Remove dead projects code 2020-09-10 13:28:00 -04:00
softwarefactory-project-zuul[bot]
328b270c9f Merge pull request #8121 from ryanpetrello/galaxy-cred-collection
address a few follow-up issues for Org -> Galaxy Credentials support

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-10 15:58:27 +00:00
Ryan Petrello
af238be377 address a few follow-up issues for Org -> Galaxy Credentials support
- add support for managing galaxy creds in the tower organization module
- fix a minor serializer bug
2020-09-10 11:00:21 -04:00
softwarefactory-project-zuul[bot]
4bb851ca66 Merge pull request #8118 from mjeffin/postres-volume-typo
Update docker-compose.yml.j2

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-10 14:39:43 +00:00
Alan Rominger
2e1f5cebb7 Further refine error message and update integration tests
Fix test with spaces

pep8 failure

More test tweaks

Even more test fixes
2020-09-10 09:49:27 -04:00
Alan Rominger
a73323f3d6 Hack to delete orphaned organizations, consolidate get_one methods
minor test update

Updating the message if more than one object is found

Update test to new message
2020-09-10 09:00:50 -04:00
soomsoom
014a38682d Adding unzip dnf package
Adding unzip DNF package to resolve an issue with the unarchive module when run locally
2020-09-10 06:26:07 -04:00
mjeffin
38fd652f89 Update docker-compose.yml.j2
Add quotes around the volume value for Postgres data. I installed via Docker without changing any values, and the UI was stuck on "upgrading" for a long time. After some digging I figured out that the issue was the Postgres volume: a query was returning an error. Inspecting the template showed that there were no quotes around this volume, unlike the volumes for the others.
After adding the quotes, docker-compose worked.
2020-09-10 13:53:52 +05:30
softwarefactory-project-zuul[bot]
ff7c2e9180 Merge pull request #8111 from ryanpetrello/bye-bye-boto
remove boto as an awx dependency

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-10 01:39:00 +00:00
softwarefactory-project-zuul[bot]
e39622d42e Merge pull request #8058 from sean-m-sullivan/workflow_label
Workflow label

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2020-09-10 01:21:06 +00:00
Bill Nottingham
05ad85e7a6 Remove the model for the now unused TowerAnalyticsState. 2020-09-09 20:18:04 -04:00
Bill Nottingham
a604ecffb8 Adjust query_info to set the collection time based on what's passed. 2020-09-09 20:05:48 -04:00
Bill Nottingham
09f7d70428 Use isoformat() rather than strftime
Reformat SQL in unit tests because sqlite.
2020-09-09 20:05:32 -04:00
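Editor's note: the isoformat() switch is easy to see in isolation; it keeps the timezone offset and round-trips without a format string:

```python
from datetime import datetime, timezone

ts = datetime(2020, 9, 9, 20, 5, 32, tzinfo=timezone.utc)
ts.strftime("%Y-%m-%d %H:%M:%S")               # '2020-09-09 20:05:32' (offset lost)
ts.isoformat()                                 # '2020-09-09T20:05:32+00:00'
datetime.fromisoformat(ts.isoformat()) == ts   # True (Python 3.7+)
```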
Bill Nottingham
d4ba62695f Put awx analytics logs also in the task system logger
Errors/warnings when gathering analytics are about 50/50 split between
the gathering code in analytics and the task code that calls it, so
they should be in the same place for debugging sanity.
2020-09-09 17:42:40 -04:00
Bill Nottingham
c753324872 Move back to less frequent collections, and split large event tables
This should ensure we stay under 100MB at all times.
2020-09-09 17:42:40 -04:00
Bill Nottingham
9f67b6742c Fail more gracefully if analytics.ship() is called with a bad path,
or it's deleted out from under us.
2020-09-09 17:42:40 -04:00
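Editor's note: a sketch of that graceful-failure guard, assuming `ship()` takes the path of the gathered tarball (the signature here is an assumption):

```python
import logging
import os

logger = logging.getLogger(__name__)

def ship(path):
    if not path or not os.path.exists(path):
        # bad path, or the bundle was deleted out from under us
        logger.error("analytics bundle %r is missing; skipping upload", path)
        return False
    # ... upload the tarball ...
    return True
```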
Bill Nottingham
1a15f18be3 Stop using the TowerAnalyticsState solo model
This is now tracked in the AUTOMATION_ANALYTICS_LAST_GATHER setting.
2020-09-09 17:42:40 -04:00
Bill Nottingham
40309e6f70 Ensure we do not send large bundles, or empty bundles
Collect expensive collectors separately, and in a loop
where we make smaller intermediate dumps.

Don't return a table dump if there are no records, and
don't put that CSV in the manifest.

Fix up unit tests.
2020-09-09 17:42:40 -04:00
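Editor's note: the two rules in that message reduce to "skip empty table dumps" and "flush an intermediate bundle once a size budget is reached". A self-contained sketch (the 100MB figure echoes the companion commit above; everything else is illustrative):

```python
def split_dumps(tables, max_bytes=100 * 1024 * 1024):
    """Yield lists of (table_name, csv_bytes) kept under the size cap.

    `tables` is an iterable of (name, csv_bytes) pairs.
    """
    bundle, size = [], 0
    for name, csv_bytes in tables:
        if not csv_bytes:
            continue  # no records: no CSV, and nothing in the manifest
        if size + len(csv_bytes) > max_bytes and bundle:
            yield bundle  # ship an intermediate dump
            bundle, size = [], 0
        bundle.append((name, csv_bytes))
        size += len(csv_bytes)
    if bundle:
        yield bundle
```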
Bill Nottingham
1c4b06fe1e Refactor analytics collectors.
- Only have one registration class
- Add description fields
- Add automation collector information to /api/v2/config
2020-09-09 17:10:14 -04:00
softwarefactory-project-zuul[bot]
dff7667532 Merge pull request #7905 from AlexSCorey/6603-AdHocCommands
Add Ad Hoc Commands

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-09 20:21:41 +00:00
Daniel Sami
da58db7431 grammar fix 2020-09-09 15:42:02 -04:00
Alex Corey
3402e5db35 updates tooltip component, fixes formik configuration on ad hoc wizard 2020-09-09 15:20:18 -04:00
Daniel Sami
fe115fdd16 dropped translation tag 2020-09-09 14:39:54 -04:00
Ryan Petrello
a817708d70 remove boto as an awx dependency
see: https://github.com/ansible/awx/issues/2115
2020-09-09 14:33:33 -04:00
Alex Corey
678dcad437 updates tooltip component, fixes formik configuration on ad hoc wizard 2020-09-09 12:46:19 -04:00
Alex Corey
0e3fbb74d4 updates keys 2020-09-09 12:46:19 -04:00
Alex Corey
94469cc8c0 Adds Proptypes and updates tooltips to make them more translatable 2020-09-09 12:46:19 -04:00
Alex Corey
e6ae171f4b Adds Ad Hoc Commands Wizard 2020-09-09 12:46:19 -04:00
softwarefactory-project-zuul[bot]
caa7b43fe0 Merge pull request #7817 from ryanpetrello/galaxy-credentials
Support Organization-scoped Galaxy installs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-09 16:17:18 +00:00
mabashian
19886f7ec3 Hide edit button on workflow node details view for users that don't have the ability to edit the workflow 2020-09-09 09:44:09 -04:00
mabashian
945dfbb648 Fixes some rbac issues in the workflow toolbar and start screen. Add/edit-related buttons should be hidden for users that cannot edit the workflow. 2020-09-09 09:07:46 -04:00
Sean Sullivan
470b7aaeea Merge pull request #10 from ansible/devel
Rebase from devel
2020-09-09 07:49:37 -05:00
Keith Grant
6af427d4e1 update test snapshot 2020-09-08 14:45:20 -07:00
softwarefactory-project-zuul[bot]
a3e08a3d09 Merge pull request #8072 from john-westcott-iv/get_one_fix
Modify get_one method to allow IDs in addition to names

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-08 21:30:59 +00:00
Keith Grant
e0e48bf922 add custom messages to Notification Detail 2020-09-08 14:04:38 -07:00
Keith Grant
7042542e6a add ArrayDetail 2020-09-08 14:00:57 -07:00
softwarefactory-project-zuul[bot]
3f63800f58 Merge pull request #7967 from keithjgrant/7876-notifications-form
Notifications form

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-08 20:56:21 +00:00
softwarefactory-project-zuul[bot]
800cf30d92 Merge pull request #8089 from rooftopcellist/update_analytics_job_test
Provide a distinct column name for inventory_name in unifiedjob analy…

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-08 20:04:31 +00:00
John Westcott IV
570251dc3d Modifying get_item_name to handle a None object 2020-09-08 15:28:57 -04:00
John Westcott IV
faa33efdd2 Fixing registered name 2020-09-08 15:28:37 -04:00
John Westcott IV
09b8f82bbb Fixing test issue and 'modernizing' test 2020-09-08 15:28:21 -04:00
sean-m-sullivan
cfdfa911e8 update lint 2020-09-08 14:00:12 -05:00
softwarefactory-project-zuul[bot]
cd8c74e28f Merge pull request #8094 from ryanpetrello/upgrade-django-libs
Update Django and channels_redis

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-08 17:46:35 +00:00
John Westcott IV
4c0e288fee Touching up missing support for get_one returning multiple params 2020-09-08 13:38:44 -04:00
Keith Grant
13e6757666 add ids to CodeMirror FormGroup containers 2020-09-08 10:17:18 -07:00
softwarefactory-project-zuul[bot]
90c3bfc6ae Merge pull request #8074 from fosterseth/fix-7655_task_manager_times_out
Prevent task manager timeout by limiting number of jobs to start

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-08 16:42:20 +00:00
Seth Foster
e09274e533 PR #8074 - limit how many jobs the task manager can start on a given run 2020-09-08 12:16:06 -04:00
John Westcott IV
c2cfaec7d1 Fixing name -> username 2020-09-08 11:52:53 -04:00
John Westcott IV
d6f35a71d7 Adding allow_unknown option for get_item_name 2020-09-08 11:51:47 -04:00
John Westcott IV
6467d34445 Touching up tower_inventory_source after rebase 2020-09-08 11:36:33 -04:00
Christian M. Adams
0681444294 Provide a distinct column name for inventory_name in unifiedjob analytics csv 2020-09-08 11:14:24 -04:00
John Westcott IV
0a8db586d1 Changing how get_one returns 2020-09-08 11:10:18 -04:00
John Westcott IV
106157c600 get_one now also returns the name field, and modifying modules for get_one and added in some IDs in a handful of unit tests 2020-09-08 11:06:01 -04:00
John Westcott IV
4c4d6dad49 Adding get_item_name and making static methods 2020-09-08 11:03:10 -04:00
John Westcott IV
24ec129235 Modifying credential for new get_one 2020-09-08 11:03:10 -04:00
John Westcott IV
5042ad3a2b Updating user module for new get_one 2020-09-08 11:03:10 -04:00
John Westcott IV
51959b29de Adding name_or_id to get_one to make module more uniform 2020-09-08 11:03:10 -04:00
Ryan Petrello
c862b3e5a2 Revert "work around a memory leak in channels_redis"
This reverts commit e25da217e8.
2020-09-08 10:40:47 -04:00
Ryan Petrello
f81560b12c update Django and channels_redis
see: https://github.com/ansible/tower/issues/4439
also, addresses CVE-2020-24583 and CVE-2020-24584
2020-09-08 10:39:26 -04:00
softwarefactory-project-zuul[bot]
68265ea9b5 Merge pull request #8092 from nixocio/ui_issue_8091
List jobs for container groups

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2020-09-08 14:33:57 +00:00
softwarefactory-project-zuul[bot]
6e6aa1fdab Merge pull request #8033 from beeankha/update_inv_src_module
Add New tower_inventory_source_update Module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-08 14:04:32 +00:00
Benoit Bayszczak
08c9219f48 rename 'approle_auth_path' to 'default_auth_path' & fix kwargs.get 2020-09-08 10:39:12 +02:00
beeankha
c7114b2571 Add specific lookup data to error message 2020-09-07 11:43:44 -04:00
softwarefactory-project-zuul[bot]
127ca4bc54 Merge pull request #7975 from AlexSCorey/7973-DisableTemplateFormFields
Disables template and workflow template fields.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-06 21:26:23 +00:00
Alex Corey
b51f013880 disables prompt on launch checkbox 2020-09-05 08:32:15 -04:00
Alex Corey
fc4060778b Disables template and workflow template fields for users that do not have permissions for those fields. 2020-09-05 08:32:15 -04:00
Keith Grant
40d3e4ee8b handle error string in FormSubmitError 2020-09-04 16:24:09 -07:00
beeankha
a0b8f6a25d Make org an optional parameter for both inv source and inv source update modules 2020-09-04 18:56:58 -04:00
Keith Grant
9711c33675 fix multiple notification form bugs 2020-09-04 15:50:04 -07:00
Keith Grant
b119bc475f convert http headers field to string for codemirror 2020-09-04 15:50:04 -07:00
Keith Grant
055abd57cd fix new test from rebase 2020-09-04 15:50:04 -07:00
Keith Grant
95c4b6c922 trying to force tests to re-run 2020-09-04 15:50:04 -07:00
Keith Grant
7ab9d899e4 mark text for translation 2020-09-04 15:49:11 -07:00
Keith Grant
c700d07a0a fix lint error 2020-09-04 15:49:11 -07:00
Keith Grant
4a616d9f81 ensure notification template re-loads after saving 2020-09-04 15:49:11 -07:00
mabashian
e830da97f3 Fix broken credential form tests after id change 2020-09-04 15:49:11 -07:00
Keith Grant
43ac5a0574 add some notification form tests; notification add screen 2020-09-04 15:49:11 -07:00
Keith Grant
9bb834a422 create ArrayTextField component 2020-09-04 15:49:11 -07:00
Keith Grant
458d29a579 only submit customized notification messages if changed 2020-09-04 15:49:11 -07:00
Keith Grant
19fc0d9a96 reset notification messages to defaults when switching to new type 2020-09-04 15:49:11 -07:00
Keith Grant
ba95775ded add custom notification messages subform 2020-09-04 15:49:11 -07:00
Keith Grant
bb12e0a3a9 fix initial values for notification type fields 2020-09-04 15:49:10 -07:00
Keith Grant
09178dd5f2 flush out notification form and type subform 2020-09-04 15:49:10 -07:00
nixocio
d4b2e1998e List jobs for container groups
Add list of jobs related to container groups.

See: https://github.com/ansible/awx/issues/8091
2020-09-04 16:21:32 -04:00
softwarefactory-project-zuul[bot]
9c90804300 Merge pull request #7974 from AlexSCorey/6561-WFJTDeetsNotLoad
Adds fix to load template details 

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-04 20:16:30 +00:00
softwarefactory-project-zuul[bot]
95b43c0087 Merge pull request #8075 from ryanpetrello/redis-capacity-check
if redis is unreachable, set instance capacity to zero

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-04 20:06:21 +00:00
softwarefactory-project-zuul[bot]
3f8cd21233 Merge pull request #8084 from ryanpetrello/bye-fact-scanning
remove old fact scanning plugins

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-04 19:13:56 +00:00
softwarefactory-project-zuul[bot]
e6cd27a858 Merge pull request #7960 from mabashian/6113-6703-cred-file-upload
Support file upload/drag and drop on credential multiline fields

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-04 17:27:43 +00:00
beeankha
4133ec974b Follow same pattern as project update module, add task to integration test 2020-09-04 13:05:34 -04:00
softwarefactory-project-zuul[bot]
70f1bffe42 Merge pull request #8069 from ryanpetrello/saml-autopop
add the ability to prevent SAML auto-population behavior

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-04 16:01:23 +00:00
mabashian
77c211dabe Adds id's to credential form buttons 2020-09-04 11:22:34 -04:00
Ryan Petrello
013b411a0a remove old fact scanning plugins 2020-09-04 10:51:34 -04:00
beeankha
7764f1c1a5 Fix inv source org lookup, adjust unit tests (since org is now required for inv source module), update README, edit typos in test-related README 2020-09-04 10:19:56 -04:00
beeankha
b78dea3e4b Add wait ability to the module 2020-09-04 10:19:56 -04:00
beeankha
1ce5d7d539 Add ability to look up inventory sources by org name/ID 2020-09-04 10:19:56 -04:00
beeankha
65e0ed8c77 Fix linter error 2020-09-04 10:19:56 -04:00
beeankha
72dbd10c2a Initial commit for new inventory sync module 2020-09-04 10:19:56 -04:00
mabashian
e5f9ed827b Undo commenting out of strictmode 2020-09-04 09:20:09 -04:00
mabashian
2d2108b1de Support file upload/drag and drop on credential multiline fields 2020-09-04 09:20:09 -04:00
Ryan Petrello
b01d204137 if redis is unreachable, set instance capacity to zero 2020-09-03 15:11:53 -04:00
Ryan Petrello
a6d26d7dab add the ability to prevent SAML auto-population behavior 2020-09-03 11:21:14 -04:00
nixocio
adffa29346 Add/Edit Container Groups
Add/Edit container groups.

See: https://github.com/ansible/awx/issues/7955
2020-09-03 10:13:24 -04:00
softwarefactory-project-zuul[bot]
cce66e366f Merge pull request #8065 from e-desouza/patch-1
Update from pip to pip3

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-03 12:35:07 +00:00
Elton de Souza
371276b2e1 Update from pip to pip3 2020-09-02 21:11:46 -04:00
softwarefactory-project-zuul[bot]
77b1afe6fd Merge pull request #7873 from sean-m-sullivan/project_update
Project update

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-02 15:39:23 +00:00
sean-m-sullivan
5478c5f2fb linting 2020-09-02 08:30:09 -05:00
sean-m-sullivan
e2bf3a0287 update to use isdigit and updated tests 2020-09-02 07:19:11 -05:00
sean-m-sullivan
a17eedd9fe update to not use organizations 2020-09-01 23:08:15 -05:00
sean-m-sullivan
d05c7d6cc5 update to use organizations 2020-09-01 22:54:08 -05:00
sean-m-sullivan
6200467629 add label to workflow templates 2020-09-01 18:58:39 -05:00
Sean Sullivan
dbd8431b14 Merge pull request #6 from ansible/devel
Rebase to devel
2020-09-01 18:50:03 -05:00
softwarefactory-project-zuul[bot]
b11a5c1190 Merge pull request #8057 from ryanpetrello/proj-archive-py2
update the project archive sync support for py2 compatibility

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 23:40:58 +00:00
sean-m-sullivan
532447ed40 update to tower module 2020-09-01 16:43:58 -05:00
sean-m-sullivan
300a4510ac update completeness 2020-09-01 15:59:53 -05:00
sean-m-sullivan
d269a6d233 update 2020-09-01 15:59:04 -05:00
softwarefactory-project-zuul[bot]
967e35fec9 Merge pull request #8056 from ryanpetrello/dead-code-ansible-path
remove some dead code

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 20:33:43 +00:00
softwarefactory-project-zuul[bot]
fcd1169093 Merge pull request #8043 from john-westcott-iv/instance_groups_module
Adding tower_instance_group module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 20:26:44 +00:00
sean-m-sullivan
d95d8121f9 added to no endpoints for module 2020-09-01 15:22:14 -05:00
sean-m-sullivan
5d4413041e update to wait 2020-09-01 14:59:16 -05:00
Ryan Petrello
798f6371af update the project archive sync support for py2 compatibility
playbook runs in AWX are still supported in python2, so this adds
support for archive-based project syncs in py2
2020-09-01 15:58:43 -04:00
Sean Sullivan
8e317cabc0 Merge pull request #5 from ansible/devel
Rebase from devel
2020-09-01 14:42:23 -05:00
Ryan Petrello
3edaa6bc14 remove some dead code 2020-09-01 15:26:38 -04:00
softwarefactory-project-zuul[bot]
30616c1fce Merge pull request #8022 from sean-m-sullivan/wait_function
Add wait function and update collection modules

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2020-09-01 19:13:12 +00:00
softwarefactory-project-zuul[bot]
57eed5863a Merge pull request #8054 from ryanpetrello/postgres-upgrade-note
add a changelog note about upgrade issues from 13.0 -> 14.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 19:10:55 +00:00
softwarefactory-project-zuul[bot]
1c55d10d81 Merge pull request #7965 from rebeccahhh/devel
adding in inventory to unified job query for analytics

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 18:27:27 +00:00
Ryan Petrello
f4af74dabe renumber the Galaxy credential migration after rebasing 2020-09-01 13:47:26 -04:00
Elyézer Rezende
ad85b176f4 Update awxkit to manage Org galaxy credentials
Add the new credential type to the Credential page object and helper
methods to manage the galaxy credentials for the Organization page
object.
2020-09-01 13:46:47 -04:00
Ryan Petrello
4046b18eff properly handle Galaxy credentials if a Project is orphaned 2020-09-01 13:45:03 -04:00
Ryan Petrello
a869d7da35 create the Galaxy credential for new installs in the preload script
doing this in the migration *before* any Organizations actually exist
is stirring up RBAC dragons that I don't have time to fight

this commit means that *new* installs will pre-create the default
Galaxy (public) credential in create_preload_data, while
*upgraded/migrated* installs will do so via the migration
2020-09-01 13:45:03 -04:00
mabashian
895010c675 Pre-populate organization galaxy credential field with the default galaxy credential 2020-09-01 13:45:03 -04:00
Ryan Petrello
a30ca9c19c don't run ansible-galaxy installs if there are no Galaxy credentials 2020-09-01 13:45:03 -04:00
Ryan Petrello
30da93a64e update the help text for Organization galaxy credentials 2020-09-01 13:45:03 -04:00
mabashian
458807c0c7 Add a Galaxy Credential multi-select field to the Organizations form 2020-09-01 13:45:03 -04:00
Ryan Petrello
011822b1f0 make a global "managed by AWX/Tower" Credential to represent Galaxy 2020-09-01 13:45:03 -04:00
Ryan Petrello
e5552b547b properly migrate settings.FALLBACK_GALAXY_SERVERS 2020-09-01 13:45:02 -04:00
Ryan Petrello
1b4dd7c783 enforce Organization ownership of Galaxy credentials 2020-09-01 13:45:02 -04:00
Ryan Petrello
25a9a9c3ba remove redaction exclusions for Galaxy URLs (basic auth support is gone) 2020-09-01 13:45:02 -04:00
Ryan Petrello
130e279012 add a data migration for Galaxy credentials
see: https://github.com/ansible/awx/issues/7813
2020-09-01 13:45:02 -04:00
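Editor's note: for reference, the general shape of a Django data migration like this one. The dependency and body below are placeholders, not the AWX migration itself (which, per the renumbering commit above, had to be renumbered after rebasing):

```python
from django.db import migrations

def create_galaxy_credentials(apps, schema_editor):
    # Use the historical model inside migrations, never a direct import.
    Organization = apps.get_model("main", "Organization")
    for org in Organization.objects.all():
        pass  # look up/create the managed Galaxy credential and attach it

class Migration(migrations.Migration):
    dependencies = [("main", "XXXX_previous")]  # placeholder name
    operations = [
        migrations.RunPython(create_galaxy_credentials,
                             migrations.RunPython.noop),
    ]
```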
Ryan Petrello
b8e0d087e5 add model support, an API, and a migration for Org -> Galaxy credentials
see: https://github.com/ansible/awx/issues/7813
2020-09-01 13:44:59 -04:00
softwarefactory-project-zuul[bot]
8996d0a464 Merge pull request #7763 from chrismeyersfsu/feature-inv_plugin_conf
Feature inv plugin conf

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 17:44:22 +00:00
Ryan Petrello
40ac719d6d add a changelog note about upgrade issues from 13.0 -> 14.0 2020-09-01 13:32:09 -04:00
John Westcott IV
5f29b4bc18 Fixing typo 2020-09-01 13:17:24 -04:00
Ryan Petrello
059999c7c3 update the inventory source module to respect new fields 2020-09-01 13:15:07 -04:00
Chris Meyers
924273f589 more get_ansible_version test failure fixes 2020-09-01 12:50:59 -04:00
Chris Meyers
72fc314da1 flake8 fix 2020-09-01 12:50:59 -04:00
Chris Meyers
043a7f8599 more get_ansible_version removal 2020-09-01 12:50:59 -04:00
Chris Meyers
a6712cfd60 bump inv plugin migration to avoid conflict 2020-09-01 12:50:59 -04:00
Alan Rominger
99aff93930 Get rid of ansible version checking 2020-09-01 12:50:59 -04:00
Chris Meyers
03ad1aa141 remove backwards migration support for inv plugins
* Do not write out inventory source_vars to a file on disk as they _may_
contain sensitive information. This also removes support for backwards
migrations, which is fine: backwards migration is really only useful
during development.
2020-09-01 12:50:59 -04:00
Jake McDermott
dcf5917a4e Check that host_filter is regexp 2020-09-01 12:50:58 -04:00
Jake McDermott
f04aff81c4 Add details to inv source field help text 2020-09-01 12:50:58 -04:00
Chris Meyers
a9cdf07690 push global invsource fields onto invsource obj 2020-09-01 12:50:58 -04:00
Chris Meyers
d518891520 inventory plugin inventory parameter take precedence over env vars 2020-09-01 12:50:58 -04:00
Chris Meyers
48fb1e973c overwrite plugin: at runtime
* Before, we were re-writing `plugin:` when users updated the
InventorySource via the API. Now, we just override at run-time. This
makes for a more sane API interaction.
2020-09-01 12:50:58 -04:00
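Editor's note: a minimal sketch of the run-time override described above. The stored source_vars stay exactly as the user submitted them, and the `plugin:` key is only forced in when the config is handed to the inventory plugin (names illustrative):

```python
def build_plugin_config(source_vars, plugin_name):
    config = dict(source_vars or {})  # copy; the API-stored value is untouched
    config["plugin"] = plugin_name    # override only at run-time
    return config

build_plugin_config({"regions": ["us-east-1"]}, "amazon.aws.aws_ec2")
# -> {'regions': ['us-east-1'], 'plugin': 'amazon.aws.aws_ec2'}
```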
Chris Meyers
c7794fc3e4 fix missed import 2020-09-01 12:50:58 -04:00
Jake McDermott
2fdeba47a5 Add new common inventory source fields 2020-09-01 12:50:58 -04:00
Chris Meyers
12cf607e8a simplify InventorySourceOption inheritance hack 2020-09-01 12:50:58 -04:00
Chris Meyers
7d4493e109 rename inventory migration 2020-09-01 12:50:57 -04:00
Jake McDermott
b253540047 Add new common inventory source fields 2020-09-01 12:50:57 -04:00
Jake McDermott
42e70bc852 Delete inventory source fields 2020-09-01 12:50:57 -04:00
Jake McDermott
dce946e93f Delete inventory source fields 2020-09-01 12:50:57 -04:00
Chris Meyers
2eec1317bd safer migrations 2020-09-01 12:50:57 -04:00
Chris Meyers
b7efad5640 do not enforce plugin: for source=scm
* InventorySource w/ source type scm point to an inventory file via
source_file. source_vars are ignored.
2020-09-01 12:50:57 -04:00
Chris Meyers
35d264d7f8 forgot to add migration helper
* move models/inventory.py plugin injector logic to a place frozen in
time useable by the migration code.
2020-09-01 12:50:57 -04:00
Chris Meyers
34adbe6028 fix tests for new inv plugin behavior
* Enforce plugin:
2020-09-01 12:50:57 -04:00
Chris Meyers
a8a47f314e remove source_regions 2020-09-01 12:50:56 -04:00
Chris Meyers
f32716a0f1 remove instance_filter 2020-09-01 12:50:56 -04:00
Chris Meyers
7278e7c025 remove group_by from inventory source
* Does not remove group_by testing
2020-09-01 12:50:56 -04:00
Chris Meyers
e11040f421 migrate to new style inv plugin 2020-09-01 12:50:53 -04:00
sean-m-sullivan
9f3635be07 update test to timeout message change 2020-09-01 10:45:41 -05:00
sean-m-sullivan
50637807fc fixed typo 2020-09-01 10:38:15 -05:00
John Westcott IV
d01f2d6caf Converting policy_instance_list from dict to list 2020-09-01 11:02:39 -04:00
softwarefactory-project-zuul[bot]
2fa8b7e594 Merge pull request #8007 from ryanpetrello/defer-artifacts
defer loading Job.artifacts and .extra_vars on host views to improve performance

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 14:08:00 +00:00
John Westcott IV
36ab0dd03e Fixing white spaces 2020-09-01 09:24:36 -04:00
softwarefactory-project-zuul[bot]
671c571628 Merge pull request #8047 from AlanCoding/less_mounting
Remove more special access to folders outside job private_data_dir

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-09-01 02:42:15 +00:00
Alan Rominger
0b371b4340 Copy library folder to job private data dir
Remove inventory scripts show because they no longer exist

Remove reference to non-existent callback directory

Remove more references to removed path
2020-08-31 22:14:59 -04:00
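Editor's note: a sketch of the copy-instead-of-mount idea, assuming the conventional `library/` layout inside a project checkout (paths illustrative):

```python
import os
import shutil

def stage_library(project_path, private_data_dir):
    src = os.path.join(project_path, "library")
    if os.path.isdir(src):
        dst = os.path.join(private_data_dir, "project", "library")
        shutil.copytree(src, dst)  # the job reads its own copy, not a shared mount
```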
John Westcott IV
72bdd17518 Fixing regression 2020-08-31 16:56:42 -04:00
John Westcott IV
574c3b65b2 Converting from string to array 2020-08-31 16:56:42 -04:00
John Westcott IV
5a8bcd357b Fixing linting issues 2020-08-31 16:56:42 -04:00
John Westcott IV
1bd8f4ad3e Removing tower_instance_group from completeness 2020-08-31 16:56:42 -04:00
John Westcott IV
2369bcb25c Removing forward feature needed for testing locally 2020-08-31 16:56:42 -04:00
John Westcott IV
9e29dd08fb Adding tower_instance_group module 2020-08-31 16:56:42 -04:00
sean-m-sullivan
cd45cfec30 updated doc and pep8 2020-08-31 15:53:54 -05:00
sean-m-sullivan
d9d454d435 Merge branch 'wait_function' of github.com:sean-m-sullivan/awx into wait_function 2020-08-31 15:24:40 -05:00
sean-m-sullivan
0bc927820b updated legacy messages 2020-08-31 15:24:32 -05:00
softwarefactory-project-zuul[bot]
e1095a0a94 Merge pull request #8046 from nixocio/ui_fix_date_projects
Update date format for project list item

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2020-08-31 18:46:51 +00:00
Sean Sullivan
d0ab307787 Merge pull request #4 from ansible/devel
Rebase from Dev
2020-08-31 13:45:49 -05:00
softwarefactory-project-zuul[bot]
6c9e417eb9 Merge pull request #7730 from mabashian/7339-test-button
Adds support for a Test button on the credential form when the credential type is 'external'

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-31 18:43:00 +00:00
softwarefactory-project-zuul[bot]
2837eb7027 Merge pull request #8049 from john-westcott-iv/fix_notification_tests
Cleaning up tower_notification references

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-31 18:19:12 +00:00
softwarefactory-project-zuul[bot]
ad28a36cdf Merge pull request #8048 from AlanCoding/notification_name
correct name of tower_notification redirect

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-31 16:20:37 +00:00
John Westcott IV
ff4ed64978 Cleaning up tower_notification references 2020-08-31 11:57:01 -04:00
Alan Rominger
b84343d292 correct name of tower_notification redirect 2020-08-31 10:18:08 -04:00
nixocio
970ecde0ea Update date format for project list item
Update date format for project list item.

See: https://github.com/ansible/awx/issues/7694
2020-08-30 20:58:58 -04:00
softwarefactory-project-zuul[bot]
ddad5095a4 Merge pull request #8024 from velzend/regsiter_typo
typo: regsiter -> register

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-30 19:53:33 +00:00
softwarefactory-project-zuul[bot]
730cabe597 Merge pull request #8038 from john-westcott-iv/check_token_pre_del
Adding check that we are authenticated and also have a token

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-30 17:54:13 +00:00
softwarefactory-project-zuul[bot]
07ebf677de Merge pull request #8039 from john-westcott-iv/transaction_check
Adding transaction to mock requests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-30 17:54:07 +00:00
softwarefactory-project-zuul[bot]
c7f4c4bdc1 Merge pull request #8029 from AlanCoding/project_cwd
Run project updates from a copy of the playbook vendoring directory

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-29 03:48:01 +00:00
mabashian
9d511a4c04 Fix id on credential select fields 2020-08-28 17:02:25 -04:00
softwarefactory-project-zuul[bot]
bd4b009bea Merge pull request #8035 from nixocio/ui_update_instance_groups
Update instance groups

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-28 20:37:34 +00:00
mabashian
ae4f1a15d3 Add ID's to the buttons in the external test modal for cred form 2020-08-28 15:43:44 -04:00
mabashian
e93aa34864 Adds support for a Test button on the credential form when the credential type is 'external' 2020-08-28 15:43:44 -04:00
softwarefactory-project-zuul[bot]
bebd882688 Merge pull request #8040 from rooftopcellist/be_more_accepting
Accept all responses <300 from Insights API

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-28 19:43:27 +00:00
Christian M. Adams
4ea648307e Accept all responses <300 from Insights API 2020-08-28 12:45:50 -04:00
John Westcott IV
feb9bcff4d Adding transaction to mock requests 2020-08-28 12:43:33 -04:00
softwarefactory-project-zuul[bot]
40603c213a Merge pull request #8037 from bbayszczak/fix_typo_comment_hashivault
[credential_plugin/hashivault] fix typo

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-28 16:40:46 +00:00
Benoit Bayszczak
e8b54abec4 [credential_plugin/hashivault] edit tests 2020-08-28 17:39:37 +02:00
Benoit Bayszczak
878b754d9f [credential_plugin/hashivault] fix typo 2020-08-28 17:33:19 +02:00
Benoit Bayszczak
16fdf0e28f [credential_plugin/hashivault] add approle_auth_path in inputs 2020-08-28 17:22:07 +02:00
nixocio
21330a54cb Update instance groups
* Simplify the criteria for an instance group to be considered unavailable
* Round values for used capacity

See: https://github.com/ansible/awx/issues/7467
2020-08-28 11:07:03 -04:00
John Westcott IV
51f4aa2b48 Adding check that we are authenticated and also have a token 2020-08-28 11:04:15 -04:00
softwarefactory-project-zuul[bot]
fe5fb0c523 Merge pull request #7997 from mabashian/7480-webhook-disable
Fixes bug where users were unable to turn webhooks off when editing templates

Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
             https://github.com/tiagodread
2020-08-28 14:27:56 +00:00
sean-m-sullivan
b3ec080e08 updated output 2020-08-28 08:25:21 -05:00
sean-m-sullivan
fd77a8aca5 updated output 2020-08-28 08:22:44 -05:00
sean-m-sullivan
7bd3f9d63c updated to error if finished not in result 2020-08-28 07:37:29 -05:00
sean-m-sullivan
d971375907 updated to error if finished not in result 2020-08-28 07:35:13 -05:00
sean-m-sullivan
007b0d841e updated parameters and errors 2020-08-28 07:19:39 -05:00
softwarefactory-project-zuul[bot]
aa637d515a Merge pull request #7929 from nixocio/ui_associate_instances
Associate instances to instance groups

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-27 17:49:00 +00:00
softwarefactory-project-zuul[bot]
ad3e2cbfcd Merge pull request #8028 from dsesami/8027-workflow-sparkline
Change WFJT details sparkline hyperlink

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-27 16:29:42 +00:00
softwarefactory-project-zuul[bot]
f1ee44b6c2 Merge pull request #8016 from john-westcott-iv/inventory_insights
Adding insights credential to tower_inventory

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-27 16:14:37 +00:00
Daniel Sami
dc7e721968 Change WFJT details sparkline hyperlink 2020-08-27 11:54:38 -04:00
nixocio
632204de83 Associate instances to instance groups
Associate instances to instance groups.

See: https://github.com/ansible/awx/issues/7801
2020-08-27 11:33:45 -04:00
Alan Rominger
e811711a49 Put project playbook in runner project folder 2020-08-27 11:07:10 -04:00
John Westcott IV
64d98a120c Removing debugging end play 2020-08-27 09:47:48 -04:00
Benoit Bayszczak
cf5d1a2d03 restore previous tests as we need to keep backward compatibility
This reverts commit 7c8e5ace52.
2020-08-27 11:06:14 +02:00
tp48cf
5cd12b8088 typo: regsiter -> register 2020-08-27 09:23:41 +02:00
sean-m-sullivan
49e2a3fa5a update documentation 2020-08-26 23:44:31 -05:00
sean-m-sullivan
0c18587851 update to use time function 2020-08-26 23:30:18 -05:00
softwarefactory-project-zuul[bot]
f18d9212cb Merge pull request #7987 from nixocio/ui_add_adv_search
Add advanced search keys for InstanceGroup and CredentialType Lists

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-27 02:55:19 +00:00
softwarefactory-project-zuul[bot]
9b353c70f3 Merge pull request #8019 from mabashian/7663-workflow-viz-button
Adds visualizer button to workflow template rows on templates list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-27 02:54:03 +00:00
Sean Sullivan
243c2cfe15 Merge pull request #3 from ansible/devel
rebase to devel
2020-08-26 20:53:39 -05:00
beeankha
0f3aefe592 Fix pep8 errors, rebase 2020-08-26 19:51:10 -04:00
John Westcott IV
7afd84dc49 Resolving issue in test_completeness 2020-08-26 19:50:04 -04:00
John Westcott IV
1b1a14f220 Fixing linting spacing issue 2020-08-26 19:50:04 -04:00
John Westcott IV
2690fcec31 Adding insights credential to tower_inventory
Fixed a spacing issue in documentation

Enhanced integration test to include a block to ensure cleanup of assets
2020-08-26 19:50:04 -04:00
softwarefactory-project-zuul[bot]
9323156f4c Merge pull request #7983 from beeankha/collections_pylint_fixes
Fix pylint Errors in Collections Modules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-26 22:31:22 +00:00
mabashian
73d21a01cb Adds visualizer button to workflow template rows on templates list 2020-08-26 17:03:48 -04:00
Rebeccah
9a6f641df0 adding in inventory to unified job query for analytics 2020-08-26 16:32:02 -04:00
beeankha
3ddee3072b Add errors to ignore file, remove noqa directives 2020-08-26 16:09:42 -04:00
softwarefactory-project-zuul[bot]
1514a5ac23 Merge pull request #7963 from john-westcott-iv/tools
Collections Completeness Tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-26 19:34:52 +00:00
softwarefactory-project-zuul[bot]
ccb1e0a748 Merge pull request #8008 from mabashian/8002-plurals
Use low level i18n._ instead of i18n.plural for plurals on the schedule form

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2020-08-26 18:11:04 +00:00
Ryan Petrello
73baf3fcf9 defer loading Job.artifacts on host views to improve performance
see: https://github.com/ansible/awx/issues/8006

this data can be *really* large, and we don't actually need it for the summary fields on this API endpoint
2020-08-26 13:34:15 -04:00
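A minimal sketch of the `defer()` optimization this commit describes, assuming Django's ORM and the queryset shape shown in the serializer diff further down; `host` is a hypothetical Host instance:

```python
# A minimal sketch, assuming Django's ORM; `host` is a hypothetical Host.
def recent_summaries(host):
    return (
        host.job_host_summaries
        .select_related('job__job_template')
        .order_by('-created')
        .defer('job__extra_vars', 'job__artifacts')[:5]
    )

# defer() leaves the named columns out of the SELECT; Django fetches them
# (via an extra query) only if something accesses those attributes, so the
# potentially huge artifacts/extra_vars blobs are never loaded here.
```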
mabashian
c731e4282b Use low level i18n._ instead of i18n.plural for plurals on the schedule form. This fixes a bug where the page would crash on static builds when a frequency was selected. 2020-08-26 13:18:07 -04:00
softwarefactory-project-zuul[bot]
57949078bb Merge pull request #7945 from beeankha/tower_role_id_fix
Get tower_role Module to Accept IDs for Related Objects

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-26 14:21:32 +00:00
John Westcott IV
2b6e4fe353 Permissions change, spell check, pep8, etc 2020-08-26 10:16:37 -04:00
John Westcott IV
b7b304eb84 Changing how we check modules
Options which are not in the API POST and are marked in the module as deprecated are ignored

If an option is not in the API POST but is marked as a list, we assume it's a relation
2020-08-26 10:16:37 -04:00
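A hedged sketch of the classification rule described in this commit; every name below is hypothetical, not the actual test_completeness implementation:

```python
# Hypothetical helper illustrating the rule; not the real test code.
def classify_option(name, module_option, api_post_fields):
    if name in api_post_fields:
        return 'api field'
    if module_option.get('deprecated'):
        return 'ignored'     # deprecated and absent from the API POST
    if module_option.get('type') == 'list':
        return 'relation'    # assumed to name related objects
    return 'needs review'

print(classify_option('notification_templates',
                      {'type': 'list'}, {'name', 'description'}))  # relation
```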
John Westcott IV
8f97109ac7 Fixing PY2 compile issues 2020-08-26 10:16:37 -04:00
John Westcott IV
730a9b25ac Fixing pep8 and other issues 2020-08-26 10:16:37 -04:00
John Westcott IV
a36d942f67 Migrating old test_notification test to test_notification_template 2020-08-26 10:16:37 -04:00
John Westcott IV
b1ffbf1e39 Change module from tower_notification to tower_notification_template 2020-08-26 10:16:37 -04:00
John Westcott IV
f8290f0ce3 Fixing flake8 and adding more module OKs 2020-08-26 10:16:37 -04:00
John Westcott IV
41ca20dc99 Changing scm_credential to credential to match API 2020-08-26 10:16:37 -04:00
John Westcott IV
3b6152a380 Changing job_timeout to an alias of timeout to match the API 2020-08-26 10:16:37 -04:00
John Westcott IV
1ad4c4ab86 Fixing pep8 issues 2020-08-26 10:16:37 -04:00
John Westcott IV
9353e94629 Spell checking 2020-08-26 10:16:37 -04:00
John Westcott IV
85a1233764 Adding read-only endpoint check, fixing typo, adding needs_development check 2020-08-26 10:16:37 -04:00
John Westcott IV
c2cdd8e403 Initial commit of test_completeness 2020-08-26 10:16:37 -04:00
softwarefactory-project-zuul[bot]
96e1920d36 Merge pull request #7988 from nixocio/add_more_styles_delete_button
Update styles to delete button to be secondary 

Reviewed-by: Michael Abashian
             https://github.com/mabashian
2020-08-26 14:02:46 +00:00
tp48cf
d9e09f482d allow skipping provision instance and register queue 2020-08-26 15:56:01 +02:00
sean-m-sullivan
bed3a9ee41 added id lookup 2020-08-26 08:11:51 -05:00
Benoit Bayszczak
7c8e5ace52 fix tests 2020-08-26 11:14:04 +02:00
softwarefactory-project-zuul[bot]
b8a04f05d1 Merge pull request #7992 from AlexSCorey/RemovesColonOnListItem
Removes colon from survey list item

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-26 00:48:50 +00:00
softwarefactory-project-zuul[bot]
2b18eee92a Merge pull request #7969 from nixocio/ui_add_isolate_job_label
Add label to identify isolated instance groups on the JobDetail screen

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-08-26 00:48:43 +00:00
mabashian
8fac722b10 Fixes bug where users were unable to turn webhooks off when editing job templates/workflow job templates 2020-08-25 16:02:55 -04:00
beeankha
4bc1a128ec Add noqa directive for super calls 2020-08-25 15:49:20 -04:00
beeankha
da732a3941 Add unit test, update lookup method to favor ID over name 2020-08-25 14:57:02 -04:00
Alex Corey
c0a2a69835 removes colon from survey list item 2020-08-25 13:32:57 -04:00
Benoit Bayszczak
00fc5f6b93 hashivault_kv auth_path moved from metadata to inputs
The auth_path is used with the approle auth method
It's not linked to the secret we are reading but to the auth method,
this parameter has to be moved to inputs
2020-08-25 18:01:09 +02:00
nixocio
6794f331c3 Update styles to delete button to be secondary
Update styles to delete button to be secondary.

See: https://github.com/ansible/awx/issues/7041
Also: https://github.com/ansible/awx/issues/7984
2020-08-25 10:39:28 -04:00
sean-m-sullivan
13f3292af0 updated test 2020-08-25 07:04:27 -05:00
nixocio
de130eb798 Add advanced search keys for InstanceGroup and CredentialType Lists
Add advanced search keys for `InstanceGroup` and `CredentialType` Lists.

See: https://github.com/ansible/awx/pull/7895/files
2020-08-24 21:27:44 -04:00
beeankha
75a0c0ab1e Prioritize integer names over IDs more effectively 2020-08-24 17:13:22 -04:00
beeankha
54317236f3 Fix pep8 error, remove redundant var assignment 2020-08-24 17:13:22 -04:00
John Westcott IV
0fd618d88b Changing test suite and fixing user issue in tower_api 2020-08-24 17:13:21 -04:00
beeankha
e9d66df77a Initial commit for tower_role name vs id lookup bug fix 2020-08-24 17:13:21 -04:00
beeankha
ef7a74c4a3 Initial commit of pylint fixes 2020-08-24 16:47:16 -04:00
Alex Corey
90e8d5697e Adds fix to load template details screen even for users without webhook key permissions 2020-08-24 11:39:29 -04:00
nixocio
447bde95e3 Add label to identify isolated instance groups on the JobDetail screen
Add a label identifying isolated instance groups on the JobDetail screen, as per
the mockup.

See: https://tower-mockups.testing.ansible.com/patternfly/jobs/jobs-detail/
2020-08-23 14:58:06 -04:00
Sean Sullivan
cda05c4f03 Updated test for lint
Updated test for lint
2020-08-22 20:16:27 -05:00
sean-m-sullivan
3794f095cf update to tower_project_update 2020-08-22 13:43:14 -05:00
Sean Sullivan
d6815e5114 Removed reference to sync
Updated references to sync in favour of update.
2020-08-20 10:30:43 -05:00
sean-m-sullivan
69dc0a892f fix lint 2020-08-11 09:56:08 -05:00
sean-m-sullivan
813e38636a fix documentation 2020-08-11 09:38:18 -05:00
sean-m-sullivan
262b2bf8ff Merge branch 'project_update' of github.com:sean-m-sullivan/awx into project_update 2020-08-11 09:28:31 -05:00
sean-m-sullivan
af1fc5a9e9 add tests 2020-08-11 09:28:17 -05:00
Sean Sullivan
8af315cf29 Create tower_project_update
Add module to update tower project and wait until synced.
2020-08-11 08:54:10 -05:00
Sean Sullivan
7089c5f06e Merge pull request #2 from ansible/devel
Rebase
2020-08-11 07:56:11 -05:00
Sean Sullivan
104073af45 Merge pull request #1 from ansible/devel
Merging remote to devel
2020-06-09 07:50:36 -05:00
446 changed files with 15526 additions and 6239 deletions

View File

@@ -2,12 +2,32 @@
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
## 15.0.0 (September 30, 2020)
- AWX now utilizes a version of certifi that auto-discovers certificates in the system certificate store - https://github.com/ansible/awx/pull/8242
- Added support for arbitrary custom inventory plugin configuration: https://github.com/ansible/awx/issues/5150
- Added improved support for fetching Ansible collections from private Galaxy content sources (such as https://github.com/ansible/galaxy_ng) - https://github.com/ansible/awx/issues/7813
- Added an optional setting to disable the auto-creation of organizations and teams on successful SAML login. - https://github.com/ansible/awx/pull/8069
- Added a number of optimizations to Ansible Tower's callback receiver to improve the speed of stdout processing for simultaneous playbook runs - https://github.com/ansible/awx/pull/8193 https://github.com/ansible/awx/pull/8191
- Added the ability to use `!include` and `!import` constructors when constructing YAML for use with the AWX CLI - https://github.com/ansible/awx/issues/8135
- Fixed a bug that prevented certain users from being able to edit approval nodes in Workflows - https://github.com/ansible/awx/pull/8253
- Fixed a bug that broke password prompting for credentials in certain cases - https://github.com/ansible/awx/issues/8202
- Fixed a bug which can cause PostgreSQL deadlocks when running many parallel playbooks against large shared inventories - https://github.com/ansible/awx/issues/8145
- Fixed a bug which can cause delays in Ansible Tower's task manager when large numbers of simultaneous jobs are scheduled - https://github.com/ansible/awx/issues/7655
- Fixed a bug which can cause certain scheduled jobs - those that run every X minute(s) or hour(s) - to fail to run at the proper time - https://github.com/ansible/awx/issues/8071
- Fixed a performance issue for playbooks that store large amounts of data using the `set_stats` module - https://github.com/ansible/awx/issues/8006
- Fixed a bug related to AWX's handling of the auth_path argument for the HashiVault KeyValue credential plugin - https://github.com/ansible/awx/pull/7991
- Fixed a bug that broke support for Remote Archive SCM Type project syncs on platforms that utilize Python2 - https://github.com/ansible/awx/pull/8057
- Updated to the latest version of Django Rest Framework.
- Updated to the latest version of Django to address CVE-2020-24583 and CVE-2020-24584
- Updated to the latest version of channels_redis to address a bug that causes Daphne processes to slowly leak memory over time - https://github.com/django/channels_redis/issues/212
## 14.1.0 (Aug 25, 2020)
- AWX images can now be built on ARM64 - https://github.com/ansible/awx/pull/7607
- Added the Remote Archive SCM Type to support using immutable artifacts and releases (such as tarballs and zip files) as projects - https://github.com/ansible/awx/issues/7954
- Deprecated official support for Mercurial-based project updates - https://github.com/ansible/awx/issues/7932
- Added resource import/export support to the official AWX collection - https://github.com/ansible/awx/issues/7329
- Added the ability to import YAML-based resources (instead of just JSON) when using the AWX CLI - https://github.com/ansible/awx/pull/7808
- Users upgrading from older versions of AWX may encounter an issue that causes their postgres container to restart in a loop (https://github.com/ansible/awx/issues/7854) - if you encounter this, bring your containers down and then back up (e.g., `docker-compose down && docker-compose up -d`) after upgrading to 14.1.0.
- Updated the AWX CLI to export labels associated with Workflow Job Templates - https://github.com/ansible/awx/pull/7847
- Updated to the latest python-ldap to address a bug - https://github.com/ansible/awx/issues/7868
- Upgraded git-python to fix a bug that caused workflows to sometimes fail - https://github.com/ansible/awx/issues/6119

View File

@@ -80,7 +80,7 @@ For Linux platforms, refer to the following from Docker:
If you're not using Docker for Mac, or Docker for Windows, you may need, or choose to, install the Docker compose Python module separately, in which case you'll need to run the following:
```bash
(host)$ pip install docker-compose
(host)$ pip3 install docker-compose
```
#### Frontend Development

View File

@@ -1 +1 @@
14.1.0
15.0.0

View File

@@ -23,7 +23,7 @@ from rest_framework.request import clone_request
# AWX
from awx.api.fields import ChoiceNullField
from awx.main.fields import JSONField, ImplicitRoleField
from awx.main.models import InventorySource, NotificationTemplate
from awx.main.models import NotificationTemplate
from awx.main.scheduler.kubernetes import PodManager
@@ -115,19 +115,6 @@ class Metadata(metadata.SimpleMetadata):
if getattr(field, 'write_only', False):
field_info['write_only'] = True
# Special handling of inventory source_region choices that vary based on
# selected inventory source.
if field.field_name == 'source_regions':
for cp in ('azure_rm', 'ec2', 'gce'):
get_regions = getattr(InventorySource, 'get_%s_region_choices' % cp)
field_info['%s_region_choices' % cp] = get_regions()
# Special handling of group_by choices for EC2.
if field.field_name == 'group_by':
for cp in ('ec2',):
get_group_by_choices = getattr(InventorySource, 'get_%s_group_by_choices' % cp)
field_info['%s_group_by_choices' % cp] = get_group_by_choices()
# Special handling of notification configuration where the required properties
# are conditional on the type selected.
if field.field_name == 'notification_configuration':

View File

@@ -1269,6 +1269,7 @@ class OrganizationSerializer(BaseSerializer):
object_roles = self.reverse('api:organization_object_roles_list', kwargs={'pk': obj.pk}),
access_list = self.reverse('api:organization_access_list', kwargs={'pk': obj.pk}),
instance_groups = self.reverse('api:organization_instance_groups_list', kwargs={'pk': obj.pk}),
galaxy_credentials = self.reverse('api:organization_galaxy_credentials_list', kwargs={'pk': obj.pk}),
))
return res
@@ -1702,7 +1703,10 @@ class HostSerializer(BaseSerializerWithVariables):
'type': j.job.job_type_name,
'status': j.job.status,
'finished': j.job.finished,
} for j in obj.job_host_summaries.select_related('job__job_template').order_by('-created')[:5]])
} for j in obj.job_host_summaries.select_related('job__job_template').order_by('-created').defer(
'job__extra_vars',
'job__artifacts',
)[:5]])
return d
def _get_host_port_from_name(self, name):
@@ -1934,7 +1938,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
class Meta:
fields = ('*', 'source', 'source_path', 'source_script', 'source_vars', 'credential',
'source_regions', 'instance_filters', 'group_by', 'overwrite', 'overwrite_vars',
'enabled_var', 'enabled_value', 'host_filter', 'overwrite', 'overwrite_vars',
'custom_virtualenv', 'timeout', 'verbosity')
def get_related(self, obj):
@@ -1954,7 +1958,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
return ret
def validate(self, attrs):
# TODO: Validate source, validate source_regions
# TODO: Validate source
errors = {}
source = attrs.get('source', self.instance and self.instance.source or '')
@@ -2533,10 +2537,11 @@ class CredentialTypeSerializer(BaseSerializer):
class CredentialSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete', 'copy', 'use']
capabilities_prefetch = ['admin', 'use']
managed_by_tower = serializers.ReadOnlyField()
class Meta:
model = Credential
fields = ('*', 'organization', 'credential_type', 'inputs', 'kind', 'cloud', 'kubernetes')
fields = ('*', 'organization', 'credential_type', 'managed_by_tower', 'inputs', 'kind', 'cloud', 'kubernetes')
extra_kwargs = {
'credential_type': {
'label': _('Credential Type'),
@@ -2600,6 +2605,13 @@ class CredentialSerializer(BaseSerializer):
return summary_dict
def validate(self, attrs):
if self.instance and self.instance.managed_by_tower:
raise PermissionDenied(
detail=_("Modifications not allowed for managed credentials")
)
return super(CredentialSerializer, self).validate(attrs)
def get_validation_exclusions(self, obj=None):
ret = super(CredentialSerializer, self).get_validation_exclusions(obj)
for field in ('credential_type', 'inputs'):
@@ -2607,6 +2619,17 @@ class CredentialSerializer(BaseSerializer):
ret.remove(field)
return ret
def validate_organization(self, org):
if (
self.instance and
self.instance.credential_type.kind == 'galaxy' and
org is None
):
raise serializers.ValidationError(_(
"Galaxy credentials must be owned by an Organization."
))
return org
def validate_credential_type(self, credential_type):
if self.instance and credential_type.pk != self.instance.credential_type.pk:
for related_objects in (
@@ -2671,6 +2694,15 @@ class CredentialSerializerCreate(CredentialSerializer):
if attrs.get('team'):
attrs['organization'] = attrs['team'].organization
if (
'credential_type' in attrs and
attrs['credential_type'].kind == 'galaxy' and
list(owner_fields) != ['organization']
):
raise serializers.ValidationError({"organization": _(
"Galaxy credentials must be owned by an Organization."
)})
return super(CredentialSerializerCreate, self).validate(attrs)
def create(self, validated_data):
@@ -4125,7 +4157,10 @@ class JobLaunchSerializer(BaseSerializer):
# verify that credentials (either provided or existing) don't
# require launch-time passwords that have not been provided
if 'credentials' in accepted:
launch_credentials = accepted['credentials']
launch_credentials = Credential.unique_dict(
list(template_credentials.all()) +
list(accepted['credentials'])
).values()
else:
launch_credentials = template_credentials
passwords = attrs.get('credential_passwords', {}) # get from original attrs
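Taken together, the serializer changes above prevent clients from modifying managed credentials and from stripping the organization off a Galaxy credential. A hedged sketch of how the managed-credential guard looks from the API side; the host, id, and token are placeholders:

```python
import requests

# Placeholders only: substitute a real host, credential id, and token.
resp = requests.patch(
    'https://awx.example.com/api/v2/credentials/42/',
    headers={'Authorization': 'Bearer <TOKEN>'},
    json={'name': 'renamed'},
)
# For a credential with managed_by_tower=True, validate() raises
# PermissionDenied ("Modifications not allowed for managed credentials"),
# which DRF renders as an HTTP 403 response.
print(resp.status_code)  # expected: 403
```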

View File

@@ -21,6 +21,7 @@ from awx.api.views import (
OrganizationNotificationTemplatesSuccessList,
OrganizationNotificationTemplatesApprovalList,
OrganizationInstanceGroupsList,
OrganizationGalaxyCredentialsList,
OrganizationObjectRolesList,
OrganizationAccessList,
OrganizationApplicationList,
@@ -49,6 +50,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/notification_templates_approvals/$', OrganizationNotificationTemplatesApprovalList.as_view(),
name='organization_notification_templates_approvals_list'),
url(r'^(?P<pk>[0-9]+)/instance_groups/$', OrganizationInstanceGroupsList.as_view(), name='organization_instance_groups_list'),
url(r'^(?P<pk>[0-9]+)/galaxy_credentials/$', OrganizationGalaxyCredentialsList.as_view(), name='organization_galaxy_credentials_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', OrganizationObjectRolesList.as_view(), name='organization_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/access_list/$', OrganizationAccessList.as_view(), name='organization_access_list'),
url(r'^(?P<pk>[0-9]+)/applications/$', OrganizationApplicationList.as_view(), name='organization_applications_list'),
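The new route behaves like the other sub-list attach/detach endpoints. A hedged sketch of associating a Galaxy credential with an organization, assuming the usual AWX convention of POSTing an existing object's id to the sub-list; host and ids are placeholders:

```python
import requests

# Placeholders only: substitute a real host, organization id, and token.
resp = requests.post(
    'https://awx.example.com/api/v2/organizations/1/galaxy_credentials/',
    headers={'Authorization': 'Bearer <TOKEN>'},
    json={'id': 7},  # id of an existing credential to attach
)
# is_valid_relation() (see the views diff later in this compare) rejects
# anything whose kind is not 'galaxy_api_token', returning a 400 error.
```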

View File

@@ -124,6 +124,7 @@ from awx.api.views.organization import ( # noqa
OrganizationNotificationTemplatesSuccessList,
OrganizationNotificationTemplatesApprovalList,
OrganizationInstanceGroupsList,
OrganizationGalaxyCredentialsList,
OrganizationAccessList,
OrganizationObjectRolesList,
)
@@ -1355,6 +1356,13 @@ class CredentialDetail(RetrieveUpdateDestroyAPIView):
model = models.Credential
serializer_class = serializers.CredentialSerializer
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
if instance.managed_by_tower:
raise PermissionDenied(detail=_("Deletion not allowed for managed credentials"))
return super(CredentialDetail, self).destroy(request, *args, **kwargs)
class CredentialActivityStreamList(SubListAPIView):

View File

@@ -22,7 +22,7 @@ from awx.api.generics import (
)
logger = logging.getLogger('awx.main.analytics')
logger = logging.getLogger('awx.analytics')
class MetricsView(APIView):

View File

@@ -7,6 +7,7 @@ import logging
# Django
from django.db.models import Count
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# AWX
from awx.main.models import (
@@ -20,7 +21,8 @@ from awx.main.models import (
Role,
User,
Team,
InstanceGroup
InstanceGroup,
Credential
)
from awx.api.generics import (
ListCreateAPIView,
@@ -42,7 +44,8 @@ from awx.api.serializers import (
RoleSerializer,
NotificationTemplateSerializer,
InstanceGroupSerializer,
ProjectSerializer, JobTemplateSerializer, WorkflowJobTemplateSerializer
ProjectSerializer, JobTemplateSerializer, WorkflowJobTemplateSerializer,
CredentialSerializer
)
from awx.api.views.mixin import (
RelatedJobsPreventDeleteMixin,
@@ -214,6 +217,20 @@ class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
relationship = 'instance_groups'
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
model = Credential
serializer_class = CredentialSerializer
parent_model = Organization
relationship = 'galaxy_credentials'
def is_valid_relation(self, parent, sub, created=False):
if sub.kind != 'galaxy_api_token':
return {'msg': _(
f"Credential must be a Galaxy credential, not {sub.credential_type.name}."
)}
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists's

View File

@@ -21,6 +21,7 @@ import requests
from awx.api.generics import APIView
from awx.conf.registry import settings_registry
from awx.main.analytics import all_collectors
from awx.main.ha import is_ha_environment
from awx.main.utils import (
get_awx_version,
@@ -252,6 +253,7 @@ class ApiV2ConfigView(APIView):
ansible_version=get_ansible_version(),
eula=render_to_string("eula.md") if license_data.get('license_type', 'UNLICENSED') != 'open' else '',
analytics_status=pendo_state,
analytics_collectors=all_collectors(),
become_methods=PRIVILEGE_ESCALATION_METHODS,
)

View File

@@ -17,6 +17,8 @@ from django.utils.functional import cached_property
# Django REST Framework
from rest_framework.fields import empty, SkipField
import cachetools
# Tower
from awx.main.utils import encrypt_field, decrypt_field
from awx.conf import settings_registry
@@ -28,6 +30,8 @@ from awx.conf.migrations._reencrypt import decrypt_field as old_decrypt_field
logger = logging.getLogger('awx.conf.settings')
SETTING_MEMORY_TTL = 5 if 'callback_receiver' in ' '.join(sys.argv) else 0
# Store a special value to indicate when a setting is not set in the database.
SETTING_CACHE_NOTSET = '___notset___'
@@ -406,6 +410,7 @@ class SettingsWrapper(UserSettingsHolder):
def SETTINGS_MODULE(self):
return self._get_default('SETTINGS_MODULE')
@cachetools.cached(cache=cachetools.TTLCache(maxsize=2048, ttl=SETTING_MEMORY_TTL))
def __getattr__(self, name):
value = empty
if name in self.all_supported_settings:
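A self-contained sketch of the cachetools pattern applied above: memoize lookups in a TTL cache so hot settings are served from process memory. Note the diff enables a 5-second TTL only in the callback receiver; elsewhere SETTING_MEMORY_TTL is 0, which makes entries expire immediately and effectively disables the cache. `load_setting` is hypothetical:

```python
import time
import cachetools

@cachetools.cached(cache=cachetools.TTLCache(maxsize=2048, ttl=5))
def load_setting(name):
    print('cache miss:', name)  # stands in for a database read
    return name.lower()

load_setting('AWX_TASK_ENV')  # miss: computed and cached
load_setting('AWX_TASK_ENV')  # hit: served from the TTL cache
time.sleep(5)
load_setting('AWX_TASK_ENV')  # miss again once the entry expires
```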

View File

@@ -1103,11 +1103,6 @@ class CredentialTypeAccess(BaseAccess):
def can_use(self, obj):
return True
def get_method_capability(self, method, obj, parent_obj):
if obj.managed_by_tower:
return False
return super(CredentialTypeAccess, self).get_method_capability(method, obj, parent_obj)
def filtered_queryset(self):
return self.model.objects.all()
@@ -1182,6 +1177,8 @@ class CredentialAccess(BaseAccess):
def get_user_capabilities(self, obj, **kwargs):
user_capabilities = super(CredentialAccess, self).get_user_capabilities(obj, **kwargs)
user_capabilities['use'] = self.can_use(obj)
if getattr(obj, 'managed_by_tower', False) is True:
user_capabilities['edit'] = user_capabilities['delete'] = False
return user_capabilities
@@ -2753,6 +2750,9 @@ class WorkflowApprovalTemplateAccess(BaseAccess):
else:
return (self.check_related('workflow_approval_template', UnifiedJobTemplate, role_field='admin_role'))
def can_change(self, obj, data):
return self.user.can_access(WorkflowJobTemplate, 'change', obj.workflow_job_template, data={})
def can_start(self, obj, validate_license=False):
# for copying WFJTs that contain approval nodes
if self.user.is_superuser:

View File

@@ -1 +1 @@
from .core import register, gather, ship, table_version # noqa
from .core import all_collectors, expensive_collectors, register, gather, ship # noqa

View File

@@ -20,7 +20,7 @@ from django.conf import settings
BROADCAST_WEBSOCKET_REDIS_KEY_NAME = 'broadcast_websocket_stats'
logger = logging.getLogger('awx.main.analytics.broadcast_websocket')
logger = logging.getLogger('awx.analytics.broadcast_websocket')
def dt_to_seconds(dt):

View File

@@ -1,3 +1,4 @@
import io
import os
import os.path
import platform
@@ -6,13 +7,14 @@ from django.db import connection
from django.db.models import Count
from django.conf import settings
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
from awx.conf.license import get_license
from awx.main.utils import (get_awx_version, get_ansible_version,
get_custom_venv_choices, camelcase_to_underscore)
from awx.main import models
from django.contrib.sessions.models import Session
from awx.main.analytics import register, table_version
from awx.main.analytics import register
'''
This module is used to define metrics collected by awx.main.analytics.gather()
@@ -31,8 +33,8 @@ data _since_ the last report date - i.e., new data in the last 24 hours)
'''
@register('config', '1.1')
def config(since):
@register('config', '1.1', description=_('General platform configuration.'))
def config(since, **kwargs):
license_info = get_license(show_key=False)
install_type = 'traditional'
if os.environ.get('container') == 'oci':
@@ -63,8 +65,8 @@ def config(since):
}
@register('counts', '1.0')
def counts(since):
@register('counts', '1.0', description=_('Counts of objects such as organizations, inventories, and projects'))
def counts(since, **kwargs):
counts = {}
for cls in (models.Organization, models.Team, models.User,
models.Inventory, models.Credential, models.Project,
@@ -98,8 +100,8 @@ def counts(since):
return counts
@register('org_counts', '1.0')
def org_counts(since):
@register('org_counts', '1.0', description=_('Counts of users and teams by organization'))
def org_counts(since, **kwargs):
counts = {}
for org in models.Organization.objects.annotate(num_users=Count('member_role__members', distinct=True),
num_teams=Count('teams', distinct=True)).values('name', 'id', 'num_users', 'num_teams'):
@@ -110,8 +112,8 @@ def org_counts(since):
return counts
@register('cred_type_counts', '1.0')
def cred_type_counts(since):
@register('cred_type_counts', '1.0', description=_('Counts of credentials by credential type'))
def cred_type_counts(since, **kwargs):
counts = {}
for cred_type in models.CredentialType.objects.annotate(num_credentials=Count(
'credentials', distinct=True)).values('name', 'id', 'managed_by_tower', 'num_credentials'):
@@ -122,8 +124,8 @@ def cred_type_counts(since):
return counts
@register('inventory_counts', '1.2')
def inventory_counts(since):
@register('inventory_counts', '1.2', description=_('Inventories, their inventory sources, and host counts'))
def inventory_counts(since, **kwargs):
counts = {}
for inv in models.Inventory.objects.filter(kind='').annotate(num_sources=Count('inventory_sources', distinct=True),
num_hosts=Count('hosts', distinct=True)).only('id', 'name', 'kind'):
@@ -147,8 +149,8 @@ def inventory_counts(since):
return counts
@register('projects_by_scm_type', '1.0')
def projects_by_scm_type(since):
@register('projects_by_scm_type', '1.0', description=_('Counts of projects by source control type'))
def projects_by_scm_type(since, **kwargs):
counts = dict(
(t[0] or 'manual', 0)
for t in models.Project.SCM_TYPE_CHOICES
@@ -166,8 +168,8 @@ def _get_isolated_datetime(last_check):
return last_check
@register('instance_info', '1.0')
def instance_info(since, include_hostnames=False):
@register('instance_info', '1.0', description=_('Cluster topology and capacity'))
def instance_info(since, include_hostnames=False, **kwargs):
info = {}
instances = models.Instance.objects.values_list('hostname').values(
'uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'hostname', 'last_isolated_check', 'enabled')
@@ -192,8 +194,8 @@ def instance_info(since, include_hostnames=False):
return info
@register('job_counts', '1.0')
def job_counts(since):
@register('job_counts', '1.0', description=_('Counts of jobs by status'))
def job_counts(since, **kwargs):
counts = {}
counts['total_jobs'] = models.UnifiedJob.objects.exclude(launch_type='sync').count()
counts['status'] = dict(models.UnifiedJob.objects.exclude(launch_type='sync').values_list('status').annotate(Count('status')).order_by())
@@ -202,8 +204,8 @@ def job_counts(since):
return counts
@register('job_instance_counts', '1.0')
def job_instance_counts(since):
@register('job_instance_counts', '1.0', description=_('Counts of jobs by execution node'))
def job_instance_counts(since, **kwargs):
counts = {}
job_types = models.UnifiedJob.objects.exclude(launch_type='sync').values_list(
'execution_node', 'launch_type').annotate(job_launch_type=Count('launch_type')).order_by()
@@ -217,30 +219,71 @@ def job_instance_counts(since):
return counts
@register('query_info', '1.0')
def query_info(since, collection_type):
@register('query_info', '1.0', description=_('Metadata about the analytics collected'))
def query_info(since, collection_type, until, **kwargs):
query_info = {}
query_info['last_run'] = str(since)
query_info['current_time'] = str(now())
query_info['current_time'] = str(until)
query_info['collection_type'] = collection_type
return query_info
# Copies Job Events from db to a .csv to be shipped
@table_version('events_table.csv', '1.1')
@table_version('unified_jobs_table.csv', '1.0')
@table_version('unified_job_template_table.csv', '1.0')
@table_version('workflow_job_node_table.csv', '1.0')
@table_version('workflow_job_template_node_table.csv', '1.0')
def copy_tables(since, full_path, subset=None):
def _copy_table(table, query, path):
file_path = os.path.join(path, table + '_table.csv')
file = open(file_path, 'w', encoding='utf-8')
with connection.cursor() as cursor:
cursor.copy_expert(query, file)
file.close()
return file_path
'''
The event table can be *very* large, and we have a 100MB upload limit.
Split large table dumps at dump time into a series of files.
'''
MAX_TABLE_SIZE = 200 * 1048576
class FileSplitter(io.StringIO):
def __init__(self, filespec=None, *args, **kwargs):
self.filespec = filespec
self.files = []
self.currentfile = None
self.header = None
self.counter = 0
self.cycle_file()
def cycle_file(self):
if self.currentfile:
self.currentfile.close()
self.counter = 0
fname = '{}_split{}'.format(self.filespec, len(self.files))
self.currentfile = open(fname, 'w', encoding='utf-8')
self.files.append(fname)
if self.header:
self.currentfile.write('{}\n'.format(self.header))
def file_list(self):
self.currentfile.close()
# Check for an empty dump
if len(self.header) + 1 == self.counter:
os.remove(self.files[-1])
self.files = self.files[:-1]
# If we only have one file, remove the suffix
if len(self.files) == 1:
os.rename(self.files[0],self.files[0].replace('_split0',''))
return self.files
def write(self, s):
if not self.header:
self.header = s[0:s.index('\n')]
self.counter += self.currentfile.write(s)
if self.counter >= MAX_TABLE_SIZE:
self.cycle_file()
def _copy_table(table, query, path):
file_path = os.path.join(path, table + '_table.csv')
file = FileSplitter(filespec=file_path)
with connection.cursor() as cursor:
cursor.copy_expert(query, file)
return file.file_list()
@register('events_table', '1.1', format='csv', description=_('Automation task records'), expensive=True)
def events_table(since, full_path, until, **kwargs):
events_query = '''COPY (SELECT main_jobevent.id,
main_jobevent.created,
main_jobevent.uuid,
@@ -262,16 +305,21 @@ def copy_tables(since, full_path, subset=None):
main_jobevent.event_data::json->'res'->'warnings' AS warnings,
main_jobevent.event_data::json->'res'->'deprecations' AS deprecations
FROM main_jobevent
WHERE main_jobevent.created > {}
ORDER BY main_jobevent.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
if not subset or 'events' in subset:
_copy_table(table='events', query=events_query, path=full_path)
WHERE (main_jobevent.created > '{}' AND main_jobevent.created <= '{}')
ORDER BY main_jobevent.id ASC) TO STDOUT WITH CSV HEADER
'''.format(since.isoformat(),until.isoformat())
return _copy_table(table='events', query=events_query, path=full_path)
@register('unified_jobs_table', '1.1', format='csv', description=_('Data on jobs run'), expensive=True)
def unified_jobs_table(since, full_path, until, **kwargs):
unified_job_query = '''COPY (SELECT main_unifiedjob.id,
main_unifiedjob.polymorphic_ctype_id,
django_content_type.model,
main_unifiedjob.organization_id,
main_organization.name as organization_name,
main_job.inventory_id,
main_inventory.name as inventory_name,
main_unifiedjob.created,
main_unifiedjob.name,
main_unifiedjob.unified_job_template_id,
@@ -289,13 +337,19 @@ def copy_tables(since, full_path, subset=None):
main_unifiedjob.instance_group_id
FROM main_unifiedjob
JOIN django_content_type ON main_unifiedjob.polymorphic_ctype_id = django_content_type.id
LEFT JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id
LEFT JOIN main_inventory ON main_job.inventory_id = main_inventory.id
LEFT JOIN main_organization ON main_organization.id = main_unifiedjob.organization_id
WHERE (main_unifiedjob.created > {0} OR main_unifiedjob.finished > {0})
WHERE ((main_unifiedjob.created > '{0}' AND main_unifiedjob.created <= '{1}')
OR (main_unifiedjob.finished > '{0}' AND main_unifiedjob.finished <= '{1}'))
AND main_unifiedjob.launch_type != 'sync'
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
if not subset or 'unified_jobs' in subset:
_copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER
'''.format(since.isoformat(),until.isoformat())
return _copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
@register('unified_job_template_table', '1.0', format='csv', description=_('Data on job templates'))
def unified_job_template_table(since, full_path, **kwargs):
unified_job_template_query = '''COPY (SELECT main_unifiedjobtemplate.id,
main_unifiedjobtemplate.polymorphic_ctype_id,
django_content_type.model,
@@ -314,9 +368,11 @@ def copy_tables(since, full_path, subset=None):
FROM main_unifiedjobtemplate, django_content_type
WHERE main_unifiedjobtemplate.polymorphic_ctype_id = django_content_type.id
ORDER BY main_unifiedjobtemplate.id ASC) TO STDOUT WITH CSV HEADER'''
if not subset or 'unified_job_template' in subset:
_copy_table(table='unified_job_template', query=unified_job_template_query, path=full_path)
return _copy_table(table='unified_job_template', query=unified_job_template_query, path=full_path)
@register('workflow_job_node_table', '1.0', format='csv', description=_('Data on workflow runs'), expensive=True)
def workflow_job_node_table(since, full_path, until, **kwargs):
workflow_job_node_query = '''COPY (SELECT main_workflowjobnode.id,
main_workflowjobnode.created,
main_workflowjobnode.modified,
@@ -345,11 +401,14 @@ def copy_tables(since, full_path, subset=None):
FROM main_workflowjobnode_always_nodes
GROUP BY from_workflowjobnode_id
) always_nodes ON main_workflowjobnode.id = always_nodes.from_workflowjobnode_id
WHERE main_workflowjobnode.modified > {}
ORDER BY main_workflowjobnode.id ASC) TO STDOUT WITH CSV HEADER'''.format(since.strftime("'%Y-%m-%d %H:%M:%S'"))
if not subset or 'workflow_job_node' in subset:
_copy_table(table='workflow_job_node', query=workflow_job_node_query, path=full_path)
WHERE (main_workflowjobnode.modified > '{}' AND main_workflowjobnode.modified <= '{}')
ORDER BY main_workflowjobnode.id ASC) TO STDOUT WITH CSV HEADER
'''.format(since.isoformat(),until.isoformat())
return _copy_table(table='workflow_job_node', query=workflow_job_node_query, path=full_path)
@register('workflow_job_template_node_table', '1.0', format='csv', description=_('Data on workflows'))
def workflow_job_template_node_table(since, full_path, **kwargs):
workflow_job_template_node_query = '''COPY (SELECT main_workflowjobtemplatenode.id,
main_workflowjobtemplatenode.created,
main_workflowjobtemplatenode.modified,
@@ -377,7 +436,4 @@ def copy_tables(since, full_path, subset=None):
GROUP BY from_workflowjobtemplatenode_id
) always_nodes ON main_workflowjobtemplatenode.id = always_nodes.from_workflowjobtemplatenode_id
ORDER BY main_workflowjobtemplatenode.id ASC) TO STDOUT WITH CSV HEADER'''
if not subset or 'workflow_job_template_node' in subset:
_copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path)
return
return _copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path)
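With the reworked decorator, each collector declares a description, an output format, and whether it is expensive, and must accept `**kwargs` so new arguments such as `until` can be passed through. A hedged sketch of adding a JSON collector; the key and query are illustrative, not part of the shipped collectors:

```python
from django.utils.translation import ugettext_lazy as _

from awx.main import models
from awx.main.analytics import register


@register('pending_approvals', '1.0',
          description=_('Count of workflow approvals awaiting action'))
def pending_approvals(since, **kwargs):
    # JSON collectors return a JSON-serializable object; gather() writes it
    # to <key>.json and records the version in manifest.json.
    return {
        'pending': models.WorkflowApproval.objects.filter(status='pending').count(),
    }
```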

View File

@@ -14,17 +14,13 @@ from rest_framework.exceptions import PermissionDenied
from awx.conf.license import get_license
from awx.main.models import Job
from awx.main.access import access_registry
from awx.main.models.ha import TowerAnalyticsState
from awx.main.utils import get_awx_http_client_headers, set_environ
__all__ = ['register', 'gather', 'ship', 'table_version']
__all__ = ['register', 'gather', 'ship']
logger = logging.getLogger('awx.main.analytics')
manifest = dict()
def _valid_license():
try:
@@ -37,11 +33,38 @@ def _valid_license():
return True
def register(key, version):
def all_collectors():
from awx.main.analytics import collectors
collector_dict = {}
module = collectors
for name, func in inspect.getmembers(module):
if inspect.isfunction(func) and hasattr(func, '__awx_analytics_key__'):
key = func.__awx_analytics_key__
desc = func.__awx_analytics_description__ or ''
version = func.__awx_analytics_version__
collector_dict[key] = { 'name': key, 'version': version, 'description': desc}
return collector_dict
def expensive_collectors():
from awx.main.analytics import collectors
ret = []
module = collectors
for name, func in inspect.getmembers(module):
if inspect.isfunction(func) and hasattr(func, '__awx_analytics_key__') and func.__awx_expensive__:
ret.append(func.__awx_analytics_key__)
return ret
def register(key, version, description=None, format='json', expensive=False):
"""
A decorator used to register a function as a metric collector.
Decorated functions should return JSON-serializable objects.
Decorated functions should do the following based on format:
- json: return JSON-serializable objects.
- csv: write CSV data to a filename named 'key'
@register('projects_by_scm_type', 1)
def projects_by_scm_type():
@@ -51,100 +74,153 @@ def register(key, version):
def decorate(f):
f.__awx_analytics_key__ = key
f.__awx_analytics_version__ = version
f.__awx_analytics_description__ = description
f.__awx_analytics_type__ = format
f.__awx_expensive__ = expensive
return f
return decorate
def table_version(file_name, version):
global manifest
manifest[file_name] = version
def decorate(f):
return f
return decorate
def gather(dest=None, module=None, collection_type='scheduled'):
def gather(dest=None, module=None, subset = None, since = None, until = now(), collection_type='scheduled'):
"""
Gather all defined metrics and write them as JSON files in a .tgz
:param dest: the (optional) absolute path to write a compressed tarball
:pararm module: the module to search for registered analytic collector
:param module: the module to search for registered analytic collector
functions; defaults to awx.main.analytics.collectors
"""
def _write_manifest(destdir, manifest):
path = os.path.join(destdir, 'manifest.json')
with open(path, 'w', encoding='utf-8') as f:
try:
json.dump(manifest, f)
except Exception:
f.close()
os.remove(f.name)
logger.exception("Could not generate manifest.json")
run_now = now()
state = TowerAnalyticsState.get_solo()
last_run = state.last_run
logger.debug("Last analytics run was: {}".format(last_run))
last_run = since or settings.AUTOMATION_ANALYTICS_LAST_GATHER or (now() - timedelta(weeks=4))
logger.debug("Last analytics run was: {}".format(settings.AUTOMATION_ANALYTICS_LAST_GATHER))
max_interval = now() - timedelta(weeks=4)
if last_run < max_interval or not last_run:
last_run = max_interval
if _valid_license() is False:
logger.exception("Invalid License provided, or No License Provided")
return "Error: Invalid License provided, or No License Provided"
return None
if collection_type != 'dry-run' and not settings.INSIGHTS_TRACKING_STATE:
logger.error("Automation Analytics not enabled. Use --dry-run to gather locally without sending.")
return
return None
if module is None:
collector_list = []
if module:
collector_module = module
else:
from awx.main.analytics import collectors
module = collectors
collector_module = collectors
for name, func in inspect.getmembers(collector_module):
if (
inspect.isfunction(func) and
hasattr(func, '__awx_analytics_key__') and
(not subset or name in subset)
):
collector_list.append((name, func))
manifest = dict()
dest = dest or tempfile.mkdtemp(prefix='awx_analytics')
for name, func in inspect.getmembers(module):
if inspect.isfunction(func) and hasattr(func, '__awx_analytics_key__'):
gather_dir = os.path.join(dest, 'stage')
os.mkdir(gather_dir, 0o700)
num_splits = 1
for name, func in collector_list:
if func.__awx_analytics_type__ == 'json':
key = func.__awx_analytics_key__
manifest['{}.json'.format(key)] = func.__awx_analytics_version__
path = '{}.json'.format(os.path.join(dest, key))
path = '{}.json'.format(os.path.join(gather_dir, key))
with open(path, 'w', encoding='utf-8') as f:
try:
if func.__name__ == 'query_info':
json.dump(func(last_run, collection_type=collection_type), f)
else:
json.dump(func(last_run), f)
json.dump(func(last_run, collection_type=collection_type, until=until), f)
manifest['{}.json'.format(key)] = func.__awx_analytics_version__
except Exception:
logger.exception("Could not generate metric {}.json".format(key))
f.close()
os.remove(f.name)
path = os.path.join(dest, 'manifest.json')
with open(path, 'w', encoding='utf-8') as f:
try:
json.dump(manifest, f)
except Exception:
logger.exception("Could not generate manifest.json")
f.close()
os.remove(f.name)
elif func.__awx_analytics_type__ == 'csv':
key = func.__awx_analytics_key__
try:
files = func(last_run, full_path=gather_dir, until=until)
if files:
manifest['{}.csv'.format(key)] = func.__awx_analytics_version__
if len(files) > num_splits:
num_splits = len(files)
except Exception:
logger.exception("Could not generate metric {}.csv".format(key))
try:
collectors.copy_tables(since=last_run, full_path=dest)
except Exception:
logger.exception("Could not copy tables")
# can't use isoformat() since it has colons, which GNU tar doesn't like
tarname = '_'.join([
settings.SYSTEM_UUID,
run_now.strftime('%Y-%m-%d-%H%M%S%z')
])
try:
tgz = shutil.make_archive(
os.path.join(os.path.dirname(dest), tarname),
'gztar',
dest
)
return tgz
except Exception:
logger.exception("Failed to write analytics archive file")
finally:
if not manifest:
# No data was collected
logger.warning("No data from {} to {}".format(last_run, until))
shutil.rmtree(dest)
return None
# Always include config.json if we're using our collectors
if 'config.json' not in manifest.keys() and not module:
from awx.main.analytics import collectors
config = collectors.config
path = '{}.json'.format(os.path.join(gather_dir, config.__awx_analytics_key__))
with open(path, 'w', encoding='utf-8') as f:
try:
json.dump(collectors.config(last_run), f)
manifest['config.json'] = config.__awx_analytics_version__
except Exception:
logger.exception("Could not generate metric {}.json".format(key))
f.close()
os.remove(f.name)
shutil.rmtree(dest)
return None
stage_dirs = [gather_dir]
if num_splits > 1:
for i in range(0, num_splits):
split_path = os.path.join(dest, 'split{}'.format(i))
os.mkdir(split_path, 0o700)
filtered_manifest = {}
shutil.copy(os.path.join(gather_dir, 'config.json'), split_path)
filtered_manifest['config.json'] = manifest['config.json']
suffix = '_split{}'.format(i)
for file in os.listdir(gather_dir):
if file.endswith(suffix):
old_file = os.path.join(gather_dir, file)
new_filename = file.replace(suffix, '')
new_file = os.path.join(split_path, new_filename)
shutil.move(old_file, new_file)
filtered_manifest[new_filename] = manifest[new_filename]
_write_manifest(split_path, filtered_manifest)
stage_dirs.append(split_path)
for item in list(manifest.keys()):
if not os.path.exists(os.path.join(gather_dir, item)):
manifest.pop(item)
_write_manifest(gather_dir, manifest)
tarfiles = []
try:
for i in range(0, len(stage_dirs)):
stage_dir = stage_dirs[i]
# can't use isoformat() since it has colons, which GNU tar doesn't like
tarname = '_'.join([
settings.SYSTEM_UUID,
until.strftime('%Y-%m-%d-%H%M%S%z'),
str(i)
])
tgz = shutil.make_archive(
os.path.join(os.path.dirname(dest), tarname),
'gztar',
stage_dir
)
tarfiles.append(tgz)
except Exception:
shutil.rmtree(stage_dir, ignore_errors = True)
logger.exception("Failed to write analytics archive file")
finally:
shutil.rmtree(dest, ignore_errors = True)
return tarfiles
def ship(path):
@@ -154,6 +230,9 @@ def ship(path):
if not path:
logger.error('Automation Analytics TAR not found')
return
if not os.path.exists(path):
logger.error('Automation Analytics TAR {} not found'.format(path))
return
if "Error:" in str(path):
return
try:
@@ -180,13 +259,11 @@ def ship(path):
auth=(rh_user, rh_password),
headers=s.headers,
timeout=(31, 31))
if response.status_code != 202:
# Accept 2XX status_codes
if response.status_code >= 300:
return logger.exception('Upload failed with status {}, {}'.format(response.status_code,
response.text))
run_now = now()
state = TowerAnalyticsState.get_solo()
state.last_run = run_now
state.save()
finally:
# cleanup tar.gz
os.remove(path)
if os.path.exists(path):
os.remove(path)
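A hedged usage sketch of the reworked flow, following the signatures above: `gather()` now stages collectors into split directories and returns a list of gzipped tarballs (or None when licensing or configuration blocks collection), and `ship()` uploads and then deletes each one:

```python
from awx.main.analytics import gather, ship

# Collect two cheap collectors locally without requiring Insights to be
# enabled; 'config' and 'counts' are collector function names.
tarballs = gather(subset=['config', 'counts'], collection_type='dry-run')

for tgz in tarballs or []:  # gather() may return None
    ship(tgz)               # uploads, then removes the tarball from disk
```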

View File

@@ -2,7 +2,6 @@
import json
import logging
import os
from distutils.version import LooseVersion as Version
# Django
from django.utils.translation import ugettext_lazy as _
@@ -436,93 +435,12 @@ register(
category_slug='jobs',
)
register(
'PRIMARY_GALAXY_URL',
field_class=fields.URLField,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server URL'),
help_text=_(
'For organizations that run their own Galaxy service, this gives the option to specify a '
'host as the primary galaxy server. Requirements will be downloaded from the primary if the '
'specific role or collection is available there. If the content is not avilable in the primary, '
'or if this field is left blank, it will default to galaxy.ansible.com.'
),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_USERNAME',
field_class=fields.CharField,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server Username'),
help_text=_('(This setting is deprecated and will be removed in a future release) '
'For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The username to use for basic authentication against the Galaxy instance, '
'this is mutually exclusive with PRIMARY_GALAXY_TOKEN.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_PASSWORD',
field_class=fields.CharField,
encrypted=True,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server Password'),
help_text=_('(This setting is deprecated and will be removed in a future release) '
'For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The password to use for basic authentication against the Galaxy instance, '
'this is mutually exclusive with PRIMARY_GALAXY_TOKEN.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_TOKEN',
field_class=fields.CharField,
encrypted=True,
required=False,
allow_blank=True,
label=_('Primary Galaxy Server Token'),
help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The token to use for connecting with the Galaxy instance, '
'this is mutually exclusive with corresponding username and password settings.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PRIMARY_GALAXY_AUTH_URL',
field_class=fields.CharField,
required=False,
allow_blank=True,
label=_('Primary Galaxy Authentication URL'),
help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
'The token_endpoint of a Keycloak server.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'PUBLIC_GALAXY_ENABLED',
field_class=fields.BooleanField,
default=True,
label=_('Allow Access to Public Galaxy'),
help_text=_('Allow or deny access to the public Ansible Galaxy during project updates.'),
category=_('Jobs'),
category_slug='jobs'
)
register(
'GALAXY_IGNORE_CERTS',
field_class=fields.BooleanField,
default=False,
label=_('Ignore Ansible Galaxy SSL Certificate Verification'),
help_text=_('If set to true, certificate validation will not be done when'
help_text=_('If set to true, certificate validation will not be done when '
'installing content from any Galaxy server.'),
category=_('Jobs'),
category_slug='jobs'
@@ -856,84 +774,4 @@ def logging_validate(serializer, attrs):
return attrs
def galaxy_validate(serializer, attrs):
"""Ansible Galaxy config options have mutual exclusivity rules, these rules
are enforced here on serializer validation so that users will not be able
to save settings which obviously break all project updates.
"""
prefix = 'PRIMARY_GALAXY_'
errors = {}
def _new_value(setting_name):
if setting_name in attrs:
return attrs[setting_name]
elif not serializer.instance:
return ''
return getattr(serializer.instance, setting_name, '')
if not _new_value('PRIMARY_GALAXY_URL'):
if _new_value('PUBLIC_GALAXY_ENABLED') is False:
msg = _('A URL for Primary Galaxy must be defined before disabling public Galaxy.')
# put error in both keys because UI has trouble with errors in toggles
for key in ('PRIMARY_GALAXY_URL', 'PUBLIC_GALAXY_ENABLED'):
errors.setdefault(key, [])
errors[key].append(msg)
raise serializers.ValidationError(errors)
from awx.main.constants import GALAXY_SERVER_FIELDS
if not any('{}{}'.format(prefix, subfield.upper()) in attrs for subfield in GALAXY_SERVER_FIELDS):
return attrs
galaxy_data = {}
for subfield in GALAXY_SERVER_FIELDS:
galaxy_data[subfield] = _new_value('{}{}'.format(prefix, subfield.upper()))
if not galaxy_data['url']:
for k, v in galaxy_data.items():
if v:
setting_name = '{}{}'.format(prefix, k.upper())
errors.setdefault(setting_name, [])
errors[setting_name].append(_(
'Cannot provide field if PRIMARY_GALAXY_URL is not set.'
))
for k in GALAXY_SERVER_FIELDS:
if galaxy_data[k]:
setting_name = '{}{}'.format(prefix, k.upper())
if (not serializer.instance) or (not getattr(serializer.instance, setting_name, '')):
# new auth is applied, so check if compatible with version
from awx.main.utils import get_ansible_version
current_version = get_ansible_version()
min_version = '2.9'
if Version(current_version) < Version(min_version):
errors.setdefault(setting_name, [])
errors[setting_name].append(_(
'Galaxy server settings are not available until Ansible {min_version}, '
'you are running {current_version}.'
).format(min_version=min_version, current_version=current_version))
if (galaxy_data['password'] or galaxy_data['username']) and (galaxy_data['token'] or galaxy_data['auth_url']):
for k in ('password', 'username', 'token', 'auth_url'):
setting_name = '{}{}'.format(prefix, k.upper())
if setting_name in attrs:
errors.setdefault(setting_name, [])
errors[setting_name].append(_(
'Setting Galaxy token and authentication URL is mutually exclusive with username and password.'
))
if bool(galaxy_data['username']) != bool(galaxy_data['password']):
msg = _('If authenticating via username and password, both must be provided.')
for k in ('username', 'password'):
setting_name = '{}{}'.format(prefix, k.upper())
errors.setdefault(setting_name, [])
errors[setting_name].append(msg)
if bool(galaxy_data['token']) != bool(galaxy_data['auth_url']):
msg = _('If authenticating via token, both token and authentication URL must be provided.')
for k in ('token', 'auth_url'):
setting_name = '{}{}'.format(prefix, k.upper())
errors.setdefault(setting_name, [])
errors[setting_name].append(msg)
if errors:
raise serializers.ValidationError(errors)
return attrs
register_validate('logging', logging_validate)
register_validate('jobs', galaxy_validate)
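For illustration, a minimal standalone sketch of the pairing rules enforced by galaxy_validate above, operating on a plain dict instead of a DRF serializer (the function name and dict interface are assumptions for the sketch, not AWX's API):

    def check_galaxy_settings(settings):
        # sketch: same mutual-exclusivity rules as galaxy_validate, on a plain dict
        errors = {}
        url = settings.get('PRIMARY_GALAXY_URL', '')
        token = settings.get('PRIMARY_GALAXY_TOKEN', '')
        auth_url = settings.get('PRIMARY_GALAXY_AUTH_URL', '')
        username = settings.get('PRIMARY_GALAXY_USERNAME', '')
        password = settings.get('PRIMARY_GALAXY_PASSWORD', '')
        if not url and settings.get('PUBLIC_GALAXY_ENABLED', True) is False:
            errors['PUBLIC_GALAXY_ENABLED'] = [
                'A URL for Primary Galaxy must be defined before disabling public Galaxy.']
        if (username or password) and (token or auth_url):
            errors['PRIMARY_GALAXY_TOKEN'] = [
                'Token and authentication URL are mutually exclusive with username and password.']
        if bool(username) != bool(password):
            errors['PRIMARY_GALAXY_USERNAME'] = [
                'If authenticating via username and password, both must be provided.']
        if bool(token) != bool(auth_url):
            errors['PRIMARY_GALAXY_AUTH_URL'] = [
                'If authenticating via token, both token and authentication URL must be provided.']
        return errors

    # a username without a password trips the pairing rule
    assert 'PRIMARY_GALAXY_USERNAME' in check_galaxy_settings(
        {'PRIMARY_GALAXY_URL': 'https://galaxy.example.com', 'PRIMARY_GALAXY_USERNAME': 'admin'})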

View File

@@ -50,7 +50,3 @@ LOGGER_BLOCKLIST = (
# loggers that may be called while getting logging settings
'awx.conf'
)
# these correspond to both AWX and Ansible settings to keep naming consistent
# for instance, settings.PRIMARY_GALAXY_AUTH_URL vs env var ANSIBLE_GALAXY_SERVER_FOO_AUTH_URL
GALAXY_SERVER_FIELDS = ('url', 'username', 'password', 'token', 'auth_url')

View File

@@ -1,5 +1,3 @@
import collections
import functools
import json
import logging
import time
@@ -14,40 +12,12 @@ from django.contrib.auth.models import User
from channels.generic.websocket import AsyncJsonWebsocketConsumer
from channels.layers import get_channel_layer
from channels.db import database_sync_to_async
from channels_redis.core import RedisChannelLayer
logger = logging.getLogger('awx.main.consumers')
XRF_KEY = '_auth_user_xrf'
class BoundedQueue(asyncio.Queue):
def put_nowait(self, item):
if self.full():
# dispose the oldest item
# if we actually get into this code block, it likely means that
# this specific consumer has stopped reading
# unfortunately, channels_redis will just happily continue to
# queue messages specific to their channel until the heat death
# of the sun: https://github.com/django/channels_redis/issues/212
# this isn't a huge deal for browser clients that disconnect,
# but it *does* cause a problem for our global broadcast topic
# that's used to broadcast messages to peers in a cluster
# if we get into this code block, it's better to drop messages
# than to continue to malloc() forever
self.get_nowait()
return super(BoundedQueue, self).put_nowait(item)
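The drop-oldest behavior can be exercised on its own; a small self-contained sketch (standard library only) showing that a full BoundedQueue discards its oldest entries rather than raising QueueFull:

    import asyncio

    class BoundedQueue(asyncio.Queue):
        def put_nowait(self, item):
            if self.full():
                self.get_nowait()  # dispose the oldest item instead of raising QueueFull
            return super().put_nowait(item)

    async def main():
        q = BoundedQueue(maxsize=2)
        for i in range(5):
            q.put_nowait(i)
        print([q.get_nowait() for _ in range(q.qsize())])  # -> [3, 4]: only the newest survive

    asyncio.run(main())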
class ExpiringRedisChannelLayer(RedisChannelLayer):
def __init__(self, *args, **kw):
super(ExpiringRedisChannelLayer, self).__init__(*args, **kw)
self.receive_buffer = collections.defaultdict(
functools.partial(BoundedQueue, self.capacity)
)
class WebsocketSecretAuthHelper:
"""
Middlewareish for websockets to verify node websocket broadcast interconnect.

View File

@@ -40,6 +40,13 @@ base_inputs = {
'multiline': False,
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication')
}, {
'id': 'default_auth_path',
'label': _('Path to Approle Auth'),
'type': 'string',
'multiline': False,
'default': 'approle',
'help_text': _('The AppRole Authentication path to use if one isn\'t provided in the metadata when linking to an input field. Defaults to \'approle\'')
}
],
'metadata': [{
@@ -47,10 +54,11 @@ base_inputs = {
'label': _('Path to Secret'),
'type': 'string',
'help_text': _('The path to the secret stored in the secret backend e.g., /some/secret/')
},{
}, {
'id': 'auth_path',
'label': _('Path to Auth'),
'type': 'string',
'multiline': False,
'help_text': _('The path where the Authentication method is mounted e.g., approle')
}],
'required': ['url', 'secret_path'],
@@ -118,7 +126,9 @@ def handle_auth(**kwargs):
def approle_auth(**kwargs):
role_id = kwargs['role_id']
secret_id = kwargs['secret_id']
auth_path = kwargs.get('auth_path') or 'approle'
# we first try to use the 'auth_path' from the metadata
# if not found we try to fetch the 'default_auth_path' from inputs
auth_path = kwargs.get('auth_path') or kwargs['default_auth_path']
url = urljoin(kwargs['url'], 'v1')
cacert = kwargs.get('cacert', None)
@@ -152,7 +162,7 @@ def kv_backend(**kwargs):
sess = requests.Session()
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatability header for older installs of Hashicorp Vault
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
if api_version == 'v2':
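Two patterns from this file, reduced to a sketch: the auth-path fallback (the per-lookup metadata 'auth_path' wins, then the credential-level 'default_auth_path'), and sending both token headers so old and new Vault versions are satisfied. The URL and token values below are placeholders:

    import requests
    from urllib.parse import urljoin

    def approle_login_url(**kwargs):
        # metadata 'auth_path' wins; otherwise fall back to the input 'default_auth_path'
        auth_path = kwargs.get('auth_path') or kwargs.get('default_auth_path', 'approle')
        return urljoin(kwargs['url'], f'v1/auth/{auth_path}/login')

    sess = requests.Session()
    token = 's.placeholder'                            # not a real Vault token
    sess.headers['Authorization'] = f'Bearer {token}'  # modern Vault
    sess.headers['X-Vault-Token'] = token              # compatibility with older installs

    print(approle_login_url(url='https://vault.example.com/', default_auth_path='approle'))
    # -> https://vault.example.com/v1/auth/approle/login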

View File

@@ -2,6 +2,9 @@ import logging
import uuid
import json
from django.conf import settings
import redis
from awx.main.dispatch import get_local_queuename
from . import pg_bus_conn
@@ -21,7 +24,15 @@ class Control(object):
self.queuename = host or get_local_queuename()
def status(self, *args, **kwargs):
return self.control_with_reply('status', *args, **kwargs)
r = redis.Redis.from_url(settings.BROKER_URL)
if self.service == 'dispatcher':
stats = r.get(f'awx_{self.service}_statistics') or b''
return stats.decode('utf-8')
else:
workers = []
for key in r.keys('awx_callback_receiver_statistics_*'):
workers.append(r.get(key).decode('utf-8'))
return '\n'.join(workers)
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
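What status() does under the hood, as a sketch runnable against any reachable Redis (the URL is a placeholder; the keys are the ones written by the dispatcher and callback receiver workers later in this diff):

    import redis

    r = redis.Redis.from_url('redis://localhost:6379/0')
    dispatcher_stats = (r.get('awx_dispatcher_statistics') or b'').decode('utf-8')
    callback_stats = [
        r.get(key).decode('utf-8')
        for key in r.keys('awx_callback_receiver_statistics_*')
    ]
    print(dispatcher_stats)
    print('\n'.join(callback_stats))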

View File

@@ -5,6 +5,7 @@ import signal
import sys
import time
import traceback
from datetime import datetime
from uuid import uuid4
import collections
@@ -27,6 +28,12 @@ else:
logger = logging.getLogger('awx.main.dispatch')
class NoOpResultQueue(object):
def put(self, item):
pass
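NoOpResultQueue is a null object: it answers the same put() call as a real queue but discards everything, so the producing code never branches on whether results are tracked. A sketch of the swap, with a track_results flag standing in for track_managed_tasks below:

    from multiprocessing import Queue as MPQueue

    class NoOpResultQueue:
        def put(self, item):
            pass  # same interface as Queue.put(), silently discards

    def make_result_queue(track_results, size=10):
        # mirrors: MPQueue(queue_size) if self.track_managed_tasks else NoOpResultQueue()
        return MPQueue(size) if track_results else NoOpResultQueue()

    make_result_queue(False).put({'uuid': 'abc'})  # no-op, no error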
class PoolWorker(object):
'''
Used to track a worker child process and its pending and finished messages.
@@ -56,11 +63,13 @@ class PoolWorker(object):
It is "idle" when self.managed_tasks is empty.
'''
def __init__(self, queue_size, target, args):
track_managed_tasks = False
def __init__(self, queue_size, target, args, **kwargs):
self.messages_sent = 0
self.messages_finished = 0
self.managed_tasks = collections.OrderedDict()
self.finished = MPQueue(queue_size)
self.finished = MPQueue(queue_size) if self.track_managed_tasks else NoOpResultQueue()
self.queue = MPQueue(queue_size)
self.process = Process(target=target, args=(self.queue, self.finished) + args)
self.process.daemon = True
@@ -74,7 +83,8 @@ class PoolWorker(object):
if not body.get('uuid'):
body['uuid'] = str(uuid4())
uuid = body['uuid']
self.managed_tasks[uuid] = body
if self.track_managed_tasks:
self.managed_tasks[uuid] = body
self.queue.put(body, block=True, timeout=5)
self.messages_sent += 1
self.calculate_managed_tasks()
@@ -111,6 +121,8 @@ class PoolWorker(object):
return str(self.process.exitcode)
def calculate_managed_tasks(self):
if not self.track_managed_tasks:
return
# look to see if any tasks were finished
finished = []
for _ in range(self.finished.qsize()):
@@ -135,6 +147,8 @@ class PoolWorker(object):
@property
def current_task(self):
if not self.track_managed_tasks:
return None
self.calculate_managed_tasks()
# the task at [0] is the one that's running right now (or is about to
# be running)
@@ -145,6 +159,8 @@ class PoolWorker(object):
@property
def orphaned_tasks(self):
if not self.track_managed_tasks:
return []
orphaned = []
if not self.alive:
# if this process had a running task that never finished,
@@ -179,6 +195,11 @@ class PoolWorker(object):
return not self.busy
class StatefulPoolWorker(PoolWorker):
track_managed_tasks = True
class WorkerPool(object):
'''
Creates a pool of forked PoolWorkers.
@@ -200,6 +221,7 @@ class WorkerPool(object):
)
'''
pool_cls = PoolWorker
debug_meta = ''
def __init__(self, min_workers=None, queue_size=None):
@@ -225,7 +247,7 @@ class WorkerPool(object):
# for the DB and cache connections (that way lies race conditions)
django_connection.close()
django_cache.close()
worker = PoolWorker(self.queue_size, self.target, (idx,) + self.target_args)
worker = self.pool_cls(self.queue_size, self.target, (idx,) + self.target_args)
self.workers.append(worker)
try:
worker.start()
@@ -236,13 +258,13 @@ class WorkerPool(object):
return idx, worker
def debug(self, *args, **kwargs):
self.cleanup()
tmpl = Template(
'Recorded at: {{ dt }} \n'
'{{ pool.name }}[pid:{{ pool.pid }}] workers total={{ workers|length }} {{ meta }} \n'
'{% for w in workers %}'
'. worker[pid:{{ w.pid }}]{% if not w.alive %} GONE exit={{ w.exitcode }}{% endif %}'
' sent={{ w.messages_sent }}'
' finished={{ w.messages_finished }}'
'{% if w.messages_finished %} finished={{ w.messages_finished }}{% endif %}'
' qsize={{ w.managed_tasks|length }}'
' rss={{ w.mb }}MB'
'{% for task in w.managed_tasks.values() %}'
@@ -260,7 +282,11 @@ class WorkerPool(object):
'\n'
'{% endfor %}'
)
return tmpl.render(pool=self, workers=self.workers, meta=self.debug_meta)
now = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S UTC')
return tmpl.render(
pool=self, workers=self.workers, meta=self.debug_meta,
dt=now
)
def write(self, preferred_queue, body):
queue_order = sorted(range(len(self.workers)), key=lambda x: -1 if x==preferred_queue else x)
@@ -293,6 +319,8 @@ class AutoscalePool(WorkerPool):
down based on demand
'''
pool_cls = StatefulPoolWorker
def __init__(self, *args, **kwargs):
self.max_workers = kwargs.pop('max_workers', None)
super(AutoscalePool, self).__init__(*args, **kwargs)
@@ -309,6 +337,10 @@ class AutoscalePool(WorkerPool):
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
def debug(self, *args, **kwargs):
self.cleanup()
return super(AutoscalePool, self).debug(*args, **kwargs)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:

View File

@@ -43,6 +43,9 @@ class WorkerSignalHandler:
class AWXConsumerBase(object):
last_stats = time.time()
def __init__(self, name, worker, queues=[], pool=None):
self.should_stop = False
@@ -54,6 +57,7 @@ class AWXConsumerBase(object):
if pool is None:
self.pool = WorkerPool()
self.pool.init_workers(self.worker.work_loop)
self.redis = redis.Redis.from_url(settings.BROKER_URL)
@property
def listening_on(self):
@@ -99,6 +103,16 @@ class AWXConsumerBase(object):
queue = 0
self.pool.write(queue, body)
self.total_messages += 1
self.record_statistics()
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
try:
self.redis.set(f'awx_{self.name}_statistics', self.pool.debug())
self.last_stats = time.time()
except Exception:
logger.exception(f"encountered an error communicating with redis to store {self.name} statistics")
self.last_stats = time.time()
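record_statistics() writes at most once per second no matter how often messages arrive. The throttle on its own, with an in-memory list standing in for the redis.set() call:

    import time

    class ThrottledRecorder:
        def __init__(self, interval=1.0):
            self.interval = interval
            self.last = 0.0
            self.sink = []  # stand-in for Redis

        def record(self, value):
            if time.time() - self.last > self.interval:
                self.sink.append(value)  # stand-in for redis.set(key, value)
                self.last = time.time()

    rec = ThrottledRecorder()
    for i in range(100_000):
        rec.record(i)  # called in a tight loop...
    print(len(rec.sink))  # -> 1: only one write lands inside the interval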
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
@@ -118,23 +132,9 @@ class AWXConsumerRedis(AWXConsumerBase):
super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start()
time_to_sleep = 1
while True:
queue = redis.Redis.from_url(settings.BROKER_URL)
while True:
try:
res = queue.blpop(self.queues)
time_to_sleep = 1
res = json.loads(res[1])
self.process_task(res)
except redis.exceptions.RedisError:
time_to_sleep = min(time_to_sleep * 2, 30)
logger.exception(f"encountered an error communicating with redis. Reconnect attempt in {time_to_sleep} seconds")
time.sleep(time_to_sleep)
except (json.JSONDecodeError, KeyError):
logger.exception("failed to decode JSON message from redis")
if self.should_stop:
return
logger.debug(f'{os.getpid()} is alive')
time.sleep(60)
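The removed loop above is the classic BLPOP consumer with exponential backoff: reset the delay after any successful read, double it (capped at 30 seconds) on Redis errors, and never let a malformed message kill the loop. As a standalone sketch (queue names and the broker URL are placeholders):

    import json
    import time
    import redis

    def consume(queues, broker_url='redis://localhost:6379/0'):
        conn = redis.Redis.from_url(broker_url)
        time_to_sleep = 1
        while True:
            try:
                res = conn.blpop(queues)
                time_to_sleep = 1                           # success resets the backoff
                body = json.loads(res[1])
                print('processing', body)                   # stand-in for process_task()
            except redis.exceptions.RedisError:
                time_to_sleep = min(time_to_sleep * 2, 30)  # cap the backoff at 30s
                time.sleep(time_to_sleep)
            except (json.JSONDecodeError, KeyError):
                pass  # malformed message: skip it rather than crash the consumer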
class AWXConsumerPG(AWXConsumerBase):

View File

@@ -1,4 +1,5 @@
import cProfile
import json
import logging
import os
import pstats
@@ -6,12 +7,15 @@ import signal
import tempfile
import time
import traceback
from queue import Empty as QueueEmpty
from django.conf import settings
from django.utils.timezone import now as tz_now
from django.db import DatabaseError, OperationalError, connection as django_connection
from django.db.utils import InterfaceError, InternalError, IntegrityError
from django.db.utils import InterfaceError, InternalError
import psutil
import redis
from awx.main.consumers import emit_channel_notification
from awx.main.models import (JobEvent, AdHocCommandEvent, ProjectUpdateEvent,
@@ -24,10 +28,6 @@ from .base import BaseWorker
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
# the number of seconds to buffer events in memory before flushing
# using JobEvent.objects.bulk_create()
BUFFER_SECONDS = .1
class CallbackBrokerWorker(BaseWorker):
'''
@@ -39,21 +39,57 @@ class CallbackBrokerWorker(BaseWorker):
'''
MAX_RETRIES = 2
last_stats = time.time()
total = 0
last_event = ''
prof = None
def __init__(self):
self.buff = {}
self.pid = os.getpid()
self.redis = redis.Redis.from_url(settings.BROKER_URL)
for key in self.redis.keys('awx_callback_receiver_statistics_*'):
self.redis.delete(key)
def read(self, queue):
try:
return queue.get(block=True, timeout=BUFFER_SECONDS)
except QueueEmpty:
return {'event': 'FLUSH'}
res = self.redis.blpop(settings.CALLBACK_QUEUE, timeout=settings.JOB_EVENT_BUFFER_SECONDS)
if res is None:
return {'event': 'FLUSH'}
self.total += 1
return json.loads(res[1])
except redis.exceptions.RedisError:
logger.exception("encountered an error communicating with redis")
time.sleep(1)
except (json.JSONDecodeError, KeyError):
logger.exception("failed to decode JSON message from redis")
finally:
self.record_statistics()
return {'event': 'FLUSH'}
def record_statistics(self):
# buffer stat recording to once per (by default) 5s
if time.time() - self.last_stats > settings.JOB_EVENT_STATISTICS_INTERVAL:
try:
self.redis.set(f'awx_callback_receiver_statistics_{self.pid}', self.debug())
self.last_stats = time.time()
except Exception:
logger.exception("encountered an error communicating with redis")
self.last_stats = time.time()
def debug(self):
return f'. worker[pid:{self.pid}] sent={self.total} rss={self.mb}MB {self.last_event}'
@property
def mb(self):
return '{:0.3f}'.format(
psutil.Process(self.pid).memory_info().rss / 1024.0 / 1024.0
)
def toggle_profiling(self, *args):
if self.prof:
self.prof.disable()
filename = f'callback-{os.getpid()}.pstats'
filename = f'callback-{self.pid}.pstats'
filepath = os.path.join(tempfile.gettempdir(), filename)
with open(filepath, 'w') as f:
pstats.Stats(self.prof, stream=f).sort_stats('cumulative').print_stats()
@@ -84,20 +120,12 @@ class CallbackBrokerWorker(BaseWorker):
e.modified = now
try:
cls.objects.bulk_create(events)
except Exception as exc:
except Exception:
# if an exception occurs, we should re-attempt to save the
# events one-by-one, because something in the list is
# broken/stale (e.g., an IntegrityError on a specific event)
# broken/stale
for e in events:
try:
if (
isinstance(exc, IntegrityError) and
getattr(e, 'host_id', '')
):
# this is one potential IntegrityError we can
# work around - if the host disappears before
# the event can be processed
e.host_id = None
e.save()
except Exception:
logger.exception('Database Error Saving Job Event')
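The pattern here is bulk-insert first, and only on failure retry row-by-row so a single poisoned record cannot sink the whole batch. The same idea with sqlite3 so it runs anywhere (the original uses Django's bulk_create()):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE event (id INTEGER PRIMARY KEY, payload TEXT)')
    rows = [(1, 'a'), (2, 'b'), (1, 'dup')]  # the duplicate id poisons the batch

    try:
        conn.executemany('INSERT INTO event VALUES (?, ?)', rows)  # fast path
    except sqlite3.IntegrityError:
        conn.rollback()
        for row in rows:  # slow path: retry one-by-one, skip only the bad row
            try:
                conn.execute('INSERT INTO event VALUES (?, ?)', row)
            except sqlite3.IntegrityError:
                pass  # stand-in for logger.exception(...)
    print(conn.execute('SELECT COUNT(*) FROM event').fetchone()[0])  # -> 2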
@@ -108,6 +136,8 @@ class CallbackBrokerWorker(BaseWorker):
def perform_work(self, body):
try:
flush = body.get('event') == 'FLUSH'
if flush:
self.last_event = ''
if not flush:
event_map = {
'job_id': JobEvent,
@@ -123,6 +153,8 @@ class CallbackBrokerWorker(BaseWorker):
job_identifier = body[key]
break
self.last_event = f'\n\t- {cls.__name__} for #{job_identifier} ({body.get("event", "")} {body.get("uuid", "")})' # noqa
if body.get('event') == 'EOF':
try:
final_counter = body.get('final_counter', 0)

View File

@@ -42,6 +42,16 @@ class Command(BaseCommand):
},
created_by=superuser)
c.admin_role.members.add(superuser)
public_galaxy_credential = Credential(
name='Ansible Galaxy',
managed_by_tower=True,
credential_type=CredentialType.objects.get(kind='galaxy'),
inputs={
'url': 'https://galaxy.ansible.com/'
}
)
public_galaxy_credential.save()
o.galaxy_credentials.add(public_galaxy_credential)
i = Inventory.objects.create(name='Demo Inventory',
organization=o,
created_by=superuser)

View File

@@ -1,6 +1,9 @@
import logging
from awx.main.analytics import gather, ship
from dateutil import parser
from django.core.management.base import BaseCommand
from django.utils.timezone import now
class Command(BaseCommand):
@@ -15,6 +18,10 @@ class Command(BaseCommand):
help='Gather analytics without shipping. Works even if analytics are disabled in settings.')
parser.add_argument('--ship', dest='ship', action='store_true',
help='Enable to ship metrics to the Red Hat Cloud')
parser.add_argument('--since', dest='since', action='store',
help='Start date for collection')
parser.add_argument('--until', dest='until', action='store',
help='End date for collection')
def init_logging(self):
self.logger = logging.getLogger('awx.main.analytics')
@@ -28,11 +35,28 @@ class Command(BaseCommand):
self.init_logging()
opt_ship = options.get('ship')
opt_dry_run = options.get('dry-run')
opt_since = options.get('since') or None
opt_until = options.get('until') or None
if opt_since:
since = parser.parse(opt_since)
else:
since = None
if opt_until:
until = parser.parse(opt_until)
else:
until = now()
if opt_ship and opt_dry_run:
self.logger.error('Both --ship and --dry-run cannot be processed at the same time.')
return
tgz = gather(collection_type='manual' if not opt_dry_run else 'dry-run')
if tgz:
self.logger.debug(tgz)
tgzfiles = gather(collection_type='manual' if not opt_dry_run else 'dry-run', since=since, until=until)
if tgzfiles:
for tgz in tgzfiles:
self.logger.info(tgz)
else:
self.logger.error('No analytics collected')
if opt_ship:
ship(tgz)
if tgzfiles:
for tgz in tgzfiles:
ship(tgz)
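The --since/--until plumbing in miniature: argparse collects the strings and dateutil turns them into datetimes, with 'until' defaulting to the current time (datetime.now(timezone.utc) stands in for Django's timezone-aware now()):

    import argparse
    from datetime import datetime, timezone
    from dateutil import parser as date_parser

    ap = argparse.ArgumentParser()
    ap.add_argument('--since', help='Start date for collection')
    ap.add_argument('--until', help='End date for collection')
    opts = ap.parse_args(['--since', '2020-09-01'])  # simulate CLI input

    since = date_parser.parse(opts.since) if opts.since else None
    until = date_parser.parse(opts.until) if opts.until else datetime.now(timezone.utc)
    print(since, until)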

View File

@@ -12,7 +12,6 @@ import sys
import time
import traceback
import shutil
from distutils.version import LooseVersion as Version
# Django
from django.conf import settings
@@ -39,7 +38,6 @@ from awx.main.utils import (
build_proot_temp_dir,
get_licenser
)
from awx.main.utils.common import _get_ansible_version
from awx.main.signals import disable_activity_stream
from awx.main.constants import STANDARD_INVENTORY_UPDATE_ENV
from awx.main.utils.pglock import advisory_lock
@@ -136,15 +134,10 @@ class AnsibleInventoryLoader(object):
# inside of /venv/ansible, so we override the specified interpreter
# https://github.com/ansible/ansible/issues/50714
bargs = ['python', ansible_inventory_path, '-i', self.source]
ansible_version = _get_ansible_version(ansible_inventory_path[:-len('-inventory')])
if ansible_version != 'unknown':
this_version = Version(ansible_version)
if this_version >= Version('2.5'):
bargs.extend(['--playbook-dir', self.source_dir])
if this_version >= Version('2.8'):
if self.verbosity:
# INFO: -vvv, DEBUG: -vvvvv, for inventory, any more than 3 makes little difference
bargs.append('-{}'.format('v' * min(5, self.verbosity * 2 + 1)))
bargs.extend(['--playbook-dir', self.source_dir])
if self.verbosity:
# INFO: -vvv, DEBUG: -vvvvv, for inventory, any more than 3 makes little difference
bargs.append('-{}'.format('v' * min(5, self.verbosity * 2 + 1)))
logger.debug('Using base command: {}'.format(' '.join(bargs)))
return bargs
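After this change the flag construction no longer version-gates anything; as a pure function it reduces to the sketch below (function and argument names are illustrative):

    def build_inventory_args(source, source_dir, verbosity=0):
        bargs = ['python', 'ansible-inventory', '-i', source]
        bargs.extend(['--playbook-dir', source_dir])
        if verbosity:
            # INFO: -vvv, DEBUG: -vvvvv; for inventory, more than 3 makes little difference
            bargs.append('-{}'.format('v' * min(5, verbosity * 2 + 1)))
        return bargs

    print(build_inventory_args('/tmp/inv.yml', '/tmp', verbosity=1))
    # -> ['python', 'ansible-inventory', '-i', '/tmp/inv.yml', '--playbook-dir', '/tmp', '-vvv']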

View File

@@ -13,7 +13,7 @@ from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
"""
Internal tower command.
Regsiter this instance with the database for HA tracking.
Register this instance with the database for HA tracking.
"""
help = (

View File

@@ -4,6 +4,7 @@
from django.conf import settings
from django.core.management.base import BaseCommand
from awx.main.dispatch.control import Control
from awx.main.dispatch.worker import AWXConsumerRedis, CallbackBrokerWorker
@@ -15,7 +16,14 @@ class Command(BaseCommand):
'''
help = 'Launch the job callback receiver'
def add_arguments(self, parser):
parser.add_argument('--status', dest='status', action='store_true',
help='print the internal state of any running dispatchers')
def handle(self, *arg, **options):
if options.get('status'):
print(Control('callback_receiver').status())
return
consumer = None
try:
consumer = AWXConsumerRedis(

View File

@@ -48,7 +48,13 @@ class HostManager(models.Manager):
"""When the parent instance of the host query set has a `kind=smart` and a `host_filter`
set. Use the `host_filter` to generate the queryset for the hosts.
"""
qs = super(HostManager, self).get_queryset()
qs = super(HostManager, self).get_queryset().defer(
'last_job__extra_vars',
'last_job_host_summary__job__extra_vars',
'last_job__artifacts',
'last_job_host_summary__job__artifacts',
)
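defer() keeps the listed wide columns out of the SELECT until they are actually touched, so host listings stop dragging extra_vars and artifacts blobs along. The SQL-level effect, illustrated with sqlite3 (the table and columns here are hypothetical):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE job (id INTEGER PRIMARY KEY, name TEXT, extra_vars TEXT)')
    conn.execute("INSERT INTO job VALUES (1, 'demo', ?)", ('x' * 10_000,))

    # equivalent of Job.objects.defer('extra_vars'): select only the light columns
    print(conn.execute('SELECT id, name FROM job').fetchall())  # 10 KB blob never read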
if (hasattr(self, 'instance') and
hasattr(self.instance, 'host_filter') and
hasattr(self.instance, 'kind')):

View File

@@ -0,0 +1,104 @@
# Generated by Django 2.2.11 on 2020-07-20 19:56
import logging
import yaml
from django.db import migrations, models
from awx.main.models.base import VarsDictProperty
from ._inventory_source_vars import FrozenInjectors
logger = logging.getLogger('awx.main.migrations')
def _get_inventory_sources(InventorySource):
return InventorySource.objects.filter(source__in=['ec2', 'gce', 'azure_rm', 'vmware', 'satellite6', 'openstack', 'rhv', 'tower'])
def inventory_source_vars_forward(apps, schema_editor):
InventorySource = apps.get_model("main", "InventorySource")
'''
The Django app registry does not keep track of model inheritance. The
source_vars_dict property comes from InventorySourceOptions via inheritance.
This adds that property. Luckily, other properties and functionality from
InventorySourceOptions are not needed by the injector logic.
'''
setattr(InventorySource, 'source_vars_dict', VarsDictProperty('source_vars'))
source_vars_backup = dict()
for inv_source_obj in _get_inventory_sources(InventorySource):
if inv_source_obj.source in FrozenInjectors:
source_vars_backup[inv_source_obj.id] = dict(inv_source_obj.source_vars_dict)
injector = FrozenInjectors[inv_source_obj.source]()
new_inv_source_vars = injector.inventory_as_dict(inv_source_obj, None)
inv_source_obj.source_vars = yaml.dump(new_inv_source_vars)
inv_source_obj.save()
class Migration(migrations.Migration):
dependencies = [
('main', '0118_add_remote_archive_scm_type'),
]
operations = [
migrations.RunPython(inventory_source_vars_forward),
migrations.RemoveField(
model_name='inventorysource',
name='group_by',
),
migrations.RemoveField(
model_name='inventoryupdate',
name='group_by',
),
migrations.RemoveField(
model_name='inventorysource',
name='instance_filters',
),
migrations.RemoveField(
model_name='inventoryupdate',
name='instance_filters',
),
migrations.RemoveField(
model_name='inventorysource',
name='source_regions',
),
migrations.RemoveField(
model_name='inventoryupdate',
name='source_regions',
),
migrations.AddField(
model_name='inventorysource',
name='enabled_value',
field=models.TextField(blank=True, default='', help_text='Only used when enabled_var is set. Value when the host is considered enabled. For example if enabled_var="status.power_state" and enabled_value="powered_on" with host variables: { "status": { "power_state": "powered_on", "created": "2020-08-04T18:13:04+00:00", "healthy": true }, "name": "foobar", "ip_address": "192.168.2.1" } the host would be marked enabled. If power_state were any value other than powered_on, the host would be disabled when imported into Tower. If the key is not found, the host will be enabled'),
),
migrations.AddField(
model_name='inventorysource',
name='enabled_var',
field=models.TextField(blank=True, default='', help_text='Retrieve the enabled state from the given dict of host variables. The enabled variable may be specified as "foo.bar", in which case the lookup will traverse into nested dicts, equivalent to: from_dict.get("foo", {}).get("bar", default)'),
),
migrations.AddField(
model_name='inventorysource',
name='host_filter',
field=models.TextField(blank=True, default='', help_text='Regex where only matching hosts will be imported into Tower.'),
),
migrations.AddField(
model_name='inventoryupdate',
name='enabled_value',
field=models.TextField(blank=True, default='', help_text='Only used when enabled_var is set. Value when the host is considered enabled. For example if enabled_var="status.power_state" and enabled_value="powered_on" with host variables: { "status": { "power_state": "powered_on", "created": "2020-08-04T18:13:04+00:00", "healthy": true }, "name": "foobar", "ip_address": "192.168.2.1" } the host would be marked enabled. If power_state were any value other than powered_on, the host would be disabled when imported into Tower. If the key is not found, the host will be enabled'),
),
migrations.AddField(
model_name='inventoryupdate',
name='enabled_var',
field=models.TextField(blank=True, default='', help_text='Retrieve the enabled state from the given dict of host variables. The enabled variable may be specified as "foo.bar", in which case the lookup will traverse into nested dicts, equivalent to: from_dict.get("foo", {}).get("bar", default)'),
),
migrations.AddField(
model_name='inventoryupdate',
name='host_filter',
field=models.TextField(blank=True, default='', help_text='Regex where only matching hosts will be imported into Tower.'),
),
]
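A bare skeleton of the RunPython pattern this migration uses: resolve models through apps.get_model() so the historical, migration-time model state is used rather than a direct import. A sketch only; it needs a Django project to actually run:

    from django.db import migrations

    def forwards(apps, schema_editor):
        # historical model state, not awx.main.models.InventorySource
        InventorySource = apps.get_model('main', 'InventorySource')
        for obj in InventorySource.objects.all():
            obj.save()  # stand-in for the real per-row rewrite

    class Migration(migrations.Migration):
        dependencies = [('main', '0118_add_remote_archive_scm_type')]
        operations = [migrations.RunPython(forwards, migrations.RunPython.noop)]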

View File

@@ -0,0 +1,51 @@
# Generated by Django 2.2.11 on 2020-08-04 15:19
import logging
import awx.main.fields
from awx.main.utils.encryption import encrypt_field, decrypt_field
from django.db import migrations, models
from django.utils.timezone import now
import django.db.models.deletion
from awx.main.migrations import _galaxy as galaxy
from awx.main.models import CredentialType as ModernCredentialType
from awx.main.utils.common import set_current_apps
logger = logging.getLogger('awx.main.migrations')
class Migration(migrations.Migration):
dependencies = [
('main', '0119_inventory_plugins'),
]
operations = [
migrations.AlterField(
model_name='credentialtype',
name='kind',
field=models.CharField(choices=[('ssh', 'Machine'), ('vault', 'Vault'), ('net', 'Network'), ('scm', 'Source Control'), ('cloud', 'Cloud'), ('token', 'Personal Access Token'), ('insights', 'Insights'), ('external', 'External'), ('kubernetes', 'Kubernetes'), ('galaxy', 'Galaxy/Automation Hub')], max_length=32),
),
migrations.CreateModel(
name='OrganizationGalaxyCredentialMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
('credential', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.Credential')),
('organization', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.Organization')),
],
),
migrations.AddField(
model_name='organization',
name='galaxy_credentials',
field=awx.main.fields.OrderedManyToManyField(blank=True, related_name='organization_galaxy_credentials', through='main.OrganizationGalaxyCredentialMembership', to='main.Credential'),
),
migrations.AddField(
model_name='credential',
name='managed_by_tower',
field=models.BooleanField(default=False, editable=False),
),
migrations.RunPython(galaxy.migrate_galaxy_settings)
]

View File

@@ -0,0 +1,16 @@
# Generated by Django 2.2.11 on 2020-07-24 17:41
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0120_galaxy_credentials'),
]
operations = [
migrations.DeleteModel(
name='TowerAnalyticsState',
),
]

View File

@@ -0,0 +1,125 @@
# Generated by Django 2.2.11 on 2020-08-04 15:19
import logging
from awx.main.utils.encryption import encrypt_field, decrypt_field
from django.conf import settings
from django.utils.timezone import now
from awx.main.models import CredentialType as ModernCredentialType
from awx.main.utils.common import set_current_apps
logger = logging.getLogger('awx.main.migrations')
def migrate_galaxy_settings(apps, schema_editor):
Organization = apps.get_model('main', 'Organization')
if Organization.objects.count() == 0:
# nothing to migrate
return
set_current_apps(apps)
ModernCredentialType.setup_tower_managed_defaults()
CredentialType = apps.get_model('main', 'CredentialType')
Credential = apps.get_model('main', 'Credential')
Setting = apps.get_model('conf', 'Setting')
galaxy_type = CredentialType.objects.get(kind='galaxy')
private_galaxy_url = Setting.objects.filter(key='PRIMARY_GALAXY_URL').first()
# by default, prior versions of AWX/Tower automatically pulled content
# from galaxy.ansible.com
public_galaxy_enabled = True
public_galaxy_setting = Setting.objects.filter(key='PUBLIC_GALAXY_ENABLED').first()
if public_galaxy_setting and public_galaxy_setting.value is False:
# ...UNLESS this behavior was explicitly disabled via this setting
public_galaxy_enabled = False
public_galaxy_credential = Credential(
created=now(),
modified=now(),
name='Ansible Galaxy',
managed_by_tower=True,
credential_type=galaxy_type,
inputs={
'url': 'https://galaxy.ansible.com/'
}
)
public_galaxy_credential.save()
for org in Organization.objects.all():
if private_galaxy_url and private_galaxy_url.value:
# If a setting exists for a private Galaxy URL, make a credential for it
username = Setting.objects.filter(key='PRIMARY_GALAXY_USERNAME').first()
password = Setting.objects.filter(key='PRIMARY_GALAXY_PASSWORD').first()
if (username and username.value) or (password and password.value):
logger.error(
f'Specifying HTTP basic auth for the Ansible Galaxy API '
f'({private_galaxy_url.value}) is no longer supported. '
'Please provide an API token instead after your upgrade '
'has completed',
)
inputs = {
'url': private_galaxy_url.value
}
token = Setting.objects.filter(key='PRIMARY_GALAXY_TOKEN').first()
if token and token.value:
inputs['token'] = decrypt_field(token, 'value')
auth_url = Setting.objects.filter(key='PRIMARY_GALAXY_AUTH_URL').first()
if auth_url and auth_url.value:
inputs['auth_url'] = auth_url.value
name = f'Private Galaxy ({private_galaxy_url.value})'
if 'cloud.redhat.com' in inputs['url']:
name = f'Ansible Automation Hub ({private_galaxy_url.value})'
cred = Credential(
created=now(),
modified=now(),
name=name,
organization=org,
credential_type=galaxy_type,
inputs=inputs
)
cred.save()
if token and token.value:
# encrypt based on the primary key from the prior save
cred.inputs['token'] = encrypt_field(cred, 'token')
cred.save()
org.galaxy_credentials.add(cred)
fallback_servers = getattr(settings, 'FALLBACK_GALAXY_SERVERS', [])
for fallback in fallback_servers:
url = fallback.get('url', None)
auth_url = fallback.get('auth_url', None)
username = fallback.get('username', None)
password = fallback.get('password', None)
token = fallback.get('token', None)
if username or password:
logger.error(
f'Specifying HTTP basic auth for the Ansible Galaxy API '
f'({url}) is no longer supported. '
'Please provide an API token instead after your upgrade '
'has completed',
)
inputs = {'url': url}
if token:
inputs['token'] = token
if auth_url:
inputs['auth_url'] = auth_url
cred = Credential(
created=now(),
modified=now(),
name=f'Ansible Galaxy ({url})',
organization=org,
credential_type=galaxy_type,
inputs=inputs
)
cred.save()
if token:
# encrypt based on the primary key from the prior save
cred.inputs['token'] = encrypt_field(cred, 'token')
cred.save()
org.galaxy_credentials.add(cred)
if public_galaxy_enabled:
# If public Galaxy was enabled, associate it to the org
org.galaxy_credentials.add(public_galaxy_credential)
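Note the double save whenever a token is present: the field encryption is keyed off the row's primary key, so the row must exist before the secret can be encrypted. The shape of that dance, with dummy stand-ins for Credential and encrypt_field (neither is AWX's real implementation):

    class Credential:  # dummy stand-in
        _next_pk = 1
        def __init__(self, inputs):
            self.pk, self.inputs = None, inputs
        def save(self):
            if self.pk is None:
                self.pk, Credential._next_pk = Credential._next_pk, Credential._next_pk + 1

    def encrypt_field(obj, name):
        # dummy: real encryption derives key material from obj.pk
        return f'$encrypted$pk={obj.pk}${len(obj.inputs[name])}chars'

    cred = Credential({'url': 'https://galaxy.example.com', 'token': 'secret'})
    cred.save()                                          # first save assigns the pk
    cred.inputs['token'] = encrypt_field(cred, 'token')  # now there is a pk to key on
    cred.save()
    print(cred.inputs['token'])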

View File

@@ -0,0 +1,751 @@
import json
from django.utils.translation import ugettext_lazy as _
FrozenInjectors = dict()
class PluginFileInjector(object):
plugin_name = None # Ansible core name used to reference plugin
# every source should have collection, these are for the collection name
namespace = None
collection = None
def inventory_as_dict(self, inventory_source, private_data_dir):
"""Default implementation of inventory plugin file contents.
There are some valid cases when all parameters can be obtained from
the environment variables; "plugin: linode" alone, for example, is valid.
Ideally, however, some options should be filled in from the inventory source data.
"""
if self.plugin_name is None:
raise NotImplementedError('At minimum the plugin name is needed for inventory plugin use.')
proper_name = f'{self.namespace}.{self.collection}.{self.plugin_name}'
return {'plugin': proper_name}
class azure_rm(PluginFileInjector):
plugin_name = 'azure_rm'
namespace = 'azure'
collection = 'azcollection'
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(azure_rm, self).inventory_as_dict(inventory_source, private_data_dir)
source_vars = inventory_source.source_vars_dict
ret['fail_on_template_errors'] = False
group_by_hostvar = {
'location': {'prefix': '', 'separator': '', 'key': 'location'},
'tag': {'prefix': '', 'separator': '', 'key': 'tags.keys() | list if tags else []'},
# Introduced with https://github.com/ansible/ansible/pull/53046
'security_group': {'prefix': '', 'separator': '', 'key': 'security_group'},
'resource_group': {'prefix': '', 'separator': '', 'key': 'resource_group'},
# Note, os_family was not documented correctly in script, but defaulted to grouping by it
'os_family': {'prefix': '', 'separator': '', 'key': 'os_disk.operating_system_type'}
}
# by default group by everything
# always respect user setting, if they gave it
group_by = [
grouping_name for grouping_name in group_by_hostvar
if source_vars.get('group_by_{}'.format(grouping_name), True)
]
ret['keyed_groups'] = [group_by_hostvar[grouping_name] for grouping_name in group_by]
if 'tag' in group_by:
# Nasty syntax to reproduce "key_value" group names in addition to "key"
ret['keyed_groups'].append({
'prefix': '', 'separator': '',
'key': r'dict(tags.keys() | map("regex_replace", "^(.*)$", "\1_") | list | zip(tags.values() | list)) if tags else []'
})
# Compatibility content
# TODO: add proper support for instance_filters non-specific to compatibility
# TODO: add proper support for group_by non-specific to compatibility
# Dashes were not configurable in the azure_rm.py script, and we do not want unicode, so always use this
ret['use_contrib_script_compatible_sanitization'] = True
# use same host names as script
ret['plain_host_names'] = True
# By default the script did not filter hosts
ret['default_host_filters'] = []
# User-given host filters
user_filters = []
old_filterables = [
('resource_groups', 'resource_group'),
('tags', 'tags')
# locations / location would be an entry
# but this would conflict with source_regions
]
for key, loc in old_filterables:
value = source_vars.get(key, None)
if value and isinstance(value, str):
# tags can be list of key:value pairs
# e.g. 'Creator:jmarshall, peanutbutter:jelly'
# or tags can be a list of keys
# e.g. 'Creator, peanutbutter'
if key == "tags":
# grab each key value pair
for kvpair in value.split(','):
# split into key and value
kv = kvpair.split(':')
# filter out any host that does not have key
# in their tags.keys() variable
user_filters.append('"{}" not in tags.keys()'.format(kv[0].strip()))
# if a value is provided, check that the key:value pair matches
if len(kv) > 1:
user_filters.append('tags["{}"] != "{}"'.format(kv[0].strip(), kv[1].strip()))
else:
user_filters.append('{} not in {}'.format(
loc, value.split(',')
))
if user_filters:
ret.setdefault('exclude_host_filters', [])
ret['exclude_host_filters'].extend(user_filters)
ret['conditional_groups'] = {'azure': True}
ret['hostvar_expressions'] = {
'provisioning_state': 'provisioning_state | title',
'computer_name': 'name',
'type': 'resource_type',
'private_ip': 'private_ipv4_addresses[0] if private_ipv4_addresses else None',
'public_ip': 'public_ipv4_addresses[0] if public_ipv4_addresses else None',
'public_ip_name': 'public_ip_name if public_ip_name is defined else None',
'public_ip_id': 'public_ip_id if public_ip_id is defined else None',
'tags': 'tags if tags else None'
}
# Special functionality from script
if source_vars.get('use_private_ip', False):
ret['hostvar_expressions']['ansible_host'] = 'private_ipv4_addresses[0]'
# end compatibility content
if inventory_source.source_regions and 'all' not in inventory_source.source_regions:
# initialize a list for this section in inventory file
ret.setdefault('exclude_host_filters', [])
# make a python list of the regions we will use
python_regions = [x.strip() for x in inventory_source.source_regions.split(',')]
# convert that list in memory to python syntax in a string
# now put that in jinja2 syntax operating on hostvar key "location"
# and put that as an entry in the exclusions list
ret['exclude_host_filters'].append("location not in {}".format(repr(python_regions)))
return ret
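The dict returned by inventory_as_dict() is ultimately written out as an Ansible inventory plugin YAML file. Rendering a trimmed azure_rm-style config with PyYAML (values abbreviated from the code above):

    import yaml

    config = {
        'plugin': 'azure.azcollection.azure_rm',
        'fail_on_template_errors': False,
        'use_contrib_script_compatible_sanitization': True,
        'plain_host_names': True,
        'default_host_filters': [],
        'keyed_groups': [{'prefix': '', 'separator': '', 'key': 'location'}],
        'conditional_groups': {'azure': True},
    }
    print(yaml.dump(config, default_flow_style=False))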
class ec2(PluginFileInjector):
plugin_name = 'aws_ec2'
namespace = 'amazon'
collection = 'aws'
def _get_ec2_group_by_choices(self):
return [
('ami_id', _('Image ID')),
('availability_zone', _('Availability Zone')),
('aws_account', _('Account')),
('instance_id', _('Instance ID')),
('instance_state', _('Instance State')),
('platform', _('Platform')),
('instance_type', _('Instance Type')),
('key_pair', _('Key Name')),
('region', _('Region')),
('security_group', _('Security Group')),
('tag_keys', _('Tags')),
('tag_none', _('Tag None')),
('vpc_id', _('VPC ID')),
]
def _compat_compose_vars(self):
return {
# vars that change
'ec2_block_devices': (
"dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings "
"| map(attribute='ebs.volume_id') | list))"
),
'ec2_dns_name': 'public_dns_name',
'ec2_group_name': 'placement.group_name',
'ec2_instance_profile': 'iam_instance_profile | default("")',
'ec2_ip_address': 'public_ip_address',
'ec2_kernel': 'kernel_id | default("")',
'ec2_monitored': "monitoring.state in ['enabled', 'pending']",
'ec2_monitoring_state': 'monitoring.state',
'ec2_placement': 'placement.availability_zone',
'ec2_ramdisk': 'ramdisk_id | default("")',
'ec2_reason': 'state_transition_reason',
'ec2_security_group_ids': "security_groups | map(attribute='group_id') | list | join(',')",
'ec2_security_group_names': "security_groups | map(attribute='group_name') | list | join(',')",
'ec2_tag_Name': 'tags.Name',
'ec2_state': 'state.name',
'ec2_state_code': 'state.code',
'ec2_state_reason': 'state_reason.message if state_reason is defined else ""',
'ec2_sourceDestCheck': 'source_dest_check | default(false) | lower | string', # snake_case syntax intended
'ec2_account_id': 'owner_id',
# vars that just need ec2_ prefix
'ec2_ami_launch_index': 'ami_launch_index | string',
'ec2_architecture': 'architecture',
'ec2_client_token': 'client_token',
'ec2_ebs_optimized': 'ebs_optimized',
'ec2_hypervisor': 'hypervisor',
'ec2_image_id': 'image_id',
'ec2_instance_type': 'instance_type',
'ec2_key_name': 'key_name',
'ec2_launch_time': r'launch_time | regex_replace(" ", "T") | regex_replace("(\+)(\d\d):(\d)(\d)$", ".\g<2>\g<3>Z")',
'ec2_platform': 'platform | default("")',
'ec2_private_dns_name': 'private_dns_name',
'ec2_private_ip_address': 'private_ip_address',
'ec2_public_dns_name': 'public_dns_name',
'ec2_region': 'placement.region',
'ec2_root_device_name': 'root_device_name',
'ec2_root_device_type': 'root_device_type',
# many items need blank defaults because the script tended to keep a common schema
'ec2_spot_instance_request_id': 'spot_instance_request_id | default("")',
'ec2_subnet_id': 'subnet_id | default("")',
'ec2_virtualization_type': 'virtualization_type',
'ec2_vpc_id': 'vpc_id | default("")',
# same as ec2_ip_address, the script provided this
'ansible_host': 'public_ip_address',
# new with https://github.com/ansible/ansible/pull/53645
'ec2_eventsSet': 'events | default("")',
'ec2_persistent': 'persistent | default(false)',
'ec2_requester_id': 'requester_id | default("")'
}
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(ec2, self).inventory_as_dict(inventory_source, private_data_dir)
keyed_groups = []
group_by_hostvar = {
'ami_id': {'prefix': '', 'separator': '', 'key': 'image_id', 'parent_group': 'images'},
# 2 entries for zones for same groups to establish 2 parentage trees
'availability_zone': {'prefix': '', 'separator': '', 'key': 'placement.availability_zone', 'parent_group': 'zones'},
'aws_account': {'prefix': '', 'separator': '', 'key': 'ec2_account_id', 'parent_group': 'accounts'}, # composed var
'instance_id': {'prefix': '', 'separator': '', 'key': 'instance_id', 'parent_group': 'instances'}, # normally turned off
'instance_state': {'prefix': 'instance_state', 'key': 'ec2_state', 'parent_group': 'instance_states'}, # composed var
# ec2_platform is a composed var, but group names do not match up to hostvar exactly
'platform': {'prefix': 'platform', 'key': 'platform | default("undefined")', 'parent_group': 'platforms'},
'instance_type': {'prefix': 'type', 'key': 'instance_type', 'parent_group': 'types'},
'key_pair': {'prefix': 'key', 'key': 'key_name', 'parent_group': 'keys'},
'region': {'prefix': '', 'separator': '', 'key': 'placement.region', 'parent_group': 'regions'},
# Security requires some ninja jinja2 syntax, credit to s-hertel
'security_group': {'prefix': 'security_group', 'key': 'security_groups | map(attribute="group_name")', 'parent_group': 'security_groups'},
# tags cannot be parented in exactly the same way as the script due to
# https://github.com/ansible/ansible/pull/53812
'tag_keys': [
{'prefix': 'tag', 'key': 'tags', 'parent_group': 'tags'},
{'prefix': 'tag', 'key': 'tags.keys()', 'parent_group': 'tags'}
],
# 'tag_none': None, # grouping by no tags isn't a different thing with plugin
# naming is redundant, like vpc_id_vpc_8c412cea, but intended
'vpc_id': {'prefix': 'vpc_id', 'key': 'vpc_id', 'parent_group': 'vpcs'},
}
# -- same-ish as script here --
group_by = [x.strip().lower() for x in inventory_source.group_by.split(',') if x.strip()]
for choice in self._get_ec2_group_by_choices():
value = bool((group_by and choice[0] in group_by) or (not group_by and choice[0] != 'instance_id'))
# -- end sameness to script --
if value:
this_keyed_group = group_by_hostvar.get(choice[0], None)
# If a keyed group syntax does not exist, there is nothing we can do to get this group
if this_keyed_group is not None:
if isinstance(this_keyed_group, list):
keyed_groups.extend(this_keyed_group)
else:
keyed_groups.append(this_keyed_group)
# special case, this parentage is only added if both zones and regions are present
if not group_by or ('region' in group_by and 'availability_zone' in group_by):
keyed_groups.append({'prefix': '', 'separator': '', 'key': 'placement.availability_zone', 'parent_group': '{{ placement.region }}'})
source_vars = inventory_source.source_vars_dict
# This is a setting from the script, hopefully no one used it
# if true, it replaces dashes, but not in region / loc names
replace_dash = bool(source_vars.get('replace_dash_in_groups', True))
# Compatibility content
legacy_regex = {
True: r"[^A-Za-z0-9\_]",
False: r"[^A-Za-z0-9\_\-]" # do not replace dash, dash is allowed
}[replace_dash]
list_replacer = 'map("regex_replace", "{rx}", "_") | list'.format(rx=legacy_regex)
# this option, a plugin option, will allow dashes, but not unicode
# when set to False, unicode will be allowed, but it was not allowed by script
# thus, we always have to use this option, and always use our custom regex
ret['use_contrib_script_compatible_sanitization'] = True
for grouping_data in keyed_groups:
if grouping_data['key'] in ('placement.region', 'placement.availability_zone'):
# us-east-2 is always us-east-2 according to ec2.py
# no sanitization in region-ish groups for the script standards, ever ever
continue
if grouping_data['key'] == 'tags':
# dict jinja2 transformation
grouping_data['key'] = 'dict(tags.keys() | {replacer} | zip(tags.values() | {replacer}))'.format(
replacer=list_replacer
)
elif grouping_data['key'] == 'tags.keys()' or grouping_data['prefix'] == 'security_group':
# list jinja2 transformation
grouping_data['key'] += ' | {replacer}'.format(replacer=list_replacer)
else:
# string transformation
grouping_data['key'] += ' | regex_replace("{rx}", "_")'.format(rx=legacy_regex)
# end compatibility content
if source_vars.get('iam_role_arn', None):
ret['iam_role_arn'] = source_vars['iam_role_arn']
# This was an allowed ec2.ini option, also plugin option, so pass through
if source_vars.get('boto_profile', None):
ret['boto_profile'] = source_vars['boto_profile']
elif not replace_dash:
# Using the plugin, but still want dashes allowed
ret['use_contrib_script_compatible_sanitization'] = True
if source_vars.get('nested_groups') is False:
for this_keyed_group in keyed_groups:
this_keyed_group.pop('parent_group', None)
if keyed_groups:
ret['keyed_groups'] = keyed_groups
# Instance ID not part of compat vars, because of settings.EC2_INSTANCE_ID_VAR
compose_dict = {'ec2_id': 'instance_id'}
inst_filters = {}
# Compatibility content
compose_dict.update(self._compat_compose_vars())
# plugin provides "aws_ec2", but not this which the script gave
ret['groups'] = {'ec2': True}
if source_vars.get('hostname_variable') is not None:
hnames = []
for expr in source_vars.get('hostname_variable').split(','):
if expr == 'public_dns_name':
hnames.append('dns-name')
elif not expr.startswith('tag:') and '_' in expr:
hnames.append(expr.replace('_', '-'))
else:
hnames.append(expr)
ret['hostnames'] = hnames
else:
# public_ip as hostname is non-default plugin behavior, script behavior
ret['hostnames'] = [
'network-interface.addresses.association.public-ip',
'dns-name',
'private-dns-name'
]
# The script returned only running state by default, the plugin does not
# https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html#options
# options: pending | running | shutting-down | terminated | stopping | stopped
inst_filters['instance-state-name'] = ['running']
# end compatibility content
if source_vars.get('destination_variable') or source_vars.get('vpc_destination_variable'):
for fd in ('destination_variable', 'vpc_destination_variable'):
if source_vars.get(fd):
compose_dict['ansible_host'] = source_vars.get(fd)
break
if compose_dict:
ret['compose'] = compose_dict
if inventory_source.instance_filters:
# logic used to live in ec2.py, now it belongs to us. Yay more code?
filter_sets = [f for f in inventory_source.instance_filters.split(',') if f]
for instance_filter in filter_sets:
# AND logic not supported, unclear how to...
instance_filter = instance_filter.strip()
if not instance_filter or '=' not in instance_filter:
continue
filter_key, filter_value = [x.strip() for x in instance_filter.split('=', 1)]
if not filter_key:
continue
inst_filters[filter_key] = filter_value
if inst_filters:
ret['filters'] = inst_filters
if inventory_source.source_regions and 'all' not in inventory_source.source_regions:
ret['regions'] = inventory_source.source_regions.split(',')
return ret
class gce(PluginFileInjector):
plugin_name = 'gcp_compute'
namespace = 'google'
collection = 'cloud'
def _compat_compose_vars(self):
# missing: gce_image, gce_uuid
# https://github.com/ansible/ansible/issues/51884
return {
'gce_description': 'description if description else None',
'gce_machine_type': 'machineType',
'gce_name': 'name',
'gce_network': 'networkInterfaces[0].network.name',
'gce_private_ip': 'networkInterfaces[0].networkIP',
'gce_public_ip': 'networkInterfaces[0].accessConfigs[0].natIP | default(None)',
'gce_status': 'status',
'gce_subnetwork': 'networkInterfaces[0].subnetwork.name',
'gce_tags': 'tags.get("items", [])',
'gce_zone': 'zone',
'gce_metadata': 'metadata.get("items", []) | items2dict(key_name="key", value_name="value")',
# NOTE: image hostvar is enabled via retrieve_image_info option
'gce_image': 'image',
# We need this as long as hostnames is non-default, otherwise hosts
# will not be addressed correctly, was returned in script
'ansible_ssh_host': 'networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)'
}
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(gce, self).inventory_as_dict(inventory_source, private_data_dir)
# auth related items
ret['auth_kind'] = "serviceaccount"
filters = []
# TODO: implement gce group_by options
# gce never processed the group_by field; if it had, we would selectively
# apply those options here, but it did not, so all groups are added here
keyed_groups = [
# the jinja2 syntax is duplicated with compose
# https://github.com/ansible/ansible/issues/51883
{'prefix': 'network', 'key': 'gce_subnetwork'}, # composed var
{'prefix': '', 'separator': '', 'key': 'gce_private_ip'}, # composed var
{'prefix': '', 'separator': '', 'key': 'gce_public_ip'}, # composed var
{'prefix': '', 'separator': '', 'key': 'machineType'},
{'prefix': '', 'separator': '', 'key': 'zone'},
{'prefix': 'tag', 'key': 'gce_tags'}, # composed var
{'prefix': 'status', 'key': 'status | lower'},
# NOTE: image hostvar is enabled via retrieve_image_info option
{'prefix': '', 'separator': '', 'key': 'image'},
]
# This will be used as the gce instance_id, must be universal, non-compat
compose_dict = {'gce_id': 'id'}
# Compatibility content
# TODO: proper group_by and instance_filters support, irrelevant of compat mode
# The gce.py script never sanitized any names in any way
ret['use_contrib_script_compatible_sanitization'] = True
# Perform extra API query to get the image hostvar
ret['retrieve_image_info'] = True
# Add in old hostvars aliases
compose_dict.update(self._compat_compose_vars())
# Non-default names to match script
ret['hostnames'] = ['name', 'public_ip', 'private_ip']
# end compatibility content
if keyed_groups:
ret['keyed_groups'] = keyed_groups
if filters:
ret['filters'] = filters
if compose_dict:
ret['compose'] = compose_dict
if inventory_source.source_regions and 'all' not in inventory_source.source_regions:
ret['zones'] = inventory_source.source_regions.split(',')
return ret
class vmware(PluginFileInjector):
plugin_name = 'vmware_vm_inventory'
namespace = 'community'
collection = 'vmware'
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(vmware, self).inventory_as_dict(inventory_source, private_data_dir)
ret['strict'] = False
# Documentation of props, see
# https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.rst
UPPERCASE_PROPS = [
"availableField",
"configIssue",
"configStatus",
"customValue", # optional
"datastore",
"effectiveRole",
"guestHeartbeatStatus", # optional
"layout", # optional
"layoutEx", # optional
"name",
"network",
"overallStatus",
"parentVApp", # optional
"permission",
"recentTask",
"resourcePool",
"rootSnapshot",
"snapshot", # optional
"triggeredAlarmState",
"value"
]
NESTED_PROPS = [
"capability",
"config",
"guest",
"runtime",
"storage",
"summary", # repeat of other properties
]
ret['properties'] = UPPERCASE_PROPS + NESTED_PROPS
ret['compose'] = {'ansible_host': 'guest.ipAddress'} # default value
ret['compose']['ansible_ssh_host'] = ret['compose']['ansible_host']
# the ansible_uuid was unique per host, per import, from the script
ret['compose']['ansible_uuid'] = '99999999 | random | to_uuid'
for prop in UPPERCASE_PROPS:
if prop == prop.lower():
continue
ret['compose'][prop.lower()] = prop
ret['with_nested_properties'] = True
# ret['property_name_format'] = 'lower_case' # only dacrystal/topic/vmware-inventory-plugin-property-format
# process custom options
vmware_opts = dict(inventory_source.source_vars_dict.items())
if inventory_source.instance_filters:
vmware_opts.setdefault('host_filters', inventory_source.instance_filters)
if inventory_source.group_by:
vmware_opts.setdefault('groupby_patterns', inventory_source.group_by)
alias_pattern = vmware_opts.get('alias_pattern')
if alias_pattern:
ret.setdefault('hostnames', [])
for alias in alias_pattern.split(','): # make best effort
striped_alias = alias.replace('{', '').replace('}', '').strip() # make best effort
if not striped_alias:
continue
ret['hostnames'].append(striped_alias)
host_pattern = vmware_opts.get('host_pattern') # not working in script
if host_pattern:
stripped_hp = host_pattern.replace('{', '').replace('}', '').strip() # make best effort
ret['compose']['ansible_host'] = stripped_hp
ret['compose']['ansible_ssh_host'] = stripped_hp
host_filters = vmware_opts.get('host_filters')
if host_filters:
ret.setdefault('filters', [])
for hf in host_filters.split(','):
striped_hf = hf.replace('{', '').replace('}', '').strip() # make best effort
if not striped_hf:
continue
ret['filters'].append(striped_hf)
else:
# default behavior filters by power state
ret['filters'] = ['runtime.powerState == "poweredOn"']
groupby_patterns = vmware_opts.get('groupby_patterns')
ret.setdefault('keyed_groups', [])
if groupby_patterns:
for pattern in groupby_patterns.split(','):
stripped_pattern = pattern.replace('{', '').replace('}', '').strip() # make best effort
ret['keyed_groups'].append({
'prefix': '', 'separator': '',
'key': stripped_pattern
})
else:
# default groups from script
for entry in ('config.guestId', '"templates" if config.template else "guests"'):
ret['keyed_groups'].append({
'prefix': '', 'separator': '',
'key': entry
})
return ret
class openstack(PluginFileInjector):
plugin_name = 'openstack'
namespace = 'openstack'
collection = 'cloud'
def inventory_as_dict(self, inventory_source, private_data_dir):
def use_host_name_for_name(a_bool_maybe):
if not isinstance(a_bool_maybe, bool):
# Could be specified by user via "host" or "uuid"
return a_bool_maybe
elif a_bool_maybe:
return 'name' # plugin default
else:
return 'uuid'
ret = super(openstack, self).inventory_as_dict(inventory_source, private_data_dir)
ret['fail_on_errors'] = True
ret['expand_hostvars'] = True
ret['inventory_hostname'] = use_host_name_for_name(False)
# Note: mucking with defaults will break import integrity
# For the plugin, we need to use the same defaults as the old script
# or else imports will conflict. To find script defaults you have
# to read source code of the script.
#
# Script Defaults Plugin Defaults
# 'use_hostnames': False, 'name' (True)
# 'expand_hostvars': True, 'no' (False)
# 'fail_on_errors': True, 'no' (False)
#
# These are, yet again, different from ansible_variables in script logic
# but those are applied inconsistently
source_vars = inventory_source.source_vars_dict
for var_name in ['expand_hostvars', 'fail_on_errors']:
if var_name in source_vars:
ret[var_name] = source_vars[var_name]
if 'use_hostnames' in source_vars:
ret['inventory_hostname'] = use_host_name_for_name(source_vars['use_hostnames'])
return ret
class rhv(PluginFileInjector):
"""ovirt uses the custom credential templating, and that is all
"""
plugin_name = 'ovirt'
initial_version = '2.9'
namespace = 'ovirt'
collection = 'ovirt'
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(rhv, self).inventory_as_dict(inventory_source, private_data_dir)
ret['ovirt_insecure'] = False # Default changed from script
# TODO: process strict option upstream
ret['compose'] = {
'ansible_host': '(devices.values() | list)[0][0] if devices else None'
}
ret['keyed_groups'] = []
for key in ('cluster', 'status'):
ret['keyed_groups'].append({'prefix': key, 'separator': '_', 'key': key})
ret['keyed_groups'].append({'prefix': 'tag', 'separator': '_', 'key': 'tags'})
ret['ovirt_hostname_preference'] = ['name', 'fqdn']
source_vars = inventory_source.source_vars_dict
for key, value in source_vars.items():
if key == 'plugin':
continue
ret[key] = value
return ret
class satellite6(PluginFileInjector):
plugin_name = 'foreman'
namespace = 'theforeman'
collection = 'foreman'
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(satellite6, self).inventory_as_dict(inventory_source, private_data_dir)
ret['validate_certs'] = False
group_patterns = '[]'
group_prefix = 'foreman_'
want_hostcollections = False
want_ansible_ssh_host = False
want_facts = True
foreman_opts = inventory_source.source_vars_dict.copy()
for k, v in foreman_opts.items():
if k == 'satellite6_group_patterns' and isinstance(v, str):
group_patterns = v
elif k == 'satellite6_group_prefix' and isinstance(v, str):
group_prefix = v
elif k == 'satellite6_want_hostcollections' and isinstance(v, bool):
want_hostcollections = v
elif k == 'satellite6_want_ansible_ssh_host' and isinstance(v, bool):
want_ansible_ssh_host = v
elif k == 'satellite6_want_facts' and isinstance(v, bool):
want_facts = v
# add backwards support for ssl_verify
# plugin uses new option, validate_certs, instead
elif k == 'ssl_verify' and isinstance(v, bool):
ret['validate_certs'] = v
else:
ret[k] = str(v)
# Compatibility content
group_by_hostvar = {
"environment": {"prefix": "{}environment_".format(group_prefix),
"separator": "",
"key": "foreman['environment_name'] | lower | regex_replace(' ', '') | "
"regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')"},
"location": {"prefix": "{}location_".format(group_prefix),
"separator": "",
"key": "foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')"},
"organization": {"prefix": "{}organization_".format(group_prefix),
"separator": "",
"key": "foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')"},
"lifecycle_environment": {"prefix": "{}lifecycle_environment_".format(group_prefix),
"separator": "",
"key": "foreman['content_facet_attributes']['lifecycle_environment_name'] | "
"lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')"},
"content_view": {"prefix": "{}content_view_".format(group_prefix),
"separator": "",
"key": "foreman['content_facet_attributes']['content_view_name'] | "
"lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')"}
}
ret['legacy_hostvars'] = True # convert hostvar structure to the form used by the script
ret['want_params'] = True
ret['group_prefix'] = group_prefix
ret['want_hostcollections'] = want_hostcollections
ret['want_facts'] = want_facts
if want_ansible_ssh_host:
ret['compose'] = {'ansible_ssh_host': "foreman['ip6'] | default(foreman['ip'], true)"}
ret['keyed_groups'] = [group_by_hostvar[grouping_name] for grouping_name in group_by_hostvar]
def form_keyed_group(group_pattern):
"""
Converts foreman group_pattern to
inventory plugin keyed_group
e.g. {app_param}-{tier_param}-{dc_param}
becomes
"%s-%s-%s" | format(app_param, tier_param, dc_param)
"""
if type(group_pattern) is not str:
return None
params = re.findall('{[^}]*}', group_pattern)
if len(params) == 0:
return None
param_names = []
for p in params:
param_names.append(p[1:-1].strip()) # strip braces and space
# form keyed_group key by
# replacing curly braces with '%s'
# (for use with jinja's format filter)
key = group_pattern
for p in params:
key = key.replace(p, '%s', 1)
# apply jinja filter to key
key = '"{}" | format({})'.format(key, ', '.join(param_names))
keyed_group = {'key': key,
'separator': ''}
return keyed_group
try:
group_patterns = json.loads(group_patterns)
if type(group_patterns) is list:
for group_pattern in group_patterns:
keyed_group = form_keyed_group(group_pattern)
if keyed_group:
ret['keyed_groups'].append(keyed_group)
except json.JSONDecodeError:
logger.warning('Could not parse group_patterns. Expected JSON-formatted string, found: {}'
.format(group_patterns))
return ret
class tower(PluginFileInjector):
plugin_name = 'tower'
namespace = 'awx'
collection = 'awx'
def inventory_as_dict(self, inventory_source, private_data_dir):
ret = super(tower, self).inventory_as_dict(inventory_source, private_data_dir)
# Credentials injected as env vars, same as script
try:
# plugin can take an actual int type
identifier = int(inventory_source.instance_filters)
except ValueError:
# inventory_id could be a named URL
identifier = iri_to_uri(inventory_source.instance_filters)
ret['inventory_id'] = identifier
ret['include_metadata'] = True # used for license check
return ret
for cls in PluginFileInjector.__subclasses__():
FrozenInjectors[cls.__name__] = cls
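The loop above is the entire registration mechanism: every `PluginFileInjector` subclass defined in this module is discovered through Python's `__subclasses__()` introspection and stored in `FrozenInjectors` keyed by class name, with no explicit registry calls. A standalone sketch of the same pattern (class names here are illustrative, not the real injectors):

class BaseInjector:
    pass

class ec2(BaseInjector):
    pass

class gce(BaseInjector):
    pass

# collect every direct subclass into a name -> class registry
registry = {cls.__name__: cls for cls in BaseInjector.__subclasses__()}
assert set(registry) == {'ec2', 'gce'}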

View File

@@ -96,6 +96,10 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
help_text=_('Specify the type of credential you want to create. Refer '
'to the Ansible Tower documentation for details on each type.')
)
managed_by_tower = models.BooleanField(
default=False,
editable=False
)
organization = models.ForeignKey(
'Organization',
null=True,
@@ -331,6 +335,7 @@ class CredentialType(CommonModelNameNotUnique):
('insights', _('Insights')),
('external', _('External')),
('kubernetes', _('Kubernetes')),
('galaxy', _('Galaxy/Automation Hub')),
)
kind = models.CharField(
@@ -1173,6 +1178,38 @@ ManagedCredentialType(
)
ManagedCredentialType(
namespace='galaxy_api_token',
kind='galaxy',
name=ugettext_noop('Ansible Galaxy/Automation Hub API Token'),
inputs={
'fields': [{
'id': 'url',
'label': ugettext_noop('Galaxy Server URL'),
'type': 'string',
'help_text': ugettext_noop('The URL of the Galaxy instance to connect to.')
},{
'id': 'auth_url',
'label': ugettext_noop('Auth Server URL'),
'type': 'string',
'help_text': ugettext_noop(
'The URL of a Keycloak server token_endpoint, if using '
'SSO auth.'
)
},{
'id': 'token',
'label': ugettext_noop('API Token'),
'type': 'string',
'secret': True,
'help_text': ugettext_noop(
'A token to use for authentication against the Galaxy instance.'
)
}],
'required': ['url'],
}
)
class CredentialInputSource(PrimordialModel):
class Meta:
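Given the schema above (`url` required; `auth_url` and `token` optional, with the token stored encrypted), constructing such a credential looks roughly like the sketch below, which mirrors the test code further down; it assumes a configured AWX/Django environment and uses placeholder values:

from awx.main.models import Credential, CredentialType

galaxy_type = CredentialType.defaults['galaxy_api_token']()
galaxy_type.save()
cred = Credential.objects.create(
    credential_type=galaxy_type,
    name='Example Galaxy',
    inputs={
        'url': 'https://galaxy.ansible.com/',  # required
        'token': 'abc123',                     # optional; stored encrypted
    },
)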

View File

@@ -4,6 +4,8 @@ import datetime
import logging
from collections import defaultdict
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from django.db import models, DatabaseError, connection
from django.utils.dateparse import parse_datetime
from django.utils.text import Truncator
@@ -57,7 +59,18 @@ def create_host_status_counts(event_data):
return dict(host_status_counts)
MINIMAL_EVENTS = set([
'playbook_on_play_start', 'playbook_on_task_start',
'playbook_on_stats', 'EOF'
])
def emit_event_detail(event):
if (
settings.UI_LIVE_UPDATES_ENABLED is False and
event.event not in MINIMAL_EVENTS
):
return
cls = event.__class__
relation = {
JobEvent: 'job_id',
@@ -337,41 +350,47 @@ class BasePlaybookEvent(CreatedModifiedModel):
pass
if isinstance(self, JobEvent):
hostnames = self._hostnames()
self._update_host_summary_from_stats(set(hostnames))
if self.job.inventory:
try:
self.job.inventory.update_computed_fields()
except DatabaseError:
logger.exception('Computed fields database error saving event {}'.format(self.pk))
try:
job = self.job
except ObjectDoesNotExist:
job = None
if job:
hostnames = self._hostnames()
self._update_host_summary_from_stats(set(hostnames))
if job.inventory:
try:
job.inventory.update_computed_fields()
except DatabaseError:
logger.exception('Computed fields database error saving event {}'.format(self.pk))
# find parent links and propagate changed=T and failed=T
changed = self.job.job_events.filter(changed=True).exclude(parent_uuid=None).only('parent_uuid').values_list('parent_uuid', flat=True).distinct() # noqa
failed = self.job.job_events.filter(failed=True).exclude(parent_uuid=None).only('parent_uuid').values_list('parent_uuid', flat=True).distinct() # noqa
# find parent links and propagate changed=T and failed=T
changed = job.job_events.filter(changed=True).exclude(parent_uuid=None).only('parent_uuid').values_list('parent_uuid', flat=True).distinct() # noqa
failed = job.job_events.filter(failed=True).exclude(parent_uuid=None).only('parent_uuid').values_list('parent_uuid', flat=True).distinct() # noqa
JobEvent.objects.filter(
job_id=self.job_id, uuid__in=changed
).update(changed=True)
JobEvent.objects.filter(
job_id=self.job_id, uuid__in=failed
).update(failed=True)
JobEvent.objects.filter(
job_id=self.job_id, uuid__in=changed
).update(changed=True)
JobEvent.objects.filter(
job_id=self.job_id, uuid__in=failed
).update(failed=True)
# send success/failure notifications when we've finished handling the playbook_on_stats event
from awx.main.tasks import handle_success_and_failure_notifications # circular import
# send success/failure notifications when we've finished handling the playbook_on_stats event
from awx.main.tasks import handle_success_and_failure_notifications # circular import
def _send_notifications():
handle_success_and_failure_notifications.apply_async([self.job.id])
connection.on_commit(_send_notifications)
def _send_notifications():
handle_success_and_failure_notifications.apply_async([job.id])
connection.on_commit(_send_notifications)
for field in ('playbook', 'play', 'task', 'role'):
value = force_text(event_data.get(field, '')).strip()
if value != getattr(self, field):
setattr(self, field, value)
analytics_logger.info(
'Event data saved.',
extra=dict(python_objects=dict(job_event=self))
)
if settings.LOG_AGGREGATOR_ENABLED:
analytics_logger.info(
'Event data saved.',
extra=dict(python_objects=dict(job_event=self))
)
@classmethod
def create_from_data(cls, **kwargs):
@@ -484,7 +503,11 @@ class JobEvent(BasePlaybookEvent):
def _update_host_summary_from_stats(self, hostnames):
with ignore_inventory_computed_fields():
if not self.job or not self.job.inventory:
try:
if not self.job or not self.job.inventory:
logger.info('Event {} missing job or inventory, host summaries not updated'.format(self.pk))
return
except ObjectDoesNotExist:
logger.info('Event {} missing job or inventory, host summaries not updated'.format(self.pk))
return
job = self.job
@@ -520,13 +543,21 @@ class JobEvent(BasePlaybookEvent):
(summary['host_id'], summary['id'])
for summary in JobHostSummary.objects.filter(job_id=job.id).values('id', 'host_id')
)
updated_hosts = set()
for h in all_hosts:
# if the hostname *shows up* in the playbook_on_stats event
if h.name in hostnames:
h.last_job_id = job.id
updated_hosts.add(h)
if h.id in host_mapping:
h.last_job_host_summary_id = host_mapping[h.id]
Host.objects.bulk_update(all_hosts, ['last_job_id', 'last_job_host_summary_id'])
updated_hosts.add(h)
Host.objects.bulk_update(
list(updated_hosts),
['last_job_id', 'last_job_host_summary_id'],
batch_size=100
)
@property
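The change above also narrows the write set: only hosts that actually appeared in the stats event are collected into `updated_hosts`, and `bulk_update` then issues batched UPDATE statements instead of one query per host. The Django API in isolation (a hedged sketch; assumes a configured Django project with a `Host` model carrying these fields):

# update one field on many rows in batches of 100
hosts = list(Host.objects.filter(inventory=inventory))
for h in hosts:
    h.last_job_id = job_id
Host.objects.bulk_update(hosts, ['last_job_id'], batch_size=100)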

View File

@@ -12,6 +12,7 @@ from django.utils.translation import ugettext_lazy as _
from django.conf import settings
from django.utils.timezone import now, timedelta
import redis
from solo.models import SingletonModel
from awx import __version__ as awx_application_version
@@ -23,7 +24,7 @@ from awx.main.models.unified_jobs import UnifiedJob
from awx.main.utils import get_cpu_capacity, get_mem_capacity, get_system_task_capacity
from awx.main.models.mixins import RelatedJobsMixin
__all__ = ('Instance', 'InstanceGroup', 'TowerScheduleState', 'TowerAnalyticsState')
__all__ = ('Instance', 'InstanceGroup', 'TowerScheduleState')
class HasPolicyEditsMixin(HasEditsMixin):
@@ -152,6 +153,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.capacity = get_system_task_capacity(self.capacity_adjustment)
else:
self.capacity = 0
try:
# if redis is down for some reason, that means we can't persist
# playbook event data; we should consider this a zero capacity event
redis.Redis.from_url(settings.BROKER_URL).ping()
except redis.ConnectionError:
self.capacity = 0
self.cpu = cpu[0]
self.memory = mem[0]
self.cpu_capacity = cpu[1]
@@ -287,10 +296,6 @@ class TowerScheduleState(SingletonModel):
schedule_last_run = models.DateTimeField(auto_now_add=True)
class TowerAnalyticsState(SingletonModel):
last_run = models.DateTimeField(auto_now_add=True)
def schedule_policy_task():
from awx.main.tasks import apply_cluster_membership_policies
connection.on_commit(lambda: apply_cluster_membership_policies.apply_async())
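The new Redis probe treats an unreachable broker as zero capacity, since an instance that cannot persist playbook events should not be assigned work. The pattern in isolation (a minimal sketch assuming the `redis` package and a broker URL):

import redis

def effective_capacity(raw_capacity, broker_url):
    # an instance whose broker is down cannot record events,
    # so report zero capacity rather than accept doomed work
    try:
        redis.Redis.from_url(broker_url).ping()
    except redis.ConnectionError:
        return 0
    return raw_capacity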

File diff suppressed because it is too large

View File

@@ -45,6 +45,12 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
blank=True,
through='OrganizationInstanceGroupMembership'
)
galaxy_credentials = OrderedManyToManyField(
'Credential',
blank=True,
through='OrganizationGalaxyCredentialMembership',
related_name='%(class)s_galaxy_credentials'
)
max_hosts = models.PositiveIntegerField(
blank=True,
default=0,
@@ -108,6 +114,23 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
return UnifiedJob.objects.non_polymorphic().filter(organization=self)
class OrganizationGalaxyCredentialMembership(models.Model):
organization = models.ForeignKey(
'Organization',
on_delete=models.CASCADE
)
credential = models.ForeignKey(
'Credential',
on_delete=models.CASCADE
)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)
class Team(CommonModelNameNotUnique, ResourceMixin):
'''
A team is a group of users that work on common projects.
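The explicit through model with its indexed `position` column is what makes `galaxy_credentials` an ordered relation: the order in which credentials are associated is persisted and later determines Galaxy server precedence. Intended usage, sketched (assumes a configured AWX environment; variable names are placeholders):

org.galaxy_credentials.add(primary_hub)    # stored at position 0
org.galaxy_credentials.add(public_galaxy)  # stored at position 1
ordered = [c.name for c in org.galaxy_credentials.all()]  # position order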

View File

@@ -205,10 +205,15 @@ class Schedule(PrimordialModel, LaunchTimeConfig):
'A valid TZID must be provided (e.g., America/New_York)'
)
if fast_forward and ('MINUTELY' in rrule or 'HOURLY' in rrule):
if (
fast_forward and
('MINUTELY' in rrule or 'HOURLY' in rrule) and
'COUNT=' not in rrule
):
try:
first_event = x[0]
if first_event < now():
# If the first event was over a week ago...
if (now() - first_event).days > 7:
# hourly/minutely rrules with far-past DTSTART values
# are *really* slow to precompute
# start *from* one week ago to speed things up drastically
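The tightened guard skips the fast-forward whenever the rule carries `COUNT=`, because moving `DTSTART` forward would silently change which occurrences fall inside the count. The underlying cost problem is that dateutil enumerates minutely/hourly rules from `DTSTART`, so a years-old start forces millions of iterations; for an unbounded rule, starting one week back yields the same future occurrences. A standalone sketch with python-dateutil:

from datetime import datetime, timedelta
from dateutil.rrule import rrule, MINUTELY

now = datetime.utcnow().replace(second=0, microsecond=0)
# a per-minute rule with DTSTART in 2015 would walk millions of
# occurrences to reach "now"; one week back is cheap and, for an
# unbounded rule, produces identical occurrences going forward
fast = rrule(MINUTELY, dtstart=now - timedelta(days=7))
print(fast.after(now))  # the next minute boundary after "now"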

View File

@@ -1,8 +1,6 @@
import re
import urllib.parse as urlparse
from django.conf import settings
REPLACE_STR = '$encrypted$'
@@ -12,12 +10,6 @@ class UriCleaner(object):
@staticmethod
def remove_sensitive(cleartext):
# exclude_list contains the items that will _not_ be redacted
exclude_list = [settings.PUBLIC_GALAXY_SERVER['url']]
if settings.PRIMARY_GALAXY_URL:
exclude_list += [settings.PRIMARY_GALAXY_URL]
if settings.FALLBACK_GALAXY_SERVERS:
exclude_list += [server['url'] for server in settings.FALLBACK_GALAXY_SERVERS]
redactedtext = cleartext
text_index = 0
while True:
@@ -25,10 +17,6 @@ class UriCleaner(object):
if not match:
break
uri_str = match.group(1)
# Do not redact items from the exclude list
if any(uri_str.startswith(exclude_uri) for exclude_uri in exclude_list):
text_index = match.start() + len(uri_str)
continue
try:
# May raise a ValueError if invalid URI for one reason or another
o = urlparse.urlsplit(uri_str)
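With the Galaxy exclude list gone, `remove_sensitive` now redacts credentials in every URI-looking substring it finds. The core per-URI transformation, sketched standalone (the real method also handles usernames and malformed URIs):

import urllib.parse as urlparse

REPLACE_STR = '$encrypted$'

def redact_uri(uri_str):
    # replace any password embedded in the URI's userinfo section
    o = urlparse.urlsplit(uri_str)
    if o.password:
        return uri_str.replace(o.password, REPLACE_STR)
    return uri_str

print(redact_uri('https://user:s3cret@git.example.com/repo.git'))
# -> https://user:$encrypted$@git.example.com/repo.git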

View File

@@ -12,6 +12,7 @@ import random
from django.db import transaction, connection
from django.utils.translation import ugettext_lazy as _, gettext_noop
from django.utils.timezone import now as tz_now
from django.conf import settings
# AWX
from awx.main.dispatch.reaper import reap_job
@@ -45,6 +46,12 @@ class TaskManager():
def __init__(self):
self.graph = dict()
# start task limit indicates how many pending jobs can be started on this
# .schedule() run. Starting jobs is expensive, and there is code in place to reap
# the task manager after 5 minutes. At scale, the task manager can easily take more than
# 5 minutes to start pending jobs. If this limit is reached, pending jobs
# will no longer be started and will be started on the next task manager cycle.
self.start_task_limit = settings.START_TASK_LIMIT
for rampart_group in InstanceGroup.objects.prefetch_related('instances'):
self.graph[rampart_group.name] = dict(graph=DependencyGraph(rampart_group.name),
capacity_total=rampart_group.capacity,
@@ -189,6 +196,10 @@ class TaskManager():
return result
def start_task(self, task, rampart_group, dependent_tasks=None, instance=None):
self.start_task_limit -= 1
if self.start_task_limit == 0:
# schedule another run immediately after this task manager
schedule_task_manager()
from awx.main.tasks import handle_work_error, handle_work_success
dependent_tasks = dependent_tasks or []
@@ -448,6 +459,8 @@ class TaskManager():
def process_pending_tasks(self, pending_tasks):
running_workflow_templates = set([wf.unified_job_template_id for wf in self.get_running_workflow_jobs()])
for task in pending_tasks:
if self.start_task_limit <= 0:
break
if self.is_job_blocked(task):
logger.debug("{} is blocked from running".format(task.log_format))
continue
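Taken together, the three hunks above implement a per-cycle throttle: each `.schedule()` run may start at most `START_TASK_LIMIT` jobs, and hitting the limit immediately queues another task-manager run for the remainder. The control flow in isolation (a sketch with hypothetical names):

def process_pending(pending_tasks, start_task_limit, schedule_task_manager):
    # start at most `start_task_limit` tasks this cycle; the rest wait
    # for the next cycle so one run cannot blow its five-minute budget
    started = []
    for task in pending_tasks:
        if start_task_limit <= 0:
            break
        start_task_limit -= 1
        if start_task_limit == 0:
            schedule_task_manager()  # queue the next cycle right away
        started.append(task)
    return started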

View File

@@ -23,6 +23,7 @@ import fcntl
from pathlib import Path
from uuid import uuid4
import urllib.parse as urlparse
import shlex
# Django
from django.conf import settings
@@ -50,8 +51,9 @@ import ansible_runner
# AWX
from awx import __version__ as awx_application_version
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV, GALAXY_SERVER_FIELDS
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV
from awx.main.access import access_registry
from awx.main.analytics import all_collectors, expensive_collectors
from awx.main.redact import UriCleaner
from awx.main.models import (
Schedule, TowerScheduleState, Instance, InstanceGroup,
@@ -72,7 +74,7 @@ from awx.main.utils import (update_scm_url,
ignore_inventory_group_removal, extract_ansible_vars, schedule_task_manager,
get_awx_version)
from awx.main.utils.ansible import read_ansible_config
from awx.main.utils.common import _get_ansible_version, get_custom_venv_choices
from awx.main.utils.common import get_custom_venv_choices
from awx.main.utils.external_logging import reconfigure_rsyslog
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
@@ -354,6 +356,26 @@ def send_notifications(notification_list, job_id=None):
@task(queue=get_local_queuename)
def gather_analytics():
def _gather_and_ship(subset, since, until):
tgzfiles = []
try:
tgzfiles = analytics.gather(subset=subset, since=since, until=until)
# empty analytics without raising an exception is not an error
if not tgzfiles:
return True
logger.info('Gathered analytics from {} to {}: {}'.format(since, until, tgzfiles))
for tgz in tgzfiles:
analytics.ship(tgz)
except Exception:
logger.exception('Error gathering and sending analytics from {} to {}.'.format(since, until))
return False
finally:
if tgzfiles:
for tgz in tgzfiles:
if os.path.exists(tgz):
os.remove(tgz)
return True
from awx.conf.models import Setting
from rest_framework.fields import DateTimeField
if not settings.INSIGHTS_TRACKING_STATE:
@@ -372,16 +394,29 @@ def gather_analytics():
if acquired is False:
logger.debug('Not gathering analytics, another task holds lock')
return
try:
tgz = analytics.gather()
if not tgz:
return
logger.info('gathered analytics: {}'.format(tgz))
analytics.ship(tgz)
settings.AUTOMATION_ANALYTICS_LAST_GATHER = gather_time
finally:
if os.path.exists(tgz):
os.remove(tgz)
subset = list(all_collectors().keys())
incremental_collectors = []
for collector in expensive_collectors():
if collector in subset:
subset.remove(collector)
incremental_collectors.append(collector)
# Cap gathering at 4 weeks of data if there has been no previous gather
since = last_time or (gather_time - timedelta(weeks=4))
if incremental_collectors:
start = since
until = None
while start < gather_time:
until = start + timedelta(hours=4)
if until > gather_time:
until = gather_time
if not _gather_and_ship(incremental_collectors, since=start, until=until):
break
start = until
settings.AUTOMATION_ANALYTICS_LAST_GATHER = until
if subset:
_gather_and_ship(subset, since=since, until=gather_time)
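The rewritten gatherer splits expensive collectors into fixed four-hour windows (capped at four weeks of backlog), ships each window separately, and only advances `AUTOMATION_ANALYTICS_LAST_GATHER` past windows that shipped, so a failure resumes from the last good boundary. The windowing arithmetic in isolation (standalone sketch):

from datetime import datetime, timedelta

def four_hour_windows(since, until):
    # yield consecutive [start, end) windows no longer than four hours
    start = since
    while start < until:
        end = min(start + timedelta(hours=4), until)
        yield start, end
        start = end

print(list(four_hour_windows(datetime(2020, 9, 1, 0), datetime(2020, 9, 1, 10))))
# three windows: 00:00-04:00, 04:00-08:00, 08:00-10:00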
@task(queue=get_local_queuename)
@@ -840,25 +875,12 @@ class BaseTask(object):
logger.error('Failed to update %s after %d retries.',
self.model._meta.object_name, _attempt)
def get_ansible_version(self, instance):
if not hasattr(self, '_ansible_version'):
self._ansible_version = _get_ansible_version(
ansible_path=self.get_path_to_ansible(instance, executable='ansible'))
return self._ansible_version
def get_path_to(self, *args):
'''
Return absolute path relative to this file.
'''
return os.path.abspath(os.path.join(os.path.dirname(__file__), *args))
def get_path_to_ansible(self, instance, executable='ansible-playbook', **kwargs):
venv_path = getattr(instance, 'ansible_virtualenv_path', settings.ANSIBLE_VENV_PATH)
venv_exe = os.path.join(venv_path, 'bin', executable)
if os.path.exists(venv_exe):
return venv_exe
return shutil.which(executable)
def build_private_data(self, instance, private_data_dir):
'''
Return SSH private key data (only if stored in DB as ssh_key_data).
@@ -1484,6 +1506,8 @@ class BaseTask(object):
self.instance.job_explanation = "Job terminated due to timeout"
status = 'failed'
extra_update_fields['job_explanation'] = self.instance.job_explanation
# ensure failure notification sends even if playbook_on_stats event is not triggered
handle_success_and_failure_notifications.apply_async([self.instance.job.id])
except InvalidVirtualenvError as e:
extra_update_fields['job_explanation'] = e.message
@@ -1630,21 +1654,10 @@ class RunJob(BaseTask):
return passwords
def add_ansible_venv(self, venv_path, env, isolated=False):
super(RunJob, self).add_ansible_venv(venv_path, env, isolated=isolated)
# Add awx/lib to PYTHONPATH.
env['PYTHONPATH'] = env.get('PYTHONPATH', '') + self.get_path_to('..', 'lib') + ':'
def build_env(self, job, private_data_dir, isolated=False, private_data_files=None):
'''
Build environment dictionary for ansible-playbook.
'''
plugin_dir = self.get_path_to('..', 'plugins', 'callback')
plugin_dirs = [plugin_dir]
if hasattr(settings, 'AWX_ANSIBLE_CALLBACK_PLUGINS') and \
settings.AWX_ANSIBLE_CALLBACK_PLUGINS:
plugin_dirs.extend(settings.AWX_ANSIBLE_CALLBACK_PLUGINS)
plugin_path = ':'.join(plugin_dirs)
env = super(RunJob, self).build_env(job, private_data_dir,
isolated=isolated,
private_data_files=private_data_files)
@@ -1655,20 +1668,13 @@ class RunJob(BaseTask):
# callbacks to work.
env['JOB_ID'] = str(job.pk)
env['INVENTORY_ID'] = str(job.inventory.pk)
if job.use_fact_cache:
library_path = env.get('ANSIBLE_LIBRARY')
env['ANSIBLE_LIBRARY'] = ':'.join(
filter(None, [
library_path,
self.get_path_to('..', 'plugins', 'library')
])
)
if job.project:
env['PROJECT_REVISION'] = job.project.scm_revision
env['ANSIBLE_RETRY_FILES_ENABLED'] = "False"
env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
if not isolated:
env['ANSIBLE_CALLBACK_PLUGINS'] = plugin_path
if hasattr(settings, 'AWX_ANSIBLE_CALLBACK_PLUGINS') and settings.AWX_ANSIBLE_CALLBACK_PLUGINS:
env['ANSIBLE_CALLBACK_PLUGINS'] = ':'.join(settings.AWX_ANSIBLE_CALLBACK_PLUGINS)
env['AWX_HOST'] = settings.TOWER_URL_BASE
# Create a directory for ControlPath sockets that is unique to each
@@ -2043,38 +2049,27 @@ class RunProjectUpdate(BaseTask):
# like https://github.com/ansible/ansible/issues/30064
env['TMP'] = settings.AWX_PROOT_BASE_PATH
env['PROJECT_UPDATE_ID'] = str(project_update.pk)
env['ANSIBLE_CALLBACK_PLUGINS'] = self.get_path_to('..', 'plugins', 'callback')
if settings.GALAXY_IGNORE_CERTS:
env['ANSIBLE_GALAXY_IGNORE'] = True
# Set up the public Galaxy server, if enabled
galaxy_configured = False
if settings.PUBLIC_GALAXY_ENABLED:
galaxy_servers = [settings.PUBLIC_GALAXY_SERVER] # static setting
else:
galaxy_configured = True
galaxy_servers = []
# Set up fallback Galaxy servers, if configured
if settings.FALLBACK_GALAXY_SERVERS:
galaxy_configured = True
galaxy_servers = settings.FALLBACK_GALAXY_SERVERS + galaxy_servers
# Set up the primary Galaxy server, if configured
if settings.PRIMARY_GALAXY_URL:
galaxy_configured = True
galaxy_servers = [{'id': 'primary_galaxy'}] + galaxy_servers
for key in GALAXY_SERVER_FIELDS:
value = getattr(settings, 'PRIMARY_GALAXY_{}'.format(key.upper()))
if value:
galaxy_servers[0][key] = value
if galaxy_configured:
for server in galaxy_servers:
for key in GALAXY_SERVER_FIELDS:
if not server.get(key):
continue
env_key = ('ANSIBLE_GALAXY_SERVER_{}_{}'.format(server.get('id', 'unnamed'), key)).upper()
env[env_key] = server[key]
if galaxy_servers:
# now set the precedence of galaxy servers
env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join([server.get('id', 'unnamed') for server in galaxy_servers])
# build out env vars for Galaxy credentials (in order)
galaxy_server_list = []
if project_update.project.organization:
for i, cred in enumerate(
project_update.project.organization.galaxy_credentials.all()
):
env[f'ANSIBLE_GALAXY_SERVER_SERVER{i}_URL'] = cred.get_input('url')
auth_url = cred.get_input('auth_url', default=None)
token = cred.get_input('token', default=None)
if token:
env[f'ANSIBLE_GALAXY_SERVER_SERVER{i}_TOKEN'] = token
if auth_url:
env[f'ANSIBLE_GALAXY_SERVER_SERVER{i}_AUTH_URL'] = auth_url
galaxy_server_list.append(f'server{i}')
if galaxy_server_list:
env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join(galaxy_server_list)
return env
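The new loop derives ansible-galaxy's server configuration from the organization's ordered Galaxy credentials: each credential becomes a `serverN` entry and `ANSIBLE_GALAXY_SERVER_LIST` pins the precedence. For two credentials the resulting environment would look roughly like this (URLs and token are placeholders):

env = {
    'ANSIBLE_GALAXY_SERVER_SERVER0_URL': 'https://hub.example.com/api/galaxy/',
    'ANSIBLE_GALAXY_SERVER_SERVER0_TOKEN': 'abc123',
    'ANSIBLE_GALAXY_SERVER_SERVER1_URL': 'https://galaxy.ansible.com/',
    'ANSIBLE_GALAXY_SERVER_LIST': 'server0,server1',
}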
def _build_scm_url_extra_vars(self, project_update):
@@ -2147,6 +2142,19 @@ class RunProjectUpdate(BaseTask):
raise RuntimeError('Could not determine a revision to run from project.')
elif not scm_branch:
scm_branch = {'hg': 'tip'}.get(project_update.scm_type, 'HEAD')
galaxy_creds_are_defined = (
project_update.project.organization and
project_update.project.organization.galaxy_credentials.exists()
)
if not galaxy_creds_are_defined and (
settings.AWX_ROLES_ENABLED or settings.AWX_COLLECTIONS_ENABLED
):
logger.debug(
'Galaxy role/collection syncing is enabled, but no '
f'credentials are configured for {project_update.project.organization}.'
)
extra_vars.update({
'projects_root': settings.PROJECTS_ROOT.rstrip('/'),
'local_path': os.path.basename(project_update.project.local_path),
@@ -2157,8 +2165,8 @@ class RunProjectUpdate(BaseTask):
'scm_url': scm_url,
'scm_branch': scm_branch,
'scm_clean': project_update.scm_clean,
'roles_enabled': settings.AWX_ROLES_ENABLED,
'collections_enabled': settings.AWX_COLLECTIONS_ENABLED,
'roles_enabled': galaxy_creds_are_defined and settings.AWX_ROLES_ENABLED,
'collections_enabled': galaxy_creds_are_defined and settings.AWX_COLLECTIONS_ENABLED,
})
# apply custom refspec from user for PR refs and the like
if project_update.scm_refspec:
@@ -2169,7 +2177,7 @@ class RunProjectUpdate(BaseTask):
self._write_extra_vars_file(private_data_dir, extra_vars)
def build_cwd(self, project_update, private_data_dir):
return self.get_path_to('..', 'playbooks')
return os.path.join(private_data_dir, 'project')
def build_playbook_path_relative_to_cwd(self, project_update, private_data_dir):
return os.path.join('project_update.yml')
@@ -2310,6 +2318,12 @@ class RunProjectUpdate(BaseTask):
shutil.rmtree(stage_path)
os.makedirs(stage_path) # presence of empty cache indicates lack of roles or collections
# the project update playbook is not in a git repo, but uses a vendoring directory
# to be consistent with the ansible-runner model,
# that is moved into the runner project folder here
awx_playbooks = self.get_path_to('..', 'playbooks')
copy_tree(awx_playbooks, os.path.join(private_data_dir, 'project'))
@staticmethod
def clear_project_cache(cache_dir, keep_value):
if os.path.isdir(cache_dir):
@@ -2449,7 +2463,7 @@ class RunInventoryUpdate(BaseTask):
@property
def proot_show_paths(self):
return [self.get_path_to('..', 'plugins', 'inventory'), settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
return [settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
def build_private_data(self, inventory_update, private_data_dir):
"""
@@ -2467,7 +2481,7 @@ class RunInventoryUpdate(BaseTask):
If no private data is needed, return None.
"""
if inventory_update.source in InventorySource.injectors:
injector = InventorySource.injectors[inventory_update.source](self.get_ansible_version(inventory_update))
injector = InventorySource.injectors[inventory_update.source]()
return injector.build_private_data(inventory_update, private_data_dir)
def build_env(self, inventory_update, private_data_dir, isolated, private_data_files=None):
@@ -2495,7 +2509,7 @@ class RunInventoryUpdate(BaseTask):
injector = None
if inventory_update.source in InventorySource.injectors:
injector = InventorySource.injectors[inventory_update.source](self.get_ansible_version(inventory_update))
injector = InventorySource.injectors[inventory_update.source]()
if injector is not None:
env = injector.build_env(inventory_update, env, private_data_dir, private_data_files)
@@ -2567,23 +2581,18 @@ class RunInventoryUpdate(BaseTask):
args.extend(['--venv', inventory_update.ansible_virtualenv_path])
src = inventory_update.source
# Add several options to the shell arguments based on the
# inventory-source-specific setting in the AWX configuration.
# These settings are "per-source"; it's entirely possible that
# they will be different between cloud providers if an AWX user
# actively uses more than one.
if getattr(settings, '%s_ENABLED_VAR' % src.upper(), False):
args.extend(['--enabled-var',
getattr(settings, '%s_ENABLED_VAR' % src.upper())])
if getattr(settings, '%s_ENABLED_VALUE' % src.upper(), False):
args.extend(['--enabled-value',
getattr(settings, '%s_ENABLED_VALUE' % src.upper())])
if getattr(settings, '%s_GROUP_FILTER' % src.upper(), False):
args.extend(['--group-filter',
getattr(settings, '%s_GROUP_FILTER' % src.upper())])
if getattr(settings, '%s_HOST_FILTER' % src.upper(), False):
args.extend(['--host-filter',
getattr(settings, '%s_HOST_FILTER' % src.upper())])
if inventory_update.enabled_var:
args.extend(['--enabled-var', shlex.quote(inventory_update.enabled_var)])
args.extend(['--enabled-value', shlex.quote(inventory_update.enabled_value)])
else:
if getattr(settings, '%s_ENABLED_VAR' % src.upper(), False):
args.extend(['--enabled-var',
getattr(settings, '%s_ENABLED_VAR' % src.upper())])
if getattr(settings, '%s_ENABLED_VALUE' % src.upper(), False):
args.extend(['--enabled-value',
getattr(settings, '%s_ENABLED_VALUE' % src.upper())])
if inventory_update.host_filter:
args.extend(['--host-filter', shlex.quote(inventory_update.host_filter)])
if getattr(settings, '%s_EXCLUDE_EMPTY_GROUPS' % src.upper()):
args.append('--exclude-empty-groups')
if getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper(), False):
@@ -2613,7 +2622,7 @@ class RunInventoryUpdate(BaseTask):
injector = None
if inventory_update.source in InventorySource.injectors:
injector = InventorySource.injectors[src](self.get_ansible_version(inventory_update))
injector = InventorySource.injectors[src]()
if injector is not None:
content = injector.inventory_contents(inventory_update, private_data_dir)
@@ -2756,7 +2765,6 @@ class RunAdHocCommand(BaseTask):
'''
Build environment dictionary for ansible.
'''
plugin_dir = self.get_path_to('..', 'plugins', 'callback')
env = super(RunAdHocCommand, self).build_env(ad_hoc_command, private_data_dir,
isolated=isolated,
private_data_files=private_data_files)
@@ -2766,7 +2774,6 @@ class RunAdHocCommand(BaseTask):
env['AD_HOC_COMMAND_ID'] = str(ad_hoc_command.pk)
env['INVENTORY_ID'] = str(ad_hoc_command.inventory.pk)
env['INVENTORY_HOSTVARS'] = str(True)
env['ANSIBLE_CALLBACK_PLUGINS'] = plugin_dir
env['ANSIBLE_LOAD_CALLBACK_PLUGINS'] = '1'
env['ANSIBLE_SFTP_BATCH_MODE'] = 'False'
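A smaller hardening in the inventory-update hunks above is `shlex.quote` around user-supplied filter values, so a hostile `enabled_var` or `host_filter` cannot inject extra shell tokens into the command line. In isolation:

import shlex

host_filter = 'name=web*; rm -rf /'  # hostile user input
args = ['--host-filter', shlex.quote(host_filter)]
print(' '.join(args))
# --host-filter 'name=web*; rm -rf /'   (one inert argument)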

View File

@@ -1,43 +0,0 @@
conditional_groups:
azure: true
default_host_filters: []
exclude_host_filters:
- resource_group not in ['foo_resources', 'bar_resources']
- '"Creator" not in tags.keys()'
- tags["Creator"] != "jmarshall"
- '"peanutbutter" not in tags.keys()'
- tags["peanutbutter"] != "jelly"
- location not in ['southcentralus', 'westus']
fail_on_template_errors: false
hostvar_expressions:
ansible_host: private_ipv4_addresses[0]
computer_name: name
private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None
provisioning_state: provisioning_state | title
public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None
public_ip_id: public_ip_id if public_ip_id is defined else None
public_ip_name: public_ip_name if public_ip_name is defined else None
tags: tags if tags else None
type: resource_type
keyed_groups:
- key: location
prefix: ''
separator: ''
- key: tags.keys() | list if tags else []
prefix: ''
separator: ''
- key: security_group
prefix: ''
separator: ''
- key: resource_group
prefix: ''
separator: ''
- key: os_disk.operating_system_type
prefix: ''
separator: ''
- key: dict(tags.keys() | map("regex_replace", "^(.*)$", "\1_") | list | zip(tags.values() | list)) if tags else []
prefix: ''
separator: ''
plain_host_names: true
plugin: azure.azcollection.azure_rm
use_contrib_script_compatible_sanitization: true

View File

@@ -1,81 +0,0 @@
boto_profile: /tmp/my_boto_stuff
compose:
ansible_host: public_dns_name
ec2_account_id: owner_id
ec2_ami_launch_index: ami_launch_index | string
ec2_architecture: architecture
ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list))
ec2_client_token: client_token
ec2_dns_name: public_dns_name
ec2_ebs_optimized: ebs_optimized
ec2_eventsSet: events | default("")
ec2_group_name: placement.group_name
ec2_hypervisor: hypervisor
ec2_id: instance_id
ec2_image_id: image_id
ec2_instance_profile: iam_instance_profile | default("")
ec2_instance_type: instance_type
ec2_ip_address: public_ip_address
ec2_kernel: kernel_id | default("")
ec2_key_name: key_name
ec2_launch_time: launch_time | regex_replace(" ", "T") | regex_replace("(\+)(\d\d):(\d)(\d)$", ".\g<2>\g<3>Z")
ec2_monitored: monitoring.state in ['enabled', 'pending']
ec2_monitoring_state: monitoring.state
ec2_persistent: persistent | default(false)
ec2_placement: placement.availability_zone
ec2_platform: platform | default("")
ec2_private_dns_name: private_dns_name
ec2_private_ip_address: private_ip_address
ec2_public_dns_name: public_dns_name
ec2_ramdisk: ramdisk_id | default("")
ec2_reason: state_transition_reason
ec2_region: placement.region
ec2_requester_id: requester_id | default("")
ec2_root_device_name: root_device_name
ec2_root_device_type: root_device_type
ec2_security_group_ids: security_groups | map(attribute='group_id') | list | join(',')
ec2_security_group_names: security_groups | map(attribute='group_name') | list | join(',')
ec2_sourceDestCheck: source_dest_check | default(false) | lower | string
ec2_spot_instance_request_id: spot_instance_request_id | default("")
ec2_state: state.name
ec2_state_code: state.code
ec2_state_reason: state_reason.message if state_reason is defined else ""
ec2_subnet_id: subnet_id | default("")
ec2_tag_Name: tags.Name
ec2_virtualization_type: virtualization_type
ec2_vpc_id: vpc_id | default("")
filters:
instance-state-name:
- running
groups:
ec2: true
hostnames:
- dns-name
iam_role_arn: arn:aws:iam::123456789012:role/test-role
keyed_groups:
- key: placement.availability_zone
parent_group: zones
prefix: ''
separator: ''
- key: instance_type | regex_replace("[^A-Za-z0-9\_]", "_")
parent_group: types
prefix: type
- key: placement.region
parent_group: regions
prefix: ''
separator: ''
- key: dict(tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list | zip(tags.values() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list))
parent_group: tags
prefix: tag
- key: tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
parent_group: tags
prefix: tag
- key: placement.availability_zone
parent_group: '{{ placement.region }}'
prefix: ''
separator: ''
plugin: amazon.aws.aws_ec2
regions:
- us-east-2
- ap-south-1
use_contrib_script_compatible_sanitization: true

View File

@@ -1,50 +0,0 @@
auth_kind: serviceaccount
compose:
ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
gce_description: description if description else None
gce_id: id
gce_image: image
gce_machine_type: machineType
gce_metadata: metadata.get("items", []) | items2dict(key_name="key", value_name="value")
gce_name: name
gce_network: networkInterfaces[0].network.name
gce_private_ip: networkInterfaces[0].networkIP
gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None)
gce_status: status
gce_subnetwork: networkInterfaces[0].subnetwork.name
gce_tags: tags.get("items", [])
gce_zone: zone
hostnames:
- name
- public_ip
- private_ip
keyed_groups:
- key: gce_subnetwork
prefix: network
- key: gce_private_ip
prefix: ''
separator: ''
- key: gce_public_ip
prefix: ''
separator: ''
- key: machineType
prefix: ''
separator: ''
- key: zone
prefix: ''
separator: ''
- key: gce_tags
prefix: tag
- key: status | lower
prefix: status
- key: image
prefix: ''
separator: ''
plugin: google.cloud.gcp_compute
projects:
- fooo
retrieve_image_info: true
use_contrib_script_compatible_sanitization: true
zones:
- us-east4-a
- us-west1-b

View File

@@ -1,7 +1,3 @@
ansible:
expand_hostvars: true
fail_on_errors: true
use_hostnames: false
clouds:
devstack:
auth:
@@ -11,5 +7,5 @@ clouds:
project_domain_name: fooo
project_name: fooo
username: fooo
private: false
private: true
verify: false

View File

@@ -1,4 +0,0 @@
expand_hostvars: true
fail_on_errors: true
inventory_hostname: uuid
plugin: openstack.cloud.openstack

View File

@@ -1,20 +0,0 @@
base_source_var: value_of_var
compose:
ansible_host: (devices.values() | list)[0][0] if devices else None
groups:
dev: '"dev" in tags'
keyed_groups:
- key: cluster
prefix: cluster
separator: _
- key: status
prefix: status
separator: _
- key: tags
prefix: tag
separator: _
ovirt_hostname_preference:
- name
- fqdn
ovirt_insecure: false
plugin: ovirt.ovirt.ovirt

View File

@@ -1,30 +0,0 @@
base_source_var: value_of_var
compose:
ansible_ssh_host: foreman['ip6'] | default(foreman['ip'], true)
group_prefix: foo_group_prefix
keyed_groups:
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')
prefix: foo_group_prefixenvironment_
separator: ''
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foo_group_prefixlocation_
separator: ''
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foo_group_prefixorganization_
separator: ''
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foo_group_prefixlifecycle_environment_
separator: ''
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foo_group_prefixcontent_view_
separator: ''
- key: '"%s-%s-%s" | format(app, tier, color)'
separator: ''
- key: '"%s-%s" | format(app, color)'
separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
validate_certs: false
want_facts: true
want_hostcollections: true
want_params: true

View File

@@ -1,3 +0,0 @@
include_metadata: true
inventory_id: 42
plugin: awx.awx.tower

View File

@@ -1,55 +0,0 @@
compose:
ansible_host: guest.ipAddress
ansible_ssh_host: guest.ipAddress
ansible_uuid: 99999999 | random | to_uuid
availablefield: availableField
configissue: configIssue
configstatus: configStatus
customvalue: customValue
effectiverole: effectiveRole
guestheartbeatstatus: guestHeartbeatStatus
layoutex: layoutEx
overallstatus: overallStatus
parentvapp: parentVApp
recenttask: recentTask
resourcepool: resourcePool
rootsnapshot: rootSnapshot
triggeredalarmstate: triggeredAlarmState
filters:
- config.zoo == "DC0_H0_VM0"
hostnames:
- config.foo
keyed_groups:
- key: config.asdf
prefix: ''
separator: ''
plugin: community.vmware.vmware_vm_inventory
properties:
- availableField
- configIssue
- configStatus
- customValue
- datastore
- effectiveRole
- guestHeartbeatStatus
- layout
- layoutEx
- name
- network
- overallStatus
- parentVApp
- permission
- recentTask
- resourcePool
- rootSnapshot
- snapshot
- triggeredAlarmState
- value
- capability
- config
- guest
- runtime
- storage
- summary
strict: false
with_nested_properties: true

View File

@@ -52,11 +52,11 @@ patterns
--------
`mk` functions are single object fixtures. They should create only a single object with the minimum deps.
They should also accept a `persited` flag, if they must be persisted to work, they raise an error if persisted=False
They should also accept a `persisted` flag, if they must be persisted to work, they raise an error if persisted=False
`generate` and `apply` functions are helpers that build up the various parts of a `create` function's objects. These
should be useful for more than one create function to use and should explicitly accept all of the values needed
to execute. These functions should also be robust and have very speciifc error reporting about constraints and/or
to execute. These functions should also be robust and have very specific error reporting about constraints and/or
bad values.
`create` functions compose many of the `mk` and `generate` functions to make different object
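A minimal sketch of the `mk`/`create` split described above (names and model fields are illustrative, not the real fixture helpers; assumes a configured AWX environment):

from awx.main.models import Organization

def mk_organization(name, persisted=True):
    # "mk" fixture: one object, minimum dependencies
    org = Organization(name=name)
    if persisted:
        org.save()
    return org

def create_organization(name, users=()):
    # "create" helper: composes mk_* pieces into a richer object graph
    org = mk_organization(name)
    for user in users:
        org.member_role.members.add(user)
    return org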

View File

@@ -1,6 +1,7 @@
import pytest
import tempfile
import os
import re
import shutil
import csv
@@ -27,7 +28,8 @@ def sqlite_copy_expert(request):
def write_stdout(self, sql, fd):
# Would be cool if we instead properly dissected the SQL query and verified
# it that way. But instead, we just take the nieve approach here.
# it that way. But instead, we just take the naive approach here.
sql = sql.strip()
assert sql.startswith("COPY (")
assert sql.endswith(") TO STDOUT WITH CSV HEADER")
@@ -35,6 +37,10 @@ def sqlite_copy_expert(request):
sql = sql.replace(") TO STDOUT WITH CSV HEADER", "")
# sqlite equivalent
sql = sql.replace("ARRAY_AGG", "GROUP_CONCAT")
# SQLite doesn't support isoformatted dates, because that would be useful
sql = sql.replace("+00:00", "")
i = re.compile(r'(?P<date>\d\d\d\d-\d\d-\d\d)T')
sql = i.sub(r'\g<date> ', sql)
# Remove JSON style queries
# TODO: could replace JSON style queries with sqlite kind of equivalents
@@ -86,7 +92,7 @@ def test_copy_tables_unified_job_query(
job_name = job_template.create_unified_job().name
with tempfile.TemporaryDirectory() as tmpdir:
collectors.copy_tables(time_start, tmpdir, subset="unified_jobs")
collectors.unified_jobs_table(time_start, tmpdir, until=now() + timedelta(seconds=1))
with open(os.path.join(tmpdir, "unified_jobs_table.csv")) as f:
lines = "".join([line for line in f])
@@ -134,7 +140,7 @@ def test_copy_tables_workflow_job_node_query(sqlite_copy_expert, workflow_job):
time_start = now() - timedelta(hours=9)
with tempfile.TemporaryDirectory() as tmpdir:
collectors.copy_tables(time_start, tmpdir, subset="workflow_job_node_query")
collectors.workflow_job_node_table(time_start, tmpdir, until=now() + timedelta(seconds=1))
with open(os.path.join(tmpdir, "workflow_job_node_table.csv")) as f:
reader = csv.reader(f)
# Pop the headers

View File

@@ -10,17 +10,17 @@ from awx.main.analytics import gather, register
@register('example', '1.0')
def example(since):
def example(since, **kwargs):
return {'awx': 123}
@register('bad_json', '1.0')
def bad_json(since):
def bad_json(since, **kwargs):
return set()
@register('throws_error', '1.0')
def throws_error(since):
def throws_error(since, **kwargs):
raise ValueError()
@@ -39,9 +39,9 @@ def mock_valid_license():
def test_gather(mock_valid_license):
settings.INSIGHTS_TRACKING_STATE = True
tgz = gather(module=importlib.import_module(__name__))
tgzfiles = gather(module=importlib.import_module(__name__))
files = {}
with tarfile.open(tgz, "r:gz") as archive:
with tarfile.open(tgzfiles[0], "r:gz") as archive:
for member in archive.getmembers():
files[member.name] = archive.extractfile(member)
@@ -53,7 +53,8 @@ def test_gather(mock_valid_license):
assert './bad_json.json' not in files.keys()
assert './throws_error.json' not in files.keys()
try:
os.remove(tgz)
for tgz in tgzfiles:
os.remove(tgz)
except Exception:
pass

View File

@@ -220,7 +220,7 @@ def test_create_valid_kind(kind, get, post, admin):
@pytest.mark.django_db
@pytest.mark.parametrize('kind', ['ssh', 'vault', 'scm', 'insights', 'kubernetes'])
@pytest.mark.parametrize('kind', ['ssh', 'vault', 'scm', 'insights', 'kubernetes', 'galaxy'])
def test_create_invalid_kind(kind, get, post, admin):
response = post(reverse('api:credential_type_list'), {
'kind': kind,

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
import pytest
import json
from unittest import mock
from django.core.exceptions import ValidationError
@@ -8,8 +9,6 @@ from awx.api.versioning import reverse
from awx.main.models import InventorySource, Inventory, ActivityStream
import json
@pytest.fixture
def scm_inventory(inventory, project):
@@ -522,7 +521,8 @@ class TestInventorySourceCredential:
data={
'inventory': inventory.pk, 'name': 'fobar', 'source': 'scm',
'source_project': project.pk, 'source_path': '',
'credential': vault_credential.pk
'credential': vault_credential.pk,
'source_vars': 'plugin: a.b.c',
},
expect=400,
user=admin_user
@@ -561,7 +561,7 @@ class TestInventorySourceCredential:
data={
'inventory': inventory.pk, 'name': 'fobar', 'source': 'scm',
'source_project': project.pk, 'source_path': '',
'credential': os_cred.pk
'credential': os_cred.pk, 'source_vars': 'plugin: a.b.c',
},
expect=201,
user=admin_user
@@ -636,8 +636,14 @@ class TestControlledBySCM:
assert scm_inventory.inventory_sources.count() == 0
def test_adding_inv_src_ok(self, post, scm_inventory, project, admin_user):
post(reverse('api:inventory_inventory_sources_list', kwargs={'pk': scm_inventory.id}),
{'name': 'new inv src', 'source_project': project.pk, 'update_on_project_update': False, 'source': 'scm', 'overwrite_vars': True},
post(reverse('api:inventory_inventory_sources_list',
kwargs={'pk': scm_inventory.id}),
{'name': 'new inv src',
'source_project': project.pk,
'update_on_project_update': False,
'source': 'scm',
'overwrite_vars': True,
'source_vars': 'plugin: a.b.c'},
admin_user, expect=201)
def test_adding_inv_src_prohibited(self, post, scm_inventory, project, admin_user):
@@ -657,7 +663,7 @@ class TestControlledBySCM:
def test_adding_inv_src_without_proj_access_prohibited(self, post, project, inventory, rando):
inventory.admin_role.members.add(rando)
post(reverse('api:inventory_inventory_sources_list', kwargs={'pk': inventory.id}),
{'name': 'new inv src', 'source_project': project.pk, 'source': 'scm', 'overwrite_vars': True},
{'name': 'new inv src', 'source_project': project.pk, 'source': 'scm', 'overwrite_vars': True, 'source_vars': 'plugin: a.b.c'},
rando, expect=403)

View File

@@ -359,6 +359,71 @@ def test_job_launch_fails_with_missing_vault_password(machine_credential, vault_
assert response.data['passwords_needed_to_start'] == ['vault_password']
@pytest.mark.django_db
def test_job_launch_with_added_cred_and_vault_password(credential, machine_credential, vault_credential,
deploy_jobtemplate, post, admin):
# see: https://github.com/ansible/awx/issues/8202
vault_credential.inputs['vault_password'] = 'ASK'
vault_credential.save()
payload = {
'credentials': [vault_credential.id, machine_credential.id],
'credential_passwords': {'vault_password': 'vault-me'},
}
deploy_jobtemplate.ask_credential_on_launch = True
deploy_jobtemplate.credentials.remove(credential)
deploy_jobtemplate.credentials.add(vault_credential)
deploy_jobtemplate.save()
with mock.patch.object(Job, 'signal_start') as signal_start:
post(
reverse('api:job_template_launch', kwargs={'pk': deploy_jobtemplate.pk}),
payload,
admin,
expect=201,
)
signal_start.assert_called_with(**{
'vault_password': 'vault-me'
})
@pytest.mark.django_db
def test_job_launch_with_multiple_launch_time_passwords(credential, machine_credential, vault_credential,
deploy_jobtemplate, post, admin):
# see: https://github.com/ansible/awx/issues/8202
deploy_jobtemplate.ask_credential_on_launch = True
deploy_jobtemplate.credentials.remove(credential)
deploy_jobtemplate.credentials.add(machine_credential)
deploy_jobtemplate.credentials.add(vault_credential)
deploy_jobtemplate.save()
second_machine_credential = Credential(
name='SSH #2',
credential_type=machine_credential.credential_type,
inputs={'password': 'ASK'}
)
second_machine_credential.save()
vault_credential.inputs['vault_password'] = 'ASK'
vault_credential.save()
payload = {
'credentials': [vault_credential.id, second_machine_credential.id],
'credential_passwords': {'ssh_password': 'ssh-me', 'vault_password': 'vault-me'},
}
with mock.patch.object(Job, 'signal_start') as signal_start:
post(
reverse('api:job_template_launch', kwargs={'pk': deploy_jobtemplate.pk}),
payload,
admin,
expect=201,
)
signal_start.assert_called_with(**{
'ssh_password': 'ssh-me',
'vault_password': 'vault-me',
})
@pytest.mark.django_db
@pytest.mark.parametrize('launch_kwargs', [
{'vault_password.abc': 'vault-me-1', 'vault_password.xyz': 'vault-me-2'},

View File

@@ -9,7 +9,7 @@ from django.conf import settings
import pytest
# AWX
from awx.main.models import ProjectUpdate
from awx.main.models import ProjectUpdate, CredentialType, Credential
from awx.api.versioning import reverse
@@ -288,3 +288,90 @@ def test_organization_delete_with_active_jobs(delete, admin, organization, organ
assert resp.data['error'] == u"Resource is being used by running jobs."
assert resp_sorted == expect_sorted
@pytest.mark.django_db
def test_galaxy_credential_association_forbidden(alice, organization, post):
galaxy = CredentialType.defaults['galaxy_api_token']()
galaxy.save()
cred = Credential.objects.create(
credential_type=galaxy,
name='Public Galaxy',
organization=organization,
inputs={
'url': 'https://galaxy.ansible.com/'
}
)
url = reverse('api:organization_galaxy_credentials_list', kwargs={'pk': organization.id})
post(
url,
{'associate': True, 'id': cred.pk},
user=alice,
expect=403
)
@pytest.mark.django_db
def test_galaxy_credential_type_enforcement(admin, organization, post):
ssh = CredentialType.defaults['ssh']()
ssh.save()
cred = Credential.objects.create(
credential_type=ssh,
name='SSH Credential',
organization=organization,
)
url = reverse('api:organization_galaxy_credentials_list', kwargs={'pk': organization.id})
resp = post(
url,
{'associate': True, 'id': cred.pk},
user=admin,
expect=400
)
assert resp.data['msg'] == 'Credential must be a Galaxy credential, not Machine.'
@pytest.mark.django_db
def test_galaxy_credential_association(alice, admin, organization, post, get):
galaxy = CredentialType.defaults['galaxy_api_token']()
galaxy.save()
for i in range(5):
cred = Credential.objects.create(
credential_type=galaxy,
name=f'Public Galaxy {i + 1}',
organization=organization,
inputs={
'url': 'https://galaxy.ansible.com/'
}
)
url = reverse('api:organization_galaxy_credentials_list', kwargs={'pk': organization.id})
post(
url,
{'associate': True, 'id': cred.pk},
user=admin,
expect=204
)
resp = get(url, user=admin)
assert [cred['name'] for cred in resp.data['results']] == [
'Public Galaxy 1',
'Public Galaxy 2',
'Public Galaxy 3',
'Public Galaxy 4',
'Public Galaxy 5',
]
post(
url,
{'disassociate': True, 'id': Credential.objects.get(name='Public Galaxy 3').pk},
user=admin,
expect=204
)
resp = get(url, user=admin)
assert [cred['name'] for cred in resp.data['results']] == [
'Public Galaxy 1',
'Public Galaxy 2',
'Public Galaxy 4',
'Public Galaxy 5',
]

View File

@@ -2,7 +2,6 @@
import pytest
from unittest import mock
import json
from django.core.exceptions import ValidationError
@@ -256,33 +255,22 @@ class TestInventorySourceInjectors:
are named correctly, because Ansible will reject files that do
not have these exact names
"""
injector = InventorySource.injectors[source]('2.7.7')
injector = InventorySource.injectors[source]()
assert injector.filename == filename
def test_group_by_azure(self):
injector = InventorySource.injectors['azure_rm']('2.9')
inv_src = InventorySource(
name='azure source', source='azure_rm',
source_vars={'group_by_os_family': True}
)
group_by_on = injector.inventory_as_dict(inv_src, '/tmp/foo')
# suspicious, yes, that is just what the script did
expected_groups = 6
assert len(group_by_on['keyed_groups']) == expected_groups
inv_src.source_vars = json.dumps({'group_by_os_family': False})
group_by_off = injector.inventory_as_dict(inv_src, '/tmp/foo')
# much better, everyone should turn off the flag and live in the future
assert len(group_by_off['keyed_groups']) == expected_groups - 1
def test_tower_plugin_named_url(self):
injector = InventorySource.injectors['tower']('2.9')
inv_src = InventorySource(
name='my tower source', source='tower',
# named URL pattern "inventory++organization"
instance_filters='Designer hair 읰++Cosmetic_products䵆'
)
result = injector.inventory_as_dict(inv_src, '/tmp/foo')
assert result['inventory_id'] == 'Designer%20hair%20%EC%9D%B0++Cosmetic_products%E4%B5%86'
@pytest.mark.parametrize('source,proper_name', [
('ec2', 'amazon.aws.aws_ec2'),
('openstack', 'openstack.cloud.openstack'),
('gce', 'google.cloud.gcp_compute'),
('azure_rm', 'azure.azcollection.azure_rm'),
('vmware', 'community.vmware.vmware_vm_inventory'),
('rhv', 'ovirt.ovirt.ovirt'),
('satellite6', 'theforeman.foreman.foreman'),
('tower', 'awx.awx.tower'),
])
def test_plugin_proper_names(self, source, proper_name):
injector = InventorySource.injectors[source]()
assert injector.get_proper_name() == proper_name
@pytest.mark.django_db

View File

@@ -1,7 +1,7 @@
import pytest
from unittest import mock
from awx.main.models import Project
from awx.main.models import Project, Credential, CredentialType
from awx.main.models.organization import Organization
@@ -57,3 +57,31 @@ def test_foreign_key_change_changes_modified_by(project, organization):
def test_project_related_jobs(project):
update = project.create_unified_job()
assert update.id in [u.id for u in project._get_related_jobs()]
@pytest.mark.django_db
def test_galaxy_credentials(project):
org = project.organization
galaxy = CredentialType.defaults['galaxy_api_token']()
galaxy.save()
for i in range(5):
cred = Credential.objects.create(
name=f'Ansible Galaxy {i + 1}',
organization=org,
credential_type=galaxy,
inputs={
'url': 'https://galaxy.ansible.com/'
}
)
cred.save()
org.galaxy_credentials.add(cred)
assert [
cred.name for cred in org.galaxy_credentials.all()
] == [
'Ansible Galaxy 1',
'Ansible Galaxy 2',
'Ansible Galaxy 3',
'Ansible Galaxy 4',
'Ansible Galaxy 5',
]

View File

@@ -1,4 +1,4 @@
from datetime import datetime
from datetime import datetime, timedelta
from contextlib import contextmanager
from django.utils.timezone import now
@@ -161,6 +161,58 @@ class TestComputedFields:
assert job_template.next_schedule == expected_schedule
@pytest.mark.django_db
@pytest.mark.parametrize('freq, delta', (
('MINUTELY', 1),
('HOURLY', 1)
))
def test_past_week_rrule(job_template, freq, delta):
# see: https://github.com/ansible/awx/issues/8071
recent = (datetime.utcnow() - timedelta(days=3))
recent = recent.replace(hour=0, minute=0, second=0, microsecond=0)
recent_dt = recent.strftime('%Y%m%d')
rrule = f'DTSTART;TZID=America/New_York:{recent_dt}T000000 RRULE:FREQ={freq};INTERVAL={delta};COUNT=5' # noqa
sched = Schedule.objects.create(
name='example schedule',
rrule=rrule,
unified_job_template=job_template
)
first_event = sched.rrulestr(sched.rrule)[0]
assert first_event.replace(tzinfo=None) == recent
@pytest.mark.django_db
@pytest.mark.parametrize('freq, delta', (
('MINUTELY', 1),
('HOURLY', 1)
))
def test_really_old_dtstart(job_template, freq, delta):
# see: https://github.com/ansible/awx/issues/8071
# If an event is per-minute/per-hour and was created a *really long*
# time ago, we should just bump forward to start counting "in the last week"
rrule = f'DTSTART;TZID=America/New_York:20150101T000000 RRULE:FREQ={freq};INTERVAL={delta}' # noqa
sched = Schedule.objects.create(
name='example schedule',
rrule=rrule,
unified_job_template=job_template
)
last_week = (datetime.utcnow() - timedelta(days=7)).date()
first_event = sched.rrulestr(sched.rrule)[0]
assert last_week == first_event.date()
# the next few scheduled events should be the next minute/hour incremented
next_five_events = list(sched.rrulestr(sched.rrule).xafter(now(), count=5))
assert next_five_events[0] > now()
last = None
for event in next_five_events:
if last:
assert event == last + (
timedelta(minutes=1) if freq == 'MINUTELY' else timedelta(hours=1)
)
last = event
@pytest.mark.django_db
def test_repeats_forever(job_template):
s = Schedule(

View File

@@ -81,6 +81,7 @@ def test_default_cred_types():
'azure_rm',
'cloudforms',
'conjur',
'galaxy_api_token',
'gce',
'github_token',
'gitlab_token',

View File

@@ -10,7 +10,7 @@ import pytest
from awx.main.models import Job, WorkflowJob, Instance
from awx.main.dispatch import reaper
from awx.main.dispatch.pool import PoolWorker, WorkerPool, AutoscalePool
from awx.main.dispatch.pool import StatefulPoolWorker, WorkerPool, AutoscalePool
from awx.main.dispatch.publish import task
from awx.main.dispatch.worker import BaseWorker, TaskWorker
@@ -80,7 +80,7 @@ class SlowResultWriter(BaseWorker):
class TestPoolWorker:
def setup_method(self, test_method):
self.worker = PoolWorker(1000, self.tick, tuple())
self.worker = StatefulPoolWorker(1000, self.tick, tuple())
def tick(self):
self.worker.finished.put(self.worker.queue.get()['uuid'])

View File

@@ -0,0 +1,115 @@
import importlib
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
import pytest
from awx.main.models import Credential, Organization
from awx.conf.models import Setting
from awx.main.migrations import _galaxy as galaxy
class FakeApps(object):
def get_model(self, app, model):
if app == 'contenttypes':
return ContentType
return getattr(importlib.import_module(f'awx.{app}.models'), model)
apps = FakeApps()
@pytest.mark.django_db
def test_default_public_galaxy():
org = Organization.objects.create()
assert org.galaxy_credentials.count() == 0
galaxy.migrate_galaxy_settings(apps, None)
assert org.galaxy_credentials.count() == 1
creds = org.galaxy_credentials.all()
assert creds[0].name == 'Ansible Galaxy'
assert creds[0].inputs['url'] == 'https://galaxy.ansible.com/'
@pytest.mark.django_db
def test_public_galaxy_disabled():
Setting.objects.create(key='PUBLIC_GALAXY_ENABLED', value=False)
org = Organization.objects.create()
assert org.galaxy_credentials.count() == 0
galaxy.migrate_galaxy_settings(apps, None)
assert org.galaxy_credentials.count() == 0
@pytest.mark.django_db
def test_rh_automation_hub():
Setting.objects.create(key='PRIMARY_GALAXY_URL', value='https://cloud.redhat.com/api/automation-hub/')
Setting.objects.create(key='PRIMARY_GALAXY_TOKEN', value='secret123')
org = Organization.objects.create()
assert org.galaxy_credentials.count() == 0
galaxy.migrate_galaxy_settings(apps, None)
assert org.galaxy_credentials.count() == 2
assert org.galaxy_credentials.first().name == 'Ansible Automation Hub (https://cloud.redhat.com/api/automation-hub/)' # noqa
@pytest.mark.django_db
def test_multiple_galaxies():
for i in range(5):
Organization.objects.create(name=f'Org {i}')
Setting.objects.create(key='PRIMARY_GALAXY_URL', value='https://example.org/')
Setting.objects.create(key='PRIMARY_GALAXY_AUTH_URL', value='https://auth.example.org/')
Setting.objects.create(key='PRIMARY_GALAXY_USERNAME', value='user')
Setting.objects.create(key='PRIMARY_GALAXY_PASSWORD', value='pass')
Setting.objects.create(key='PRIMARY_GALAXY_TOKEN', value='secret123')
for org in Organization.objects.all():
assert org.galaxy_credentials.count() == 0
galaxy.migrate_galaxy_settings(apps, None)
for org in Organization.objects.all():
assert org.galaxy_credentials.count() == 2
creds = org.galaxy_credentials.all()
assert creds[0].name == 'Private Galaxy (https://example.org/)'
assert creds[0].inputs['url'] == 'https://example.org/'
assert creds[0].inputs['auth_url'] == 'https://auth.example.org/'
assert creds[0].inputs['token'].startswith('$encrypted$')
assert creds[0].get_input('token') == 'secret123'
assert creds[1].name == 'Ansible Galaxy'
assert creds[1].inputs['url'] == 'https://galaxy.ansible.com/'
public_galaxy_creds = Credential.objects.filter(name='Ansible Galaxy')
assert public_galaxy_creds.count() == 1
assert public_galaxy_creds.first().managed_by_tower is True
@pytest.mark.django_db
def test_fallback_galaxies():
org = Organization.objects.create()
assert org.galaxy_credentials.count() == 0
Setting.objects.create(key='PRIMARY_GALAXY_URL', value='https://example.org/')
Setting.objects.create(key='PRIMARY_GALAXY_AUTH_URL', value='https://auth.example.org/')
Setting.objects.create(key='PRIMARY_GALAXY_TOKEN', value='secret123')
try:
settings.FALLBACK_GALAXY_SERVERS = [{
'id': 'abc123',
'url': 'https://some-other-galaxy.example.org/',
'auth_url': 'https://some-other-galaxy.sso.example.org/',
'username': 'user',
'password': 'pass',
'token': 'fallback123',
}]
galaxy.migrate_galaxy_settings(apps, None)
finally:
settings.FALLBACK_GALAXY_SERVERS = []
assert org.galaxy_credentials.count() == 3
creds = org.galaxy_credentials.all()
assert creds[0].name == 'Private Galaxy (https://example.org/)'
assert creds[0].inputs['url'] == 'https://example.org/'
assert creds[1].name == 'Ansible Galaxy (https://some-other-galaxy.example.org/)'
assert creds[1].inputs['url'] == 'https://some-other-galaxy.example.org/'
assert creds[1].inputs['auth_url'] == 'https://some-other-galaxy.sso.example.org/'
assert creds[1].inputs['token'].startswith('$encrypted$')
assert creds[1].get_input('token') == 'fallback123'
assert creds[2].name == 'Ansible Galaxy'
assert creds[2].inputs['url'] == 'https://galaxy.ansible.com/'
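Taken together, these tests pin down the precedence migrate_galaxy_settings applies when converting the legacy settings into per-organization credentials: the primary server first, any fallback servers next, public Galaxy last. A minimal self-contained sketch of that ordering (plain dicts rather than the real Credential model, and ignoring the Automation Hub naming special case exercised in test_rh_automation_hub):

def galaxy_credential_order(settings):
    creds = []
    if settings.get('PRIMARY_GALAXY_URL'):
        # The configured primary server takes highest precedence.
        creds.append({'name': f"Private Galaxy ({settings['PRIMARY_GALAXY_URL']})",
                      'url': settings['PRIMARY_GALAXY_URL']})
    for server in settings.get('FALLBACK_GALAXY_SERVERS', []):
        creds.append({'name': f"Ansible Galaxy ({server['url']})",
                      'url': server['url']})
    if settings.get('PUBLIC_GALAXY_ENABLED', True):
        # Public Galaxy is appended last unless explicitly disabled.
        creds.append({'name': 'Ansible Galaxy',
                      'url': 'https://galaxy.ansible.com/'})
    return creds

assert [c['name'] for c in galaxy_credential_order({
    'PRIMARY_GALAXY_URL': 'https://example.org/',
    'FALLBACK_GALAXY_SERVERS': [{'url': 'https://some-other-galaxy.example.org/'}],
})] == ['Private Galaxy (https://example.org/)',
        'Ansible Galaxy (https://some-other-galaxy.example.org/)',
        'Ansible Galaxy']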


@@ -14,69 +14,6 @@ from django.conf import settings
DATA = os.path.join(os.path.dirname(data.__file__), 'inventory')
TEST_SOURCE_FIELDS = {
'vmware': {
'instance_filters': '{{ config.name == "only_my_server" }},{{ somevar == "bar"}}',
'group_by': 'fouo'
},
'ec2': {
'instance_filters': 'foobaa',
# group_by selected to capture some non-trivial cross-interactions
'group_by': 'availability_zone,instance_type,tag_keys,region',
'source_regions': 'us-east-2,ap-south-1'
},
'gce': {
'source_regions': 'us-east4-a,us-west1-b' # surfaced as env var
},
'azure_rm': {
'source_regions': 'southcentralus,westus'
},
'tower': {
'instance_filters': '42'
}
}
INI_TEST_VARS = {
'ec2': {
'boto_profile': '/tmp/my_boto_stuff',
'iam_role_arn': 'arn:aws:iam::123456789012:role/test-role',
'hostname_variable': 'public_dns_name',
'destination_variable': 'public_dns_name'
},
'gce': {},
'openstack': {
'private': False,
'use_hostnames': False,
'expand_hostvars': True,
'fail_on_errors': True
},
'tower': {}, # there are none
'vmware': {
'alias_pattern': "{{ config.foo }}",
'host_filters': '{{ config.zoo == "DC0_H0_VM0" }}',
'groupby_patterns': "{{ config.asdf }}",
# setting VMWARE_VALIDATE_CERTS is duplicated with env var
},
'azure_rm': {
'use_private_ip': True,
'resource_groups': 'foo_resources,bar_resources',
'tags': 'Creator:jmarshall, peanutbutter:jelly'
},
'satellite6': {
'satellite6_group_patterns': '["{app}-{tier}-{color}", "{app}-{color}"]',
'satellite6_group_prefix': 'foo_group_prefix',
'satellite6_want_hostcollections': True,
'satellite6_want_ansible_ssh_host': True,
'satellite6_want_facts': True
},
'rhv': { # options specific to the plugin
'ovirt_insecure': False,
'groups': {
'dev': '"dev" in tags'
}
}
}
def generate_fake_var(element):
"""Given a credential type field element, makes up something acceptable.
@@ -245,25 +182,21 @@ def create_reference_data(source_dir, env, content):
@pytest.mark.django_db
@pytest.mark.parametrize('this_kind', CLOUD_PROVIDERS)
def test_inventory_update_injected_content(this_kind, inventory, fake_credential_factory):
injector = InventorySource.injectors[this_kind]
if injector.plugin_name is None:
pytest.skip('Use of inventory plugin is not enabled for this source')
src_vars = dict(base_source_var='value_of_var')
if this_kind in INI_TEST_VARS:
src_vars.update(INI_TEST_VARS[this_kind])
extra_kwargs = {}
if this_kind in TEST_SOURCE_FIELDS:
extra_kwargs.update(TEST_SOURCE_FIELDS[this_kind])
src_vars['plugin'] = injector.get_proper_name()
inventory_source = InventorySource.objects.create(
inventory=inventory,
source=this_kind,
source_vars=src_vars,
**extra_kwargs
)
inventory_source.credentials.add(fake_credential_factory(this_kind))
inventory_update = inventory_source.create_unified_job()
task = RunInventoryUpdate()
if InventorySource.injectors[this_kind].plugin_name is None:
pytest.skip('Use of inventory plugin is not enabled for this source')
def substitute_run(envvars=None, **_kw):
"""This method will replace run_pexpect
instead of running, it will read the private data directory contents
@@ -274,6 +207,12 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
assert envvars.pop('ANSIBLE_INVENTORY_ENABLED') == 'auto'
set_files = bool(os.getenv("MAKE_INVENTORY_REFERENCE_FILES", 'false').lower()[0] not in ['f', '0'])
env, content = read_content(private_data_dir, envvars, inventory_update)
# Assert inventory plugin inventory file is in private_data_dir
inventory_filename = InventorySource.injectors[inventory_update.source]().filename
assert len([True for k in content.keys() if k.endswith(inventory_filename)]) > 0, \
f"'{inventory_filename}' file not found in inventory update runtime files {content.keys()}"
env.pop('ANSIBLE_COLLECTIONS_PATHS', None) # collection paths not relevant to this test
base_dir = os.path.join(DATA, 'plugins')
if not os.path.exists(base_dir):
@@ -283,6 +222,8 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
create_reference_data(source_dir, env, content)
pytest.skip('You set MAKE_INVENTORY_REFERENCE_FILES, so this created files, unset to run actual test.')
else:
source_dir = os.path.join(base_dir, this_kind) # this_kind is a global
if not os.path.exists(source_dir):
raise FileNotFoundError(
'Maybe you never made reference files? '
@@ -292,9 +233,6 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
expected_file_list = os.listdir(files_dir)
except FileNotFoundError:
expected_file_list = []
assert set(expected_file_list) == set(content.keys()), (
'Inventory update runtime environment does not have expected files'
)
for f_name in expected_file_list:
with open(os.path.join(files_dir, f_name), 'r') as f:
ref_content = f.read()
@@ -314,8 +252,7 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
with mock.patch('awx.main.queue.CallbackQueueDispatcher.dispatch', lambda self, obj: None):
# Also do not send websocket status updates
with mock.patch.object(UnifiedJob, 'websocket_emit_status', mock.Mock()):
with mock.patch.object(task, 'get_ansible_version', return_value='2.13'):
# The point of this test is that we replace run with assertions
with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
# so this sets up everything for a run and then yields control over to substitute_run
task.run(inventory_update.pk)
# The point of this test is that we replace run with assertions
with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
# so this sets up everything for a run and then yields control over to substitute_run
task.run(inventory_update.pk)
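The MAKE_INVENTORY_REFERENCE_FILES switch above decides truthiness from the first character only. A tiny restatement of that parsing, worth noting because any value not starting with 'f' or '0' counts as enabled:

def wants_reference_files(value):
    # Mirrors the test's parsing: only a leading 'f' or '0' means "off".
    return value.lower()[0] not in ('f', '0')

assert wants_reference_files('true')
assert wants_reference_files('yes')        # any other leading character is "on"
assert not wants_reference_files('False')
assert not wants_reference_files('0')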


@@ -1,3 +1,4 @@
import redis
import pytest
from unittest import mock
import json
@@ -25,7 +26,8 @@ def test_orphan_unified_job_creation(instance, inventory):
@mock.patch('awx.main.utils.common.get_mem_capacity', lambda: (8000,62))
def test_job_capacity_and_with_inactive_node():
i = Instance.objects.create(hostname='test-1')
i.refresh_capacity()
with mock.patch.object(redis.client.Redis, 'ping', lambda self: True):
i.refresh_capacity()
assert i.capacity == 62
i.enabled = False
i.save()
@@ -35,6 +37,19 @@ def test_job_capacity_and_with_inactive_node():
assert i.capacity == 0
@pytest.mark.django_db
@mock.patch('awx.main.utils.common.get_cpu_capacity', lambda: (2,8))
@mock.patch('awx.main.utils.common.get_mem_capacity', lambda: (8000,62))
def test_job_capacity_with_redis_disabled():
i = Instance.objects.create(hostname='test-1')
def _raise(self):
raise redis.ConnectionError()
with mock.patch.object(redis.client.Redis, 'ping', _raise):
i.refresh_capacity()
assert i.capacity == 0
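The pair of tests above fixes the contract for refresh_capacity: a node that can ping redis reports its computed capacity, and a node that cannot reports zero. A sketch of that guard under stated assumptions (the real logic lives on Instance.refresh_capacity; the constant 62 stands in for the cpu/mem-derived value):

import redis

def compute_capacity(conn):
    try:
        conn.ping()
    except redis.ConnectionError:
        # A node that cannot reach redis cannot accept work.
        return 0
    return 62  # placeholder for the real capacity calculation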
@pytest.mark.django_db
def test_job_type_name():
job = Job.objects.create()


@@ -72,23 +72,6 @@ def test_invalid_kind_clean_insights_credential():
assert json.dumps(str(e.value)) == json.dumps(str([u'Assignment not allowed for Smart Inventory']))
@pytest.mark.parametrize('source_vars,validate_certs', [
({'ssl_verify': True}, True),
({'ssl_verify': False}, False),
({'validate_certs': True}, True),
({'validate_certs': False}, False)])
def test_satellite_plugin_backwards_support_for_ssl_verify(source_vars, validate_certs):
injector = InventorySource.injectors['satellite6']('2.9')
inv_src = InventorySource(
name='satellite source', source='satellite6',
source_vars=source_vars
)
ret = injector.inventory_as_dict(inv_src, '/tmp/foo')
assert 'validate_certs' in ret
assert ret['validate_certs'] in (validate_certs, str(validate_certs))
class TestControlledBySCM():
def test_clean_source_path_valid(self):
inv_src = InventorySource(source_path='/not_real/',


@@ -25,6 +25,7 @@ from awx.main.models import (
Job,
JobTemplate,
Notification,
Organization,
Project,
ProjectUpdate,
UnifiedJob,
@@ -59,6 +60,19 @@ def patch_Job():
yield
@pytest.fixture
def patch_Organization():
_credentials = []
credentials_mock = mock.Mock(**{
'all': lambda: _credentials,
'add': _credentials.append,
'exists': lambda: len(_credentials) > 0,
'spec_set': ['all', 'add', 'exists'],
})
with mock.patch.object(Organization, 'galaxy_credentials', credentials_mock):
yield
@pytest.fixture
def job():
return Job(
@@ -131,7 +145,6 @@ def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker
('SECRET_KEY', 'SECRET'),
('VMWARE_PASSWORD', 'SECRET'),
('API_SECRET', 'SECRET'),
('ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_PASSWORD', 'SECRET'),
('ANSIBLE_GALAXY_SERVER_PRIMARY_GALAXY_TOKEN', 'SECRET'),
])
def test_safe_env_filtering(key, value):
@@ -1780,10 +1793,108 @@ class TestJobCredentials(TestJobExecution):
assert env['FOO'] == 'BAR'
@pytest.mark.usefixtures("patch_Organization")
class TestProjectUpdateGalaxyCredentials(TestJobExecution):
@pytest.fixture
def project_update(self):
org = Organization(pk=1)
proj = Project(pk=1, organization=org)
project_update = ProjectUpdate(pk=1, project=proj, scm_type='git')
project_update.websocket_emit_status = mock.Mock()
return project_update
parametrize = {
'test_galaxy_credentials_ignore_certs': [
dict(ignore=True),
dict(ignore=False),
],
}
def test_galaxy_credentials_ignore_certs(self, private_data_dir, project_update, ignore):
settings.GALAXY_IGNORE_CERTS = ignore
task = tasks.RunProjectUpdate()
env = task.build_env(project_update, private_data_dir)
if ignore:
assert env['ANSIBLE_GALAXY_IGNORE'] is True
else:
assert 'ANSIBLE_GALAXY_IGNORE' not in env
def test_galaxy_credentials_empty(self, private_data_dir, project_update):
class RunProjectUpdate(tasks.RunProjectUpdate):
__vars__ = {}
def _write_extra_vars_file(self, private_data_dir, extra_vars, *kw):
self.__vars__ = extra_vars
task = RunProjectUpdate()
env = task.build_env(project_update, private_data_dir)
task.build_extra_vars_file(project_update, private_data_dir)
assert task.__vars__['roles_enabled'] is False
assert task.__vars__['collections_enabled'] is False
for k in env:
assert not k.startswith('ANSIBLE_GALAXY_SERVER')
def test_single_public_galaxy(self, private_data_dir, project_update):
class RunProjectUpdate(tasks.RunProjectUpdate):
__vars__ = {}
def _write_extra_vars_file(self, private_data_dir, extra_vars, *kw):
self.__vars__ = extra_vars
credential_type = CredentialType.defaults['galaxy_api_token']()
public_galaxy = Credential(pk=1, credential_type=credential_type, inputs={
'url': 'https://galaxy.ansible.com/',
})
project_update.project.organization.galaxy_credentials.add(public_galaxy)
task = RunProjectUpdate()
env = task.build_env(project_update, private_data_dir)
task.build_extra_vars_file(project_update, private_data_dir)
assert task.__vars__['roles_enabled'] is True
assert task.__vars__['collections_enabled'] is True
assert sorted([
(k, v) for k, v in env.items()
if k.startswith('ANSIBLE_GALAXY')
]) == [
('ANSIBLE_GALAXY_SERVER_LIST', 'server0'),
('ANSIBLE_GALAXY_SERVER_SERVER0_URL', 'https://galaxy.ansible.com/'),
]
def test_multiple_galaxy_endpoints(self, private_data_dir, project_update):
credential_type = CredentialType.defaults['galaxy_api_token']()
public_galaxy = Credential(pk=1, credential_type=credential_type, inputs={
'url': 'https://galaxy.ansible.com/',
})
rh = Credential(pk=2, credential_type=credential_type, inputs={
'url': 'https://cloud.redhat.com/api/automation-hub/',
'auth_url': 'https://sso.redhat.com/example/openid-connect/token/',
'token': 'secret123'
})
project_update.project.organization.galaxy_credentials.add(public_galaxy)
project_update.project.organization.galaxy_credentials.add(rh)
task = tasks.RunProjectUpdate()
env = task.build_env(project_update, private_data_dir)
assert sorted([
(k, v) for k, v in env.items()
if k.startswith('ANSIBLE_GALAXY')
]) == [
('ANSIBLE_GALAXY_SERVER_LIST', 'server0,server1'),
('ANSIBLE_GALAXY_SERVER_SERVER0_URL', 'https://galaxy.ansible.com/'),
('ANSIBLE_GALAXY_SERVER_SERVER1_AUTH_URL', 'https://sso.redhat.com/example/openid-connect/token/'), # noqa
('ANSIBLE_GALAXY_SERVER_SERVER1_TOKEN', 'secret123'),
('ANSIBLE_GALAXY_SERVER_SERVER1_URL', 'https://cloud.redhat.com/api/automation-hub/'),
]
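The assertions above encode how an organization's galaxy credentials are serialized into the environment: each credential becomes serverN in precedence order, its inputs become ANSIBLE_GALAXY_SERVER_SERVERN_* variables, and the ordered ids land in ANSIBLE_GALAXY_SERVER_LIST. A self-contained sketch of that layout (illustrative helper, not the actual RunProjectUpdate.build_env):

def galaxy_env(credential_inputs):
    env, server_ids = {}, []
    for i, inputs in enumerate(credential_inputs):
        sid = f'server{i}'
        server_ids.append(sid)
        prefix = f'ANSIBLE_GALAXY_SERVER_{sid.upper()}_'
        env[prefix + 'URL'] = inputs['url']
        for key in ('auth_url', 'token', 'username', 'password'):
            if inputs.get(key):
                env[prefix + key.upper()] = inputs[key]
    env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join(server_ids)
    return env

env = galaxy_env([
    {'url': 'https://galaxy.ansible.com/'},
    {'url': 'https://cloud.redhat.com/api/automation-hub/',
     'auth_url': 'https://sso.redhat.com/example/openid-connect/token/',
     'token': 'secret123'},
])
assert env['ANSIBLE_GALAXY_SERVER_LIST'] == 'server0,server1'
assert env['ANSIBLE_GALAXY_SERVER_SERVER1_TOKEN'] == 'secret123'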
@pytest.mark.usefixtures("patch_Organization")
class TestProjectUpdateCredentials(TestJobExecution):
@pytest.fixture
def project_update(self):
project_update = ProjectUpdate(pk=1, project=Project(pk=1))
project_update = ProjectUpdate(
pk=1,
project=Project(pk=1, organization=Organization(pk=1)),
)
project_update.websocket_emit_status = mock.Mock()
return project_update
@@ -1880,13 +1991,6 @@ class TestProjectUpdateCredentials(TestJobExecution):
assert env['FOO'] == 'BAR'
@pytest.fixture
def mock_ansible_version():
with mock.patch('awx.main.tasks._get_ansible_version', mock.MagicMock(return_value='2.10')) as _fixture:
yield _fixture
@pytest.mark.usefixtures("mock_ansible_version")
class TestInventoryUpdateCredentials(TestJobExecution):
@pytest.fixture
def inventory_update(self):
@@ -2020,7 +2124,6 @@ class TestInventoryUpdateCredentials(TestJobExecution):
task = tasks.RunInventoryUpdate()
azure_rm = CredentialType.defaults['azure_rm']()
inventory_update.source = 'azure_rm'
inventory_update.source_regions = 'north, south, east, west'
def get_cred():
cred = Credential(
@@ -2059,7 +2162,6 @@ class TestInventoryUpdateCredentials(TestJobExecution):
task = tasks.RunInventoryUpdate()
azure_rm = CredentialType.defaults['azure_rm']()
inventory_update.source = 'azure_rm'
inventory_update.source_regions = 'all'
def get_cred():
cred = Credential(
@@ -2097,7 +2199,6 @@ class TestInventoryUpdateCredentials(TestJobExecution):
task = tasks.RunInventoryUpdate()
gce = CredentialType.defaults['gce']()
inventory_update.source = 'gce'
inventory_update.source_regions = 'all'
def get_cred():
cred = Credential(
@@ -2216,7 +2317,6 @@ class TestInventoryUpdateCredentials(TestJobExecution):
task = tasks.RunInventoryUpdate()
tower = CredentialType.defaults['tower']()
inventory_update.source = 'tower'
inventory_update.instance_filters = '12345'
inputs = {
'host': 'https://tower.example.org',
'username': 'bob',
@@ -2248,7 +2348,6 @@ class TestInventoryUpdateCredentials(TestJobExecution):
task = tasks.RunInventoryUpdate()
tower = CredentialType.defaults['tower']()
inventory_update.source = 'tower'
inventory_update.instance_filters = '12345'
inputs = {
'host': 'https://tower.example.org',
'username': 'bob',


@@ -215,11 +215,3 @@ def test_get_custom_venv_choices():
os.path.join(temp_dir, ''),
os.path.join(custom_venv_1, '')
]
def test_region_sorting():
s = [('Huey', 'China1'),
('Dewey', 'UK1'),
('Lewie', 'US1'),
('All', 'All')]
assert [x[1] for x in sorted(s, key=common.region_sorting)] == ['All', 'US1', 'China1', 'UK1']


@@ -45,7 +45,7 @@ __all__ = [
'get_object_or_400', 'camelcase_to_underscore', 'underscore_to_camelcase', 'memoize',
'memoize_delete', 'get_ansible_version', 'get_licenser', 'get_awx_http_client_headers',
'get_awx_version', 'update_scm_url', 'get_type_for_model', 'get_model_for_type',
'copy_model_by_class', 'region_sorting', 'copy_m2m_relationships',
'copy_model_by_class', 'copy_m2m_relationships',
'prefetch_page_capabilities', 'to_python_boolean', 'ignore_inventory_computed_fields',
'ignore_inventory_group_removal', '_inventory_updates', 'get_pk_from_dict', 'getattrd',
'getattr_dne', 'NoDefaultProvided', 'get_current_apps', 'set_current_apps',
@@ -87,15 +87,6 @@ def to_python_boolean(value, allow_none=False):
raise ValueError(_(u'Unable to convert "%s" to boolean') % value)
def region_sorting(region):
# python3's removal of sorted(cmp=...) is _stupid_
if region[1].lower() == 'all':
return ''
elif region[1].lower().startswith('us'):
return region[1]
return 'ZZZ' + str(region[1])
def camelcase_to_underscore(s):
'''
Convert CamelCase names to lowercase_with_underscore.
@@ -171,13 +162,14 @@ def memoize_delete(function_name):
return cache.delete(function_name)
def _get_ansible_version(ansible_path):
@memoize()
def get_ansible_version():
'''
Return Ansible version installed.
Ansible path needs to be provided to account for custom virtual environments
'''
try:
proc = subprocess.Popen([ansible_path, '--version'],
proc = subprocess.Popen(['ansible', '--version'],
stdout=subprocess.PIPE)
result = smart_str(proc.communicate()[0])
return result.split('\n')[0].replace('ansible', '').strip()
@@ -185,11 +177,6 @@ def _get_ansible_version(ansible_path):
return 'unknown'
@memoize()
def get_ansible_version():
return _get_ansible_version('ansible')
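After this change the memoized get_ansible_version shells out to the ansible binary and keeps only the version token from the first line of output. The parsing step in isolation (self-contained; the sample output line is illustrative):

def parse_ansible_version(stdout_text):
    # First line looks like 'ansible 2.9.13'; drop the program name.
    first_line = stdout_text.splitlines()[0]
    return first_line.replace('ansible', '').strip()

assert parse_ansible_version('ansible 2.9.13\n  config file = /etc/ansible/ansible.cfg') == '2.9.13'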
def get_awx_version():
'''
Return AWX version as reported by setuptools.


@@ -2,15 +2,21 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
import zipfile
import tarfile
import errno
import os
import tarfile
import zipfile
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
try:
from zipfile import BadZipFile
except ImportError:
from zipfile import BadZipfile as BadZipFile # py2 compat
class ActionModule(ActionBase):
def run(self, tmp=None, task_vars=None):
@@ -26,14 +32,15 @@ class ActionModule(ActionBase):
archive = zipfile.ZipFile(src)
get_filenames = archive.namelist
get_members = archive.infolist
except zipfile.BadZipFile:
archive = tarfile.open(src)
except BadZipFile:
try:
archive = tarfile.open(src)
except tarfile.ReadError:
result["failed"] = True
result["msg"] = "{0} is not a valid archive".format(src)
return result
get_filenames = archive.getnames
get_members = archive.getmembers
except tarfile.ReadError:
result["failed"] = True
result["msg"] = "{0} is not a valid archive".format(src)
return result
# Most well-formed archives contain a single root directory, typically named
# project-name-1.0.0. The project contents should be inside that directory.
@@ -62,10 +69,19 @@ class ActionModule(ActionBase):
try:
is_dir = member.is_dir()
except AttributeError:
is_dir = member.isdir()
try:
is_dir = member.isdir()
except AttributeError:
is_dir = member.filename[-1] == '/' # py2 compat for ZipInfo
if is_dir:
os.makedirs(dest, exist_ok=True)
try:
os.makedirs(dest)
except OSError as exc: # Python >= 2.5
if exc.errno == errno.EEXIST and os.path.isdir(dest):
pass
else:
raise
else:
try:
member_f = archive.open(member)
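This hunk set folds Python 2 compatibility into the archive handling: the BadZipFile/BadZipfile import shim, ZipInfo objects that lack both is_dir() and isdir(), and makedirs without exist_ok. A condensed stdlib-only sketch of the zip-then-tar fallback (open_archive is an illustrative helper name):

import tarfile
import zipfile

try:
    from zipfile import BadZipFile
except ImportError:
    from zipfile import BadZipfile as BadZipFile  # py2 spelling

def open_archive(src):
    # Try zip first; on failure fall back to tar, and only then give up.
    try:
        archive = zipfile.ZipFile(src)
        return archive.namelist, archive.infolist
    except BadZipFile:
        try:
            archive = tarfile.open(src)
        except tarfile.ReadError:
            raise ValueError('{0} is not a valid archive'.format(src))
        return archive.getnames, archive.getmembers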


@@ -1,36 +0,0 @@
---
- hosts: all
vars:
scan_use_checksum: false
scan_use_recursive: false
tasks:
- name: "Scan packages (Unix/Linux)"
scan_packages:
os_family: '{{ ansible_os_family }}'
when: ansible_os_family != "Windows"
- name: "Scan services (Unix/Linux)"
scan_services:
when: ansible_os_family != "Windows"
- name: "Scan files (Unix/Linux)"
scan_files:
paths: '{{ scan_file_paths }}'
get_checksum: '{{ scan_use_checksum }}'
recursive: '{{ scan_use_recursive }}'
when: scan_file_paths is defined and ansible_os_family != "Windows"
- name: "Scan Insights for Machine ID (Unix/Linux)"
scan_insights:
when: ansible_os_family != "Windows"
- name: "Scan packages (Windows)"
win_scan_packages:
when: ansible_os_family == "Windows"
- name: "Scan services (Windows)"
win_scan_services:
when: ansible_os_family == "Windows"
- name: "Scan files (Windows)"
win_scan_files:
paths: '{{ scan_file_paths }}'
get_checksum: '{{ scan_use_checksum }}'
recursive: '{{ scan_use_recursive }}'
when: scan_file_paths is defined and ansible_os_family == "Windows"


@@ -1,166 +0,0 @@
#!/usr/bin/env python
import os
import stat
from ansible.module_utils.basic import * # noqa
DOCUMENTATION = '''
---
module: scan_files
short_description: Return file state information as fact data for a directory tree
description:
- Return file state information recursively for a directory tree on the filesystem
version_added: "1.9"
options:
path:
description: The path containing files to be analyzed
required: true
default: null
recursive:
description: scan this directory and all subdirectories
required: false
default: no
get_checksum:
description: Checksum files that you can access
required: false
default: false
requirements: [ ]
author: Matthew Jones
'''
EXAMPLES = '''
# Example fact output:
# host | success >> {
# "ansible_facts": {
# "files": [
# {
# "atime": 1427313854.0755742,
# "checksum": "cf7566e6149ad9af91e7589e0ea096a08de9c1e5",
# "ctime": 1427129299.22948,
# "dev": 51713,
# "gid": 0,
# "inode": 149601,
# "isblk": false,
# "ischr": false,
# "isdir": false,
# "isfifo": false,
# "isgid": false,
# "islnk": false,
# "isreg": true,
# "issock": false,
# "isuid": false,
# "mode": "0644",
# "mtime": 1427112663.0321455,
# "nlink": 1,
# "path": "/var/log/dmesg.1.gz",
# "rgrp": true,
# "roth": true,
# "rusr": true,
# "size": 28,
# "uid": 0,
# "wgrp": false,
# "woth": false,
# "wusr": true,
# "xgrp": false,
# "xoth": false,
# "xusr": false
# },
# {
# "atime": 1427314385.1155744,
# "checksum": "16fac7be61a6e4591a33ef4b729c5c3302307523",
# "ctime": 1427384148.5755742,
# "dev": 51713,
# "gid": 43,
# "inode": 149564,
# "isblk": false,
# "ischr": false,
# "isdir": false,
# "isfifo": false,
# "isgid": false,
# "islnk": false,
# "isreg": true,
# "issock": false,
# "isuid": false,
# "mode": "0664",
# "mtime": 1427384148.5755742,
# "nlink": 1,
# "path": "/var/log/wtmp",
# "rgrp": true,
# "roth": true,
# "rusr": true,
# "size": 48768,
# "uid": 0,
# "wgrp": true,
# "woth": false,
# "wusr": true,
# "xgrp": false,
# "xoth": false,
# "xusr": false
# },
'''
def main():
module = AnsibleModule( # noqa
argument_spec = dict(paths=dict(required=True, type='list'),
recursive=dict(required=False, default='no', type='bool'),
get_checksum=dict(required=False, default='no', type='bool')))
files = []
paths = module.params.get('paths')
for path in paths:
path = os.path.expanduser(path)
if not os.path.exists(path) or not os.path.isdir(path):
module.fail_json(msg = "Given path must exist and be a directory")
get_checksum = module.params.get('get_checksum')
should_recurse = module.params.get('recursive')
if not should_recurse:
path_list = [os.path.join(path, subpath) for subpath in os.listdir(path)]
else:
path_list = [os.path.join(w_path, f) for w_path, w_names, w_file in os.walk(path) for f in w_file]
for filepath in path_list:
try:
st = os.stat(filepath)
except OSError:
continue
mode = st.st_mode
d = {
'path' : filepath,
'mode' : "%04o" % stat.S_IMODE(mode),
'isdir' : stat.S_ISDIR(mode),
'ischr' : stat.S_ISCHR(mode),
'isblk' : stat.S_ISBLK(mode),
'isreg' : stat.S_ISREG(mode),
'isfifo' : stat.S_ISFIFO(mode),
'islnk' : stat.S_ISLNK(mode),
'issock' : stat.S_ISSOCK(mode),
'uid' : st.st_uid,
'gid' : st.st_gid,
'size' : st.st_size,
'inode' : st.st_ino,
'dev' : st.st_dev,
'nlink' : st.st_nlink,
'atime' : st.st_atime,
'mtime' : st.st_mtime,
'ctime' : st.st_ctime,
'wusr' : bool(mode & stat.S_IWUSR),
'rusr' : bool(mode & stat.S_IRUSR),
'xusr' : bool(mode & stat.S_IXUSR),
'wgrp' : bool(mode & stat.S_IWGRP),
'rgrp' : bool(mode & stat.S_IRGRP),
'xgrp' : bool(mode & stat.S_IXGRP),
'woth' : bool(mode & stat.S_IWOTH),
'roth' : bool(mode & stat.S_IROTH),
'xoth' : bool(mode & stat.S_IXOTH),
'isuid' : bool(mode & stat.S_ISUID),
'isgid' : bool(mode & stat.S_ISGID),
}
if get_checksum and stat.S_ISREG(mode) and os.access(filepath, os.R_OK):
d['checksum'] = module.sha1(filepath)
files.append(d)
results = dict(ansible_facts=dict(files=files))
module.exit_json(**results)
main()


@@ -1,66 +0,0 @@
#!/usr/bin/env python
from ansible.module_utils.basic import * # noqa
DOCUMENTATION = '''
---
module: scan_insights
short_description: Return insights id as fact data
description:
- Inspects the /etc/redhat-access-insights/machine-id file for insights id and returns the found id as fact data
version_added: "2.3"
options:
requirements: [ ]
author: Chris Meyers
'''
EXAMPLES = '''
# Example fact output:
# host | success >> {
# "ansible_facts": {
# "insights": {
# "system_id": "4da7d1f8-14f3-4cdc-acd5-a3465a41f25d"
# }, ... }
'''
INSIGHTS_SYSTEM_ID_FILE='/etc/redhat-access-insights/machine-id'
def get_system_id(filname):
system_id = None
try:
f = open(INSIGHTS_SYSTEM_ID_FILE, "r")
except IOError:
return None
else:
try:
data = f.readline()
system_id = str(data)
except (IOError, ValueError):
pass
finally:
f.close()
if system_id:
system_id = system_id.strip()
return system_id
def main():
module = AnsibleModule( # noqa
argument_spec = dict()
)
system_id = get_system_id(INSIGHTS_SYSTEM_ID_FILE)
results = {
'ansible_facts': {
'insights': {
'system_id': system_id
}
}
}
module.exit_json(**results)
main()


@@ -1,111 +0,0 @@
#!/usr/bin/env python
from ansible.module_utils.basic import * # noqa
DOCUMENTATION = '''
---
module: scan_packages
short_description: Return installed packages information as fact data
description:
- Return information about installed packages as fact data
version_added: "1.9"
options:
requirements: [ ]
author: Matthew Jones
'''
EXAMPLES = '''
# Example fact output:
# host | success >> {
# "ansible_facts": {
# "packages": {
# "libbz2-1.0": [
# {
# "version": "1.0.6-5",
# "source": "apt",
# "arch": "amd64",
# "name": "libbz2-1.0"
# }
# ],
# "patch": [
# {
# "version": "2.7.1-4ubuntu1",
# "source": "apt",
# "arch": "amd64",
# "name": "patch"
# }
# ],
# "gcc-4.8-base": [
# {
# "version": "4.8.2-19ubuntu1",
# "source": "apt",
# "arch": "amd64",
# "name": "gcc-4.8-base"
# },
# {
# "version": "4.9.2-19ubuntu1",
# "source": "apt",
# "arch": "amd64",
# "name": "gcc-4.8-base"
# }
# ]
# }
'''
def rpm_package_list():
import rpm
trans_set = rpm.TransactionSet()
installed_packages = {}
for package in trans_set.dbMatch():
package_details = dict(name=package[rpm.RPMTAG_NAME],
version=package[rpm.RPMTAG_VERSION],
release=package[rpm.RPMTAG_RELEASE],
epoch=package[rpm.RPMTAG_EPOCH],
arch=package[rpm.RPMTAG_ARCH],
source='rpm')
if package_details['name'] not in installed_packages:
installed_packages[package_details['name']] = [package_details]
else:
installed_packages[package_details['name']].append(package_details)
return installed_packages
def deb_package_list():
import apt
apt_cache = apt.Cache()
installed_packages = {}
apt_installed_packages = [pk for pk in apt_cache.keys() if apt_cache[pk].is_installed]
for package in apt_installed_packages:
ac_pkg = apt_cache[package].installed
package_details = dict(name=package,
version=ac_pkg.version,
arch=ac_pkg.architecture,
source='apt')
if package_details['name'] not in installed_packages:
installed_packages[package_details['name']] = [package_details]
else:
installed_packages[package_details['name']].append(package_details)
return installed_packages
def main():
module = AnsibleModule( # noqa
argument_spec = dict(os_family=dict(required=True))
)
ans_os = module.params['os_family']
if ans_os in ('RedHat', 'Suse', 'openSUSE Leap'):
packages = rpm_package_list()
elif ans_os == 'Debian':
packages = deb_package_list()
else:
packages = None
if packages is not None:
results = dict(ansible_facts=dict(packages=packages))
else:
results = dict(skipped=True, msg="Unsupported Distribution")
module.exit_json(**results)
main()


@@ -1,190 +0,0 @@
#!/usr/bin/env python
import re
from ansible.module_utils.basic import * # noqa
DOCUMENTATION = '''
---
module: scan_services
short_description: Return service state information as fact data
description:
- Return service state information as fact data for various service management utilities
version_added: "1.9"
options:
requirements: [ ]
author: Matthew Jones
'''
EXAMPLES = '''
- monit: scan_services
# Example fact output:
# host | success >> {
# "ansible_facts": {
# "services": {
# "network": {
# "source": "sysv",
# "state": "running",
# "name": "network"
# },
# "arp-ethers.service": {
# "source": "systemd",
# "state": "stopped",
# "name": "arp-ethers.service"
# }
# }
# }
'''
class BaseService(object):
def __init__(self, module):
self.module = module
self.incomplete_warning = False
class ServiceScanService(BaseService):
def gather_services(self):
services = {}
service_path = self.module.get_bin_path("service")
if service_path is None:
return None
initctl_path = self.module.get_bin_path("initctl")
chkconfig_path = self.module.get_bin_path("chkconfig")
# sysvinit
if service_path is not None and chkconfig_path is None:
rc, stdout, stderr = self.module.run_command("%s --status-all 2>&1 | grep -E \"\\[ (\\+|\\-) \\]\"" % service_path, use_unsafe_shell=True)
for line in stdout.split("\n"):
line_data = line.split()
if len(line_data) < 4:
continue # Skipping because we expected more data
service_name = " ".join(line_data[3:])
if line_data[1] == "+":
service_state = "running"
else:
service_state = "stopped"
services[service_name] = {"name": service_name, "state": service_state, "source": "sysv"}
# Upstart
if initctl_path is not None and chkconfig_path is None:
p = re.compile(r'^\s?(?P<name>.*)\s(?P<goal>\w+)\/(?P<state>\w+)(\,\sprocess\s(?P<pid>[0-9]+))?\s*$')
rc, stdout, stderr = self.module.run_command("%s list" % initctl_path)
real_stdout = stdout.replace("\r","")
for line in real_stdout.split("\n"):
m = p.match(line)
if not m:
continue
service_name = m.group('name')
service_goal = m.group('goal')
service_state = m.group('state')
if m.group('pid'):
pid = m.group('pid')
else:
pid = None # NOQA
payload = {"name": service_name, "state": service_state, "goal": service_goal, "source": "upstart"}
services[service_name] = payload
# RH sysvinit
elif chkconfig_path is not None:
#print '%s --status-all | grep -E "is (running|stopped)"' % service_path
p = re.compile(
r'(?P<service>.*?)\s+[0-9]:(?P<rl0>on|off)\s+[0-9]:(?P<rl1>on|off)\s+[0-9]:(?P<rl2>on|off)\s+'
r'[0-9]:(?P<rl3>on|off)\s+[0-9]:(?P<rl4>on|off)\s+[0-9]:(?P<rl5>on|off)\s+[0-9]:(?P<rl6>on|off)')
rc, stdout, stderr = self.module.run_command('%s' % chkconfig_path, use_unsafe_shell=True)
# Check for special cases where stdout does not fit pattern
match_any = False
for line in stdout.split('\n'):
if p.match(line):
match_any = True
if not match_any:
p_simple = re.compile(r'(?P<service>.*?)\s+(?P<rl0>on|off)')
match_any = False
for line in stdout.split('\n'):
if p_simple.match(line):
match_any = True
if match_any:
# Try extra flags " -l --allservices" needed for SLES11
rc, stdout, stderr = self.module.run_command('%s -l --allservices' % chkconfig_path, use_unsafe_shell=True)
elif '--list' in stderr:
# Extra flag needed for RHEL5
rc, stdout, stderr = self.module.run_command('%s --list' % chkconfig_path, use_unsafe_shell=True)
for line in stdout.split('\n'):
m = p.match(line)
if m:
service_name = m.group('service')
service_state = 'stopped'
if m.group('rl3') == 'on':
rc, stdout, stderr = self.module.run_command('%s %s status' % (service_path, service_name), use_unsafe_shell=True)
service_state = rc
if rc in (0,):
service_state = 'running'
#elif rc in (1,3):
else:
if 'root' in stderr or 'permission' in stderr.lower() or 'not in sudoers' in stderr.lower():
self.incomplete_warning = True
continue
else:
service_state = 'stopped'
service_data = {"name": service_name, "state": service_state, "source": "sysv"}
services[service_name] = service_data
return services
class SystemctlScanService(BaseService):
def systemd_enabled(self):
# Check if init is the systemd command, using comm as cmdline could be symlink
try:
f = open('/proc/1/comm', 'r')
except IOError:
# If comm doesn't exist, old kernel, no systemd
return False
for line in f:
if 'systemd' in line:
return True
return False
def gather_services(self):
services = {}
if not self.systemd_enabled():
return None
systemctl_path = self.module.get_bin_path("systemctl", opt_dirs=["/usr/bin", "/usr/local/bin"])
if systemctl_path is None:
return None
rc, stdout, stderr = self.module.run_command("%s list-unit-files --type=service | tail -n +2 | head -n -2" % systemctl_path, use_unsafe_shell=True)
for line in stdout.split("\n"):
line_data = line.split()
if len(line_data) != 2:
continue
if line_data[1] == "enabled":
state_val = "running"
else:
state_val = "stopped"
services[line_data[0]] = {"name": line_data[0], "state": state_val, "source": "systemd"}
return services
def main():
module = AnsibleModule(argument_spec = dict()) # noqa
service_modules = (ServiceScanService, SystemctlScanService)
all_services = {}
incomplete_warning = False
for svc_module in service_modules:
svcmod = svc_module(module)
svc = svcmod.gather_services()
if svc is not None:
all_services.update(svc)
if svcmod.incomplete_warning:
incomplete_warning = True
if len(all_services) == 0:
results = dict(skipped=True, msg="Failed to find any services. Sometimes this is due to insufficient privileges.")
else:
results = dict(ansible_facts=dict(services=all_services))
if incomplete_warning:
results['msg'] = "WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges."
module.exit_json(**results)
main()


@@ -1,102 +0,0 @@
#!powershell
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# WANT_JSON
# POWERSHELL_COMMON
$params = Parse-Args $args $true;
$paths = Get-Attr $params "paths" $FALSE;
If ($paths -eq $FALSE)
{
Fail-Json (New-Object psobject) "missing required argument: paths";
}
$get_checksum = Get-Attr $params "get_checksum" $false | ConvertTo-Bool;
$recursive = Get-Attr $params "recursive" $false | ConvertTo-Bool;
function Date_To_Timestamp($start_date, $end_date)
{
If($start_date -and $end_date)
{
Write-Output (New-TimeSpan -Start $start_date -End $end_date).TotalSeconds
}
}
$files = @()
ForEach ($path In $paths)
{
"Path: " + $path
ForEach ($file in Get-ChildItem $path -Recurse: $recursive)
{
"File: " + $file.FullName
$fileinfo = New-Object psobject
Set-Attr $fileinfo "path" $file.FullName
$info = Get-Item $file.FullName;
$iscontainer = Get-Attr $info "PSIsContainer" $null;
$length = Get-Attr $info "Length" $null;
$extension = Get-Attr $info "Extension" $null;
$attributes = Get-Attr $info "Attributes" "";
If ($info)
{
$accesscontrol = $info.GetAccessControl();
}
Else
{
$accesscontrol = $null;
}
$owner = Get-Attr $accesscontrol "Owner" $null;
$creationtime = Get-Attr $info "CreationTime" $null;
$lastaccesstime = Get-Attr $info "LastAccessTime" $null;
$lastwritetime = Get-Attr $info "LastWriteTime" $null;
$epoch_date = Get-Date -Date "01/01/1970"
If ($iscontainer)
{
Set-Attr $fileinfo "isdir" $TRUE;
}
Else
{
Set-Attr $fileinfo "isdir" $FALSE;
Set-Attr $fileinfo "size" $length;
}
Set-Attr $fileinfo "extension" $extension;
Set-Attr $fileinfo "attributes" $attributes.ToString();
# Set-Attr $fileinfo "owner" $getaccesscontrol.Owner;
# Set-Attr $fileinfo "owner" $info.GetAccessControl().Owner;
Set-Attr $fileinfo "owner" $owner;
Set-Attr $fileinfo "creationtime" (Date_To_Timestamp $epoch_date $creationtime);
Set-Attr $fileinfo "lastaccesstime" (Date_To_Timestamp $epoch_date $lastaccesstime);
Set-Attr $fileinfo "lastwritetime" (Date_To_Timestamp $epoch_date $lastwritetime);
If (($get_checksum) -and -not $fileinfo.isdir)
{
$hash = Get-FileChecksum($file.FullName);
Set-Attr $fileinfo "checksum" $hash;
}
$files += $fileinfo
}
}
$result = New-Object psobject @{
ansible_facts = New-Object psobject @{
files = $files
}
}
Exit-Json $result;


@@ -1,66 +0,0 @@
#!powershell
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# WANT_JSON
# POWERSHELL_COMMON
$uninstall_native_path = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
$uninstall_wow6432_path = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall"
if ([System.IntPtr]::Size -eq 4) {
# This is a 32-bit Windows system, so we only check for 32-bit programs, which will be
# at the native registry location.
[PSObject []]$packages = Get-ChildItem -Path $uninstall_native_path |
Get-ItemProperty |
Select-Object -Property @{Name="name"; Expression={$_."DisplayName"}},
@{Name="version"; Expression={$_."DisplayVersion"}},
@{Name="publisher"; Expression={$_."Publisher"}},
@{Name="arch"; Expression={ "Win32" }} |
Where-Object { $_.name }
} else {
# This is a 64-bit Windows system, so we check for 64-bit programs in the native
# registry location, and also for 32-bit programs under Wow6432Node.
[PSObject []]$packages = Get-ChildItem -Path $uninstall_native_path |
Get-ItemProperty |
Select-Object -Property @{Name="name"; Expression={$_."DisplayName"}},
@{Name="version"; Expression={$_."DisplayVersion"}},
@{Name="publisher"; Expression={$_."Publisher"}},
@{Name="arch"; Expression={ "Win64" }} |
Where-Object { $_.name }
$packages += Get-ChildItem -Path $uninstall_wow6432_path |
Get-ItemProperty |
Select-Object -Property @{Name="name"; Expression={$_."DisplayName"}},
@{Name="version"; Expression={$_."DisplayVersion"}},
@{Name="publisher"; Expression={$_."Publisher"}},
@{Name="arch"; Expression={ "Win32" }} |
Where-Object { $_.name }
}
$result = New-Object psobject @{
ansible_facts = New-Object psobject @{
packages = $packages
}
changed = $false
}
Exit-Json $result;


@@ -1,30 +0,0 @@
#!powershell
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# WANT_JSON
# POWERSHELL_COMMON
$result = New-Object psobject @{
ansible_facts = New-Object psobject @{
services = Get-Service |
Select-Object -Property @{Name="name"; Expression={$_."DisplayName"}},
@{Name="win_svc_name"; Expression={$_."Name"}},
@{Name="state"; Expression={$_."Status".ToString().ToLower()}}
}
changed = $false
}
Exit-Json $result;


@@ -8,8 +8,6 @@ from datetime import timedelta
# global settings
from django.conf import global_settings
# ugettext lazy
from django.utils.translation import ugettext_lazy as _
# Update this module's local settings from the global settings module.
this_module = sys.modules[__name__]
@@ -199,12 +197,23 @@ LOCAL_STDOUT_EXPIRE_TIME = 2592000
# events into the database
JOB_EVENT_WORKERS = 4
# The number of seconds (must be an integer) to buffer callback receiver bulk
# writes in memory before flushing via JobEvent.objects.bulk_create()
JOB_EVENT_BUFFER_SECONDS = 1
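As the comment says, callback events are held in memory and flushed in bulk once the buffer interval elapses. An illustrative time-based buffer in that spirit (not the actual callback receiver; print stands in for JobEvent.objects.bulk_create):

import time

class EventBuffer:
    def __init__(self, flush_seconds=1, flush=print):
        self.flush_seconds = flush_seconds
        self.flush = flush  # stand-in for JobEvent.objects.bulk_create
        self.buf = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.buf.append(event)
        if time.monotonic() - self.last_flush >= self.flush_seconds:
            self.flush(self.buf)
            self.buf = []
            self.last_flush = time.monotonic()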
# The interval at which callback receiver statistics should be
# recorded
JOB_EVENT_STATISTICS_INTERVAL = 5
# The maximum size of the job event worker queue before requests are blocked
JOB_EVENT_MAX_QUEUE_SIZE = 10000
# The number of job events to migrate per-transaction when moving from int -> bigint
JOB_EVENT_MIGRATION_CHUNK_SIZE = 1000000
# The maximum allowed jobs to start on a given task manager cycle
START_TASK_LIMIT = 100
# Disallow sending session cookies over insecure connections
SESSION_COOKIE_SECURE = True
@@ -479,6 +488,7 @@ SOCIAL_AUTH_SAML_PIPELINE = _SOCIAL_AUTH_PIPELINE_BASE + (
'awx.sso.pipeline.update_user_orgs',
'awx.sso.pipeline.update_user_teams',
)
SAML_AUTO_CREATE_OBJECTS = True
SOCIAL_AUTH_LOGIN_URL = '/'
SOCIAL_AUTH_LOGIN_REDIRECT_URL = '/sso/complete/'
@@ -569,28 +579,9 @@ AWX_COLLECTIONS_ENABLED = True
# Follow symlinks when scanning for playbooks
AWX_SHOW_PLAYBOOK_LINKS = False
# Settings for primary galaxy server, should be set in the UI
PRIMARY_GALAXY_URL = ''
PRIMARY_GALAXY_USERNAME = ''
PRIMARY_GALAXY_TOKEN = ''
PRIMARY_GALAXY_PASSWORD = ''
PRIMARY_GALAXY_AUTH_URL = ''
# Settings for the public galaxy server(s).
PUBLIC_GALAXY_ENABLED = True
PUBLIC_GALAXY_SERVER = {
'id': 'galaxy',
'url': 'https://galaxy.ansible.com'
}
# Applies to any galaxy server
GALAXY_IGNORE_CERTS = False
# List of dicts of fallback (additional) Galaxy servers. If configured, these
# will be higher precedence than public Galaxy, but lower than primary Galaxy.
# Available options: 'id', 'url', 'username', 'password', 'token', 'auth_url'
FALLBACK_GALAXY_SERVERS = []
# Enable bubblewrap support for running jobs (playbook runs only).
# Note: This setting may be overridden by database settings.
AWX_PROOT_ENABLED = True
@@ -671,145 +662,32 @@ INV_ENV_VARIABLE_BLOCKED = ("HOME", "USER", "_", "TERM")
# ----------------
# -- Amazon EC2 --
# ----------------
# AWS does not appear to provide pretty region names via any API, so store the
# list of names here. The available region IDs will be pulled from boto.
# http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region
EC2_REGION_NAMES = {
'us-east-1': _('US East (Northern Virginia)'),
'us-east-2': _('US East (Ohio)'),
'us-west-2': _('US West (Oregon)'),
'us-west-1': _('US West (Northern California)'),
'ca-central-1': _('Canada (Central)'),
'eu-central-1': _('EU (Frankfurt)'),
'eu-west-1': _('EU (Ireland)'),
'eu-west-2': _('EU (London)'),
'ap-southeast-1': _('Asia Pacific (Singapore)'),
'ap-southeast-2': _('Asia Pacific (Sydney)'),
'ap-northeast-1': _('Asia Pacific (Tokyo)'),
'ap-northeast-2': _('Asia Pacific (Seoul)'),
'ap-south-1': _('Asia Pacific (Mumbai)'),
'sa-east-1': _('South America (Sao Paulo)'),
'us-gov-west-1': _('US West (GovCloud)'),
'cn-north-1': _('China (Beijing)'),
}
# Inventory variable name/values for determining if host is active/enabled.
EC2_ENABLED_VAR = 'ec2_state'
EC2_ENABLED_VALUE = 'running'
# Inventory variable name containing unique instance ID.
EC2_INSTANCE_ID_VAR = 'ec2_id'
# Filter for allowed group/host names when importing inventory from EC2.
EC2_GROUP_FILTER = r'^.+$'
EC2_HOST_FILTER = r'^.+$'
EC2_EXCLUDE_EMPTY_GROUPS = True
# ------------
# -- VMware --
# ------------
# Inventory variable name/values for determining whether a host is
# active in vSphere.
VMWARE_ENABLED_VAR = 'guest.gueststate'
VMWARE_ENABLED_VALUE = 'running'
# Inventory variable name containing the unique instance ID.
VMWARE_INSTANCE_ID_VAR = 'config.instanceUuid, config.instanceuuid'
# Filter for allowed group and host names when importing inventory
# from VMware.
VMWARE_GROUP_FILTER = r'^.+$'
VMWARE_HOST_FILTER = r'^.+$'
VMWARE_EXCLUDE_EMPTY_GROUPS = True
VMWARE_VALIDATE_CERTS = False
# ---------------------------
# -- Google Compute Engine --
# ---------------------------
# It's not possible to get zones in GCE without authenticating, so we
# provide a list here.
# Source: https://developers.google.com/compute/docs/zones
GCE_REGION_CHOICES = [
('us-east1-b', _('US East 1 (B)')),
('us-east1-c', _('US East 1 (C)')),
('us-east1-d', _('US East 1 (D)')),
('us-east4-a', _('US East 4 (A)')),
('us-east4-b', _('US East 4 (B)')),
('us-east4-c', _('US East 4 (C)')),
('us-central1-a', _('US Central (A)')),
('us-central1-b', _('US Central (B)')),
('us-central1-c', _('US Central (C)')),
('us-central1-f', _('US Central (F)')),
('us-west1-a', _('US West (A)')),
('us-west1-b', _('US West (B)')),
('us-west1-c', _('US West (C)')),
('europe-west1-b', _('Europe West 1 (B)')),
('europe-west1-c', _('Europe West 1 (C)')),
('europe-west1-d', _('Europe West 1 (D)')),
('europe-west2-a', _('Europe West 2 (A)')),
('europe-west2-b', _('Europe West 2 (B)')),
('europe-west2-c', _('Europe West 2 (C)')),
('asia-east1-a', _('Asia East (A)')),
('asia-east1-b', _('Asia East (B)')),
('asia-east1-c', _('Asia East (C)')),
('asia-southeast1-a', _('Asia Southeast (A)')),
('asia-southeast1-b', _('Asia Southeast (B)')),
('asia-northeast1-a', _('Asia Northeast (A)')),
('asia-northeast1-b', _('Asia Northeast (B)')),
('asia-northeast1-c', _('Asia Northeast (C)')),
('australia-southeast1-a', _('Australia Southeast (A)')),
('australia-southeast1-b', _('Australia Southeast (B)')),
('australia-southeast1-c', _('Australia Southeast (C)')),
]
# Inventory variable name/value for determining whether a host is active
# in Google Compute Engine.
GCE_ENABLED_VAR = 'status'
GCE_ENABLED_VALUE = 'running'
# Filter for allowed group and host names when importing inventory from
# Google Compute Engine.
GCE_GROUP_FILTER = r'^.+$'
GCE_HOST_FILTER = r'^.+$'
GCE_EXCLUDE_EMPTY_GROUPS = True
GCE_INSTANCE_ID_VAR = 'gce_id'
# --------------------------------------
# -- Microsoft Azure Resource Manager --
# --------------------------------------
# It's not possible to get zones in Azure without authenticating, so we
# provide a list here.
AZURE_RM_REGION_CHOICES = [
('eastus', _('US East')),
('eastus2', _('US East 2')),
('centralus', _('US Central')),
('northcentralus', _('US North Central')),
('southcentralus', _('US South Central')),
('westcentralus', _('US West Central')),
('westus', _('US West')),
('westus2', _('US West 2')),
('canadaeast', _('Canada East')),
('canadacentral', _('Canada Central')),
('brazilsouth', _('Brazil South')),
('northeurope', _('Europe North')),
('westeurope', _('Europe West')),
('ukwest', _('UK West')),
('uksouth', _('UK South')),
('eastasia', _('Asia East')),
('southestasia', _('Asia Southeast')),
('australiaeast', _('Australia East')),
('australiasoutheast', _('Australia Southeast')),
('westindia', _('India West')),
('southindia', _('India South')),
('japaneast', _('Japan East')),
('japanwest', _('Japan West')),
('koreacentral', _('Korea Central')),
('koreasouth', _('Korea South')),
]
AZURE_RM_GROUP_FILTER = r'^.+$'
AZURE_RM_HOST_FILTER = r'^.+$'
AZURE_RM_ENABLED_VAR = 'powerstate'
AZURE_RM_ENABLED_VALUE = 'running'
AZURE_RM_INSTANCE_ID_VAR = 'id'
@@ -820,8 +698,6 @@ AZURE_RM_EXCLUDE_EMPTY_GROUPS = True
# ---------------------
OPENSTACK_ENABLED_VAR = 'status'
OPENSTACK_ENABLED_VALUE = 'ACTIVE'
OPENSTACK_GROUP_FILTER = r'^.+$'
OPENSTACK_HOST_FILTER = r'^.+$'
OPENSTACK_EXCLUDE_EMPTY_GROUPS = True
OPENSTACK_INSTANCE_ID_VAR = 'openstack.id'
@@ -830,8 +706,6 @@ OPENSTACK_INSTANCE_ID_VAR = 'openstack.id'
# ---------------------
RHV_ENABLED_VAR = 'status'
RHV_ENABLED_VALUE = 'up'
RHV_GROUP_FILTER = r'^.+$'
RHV_HOST_FILTER = r'^.+$'
RHV_EXCLUDE_EMPTY_GROUPS = True
RHV_INSTANCE_ID_VAR = 'id'
@@ -840,8 +714,6 @@ RHV_INSTANCE_ID_VAR = 'id'
# ---------------------
TOWER_ENABLED_VAR = 'remote_tower_enabled'
TOWER_ENABLED_VALUE = 'true'
TOWER_GROUP_FILTER = r'^.+$'
TOWER_HOST_FILTER = r'^.+$'
TOWER_EXCLUDE_EMPTY_GROUPS = True
TOWER_INSTANCE_ID_VAR = 'remote_tower_id'
@@ -850,8 +722,6 @@ TOWER_INSTANCE_ID_VAR = 'remote_tower_id'
# ---------------------
SATELLITE6_ENABLED_VAR = 'foreman.enabled'
SATELLITE6_ENABLED_VALUE = 'True'
SATELLITE6_GROUP_FILTER = r'^.+$'
SATELLITE6_HOST_FILTER = r'^.+$'
SATELLITE6_EXCLUDE_EMPTY_GROUPS = True
SATELLITE6_INSTANCE_ID_VAR = 'foreman.id'
# SATELLITE6_GROUP_PREFIX and SATELLITE6_GROUP_PATTERNS defined in source vars
@@ -861,8 +731,6 @@ SATELLITE6_INSTANCE_ID_VAR = 'foreman.id'
# ---------------------
#CUSTOM_ENABLED_VAR =
#CUSTOM_ENABLED_VALUE =
CUSTOM_GROUP_FILTER = r'^.+$'
CUSTOM_HOST_FILTER = r'^.+$'
CUSTOM_EXCLUDE_EMPTY_GROUPS = False
#CUSTOM_INSTANCE_ID_VAR =
@@ -871,8 +739,6 @@ CUSTOM_EXCLUDE_EMPTY_GROUPS = False
# ---------------------
#SCM_ENABLED_VAR =
#SCM_ENABLED_VALUE =
SCM_GROUP_FILTER = r'^.+$'
SCM_HOST_FILTER = r'^.+$'
SCM_EXCLUDE_EMPTY_GROUPS = False
#SCM_INSTANCE_ID_VAR =
@@ -916,7 +782,7 @@ ASGI_APPLICATION = "awx.main.routing.application"
CHANNEL_LAYERS = {
"default": {
"BACKEND": "awx.main.consumers.ExpiringRedisChannelLayer",
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [BROKER_URL],
"capacity": 10000,
@@ -1129,6 +995,11 @@ LOGGING = {
'handlers': ['task_system', 'external_logger'],
'propagate': False
},
'awx.main.analytics': {
'handlers': ['task_system', 'external_logger'],
'level': 'INFO',
'propagate': False
},
'awx.main.scheduler': {
'handlers': ['task_system', 'external_logger'],
'propagate': False


@@ -575,7 +575,7 @@ register(
'SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS',
field_class=fields.StringListField,
default=[],
label=_('Google OAuth2 Whitelisted Domains'),
label=_('Google OAuth2 Allowed Domains'),
help_text=_('Update this setting to restrict the domains that are allowed to '
'log in using Google OAuth2.'),
category=_('Google OAuth2'),
@@ -919,6 +919,17 @@ def get_saml_entity_id():
return settings.TOWER_URL_BASE
register(
'SAML_AUTO_CREATE_OBJECTS',
field_class=fields.BooleanField,
default=True,
label=_('Automatically Create Organizations and Teams on SAML Login'),
help_text=_('When enabled (the default), mapped Organizations and Teams '
'will be created automatically on successful SAML login.'),
category=_('SAML'),
category_slug='saml',
)
register(
'SOCIAL_AUTH_SAML_CALLBACK_URL',
field_class=fields.CharField,


@@ -10,6 +10,7 @@ import logging
from social_core.exceptions import AuthException
# Django
from django.core.exceptions import ObjectDoesNotExist
from django.utils.translation import ugettext_lazy as _
from django.db.models import Q
@@ -80,11 +81,18 @@ def _update_m2m_from_expression(user, related, expr, remove=True):
def _update_org_from_attr(user, related, attr, remove, remove_admins, remove_auditors):
from awx.main.models import Organization
from django.conf import settings
org_ids = []
for org_name in attr:
org = Organization.objects.get_or_create(name=org_name)[0]
try:
if settings.SAML_AUTO_CREATE_OBJECTS:
org = Organization.objects.get_or_create(name=org_name)[0]
else:
org = Organization.objects.get(name=org_name)
except ObjectDoesNotExist:
continue
org_ids.append(org.id)
getattr(org, related).members.add(user)
@@ -199,11 +207,24 @@ def update_user_teams_by_saml_attr(backend, details, user=None, *args, **kwargs)
if organization_alias:
organization_name = organization_alias
org = Organization.objects.get_or_create(name=organization_name)[0]
try:
if settings.SAML_AUTO_CREATE_OBJECTS:
org = Organization.objects.get_or_create(name=organization_name)[0]
else:
org = Organization.objects.get(name=organization_name)
except ObjectDoesNotExist:
continue
if team_alias:
team_name = team_alias
team = Team.objects.get_or_create(name=team_name, organization=org)[0]
try:
if settings.SAML_AUTO_CREATE_OBJECTS:
team = Team.objects.get_or_create(name=team_name, organization=org)[0]
else:
team = Team.objects.get(name=team_name, organization=org)
except ObjectDoesNotExist:
continue
team_ids.append(team.id)
team.member_role.members.add(user)


@@ -174,8 +174,15 @@ class TestSAMLAttr():
return (o1, o2, o3)
@pytest.fixture
def mock_settings(self):
def mock_settings(self, request):
fixture_args = request.node.get_closest_marker('fixture_args')
if fixture_args and 'autocreate' in fixture_args.kwargs:
autocreate = fixture_args.kwargs['autocreate']
else:
autocreate = True
class MockSettings():
SAML_AUTO_CREATE_OBJECTS = autocreate
SOCIAL_AUTH_SAML_ORGANIZATION_ATTR = {
'saml_attr': 'memberOf',
'saml_admin_attr': 'admins',
@@ -304,3 +311,41 @@ class TestSAMLAttr():
assert Team.objects.get(
name='Yellow_Alias', organization__name='Default4_Alias').member_role.members.count() == 1
@pytest.mark.fixture_args(autocreate=False)
def test_autocreate_disabled(self, users, kwargs, mock_settings):
kwargs['response']['attributes']['memberOf'] = ['Default1', 'Default2', 'Default3']
kwargs['response']['attributes']['groups'] = ['Blue', 'Red', 'Green']
with mock.patch('django.conf.settings', mock_settings):
for u in users:
update_user_orgs_by_saml_attr(None, None, u, **kwargs)
update_user_teams_by_saml_attr(None, None, u, **kwargs)
assert Organization.objects.count() == 0
assert Team.objects.count() == 0
# precreate everything
o1 = Organization.objects.create(name='Default1')
o2 = Organization.objects.create(name='Default2')
o3 = Organization.objects.create(name='Default3')
Team.objects.create(name='Blue', organization_id=o1.id)
Team.objects.create(name='Blue', organization_id=o2.id)
Team.objects.create(name='Blue', organization_id=o3.id)
Team.objects.create(name='Red', organization_id=o1.id)
Team.objects.create(name='Green', organization_id=o1.id)
Team.objects.create(name='Green', organization_id=o3.id)
for u in users:
update_user_orgs_by_saml_attr(None, None, u, **kwargs)
update_user_teams_by_saml_attr(None, None, u, **kwargs)
assert o1.member_role.members.count() == 3
assert o2.member_role.members.count() == 3
assert o3.member_role.members.count() == 3
assert Team.objects.get(name='Blue', organization__name='Default1').member_role.members.count() == 3
assert Team.objects.get(name='Blue', organization__name='Default2').member_role.members.count() == 3
assert Team.objects.get(name='Blue', organization__name='Default3').member_role.members.count() == 3
assert Team.objects.get(name='Red', organization__name='Default1').member_role.members.count() == 3
assert Team.objects.get(name='Green', organization__name='Default1').member_role.members.count() == 3
assert Team.objects.get(name='Green', organization__name='Default3').member_role.members.count() == 3


@@ -119,6 +119,10 @@ function OutputStream ($q) {
this.counters.ready = ready;
this.counters.used = used;
this.counters.missing = missing;
if (!window.liveUpdates) {
this.counters.ready = event.counter;
}
};
this.bufferEmpty = threshold => {
@@ -141,6 +145,10 @@ function OutputStream ($q) {
const { total } = this.counters;
const readyCount = this.getReadyCount();
if (!window.liveUpdates) {
return true;
}
if (readyCount <= 0) {
return false;
}


@@ -23,12 +23,12 @@
icon="external"
tag="state._tagValue"
remove-tag="state._onRemoveTag(state)"
/>
></at-tag>
<at-tag
ng-show="state._disabled && state._tagValue"
icon="external"
tag="state._tagValue"
/>
></at-tag>
</div>
</span>
<input ng-if="!state.asTag" type="{{ type }}"


@@ -65,6 +65,9 @@ export default ['i18n', function(i18n) {
PROJECT_UPDATE_VVV: {
type: 'toggleSwitch',
},
GALAXY_IGNORE_CERTS: {
type: 'toggleSwitch',
},
AWX_ROLES_ENABLED: {
type: 'toggleSwitch',
},
@@ -74,31 +77,6 @@ export default ['i18n', function(i18n) {
AWX_SHOW_PLAYBOOK_LINKS: {
type: 'toggleSwitch',
},
PRIMARY_GALAXY_URL: {
type: 'text',
reset: 'PRIMARY_GALAXY_URL',
},
PRIMARY_GALAXY_USERNAME: {
type: 'text',
reset: 'PRIMARY_GALAXY_USERNAME',
},
PRIMARY_GALAXY_PASSWORD: {
type: 'sensitive',
hasShowInputButton: true,
reset: 'PRIMARY_GALAXY_PASSWORD',
},
PRIMARY_GALAXY_TOKEN: {
type: 'sensitive',
hasShowInputButton: true,
reset: 'PRIMARY_GALAXY_TOKEN',
},
PRIMARY_GALAXY_AUTH_URL: {
type: 'text',
reset: 'PRIMARY_GALAXY_AUTH_URL',
},
PUBLIC_GALAXY_ENABLED: {
type: 'toggleSwitch',
},
AWX_TASK_ENV: {
type: 'textarea',
reset: 'AWX_TASK_ENV',
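
Entries in this map key a setting name to the widget used to render it; the hunks add a GALAXY_IGNORE_CERTS toggle and drop the retired PRIMARY_GALAXY_* fields. A sketch (helper name assumed, not from the source) of how such a map can be consumed:

    const fieldTypes = {
        GALAXY_IGNORE_CERTS: { type: 'toggleSwitch' },
        AWX_TASK_ENV: { type: 'textarea', reset: 'AWX_TASK_ENV' }
    };

    // Hypothetical lookup: settings without an entry fall back to a plain text input.
    function widgetFor(name) {
        return (fieldTypes[name] && fieldTypes[name].type) || 'text';
    }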

View File

@@ -24,49 +24,6 @@ export default ['$state', 'ConfigData', '$scope', 'SourcesFormDefinition', 'Pars
const virtualEnvs = ConfigData.custom_virtualenvs || [];
$scope.custom_virtualenvs_options = virtualEnvs;
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'rax_regions',
choice_name: 'rax_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'ec2_regions',
choice_name: 'ec2_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'gce_regions',
choice_name: 'gce_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'azure_regions',
choice_name: 'azure_rm_region_choices',
options: inventorySourcesOptions
});
// Load options for group_by
GetChoices({
scope: $scope,
field: 'group_by',
variable: 'ec2_group_by',
choice_name: 'ec2_group_by_choices',
options: inventorySourcesOptions
});
initRegionSelect();
GetChoices({
scope: $scope,
field: 'verbosity',
@@ -205,20 +162,11 @@ export default ['$state', 'ConfigData', '$scope', 'SourcesFormDefinition', 'Pars
$scope.projectBasePath = GetBasePath('projects') + '?not__status=never updated';
}
// reset fields
$scope.group_by_choices = source === 'ec2' ? $scope.ec2_group_by : null;
// azure_rm regions choices are keyed as "azure" in an OPTIONS request to the inventory_sources endpoint
$scope.source_region_choices = source === 'azure_rm' ? $scope.azure_regions : $scope[source + '_regions'];
$scope.cloudCredentialRequired = source !== '' && source !== 'scm' && source !== 'custom' && source !== 'ec2' ? true : false;
$scope.source_regions = null;
$scope.credential = null;
$scope.credential_name = null;
$scope.group_by = null;
$scope.group_by_choices = [];
$scope.overwrite_vars = false;
initRegionSelect();
};
// region / source options callback
$scope.$on('sourceTypeOptionsReady', function() {
CreateSelect2({
@@ -227,57 +175,6 @@ export default ['$state', 'ConfigData', '$scope', 'SourcesFormDefinition', 'Pars
});
});
function initRegionSelect(){
CreateSelect2({
element: '#inventory_source_source_regions',
multiple: true
});
let add_new = false;
if( _.get($scope, 'source') === 'ec2' || _.get($scope.source, 'value') === 'ec2') {
$scope.group_by_choices = $scope.ec2_group_by;
$scope.groupByPopOver = "<p>" + i18n._("Select which groups to create automatically. ") +
$rootScope.BRAND_NAME + i18n._(" will create group names similar to the following examples based on the options selected:") + "</p><ul>" +
"<li>" + i18n._("Availability Zone:") + "<strong>zones &raquo; us-east-1b</strong></li>" +
"<li>" + i18n._("Image ID:") + "<strong>images &raquo; ami-b007ab1e</strong></li>" +
"<li>" + i18n._("Instance ID:") + "<strong>instances &raquo; i-ca11ab1e</strong></li>" +
"<li>" + i18n._("Instance Type:") + "<strong>types &raquo; type_m1_medium</strong></li>" +
"<li>" + i18n._("Key Name:") + "<strong>keys &raquo; key_testing</strong></li>" +
"<li>" + i18n._("Region:") + "<strong>regions &raquo; us-east-1</strong></li>" +
"<li>" + i18n._("Security Group:") + "<strong>security_groups &raquo; security_group_default</strong></li>" +
"<li>" + i18n._("Tags:") + "<strong>tags &raquo; tag_Name &raquo; tag_Name_host1</strong></li>" +
"<li>" + i18n._("VPC ID:") + "<strong>vpcs &raquo; vpc-5ca1ab1e</strong></li>" +
"<li>" + i18n._("Tag None:") + "<strong>tags &raquo; tag_none</strong></li>" +
"</ul><p>" + i18n._("If blank, all groups above are created except") + "<em>" + i18n._("Instance ID") + "</em>.</p>";
$scope.instanceFilterPopOver = "<p>" + i18n._("Provide a comma-separated list of filter expressions. ") +
i18n._("Hosts are imported to ") + $rootScope.BRAND_NAME + i18n._(" when ") + "<em>" + i18n._("ANY") + "</em>" + i18n._(" of the filters match.") + "</p>" +
i18n._("Limit to hosts having a tag:") + "<br />\n" +
"<blockquote>tag-key=TowerManaged</blockquote>\n" +
i18n._("Limit to hosts using either key pair:") + "<br />\n" +
"<blockquote>key-name=staging, key-name=production</blockquote>\n" +
i18n._("Limit to hosts where the Name tag begins with ") + "<em>" + i18n._("test") + "</em>:<br />\n" +
"<blockquote>tag:Name=test*</blockquote>\n" +
"<p>" + i18n._("View the ") + "<a href=\"http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html\" target=\"_blank\">" + i18n._("Describe Instances documentation") + "</a> " +
i18n._("for a complete list of supported filters.") + "</p>";
}
if( _.get($scope, 'source') === 'vmware' || _.get($scope.source, 'value') === 'vmware') {
add_new = true;
$scope.group_by_choices = [];
$scope.group_by = $scope.group_by_choices;
$scope.groupByPopOver = i18n._("Specify which groups to create automatically. Group names will be created similar to the options selected. If blank, all groups above are created. Refer to Ansible Tower documentation for more detail.");
$scope.instanceFilterPopOver = i18n._("Provide a comma-separated list of filter expressions. Hosts are imported when all of the filters match. Refer to Ansible Tower documentation for more detail.");
}
if( _.get($scope, 'source') === 'tower' || _.get($scope.source, 'value') === 'tower') {
$scope.instanceFilterPopOver = i18n._("Provide the named URL encoded name or id of the remote Tower inventory to be imported.");
}
CreateSelect2({
element: '#inventory_source_group_by',
multiple: true,
addNew: add_new
});
}
$scope.formCancel = function() {
$state.go('^');
};
@@ -289,7 +186,6 @@ export default ['$state', 'ConfigData', '$scope', 'SourcesFormDefinition', 'Pars
name: $scope.name,
description: $scope.description,
inventory: inventoryData.id,
instance_filters: $scope.instance_filters,
source_script: $scope.inventory_script,
credential: $scope.credential,
overwrite: $scope.overwrite,
@@ -298,9 +194,9 @@ export default ['$state', 'ConfigData', '$scope', 'SourcesFormDefinition', 'Pars
verbosity: $scope.verbosity.value,
update_cache_timeout: $scope.update_cache_timeout || 0,
custom_virtualenv: $scope.custom_virtualenv || null,
// comma-delimited strings
group_by: SourcesService.encodeGroupBy($scope.source, $scope.group_by),
source_regions: _.map($scope.source_regions, 'value').join(','),
enabled_var: $scope.enabled_var,
enabled_value: $scope.enabled_value,
host_filter: $scope.host_filter
};
if ($scope.source) {
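
The save handler above now posts host_filter and the enabled_var/enabled_value pair in place of the removed group_by and source_regions fields. A rough sketch of the resulting payload (all values illustrative):

    const params = {
        name: 'ec2-us-east',
        inventory: 42,
        source: 'ec2',
        enabled_var: 'ec2_state.name',   // dot notation into the host's variables
        enabled_value: 'running',        // host is enabled when the variable equals this
        host_filter: '^web-.*'           // regex; only matching host names are imported
    };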

View File

@@ -34,7 +34,6 @@ export default ['$state', '$scope', 'ParseVariableString', 'ParseTypeChange',
{overwrite_vars: inventorySourceData.overwrite_vars},
{update_on_launch: inventorySourceData.update_on_launch},
{update_cache_timeout: inventorySourceData.update_cache_timeout},
{instance_filters: inventorySourceData.instance_filters},
{inventory_script: inventorySourceData.source_script},
{verbosity: inventorySourceData.verbosity});
@@ -100,56 +99,6 @@ export default ['$state', '$scope', 'ParseVariableString', 'ParseTypeChange',
scope: $scope,
variable: 'source_type_options'
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'rax_regions',
choice_name: 'rax_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'ec2_regions',
choice_name: 'ec2_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'gce_regions',
choice_name: 'gce_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'source_regions',
variable: 'azure_regions',
choice_name: 'azure_rm_region_choices',
options: inventorySourcesOptions
});
GetChoices({
scope: $scope,
field: 'group_by',
variable: 'ec2_group_by',
choice_name: 'ec2_group_by_choices',
options: inventorySourcesOptions
});
var source = $scope.source === 'azure_rm' ? 'azure' : $scope.source;
var regions = inventorySourceData.source_regions.split(',');
// azure_rm regions choices are keyed as "azure" in an OPTIONS request to the inventory_sources endpoint
$scope.source_region_choices = $scope[source + '_regions'];
// the API stores azure regions as all-lowercase strings - but the azure regions received from OPTIONS are Snake_Cased
if (source === 'azure') {
$scope.source_regions = _.map(regions, (region) => _.find($scope[source + '_regions'], (o) => o.value.toLowerCase() === region));
}
// all other regions are 1-1
else {
$scope.source_regions = _.map(regions, (region) => _.find($scope[source + '_regions'], (o) => o.value === region));
}
initRegionSelect();
GetChoices({
scope: $scope,
@@ -236,63 +185,6 @@ export default ['$state', '$scope', 'ParseVariableString', 'ParseTypeChange',
}
}
function initRegionSelect() {
CreateSelect2({
element: '#inventory_source_source_regions',
multiple: true
});
let add_new = false;
if( _.get($scope, 'source') === 'ec2' || _.get($scope.source, 'value') === 'ec2') {
$scope.group_by_choices = $scope.ec2_group_by;
let group_by = inventorySourceData.group_by.split(',');
$scope.group_by = _.map(group_by, (item) => _.find($scope.ec2_group_by, { value: item }));
$scope.groupByPopOver = "<p>" + i18n._("Select which groups to create automatically. ") +
$rootScope.BRAND_NAME + i18n._(" will create group names similar to the following examples based on the options selected:") + "</p><ul>" +
"<li>" + i18n._("Availability Zone:") + "<strong>zones &raquo; us-east-1b</strong></li>" +
"<li>" + i18n._("Image ID:") + "<strong>images &raquo; ami-b007ab1e</strong></li>" +
"<li>" + i18n._("Instance ID:") + "<strong>instances &raquo; i-ca11ab1e</strong></li>" +
"<li>" + i18n._("Instance Type:") + "<strong>types &raquo; type_m1_medium</strong></li>" +
"<li>" + i18n._("Key Name:") + "<strong>keys &raquo; key_testing</strong></li>" +
"<li>" + i18n._("Region:") + "<strong>regions &raquo; us-east-1</strong></li>" +
"<li>" + i18n._("Security Group:") + "<strong>security_groups &raquo; security_group_default</strong></li>" +
"<li>" + i18n._("Tags:") + "<strong>tags &raquo; tag_Name &raquo; tag_Name_host1</strong></li>" +
"<li>" + i18n._("VPC ID:") + "<strong>vpcs &raquo; vpc-5ca1ab1e</strong></li>" +
"<li>" + i18n._("Tag None:") + "<strong>tags &raquo; tag_none</strong></li>" +
"</ul><p>" + i18n._("If blank, all groups above are created except") + "<em>" + i18n._("Instance ID") + "</em>.</p>";
$scope.instanceFilterPopOver = "<p>" + i18n._("Provide a comma-separated list of filter expressions. ") +
i18n._("Hosts are imported to ") + $rootScope.BRAND_NAME + i18n._(" when ") + "<em>" + i18n._("ANY") + "</em>" + i18n._(" of the filters match.") + "</p>" +
i18n._("Limit to hosts having a tag:") + "<br />\n" +
"<blockquote>tag-key=TowerManaged</blockquote>\n" +
i18n._("Limit to hosts using either key pair:") + "<br />\n" +
"<blockquote>key-name=staging, key-name=production</blockquote>\n" +
i18n._("Limit to hosts where the Name tag begins with ") + "<em>" + i18n._("test") + "</em>:<br />\n" +
"<blockquote>tag:Name=test*</blockquote>\n" +
"<p>" + i18n._("View the ") + "<a href=\"http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html\" target=\"_blank\">" + i18n._("Describe Instances documentation") + "</a> " +
i18n._("for a complete list of supported filters.") + "</p>";
}
if( _.get($scope, 'source') === 'vmware' || _.get($scope.source, 'value') === 'vmware') {
add_new = true;
$scope.group_by_choices = (inventorySourceData.group_by) ? inventorySourceData.group_by.split(',')
.map((i) => ({name: i, label: i, value: i})) : [];
$scope.group_by = $scope.group_by_choices;
$scope.groupByPopOver = i18n._(`Specify which groups to create automatically. Group names will be created similar to the options selected. If blank, all groups above are created. Refer to Ansible Tower documentation for more detail.`);
$scope.instanceFilterPopOver = i18n._(`Provide a comma-separated list of filter expressions. Hosts are imported when all of the filters match. Refer to Ansible Tower documentation for more detail.`);
}
if( _.get($scope, 'source') === 'tower' || _.get($scope.source, 'value') === 'tower') {
$scope.instanceFilterPopOver = i18n._(`Provide the named URL encoded name or id of the remote Tower inventory to be imported.`);
}
CreateSelect2({
element: '#inventory_source_group_by',
multiple: true,
addNew: add_new
});
}
$scope.lookupProject = function(){
$state.go('.project', {
project_search: {
@@ -341,12 +233,13 @@ export default ['$state', '$scope', 'ParseVariableString', 'ParseTypeChange',
$scope.formSave = function() {
var params;
console.log($scope);
params = {
id: inventorySourceData.id,
name: $scope.name,
description: $scope.description,
inventory: inventoryData.id,
instance_filters: $scope.instance_filters,
source_script: $scope.inventory_script,
credential: $scope.credential,
overwrite: $scope.overwrite,
@@ -355,9 +248,9 @@ export default ['$state', '$scope', 'ParseVariableString', 'ParseTypeChange',
update_cache_timeout: $scope.update_cache_timeout || 0,
verbosity: $scope.verbosity.value,
custom_virtualenv: $scope.custom_virtualenv || null,
// comma-delimited strings
group_by: SourcesService.encodeGroupBy($scope.source, $scope.group_by),
source_regions: _.map($scope.source_regions, 'value').join(','),
enabled_var: $scope.enabled_var,
enabled_value: $scope.enabled_value,
host_filter: $scope.host_filter
};
if ($scope.source) {
@@ -417,20 +310,10 @@ export default ['$state', '$scope', 'ParseVariableString', 'ParseTypeChange',
});
}
// reset fields
$scope.group_by_choices = source === 'ec2' ? $scope.ec2_group_by : null;
// azure_rm regions choices are keyed as "azure" in an OPTIONS request to the inventory_sources endpoint
$scope.source_region_choices = source === 'azure_rm' ? $scope.azure_regions : $scope[source + '_regions'];
$scope.cloudCredentialRequired = source !== '' && source !== 'scm' && source !== 'custom' && source !== 'ec2' ? true : false;
$scope.source_regions = null;
$scope.credential = null;
$scope.credential_name = null;
$scope.group_by = null;
$scope.group_by_choices = [];
$scope.overwrite_vars = false;
initRegionSelect();
};
}
];

View File

@@ -126,46 +126,6 @@ export default ['NotificationsList', 'i18n', function(NotificationsList, i18n){
includeInventoryFileNotFoundError: true,
subForm: 'sourceSubForm'
},
source_regions: {
label: i18n._('Regions'),
type: 'select',
ngOptions: 'source.label for source in source_region_choices track by source.value',
multiSelect: true,
ngShow: "source && (source.value == 'rax' || source.value == 'ec2' || source.value == 'gce' || source.value == 'azure_rm')",
dataTitle: i18n._('Source Regions'),
dataPlacement: 'right',
awPopOver: "<p>" + i18n._("Click on the regions field to see a list of regions for your cloud provider. You can select multiple regions, or choose") +
"<em>" + i18n._("All") + "</em> " + i18n._("to include all regions. Only Hosts associated with the selected regions will be updated.") + "</p>",
dataContainer: 'body',
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
instance_filters: {
label: i18n._("Instance Filters"),
type: 'text',
ngShow: "source && (source.value == 'ec2' || source.value == 'vmware' || source.value == 'tower')",
dataTitle: i18n._('Instance Filters'),
dataPlacement: 'right',
awPopOverWatch: 'instanceFilterPopOver',
awPopOver: '{{ instanceFilterPopOver }}',
dataContainer: 'body',
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
group_by: {
label: i18n._('Only Group By'),
type: 'select',
ngShow: "source && (source.value == 'ec2' || source.value == 'vmware')",
ngOptions: 'source.label for source in group_by_choices track by source.value',
multiSelect: true,
dataTitle: i18n._("Only Group By"),
dataPlacement: 'right',
awPopOverWatch: 'groupByPopOver',
awPopOver: '{{ groupByPopOver }}',
dataContainer: 'body',
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
inventory_script: {
label : i18n._("Custom Inventory Script"),
type: 'lookup',
@@ -340,6 +300,36 @@ export default ['NotificationsList', 'i18n', function(NotificationsList, i18n){
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
host_filter: {
label: i18n._("Host Filter"),
type: 'text',
dataTitle: i18n._('Host Filter'),
dataPlacement: 'right',
awPopOver: "<p>" + i18n._("Regular expression where only matching host names will be imported. The filter is applied as a post-processing step after any inventory plugin filters are applied.") + "</p>",
dataContainer: 'body',
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
enabled_var: {
label: i18n._("Enabled Variable"),
type: 'text',
dataTitle: i18n._('Enabled Variable'),
dataPlacement: 'right',
awPopOver: "<p>" + i18n._("Retrieve the enabled state from the given dict of host variables. The enabled variable may be specified using dot notation, e.g: 'foo.bar'") + "</p>",
dataContainer: 'body',
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
enabled_value: {
label: i18n._("Enabled Value"),
type: 'text',
dataTitle: i18n._('Enabled Value'),
dataPlacement: 'right',
awPopOver: "<p>" + i18n._("This field is ignored unless an Enabled Variable is set. If the enabled variable matches this value, the host will be enabled on import.") + "</p>",
dataContainer: 'body',
ngDisabled: '!(inventory_source_obj.summary_fields.user_capabilities.edit || canAdd)',
subForm: 'sourceSubForm'
},
checkbox_group: {
label: i18n._('Update Options'),
type: 'checkbox_group',
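
The popover text above describes dot notation for the Enabled Variable; a small hypothetical helper (not part of the form definition) shows the lookup it implies:

    // 'foo.bar' walks nested host variables: hostVars.foo.bar
    function hostIsEnabled(hostVars, enabledVar, enabledValue) {
        const actual = enabledVar.split('.').reduce(
            (obj, key) => (obj == null ? undefined : obj[key]), hostVars);
        return String(actual) === String(enabledValue);
    }

    hostIsEnabled({ foo: { bar: 'true' } }, 'foo.bar', 'true');  // => true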

View File

@@ -116,24 +116,6 @@ export default
.catch(this.error.bind(this))
.finally(Wait('stop'));
},
encodeGroupBy(source, group_by){
source = source && source.value ? source.value : '';
if(source === 'ec2'){
return _.map(group_by, 'value').join(',');
}
if(source === 'vmware'){
group_by = _.map(group_by, (i) => {return i.value;});
$("#inventory_source_group_by").siblings(".select2").first().find(".select2-selection__choice").each(function(optionIndex, option){
group_by.push(option.title);
});
group_by = (Array.isArray(group_by)) ? _.uniq(group_by).join() : "";
return group_by;
}
else {
return;
}
},
deleteHosts(id) {
this.url = GetBasePath('inventory_sources') + id + '/hosts/';
Rest.setUrl(this.url);

View File

@@ -4,11 +4,12 @@
* All Rights Reserved
*************************************************/
export default ['$scope', '$rootScope', '$location', '$stateParams',
'OrganizationForm', 'GenerateForm', 'Rest', 'Alert',
'ProcessErrors', 'GetBasePath', 'Wait', 'CreateSelect2', '$state','InstanceGroupsService', 'ConfigData',
function($scope, $rootScope, $location, $stateParams, OrganizationForm,
GenerateForm, Rest, Alert, ProcessErrors, GetBasePath, Wait, CreateSelect2, $state, InstanceGroupsService, ConfigData) {
export default ['$scope', '$rootScope', '$location', '$stateParams', 'OrganizationForm',
'GenerateForm', 'Rest', 'Alert', 'ProcessErrors', 'GetBasePath', 'Wait', 'CreateSelect2',
'$state','InstanceGroupsService', 'ConfigData', 'MultiCredentialService', 'defaultGalaxyCredential',
function($scope, $rootScope, $location, $stateParams, OrganizationForm,
GenerateForm, Rest, Alert, ProcessErrors, GetBasePath, Wait, CreateSelect2,
$state, InstanceGroupsService, ConfigData, MultiCredentialService, defaultGalaxyCredential) {
Rest.setUrl(GetBasePath('organizations'));
Rest.options()
@@ -37,6 +38,8 @@ export default ['$scope', '$rootScope', '$location', '$stateParams',
// apply form definition's default field values
GenerateForm.applyDefaults(form, $scope);
$scope.credentials = defaultGalaxyCredential || [];
}
// Save
@@ -57,18 +60,32 @@ export default ['$scope', '$rootScope', '$location', '$stateParams',
const organization_id = data.id,
instance_group_url = data.related.instance_groups;
InstanceGroupsService.addInstanceGroups(instance_group_url, $scope.instance_groups)
MultiCredentialService
.saveRelatedSequentially({
related: {
credentials: data.related.galaxy_credentials
}
}, $scope.credentials)
.then(() => {
Wait('stop');
$rootScope.$broadcast("EditIndicatorChange", "organizations", organization_id);
$state.go('organizations.edit', {organization_id: organization_id}, {reload: true});
})
.catch(({data, status}) => {
InstanceGroupsService.addInstanceGroups(instance_group_url, $scope.instance_groups)
.then(() => {
Wait('stop');
$rootScope.$broadcast("EditIndicatorChange", "organizations", organization_id);
$state.go('organizations.edit', {organization_id: organization_id}, {reload: true});
})
.catch(({data, status}) => {
ProcessErrors($scope, data, status, form, {
hdr: 'Error!',
msg: 'Failed to save instance groups. POST returned status: ' + status
});
});
}).catch(({data, status}) => {
ProcessErrors($scope, data, status, form, {
hdr: 'Error!',
msg: 'Failed to save instance groups. POST returned status: ' + status
msg: 'Failed to save Galaxy credentials. POST returned status: ' + status
});
});
})
.catch(({data, status}) => {
let explanation = _.has(data, "name") ? data.name[0] : "";
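
The nesting above saves the Galaxy credentials first and only then the instance groups, with a distinct error message per step. Flattened into a single chain (same service calls, simplified single-message error handling), the ordering reads as:

    MultiCredentialService
        .saveRelatedSequentially({ related: { credentials: data.related.galaxy_credentials } }, $scope.credentials)
        .then(() => InstanceGroupsService.addInstanceGroups(instance_group_url, $scope.instance_groups))
        .then(() => $state.go('organizations.edit', { organization_id }, { reload: true }))
        .catch(({ data, status }) => ProcessErrors($scope, data, status, form, {
            hdr: 'Error!',
            msg: 'Save failed. POST returned status: ' + status
        }));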

View File

@@ -6,10 +6,12 @@
export default ['$scope', '$location', '$stateParams', 'isOrgAdmin', 'isNotificationAdmin',
'OrganizationForm', 'Rest', 'ProcessErrors', 'Prompt', 'i18n', 'isOrgAuditor',
'GetBasePath', 'Wait', '$state', 'ToggleNotification', 'CreateSelect2', 'InstanceGroupsService', 'InstanceGroupsData', 'ConfigData',
'GetBasePath', 'Wait', '$state', 'ToggleNotification', 'CreateSelect2', 'InstanceGroupsService',
'InstanceGroupsData', 'ConfigData', 'GalaxyCredentialsData', 'MultiCredentialService',
function($scope, $location, $stateParams, isOrgAdmin, isNotificationAdmin,
OrganizationForm, Rest, ProcessErrors, Prompt, i18n, isOrgAuditor,
GetBasePath, Wait, $state, ToggleNotification, CreateSelect2, InstanceGroupsService, InstanceGroupsData, ConfigData) {
GetBasePath, Wait, $state, ToggleNotification, CreateSelect2, InstanceGroupsService,
InstanceGroupsData, ConfigData, GalaxyCredentialsData, MultiCredentialService) {
let form = OrganizationForm(),
defaultUrl = GetBasePath('organizations'),
@@ -29,6 +31,7 @@ export default ['$scope', '$location', '$stateParams', 'isOrgAdmin', 'isNotifica
});
$scope.instance_groups = InstanceGroupsData;
$scope.credentials = GalaxyCredentialsData;
const virtualEnvs = ConfigData.custom_virtualenvs || [];
$scope.custom_virtualenvs_visible = virtualEnvs.length > 1;
$scope.custom_virtualenvs_options = virtualEnvs.filter(
@@ -100,7 +103,14 @@ export default ['$scope', '$location', '$stateParams', 'isOrgAdmin', 'isNotifica
Rest.setUrl(defaultUrl + id + '/');
Rest.put(params)
.then(() => {
InstanceGroupsService.editInstanceGroups(instance_group_url, $scope.instance_groups)
MultiCredentialService
.saveRelatedSequentially({
related: {
credentials: $scope.organization_obj.related.galaxy_credentials
}
}, $scope.credentials)
.then(() => {
InstanceGroupsService.editInstanceGroups(instance_group_url, $scope.instance_groups)
.then(() => {
Wait('stop');
$state.go($state.current, {}, { reload: true });
@@ -111,6 +121,12 @@ export default ['$scope', '$location', '$stateParams', 'isOrgAdmin', 'isNotifica
msg: 'Failed to update instance groups. POST returned status: ' + status
});
});
}).catch(({data, status}) => {
ProcessErrors($scope, data, status, form, {
hdr: 'Error!',
msg: 'Failed to save Galaxy credentials. POST returned status: ' + status
});
});
$scope.organization_name = $scope.name;
main = params;
})

View File

@@ -0,0 +1,123 @@
export default ['templateUrl', '$window', function(templateUrl, $window) {
return {
restrict: 'E',
scope: {
galaxyCredentials: '='
},
templateUrl: templateUrl('organizations/galaxy-credentials-multiselect/galaxy-credentials-modal/galaxy-credentials-modal'),
link: function(scope, element) {
$('#galaxy-credentials-modal').on('hidden.bs.modal', function () {
$('#galaxy-credentials-modal').off('hidden.bs.modal');
$(element).remove();
});
scope.showModal = function() {
$('#galaxy-credentials-modal').modal('show');
};
scope.destroyModal = function() {
$('#galaxy-credentials-modal').modal('hide');
};
},
controller: ['$scope', '$compile', 'QuerySet', 'GetBasePath','generateList', 'CredentialList', function($scope, $compile, qs, GetBasePath, GenerateList, CredentialList) {
function init() {
$scope.credential_queryset = {
order_by: 'name',
page_size: 5,
credential_type__kind: 'galaxy'
};
$scope.credential_default_params = {
order_by: 'name',
page_size: 5,
credential_type__kind: 'galaxy'
};
qs.search(GetBasePath('credentials'), $scope.credential_queryset)
.then(res => {
$scope.credential_dataset = res.data;
$scope.credentials = $scope.credential_dataset.results;
let credentialList = _.cloneDeep(CredentialList);
credentialList.listTitle = false;
credentialList.well = false;
credentialList.multiSelect = true;
credentialList.multiSelectPreview = {
selectedRows: 'credTags',
availableRows: 'credentials'
};
credentialList.fields.name.ngClick = "linkoutCredential(credential)";
credentialList.fields.name.columnClass = 'col-md-11 col-sm-11 col-xs-11';
delete credentialList.fields.consumed_capacity;
delete credentialList.fields.jobs_running;
let html = `${GenerateList.build({
list: credentialList,
input_type: 'galaxy-credentials-modal-body',
hideViewPerPage: true,
mode: 'lookup'
})}`;
$scope.list = credentialList;
$('#galaxy-credentials-modal-body').append($compile(html)($scope));
if ($scope.galaxyCredentials) {
$scope.galaxyCredentials = $scope.galaxyCredentials.map( (item) => {
item.isSelected = true;
if (!$scope.credTags) {
$scope.credTags = [];
}
$scope.credTags.push(item);
return item;
});
}
$scope.showModal();
});
$scope.$watch('credentials', function(){
angular.forEach($scope.credentials, function(credentialRow) {
angular.forEach($scope.credTags, function(selectedCredential){
if(selectedCredential.id === credentialRow.id) {
credentialRow.isSelected = true;
}
});
});
});
}
init();
$scope.$on("selectedOrDeselected", function(e, value) {
let item = value.value;
if (value.isSelected) {
if(!$scope.credTags) {
$scope.credTags = [];
}
$scope.credTags.push(item);
} else {
_.remove($scope.credTags, { id: item.id });
}
});
$scope.linkoutCredential = function(credential) {
$window.open('/#/credentials/' + credential.id,'_blank');
};
$scope.cancelForm = function() {
$scope.destroyModal();
};
$scope.saveForm = function() {
$scope.galaxyCredentials = $scope.credTags;
$scope.destroyModal();
};
}]
};
}];
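
The credential_queryset above uses Django-style filter keys that pass straight through as API query parameters, so the search it drives amounts to the following (endpoint shape assumed):

    qs.search(GetBasePath('credentials'), {
        order_by: 'name',
        page_size: 5,
        credential_type__kind: 'galaxy'   // filter to Galaxy-type credentials only
    });
    // => GET /api/v2/credentials/?credential_type__kind=galaxy&order_by=name&page_size=5
    //    resolving with { data: { results: [...] } }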

View File

@@ -0,0 +1,22 @@
<div id="galaxy-credentials-modal" class="Lookup modal fade">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header Form-header">
<div class="Form-title Form-title--uppercase" translate>Select Galaxy Credentials</div>
<div class="Form-header--fields"></div>
<div class="Form-exitHolder">
<button aria-label="{{'Close'|translate}}" type="button" class="Form-exit" ng-click="cancelForm()">
<i class="fa fa-times-circle"></i>
</button>
</div>
</div>
<div class="modal-body">
<div id="galaxy-credentials-modal-body"> {{ credential }} </div>
</div>
<div class="modal-footer">
<button type="button" ng-click="cancelForm()" class="btn btn-default" translate>CANCEL</button>
<button type="button" ng-click="saveForm()" ng-disabled="!credentials || credentials.length === 0" class="Lookup-save btn btn-primary" translate>SAVE</button>
</div>
</div>
</div>
</div>

View File

@@ -0,0 +1,14 @@
export default ['$scope',
function($scope) {
$scope.galaxyCredentialsTags = [];
$scope.$watch('galaxyCredentials', function() {
$scope.galaxyCredentialsTags = $scope.galaxyCredentials;
}, true);
$scope.deleteTag = function(tag){
_.remove($scope.galaxyCredentials, {id: tag.id});
};
}
];

View File

@@ -0,0 +1,15 @@
#instance-groups-panel {
table {
overflow: hidden;
}
.List-header {
margin-bottom: 20px;
}
.isActive {
border-left: 10px solid @list-row-select-bord;
}
.instances-list,
.instance-jobs-list {
margin-top: 20px;
}
}

View File

@@ -0,0 +1,19 @@
import galaxyCredentialsMultiselectController from './galaxy-credentials-multiselect.controller';
export default ['templateUrl', '$compile',
function(templateUrl, $compile) {
return {
scope: {
galaxyCredentials: '=',
fieldIsDisabled: '='
},
restrict: 'E',
templateUrl: templateUrl('organizations/galaxy-credentials-multiselect/galaxy-credentials'),
controller: galaxyCredentialsMultiselectController,
link: function(scope) {
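// Note: the handler name appears carried over from the instance-groups multiselect
// this directive mirrors; what it actually opens is the Galaxy credentials modal.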
scope.openInstanceGroupsModal = function() {
$('#content-container').append($compile('<galaxy-credentials-modal galaxy-credentials="galaxyCredentials"></galaxy-credentials-modal>')(scope));
};
}
};
}
];

View File

@@ -0,0 +1,18 @@
<div class="input-group Form-mixedInputGroup">
<span class="input-group-btn input-group-prepend Form-variableHeightButtonGroup">
<button aria-label="{{'Open Galaxy credentials'|translate}}" type="button" class="Form-lookupButton Form-lookupButton--variableHeight btn btn-default" ng-click="openInstanceGroupsModal()"
ng-disabled="fieldIsDisabled">
<i class="fa fa-search"></i>
</button>
</span>
<span id="InstanceGroups" class="form-control Form-textInput Form-textInput--variableHeight input-medium lookup LabelList-lookupTags"
ng-disabled="fieldIsDisabled"
ng-class="{'LabelList-lookupTags--disabled' : fieldIsDisabled}">
<div ng-if="!fieldIsDisabled" class="LabelList-tagContainer" ng-repeat="tag in galaxyCredentialsTags">
<at-tag tag="tag.name" remove-tag="deleteTag(tag)"></at-tag>
</div>
<div ng-if="fieldIsDisabled" class="LabelList-tag" ng-repeat="tag in galaxyCredentialsTags">
<span class="LabelList-name">{{tag.name | sanitize}}</span>
</div>
</span>
</div>

Some files were not shown because too many files have changed in this diff.