Compare commits


579 Commits
1.0.3 ... 1.0.5

Author SHA1 Message Date
John Mitchell
ac70945071 Merge pull request #1657 from jlmitch5/jobsNewListUi
implement new style jobs list in ui
2018-03-26 11:07:56 -04:00
Michael Abashian
e486b16706 Merge pull request #1662 from mabashian/1555-permissions-checkboxes
Fixed permissions multi-select deselect bug
2018-03-26 10:30:42 -04:00
Michael Abashian
b1e959bdaa Merge pull request #1671 from mabashian/t-1099-workflow-nodes
Fixed several workflow node bugs
2018-03-26 10:30:23 -04:00
John Mitchell
01982e7eab utilize translation for instance groups jobs sub panels titles
and fix a few linting errors
2018-03-26 10:07:47 -04:00
Matthew Jones
f5252d9147 Merge pull request #1624 from theblazehen/devel
Add Rocket.Chat notification type
2018-03-26 06:41:48 -07:00
Jeandre Le Roux
c25d8a5d34 Fix rocket.chat notification test flake8
Signed-off-by: Jeandre Le Roux <theblazehen@theblazehen.com>
2018-03-26 15:13:33 +02:00
Christian Adams
8646aa8c34 Merge pull request #1673 from HNKNTA/devel
Fixed parentless function
2018-03-26 09:06:18 -04:00
HNKNTA
7ddbc49568 Fixed parentless function
Signed-off-by: HNKNTA <hnknta@gmail.com>
2018-03-25 18:33:46 +03:00
John Mitchell
f3329c8cce fix instance groups sub jobs lists 2018-03-23 17:00:41 -04:00
mabashian
348de30a17 Fixed several workflow node bugs 2018-03-23 15:50:34 -04:00
John Mitchell
babad0b868 move all jobs views to using new view 2018-03-23 14:53:20 -04:00
Shane McDonald
1595947ae2 Merge pull request #1663 from jakemcdermott/fix-docker-installer-paths
update reference to role file path to work with installer roles dir
2018-03-23 12:43:54 -04:00
Jake McDermott
4a8f24becc update reference to role file path to work with roles dir 2018-03-23 12:43:13 -04:00
mabashian
bf142fa434 Fixed permissions multi-select deselect bug 2018-03-23 10:51:44 -04:00
Ryan Petrello
07680dd7c0 Merge pull request #1652 from ryanpetrello/fix-500
send job notification templates _after_ all events have been processed
2018-03-23 10:51:06 -04:00
John Mitchell
95f80ce512 implement new style jobs list in ui 2018-03-23 09:35:41 -04:00
Michael Abashian
e7cfe1e0b6 Merge pull request #1640 from mabashian/1561-survey-multi-select
Fixed bug on non-required multiple choice survey questions
2018-03-23 09:31:56 -04:00
Michael Abashian
224d996b9c Merge pull request #1622 from mabashian/169-prompt-cleanup
Propagate new launch/relaunch logic across the app
2018-03-23 09:31:33 -04:00
mabashian
7a4bc233f6 Pass job into the relaunch component rather than pull it from the parent. Added launch template component, use it on the templates lists. 2018-03-22 16:14:54 -04:00
Shane McDonald
caf576cac0 Merge pull request #1655 from shanemcd/devel
Move installer roles into roles directory
2018-03-22 14:39:03 -04:00
Shane McDonald
84cd933702 Move installer roles into roles directory
Signed-off-by: Shane McDonald <me@shanemcd.com>
2018-03-22 14:34:03 -04:00
mabashian
c3b32e2a73 Cleaned up awRequireMultiple and fixed broken survey question error messaging 2018-03-22 11:58:47 -04:00
Jeandre Le Roux
0525df595e Add unit test for rocket.chat notifications
Signed-off-by: Jeandre Le Roux <theblazehen@theblazehen.com>
2018-03-22 16:11:03 +02:00
Ryan Petrello
f59f47435b send job notification templates _after_ all events have been processed
see: https://github.com/ansible/awx/issues/500
2018-03-22 09:30:41 -04:00
Chris Meyers
ddf000e8e7 Merge pull request #1643 from chrismeyersfsu/fix-tower_special_group
do not allow tower group delete or name change
2018-03-22 08:06:03 -04:00
chris meyers
305ef6fa7e do not allow tower group delete or name change
* DO allow policy changes and other attribute changes
2018-03-22 08:05:06 -04:00
Chris Meyers
3446134501 Merge pull request #1646 from chrismeyersfsu/fix-kombu_unicode
use non-unicode queue names
2018-03-21 21:59:05 -04:00
Bill Nottingham
eae85e803e Merge pull request #1644 from wenottingham/botbotbot
update team map
2018-03-21 19:35:58 -04:00
mabashian
8d04be0fc8 Fixed unit test failures 2018-03-21 19:22:08 -04:00
chris meyers
e0803b9f08 use non-unicode queue names
* Use unicode InstanceGroup and queue names up until the point we
actually create the queue
* kombu add_consumers returns a dict with a value that contains the
passed in queue name. Trouble is, the returned dict value is a string
and not a unicode string and this results in an error.
2018-03-21 16:50:07 -04:00
Chris Meyers
724812e87c Merge pull request #1637 from chrismeyersfsu/fix-instance_removed_from_group
handle instance group names unicode
2018-03-21 15:47:00 -04:00
Bill Nottingham
45240a6bf0 update team map 2018-03-21 15:46:50 -04:00
mabashian
0cadea1cb5 Fixed bug preventing the user from ignoring a non-required multi-select survey question on launch 2018-03-21 14:55:38 -04:00
Alan Rominger
b3e15f70cb Merge pull request #1612 from AlanCoding/token_no
Make user_capabilities False for read tokens
2018-03-21 14:45:19 -04:00
Ryan Petrello
a13ddff81a Merge pull request #1627 from aperigault/fix_deprecation
Replace deprecated -U option by --become-user
2018-03-21 14:32:13 -04:00
Bill Nottingham
88ef889cf1 Merge pull request #1634 from wenottingham/winrm-rf
Cherry-pick fix for WinRM listener to AzureRM inventory script.
2018-03-21 14:13:08 -04:00
chris meyers
91bfed3d50 handle instance group names unicode 2018-03-21 13:41:48 -04:00
AlanCoding
4f1f578fde make user_capabilities False for read tokens 2018-03-21 13:14:14 -04:00
Ryan Petrello
1a542c5e06 Merge pull request #1620 from ryanpetrello/dynamic-autoscale
dynamically set worker autoscale max_concurrency based on system memory
2018-03-21 11:52:16 -04:00
Marliana Lara
6e11b5b9c8 Merge pull request #1557 from marshmalien/feat/final_granular_permission_types
New RBAC roles at the Org level
2018-03-21 11:35:15 -04:00
Marliana Lara
4106e496df Merge pull request #1574 from marshmalien/fix/capacity_adjustment_value
Add capacity adjuster slider label
2018-03-21 11:35:03 -04:00
Ryan Petrello
6a96e6a268 dynamically set worker autoscale max_concurrency based on system memory 2018-03-21 11:10:48 -04:00
mabashian
bee7148c61 Addressed jshint errors 2018-03-21 10:59:13 -04:00
Marliana Lara
2ae02fda82 Update pr based on feedback 2018-03-21 10:55:00 -04:00
Marliana Lara
01d35ea9c0 Show organizations based on more granular RBAC roles 2018-03-21 10:54:59 -04:00
Marliana Lara
c156a0af99 Add capacity adjustment slider label 2018-03-21 10:53:16 -04:00
Bill Nottingham
531e5b5137 Cherry-pick fix for WinRM listener to AzureRM inventory script.
(ref: https://github.com/ansible/ansible/pull/37499/)
2018-03-21 10:46:30 -04:00
mabashian
f0ff578923 Cleanup linting errors 2018-03-21 10:27:48 -04:00
Antony PERIGAULT
3adcdb43ad Replace deprecated -U option by --become-user 2018-03-21 12:28:27 +01:00
Jeandre Le Roux
fd12c44ada Add Rocket.Chat notification type
Summary: Add Rocket.Chat notification type
Issue type: Feature Pull Request
Component: Notifications

Signed-off-by: Jeandre Le Roux <theblazehen@theblazehen.com>
2018-03-21 10:02:50 +02:00
Bill Nottingham
e58038b056 Merge pull request #1623 from cvick/patch-1
Added a space before closing quote to fix spelling
2018-03-20 23:25:12 -04:00
Chris Vick
fca1e7028f Added a space before closing quote to fix spelling
Without the space before the line's closing quote, 'or' and 'other' get concatenated to 'orother' in the tooltip
2018-03-20 19:36:37 -07:00
Wayne Witzel III
f4e57e2906 Merge pull request #1608 from wwitzel3/devel
System Setting for Organization User/Team permissions.
2018-03-20 17:17:49 -04:00
mabashian
2e858790db Propagate launch/relaunch logic across the app. Removed some old launch related factories. 2018-03-20 15:53:21 -04:00
Alan Rominger
8056ac5393 Delay import of freeze to make tests run (#1617)
* Delay import of freeze to make tests run

* fix flake8 error
2018-03-20 11:58:11 -04:00
Matthew Jones
6419339094 Merge pull request #1570 from rooftopcellist/authorization_flow_docs
add authorization grant to docs
2018-03-20 07:26:10 -07:00
Matthew Jones
3ee8b3b514 Merge pull request #1613 from EagleIJoe/patch-1
Corrected alternate dns servers entries in docker-compose template
2018-03-20 07:24:31 -07:00
Wayne Witzel III
d7f26f417d Reword help text for manage org auth 2018-03-20 07:31:08 -04:00
Alan Rominger
30fb4076df Merge pull request #1569 from AlanCoding/relaunch_survey
Allow normal users to relaunch jobs with survey answers
2018-03-20 07:14:09 -04:00
Matthew Jones
c0661722b6 Merge pull request #1523 from paihu/slack-color-notification
support slack color notification #1490
2018-03-19 18:05:28 -07:00
Martin Adler
ca7b6ad648 Corrected alternate dns servers entries
As lstrip_blocks: True was added, this broke the formatting when adding alternate DNS servers within the template. Removing the extra whitespace removals within the if and endif statements fixed the resulting yml formatting.
2018-03-19 21:08:52 +01:00
Wayne Witzel III
d5564e8d81 Fix user capabilities when MANAGE_ORGANIZATION_AUTH is disabled 2018-03-19 15:16:54 -04:00
Wayne Witzel III
a9da494904 switch to single toggle and change name 2018-03-19 14:45:52 -04:00
John Mitchell
a9e13cc5f4 Merge pull request #1580 from jlmitch5/usersAppCrudUi
implement users tokens sub list
2018-03-19 13:23:29 -04:00
Wayne Witzel III
771108e298 Protect team assignment for the roles access point 2018-03-19 12:10:13 -04:00
Wayne Witzel III
eb3b518507 Add Organization User/Team toggle to UI 2018-03-19 11:25:14 -04:00
Wayne Witzel III
33ac8a9668 System wide toggle for org admin user/team abilities 2018-03-19 11:24:36 -04:00
Ryan Petrello
2b443b51eb Merge pull request #1606 from ryanpetrello/uwsgi-top
add uwsgitop as a dependency
2018-03-19 10:58:22 -04:00
Ryan Petrello
918f372c20 add uwsgitop as a dependency
see: https://github.com/ansible/ansible-tower/issues/7966
2018-03-19 08:53:30 -04:00
Matthew Jones
681918be9a Merge pull request #1598 from ryanpetrello/pin-boto-core
pin botocore to avoid dependency hell re: latest python-dateutil
2018-03-17 13:08:44 -07:00
Bill Nottingham
2780cd0d4c Merge pull request #1601 from wenottingham/following-a-new-path
Just set ANSIBLE_SSH_CONTROL_PATH_DIR, and don't worry about the socket file name.
2018-03-16 22:35:26 -04:00
John Mitchell
cbc20093d7 move users tokens to features folder 2018-03-16 17:15:28 -04:00
Bill Nottingham
6fc4274c68 Just set ANSIBLE_SSH_CONTROL_PATH_DIR, and don't worry about the socket file name.
Ansible itself (since 2.3) has code to have a shorter hashed control path socket name.
2018-03-16 16:56:45 -04:00
Ryan Petrello
4f585dd09e pin botocore to avoid dependency hell re: latest python-dateutil
boto decided to pin python-dateutil on a version _lower than_ what we
need for the TZID= bug fix:
90d7692702 (diff-b4ef698db8ca845e5845c4618278f29a)
2018-03-16 16:08:03 -04:00
John Mitchell
8babac49a6 update users token crud list to utilize string file 2018-03-16 15:56:30 -04:00
Matthew Jones
8aa7e4692d Merge pull request #1596 from ansible/jlmitch5-patch-3
update .gitignore to include tower license dir
2018-03-16 12:36:00 -07:00
John Mitchell
cf20943434 update .gitignore to include tower license dir 2018-03-16 15:33:18 -04:00
Bill Nottingham
d5d2858626 Merge pull request #1591 from wenottingham/bad-date
Bump copyright date.
2018-03-16 15:11:10 -04:00
Bill Nottingham
52599f16ad Bump copyright date.
We don't need to do this at the source code level, but we should do it for the app as a whole.
2018-03-16 14:57:08 -04:00
Ryan Petrello
a1f15362ab Merge pull request #1575 from aperigault/fix_nginx_upstreams
Fix nginx upstreams
2018-03-16 14:53:48 -04:00
Alan Rominger
1413659f5f Merge pull request #1593 from AlanCoding/fix_processed
Fix bug with non-event model
2018-03-16 14:53:05 -04:00
AlanCoding
bbbb7def0a fix bug with non-event model 2018-03-16 14:27:36 -04:00
Marliana Lara
85a95c8cb8 Merge pull request #1577 from marshmalien/fix/empty_ig_list_results_error
Fix error where list directive requires results attribute
2018-03-16 13:52:34 -04:00
adamscmRH
5f6a8ca2c0 add authorization grant to docs 2018-03-16 12:21:22 -04:00
Alan Rominger
75dd8d7d30 Merge pull request #1587 from AlanCoding/more_event_blocking
Block deletion of resources with unprocessed events
2018-03-16 11:26:33 -04:00
AlanCoding
66108164b9 remove unnecessary mock 2018-03-16 10:55:48 -04:00
AlanCoding
69eccd3130 move ACTIVE_STATES to constants 2018-03-16 10:31:41 -04:00
AlanCoding
7881c921ac block deletion of resources w unprocessed events 2018-03-16 10:14:28 -04:00
Wayne Witzel III
16aa3d724f Merge pull request #1586 from wwitzel3/devel
Moved RelatedJobMixin impl to Project instead of ProjectUpdate
2018-03-16 09:58:34 -04:00
Wayne Witzel III
6231742f71 Move RelatedJob mixin to Project 2018-03-16 09:42:32 -04:00
Wayne Witzel III
c628e9de0a Filter active jobs by WFT/JT 2018-03-16 09:32:42 -04:00
Wayne Witzel III
c54d9a9445 Fix query using self -> self.project and fix imports 2018-03-16 09:24:46 -04:00
Wayne Witzel III
f594f62dfc Project needs to expose all of its ProjectUpdate jobs in an active state 2018-03-16 09:12:20 -04:00
Chris Meyers
0689cea806 Merge pull request #1572 from chrismeyersfsu/fix-instance_removed_from_group
handle unicode things in task logger
2018-03-15 16:25:13 -04:00
Chris Meyers
0cf1b4d603 Merge pull request #1535 from chrismeyersfsu/fix-protect_tower_group
prevent tower group delete and update
2018-03-15 16:02:36 -04:00
chris meyers
1f7506e982 prevent tower group delete and update
* related to https://github.com/ansible/ansible-tower/issues/7931
* The Tower Instance group is special. It should always exist, so
prevent any delete of it.
* Only allow super users to associate/disassociate instances with the 'tower'
instance group.
* Do not allow fields of tower instance group to be changed.
2018-03-15 15:23:06 -04:00
Chris Meyers
2640ef8b1c Merge pull request #1536 from chrismeyersfsu/fix-protect_instance_groups
prevent instance group delete if running jobs
2018-03-15 14:57:45 -04:00
John Mitchell
e7a0bbb5db implement users tokens sub list 2018-03-15 14:53:49 -04:00
chris meyers
5d5d8152c5 prevent instance group delete if running jobs
* related to https://github.com/ansible/ansible-tower/issues/7936
2018-03-15 14:25:49 -04:00
Matthew Jones
3928f536d8 Merge pull request #1571 from matburt/fixing_cluster_resources
Fixing some issues defining resource requests in openshift and k8s
2018-03-15 11:20:56 -07:00
Marliana Lara
84904420ad Pass results attr to list directive from instance groups list 2018-03-15 14:13:33 -04:00
chris meyers
2ea0b31e2b handle unicode things in task logger
Related to https://github.com/ansible/ansible-tower/issues/7957

* Problem presented itself as Instances falling out of Instance Groups.
This was due to the cluster membership policy decider erroring out on a
logger message with unicode.
* Fixed up other potential unicode logger issues in tasks.py
2018-03-15 14:04:39 -04:00
Antony PERIGAULT
8cf1c1a180 Fix nginx configuration to avoid ipv6 resolutions errors 2018-03-15 17:54:51 +01:00
Matthew Jones
192dc82458 Update documentation for default pod resource requests
Including information on how to override the default resources
2018-03-15 12:01:02 -04:00
Matthew Jones
3ba7095ba4 Fixing some issues defining resource requests in openshift and k8s
* Allow overriding all container resource requests by setting defaults/
* Fix an issue where template vars were reversed in the deployment config
* Remove `limit` usage to allow for resource ballooning if it's available
* Fix type error when using templated values in the config map for resources
2018-03-15 12:00:53 -04:00
AlanCoding
43aef6c630 allow normal users to relaunch jobs w survey answers 2018-03-15 07:43:03 -04:00
Michael Abashian
597874b849 Merge pull request #1489 from mabashian/169-workflow-nodes
Implemented new workflow node prompting
2018-03-14 16:58:35 -04:00
mabashian
9873bab451 Removed unused/commented code 2018-03-14 16:26:02 -04:00
Matthew Jones
cec77964ac Merge pull request #1563 from matburt/container_cluster_capacity
Implement container-cluster aware capacity determination
2018-03-14 12:06:25 -07:00
Christian Adams
2abf4ccf3b Merge pull request #1562 from rooftopcellist/python_saml_upgrade
add xmlsec flag to docker installs
2018-03-14 14:53:26 -04:00
Matthew Jones
b0cf4de072 Implement container-cluster aware capacity determination
* Added two settings values for declaring absolute cpu and memory
  capacity that will be picked up by the capacity utility methods
* installer inventory variables for controlling the amount of cpu and
  memory container requests/limits for the awx task containers
* Added fixed values for cpu and memory container requests for other
  containers
* configmap uses the declared inventory variables to define the
  capacity inputs that will be used by AWX to correspond to the same
  inputs for requests/limits on the deployment.
2018-03-14 14:35:45 -04:00
Shane McDonald
2af085e1fe Merge pull request #1552 from jffz/devel
Add ca_trust_dir to local docker installations
2018-03-14 14:32:55 -04:00
adamscmRH
8d460490c1 add xmlsec flag to docker installs 2018-03-14 14:28:35 -04:00
John Mitchell
5eed816c4d Merge pull request #1558 from ansible/jlmitch5-patch-2
encode username and password when sending login POST from ui
2018-03-14 11:29:34 -04:00
John Mitchell
17cdbef376 encode username and password when sending login POST from ui
fixes #1553
2018-03-14 11:12:50 -04:00
Alexander Bauer
709cb0ae2b fixup! Add local_docker facility for bind-mounting ca-trust 2018-03-14 10:52:36 -04:00
Alexander Bauer
db8df5f724 Add local_docker facility for bind-mounting ca-trust
This implements one possible solution for #411, but does not solve it for
Kubernetes or Openshift installations.

# Conflicts:
#	installer/inventory
2018-03-14 10:52:36 -04:00
Alan Rominger
5c0a52df16 Merge pull request #1533 from AlanCoding/count_events
Track emitted events on model
2018-03-14 10:30:43 -04:00
John Mitchell
ea5ab2df7f Merge pull request #1453 from jlmitch5/licenseInSettingsUi
[Tower only] Make pendo license settings opt out whenever license is added
2018-03-14 10:26:08 -04:00
jeff
4fa0d2406a Remove unneeded jinja endif 2018-03-14 15:16:26 +01:00
Alan Rominger
92b8fc7e73 Merge pull request #1554 from AlanCoding/poly_who
fix bugs with UJT optimizations
2018-03-14 09:11:57 -04:00
Matthew Jones
63f0082e4d Merge pull request #1543 from matburt/k8s_helm_instructions
Adding information on Kubernetes RBAC considerations for Helm
2018-03-14 05:52:12 -07:00
AlanCoding
5170fb80dc fix bugs with UJT optimizations 2018-03-14 08:19:53 -04:00
AlanCoding
04a27d5b4d Namechange events_processed -> event_processing_finished
from PR review, also adding tests to assert that the
value is passed from the stdout_handle to the UnifiedJob
object on finalization of job run in tasks.py
2018-03-14 07:53:04 -04:00
AlanCoding
b803a6e557 Track emitted events on model 2018-03-14 07:53:02 -04:00
Alan Rominger
0db584e23e Merge pull request #1530 from AlanCoding/inv_env_vars
More restrictive inventory env vars management
2018-03-14 07:23:42 -04:00
jeff
f9f91ecf81 Add ca_trust_dir to task image 2018-03-14 11:41:10 +01:00
jeff
aca74d05ae Add 'ca_trust_dir' variable to allow Custom CA sharing between host and containers 2018-03-14 11:40:56 +01:00
Matthew Jones
b646e675d6 Merge pull request #1544 from matburt/sorting_region_choices
Sort cloud regions in a stable way
2018-03-13 18:02:31 -07:00
Matthew Jones
4a5f458a36 Merge pull request #1542 from matburt/adding_more_extra_vars
Adding more helpful job extra vars
2018-03-13 17:53:44 -07:00
John Mitchell
04bc044340 Merge pull request #1437 from jlmitch5/appCrudUi
implements application crud ui
2018-03-13 18:07:10 -04:00
John Mitchell
c65342acc9 make call to pendo setting and set check box based on that when license already exists 2018-03-13 15:41:45 -04:00
Matthew Jones
acde2520d0 Sort cloud regions in a stable way
* All comes first
* Then US regions
* Then all other regions alphabetically
2018-03-13 15:31:28 -04:00
Wayne Witzel III
4b27b05fd2 Merge pull request #1541 from wwitzel3/devel
Fix member_role parent to include credential_admin_role
2018-03-13 13:52:58 -04:00
Matthew Jones
dcf0b49840 Adding information on Kubernetes RBAC considerations for Helm 2018-03-13 13:48:10 -04:00
John Mitchell
f8c6187007 add back in comments describing license type payload 2018-03-13 13:34:03 -04:00
John Mitchell
d9f5eab404 add separator above checkbox 2018-03-13 13:34:03 -04:00
John Mitchell
8b10d64d73 update code formatting based on feedback 2018-03-13 13:34:02 -04:00
John Mitchell
2b4a53147e turn pendo tracking off in settings when checkbox is unchecked 2018-03-13 13:34:02 -04:00
John Mitchell
5f4f4a2fb9 make pendo license settings opt out whenever license is added 2018-03-13 13:34:00 -04:00
Matthew Jones
45ad94f057 Adding more helpful job extra vars
* Adds email, first name, last name as extra vars to job launches
* Remove old ad-hoc command extra vars population... use our
  base-class method instead
2018-03-13 13:33:54 -04:00
Wayne Witzel III
db38cf8f93 Fix member_role parent to include credential_admin_role 2018-03-13 12:20:40 -04:00
Alan Rominger
dcae4f65b5 Merge pull request #1330 from AlanCoding/capable_of_anything
New copy fields, clean up user_capabilities logic
2018-03-13 12:05:45 -04:00
paihu
dfea3a4b95 fix: broken backward compatibility
fix: param hex_color isn't optional

Signed-off-by: paihu <paihu_j@yahoo.co.jp>
2018-03-13 18:04:47 +09:00
John Mitchell
9d6fab9417 update edit controller to PUT app instead of POST
remove old applications tokens code
2018-03-12 17:40:08 -04:00
Christian Adams
f995b99af6 Merge pull request #1531 from rooftopcellist/application_description
add description to app serializer
2018-03-12 17:19:54 -04:00
Chris Meyers
724ca23685 Merge pull request #1534 from chrismeyersfsu/fix-4_job_limit
autoscale celery up to 50 workers
2018-03-12 15:45:01 -04:00
chris meyers
a4859a929c autoscale celery up to 50 workers 2018-03-12 15:36:15 -04:00
adamscmRH
91214aa899 add description to app serializer 2018-03-12 15:07:59 -04:00
John Mitchell
80db90b34c reduce delete prompting cruft for app ui 2018-03-12 14:35:03 -04:00
AlanCoding
3566140ecc more restrictive inventory env vars management 2018-03-12 13:35:22 -04:00
John Mitchell
3cf447c49b remove N_ dependency in favor strings files 2018-03-12 13:31:19 -04:00
John Mitchell
a22f1387d1 adjust user tokens list labeling 2018-03-12 13:31:19 -04:00
John Mitchell
8a28d7c950 remove permissions subview code from applications ui crud 2018-03-12 13:31:18 -04:00
John Mitchell
8031337114 add applications.edit.organization route 2018-03-12 13:31:18 -04:00
John Mitchell
8d2c0b58e1 remove unnecessary conditional 2018-03-12 13:31:18 -04:00
John Mitchell
f4ad9afc5e add app crud ui 2018-03-12 13:31:18 -04:00
Marliana Lara
c19bb79587 Merge pull request #1499 from marshmalien/style/display_invalid_items
Add border between invalid and active template flags
2018-03-12 12:37:42 -04:00
Ryan Petrello
6d9b386727 Merge pull request #1529 from ryanpetrello/new-dateutil
bump python-dateutil to latest
2018-03-12 12:34:02 -04:00
Ryan Petrello
44adab0e9e bump python-dateutil to latest
this change provides support for numerous bug fixes, along with
support for parsing TZINFO= from rrule strings

related: https://github.com/ansible/ansible-tower/issues/823
related: https://github.com/dateutil/dateutil/issues/614
2018-03-12 12:20:03 -04:00
mabashian
8aa9569074 In the JT form, moved options from its own line to in-line 2018-03-12 10:56:36 -04:00
mabashian
982b83c2d3 Fixed several workflow prompting and edge type bugs 2018-03-12 10:50:33 -04:00
Matthew Jones
346c9fcc8a Merge pull request #1514 from wenottingham/a-period-piece
Add some periods.
2018-03-12 07:40:16 -07:00
Matthew Jones
eaff7443d2 Merge pull request #1522 from therealmaxmouse/patch-1
Update INSTALL.md
2018-03-12 07:39:48 -07:00
Matthew Jones
8a9397a997 Merge pull request #1528 from jffz/devel
Fix project_data_dir templating for local_docker install
2018-03-12 07:37:11 -07:00
jeff
4972755ccb Fix project_data_dir templating for local_docker install 2018-03-12 14:50:44 +01:00
Ryan Petrello
6d43b8c4dd Merge pull request #1527 from ryanpetrello/oauth2-filter
restrict API filtering on oauth-related fields
2018-03-12 09:43:05 -04:00
Ryan Petrello
a61187e132 restrict API filtering on oauth-related fields
related: https://github.com/ansible/awx/issues/1354
2018-03-12 09:16:37 -04:00
paihu
9b5e088d70 support slack color notification #1490
Signed-off-by: paihu <paihu_j@yahoo.co.jp>
2018-03-12 14:14:44 +09:00
therealmaxmouse
54ae039b95 Update INSTALL.md
fixing typo
2018-03-11 11:45:35 -04:00
Bill Nottingham
7b2b71e3ef ... update string in tests as well. 2018-03-09 17:49:46 -05:00
Bill Nottingham
fb05eecee0 Add some periods. 2018-03-09 17:23:52 -05:00
Ryan Petrello
dcab97f94f Merge pull request #1504 from ryanpetrello/oauth2-swagger
properly categorize OAuth2 endpoints for swagger autogen
2018-03-09 15:27:02 -05:00
Ryan Petrello
397b9071a6 properly categorize OAuth2 endpoints for swagger autogen 2018-03-09 15:07:50 -05:00
Shane McDonald
7984bd2824 Merge pull request #1493 from jffz/devel
Fix for dns and dns_search templating
2018-03-09 12:52:10 -05:00
Marliana Lara
6f7cb0a16e Add border between invalid and active indicators 2018-03-09 12:21:46 -05:00
Marliana Lara
bfbbb95256 Merge pull request #1475 from marshmalien/feat/style_upgrade_page
Style migrations-pending page
2018-03-09 11:47:34 -05:00
Marliana Lara
882ed4d05a Merge pull request #1497 from marshmalien/feat/display_invalid_items_onPrompt
Denote invalid template when no inventory and no prompt-for-inventory
2018-03-09 10:38:20 -05:00
Christian Adams
cee12c4e6c Merge pull request #1378 from rooftopcellist/no_patch_app
disallow changing token-app
2018-03-09 10:33:24 -05:00
Marliana Lara
c2a3e82d29 Check Inventory ask_inventory_on_launch value when verifying template validity 2018-03-09 10:08:39 -05:00
Chris Meyers
181af03ab9 Merge pull request #1495 from chrismeyersfsu/fix-celery_rollback
more celery rollback
2018-03-09 09:31:31 -05:00
chris meyers
e2ed1542e6 more celery rollback
* Setting reload code calls a celery 4.x method signature. This changes
it back to a 3.x safe call.
2018-03-09 09:27:09 -05:00
jffz
ca27dee4fc Fix dns and dns_search templating
Fix templating for dns and dns_search entries for both `awx_web` and `awx_task` images.

Multiple entries were templated in a one-liner style while docker-compose wanted them in a list style.
2018-03-09 11:04:26 +01:00
mabashian
c98e7f6ecd Implemented workflow node prompting 2018-03-08 18:45:28 -05:00
Christian Adams
8a25342ce5 Merge pull request #1373 from rooftopcellist/oauth_doc_csrf
update docs
2018-03-08 18:15:04 -05:00
Alan Rominger
b41d9c4620 Merge pull request #1470 from AlanCoding/mo_exceptions
Include stack trace for delete_inventory logs
2018-03-08 17:18:40 -05:00
adamscmRH
91c0f2da6f simplifies detail serializer 2018-03-08 14:55:25 -05:00
Matthew Jones
b11b1acc68 Update middleware warning for latest minor version 2018-03-08 12:54:26 -05:00
adamscmRH
9b195bc80f fix oauth docs 2018-03-08 12:44:53 -05:00
adamscmRH
fd7c078a8b update docs 2018-03-08 12:10:29 -05:00
adamscmRH
06bacd7bdc add serializer for token detail 2018-03-08 12:03:50 -05:00
Marliana Lara
6f23147d98 Style migrations/pending page 2018-03-08 11:47:59 -05:00
Alan Rominger
3605dbfd73 Merge pull request #1472 from AlanCoding/more_deps
Add shade back into AWX requirements
2018-03-08 11:04:20 -05:00
Michael Abashian
599d84403b Merge pull request #1425 from mabashian/169-credentials
Added add/replace credential validation on jt launch and schedule
2018-03-08 10:57:27 -05:00
AlanCoding
4a01805a19 add shade back into AWX requirements
Last round of dependency updates showed that AWX
depended on packages which came implicitly from shade.
decorator is added as an explicit dependency,
and all of the rest of shade's requirements are
added back in here.
2018-03-08 10:32:19 -05:00
Michael Abashian
c580146c77 Merge branch 'devel' into 169-credentials 2018-03-08 10:03:29 -05:00
mabashian
ce3dc40649 Edit schedule credential prompting code cleanup 2018-03-08 09:58:31 -05:00
Shane McDonald
5bf2e00d24 Merge pull request #1471 from shanemcd/devel
Fix container boots on AppArmor protected systems
2018-03-08 09:44:33 -05:00
Shane McDonald
02102f5ba8 Fix container boots on AppArmor protected systems
Link https://github.com/ansible/awx/issues/1297

Signed-off-by: Shane McDonald <me@shanemcd.com>
2018-03-08 09:41:04 -05:00
Shane McDonald
2861397433 Set imagePullPolicy to Always
Not sure why we weren't doing this before.
2018-03-08 09:41:04 -05:00
AlanCoding
54a68da088 include stack trace for delete_inventory logs 2018-03-08 08:30:59 -05:00
Alan Rominger
044b85ce7a Merge pull request #1415 from AlanCoding/depgrades
Dependency Upgrades
2018-03-08 08:29:46 -05:00
Michael Abashian
b970452950 Merge pull request #1441 from marshmalien/feat/display_invalid_items
Denote invalid job templates and scheduled jobs
2018-03-07 15:26:13 -05:00
adamscmRH
f485a04dfc disallow changing token-app 2018-03-07 15:13:56 -05:00
mabashian
a5043029c1 Implemented the ability to specify credentials when creating a scheduled job run. Added validation for removing but not replacing default credentials. 2018-03-07 11:57:31 -05:00
Matthew Jones
61a48996ee Merge pull request #1459 from rooftopcellist/update_session_setting
add csrf & session settings
2018-03-07 08:41:17 -08:00
Jake McDermott
8f58d0b998 Merge pull request #1455 from ansible/jakemcdermott-patch-2
auto hide multi credential scrollbar
2018-03-07 11:33:29 -05:00
adamscmRH
0490bca268 add csrf & session settings 2018-03-07 09:32:24 -05:00
Christian Adams
095515bb56 Merge pull request #1458 from rooftopcellist/fix_expiration
Fix expiration
2018-03-07 08:13:25 -05:00
adamscmRH
efaa698939 fix token expiration time 2018-03-07 00:42:44 -05:00
AlanCoding
556e6c4a11 Dependency Updates
Upgrades of minor dependencies
Inventory scripts were upgraded in a separate commit

Major exclusions from this update
- celery was already downgraded for other reasons
- Django / DRF major update already done, minor bumps here
- asgi-amqp has fixes coming independently, not touched
- TACACS plus added features not needed

Removals of note
- remove shade from AWX requirements
- remove kombu from Ansible requirements

Other notes

Add note about pinning setuptools and pip,
done but not mentioned previously

Stop pinning gevent-websocket and twisted

upgrade Azure to Ansible core requirements

more detailed notes
https://gist.github.com/AlanCoding/9442a512ab6977940bc7b5b346d4f70b

upgrade version of Django for Exception
2018-03-06 16:04:01 -05:00
Matthew Jones
8421d2b0d2 Merge pull request #1457 from matburt/remove_old_migrations
Remove old south migrations from before a previous django upgrade
2018-03-06 12:59:32 -08:00
Jake McDermott
f8ca0a613f Update prompt.block.less 2018-03-06 14:56:38 -05:00
Christian Adams
db91e30464 Merge pull request #1449 from rooftopcellist/fix_exp_time
fix token expiration time
2018-03-06 14:55:38 -05:00
Matthew Jones
d19ef60d97 Remove old south migrations from before a previous django upgrade 2018-03-06 14:47:09 -05:00
Jake McDermott
5971d79a8f auto hide multi credential scrollbar 2018-03-06 14:33:28 -05:00
Wayne Witzel III
a3b2f29478 Merge pull request #1454 from wwitzel3/fix-role-summary
Fix role summary when role description is overloaded
2018-03-06 14:03:22 -05:00
adamscmRH
a80e3855cd fix token expiration time 2018-03-06 13:22:12 -05:00
Wayne Witzel III
8dce5c826c Fix role summary when role description is overloaded 2018-03-06 13:07:29 -05:00
Demin, Petr
f4a241aba2 Constrain requests 2018-03-06 12:47:34 -05:00
Jake McDermott
105b4982c4 Merge pull request #1451 from tburko/devel
Fix "System settings panel form is not rendering #1440"
2018-03-06 11:15:16 -05:00
Alan Rominger
cc33109412 Merge pull request #1445 from AlanCoding/platform
Add platform to ec2 group by options
2018-03-06 07:23:22 -05:00
Taras Burko
8a5cd3ec7d Fix "System settings panel form is not rendering #1440" 2018-03-06 14:03:36 +02:00
Ryan Petrello
d6af0bfd50 Merge pull request #1448 from ryanpetrello/fix-7923
normalize custom_virtualenv empty values to null
2018-03-05 17:25:51 -05:00
Ryan Petrello
8955e6bc1c normalize custom_virtualenv empty values to null
see: https://github.com/ansible/ansible-tower/issues/7923
2018-03-05 17:05:10 -05:00
Shane McDonald
61087940c5 Merge pull request #1446 from matburt/fix_kubernetes_configmap
Apply celery rollback changes to kubernetes configmap
2018-03-05 16:46:57 -05:00
Matthew Jones
e99184656e Apply rabbitmq and setting kubernetes changes post-celery rollback 2018-03-05 16:22:27 -05:00
AlanCoding
341e2c0fe2 add platform to ec2 group by options 2018-03-05 15:43:28 -05:00
Matthew Jones
105b82c436 Apply celery rollback changes to kubernetes configmap 2018-03-05 15:32:24 -05:00
Ryan Petrello
1596b2907b Merge pull request #1439 from ryanpetrello/fix-7923
add validation to InventorySource.inventory to avoid task manager death
2018-03-05 15:12:24 -05:00
Marliana Lara
18f3c79bc3 Denote invalid job templates and scheduled jobs by displaying a red invalid bar 2018-03-05 14:52:40 -05:00
Ryan Petrello
8887be5952 add validation to InventorySource.inventory to avoid task manager death
see: https://github.com/ansible/awx/issues/1438
2018-03-05 14:40:57 -05:00
Shane McDonald
44f6423af3 Merge pull request #1442 from ryanpetrello/devel
fix busted shippable builds
2018-03-05 14:38:35 -05:00
Chris Meyers
80a970288d Merge pull request #1443 from chrismeyersfsu/fix-named_urls
handle 404 returned by resolve()
2018-03-05 14:37:07 -05:00
chris meyers
ccfb6d64bf handle 404 returned by resolve()
* related to https://github.com/ansible/ansible-tower/issues/7926
* if 404 on url in migration loading middleware, do NOT short circuit
middleware. Simply call the normal middleware code path in this case.
2018-03-05 14:34:53 -05:00
Ryan Petrello
13672cc88c fix busted shippable builds 2018-03-05 14:16:42 -05:00
Shane McDonald
d5773c58d3 Merge pull request #1426 from chrismeyersfsu/fix-migration_in_progress
short-circuit middleware if migration loading url
2018-03-03 10:48:50 -05:00
Jake McDermott
5370c5e07d Merge pull request #1431 from wenottingham/check-check-check-it-out
Adjust some wording in the UI.
2018-03-02 21:07:18 -05:00
Bill Nottingham
1606380f61 Adjust some wording in the UI.
Attempt to make the 'scm update' vs 'scm checkout' distinction clearer.
Remove 'future' from scheduling tooltips (superfluous).
2018-03-02 19:54:02 -05:00
Christian Adams
953850a0d7 Merge pull request #1427 from rooftopcellist/hide_client_secret
Hide client_secret from activity stream
2018-03-02 15:43:49 -05:00
adamscmRH
701a5c9a36 hides client_secret from act stream 2018-03-02 14:47:49 -05:00
chris meyers
36d59651af inherit rather than monkey patch
* Enable migration in progress page in ALL environments
2018-03-02 12:37:48 -05:00
Christian Adams
d1319b7394 Merge pull request #1414 from rooftopcellist/testing_oauth
fix token creation at `api/o/token`
2018-03-02 11:36:59 -05:00
chris meyers
746a2c1eea short-circuit middleware if migration loading url
* Had to monkey patch django middleware logic.
* Left checks to tell coders to use new middleware behavior in favor of
monkey patching.
2018-03-02 11:21:26 -05:00
Chris Meyers
9df76f963b Merge pull request #1412 from chrismeyersfsu/reap_new_nodes_too
reap all nodes that haven't checked in
2018-03-02 10:03:37 -05:00
chris meyers
17de084d04 perform the min needed DB ops to offline a node
* Don't do an extra save to the DB that could conflict with another
heartbeat when it isn't needed since we will be deleting the node
anyway.
2018-03-02 07:57:59 -05:00
Chris Meyers
f907995374 Merge pull request #1417 from chrismeyersfsu/fix-config_watcher
invoke main() in config watcher script
2018-03-01 17:11:58 -05:00
chris meyers
b69315f2eb fix up the config map watcher script
* invoke main() in config watcher script
* correctly call hash update by passing the filename
2018-03-01 17:06:07 -05:00
chris meyers
a3a618d733 call node init procedures as early as possible
* invoke the first heartbeat as early as possible. Results in a much
better user experience: when a user scales up an awx node, the node
appears with capacity earlier.
2018-03-01 17:05:58 -05:00
adamscmRH
fa7647f828 fix token creation 2018-03-01 16:19:58 -05:00
Chris Meyers
8c1ec37c80 Merge pull request #1411 from chrismeyersfsu/early_first_heartbeat
call node init procedures as early as possible
2018-03-01 13:01:37 -05:00
Jake McDermott
d7616accf5 Improve documentation for AWX E2E (#1381)
* Improve documentation for AWX E2E
2018-03-01 12:00:16 -05:00
chris meyers
5c647c2a0d call node init procedures as early as possible
* invoke the first heartbeat as early as possible. Results in a much
better user experience: when a user scales up an awx node, the node
appears with capacity earlier.
2018-03-01 11:24:45 -05:00
chris meyers
e94bd128b8 reap all nodes that haven't checked in
* Before this change we would exclude the reaping of new nodes. With
this change, new nodes will be considered for reaping just like old
nodes.
2018-03-01 11:21:54 -05:00
Alan Rominger
8d57b84251 Merge pull request #1353 from AlanCoding/dep_scripts
Update inventory scripts
2018-03-01 10:56:11 -05:00
Chris Meyers
f18d99d7a9 Merge pull request #1409 from chrismeyersfsu/openshift_runtime_rabbitmq_cookie
dynamically set rabbitmq cookie
2018-03-01 09:57:11 -05:00
chris meyers
9436e8ae25 dynamically set rabbitmq cookie 2018-03-01 09:23:45 -05:00
Wayne Witzel III
dba78e6bfb Merge pull request #1398 from wwitzel3/devel
Update to latest asgi-amqp
2018-02-28 15:20:55 -05:00
Wayne Witzel III
73f0a0d147 Update to latest asgi-amqp 2018-02-28 14:43:37 -05:00
Cédric Levasseur
a2d543eb3b Inserting a note about PostgreSQL minimal version 9.4 in installation doc (#1385)
* Minimal postgresql version

* moving the Postgresql minimal version note.

* moved to System requirements and 'minimal' replaced by 'minimum'.
2018-02-28 13:44:50 -05:00
Shane McDonald
7087341570 Merge pull request #1397 from shanemcd/devel
Fix celery 3 broker url reference in standalone docker install
2018-02-28 12:50:57 -05:00
Shane McDonald
0e9a8d5592 Fix celery 3 broker url reference 2018-02-28 12:47:05 -05:00
Alan Rominger
4fba2d61e6 Merge pull request #1394 from AlanCoding/text_type2
Prevent unicode bug in job_explanation
2018-02-28 12:42:11 -05:00
AlanCoding
54c0436959 prevent unicode bug in job_explanation 2018-02-28 11:01:20 -05:00
Alan Rominger
ee0e239a9e Merge pull request #1374 from AlanCoding/your_name
More consistent representations of model objects
2018-02-28 09:08:29 -05:00
Matthew Jones
dc4b9341da Merge pull request #1383 from jakemcdermott/401-on-invalid-login
issue a 401 on invalid login
2018-02-28 08:35:11 -05:00
Jake McDermott
75a27f2457 issue 401 on invalid login 2018-02-28 02:02:52 -05:00
Jake McDermott
ee20fc478b add test for invalid login 2018-02-28 02:02:39 -05:00
Jake McDermott
01ee2adf30 Merge pull request #1382 from jakemcdermott/cookie-settings
adding in default session cookie setting for docker stand alone
2018-02-27 20:42:46 -05:00
Jake McDermott
877cde9a7f add default cookie settings 2018-02-27 20:40:41 -05:00
Ryan Petrello
b5a46c346d Merge pull request #1379 from ryanpetrello/fix-1366
don't inject custom extra_vars for inventory updates
2018-02-27 17:00:22 -05:00
Christian Adams
6e39388090 Merge pull request #1380 from rooftopcellist/csrf_flag
adds csrf flag to support http
2018-02-27 16:36:23 -05:00
Christian Adams
47c4eb38df Merge pull request #1377 from rooftopcellist/remove_authtoken_model
Removes Auth token
2018-02-27 16:33:20 -05:00
adamscmRH
69f8304643 adds csrf flag to support http 2018-02-27 16:19:46 -05:00
adamscmRH
40d563626e removes authtoken 2018-02-27 16:12:13 -05:00
Ryan Petrello
b9ab06734d don't inject custom extra_vars for inventory updates
see: https://github.com/ansible/awx/issues/1366
2018-02-27 16:10:23 -05:00
Chris Meyers
d551566b4d Merge pull request #1372 from chrismeyersfsu/old-celery3
celery 4.x to 3.x roll back
2018-02-27 15:26:46 -05:00
chris meyers
6606a29f57 celery 4.x -> 3.x change route config name 2018-02-27 14:13:05 -05:00
Jake McDermott
f9129aefba Merge pull request #1361 from mabashian/1279-preview-credentials
Put credentials on their own line in the launch preview
2018-02-27 13:42:00 -05:00
AlanCoding
bacd895705 more consistent representations of model objects 2018-02-27 12:18:57 -05:00
chris meyers
148baf7674 add explicit awx_celery container version 2018-02-27 11:37:10 -05:00
chris meyers
5918fa5573 remove () from postgres port value
* awx task container uses postgres port to wait for postgres to become
available before the container init continues. The () are problematic
and are removed.
* () was originally added to fix an openshift issue. That error does
NOT occur with this fix.
2018-02-27 11:36:55 -05:00
chris meyers
e4470aa4cf remove unneeded celery configs
* Celery routes and queues are set and defined at runtime. Thus, a
static definition of routes and queues is not needed.
2018-02-27 11:36:55 -05:00
chris meyers
fe05b4c0d5 use celery 3.x BROKER_URL
* Celery 4.x specifies the broker via CELERY_BROKER_URL. Since we are
now on 3.x, use 3.x way of specifying the broker via BROKER_URL
2018-02-27 11:36:55 -05:00
Alan Rominger
6d7f60ea61 Merge pull request #1368 from AlanCoding/none_client
Fix server error with absent client_secret
2018-02-27 10:39:50 -05:00
Ryan Petrello
a4ab424134 Merge pull request #1362 from ryanpetrello/rdb-sdb
replace our rdb tooling w/ the sdb PyPI package
2018-02-27 10:06:21 -05:00
Ryan Petrello
3636a7c582 Merge pull request #1355 from ryanpetrello/devel
set $HOME via an API call so AWX_TASK_ENV isn't marked as readonly
2018-02-27 09:57:17 -05:00
AlanCoding
c900027f82 fix server error with absent client_secret 2018-02-27 09:23:36 -05:00
Ryan Petrello
d743b77353 replace our rdb tooling w/ the sdb PyPI package 2018-02-26 19:05:50 -05:00
Ryan Petrello
7741de5153 set $HOME via an API call so AWX_TASK_ENV isn't marked as readonly
see: https://github.com/ansible/awx/issues/1315
2018-02-26 16:35:36 -05:00
Michael Abashian
c3968ca2b6 Merge pull request #1357 from mabashian/1281-prompt-inv
Fixed bug preventing users from selecting non-default inventory on job launch
2018-02-26 16:18:42 -05:00
mabashian
c58ea0ea25 Put credentials on their own line in the launch preview and forced them to wrap 2018-02-26 16:06:52 -05:00
Bill Nottingham
4519013a13 Merge pull request #1356 from wenottingham/mongo-only-pawn
Remove some obsolete code.
2018-02-26 15:30:37 -05:00
Bill Nottingham
c1203942e0 Remove obsolete ansible_awx.egg-info. 2018-02-26 15:04:37 -05:00
Bill Nottingham
e7a8ecc05a Fix another instance. 2018-02-26 14:57:24 -05:00
Bill Nottingham
9c722cba22 Remove some obsolete code. 2018-02-26 14:55:13 -05:00
mabashian
9ad8bdf8de Fixed bug preventing users from selecting non-default inventory on job launch 2018-02-26 14:50:31 -05:00
AlanCoding
b878a844d0 Update inventory scripts
ec2
- added support for tags and instance attributes
- allow filtering RDS instances by tags
- add option to group by platform
- set missing defaults
- make cache unique to the script being run
- bug fixes
- implement AND'd filters
azure_rm
- minor python 3 upgrades
cloudforms
- minor regex fix
foreman
- several new configurables
- changes to caching
gce
- python 3 upgrades
- added gce_subnetwork param
openstack
- added `--cloud` parameter
ovirt4
- obtain defaults from env vars
vmware_inventory
- changed imports
- allow for custom filters
- changed host_filters
- error handling
- python 3 upgrades
2018-02-26 13:46:21 -05:00
AlanCoding
7b78a2ebcc update tests for new call pattern for capabilities prefetch 2018-02-26 12:13:41 -05:00
AlanCoding
ce9234df0f Revamp user_capabilities with new copy fields
Add copy fields corresponding to new server-side copying

Refactor the way user_capabilities are delivered
 - move the prefetch definition from views to serializer
 - store temporary mapping in serializer context
 - use serializer backlinks to denote polymorphic prefetch model exclusions
2018-02-26 12:13:41 -05:00
Christian Adams
9493b72f29 Merge pull request #904 from ansible/oauth_n_session
Implement session-based  and OAuth 2 authentications
2018-02-26 12:12:38 -05:00
Jake McDermott
7430856ac9 Merge pull request #1344 from jakemcdermott/e2e-updates
e2e / nightwatch updates
2018-02-26 11:58:29 -05:00
adamscmRH
407bcd0cbd fix def application test 2018-02-26 11:35:09 -05:00
Jake McDermott
350f25c6e5 Merge pull request #1343 from jakemcdermott/oauth_n_session
ui tooling fixes / updates for oauth changes
2018-02-26 10:42:04 -05:00
Jake McDermott
c786736688 add setup step for org lookup check 2018-02-25 19:40:22 -05:00
Jake McDermott
01a8b2771a add worker file push command 2018-02-25 19:40:19 -05:00
Jake McDermott
a23e4732b6 bump nightwatch and chromedriver versions 2018-02-25 19:40:15 -05:00
Jake McDermott
24fd4a360e use updated project when checking copy 2018-02-25 19:40:11 -05:00
Jake McDermott
8bf31600b0 stabilize local test runs 2018-02-25 19:40:08 -05:00
Jake McDermott
0e7db2a816 do searchability check last
This fixes a small race condition that sometimes occurs when running
locally by ensuring that the delayed paged scrolling that happens
from using search doesn't put the password reset button out of view
when the test runner is trying to find and click it.
2018-02-25 19:40:02 -05:00
Jake McDermott
59e278a648 ensure correct url is built for inventory hosts page 2018-02-25 19:39:38 -05:00
Jake McDermott
44acecf61e use basic auth by default for data setup 2018-02-25 14:28:09 -05:00
adamscmRH
30b473b0df remove default app creation 2018-02-24 21:34:07 -05:00
Jake McDermott
6bdcba307c fix missing comma 2018-02-24 13:59:55 -05:00
Marliana Lara
434cd31df8 Merge pull request #1338 from marshmalien/feat/multiple_venvs
Implement UI selects for Playbook, Project, and Organization Virtualenvs
2018-02-23 15:48:41 -05:00
Jake McDermott
2b4e631838 Merge pull request #1339 from jakemcdermott/use-navbar-in-smoke-test
use navbar when accessing project and template views
2018-02-23 15:36:46 -05:00
Jake McDermott
b0e0b8f0e3 use navbar when accessing project and template views 2018-02-23 15:08:44 -05:00
Marliana Lara
8a163b5939 Add error handling 2018-02-23 14:49:00 -05:00
Marliana Lara
23300003ab Add dropdown inputs for Job Template, Project, and Organization virtual
envs
2018-02-23 14:49:00 -05:00
adamscmRH
87350e1014 prelim update to docs 2018-02-23 14:10:29 -05:00
adamscmRH
2911dec324 fixes app token endpoint 2018-02-23 11:06:53 -05:00
adamscmRH
99989892cd fixes naming 2018-02-23 09:25:23 -05:00
Alan Rominger
ad8822bcfc Merge pull request #1314 from AlanCoding/fix_rescheduling
Correct permission check for job rescheduling
2018-02-22 16:04:04 -05:00
Ryan Petrello
c35c01e7b1 Merge pull request #1328 from ryanpetrello/devel
Revert "changes to license compliance"
2018-02-22 15:28:54 -05:00
adamscmRH
ecc61b62ca reverts cookie change 2018-02-22 15:18:12 -05:00
John Mitchell
09efc03163 update incorrect logic for auth service rootscope/cookie logged in status vars 2018-02-22 15:18:12 -05:00
John Mitchell
db748775c8 make auth function convert values from cookies to boolean 2018-02-22 15:18:12 -05:00
adamscmRH
310f37dd37 clears authtoken & add PAT 2018-02-22 15:18:12 -05:00
John Mitchell
88bc4a0a9c ui auth works on 8013 now 2018-02-22 15:18:12 -05:00
John Mitchell
976766e4a3 excise token-based auth from ui 2018-02-22 15:18:12 -05:00
Aaron Tan
1c2621cd60 Implement session-based and OAuth 2 authentications
Relates #21. Please see acceptance docs for feature details.

Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
2018-02-22 15:18:12 -05:00
Ryan Petrello
35f629d42c Revert "changes to license compliance"
This reverts commit 218dfb680e.
2018-02-22 15:02:33 -05:00
Alan Rominger
db39ab1b0c Merge pull request #1322 from AlanCoding/check_version
Loosen overwrite_vars constraint for new feature
2018-02-22 14:26:30 -05:00
Shane McDonald
c612ab1c89 Merge pull request #1327 from marshmalien/fix/bump-angular-scheduler-version
Bump angular-scheduler version to 0.3.2
2018-02-22 13:55:07 -05:00
Shane McDonald
c0fe6866c4 Merge pull request #1070 from vrutkovs/installer-ocp-token
Allow authenticating with Openshift via a token
2018-02-22 13:40:01 -05:00
Marliana Lara
746b99046f Bump angular-scheduler version to 0.3.2 2018-02-22 13:35:33 -05:00
Wayne Witzel III
91c6d406c5 Rollback celery 2018-02-22 09:37:14 -05:00
AlanCoding
4727cda336 Loosen overwrite_vars constraint for new feature 2018-02-22 07:47:28 -05:00
Alan Rominger
2ebee58727 Merge pull request #1321 from AlanCoding/magic_credential
Alias filters by credential to credentials
2018-02-21 15:51:42 -05:00
AlanCoding
91e59ebd29 alias filters by credential to credentials 2018-02-21 14:57:26 -05:00
AlanCoding
992d7831b1 add test for ScheduleAccess prompts 2018-02-21 14:11:55 -05:00
Marliana Lara
88b67c894c Merge pull request #1231 from marshmalien/fix/tzid-schedules
Add support for TZID in schedule rrules
2018-02-21 13:55:31 -05:00
Marliana Lara
d71ecf1eee Fix jshint confusing semantics error 2018-02-21 13:18:43 -05:00
Marliana Lara
b9a2f7a87e Add debounce function to preview list to reduce overhead 2018-02-21 13:18:40 -05:00
Marliana Lara
e0cfd18aac Set local timezone dropdown to rrule TZID value 2018-02-21 13:18:39 -05:00
Marliana Lara
73916ade45 Filter dates with moment.js instead of built-in angular date filter 2018-02-21 13:18:38 -05:00
Marliana Lara
1768001881 Add support for TZID in schedule rrules 2018-02-21 13:18:33 -05:00
Chris Church
795681a887 Merge pull request #1311 from cchurch/fix-dummy-data
Fix dummy data generator for WFJT node credentials.
2018-02-21 10:19:50 -05:00
AlanCoding
de4e95f396 correct permission check for job rescheduling 2018-02-21 09:25:43 -05:00
Chris Church
4ec683efcb Fix dummy data generator for WFJT node credentials. 2018-02-21 08:55:43 -05:00
Chris Church
727ded2d4d Merge pull request #1308 from cclauss/patch-1
Py3 Syntax Errors: 0700 -> 0x700 and 0600 -> 0x600
2018-02-21 08:29:49 -05:00
cclauss
8967afc645 octal, not hex 2018-02-21 14:13:47 +01:00
cclauss
d66cad3e0e Py3 Syntax Errors: 0700 -> 0x700 and 0600 -> 0x600
```
$ python3 -c "0700"
  File "<string>", line 1
    0700
       ^
SyntaxError: invalid token
```
2018-02-21 12:18:52 +01:00
Ryan Petrello
7db05855de Merge pull request #1306 from ryanpetrello/isolated-fact-cache
support fact caching for isolated hosts
2018-02-20 15:50:49 -05:00
Ryan Petrello
7d9e4d6e2f support fact caching for isolated hosts
see: https://github.com/ansible/awx/issues/198
2018-02-20 15:00:47 -05:00
Ryan Petrello
662f4ec346 Merge pull request #1304 from ryanpetrello/devel
remove dead code
2018-02-20 14:44:52 -05:00
Ryan Petrello
ac3ce82eb1 remove dead code
the code that persists `set_stat` data for workflows now lives elsewhere

related: d57470ce49
2018-02-20 14:14:23 -05:00
Alan Rominger
1582fcbb50 Merge pull request #1277 from AlanCoding/inv_multicred
Use the m2m field for inventory source credentials
2018-02-20 14:08:22 -05:00
AlanCoding
bb6032cff6 docs and review change for IS multivault
Mention inventory sources /credentials/ endpoint in docs
Also change means of identifying projects for the purpose
of injecting custom credentials
2018-02-20 12:34:58 -05:00
AlanCoding
9c4d89f512 use the m2m field for inventory source creds 2018-02-20 12:34:56 -05:00
Matthew Jones
8505783350 Merge remote-tracking branch 'tower/release_3.2.3' into devel
* tower/release_3.2.3:
  fix unicode bugs with log statements
  use --export option for ansible-inventory
  add support for new "BECOME" prompt in Ansible 2.5+ for adhoc commands
  enforce strings for secret password inputs on Credentials
  fix a bug for "users should be able to change type of unused credential"
  fix xss vulnerabilities - on host recent jobs popover - on schedule name tooltip
  fix a bug when testing UDP-based logging configuration
  bump templates form credential_types page limit
  Wait for Slack RTM API websocket connection to be established
  don't process artifacts from custom `set_stat` calls asynchronously
  don't overwrite env['ANSIBLE_LIBRARY'] when fact caching is enabled
  only allow facts to cache in the proper file system location
  replace our memcached-based fact cache implementation with local files
  add support for new "BECOME" prompt in Ansible 2.5+
  fix a bug in inventory generation for isolated nodes
  properly handle unicode for isolated job buffers
2018-02-20 12:22:25 -05:00
Ryan Petrello
76ff925b77 Merge pull request #1298 from sjenning/add-import-playbook
add import_playbook as a top-level playbook indicator
2018-02-20 09:54:23 -05:00
Seth Jennings
42ff1cfd67 add import_playbook as top-level playbook indicator 2018-02-19 16:03:08 -06:00
Ryan Petrello
90bb43ce74 Merge pull request #1292 from ryanpetrello/fix-1291
don't require credentials to relaunch a job
2018-02-19 12:01:42 -05:00
Ryan Petrello
56e3d98e62 don't require credentials to relaunch a job
see: https://github.com/ansible/awx/issues/1291
2018-02-19 11:15:55 -05:00
Matthew Jones
7d51b3b6b6 Merge pull request #1116 from bmduffy/bugfix-pem-validation
[bugfix-pem-validation]
2018-02-19 07:53:19 -05:00
Vadim Rutkovsky
5e25859069 Allow authenticating with Openshift via a token 2018-02-18 16:24:16 +01:00
Brian Duffy
4270e3a17b [bugfix] updated pem validation unit tests 2018-02-18 15:11:42 +00:00
Brian Duffy
098f4eb198 [bugfix-pem-validation] pass flake8 2018-02-18 01:46:31 +00:00
Jake McDermott
ae1167ab15 Merge pull request #1282 from ansible/jakemcdermott-patch-1
fix last run job variable reference
2018-02-16 16:35:35 -05:00
Jake McDermott
5b5411fecd fix last run job variable reference 2018-02-16 16:32:13 -05:00
Brian Duffy
235213bd3b updated regex 2018-02-16 16:06:33 +00:00
Wayne Witzel III
2c71a27630 Merge pull request #1123 from wwitzel3/new-permissions
New RBAC Roles
2018-02-15 16:56:03 -05:00
Alan Rominger
1a6819cdea Merge pull request #630 from AlanCoding/text_type
Fix unicode bugs with log statements
2018-02-15 15:52:29 -05:00
AlanCoding
465e605464 fix unicode bugs with log statements 2018-02-15 15:26:58 -05:00
Alan Rominger
22f1a53266 Merge pull request #1233 from AlanCoding/no_turning_back
Raise 400 error on removal of credential on launch
2018-02-15 14:11:57 -05:00
Ryan Petrello
733b4b874e Merge pull request #1255 from ryanpetrello/license-compliance
changes to license compliance
2018-02-15 09:30:41 -05:00
AlanCoding
3d433350d3 raise 400 error on removal of credential on launch
Definition of removal is providing a `credentials` list on launch
that lacks a type of credential that the job template has.
This assures that every category of credential the job template
has will also exist on jobs run from that job template.
This restriction already existed, but this makes the endpoint
fail instead of re-adding the credentials.
This change makes manual launch congruent with saved launch
configurations.
2018-02-15 08:16:03 -05:00
Wayne Witzel III
30a5617825 Address PR feedback 2018-02-14 22:53:33 +00:00
Alan Rominger
5935c410e4 Merge pull request #629 from AlanCoding/export
Use --export option for ansible-inventory
2018-02-14 15:56:05 -05:00
Alan Rominger
c90cf7c5e2 Merge pull request #1253 from AlanCoding/group_vars
Use --export option for ansible-inventory
2018-02-14 15:52:00 -05:00
Ryan Petrello
218dfb680e changes to license compliance
now if a license is expired or over the managed node limit, it won't
prevent host creation or Job/JobTemplate launches

see: https://github.com/ansible/ansible-tower/issues/7860
2018-02-14 15:51:19 -05:00
AlanCoding
b01deb393e use --export option for ansible-inventory 2018-02-14 14:48:13 -05:00
Chris Church
410111b8c8 Merge pull request #1241 from cclauss/six.string_types_in_mixins.py
six.string_types in mixins.py
2018-02-14 13:38:44 -05:00
AlanCoding
05e6eda453 use --export option for ansible-inventory 2018-02-14 12:34:41 -05:00
Bill Nottingham
4a4b44955b Merge pull request #1247 from pkoro/anchor-link-fix
Fix in anchor link
2018-02-14 10:37:43 -05:00
Ryan Petrello
9d82098162 Merge pull request #1249 from ryanpetrello/no-travis
we don't use travis for tests; remove .travis.yml
2018-02-14 10:15:48 -05:00
Ryan Petrello
1c62c142f1 we don't use travis for tests; remove .travis.yml 2018-02-14 10:07:17 -05:00
Paschalis Korosoglou
5215bbcbf6 Fix in anchor link 2018-02-14 16:05:58 +02:00
Matthew Jones
0d2daecf49 Merge pull request #1243 from matburt/fix_clustering_isolated
Fix isolated instance clustering implementation
2018-02-14 08:32:24 -05:00
cclauss
552b69592c six.string_types in mixins.py 2018-02-14 08:35:14 +01:00
Matthew Jones
ffe5a92eb9 Update isolated instance capacity calculation 2018-02-13 21:51:50 -05:00
Matthew Jones
925d9efecf Fixing up isolated node execution after cluster changes
* Rework queue detection to include control groups and isolated instances
* Fix up development tooling around isolated nodes
* Update unit tests
2018-02-13 21:51:38 -05:00
Jake McDermott
c1b6595a0b Merge pull request #1201 from jakemcdermott/item_copy_ui
api-backed copy ui
2018-02-13 17:42:00 -05:00
Jake McDermott
d4e46a35ce get exact match on ids 2018-02-13 17:15:59 -05:00
Jake McDermott
bf0683f7fe replace usage of all and spread 2018-02-13 17:15:56 -05:00
Jake McDermott
0ff94c63f2 use edit capability for showing copy on most views 2018-02-13 17:15:52 -05:00
Jake McDermott
16153daa14 add e2e test for inventory script copy 2018-02-13 17:15:48 -05:00
Jake McDermott
a680d188c0 implement model based copy for inventory scripts 2018-02-13 17:15:44 -05:00
Jake McDermott
d56f1a0120 add e2e test for credential copy 2018-02-13 17:15:41 -05:00
Jake McDermott
50d95ddc3f implement model-based credential copy 2018-02-13 17:15:37 -05:00
Jake McDermott
21a32f90ce add e2e test for notification template copy 2018-02-13 17:15:34 -05:00
Jake McDermott
09d3e6cd98 implement model-based copy for notification templates 2018-02-13 17:15:30 -05:00
Jake McDermott
29f1d695ae add NotificationTemplate model 2018-02-13 17:15:26 -05:00
Jake McDermott
e0f3e4feb7 add e2e test for inventory copy 2018-02-13 17:15:20 -05:00
Jake McDermott
e9ce9621f2 implement model-based copy for inventories 2018-02-13 17:15:16 -05:00
Jake McDermott
a02eda1bea add e2e test for project copy 2018-02-13 17:15:12 -05:00
Jake McDermott
779385ddb6 implement model based copy for projects 2018-02-13 17:15:05 -05:00
Jake McDermott
e5fd483d06 implement model-based copy for job templates and workflow templates 2018-02-13 17:15:01 -05:00
Jake McDermott
8679651d4c add e2e test for template copy and delete warnings 2018-02-13 17:14:57 -05:00
Jake McDermott
4c988fbc02 add WorkflowJobTemplate model 2018-02-13 17:14:48 -05:00
Jake McDermott
c40feb52b7 add base model unit test 2018-02-13 17:14:43 -05:00
Jake McDermott
78b975b2a9 add copy to base model 2018-02-13 17:14:38 -05:00
Jake McDermott
cfba11f8d7 slight cleanup of templates list controller
lint / fix all of the indentation issues
smaller functions
use a variable for any string a user sees
2018-02-13 17:14:30 -05:00
Jake McDermott
73fa8521d0 add e2e test for job and workflow template copy 2018-02-13 17:14:18 -05:00
Jake McDermott
894f0cf2c5 update current workflow copy implementation to be compatible with recent api changes 2018-02-13 17:13:54 -05:00
Chris Church
67ec811e8d Merge pull request #1186 from cclauss/execfile-file-reduce-StandardError
Miscellaneous Python 3 changes: execfile(), file(), reduce(), StandardError
2018-02-13 15:11:24 -05:00
Chris Church
31d0e55c2a Merge pull request #1175 from cclauss/unicode-to-six-u
Change unicode() --> six.text_type() for Python 3
2018-02-13 15:11:11 -05:00
Ryan Petrello
3a0f2ce2fe Merge pull request #628 from ryanpetrello/sudo-become-adhoc
add support for new "BECOME" prompt in Ansible 2.5+ for adhoc commands
2018-02-13 14:38:30 -05:00
Ryan Petrello
613d48cdbc add support for new "BECOME" prompt in Ansible 2.5+ for adhoc commands
see: https://github.com/ansible/ansible-tower/issues/7850
2018-02-13 14:26:27 -05:00
Alan Rominger
39362aab4b Merge pull request #1204 from AlanCoding/default_omission
Omit placeholder vars with survey password defaults
2018-02-13 12:58:11 -05:00
Alan Rominger
6cb3267ebe Merge pull request #1214 from AlanCoding/fix_schedule_qs
Change schedule queryset logic to avoid server error
2018-02-13 12:54:05 -05:00
Bill Nottingham
f8c66b826a Merge pull request #1217 from wenottingham/eat-your-celery-messages
Tweak celery-related messages.
2018-02-13 11:48:21 -05:00
Bill Nottingham
7b288ef98a Tweak celery-related messages. 2018-02-13 10:52:14 -05:00
AlanCoding
58a94be428 Omit placeholder vars with survey password defaults
WFJT nodes & schedules (launch configs) will accept POST/PATCH/PUT
with variables in extra_data that have $encrypted$ for their value
if a valid survey default exists.

In this case, the variable is simply removed from the extra_data.
This is done so that it does not affect pre-existing value
substitution for $encrypted$ values from the config itself
2018-02-13 09:07:59 -05:00
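A hedged sketch of the behavior described above; the function and argument names are assumptions, not the actual launch-config serializer:

```python
# Hedged sketch; names are illustrative, not AWX's code.
ENCRYPTED = '$encrypted$'

def strip_encrypted_placeholders(extra_data, survey_password_defaults):
    """Drop $encrypted$ placeholders when the survey already supplies a default,
    so they never shadow the value substituted from the saved config itself."""
    return {
        key: value
        for key, value in extra_data.items()
        if not (value == ENCRYPTED and key in survey_password_defaults)
    }

print(strip_encrypted_placeholders(
    {'admin_password': '$encrypted$', 'region': 'us-east-1'},
    survey_password_defaults={'admin_password'},
))  # {'region': 'us-east-1'}
```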
AlanCoding
960845883d change schedule qs logic, avoid server error 2018-02-13 08:32:00 -05:00
Ryan Petrello
eda53eb548 Merge pull request #627 from ryanpetrello/fix-7898
enforce strings for secret password inputs on Credentials
2018-02-12 17:11:02 -05:00
Ryan Petrello
82e41b40bb enforce strings for secret password inputs on Credentials
see: https://github.com/ansible/ansible-tower/issues/7898
2018-02-12 17:03:32 -05:00
Alan Rominger
0268d575f8 Merge pull request #1193 from AlanCoding/no_sneaking_credential_in
Validation clause for WFJT node to follow credential prompt rule
2018-02-12 12:46:12 -05:00
Brian Duffy
6b5a6e9226 [bugfix-pem-validation] left a print statement in 2018-02-12 16:44:32 +00:00
Ryan Petrello
56d01cda6b Merge pull request #1205 from ryanpetrello/fix-pexpect-test
improve a bwrap test
2018-02-12 10:50:00 -05:00
Ryan Petrello
194c2dcf0b improve a bwrap test 2018-02-12 10:14:37 -05:00
Ryan Petrello
b38be89d1a Merge pull request #1203 from ryanpetrello/update-pexpect
upgrade to the latest pexpect
2018-02-12 09:49:26 -05:00
Ryan Petrello
2a168faf6a upgrade to the latest pexpect
see: https://github.com/ansible/awx/issues/417
2018-02-12 09:18:14 -05:00
Ryan Petrello
83b5377387 Merge pull request #1187 from ryanpetrello/file-your-vars-away-for-a-rainy-day
pass extra vars via file rather than via commandline
2018-02-12 08:48:19 -05:00
cclauss
2e623ad80c Change unicode() --> six.text_type() for Python 3 2018-02-11 21:09:12 +01:00
Ryan Petrello
7e42c54868 Merge pull request #1184 from cclauss/basestring-to-six.string_types
basestring to six.string_types for Python 3
2018-02-10 09:49:16 -05:00
Bill Nottingham
aa5bd9f5bf Pass extra vars via file rather than via commandline, including custom creds.
The extra vars file created lives in the playbook private runtime
directory, and will be reaped along with the rest of the directory.

Adjust assorted unit tests as necessary.
2018-02-10 09:27:24 -05:00
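An illustrative sketch, under assumed file and helper names, of the approach this commit describes: serialize the extra vars into the job's private runtime directory and point ansible-playbook at the file instead of passing them inline.

```python
# Sketch only: the path layout and helper name are assumptions, not the exact
# AWX implementation.
import json
import os

def write_extra_vars_file(private_data_dir, extra_vars):
    path = os.path.join(private_data_dir, 'extra_vars.json')
    with open(path, 'w') as f:
        json.dump(extra_vars, f)
    # The file is reaped when the private runtime directory is cleaned up.
    # ansible-playbook then reads it via: -e @<path>
    return ['-e', '@%s' % path]
```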
Wayne Witzel III
13e777f01b Rename migration files 2018-02-10 02:52:26 +00:00
Wayne Witzel III
819b318fe5 Add Org Execute 2018-02-10 02:52:26 +00:00
Wayne Witzel III
9e7bd55579 Add Notification Admin 2018-02-10 02:52:26 +00:00
Wayne Witzel III
fbece6bdde Updating and adding tests for new RBAC roles 2018-02-10 02:52:26 +00:00
Wayne Witzel III
9fdd00785f Add new RBAC role migrations 2018-02-10 02:52:26 +00:00
Wayne Witzel III
b478740f28 Add Workflow Admin 2018-02-10 02:52:25 +00:00
Wayne Witzel III
109841c350 Add Credential Admin role 2018-02-10 02:52:25 +00:00
Wayne Witzel III
6c951aa883 Add Inventory Admin role 2018-02-10 02:52:25 +00:00
Wayne Witzel III
e7e83afd00 Add Project Admin role 2018-02-10 02:52:25 +00:00
Brian Duffy
7d956a3b68 [bugfix-pem-validation] update from code review 2018-02-10 01:08:29 +00:00
AlanCoding
02ac139d5c validation clause for WFJT node to follow cred prompt rule 2018-02-09 16:17:21 -05:00
Jake McDermott
605a2c7e01 Merge pull request #1189 from jakemcdermott/fix-multivault-select
fix recent multi-vault select breakage
2018-02-09 13:47:17 -05:00
Jake McDermott
484caf29b6 fix recent multi-vault select breakage 2018-02-09 12:52:16 -05:00
Jake McDermott
b2b519e48d Merge pull request #1096 from mabashian/169-v1
UI support for prompting on job template schedules
2018-02-09 11:34:25 -05:00
Jake McDermott
e8e6f50573 Merge branch 'devel' into 169-v1 2018-02-09 11:32:40 -05:00
cclauss
260aec543e Misc Python 3 changes: execfile(), file(), reduce(), StandardError 2018-02-09 17:17:05 +01:00
Marliana Lara
7c95cd008f Merge pull request #1152 from marshmalien/feat/ui_clustering_bugs
Fix UI bugs related to UI Clustering
2018-02-09 11:13:59 -05:00
Ryan Petrello
0ff11ac026 Merge pull request #1185 from ryanpetrello/stop-it-uwsgi
fix celery pid restart issues
2018-02-09 11:07:01 -05:00
Ryan Petrello
605c5e3276 fix celery pid restart issues 2018-02-09 11:03:00 -05:00
cclauss
c371b869dc basestring to six.string_types for Python 3 2018-02-09 16:28:36 +01:00
Shane McDonald
476dbe58c5 Merge pull request #1183 from ryanpetrello/swagger
normalize dates in the Swagger output to minimize diffs
2018-02-09 10:18:19 -05:00
Ryan Petrello
3c43aaef21 normalize dates in the Swagger output to minimize diffs 2018-02-09 10:16:27 -05:00
Ryan Petrello
76d5c02e07 Merge pull request #1181 from ryanpetrello/swagger
move swagger doc metadata out of the awx repo
2018-02-09 10:09:03 -05:00
Ryan Petrello
fe02abe630 move swagger doc metadata out of the awx repo 2018-02-09 09:45:23 -05:00
Ryan Petrello
ce9cb24995 Merge pull request #1171 from cclauss/from-six-import-xrange
from six.moves import xrange for Python 3
2018-02-09 09:02:38 -05:00
Jake McDermott
6cb6c61e5c Merge pull request #1176 from jakemcdermott/stabilize-xss
use project details view to check permissions list
2018-02-08 17:32:39 -05:00
Jake McDermott
67e5d083b8 use project details view to check permissions list 2018-02-08 17:26:54 -05:00
Ryan Petrello
5932c54126 Merge pull request #1165 from ryanpetrello/remove_new_in
remove the `new_in_<version>` in API doc gen
2018-02-08 17:07:50 -05:00
cclauss
e1a8b69736 from six.moves import xrange for Python 3 2018-02-08 22:41:33 +01:00
Ryan Petrello
7472026cca remove the new_in_<version> in API doc gen
see: https://github.com/ansible/awx/issues/73
2018-02-08 16:21:22 -05:00
Jake McDermott
8475bdfdc4 Merge pull request #1170 from shanemcd/fix_standalone_docker_wait_fors
Fix wait_fors in standalone Docker installs
2018-02-08 16:08:31 -05:00
Ryan Petrello
bd2f1568fb Merge pull request #626 from ryanpetrello/release_3.2.3
fix a bug for "users should be able to change type of unused credential"
2018-02-08 15:59:22 -05:00
Alan Rominger
b3dcfc8c18 Merge pull request #903 from ansible/item_copy
Implement item copy feature
2018-02-08 15:51:16 -05:00
Ryan Petrello
72715df751 fix a bug for "users should be able to change type of unused credential"
see: https://github.com/ansible/ansible-tower/issues/7516
related: https://github.com/ansible/tower/pull/441
2018-02-08 15:44:14 -05:00
Shane McDonald
6b3ca32827 Fix wait_fors in standalone Docker installs 2018-02-08 15:08:44 -05:00
Ryan Petrello
1ccdb305e3 Merge pull request #1164 from cclauss/use-new-style-exceptions
Modernize Python 2 code to get ready for Python 3
2018-02-08 14:10:25 -05:00
Ryan Petrello
033bec693b Merge pull request #1166 from ryanpetrello/fix-system-job-stdout
properly handle STDOUT_MAX_BYTES_DISPLAY for system jobs
2018-02-08 13:55:59 -05:00
Ryan Petrello
f2c5859fde properly handle STDOUT_MAX_BYTES_DISPLAY for system jobs
see: https://github.com/ansible/ansible-tower/issues/7890
2018-02-08 11:37:05 -05:00
cclauss
e18838a4b7 Modernize Python 2 code to get ready for Python 3 2018-02-08 17:26:22 +01:00
Shane McDonald
48300da443 Merge pull request #1163 from ryanpetrello/swagger
add indention to swagger docs
2018-02-08 10:52:47 -05:00
Ryan Petrello
5b9dc41015 add indention to swagger docs
this will make it easier to spot changes as our APIs change
2018-02-08 10:51:42 -05:00
Alan Rominger
01c6463b1b Merge pull request #1162 from AlanCoding/remove_cred_sf
Remove credential from node and schedule summary fields
2018-02-08 10:37:46 -05:00
Alan Rominger
181399df7a Merge pull request #1159 from AlanCoding/reschedule_msg
Verbose error messages for failure to re-schedule
2018-02-08 10:28:11 -05:00
Ryan Petrello
9bc0a0743b Merge pull request #1161 from ryanpetrello/zone-names
update zoneinfo endpoint to be a list of dicts
2018-02-08 09:48:11 -05:00
Ryan Petrello
c1d0768e37 Merge pull request #1160 from ryanpetrello/fix-old-rrule-dtstart
add a few schedule RRULE parsing improvements
2018-02-08 09:47:59 -05:00
Marliana Lara
d743faf33e Fix UI bugs related to instance groups views
* Fix bug where capacity_adjustment sets to "1.00" when instance is toggled
* Hook up websockets for instance group jobs and instance jobs
* Add Wait spinner to Capacity_Adjuster, Instance association modal, and Instance group delete
* Add updateDataset event listener to update instance and instanceGroups list after smartSearch query
2018-02-08 09:33:24 -05:00
AlanCoding
0f66892d06 remove credential from node and schedule summary fields 2018-02-08 09:22:55 -05:00
Ryan Petrello
c866d85b8c update zoneinfo endpoint to be a list of dicts 2018-02-08 09:12:26 -05:00
Ryan Petrello
3c799b007e don't allow rrule values that contain both COUNT and UNTIL
see: https://github.com/ansible/ansible-tower/issues/7887
2018-02-08 08:59:52 -05:00
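A minimal sketch of the restriction (RFC 5545 likewise forbids COUNT and UNTIL in the same rule); the function is illustrative, not AWX's actual validator:

```python
# Illustrative validator; the name and exception type are assumptions.
def validate_rrule(rrule):
    if 'COUNT=' in rrule and 'UNTIL=' in rrule:
        raise ValueError('RRULE may not contain both COUNT and UNTIL')
    return rrule

validate_rrule('DTSTART:20180201T000000Z RRULE:FREQ=DAILY;COUNT=5')  # accepted
```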
Ryan Petrello
887f16023a improve detection of expensive DTSTART RRULE values 2018-02-08 08:54:30 -05:00
AlanCoding
87b59903a5 verbose error messages for failure to re-schedule 2018-02-08 08:46:56 -05:00
Bill Nottingham
e982f6ed06 Merge pull request #1154 from wenottingham/namespaces-the-final-frontier
Have bubblewrap mount a new /proc in the wrapped environment.
2018-02-07 17:24:38 -05:00
Ryan Petrello
fb5428dd63 Merge pull request #1151 from ansible/jakemcdermott-patch-1-1
always return schema from get_default_schema
2018-02-07 16:56:48 -05:00
Alan Rominger
b38aa3dfb6 Merge pull request #1153 from AlanCoding/fix_wfjt_scheduling
fix bug scheduling WFJT without prompts
2018-02-07 15:49:13 -05:00
Bill Nottingham
c1a0e2cd16 Have bubblewrap mount a new /proc in the wrapped environment.
Since we're running with a new pid namespace, we should have
a new /proc that is in that namespace. Otherwise things will
be weird.
2018-02-07 15:47:03 -05:00
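A sketch of the idea using standard bubblewrap flags (`--unshare-pid`, `--proc`, `--bind`); the wrapper function itself is illustrative:

```python
# Sketch: build a bwrap prefix that gives the wrapped command its own pid
# namespace and a freshly mounted /proc belonging to that namespace.
def bwrap_prefix(private_dir):
    return [
        'bwrap',
        '--unshare-pid',      # new pid namespace for the wrapped command
        '--proc', '/proc',    # mount a new /proc inside that namespace
        '--bind', private_dir, private_dir,
    ]
```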
AlanCoding
fe69a23a4e fix bug scheduling WFJT without prompts 2018-02-07 14:34:25 -05:00
Jake McDermott
90f555d684 always return schema from get_default_schema 2018-02-07 13:42:01 -05:00
Matthew Jones
4002f2071d Adding instance group policy unit tests
also remove async call for applying topology change
2018-02-07 11:14:53 -05:00
Ryan Petrello
244dfa1c92 Merge pull request #1145 from ryanpetrello/swagger
fix a bad swagger-related import that breaks the build
2018-02-07 09:12:28 -05:00
Ryan Petrello
1adb4cefec fix a bad swagger-related import that breaks the build 2018-02-07 08:56:59 -05:00
Bill Nottingham
4abcbf949a Merge pull request #1142 from geerlingguy/fix-some-text
Fix grammar for tasks - replace 'state' with 'stage'.
2018-02-06 19:28:20 -05:00
Jeff Geerling
19f0b9ba92 Fix grammar for tasks - replace 'state' with 'stage'. 2018-02-06 16:57:59 -06:00
Ryan Petrello
b1c4c75360 Merge pull request #1141 from ryanpetrello/swagger
a bit of extra Swagger doc tinkering
2018-02-06 14:33:24 -05:00
Ryan Petrello
cc3659d375 fix a busted swagger import 2018-02-06 13:43:31 -05:00
Ryan Petrello
b1695fe107 add instructions for generating Swagger/OpenAPI docs 2018-02-06 13:37:33 -05:00
Jake McDermott
8cd0870253 Merge pull request #1135 from chrismeyersfsu/tests-recent_jobs_xss
xss test for per-host recent jobs popup
2018-02-06 11:51:05 -05:00
Ryan Petrello
84dc40d141 Merge pull request #1124 from ryanpetrello/swagger
add support for building swagger/OpenAPI JSON
2018-02-06 11:12:36 -05:00
Ryan Petrello
8b976031cb use VERSION_TARGET for Swagger doc generation 2018-02-06 10:48:51 -05:00
Chris Meyers
aaf87c0c04 xss test for per-host recent jobs popup 2018-02-06 10:37:00 -05:00
Ryan Petrello
7ff9f0b7d1 build example Swagger request and response bodies from our API tests 2018-02-06 10:36:25 -05:00
Ryan Petrello
527594285f more Swagger template markup 2018-02-06 10:12:58 -05:00
Ryan Petrello
07dfab648c add some tests to prove that OpenAPI JSON compilation works 2018-02-06 10:12:58 -05:00
Ryan Petrello
10974159b5 add support for marking Swagger paths deprecated 2018-02-06 10:12:58 -05:00
Ryan Petrello
ac7c5f8648 clean up API markdown docs 2018-02-06 10:12:57 -05:00
Ryan Petrello
57c22c20b2 add support for building swagger/OpenAPI JSON
to build, run `make swagger`; a file named `swagger.json` will be
written to the current working directory
2018-02-06 10:12:57 -05:00
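A hypothetical follow-up once `swagger.json` exists: a quick, standard-library-only sanity check of the generated spec (not part of the Makefile target itself).

```python
import json

with open('swagger.json') as f:
    spec = json.load(f)

print(spec.get('swagger') or spec.get('openapi'))   # spec version
print(len(spec.get('paths', {})), 'documented paths')
```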
Matthew Jones
c61efc0af8 Add information on enabled flag 2018-02-05 15:44:26 -05:00
Ryan Petrello
772fcc9149 Merge pull request #1097 from rbywater/feature/preferipv4
Add ability to select to prefer IPv4 addresses for ansible_ssh_host
2018-02-05 14:57:10 -05:00
Matthew Jones
8e94a9e599 Adding capacity docs
Updating capacity for callback jobs to include parent process impact
2018-02-05 09:49:01 -05:00
Shane McDonald
1e9b0c2786 Merge pull request #1130 from shanemcd/fix-etcd-template
Fix variable reference in k8s etcd template
2018-02-05 09:18:20 -05:00
Richard Bywater
5e5790e7d1 Use correct source_vars syntax 2018-02-05 12:45:52 +13:00
Richard Bywater
9f8b9b8d7f Fix unit test 2018-02-05 08:55:10 +13:00
Richard Bywater
6d69087db8 Add prefer_ipv4 to whitelist and add unit test for config value 2018-02-05 08:55:10 +13:00
Richard Bywater
a737663dde Add ability to select to prefer IPv4 addresses for ansible_ssh_host
Currently Cloudforms can return a mix of IPv4 and IPv6 addresses in the
ipaddresses field, and this mix comes in a "random" order (that is, the
first entry may be IPv4 sometimes but IPv6 other times). If you wish to
always use IPv4 for the ansible_ssh_host value, this is problematic.

This change adds a new prefer_ipv4 flag which will look for the first
IPv4 address in the ipaddresses list and uses that instead of just the
first entry.
2018-02-05 08:55:10 +13:00
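A hedged illustration of the prefer_ipv4 selection described above, not the actual inventory import code:

```python
# Sketch: pick the first IPv4 entry when prefer_ipv4 is set, otherwise fall back
# to the first address in the list.
import ipaddress

def pick_ansible_ssh_host(ipaddresses, prefer_ipv4=False):
    if prefer_ipv4:
        for addr in ipaddresses:
            try:
                if isinstance(ipaddress.ip_address(addr), ipaddress.IPv4Address):
                    return addr
            except ValueError:
                continue  # skip entries that are not valid IP literals
    return ipaddresses[0] if ipaddresses else None

print(pick_ansible_ssh_host(['fe80::1', '10.0.0.5'], prefer_ipv4=True))  # 10.0.0.5
```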
Shane McDonald
dce934577b Fix variable reference in k8s etcd template 2018-02-03 10:29:53 -05:00
Jake McDermott
3d421cc595 Merge pull request #1078 from jakemcdermott/saml-ldap-updates
update configuration views for multiple LDAP servers, SAML 2FA, and SAML attribute mapping
2018-02-02 12:15:44 -05:00
Ryan Petrello
93c8cc9f8e Merge pull request #696 from jladdjr/awx_349_custom_cred_write_multiple_files
Feature: Multi-file support for Credential Types
2018-02-02 11:39:11 -05:00
Chris Meyers
1808559586 Merge pull request #1102 from chrismeyersfsu/tests-job_schedules_xss
add xss test for jobs schedules
2018-02-02 11:29:42 -05:00
Jim Ladd
d558299b1f Add test for injecting multiple files 2018-02-02 11:07:13 -05:00
Bill Nottingham
ef5b040f70 Merge pull request #1121 from jeis2497052/devel
Propose small spelling changes
2018-02-02 10:55:23 -05:00
John Eismeier
026cbeb018 Propose small spelling changes 2018-02-02 10:49:55 -05:00
Matthew Jones
6163cc6b5c Merge pull request #1058 from ansible/scalable_clustering
Implement Container Cluster-based dynamic scaling
2018-02-02 09:22:06 -05:00
Brian Duffy
68057560e5 [bugfix-pem-validation] added unit test to simulate catted data
Signed-off-by: Brian Duffy <bmduffy@gmail.com>
2018-02-02 01:20:31 +00:00
Brian Duffy
047ff7b55f [bugfix-pem-validation]
Signed-off-by: Brian Duffy <bmduffy@gmail.com>
2018-02-01 23:50:02 +00:00
Marliana Lara
d4a461e5b4 Switch Array.map() in favor of Array.forEach() 2018-02-01 16:57:10 -05:00
Marliana Lara
f9265ee329 Create an InstancePolicyList directive to replace the pre-existing
modal implementation

* Remove Instance-List-Policy controller
* Replace let with const when values aren't being reassigned
* Update CapacityAdjuster directive to use replace:true
* Assign fewer values that are specific to the element
* Add more error handling
2018-02-01 16:57:10 -05:00
Marliana Lara
fa70d108d7 Apply UI feedback changes
* Remove input slider css mixin
* Remove unused dependencies
* Improve error handling by plugging in the ProcessErrors factory
2018-02-01 16:57:10 -05:00
Marliana Lara
e07f441e32 Add Instance enable/disable toggle to list 2018-02-01 16:57:10 -05:00
Marliana Lara
70786c53a7 Add capacity adjuster directive 2018-02-01 16:57:10 -05:00
Marliana Lara
342958ece3 Add stringToNumber directive 2018-02-01 16:57:09 -05:00
Marliana Lara
368101812c Add Instance and InstanceGroup models 2018-02-01 16:57:09 -05:00
Matthew Jones
70bf78e29f Apply capacity algorithm changes
* This also adds fields to the instance view for tracking cpu and
  memory usage as well as information on what the capacity ranges are
* Also adds a flag for enabling/disabling instances which removes them
  from all queues and has them stop processing new work
* The capacity is now based almost exclusively on some value relative
  to forks
* capacity_adjustment allows you to commit an instance to a certain
  amount of forks, cpu focused or memory focused
* Each job run adds a single fork overhead (that's the reasoning
  behind the +1)
2018-02-01 16:57:09 -05:00
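A rough sketch of the fork-based capacity idea described above; the interpolation formula and names are assumptions for illustration, not the exact AWX algorithm:

```python
# Illustrative only: capacity_adjustment slides an instance between a CPU-focused
# (0.0) and a memory-focused (1.0) fork capacity.
def instance_capacity(cpu_capacity, mem_capacity, capacity_adjustment):
    return int(cpu_capacity + capacity_adjustment * (mem_capacity - cpu_capacity))

def job_impact(forks):
    """Each job run carries a one-fork overhead (the '+1' mentioned above)."""
    return forks + 1

print(instance_capacity(cpu_capacity=16, mem_capacity=40, capacity_adjustment=0.5))  # 28
print(job_impact(5))  # 6
```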
Matthew Jones
6a85fc38dd Add scalable cluster kubernetes support 2018-02-01 16:57:09 -05:00
Matthew Jones
6e9930a45f Use on_commit hook for triggering ig policy
* also Apply console handlers to loggers for dev environment
2018-02-01 16:56:43 -05:00
Matthew Jones
d9e774c4b6 Updates for automatic triggering of policies
* Switch policy router queue to not be "tower" so that we don't
  fall into a chicken/egg scenario
* Show fixed policy list in serializer so a user can determine if
  an instance is manually managed
* Change IG membership mixin to not directly handle applying topology
  changes. Instead it just makes sure the policy instance list is
  accurate
* Add create/delete hooks for instances and groups to trigger policy
  re-evaluation
* Update policy algorithm for fairer distribution
* Fix an issue where CELERY_ROUTES wasn't renamed after celery/django
  upgrade
* Update unit tests to be more explicit
* Update count calculations used by algorithm to only consider
  non-manual instances
* Adding unit tests and fixture
* Don't propagate logging messages from awx.main.tasks and
  awx.main.scheduler
* Use advisory lock to prevent policy eval conflicts
* Allow updating instance groups from view
2018-02-01 16:56:16 -05:00
Matthew Jones
56abfa732e Adding initial instance group policies
and policy evaluation planner
2018-02-01 16:56:15 -05:00
Matthew Jones
c819560d39 Add automatic deprovisioning support, only enabled for openshift
* Implement a config watcher for service restarts
* If the configmap bind point changes then restart all services
2018-02-01 16:51:40 -05:00
Chris Meyers
0e97dc4b84 Beat and celery clustering fixes
* use embedded beat rather than standalone
* dynamically set celeryd hostname at runtime
* add embedded beat flag to celery startup
* Embedded beat mode routes will piggyback on the celery worker setup signal
2018-02-01 16:47:33 -05:00
Matthew Jones
624289bed7 Add support for directly managing instance groups
* Associating/Disassociating an instance with a group
* Triggering a topology rebuild on that change
* Force rabbitmq cleanup of offline nodes
* Automatically check for dependent service startup
* Fetch and set hostname for celery so it doesn't clobber other
  celeries
* Rely on celery init signals to dynamically set listen queues
* Removing old total_capacity instance manager property
2018-02-01 16:46:44 -05:00
Matthew Jones
6ede1dfbea Update openshift installer to support rabbitmq autoscale
* Switch rabbitmq container out for one that supports autoscale
* Add etcd pod to support autoscale negotiation
2018-02-01 16:38:10 -05:00
Chris Meyers
c9ff3e99b8 celeryd attach to queues dynamically
* Based on the tower topology (Instance and InstanceGroup
relationships), have celery dynamically listen to queues on boot
* Add celery task capable of "refreshing" what queues each celeryd
worker listens to. This will be used to support changes in the topology.
* Cleaned up some celery task definitions.
* Converged wrongly targeted job launch/finish messages to the 'tower'
queue, rather than a one-off queue.
* Dynamically route celery tasks destined for the local node
* separate beat process

add support for separate beat process
2018-02-01 16:37:33 -05:00
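A hedged sketch of deriving a node's listen queues from the Instance/InstanceGroup topology; the names are illustrative, not the actual celery signal handler:

```python
# Sketch: a node listens on the broadcast queue, its own hostname queue, and the
# queue of every instance group it belongs to.
def queues_for_host(hostname, instance_groups):
    queues = {'tower_broadcast_all', hostname}
    for group_name, hostnames in instance_groups.items():
        if hostname in hostnames:
            queues.add(group_name)
    return queues

print(sorted(queues_for_host('awx-1', {'tower': ['awx-1', 'awx-2'],
                                       'thepentagon': ['isolated-1']})))
# ['awx-1', 'tower', 'tower_broadcast_all']
```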
John Mitchell
7e400413db Merge pull request #625 from jlmitch5/fixXSS
fix xss vulnerabilities
2018-02-01 11:49:35 -05:00
Chris Meyers
290a296f9f add xss test for jobs schedules
* Test for tooltip regression on job schedules list entries
2018-02-01 10:55:13 -05:00
mabashian
e57d200d6e Implemented generic prompt modal for launching and saving launch configurations. Added UI support for prompting on job template schedules. 2018-01-31 15:40:23 -05:00
Jim Ladd
4c1dddcaf9 Respond to PR feedback 2018-01-31 11:22:01 -05:00
John Mitchell
28596b7d5e fix xss vulnerabilities
- on host recent jobs popover
- on schedule name tooltip
2018-01-30 16:30:00 -05:00
Jake McDermott
a2e274d1f9 Merge pull request #623 from jakemcdermott/fix-ansible-tower-7871
bump templates form credential_types page limit
2018-01-30 14:48:36 -05:00
Ryan Petrello
d96cc51431 Merge pull request #624 from ryanpetrello/release_3.2.3
fix a bug when testing UDP-based logging configuration
2018-01-30 10:27:39 -05:00
Jake McDermott
4cd6a6e566 add fields for saml + 2fa 2018-01-30 00:28:13 -05:00
Jake McDermott
ed138fccf6 add forms + select for additional ldap servers 2018-01-30 00:28:02 -05:00
Jake McDermott
44d223b6c9 add fields for team and organization saml attribute mappings 2018-01-30 00:27:51 -05:00
Ryan Petrello
982539f444 fix a bug when testing UDP-based logging configuration
see: https://github.com/ansible/ansible-tower/issues/7868
2018-01-29 12:05:51 -05:00
Jake McDermott
4c79e6912e bump templates form credential_types page limit 2018-01-28 21:50:30 -05:00
Jim Ladd
4b13bcdce2 Update tests for custom credentials 2018-01-28 21:02:48 -05:00
Jim Ladd
18178c83b3 Validate single and multi-file injection 2018-01-28 21:02:47 -05:00
Jim Ladd
7aa1ae69b3 Add backwards compatibility for injecting single file 2018-01-28 20:50:44 -05:00
Jim Ladd
286a70f2ca Add support for multi-file injection in custom creds 2018-01-28 20:50:43 -05:00
Matthew Jones
42098bfa6d Merge pull request #621 from ryanpetrello/set_stat_workflow_race_condition
don't process artifacts from custom `set_stat` calls asynchronously
2018-01-24 10:27:19 -05:00
Wayne Witzel III
b205630490 Merge pull request #622 from wwitzel3/release_3.2.3
Wait for Slack RTM API websocket connection to be established
2018-01-24 08:59:45 -05:00
Wayne Witzel III
aa469d730e Wait for Slack RTM API websocket connection to be established 2018-01-24 13:48:42 +00:00
Ryan Petrello
d57470ce49 don't process artifacts from custom set_stat calls asynchronously
previously, we persisted custom artifacts to the database on
`Job.artifacts` via the callback receiver.  when the callback receiver
is backed up processing events, this can result in race conditions for
workflows where a playbook calls `set_stat()`, but the artifact data is
not persisted in the database before the next job in the workflow starts

see: https://github.com/ansible/ansible-tower/issues/7831
2018-01-23 17:09:23 -05:00
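A hedged sketch of the change: persist the playbook's stats artifacts while the event itself is processed, rather than in a separate delayed task (names are illustrative, not the actual callback receiver code):

```python
# Sketch only: `job` stands in for a Django model instance with an `artifacts`
# field; the handler name is an assumption.
def handle_playbook_on_stats(job, event_data):
    artifacts = event_data.get('artifact_data')
    if artifacts:
        job.artifacts = artifacts
        # Saved before the workflow scheduler can start the next node, so the
        # follow-on job sees the artifacts even when the receiver is backlogged.
        job.save(update_fields=['artifacts'])
```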
Ryan Petrello
fa9c6287f7 Merge pull request #620 from ryanpetrello/fix-815
don't overwrite env['ANSIBLE_LIBRARY'] when fact caching is enabled
2018-01-15 13:55:42 -05:00
Ryan Petrello
2955842c44 don't overwrite env['ANSIBLE_LIBRARY'] when fact caching is enabled
see: https://github.com/ansible/awx/issues/815
see: https://github.com/ansible/ansible-tower/issues/7830
2018-01-15 13:39:46 -05:00
Ryan Petrello
64028dba66 Merge pull request #619 from ryanpetrello/file_based_tower_fact_cache
replace our memcached-based fact cache implementation with local files
2018-01-15 11:57:18 -05:00
Ryan Petrello
e1d50a43fd only allow facts to cache in the proper file system location 2018-01-15 11:45:49 -05:00
Ryan Petrello
983b192a45 replace our memcached-based fact cache implementation with local files
see: https://github.com/ansible/ansible-tower/issues/7840
2018-01-15 09:16:44 -05:00
Ryan Petrello
e0c04df1ee Merge pull request #618 from ryanpetrello/become_who_you_were_meant_to_be
add support for new "BECOME" prompt in Ansible 2.5+
2018-01-12 11:45:08 -05:00
Ryan Petrello
563f730268 add support for new "BECOME" prompt in Ansible 2.5+
see: https://github.com/ansible/ansible-tower/issues/7850
2018-01-12 10:40:40 -05:00
Ryan Petrello
89b9d7ac8b Merge pull request #617 from ryanpetrello/release_3.2.3
fix a bug in inventory generation for isolated nodes
2018-01-11 11:04:09 -05:00
Ryan Petrello
b8758044e0 fix a bug in inventory generation for isolated nodes
see: https://github.com/ansible/ansible-tower/issues/7849
related: https://github.com/ansible/awx/pull/551
2018-01-11 10:41:58 -05:00
Ryan Petrello
4c40791d06 properly handle unicode for isolated job buffers
from: https://docs.python.org/2/library/stringio.html#module-cStringIO
"Unlike the StringIO module, this module is not able to accept Unicode
strings that cannot be encoded as plain ASCII strings."

see: https://github.com/ansible/ansible-tower/issues/7846
2018-01-10 10:56:59 -05:00
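A minimal sketch of the limitation quoted above, assuming a Python 2.7 interpreter:

```python
# Python 2.7 only: cStringIO accepts unicode only if it encodes to plain ASCII,
# while StringIO.StringIO handles arbitrary unicode.
from StringIO import StringIO
from cStringIO import StringIO as cStringIO

StringIO().write(u"r\xe9sum\xe9")         # fine

try:
    cStringIO().write(u"r\xe9sum\xe9")
except UnicodeEncodeError as exc:
    print(exc)                             # non-ASCII unicode is rejected
```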
Aaron Tan
a2fd78add4 Implement item copy feature
See the acceptance doc for implementation details.

Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
2018-01-02 10:20:44 -05:00
741 changed files with 21093 additions and 63817 deletions

.github/BOTMETA.yml vendored

@@ -12,5 +12,5 @@ files:
labels: component:installer
macros:
team_api: wwitzel3 matburt chrismeyersfsu cchurch AlanCoding ryanpetrello jangstur
team_ui: jlmitch5 jaredevantabor mabashian gconsidine marshmalien benthomasson
team_api: wwitzel3 matburt chrismeyersfsu cchurch AlanCoding ryanpetrello rooftopcellist
team_ui: jlmitch5 jaredevantabor mabashian marshmalien benthomasson jakemcdermott

.gitignore vendored

@@ -22,6 +22,7 @@ awx/ui/build_test
awx/ui/client/languages
awx/ui/templates/ui/index.html
awx/ui/templates/ui/installing.html
/tower-license/**
# Tower setup playbook testing
setup/test/roles/postgresql


@@ -1,31 +0,0 @@
sudo: false
language: python
python:
- '2.7'
env:
- TOXENV=api-lint
- TOXENV=api
- TOXENV=ui-lint
- TOXENV=ui
install:
- pip install tox
script:
- tox
# after_success:
# - TOXENV=coveralls tox
addons:
apt:
packages:
- swig
- libxmlsec1-dev
- postgresql-9.5
- libssl-dev
cache:
pip: true
directories:
- node_modules
- .tox
services:
- mongodb
# Enable when we stop using sqlite for API tests
# - postgresql


@@ -24,6 +24,7 @@ Have questions about this document or anything not covered here? Come chat with
* [Start a shell](#start-the-shell)
* [Create a superuser](#create-a-superuser)
* [Load the data](#load-the-data)
* [Building API Documentation](#build-documentation)
* [Accessing the AWX web interface](#accessing-the-awx-web-interface)
* [Purging containers and images](#purging-containers-and-images)
* [What should I work on?](#what-should-i-work-on)
@@ -261,6 +262,20 @@ You can optionally load some demo data. This will create a demo project, invento
> This information will persist in the database running in the `tools_postgres_1` container, until the container is removed. You may periodically need to recreate
this container, and thus the database, if the database schema changes in an upstream commit.
##### Building API Documentation
AWX includes support for building [Swagger/OpenAPI
documentation](https://swagger.io). To build the documentation locally, run:
```bash
(container)/awx_devel$ make swagger
```
This will write a file named `swagger.json` that contains the API specification
in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
### Accessing the AWX web interface
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).


@@ -23,6 +23,7 @@ This document provides a guide for installing AWX.
- [Kubernetes](#kubernetes)
- [Prerequisites](#prerequisites-2)
- [Pre-build steps](#pre-build-steps-1)
- [Configuring Helm](#configuring-helm)
- [Start the build](#start-the-build-1)
- [Accessing AWX](#accessing-awx-1)
- [SSL Termination](#ssl-termination)
@@ -70,6 +71,7 @@ The system that runs the AWX service will need to satisfy the following requirem
- At least 2 cpu cores
- At least 20GB of space
- Running Docker, Openshift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 9.4.
### AWX Tunables
@@ -82,9 +84,9 @@ We currently support running AWX as a containerized application using Docker ima
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
In the sections below, you'll find deployment details and instructions for each platform:
- [Docker and Docker Compose](#docker-and-docker-compose)
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes).
- [Kubernetes](#kubernetes)
- [Docker or Docker Compose](#docker-or-docker-compose).
### Official vs Building Images
@@ -115,6 +117,15 @@ To complete a deployment to OpenShift, you will obviously need access to an Open
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on the cluster.
The default resource requests per-pod requires:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/openshift/defaults/main.yml](/installer/openshift/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)
#### Deploying to Minishift
Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).
@@ -287,6 +298,15 @@ A Kubernetes deployment will require you to have access to a Kubernetes cluster
The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.
The default resource requests per-pod requires:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/kubernetes/defaults/main.yml](/installer/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
### Pre-build steps
Before starting the build process, review the [inventory](./installer/inventory) file and provide values for the following variables found in the `[all:vars]` section, uncommenting them where necessary. Make sure the OpenShift and standalone Docker sections are commented out:
@@ -303,6 +323,12 @@ Before starting the build process, review the [inventory](./installer/inventory)
> These settings should be used if building your own base images. You'll need access to an external registry and are responsible for making sure your kube cluster can talk to it and use it. If these are undefined and the dockerhub_ configuration settings are uncommented then the images will be pulled from dockerhub instead
### Configuring Helm
If you want the AWX installer to manage creating the database pod (rather than installing and configuring PostgreSQL on your own), you will need a working `helm` installation; details can be found here: [https://docs.helm.sh/using_helm/#quickstart-guide](https://docs.helm.sh/using_helm/#quickstart-guide).
Newer Kubernetes clusters with RBAC enabled will also need a service account created; follow the instructions here: [https://docs.helm.sh/using_helm/#role-based-access-control](https://docs.helm.sh/using_helm/#role-based-access-control)
### Start the build
After making changes to the `inventory` file use `ansible-playbook` to begin the install


@@ -23,7 +23,7 @@ COMPOSE_HOST ?= $(shell hostname)
VENV_BASE ?= /venv
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /celerybeat-schedule
CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db
DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering
# Python packages to install only from source (not from binary wheels)
@@ -184,7 +184,6 @@ requirements_awx: virtualenv_awx
requirements_awx_dev:
$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt
$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_dev_uninstall.txt
requirements: requirements_ansible requirements_awx
@@ -216,13 +215,11 @@ init:
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(MANAGEMENT_COMMAND) provision_instance --hostname=$(COMPOSE_HOST); \
$(MANAGEMENT_COMMAND) register_queue --queuename=tower --hostnames=$(COMPOSE_HOST);\
$(MANAGEMENT_COMMAND) register_queue --queuename=tower --instance_percent=100;\
if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat > /root/.ssh/authorized_keys'; \
elif [ "$(AWX_GROUP_QUEUES)" != "tower" ]; then \
$(MANAGEMENT_COMMAND) register_queue --queuename=$(firstword $(subst $(comma), ,$(AWX_GROUP_QUEUES))) --hostnames=$(COMPOSE_HOST); \
fi;
# Refresh development environment after pulling new code.
@@ -299,7 +296,7 @@ uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --master-fifo=/awxfifo --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1-once="exec:/bin/sh -c '[ -f /tmp/celery_pid ] && kill -1 `cat /tmp/celery_pid`'"
uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --master-fifo=/awxfifo --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1-once="exec:/bin/sh -c '[ -f /tmp/celery_pid ] && kill -1 `cat /tmp/celery_pid` || true'"
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -326,7 +323,7 @@ celeryd:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
celery worker -A awx -l DEBUG -B -Ofair --autoscale=100,4 --schedule=$(CELERY_SCHEDULE_FILE) -Q tower_scheduler,tower_broadcast_all,$(COMPOSE_HOST),$(AWX_GROUP_QUEUES) -n celery@$(COMPOSE_HOST) --pidfile /tmp/celery_pid
celery worker -A awx -l DEBUG -B -Ofair --autoscale=100,4 --schedule=$(CELERY_SCHEDULE_FILE) -n celery@$(COMPOSE_HOST) --pidfile /tmp/celery_pid
# Run to start the zeromq callback receiver
receiver:
@@ -338,9 +335,6 @@ receiver:
nginx:
nginx -g "daemon off;"
rdb:
$(PYTHON) tools/rdb.py
jupyter:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -365,6 +359,12 @@ pyflakes: reports
pylint: reports
@(set -o pipefail && $@ | reports/$@.report)
swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
check: flake8 pep8 # pyflakes pylint
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests


@@ -7,7 +7,7 @@ import sys
import warnings
from pkg_resources import get_distribution
from .celery import app as celery_app
from .celery import app as celery_app # noqa
__version__ = get_distribution('awx').version


@@ -2,126 +2,21 @@
# All Rights Reserved.
# Python
import urllib
import logging
# Django
from django.conf import settings
from django.utils.timezone import now as tz_now
from django.utils.encoding import smart_text
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework import authentication
from rest_framework import exceptions
from rest_framework import HTTP_HEADER_ENCODING
# AWX
from awx.main.models import AuthToken
# Django OAuth Toolkit
from oauth2_provider.contrib.rest_framework import OAuth2Authentication
logger = logging.getLogger('awx.api.authentication')
class TokenAuthentication(authentication.TokenAuthentication):
'''
Custom token authentication using tokens that expire and are associated
with parameters specific to the request.
'''
model = AuthToken
@staticmethod
def _get_x_auth_token_header(request):
auth = request.META.get('HTTP_X_AUTH_TOKEN', '')
if isinstance(auth, type('')):
# Work around django test client oddness
auth = auth.encode(HTTP_HEADER_ENCODING)
return auth
@staticmethod
def _get_auth_token_cookie(request):
token = request.COOKIES.get('token', '')
if token:
token = urllib.unquote(token).strip('"')
return 'token %s' % token
return ''
def authenticate(self, request):
self.request = request
# Prefer the custom X-Auth-Token header over the Authorization header,
# to handle cases where the browser submits saved Basic auth and
# overrides the UI's normal use of the Authorization header.
auth = TokenAuthentication._get_x_auth_token_header(request).split()
if not auth or auth[0].lower() != 'token':
auth = authentication.get_authorization_header(request).split()
# Prefer basic auth over cookie token
if auth and auth[0].lower() == 'basic':
return None
elif not auth or auth[0].lower() != 'token':
auth = TokenAuthentication._get_auth_token_cookie(request).split()
if not auth or auth[0].lower() != 'token':
return None
if len(auth) == 1:
msg = _('Invalid token header. No credentials provided.')
raise exceptions.AuthenticationFailed(msg)
elif len(auth) > 2:
msg = _('Invalid token header. Token string should not contain spaces.')
raise exceptions.AuthenticationFailed(msg)
return self.authenticate_credentials(auth[1])
def authenticate_credentials(self, key):
now = tz_now()
# Retrieve the request hash and token.
try:
request_hash = self.model.get_request_hash(self.request)
token = self.model.objects.select_related('user').get(
key=key,
request_hash=request_hash,
)
except self.model.DoesNotExist:
raise exceptions.AuthenticationFailed(AuthToken.reason_long('invalid_token'))
# Tell the user why their token was previously invalidated.
if token.invalidated:
raise exceptions.AuthenticationFailed(AuthToken.reason_long(token.reason))
# Explicitly handle expired tokens
if token.is_expired(now=now):
token.invalidate(reason='timeout_reached')
raise exceptions.AuthenticationFailed(AuthToken.reason_long('timeout_reached'))
# Token invalidated due to session limit config being reduced
# Session limit reached invalidation will also take place on authentication
if settings.AUTH_TOKEN_PER_USER != -1:
if not token.in_valid_tokens(now=now):
token.invalidate(reason='limit_reached')
raise exceptions.AuthenticationFailed(AuthToken.reason_long('limit_reached'))
# If the user is inactive, then return an error.
if not token.user.is_active:
raise exceptions.AuthenticationFailed(_('User inactive or deleted'))
# Refresh the token.
# The token is extended from "right now" + configurable setting amount.
token.refresh(now=now)
# Return the user object and the token.
return (token.user, token)
class TokenGetAuthentication(TokenAuthentication):
def authenticate(self, request):
if request.method.lower() == 'get':
token = request.GET.get('token', None)
if token:
request.META['HTTP_X_AUTH_TOKEN'] = 'Token %s' % token
return super(TokenGetAuthentication, self).authenticate(request)
class LoggedBasicAuthentication(authentication.BasicAuthentication):
def authenticate(self, request):
@@ -137,3 +32,28 @@ class LoggedBasicAuthentication(authentication.BasicAuthentication):
if not settings.AUTH_BASIC_ENABLED:
return
return super(LoggedBasicAuthentication, self).authenticate_header(request)
class SessionAuthentication(authentication.SessionAuthentication):
def authenticate_header(self, request):
return 'Session'
def enforce_csrf(self, request):
return None
class LoggedOAuth2Authentication(OAuth2Authentication):
def authenticate(self, request):
ret = super(LoggedOAuth2Authentication, self).authenticate(request)
if ret:
user, token = ret
username = user.username if user else '<none>'
logger.debug(smart_text(
u"User {} performed a {} to {} through the API using OAuth token {}.".format(
username, request.method, request.path, token.pk
)
))
setattr(user, 'oauth_scopes', [x for x in token.scope.split() if x])
return ret


@@ -1,12 +1,13 @@
# Django
from django.utils.translation import ugettext_lazy as _
# Tower
# AWX
from awx.conf import fields, register
from awx.api.fields import OAuth2ProviderField
register(
'AUTH_TOKEN_EXPIRATION',
'SESSION_COOKIE_AGE',
field_class=fields.IntegerField,
min_value=60,
label=_('Idle Time Force Log Out'),
@@ -14,17 +15,15 @@ register(
category=_('Authentication'),
category_slug='authentication',
)
register(
'AUTH_TOKEN_PER_USER',
'SESSIONS_PER_USER',
field_class=fields.IntegerField,
min_value=-1,
label=_('Maximum number of simultaneous logins'),
help_text=_('Maximum number of simultaneous logins a user may have. To disable enter -1.'),
label=_('Maximum number of simultaneous logged in sessions'),
help_text=_('Maximum number of simultaneous logged in sessions a user may have. To disable enter -1.'),
category=_('Authentication'),
category_slug='authentication',
)
register(
'AUTH_BASIC_ENABLED',
field_class=fields.BooleanField,
@@ -33,3 +32,16 @@ register(
category=_('Authentication'),
category_slug='authentication',
)
register(
'OAUTH2_PROVIDER',
field_class=OAuth2ProviderField,
default={'ACCESS_TOKEN_EXPIRE_SECONDS': 315360000000,
'AUTHORIZATION_CODE_EXPIRE_SECONDS': 600},
label=_('OAuth 2 Timeout Settings'),
help_text=_('Dictionary for customizing OAuth 2 timeouts, available items are '
'`ACCESS_TOKEN_EXPIRE_SECONDS`, the duration of access tokens in the number '
'of seconds, and `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
'authorization grants in the number of seconds.'),
category=_('Authentication'),
category_slug='authentication',
)

awx/api/exceptions.py Normal file

@@ -0,0 +1,18 @@
# Copyright (c) 2018 Ansible by Red Hat
# All Rights Reserved.
# Django
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import ValidationError
class ActiveJobConflict(ValidationError):
status_code = 409
def __init__(self, active_jobs):
super(ActiveJobConflict, self).__init__({
"error": _("Resource is being used by running jobs."),
"active_jobs": active_jobs
})


@@ -1,10 +1,15 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.
# Django
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework import serializers
# AWX
from awx.conf import fields
__all__ = ['BooleanNullField', 'CharNullField', 'ChoiceNullField', 'VerbatimField']
@@ -66,3 +71,19 @@ class VerbatimField(serializers.Field):
def to_representation(self, value):
return value
class OAuth2ProviderField(fields.DictField):
default_error_messages = {
'invalid_key_names': _('Invalid key names: {invalid_key_names}'),
}
valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS'}
child = fields.IntegerField(min_value=1)
def to_internal_value(self, data):
data = super(OAuth2ProviderField, self).to_internal_value(data)
invalid_flags = (set(data.keys()) - self.valid_key_names)
if invalid_flags:
self.fail('invalid_key_names', invalid_key_names=', '.join(list(invalid_flags)))
return data


@@ -27,13 +27,6 @@ from awx.main.models.credential import CredentialType
from awx.main.models.rbac import RoleAncestorEntry
class MongoFilterBackend(BaseFilterBackend):
# FIX: Note that MongoEngine can't use the filter backends from DRF
def filter_queryset(self, request, queryset, view):
return queryset
class V1CredentialFilterBackend(BaseFilterBackend):
'''
For /api/v1/ requests, filter out v2 (custom) credentials
@@ -138,6 +131,8 @@ class FieldLookupBackend(BaseFilterBackend):
new_parts.append(name_alt)
else:
field = model._meta.get_field(name)
if 'auth' in name or 'token' in name:
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
if isinstance(field, ForeignObjectRel) and getattr(field.field, '__prevent_search__', False):
raise PermissionDenied(_('Filtering on %s is not allowed.' % name))
elif getattr(field, '__prevent_search__', False):
@@ -276,7 +271,7 @@ class FieldLookupBackend(BaseFilterBackend):
# TODO: remove after API v1 deprecation period
if queryset.model._meta.object_name in ('JobTemplate', 'Job') and key in (
'credential', 'vault_credential', 'cloud_credential', 'network_credential'
):
) or queryset.model._meta.object_name in ('InventorySource', 'InventoryUpdate') and key == 'credential':
key = 'credentials'
# Make legacy v1 Credential fields work for backwards compatability


@@ -5,6 +5,7 @@
import inspect
import logging
import time
import six
# Django
from django.conf import settings
@@ -18,6 +19,7 @@ from django.utils.encoding import smart_text
from django.utils.safestring import mark_safe
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth import views as auth_views
# Django REST Framework
from rest_framework.authentication import get_authorization_header
@@ -26,6 +28,10 @@ from rest_framework import generics
from rest_framework.response import Response
from rest_framework import status
from rest_framework import views
from rest_framework.permissions import AllowAny
# cryptography
from cryptography.fernet import InvalidToken
# AWX
from awx.api.filters import FieldLookupBackend
@@ -33,9 +39,9 @@ from awx.main.models import * # noqa
from awx.main.access import access_registry
from awx.main.utils import * # noqa
from awx.main.utils.db import get_all_field_names
from awx.api.serializers import ResourceAccessListElementSerializer
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer
from awx.api.versioning import URLPathVersioning, get_request_version
from awx.api.metadata import SublistAttachDetatchMetadata
from awx.api.metadata import SublistAttachDetatchMetadata, Metadata
__all__ = ['APIView', 'GenericAPIView', 'ListAPIView', 'SimpleListAPIView',
'ListCreateAPIView', 'SubListAPIView', 'SubListCreateAPIView',
@@ -47,12 +53,41 @@ __all__ = ['APIView', 'GenericAPIView', 'ListAPIView', 'SimpleListAPIView',
'ResourceAccessList',
'ParentMixin',
'DeleteLastUnattachLabelMixin',
'SubListAttachDetachAPIView',]
'SubListAttachDetachAPIView',
'CopyAPIView']
logger = logging.getLogger('awx.api.generics')
analytics_logger = logging.getLogger('awx.analytics.performance')
class LoggedLoginView(auth_views.LoginView):
def post(self, request, *args, **kwargs):
original_user = getattr(request, 'user', None)
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
if current_user and getattr(current_user, 'pk', None) and current_user != original_user:
logger.info("User {} logged in.".format(current_user.username))
if request.user.is_authenticated:
return ret
else:
ret.status_code = 401
return ret
class LoggedLogoutView(auth_views.LogoutView):
def dispatch(self, request, *args, **kwargs):
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
current_user = getattr(request, 'user', None)
if (not current_user or not getattr(current_user, 'pk', True)) \
and current_user != original_user:
logger.info("User {} logged out.".format(original_user.username))
return ret
def get_view_name(cls, suffix=None):
'''
Wrapper around REST framework get_view_name() to support get_name() method
@@ -91,8 +126,17 @@ def get_view_description(cls, request, html=False):
return mark_safe(desc)
def get_default_schema():
if settings.SETTINGS_MODULE == 'awx.settings.development':
from awx.api.swagger import AutoSchema
return AutoSchema()
else:
return views.APIView.schema
class APIView(views.APIView):
schema = get_default_schema()
versioning_class = URLPathVersioning
def initialize_request(self, request, *args, **kwargs):
@@ -176,27 +220,14 @@ class APIView(views.APIView):
and in the browsable API.
"""
func = self.settings.VIEW_DESCRIPTION_FUNCTION
return func(self.__class__, self._request, html)
return func(self.__class__, getattr(self, '_request', None), html)
def get_description_context(self):
return {
'view': self,
'docstring': type(self).__doc__ or '',
'new_in_13': getattr(self, 'new_in_13', False),
'new_in_14': getattr(self, 'new_in_14', False),
'new_in_145': getattr(self, 'new_in_145', False),
'new_in_148': getattr(self, 'new_in_148', False),
'new_in_200': getattr(self, 'new_in_200', False),
'new_in_210': getattr(self, 'new_in_210', False),
'new_in_220': getattr(self, 'new_in_220', False),
'new_in_230': getattr(self, 'new_in_230', False),
'new_in_240': getattr(self, 'new_in_240', False),
'new_in_300': getattr(self, 'new_in_300', False),
'new_in_310': getattr(self, 'new_in_310', False),
'new_in_320': getattr(self, 'new_in_320', False),
'new_in_330': getattr(self, 'new_in_330', False),
'new_in_api_v2': getattr(self, 'new_in_api_v2', False),
'deprecated': getattr(self, 'deprecated', False),
'swagger_method': getattr(self.request, 'swagger_method', None),
}
def get_description(self, request, html=False):
@@ -214,7 +245,7 @@ class APIView(views.APIView):
context['deprecated'] = True
description = render_to_string(template_list, context)
if context.get('deprecated'):
if context.get('deprecated') and context.get('swagger_method') is None:
# render deprecation messages at the very top
description = '\n'.join([render_to_string('api/_deprecated.md', context), description])
return description
@@ -325,13 +356,6 @@ class ListAPIView(generics.ListAPIView, GenericAPIView):
def get_queryset(self):
return self.request.user.get_queryset(self.model)
def paginate_queryset(self, queryset):
page = super(ListAPIView, self).paginate_queryset(queryset)
# Queries RBAC info & stores into list objects
if hasattr(self, 'capabilities_prefetch') and page is not None:
cache_list_capabilities(page, self.capabilities_prefetch, self.model, self.request.user)
return page
def get_description_context(self):
if 'username' in get_all_field_names(self.model):
order_field = 'username'
@@ -747,3 +771,152 @@ class ResourceAccessList(ParentMixin, ListAPIView):
for r in roles:
ancestors.update(set(r.ancestors.all()))
return User.objects.filter(roles__in=list(ancestors)).distinct()
def trigger_delayed_deep_copy(*args, **kwargs):
from awx.main.tasks import deep_copy_model_obj
connection.on_commit(lambda: deep_copy_model_obj.delay(*args, **kwargs))
class CopyAPIView(GenericAPIView):
serializer_class = CopySerializer
permission_classes = (AllowAny,)
copy_return_serializer_class = None
new_in_330 = True
new_in_api_v2 = True
def _get_copy_return_serializer(self, *args, **kwargs):
if not self.copy_return_serializer_class:
return self.get_serializer(*args, **kwargs)
serializer_class_store = self.serializer_class
self.serializer_class = self.copy_return_serializer_class
ret = self.get_serializer(*args, **kwargs)
self.serializer_class = serializer_class_store
return ret
@staticmethod
def _decrypt_model_field_if_needed(obj, field_name, field_val):
if field_name in getattr(type(obj), 'REENCRYPTION_BLACKLIST_AT_COPY', []):
return field_val
if isinstance(field_val, dict):
for sub_field in field_val:
if isinstance(sub_field, six.string_types) \
and isinstance(field_val[sub_field], six.string_types):
try:
field_val[sub_field] = decrypt_field(obj, field_name, sub_field)
except InvalidToken:
# Catching the corner case with v1 credential fields
field_val[sub_field] = decrypt_field(obj, sub_field)
elif isinstance(field_val, six.string_types):
field_val = decrypt_field(obj, field_name)
return field_val
def _build_create_dict(self, obj):
ret = {}
if self.copy_return_serializer_class:
all_fields = Metadata().get_serializer_info(
self._get_copy_return_serializer(), method='POST'
)
for field_name, field_info in all_fields.items():
if not hasattr(obj, field_name) or field_info.get('read_only', True):
continue
ret[field_name] = CopyAPIView._decrypt_model_field_if_needed(
obj, field_name, getattr(obj, field_name)
)
return ret
@staticmethod
def copy_model_obj(old_parent, new_parent, model, obj, creater, copy_name='', create_kwargs=None):
fields_to_preserve = set(getattr(model, 'FIELDS_TO_PRESERVE_AT_COPY', []))
fields_to_discard = set(getattr(model, 'FIELDS_TO_DISCARD_AT_COPY', []))
m2m_to_preserve = {}
o2m_to_preserve = {}
create_kwargs = create_kwargs or {}
for field_name in fields_to_discard:
create_kwargs.pop(field_name, None)
for field in model._meta.get_fields():
try:
field_val = getattr(obj, field.name)
except AttributeError:
continue
# Adjust copy blacklist fields here.
if field.name in fields_to_discard or field.name in [
'id', 'pk', 'polymorphic_ctype', 'unifiedjobtemplate_ptr', 'created_by', 'modified_by'
] or field.name.endswith('_role'):
create_kwargs.pop(field.name, None)
continue
if field.one_to_many:
if field.name in fields_to_preserve:
o2m_to_preserve[field.name] = field_val
elif field.many_to_many:
if field.name in fields_to_preserve and not old_parent:
m2m_to_preserve[field.name] = field_val
elif field.many_to_one and not field_val:
create_kwargs.pop(field.name, None)
elif field.many_to_one and field_val == old_parent:
create_kwargs[field.name] = new_parent
elif field.name == 'name' and not old_parent:
create_kwargs[field.name] = copy_name or field_val + ' copy'
elif field.name in fields_to_preserve:
create_kwargs[field.name] = CopyAPIView._decrypt_model_field_if_needed(
obj, field.name, field_val
)
new_obj = model.objects.create(**create_kwargs)
# Need to save separately because django-crum get_current_user would
# not work properly in a non-request-response-cycle context.
new_obj.created_by = creater
new_obj.save()
for m2m in m2m_to_preserve:
for related_obj in m2m_to_preserve[m2m].all():
getattr(new_obj, m2m).add(related_obj)
if not old_parent:
sub_objects = []
for o2m in o2m_to_preserve:
for sub_obj in o2m_to_preserve[o2m].all():
sub_model = type(sub_obj)
sub_objects.append((sub_model.__module__, sub_model.__name__, sub_obj.pk))
return new_obj, sub_objects
ret = {obj: new_obj}
for o2m in o2m_to_preserve:
for sub_obj in o2m_to_preserve[o2m].all():
ret.update(CopyAPIView.copy_model_obj(obj, new_obj, type(sub_obj), sub_obj, creater))
return ret
def get(self, request, *args, **kwargs):
obj = self.get_object()
create_kwargs = self._build_create_dict(obj)
for key in create_kwargs:
create_kwargs[key] = getattr(create_kwargs[key], 'pk', None) or create_kwargs[key]
return Response({'can_copy': request.user.can_access(self.model, 'add', create_kwargs)})
def post(self, request, *args, **kwargs):
obj = self.get_object()
create_kwargs = self._build_create_dict(obj)
create_kwargs_check = {}
for key in create_kwargs:
create_kwargs_check[key] = getattr(create_kwargs[key], 'pk', None) or create_kwargs[key]
if not request.user.can_access(self.model, 'add', create_kwargs_check):
raise PermissionDenied()
serializer = self.get_serializer(data=request.data)
if not serializer.is_valid():
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
new_obj, sub_objs = CopyAPIView.copy_model_obj(
None, None, self.model, obj, request.user, create_kwargs=create_kwargs,
copy_name=serializer.validated_data.get('name', '')
)
if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role:
new_obj.admin_role.members.add(request.user)
if sub_objs:
permission_check_func = None
if hasattr(type(self), 'deep_copy_permission_check_func'):
permission_check_func = (
type(self).__module__, type(self).__name__, 'deep_copy_permission_check_func'
)
trigger_delayed_deep_copy(
self.model.__module__, self.model.__name__,
obj.pk, new_obj.pk, request.user.pk, sub_objs,
permission_check_func=permission_check_func
)
serializer = self._get_copy_return_serializer(new_obj)
return Response(serializer.data, status=status.HTTP_201_CREATED)
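The new `CopyAPIView` backs the `/copy/` sub-resources wired into the URL files further down: a GET reports whether the requesting user may copy the object, and a POST creates the copy (optionally renamed), scheduling a delayed deep copy of any preserved sub-objects. A minimal client-side sketch follows; the host, credentials, and object id are placeholders, not values from this changeset.

```python
# Sketch only: exercising a copy endpoint served by CopyAPIView.
# Host, credentials, and the object id below are placeholders.
import requests

BASE = 'http://localhost:8013/api/v2'
AUTH = ('admin', 'password')

# GET returns {"can_copy": <bool>} based on request.user.can_access(...)
check = requests.get(BASE + '/job_templates/42/copy/', auth=AUTH)
print(check.json())

if check.json().get('can_copy'):
    # POST performs the copy; CopySerializer accepts an optional new name.
    resp = requests.post(BASE + '/job_templates/42/copy/',
                         json={'name': 'Demo JT copy'}, auth=AUTH)
    print(resp.status_code)  # 201 on success, with the new object serialized
```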

View File

@@ -190,23 +190,6 @@ class Metadata(metadata.SimpleMetadata):
finally:
delattr(view, '_request')
# Add version number in which view was added to Tower.
added_in_version = '1.2'
for version in ('3.2.0', '3.1.0', '3.0.0', '2.4.0', '2.3.0', '2.2.0',
'2.1.0', '2.0.0', '1.4.8', '1.4.5', '1.4', '1.3'):
if getattr(view, 'new_in_%s' % version.replace('.', ''), False):
added_in_version = version
break
metadata['added_in_version'] = added_in_version
# Add API version number in which view was added to Tower.
added_in_api_version = 'v1'
for version in ('v2',):
if getattr(view, 'new_in_api_%s' % version, False):
added_in_api_version = version
break
metadata['added_in_api_version'] = added_in_api_version
# Add type(s) handled by this view/serializer.
if hasattr(view, 'get_serializer'):
serializer = view.get_serializer()

View File

@@ -33,7 +33,7 @@ class OrderedDictLoader(yaml.SafeLoader):
key = self.construct_object(key_node, deep=deep)
try:
hash(key)
except TypeError, exc:
except TypeError as exc:
raise yaml.constructor.ConstructorError(
"while constructing a mapping", node.start_mark,
"found unacceptable key (%s)" % exc, key_node.start_mark

View File

@@ -17,7 +17,7 @@ logger = logging.getLogger('awx.api.permissions')
__all__ = ['ModelAccessPermission', 'JobTemplateCallbackPermission',
'TaskPermission', 'ProjectUpdatePermission', 'InventoryInventorySourcesUpdatePermission',
'UserPermission', 'IsSuperUser']
'UserPermission', 'IsSuperUser', 'InstanceGroupTowerPermission',]
class ModelAccessPermission(permissions.BasePermission):
@@ -103,7 +103,8 @@ class ModelAccessPermission(permissions.BasePermission):
return False
# Always allow superusers
if getattr(view, 'always_allow_superuser', True) and request.user.is_superuser:
if getattr(view, 'always_allow_superuser', True) and request.user.is_superuser \
and not hasattr(request.user, 'oauth_scopes'):
return True
# Check if view supports the request method before checking permission
@@ -226,3 +227,14 @@ class IsSuperUser(permissions.BasePermission):
def has_permission(self, request, view):
return request.user and request.user.is_superuser
class InstanceGroupTowerPermission(ModelAccessPermission):
def has_object_permission(self, request, view, obj):
if request.method == 'DELETE' and obj.name == "tower":
return False
if request.method in ['PATCH', 'PUT'] and obj.name == 'tower' and \
request and request.data and request.data.get('name', '') != 'tower':
return False
return super(InstanceGroupTowerPermission, self).has_object_permission(request, view, obj)
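`InstanceGroupTowerPermission` protects the reserved `tower` instance group from being deleted or renamed through the API. A minimal standalone sketch of the guard it adds; `FakeRequest` and `FakeGroup` are stand-ins, and the RBAC check delegated to `super()` is intentionally omitted.

```python
# Standalone sketch of the "tower" group guard; not AWX models.
class FakeRequest(object):
    def __init__(self, method, data=None):
        self.method = method
        self.data = data or {}

class FakeGroup(object):
    def __init__(self, name):
        self.name = name

def tower_group_guard(request, obj):
    # Mirrors the two special cases above; the super() RBAC check is omitted.
    if request.method == 'DELETE' and obj.name == 'tower':
        return False
    if request.method in ('PATCH', 'PUT') and obj.name == 'tower' \
            and request.data.get('name', '') != 'tower':
        return False
    return True

assert not tower_group_guard(FakeRequest('DELETE'), FakeGroup('tower'))
assert not tower_group_guard(FakeRequest('PATCH', {'name': 'renamed'}), FakeGroup('tower'))
assert tower_group_guard(FakeRequest('PATCH', {'name': 'tower'}), FakeGroup('tower'))
assert tower_group_guard(FakeRequest('DELETE'), FakeGroup('other'))
```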

View File

@@ -5,6 +5,8 @@
from rest_framework import renderers
from rest_framework.request import override_method
import six
class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
'''
@@ -69,8 +71,8 @@ class PlainTextRenderer(renderers.BaseRenderer):
format = 'txt'
def render(self, data, media_type=None, renderer_context=None):
if not isinstance(data, basestring):
data = unicode(data)
if not isinstance(data, six.string_types):
data = six.text_type(data)
return data.encode(self.charset)

File diff suppressed because it is too large

awx/api/swagger.py (new file, 103 lines)
View File

@@ -0,0 +1,103 @@
import json
import warnings
from coreapi.document import Object, Link
from rest_framework import exceptions
from rest_framework.permissions import AllowAny
from rest_framework.renderers import CoreJSONRenderer
from rest_framework.response import Response
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from rest_framework.views import APIView
from rest_framework_swagger import renderers
class AutoSchema(DRFAuthSchema):
def get_link(self, path, method, base_url):
link = super(AutoSchema, self).get_link(path, method, base_url)
try:
serializer = self.view.get_serializer()
except Exception:
serializer = None
warnings.warn('{}.get_serializer() raised an exception during '
'schema generation. Serializer fields will not be '
'generated for {} {}.'
.format(self.view.__class__.__name__, method, path))
link.__dict__['deprecated'] = getattr(self.view, 'deprecated', False)
# auto-generate a topic/tag for the serializer based on its model
if hasattr(self.view, 'swagger_topic'):
link.__dict__['topic'] = str(self.view.swagger_topic).title()
elif serializer and hasattr(serializer, 'Meta'):
link.__dict__['topic'] = str(
serializer.Meta.model._meta.verbose_name_plural
).title()
elif hasattr(self.view, 'model'):
link.__dict__['topic'] = str(self.view.model._meta.verbose_name_plural).title()
else:
warnings.warn('Could not determine a Swagger tag for path {}'.format(path))
return link
def get_description(self, path, method):
self.view._request = self.view.request
setattr(self.view.request, 'swagger_method', method)
description = super(AutoSchema, self).get_description(path, method)
return description
class SwaggerSchemaView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
renderer_classes = [
CoreJSONRenderer,
renderers.OpenAPIRenderer,
renderers.SwaggerUIRenderer
]
def get(self, request):
generator = SchemaGenerator(
title='Ansible Tower API',
patterns=None,
urlconf=None
)
schema = generator.get_schema(request=request)
# python core-api doesn't support the deprecation yet, so track it
# ourselves and return it in a response header
_deprecated = []
# By default, DRF OpenAPI serialization places all endpoints in
# a single node based on their root path (/api). Instead, we want to
# group them by topic/tag so that they're categorized in the rendered
# output
document = schema._data.pop('api')
for path, node in document.items():
if isinstance(node, Object):
for action in node.values():
topic = getattr(action, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if isinstance(action, Object):
for link in action.links.values():
if link.deprecated:
_deprecated.append(link.url)
elif isinstance(node, Link):
topic = getattr(node, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if not schema:
raise exceptions.ValidationError(
'The schema generator did not return a schema Document'
)
return Response(
schema,
headers={'X-Deprecated-Paths': json.dumps(_deprecated)}
)
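The schema generation above keys off a couple of optional view attributes: `swagger_topic` (or, failing that, the serializer's or view's model `verbose_name_plural`) supplies the tag used to group endpoints, and `deprecated = True` causes the path to be collected into the `X-Deprecated-Paths` response header. A minimal illustrative view declaring both; the view itself is hypothetical, not part of this changeset.

```python
# Illustrative only: the attributes consumed by AutoSchema / SwaggerSchemaView.
from rest_framework.views import APIView
from rest_framework.response import Response

class ExamplePingView(APIView):
    swagger_topic = 'system'   # rendered as the "System" tag in the schema
    deprecated = False         # True would list this path in X-Deprecated-Paths

    def get(self, request):
        return Response({'ping': 'pong'})
```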

View File

@@ -1,14 +0,0 @@
{% if not version_label_flag or version_label_flag == 'true' %}
{% if new_in_13 %}> _Added in AWX 1.3_{% endif %}
{% if new_in_14 %}> _Added in AWX 1.4_{% endif %}
{% if new_in_145 %}> _Added in Ansible Tower 1.4.5_{% endif %}
{% if new_in_148 %}> _Added in Ansible Tower 1.4.8_{% endif %}
{% if new_in_200 %}> _Added in Ansible Tower 2.0.0_{% endif %}
{% if new_in_220 %}> _Added in Ansible Tower 2.2.0_{% endif %}
{% if new_in_230 %}> _Added in Ansible Tower 2.3.0_{% endif %}
{% if new_in_240 %}> _Added in Ansible Tower 2.4.0_{% endif %}
{% if new_in_300 %}> _Added in Ansible Tower 3.0.0_{% endif %}
{% if new_in_310 %}> _New in Ansible Tower 3.1.0_{% endif %}
{% if new_in_320 %}> _New in Ansible Tower 3.2.0_{% endif %}
{% if new_in_330 %}> _New in Ansible Tower 3.3.0_{% endif %}
{% endif %}

View File

@@ -0,0 +1,3 @@
Relaunch an Ad Hoc Command:
Make a POST request to this resource to relaunch the ad hoc command. If any passwords or variables are required, they should be passed in via POST data. To determine what values are required in order to relaunch, you may make a GET request to this endpoint.

View File

@@ -0,0 +1,122 @@
# Token Handling using OAuth2
This page lists the OAuth 2 utility endpoints used for authorization, token refresh, and revocation.
Note that endpoints other than `/api/o/authorize/` are not meant to be used in browsers and do not
support HTTP GET. The endpoints here strictly follow the
[RFC specs for OAuth2](https://tools.ietf.org/html/rfc6749), so please refer to them for details.
Note that the AWX net location defaults to `http://localhost:8013` in the examples below:
## Create Token for an Application using Authorization code grant type
Given an application "AuthCodeApp" of grant type `authorization-code`,
from the client app, the user makes a GET to the Authorize endpoint with
* `response_type`
* `client_id`
* `redirect_uris`
* `scope`
AWX will respond with the authorization `code` and `state`
to the redirect_uri specified in the application. The client application will then make a POST to the
`api/o/token/` endpoint on AWX with
* `code`
* `client_id`
* `client_secret`
* `grant_type`
* `redirect_uri`
AWX will respond with the `access_token`, `token_type`, `refresh_token`, and `expires_in`. For more
information on testing this flow, refer to [django-oauth-toolkit](http://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_01.html#test-your-authorization-server).
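A minimal sketch of that second leg, exchanging the authorization `code` for a token at `/api/o/token/`; the code, client credentials, and redirect URI below are placeholders.

```python
# Placeholder values throughout; only the request shape follows the flow above.
import requests

resp = requests.post(
    'http://localhost:8013/api/o/token/',
    data={
        'grant_type': 'authorization_code',
        'code': '<code returned to the redirect_uri>',
        'client_id': '<client_id>',
        'client_secret': '<client_secret>',
        'redirect_uri': '<redirect_uri registered on the application>',
    },
)
print(resp.json())  # access_token, token_type, refresh_token, expires_in
```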
## Create Token for an Application using Implicit grant type
Suppose we have an application "admin's app" of grant type `implicit`.
In the API browser, first make sure the user is logged in via session auth, then visit the authorization
endpoint with the given parameters:
```text
http://localhost:8013/api/o/authorize/?response_type=token&client_id=L0uQQWW8pKX51hoqIRQGsuqmIdPi2AcXZ9EJRGmj&scope=read
```
Here the value of `client_id` should match the `client_id` field of the underlying application.
On success, an authorization page is displayed asking the logged-in user to grant or deny the access token.
Once the user clicks 'grant', the API browser POSTs to the same endpoint with the same parameters
in the POST body; on success, a 302 redirect is returned.
## Create Token for an Application using Password grant type
Logging in is not required for the `password` grant type, so a simple `curl` call can be used to acquire a personal access token
via `/api/o/token/` with
* `grant_type`: Required to be "password"
* `username`
* `password`
* `client_id`: Associated application must have grant_type "password"
* `client_secret`
For example:
```bash
curl -X POST \
-d "grant_type=password&username=<username>&password=<password>&scope=read" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569e
IaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
```
In the above POST request, the `username` and `password` parameters are the username and password of the related
AWX user of the underlying application, and the authentication information is of the format
`<client_id>:<client_secret>`, where `client_id` and `client_secret` are the corresponding fields of the
underlying application.
Upon success, the access token, refresh token and other information are returned in the response body in JSON
format:
```text
{
"access_token": "9epHOqHhnXUcgYK8QanOmUQPSgX92g",
"token_type": "Bearer",
"expires_in": 31536000000,
"refresh_token": "jMRX6QvzOTf046KHee3TU5mT3nyXsz",
"scope": "read"
}
```
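The same request expressed with Python `requests`, for reference; the client and user credentials remain placeholders.

```python
import requests

resp = requests.post(
    'http://localhost:8013/api/o/token/',
    data={'grant_type': 'password', 'username': '<username>',
          'password': '<password>', 'scope': 'read'},
    auth=('<client_id>', '<client_secret>'),  # HTTP basic auth, like curl -u
)
print(resp.json())  # access_token, refresh_token, token_type, expires_in, scope
```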
## Refresh an existing access token
The `/api/o/token/` endpoint is also used for refreshing an access token:
```bash
curl -X POST \
-d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
```
In the above POST request, `refresh_token` is taken from the `refresh_token` field of the access token
above. The authentication information is of the format `<client_id>:<client_secret>`, where `client_id`
and `client_secret` are the corresponding fields of the application underlying the access token.
Upon success, the new (refreshed) access token with the same scope information as the previous one is
given in the response body in JSON format:
```text
{
"access_token": "NDInWxGJI4iZgqpsreujjbvzCfJqgR",
"token_type": "Bearer",
"expires_in": 31536000000,
"refresh_token": "DqOrmz8bx3srlHkZNKmDpqA86bnQkT",
"scope": "read write"
}
```
Internally, the refresh operation deletes the existing token and immediately creates a new one, with
information such as scope and related application identical to the original. We can verify this by
checking that the new token is present at the `/api/v2/tokens/` endpoint, as sketched below.
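One way to perform that check, a sketch using the refreshed `access_token` from the response above:

```python
import requests

token = 'NDInWxGJI4iZgqpsreujjbvzCfJqgR'  # access_token from the refresh response
resp = requests.get('http://localhost:8013/api/v2/tokens/',
                    headers={'Authorization': 'Bearer ' + token})
print(resp.status_code)  # 200; the listing should include the new token
```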
## Revoke an access token
Revoking an access token is the same as deleting the token resource object.
Revoking is done by POSTing to `/api/o/revoke_token/` with the token to revoke as a parameter:
```bash
curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/revoke_token/ -i
```
`200 OK` means a successful delete.

View File

@@ -1,4 +1,5 @@
Site configuration settings and general information.
{% ifmeth GET %}
# Site configuration settings and general information
Make a GET request to this resource to retrieve the configuration containing
the following fields (some fields may not be visible to all users):
@@ -11,6 +12,10 @@ the following fields (some fields may not be visible to all users):
* `license_info`: Information about the current license.
* `version`: Version of Ansible Tower package installed.
* `eula`: The current End-User License Agreement
{% endifmeth %}
{% ifmeth POST %}
# Install or update an existing license
(_New in Ansible Tower 2.0.0_) Make a POST request to this resource as a super
user to install or update the existing license. The license data itself can
@@ -18,3 +23,11 @@ be POSTed as a normal json data structure.
(_New in Ansible Tower 2.1.1_) The POST must include a `eula_accepted` boolean
element indicating acceptance of the End-User License Agreement.
{% endifmeth %}
{% ifmeth DELETE %}
# Delete an existing license
(_New in Ansible Tower 2.0.0_) Make a DELETE request to this resource as a super
user to delete the existing license
{% endifmeth %}

View File

@@ -1,3 +1 @@
{{ docstring }}
{% include "api/_new_in_awx.md" %}

View File

@@ -1,37 +0,0 @@
Make a POST request to this resource with `username` and `password` fields to
obtain an authentication token to use for subsequent requests.
Example JSON to POST (content type is `application/json`):
{"username": "user", "password": "my pass"}
Example form data to post (content type is `application/x-www-form-urlencoded`):
username=user&password=my%20pass
If the username and password provided are valid, the response will contain a
`token` field with the authentication token to use and an `expires` field with
the timestamp when the token will expire:
{
"token": "8f17825cf08a7efea124f2638f3896f6637f8745",
"expires": "2013-09-05T21:46:35.729Z"
}
Otherwise, the response will indicate the error that occurred and return a 4xx
status code.
For subsequent requests, pass the token via the HTTP `Authorization` request
header:
Authorization: Token 8f17825cf08a7efea124f2638f3896f6637f8745
The auth token is only valid when used from the same remote address and user
agent that originally obtained it.
Each request that uses the token for authentication will refresh its expiration
timestamp and keep it from expiring. A token only expires when it is not used
for the configured timeout interval (default 1800 seconds).
A DELETE request with the token set will cause the token to be invalidated and
no further requests can be made with it.

View File

@@ -1,9 +1,13 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title }} Variable Data:
Make a GET request to this resource to retrieve all variables defined for this
Make a GET request to this resource to retrieve all variables defined for a
{{ model_verbose_name }}.
{% endifmeth %}
{% ifmeth PUT PATCH %}
# Update {{ model_verbose_name|title }} Variable Data:
Make a PUT request to this resource to update variables defined for this
Make a PUT or PATCH request to this resource to update variables defined for a
{{ model_verbose_name }}.
{% endifmeth %}

View File

@@ -38,5 +38,3 @@ Data about failed and successful hosts by inventory will be given as:
"id": 2,
"name": "Test Inventory"
},
{% include "api/_new_in_awx.md" %}

View File

@@ -1,3 +1,5 @@
# View Statistics for Job Runs
Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing.
## Parameters and Filtering
@@ -33,5 +35,3 @@ Data will be returned in the following format:
Each element contains an epoch timestamp represented in seconds and a numerical value indicating
the number of events during that time period
{% include "api/_new_in_awx.md" %}

View File

@@ -1,3 +1 @@
Make a GET request to this resource to retrieve aggregate statistics for Tower.
{% include "api/_new_in_awx.md" %}

View File

@@ -1,4 +1,4 @@
# List All {{ model_verbose_name_plural|title }} for this {{ parent_model_verbose_name|title }}:
# List All {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a list of all
{{ model_verbose_name_plural }} directly or indirectly belonging to this

View File

@@ -1,9 +1,7 @@
# List Potential Child Groups for this {{ parent_model_verbose_name|title }}:
# List Potential Child Groups for {{ parent_model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a list of
{{ model_verbose_name_plural }} available to be added as children of the
current {{ parent_model_verbose_name }}.
{% include "api/_list_common.md" %}
{% include "api/_new_in_awx.md" %}

View File

@@ -1,4 +1,4 @@
# List All {{ model_verbose_name_plural|title }} for this {{ parent_model_verbose_name|title }}:
# List All {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a list of all
{{ model_verbose_name_plural }} of which the selected

View File

@@ -1,3 +1,5 @@
# List Fact Scans for a Specific Host Scan
Make a GET request to this resource to retrieve system tracking data for a particular scan
You may filter by datetime:
@@ -7,5 +9,3 @@ You may filter by datetime:
and module
`?datetime=2015-06-01&module=ansible`
{% include "api/_new_in_awx.md" %}

View File

@@ -1,3 +1,5 @@
# List Fact Scans for a Host by Module and Date
Make a GET request to this resource to retrieve system tracking scans by module and date/time
You may filter scan runs using the `from` and `to` properties:
@@ -7,5 +9,3 @@ You may filter scan runs using the `from` and `to` properties:
You may also filter by module
`?module=packages`
{% include "api/_new_in_awx.md" %}

View File

@@ -0,0 +1 @@
# List Red Hat Insights for a Host

View File

@@ -29,5 +29,3 @@ Response code from this action will be:
- 202 if some inventory source updates were successful, but some failed
- 400 if all of the inventory source updates failed
- 400 if there are no inventory sources in the inventory
{% include "api/_new_in_awx.md" %}

View File

@@ -1,7 +1,9 @@
# List Root {{ model_verbose_name_plural|title }} for this {{ parent_model_verbose_name|title }}:
{% ifmeth GET %}
# List Root {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a list of root (top-level)
{{ model_verbose_name_plural }} associated with this
{{ parent_model_verbose_name }}.
{% include "api/_list_common.md" %}
{% endifmeth %}

View File

@@ -9,5 +9,3 @@ cancelled. The response will include the following field:
Make a POST request to this resource to cancel a pending or running inventory
update. The response status code will be 202 if successful, or 405 if the
update cannot be canceled.
{% include "api/_new_in_awx.md" %}

View File

@@ -9,5 +9,3 @@ from its inventory source. The response will include the following field:
Make a POST request to this resource to update the inventory source. If
successful, the response status code will be 202. If the inventory source is
not defined or cannot be updated, a 405 status code will be returned.
{% include "api/_new_in_awx.md" %}

View File

@@ -1,4 +1,4 @@
# Group Tree for this {{ model_verbose_name|title }}:
# Group Tree for {{ model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a hierarchical view of groups
associated with the selected {{ model_verbose_name }}.
@@ -11,5 +11,3 @@ also containing a list of its children.
Each group data structure includes the following fields:
{% include "api/_result_fields_common.md" %}
{% include "api/_new_in_awx.md" %}

View File

@@ -1,10 +1,15 @@
# Cancel Job
{% ifmeth GET %}
# Determine if a Job can be cancelled
Make a GET request to this resource to determine if the job can be cancelled.
The response will include the following field:
* `can_cancel`: Indicates whether this job can be canceled (boolean, read-only)
{% endifmeth %}
{% ifmeth POST %}
# Cancel a Job
Make a POST request to this resource to cancel a pending or running job. The
response status code will be 202 if successful, or 405 if the job cannot be
canceled.
{% endifmeth %}

View File

@@ -23,5 +23,3 @@ Will show only failed plays. Alternatively `false` may be used.
?play__icontains=test
Will filter plays matching the substring `test`
{% include "api/_new_in_awx.md" %}

View File

@@ -25,5 +25,3 @@ Will show only failed plays. Alternatively `false` may be used.
?task__icontains=test
Will filter tasks matching the substring `test`
{% include "api/_new_in_awx.md" %}

View File

@@ -1,3 +1,3 @@
Relaunch a job:
Relaunch a Job:
Make a POST request to this resource to launch a job. If any passwords or variables are required then they should be passed in via POST data. In order to determine what values are required in order to launch a job based on this job template you may make a GET request to this endpoint.
Make a POST request to this resource to launch a job. If any passwords or variables are required then they should be passed in via POST data. In order to determine what values are required in order to launch a job based on this job template you may make a GET request to this endpoint.

View File

@@ -1,4 +1,5 @@
# Start Job
{% ifmeth GET %}
# Determine if a Job can be started
Make a GET request to this resource to determine if the job can be started and
whether any passwords are required to start the job. The response will include
@@ -7,10 +8,14 @@ the following fields:
* `can_start`: Flag indicating if this job can be started (boolean, read-only)
* `passwords_needed_to_start`: Password names required to start the job (array,
read-only)
{% endifmeth %}
{% ifmeth POST %}
# Start a Job
Make a POST request to this resource to start the job. If any passwords are
required, they must be passed via POST data.
If successful, the response status code will be 202. If any required passwords
are not provided, a 400 status code will be returned. If the job cannot be
started, a 405 status code will be returned.
{% endifmeth %}

View File

@@ -1,13 +1,7 @@
{% with 'false' as version_label_flag %}
{% include "api/sub_list_create_api_view.md" %}
{% endwith %}
Labels not associated with any other resources are deleted. A label can become disassociated from a resource as a result of 3 events:
1. A label is explicitly disassociated from a related job template
2. A job with labels is deleted
3. A cleanup job deletes a job with labels
{% with 'true' as version_label_flag %}
{% include "api/_new_in_awx.md" %}
{% endwith %}

View File

@@ -1,8 +1,8 @@
{% ifmeth GET %}
# List {{ model_verbose_name_plural|title }}:
Make a GET request to this resource to retrieve the list of
{{ model_verbose_name_plural }}.
{% include "api/_list_common.md" %}
{% include "api/_new_in_awx.md" %}
{% endifmeth %}

View File

@@ -1,6 +1,6 @@
{% include "api/list_api_view.md" %}
# Create {{ model_verbose_name_plural|title }}:
# Create {{ model_verbose_name|title|anora }}:
Make a POST request to this resource with the following {{ model_verbose_name }}
fields to create a new {{ model_verbose_name }}:
@@ -8,5 +8,3 @@ fields to create a new {{ model_verbose_name }}:
{% with write_only=1 %}
{% include "api/_result_fields_common.md" with serializer_fields=serializer_create_fields %}
{% endwith %}
{% include "api/_new_in_awx.md" %}

View File

@@ -1,4 +1,4 @@
# Retrieve {{ model_verbose_name|title }} Playbooks:
Make GET request to this resource to retrieve a list of playbooks available
for this {{ model_verbose_name }}.
for {{ model_verbose_name|anora }}.

View File

@@ -9,5 +9,3 @@ cancelled. The response will include the following field:
Make a POST request to this resource to cancel a pending or running project
update. The response status code will be 202 if successful, or 405 if the
update cannot be canceled.
{% include "api/_new_in_awx.md" %}

View File

@@ -8,5 +8,3 @@ from its SCM source. The response will include the following field:
Make a POST request to this resource to update the project. If the project
cannot be updated, a 405 status code will be returned.
{% include "api/_new_in_awx.md" %}

View File

@@ -2,11 +2,9 @@
### Note: starting from api v2, this resource object can be accessed via its named URL.
{% endif %}
# Retrieve {{ model_verbose_name|title }}:
# Retrieve {{ model_verbose_name|title|anora }}:
Make GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% include "api/_new_in_awx.md" %}

View File

@@ -2,15 +2,17 @@
### Note: starting from api v2, this resource object can be accessed via its named URL.
{% endif %}
# Retrieve {{ model_verbose_name|title }}:
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:
Make GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% endifmeth %}
# Delete {{ model_verbose_name|title }}:
{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:
Make a DELETE request to this resource to delete this {{ model_verbose_name }}.
{% include "api/_new_in_awx.md" %}
{% endifmeth %}

View File

@@ -2,14 +2,17 @@
### Note: starting from api v2, this resource object can be accessed via its named URL.
{% endif %}
# Retrieve {{ model_verbose_name|title }}:
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:
Make GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% endifmeth %}
# Update {{ model_verbose_name|title }}:
{% ifmeth PUT PATCH %}
# Update {{ model_verbose_name|title|anora }}:
Make a PUT or PATCH request to this resource to update this
{{ model_verbose_name }}. The following fields may be modified:
@@ -17,9 +20,12 @@ Make a PUT or PATCH request to this resource to update this
{% with write_only=1 %}
{% include "api/_result_fields_common.md" with serializer_fields=serializer_update_fields %}
{% endwith %}
{% endifmeth %}
{% ifmeth PUT %}
For a PUT request, include **all** fields in the request.
{% endifmeth %}
{% ifmeth PATCH %}
For a PATCH request, include only the fields that are being modified.
{% include "api/_new_in_awx.md" %}
{% endifmeth %}

View File

@@ -2,14 +2,17 @@
### Note: starting from api v2, this resource object can be accessed via its named URL.
{% endif %}
# Retrieve {{ model_verbose_name|title }}:
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:
Make GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% endifmeth %}
# Update {{ model_verbose_name|title }}:
{% ifmeth PUT PATCH %}
# Update {{ model_verbose_name|title|anora }}:
Make a PUT or PATCH request to this resource to update this
{{ model_verbose_name }}. The following fields may be modified:
@@ -17,13 +20,18 @@ Make a PUT or PATCH request to this resource to update this
{% with write_only=1 %}
{% include "api/_result_fields_common.md" with serializer_fields=serializer_update_fields %}
{% endwith %}
{% endifmeth %}
{% ifmeth PUT %}
For a PUT request, include **all** fields in the request.
{% endifmeth %}
{% ifmeth PATCH %}
For a PATCH request, include only the fields that are being modified.
{% endifmeth %}
# Delete {{ model_verbose_name|title }}:
{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:
Make a DELETE request to this resource to delete this {{ model_verbose_name }}.
{% include "api/_new_in_awx.md" %}
{% endifmeth %}

View File

@@ -0,0 +1 @@
# Test Logging Configuration

View File

@@ -1,9 +1,9 @@
# List {{ model_verbose_name_plural|title }} for this {{ parent_model_verbose_name|title }}:
{% ifmeth GET %}
# List {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a list of
{{ model_verbose_name_plural }} associated with the selected
{{ parent_model_verbose_name }}.
{% include "api/_list_common.md" %}
{% include "api/_new_in_awx.md" %}
{% endifmeth %}

View File

@@ -1,6 +1,6 @@
{% include "api/sub_list_api_view.md" %}
# Create {{ model_verbose_name_plural|title }} for this {{ parent_model_verbose_name|title }}:
# Create {{ model_verbose_name|title|anora }} for {{ parent_model_verbose_name|title|anora }}:
Make a POST request to this resource with the following {{ model_verbose_name }}
fields to create a new {{ model_verbose_name }} associated with this
@@ -25,7 +25,7 @@ delete the associated {{ model_verbose_name }}.
}
{% else %}
# Add {{ model_verbose_name_plural|title }} for this {{ parent_model_verbose_name|title }}:
# Add {{ model_verbose_name_plural|title }} for {{ parent_model_verbose_name|title|anora }}:
Make a POST request to this resource with only an `id` field to associate an
existing {{ model_verbose_name }} with this {{ parent_model_verbose_name }}.
@@ -37,5 +37,3 @@ remove the {{ model_verbose_name }} from this {{ parent_model_verbose_name }}
{% if model_verbose_name != "label" %} without deleting the {{ model_verbose_name }}{% endif %}.
{% endif %}
{% endif %}
{% include "api/_new_in_awx.md" %}

View File

@@ -1,12 +1,16 @@
# List Roles for this Team:
# List Roles for a Team:
{% ifmeth GET %}
Make a GET request to this resource to retrieve a list of roles associated with the selected team.
{% include "api/_list_common.md" %}
{% endifmeth %}
{% ifmeth POST %}
# Associate Roles with this Team:
Make a POST request to this resource to add or remove a role from this team. The following fields may be modified:
* `id`: The Role ID to add to the team. (int, required)
* `disassociate`: Provide if you want to remove the role. (any value, optional)
{% endifmeth %}

View File

@@ -25,5 +25,3 @@ dark background.
Files over {{ settings.STDOUT_MAX_BYTES_DISPLAY|filesizeformat }} (configurable)
will not display in the browser. Use the `txt_download` or `ansi_download`
formats to download the file directly to view it.
{% include "api/_new_in_awx.md" %}

View File

@@ -1,3 +1,5 @@
# Retrieve Information about the current User
Make a GET request to retrieve user information about the current user.
One result should be returned containing the following fields:

View File

@@ -1,12 +1,16 @@
# List Roles for this User:
# List Roles for a User:
{% ifmeth GET %}
Make a GET request to this resource to retrieve a list of roles associated with the selected user.
{% include "api/_list_common.md" %}
{% endifmeth %}
{% ifmeth POST %}
# Associate Roles with this User:
Make a POST request to this resource to add or remove a role from this user. The following fields may be modified:
* `id`: The Role ID to add to the user. (int, required)
* `disassociate`: Provide if you want to remove the role. (any value, optional)
{% endifmeth %}

View File

@@ -11,6 +11,7 @@ from awx.api.views import (
CredentialObjectRolesList,
CredentialOwnerUsersList,
CredentialOwnerTeamsList,
CredentialCopy,
)
@@ -22,6 +23,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/object_roles/$', CredentialObjectRolesList.as_view(), name='credential_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/owner_users/$', CredentialOwnerUsersList.as_view(), name='credential_owner_users_list'),
url(r'^(?P<pk>[0-9]+)/owner_teams/$', CredentialOwnerTeamsList.as_view(), name='credential_owner_teams_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', CredentialCopy.as_view(), name='credential_copy'),
]
__all__ = ['urls']

View File

@@ -20,6 +20,7 @@ from awx.api.views import (
InventoryAccessList,
InventoryObjectRolesList,
InventoryInstanceGroupsList,
InventoryCopy,
)
@@ -40,6 +41,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/access_list/$', InventoryAccessList.as_view(), name='inventory_access_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', InventoryObjectRolesList.as_view(), name='inventory_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/instance_groups/$', InventoryInstanceGroupsList.as_view(), name='inventory_instance_groups_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', InventoryCopy.as_view(), name='inventory_copy'),
]
__all__ = ['urls']

View File

@@ -7,6 +7,7 @@ from awx.api.views import (
InventoryScriptList,
InventoryScriptDetail,
InventoryScriptObjectRolesList,
InventoryScriptCopy,
)
@@ -14,6 +15,7 @@ urls = [
url(r'^$', InventoryScriptList.as_view(), name='inventory_script_list'),
url(r'^(?P<pk>[0-9]+)/$', InventoryScriptDetail.as_view(), name='inventory_script_detail'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', InventoryScriptObjectRolesList.as_view(), name='inventory_script_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', InventoryScriptCopy.as_view(), name='inventory_script_copy'),
]
__all__ = ['urls']

View File

@@ -10,6 +10,7 @@ from awx.api.views import (
InventorySourceUpdatesList,
InventorySourceActivityStreamList,
InventorySourceSchedulesList,
InventorySourceCredentialsList,
InventorySourceGroupsList,
InventorySourceHostsList,
InventorySourceNotificationTemplatesAnyList,
@@ -25,6 +26,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/inventory_updates/$', InventorySourceUpdatesList.as_view(), name='inventory_source_updates_list'),
url(r'^(?P<pk>[0-9]+)/activity_stream/$', InventorySourceActivityStreamList.as_view(), name='inventory_source_activity_stream_list'),
url(r'^(?P<pk>[0-9]+)/schedules/$', InventorySourceSchedulesList.as_view(), name='inventory_source_schedules_list'),
url(r'^(?P<pk>[0-9]+)/credentials/$', InventorySourceCredentialsList.as_view(), name='inventory_source_credentials_list'),
url(r'^(?P<pk>[0-9]+)/groups/$', InventorySourceGroupsList.as_view(), name='inventory_source_groups_list'),
url(r'^(?P<pk>[0-9]+)/hosts/$', InventorySourceHostsList.as_view(), name='inventory_source_hosts_list'),
url(r'^(?P<pk>[0-9]+)/notification_templates_any/$', InventorySourceNotificationTemplatesAnyList.as_view(),

View File

@@ -9,6 +9,7 @@ from awx.api.views import (
InventoryUpdateCancel,
InventoryUpdateStdout,
InventoryUpdateNotificationsList,
InventoryUpdateCredentialsList,
InventoryUpdateEventsList,
)
@@ -19,6 +20,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/cancel/$', InventoryUpdateCancel.as_view(), name='inventory_update_cancel'),
url(r'^(?P<pk>[0-9]+)/stdout/$', InventoryUpdateStdout.as_view(), name='inventory_update_stdout'),
url(r'^(?P<pk>[0-9]+)/notifications/$', InventoryUpdateNotificationsList.as_view(), name='inventory_update_notifications_list'),
url(r'^(?P<pk>[0-9]+)/credentials/$', InventoryUpdateCredentialsList.as_view(), name='inventory_update_credentials_list'),
url(r'^(?P<pk>[0-9]+)/events/$', InventoryUpdateEventsList.as_view(), name='inventory_update_events_list'),
]

View File

@@ -19,6 +19,7 @@ from awx.api.views import (
JobTemplateAccessList,
JobTemplateObjectRolesList,
JobTemplateLabelList,
JobTemplateCopy,
)
@@ -41,6 +42,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/access_list/$', JobTemplateAccessList.as_view(), name='job_template_access_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', JobTemplateObjectRolesList.as_view(), name='job_template_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/labels/$', JobTemplateLabelList.as_view(), name='job_template_label_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', JobTemplateCopy.as_view(), name='job_template_copy'),
]
__all__ = ['urls']

View File

@@ -8,6 +8,7 @@ from awx.api.views import (
NotificationTemplateDetail,
NotificationTemplateTest,
NotificationTemplateNotificationList,
NotificationTemplateCopy,
)
@@ -16,6 +17,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/$', NotificationTemplateDetail.as_view(), name='notification_template_detail'),
url(r'^(?P<pk>[0-9]+)/test/$', NotificationTemplateTest.as_view(), name='notification_template_test'),
url(r'^(?P<pk>[0-9]+)/notifications/$', NotificationTemplateNotificationList.as_view(), name='notification_template_notification_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', NotificationTemplateCopy.as_view(), name='notification_template_copy'),
]
__all__ = ['urls']

awx/api/urls/oauth.py (new file, 18 lines)
View File

@@ -0,0 +1,18 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.conf.urls import url
from oauth2_provider.urls import base_urlpatterns
from awx.api.views import (
ApiOAuthAuthorizationRootView,
)
urls = [
url(r'^$', ApiOAuthAuthorizationRootView.as_view(), name='oauth_authorization_root_view'),
] + base_urlpatterns
__all__ = ['urls']

View File

@@ -19,10 +19,11 @@ from awx.api.views import (
ProjectNotificationTemplatesSuccessList,
ProjectObjectRolesList,
ProjectAccessList,
ProjectCopy,
)
urls = [
urls = [
url(r'^$', ProjectList.as_view(), name='project_list'),
url(r'^(?P<pk>[0-9]+)/$', ProjectDetail.as_view(), name='project_detail'),
url(r'^(?P<pk>[0-9]+)/playbooks/$', ProjectPlaybooks.as_view(), name='project_playbooks'),
@@ -39,6 +40,7 @@ urls = [
name='project_notification_templates_success_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', ProjectObjectRolesList.as_view(), name='project_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/access_list/$', ProjectAccessList.as_view(), name='project_access_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', ProjectCopy.as_view(), name='project_copy'),
]
__all__ = ['urls']

View File

@@ -2,8 +2,13 @@
# All Rights Reserved.
from __future__ import absolute_import, unicode_literals
from django.conf import settings
from django.conf.urls import include, url
from awx.api.generics import (
LoggedLoginView,
LoggedLogoutView,
)
from awx.api.views import (
ApiRootView,
ApiV1RootView,
@@ -11,7 +16,6 @@ from awx.api.views import (
ApiV1PingView,
ApiV1ConfigView,
AuthView,
AuthTokenView,
UserMeList,
DashboardView,
DashboardJobsGraphView,
@@ -24,6 +28,10 @@ from awx.api.views import (
JobTemplateExtraCredentialsList,
SchedulePreview,
ScheduleZoneInfo,
OAuth2ApplicationList,
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
)
from .organization import urls as organization_urls
@@ -59,6 +67,8 @@ from .schedule import urls as schedule_urls
from .activity_stream import urls as activity_stream_urls
from .instance import urls as instance_urls
from .instance_group import urls as instance_group_urls
from .user_oauth import urls as user_oauth_urls
from .oauth import urls as oauth_urls
v1_urls = [
@@ -66,7 +76,6 @@ v1_urls = [
url(r'^ping/$', ApiV1PingView.as_view(), name='api_v1_ping_view'),
url(r'^config/$', ApiV1ConfigView.as_view(), name='api_v1_config_view'),
url(r'^auth/$', AuthView.as_view()),
url(r'^authtoken/$', AuthTokenView.as_view(), name='auth_token_view'),
url(r'^me/$', UserMeList.as_view(), name='user_me_list'),
url(r'^dashboard/$', DashboardView.as_view(), name='dashboard_view'),
url(r'^dashboard/graphs/jobs/$', DashboardJobsGraphView.as_view(), name='dashboard_jobs_graph_view'),
@@ -117,11 +126,29 @@ v2_urls = [
url(r'^job_templates/(?P<pk>[0-9]+)/credentials/$', JobTemplateCredentialsList.as_view(), name='job_template_credentials_list'),
url(r'^schedules/preview/$', SchedulePreview.as_view(), name='schedule_rrule'),
url(r'^schedules/zoneinfo/$', ScheduleZoneInfo.as_view(), name='schedule_zoneinfo'),
url(r'^applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
url(r'^applications/(?P<pk>[0-9]+)/$', OAuth2ApplicationDetail.as_view(), name='o_auth2_application_detail'),
url(r'^applications/(?P<pk>[0-9]+)/tokens/$', ApplicationOAuth2TokenList.as_view(), name='application_o_auth2_token_list'),
url(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
url(r'^', include(user_oauth_urls)),
]
app_name = 'api'
urlpatterns = [
url(r'^$', ApiRootView.as_view(), name='api_root_view'),
url(r'^(?P<version>(v2))/', include(v2_urls)),
url(r'^(?P<version>(v1|v2))/', include(v1_urls))
url(r'^(?P<version>(v1|v2))/', include(v1_urls)),
url(r'^login/$', LoggedLoginView.as_view(
template_name='rest_framework/login.html',
extra_context={'inside_login_context': True}
), name='login'),
url(r'^logout/$', LoggedLogoutView.as_view(
next_page='/api/', redirect_field_name='next'
), name='logout'),
url(r'^o/', include(oauth_urls)),
]
if settings.SETTINGS_MODULE == 'awx.settings.development':
from awx.api.swagger import SwaggerSchemaView
urlpatterns += [
url(r'^swagger/$', SwaggerSchemaView.as_view(), name='swagger_view'),
]

View File

@@ -14,9 +14,12 @@ from awx.api.views import (
UserRolesList,
UserActivityStreamList,
UserAccessList,
OAuth2ApplicationList,
OAuth2TokenList,
OAuth2PersonalTokenList,
UserAuthorizedTokenList,
)
urls = [
url(r'^$', UserList.as_view(), name='user_list'),
url(r'^(?P<pk>[0-9]+)/$', UserDetail.as_view(), name='user_detail'),
@@ -28,6 +31,11 @@ urls = [
url(r'^(?P<pk>[0-9]+)/roles/$', UserRolesList.as_view(), name='user_roles_list'),
url(r'^(?P<pk>[0-9]+)/activity_stream/$', UserActivityStreamList.as_view(), name='user_activity_stream_list'),
url(r'^(?P<pk>[0-9]+)/access_list/$', UserAccessList.as_view(), name='user_access_list'),
url(r'^(?P<pk>[0-9]+)/applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
url(r'^(?P<pk>[0-9]+)/tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
url(r'^(?P<pk>[0-9]+)/authorized_tokens/$', UserAuthorizedTokenList.as_view(), name='user_authorized_token_list'),
url(r'^(?P<pk>[0-9]+)/personal_tokens/$', OAuth2PersonalTokenList.as_view(), name='o_auth2_personal_token_list'),
]
__all__ = ['urls']

View File

@@ -0,0 +1,49 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.conf.urls import url
from awx.api.views import (
OAuth2ApplicationList,
OAuth2ApplicationDetail,
ApplicationOAuth2TokenList,
OAuth2ApplicationActivityStreamList,
OAuth2TokenList,
OAuth2TokenDetail,
OAuth2TokenActivityStreamList,
OAuth2PersonalTokenList
)
urls = [
url(r'^applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
url(
r'^applications/(?P<pk>[0-9]+)/$',
OAuth2ApplicationDetail.as_view(),
name='o_auth2_application_detail'
),
url(
r'^applications/(?P<pk>[0-9]+)/tokens/$',
ApplicationOAuth2TokenList.as_view(),
name='o_auth2_application_token_list'
),
url(
r'^applications/(?P<pk>[0-9]+)/activity_stream/$',
OAuth2ApplicationActivityStreamList.as_view(),
name='o_auth2_application_activity_stream_list'
),
url(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
url(
r'^tokens/(?P<pk>[0-9]+)/$',
OAuth2TokenDetail.as_view(),
name='o_auth2_token_detail'
),
url(
r'^tokens/(?P<pk>[0-9]+)/activity_stream/$',
OAuth2TokenActivityStreamList.as_view(),
name='o_auth2_token_activity_stream_list'
),
url(r'^personal_tokens/$', OAuth2PersonalTokenList.as_view(), name='o_auth2_personal_token_list'),
]
__all__ = ['urls']

File diff suppressed because it is too large

View File

@@ -1,3 +1,4 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
@@ -5,6 +6,7 @@ from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings # noqa
try:
@@ -16,8 +18,8 @@ except ImportError: # pragma: no cover
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'awx.settings.%s' % MODE)
app = Celery('awx')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
if __name__ == '__main__':
app.start()

View File

@@ -11,8 +11,16 @@ class ConfConfig(AppConfig):
name = 'awx.conf'
verbose_name = _('Configuration')
def configure_oauth2_provider(self, settings):
from oauth2_provider import settings as o_settings
o_settings.oauth2_settings = o_settings.OAuth2ProviderSettings(
settings.OAUTH2_PROVIDER, o_settings.DEFAULTS,
o_settings.IMPORT_STRINGS, o_settings.MANDATORY
)
def ready(self):
self.module.autodiscover()
from .settings import SettingsWrapper
SettingsWrapper.initialize()
configure_external_logger(settings)
self.configure_oauth2_provider(settings)

View File

@@ -1,6 +1,8 @@
# Django REST Framework
from rest_framework import serializers
import six
# Tower
from awx.api.fields import VerbatimField
from awx.api.serializers import BaseSerializer
@@ -45,12 +47,12 @@ class SettingFieldMixin(object):
"""Mixin to use a registered setting field class for API display/validation."""
def to_representation(self, obj):
if getattr(self, 'encrypted', False) and isinstance(obj, basestring) and obj:
if getattr(self, 'encrypted', False) and isinstance(obj, six.string_types) and obj:
return '$encrypted$'
return obj
def to_internal_value(self, value):
if getattr(self, 'encrypted', False) and isinstance(value, basestring) and value.startswith('$encrypted$'):
if getattr(self, 'encrypted', False) and isinstance(value, six.string_types) and value.startswith('$encrypted$'):
raise serializers.SkipField()
obj = super(SettingFieldMixin, self).to_internal_value(value)
return super(SettingFieldMixin, self).to_representation(obj)

View File

@@ -275,7 +275,7 @@ class SettingsWrapper(UserSettingsHolder):
setting_ids[setting.key] = setting.id
try:
value = decrypt_field(setting, 'value')
except ValueError, e:
except ValueError as e:
#TODO: Remove in Tower 3.3
logger.debug('encountered error decrypting field: %s - attempting fallback to old', e)
value = old_decrypt_field(setting, 'value')

View File

@@ -6,6 +6,8 @@ import glob
import os
import shutil
import six
# AWX
from awx.conf.registry import settings_registry
@@ -13,7 +15,7 @@ __all__ = ['comment_assignments', 'conf_to_dict']
def comment_assignments(patterns, assignment_names, dry_run=True, backup_suffix='.old'):
if isinstance(patterns, basestring):
if isinstance(patterns, six.string_types):
patterns = [patterns]
diffs = []
for pattern in patterns:
@@ -32,7 +34,7 @@ def comment_assignments(patterns, assignment_names, dry_run=True, backup_suffix=
def comment_assignments_in_file(filename, assignment_names, dry_run=True, backup_filename=None):
from redbaron import RedBaron, indent
if isinstance(assignment_names, basestring):
if isinstance(assignment_names, six.string_types):
assignment_names = [assignment_names]
else:
assignment_names = assignment_names[:]

View File

@@ -21,7 +21,7 @@ from awx.api.generics import * # noqa
from awx.api.permissions import IsSuperUser
from awx.api.versioning import reverse, get_request_version
from awx.main.utils import * # noqa
from awx.main.utils.handlers import BaseHTTPSHandler, LoggingConnectivityException
from awx.main.utils.handlers import BaseHTTPSHandler, UDPHandler, LoggingConnectivityException
from awx.main.tasks import handle_setting_changes
from awx.conf.license import get_licensed_features
from awx.conf.models import Setting
@@ -44,7 +44,6 @@ class SettingCategoryList(ListAPIView):
model = Setting # Not exactly, but needed for the view.
serializer_class = SettingCategorySerializer
filter_backends = []
new_in_310 = True
view_name = _('Setting Categories')
def get_queryset(self):
@@ -69,7 +68,6 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
model = Setting # Not exactly, but needed for the view.
serializer_class = SettingSingletonSerializer
filter_backends = []
new_in_310 = True
view_name = _('Setting Detail')
def get_queryset(self):
@@ -170,7 +168,6 @@ class SettingLoggingTest(GenericAPIView):
serializer_class = SettingSingletonSerializer
permission_classes = (IsSuperUser,)
filter_backends = []
new_in_320 = True
def post(self, request, *args, **kwargs):
defaults = dict()
@@ -202,7 +199,11 @@ class SettingLoggingTest(GenericAPIView):
for k, v in serializer.validated_data.items():
setattr(mock_settings, k, v)
mock_settings.LOG_AGGREGATOR_LEVEL = 'DEBUG'
BaseHTTPSHandler.perform_test(mock_settings)
if mock_settings.LOG_AGGREGATOR_PROTOCOL.upper() == 'UDP':
UDPHandler.perform_test(mock_settings)
return Response(status=status.HTTP_201_CREATED)
else:
BaseHTTPSHandler.perform_test(mock_settings)
except LoggingConnectivityException as e:
return Response({'error': str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
return Response(status=status.HTTP_200_OK)

View File

@@ -29,6 +29,8 @@ import threading
import uuid
import memcache
from six.moves import xrange
__all__ = ['event_context']

View File

@@ -25,4 +25,5 @@ import ansible
# Because of the way Ansible loads plugins, it's not possible to import
# ansible.plugins.callback.minimal when being loaded as the minimal plugin. Ugh.
execfile(os.path.join(os.path.dirname(ansible.__file__), 'plugins', 'callback', 'minimal.py'))
with open(os.path.join(os.path.dirname(ansible.__file__), 'plugins', 'callback', 'minimal.py')) as in_file:
exec(in_file.read())

View File

@@ -18,7 +18,11 @@
from __future__ import (absolute_import, division, print_function)
# Python
import codecs
import contextlib
import json
import os
import stat
import sys
import uuid
from copy import copy
@@ -292,10 +296,22 @@ class BaseCallbackModule(CallbackBase):
failures=stats.failures,
ok=stats.ok,
processed=stats.processed,
skipped=stats.skipped,
artifact_data=stats.custom.get('_run', {}) if hasattr(stats, 'custom') else {}
skipped=stats.skipped
)
# write custom set_stat artifact data to the local disk so that it can
# be persisted by awx after the process exits
custom_artifact_data = stats.custom.get('_run', {}) if hasattr(stats, 'custom') else {}
if custom_artifact_data:
# create the directory for custom stats artifacts to live in (if it doesn't exist)
custom_artifacts_dir = os.path.join(os.getenv('AWX_PRIVATE_DATA_DIR'), 'artifacts')
os.makedirs(custom_artifacts_dir, mode=stat.S_IXUSR + stat.S_IWUSR + stat.S_IRUSR)
custom_artifacts_path = os.path.join(custom_artifacts_dir, 'custom')
with codecs.open(custom_artifacts_path, 'w', encoding='utf-8') as f:
os.chmod(custom_artifacts_path, stat.S_IRUSR | stat.S_IWUSR)
json.dump(custom_artifact_data, f)
with self.capture_event_data('playbook_on_stats', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_stats(stats)

View File

@@ -7,7 +7,9 @@ from collections import OrderedDict
import json
import mock
import os
import shutil
import sys
import tempfile
import pytest
@@ -259,3 +261,26 @@ def test_callback_plugin_strips_task_environ_variables(executor, cache, playbook
assert len(cache)
for event in cache.values():
assert os.environ['PATH'] not in json.dumps(event)
@pytest.mark.parametrize('playbook', [
{'custom_set_stat.yml': '''
- name: custom set_stat calls should persist to the local disk so awx can save them
connection: local
hosts: all
tasks:
- set_stats:
data:
foo: "bar"
'''}, # noqa
])
def test_callback_plugin_saves_custom_stats(executor, cache, playbook):
try:
private_data_dir = tempfile.mkdtemp()
with mock.patch.dict(os.environ, {'AWX_PRIVATE_DATA_DIR': private_data_dir}):
executor.run()
artifacts_path = os.path.join(private_data_dir, 'artifacts', 'custom')
with open(artifacts_path, 'r') as f:
assert json.load(f) == {'foo': 'bar'}
finally:
shutil.rmtree(os.path.join(private_data_dir))

View File

@@ -15,7 +15,10 @@ from django.utils.translation import ugettext_lazy as _
from django.core.exceptions import ObjectDoesNotExist
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied, ValidationError
from rest_framework.exceptions import ParseError, PermissionDenied
# Django OAuth Toolkit
from awx.main.models.oauth import OAuth2Application, OAuth2AccessToken
# AWX
from awx.main.utils import (
@@ -25,14 +28,13 @@ from awx.main.utils import (
get_licenser,
)
from awx.main.models import * # noqa
from awx.main.models.unified_jobs import ACTIVE_STATES
from awx.main.models.mixins import ResourceMixin
from awx.conf.license import LicenseForbids, feature_enabled
__all__ = ['get_user_queryset', 'check_user_access', 'check_user_access_with_errors',
'user_accessible_objects', 'consumer_access',
'user_admin_role', 'ActiveJobConflict',]
'user_admin_role',]
logger = logging.getLogger('awx.main.access')
@@ -72,16 +74,6 @@ def get_object_from_data(field, Model, data, obj=None):
raise ParseError(_("Bad data found in related field %s." % field))
class ActiveJobConflict(ValidationError):
status_code = 409
def __init__(self, active_jobs):
super(ActiveJobConflict, self).__init__({
"conflict": _("Resource is being used by running jobs."),
"active_jobs": active_jobs
})
def register_access(model_class, access_class):
access_registry[model_class] = access_class
@@ -117,6 +109,8 @@ def check_user_access(user, model_class, action, *args, **kwargs):
Return True if user can perform action against model_class with the
provided parameters.
'''
if 'write' not in getattr(user, 'oauth_scopes', ['write']) and action != 'read':
return False
access_class = access_registry[model_class]
access_instance = access_class(user)
access_method = getattr(access_instance, 'can_%s' % action)
@@ -233,6 +227,9 @@ class BaseAccess(object):
def can_delete(self, obj):
return self.user.is_superuser
def can_copy(self, obj):
return self.can_add({'reference_obj': obj})
def can_attach(self, obj, sub_obj, relationship, data,
skip_sub_obj_read_check=False):
if skip_sub_obj_read_check:
@@ -328,7 +325,7 @@ class BaseAccess(object):
elif "features" not in validation_info:
raise LicenseForbids(_("Features not found in active license."))
def get_user_capabilities(self, obj, method_list=[], parent_obj=None):
def get_user_capabilities(self, obj, method_list=[], parent_obj=None, capabilities_cache={}):
if obj is None:
return {}
user_capabilities = {}
@@ -338,9 +335,16 @@ class BaseAccess(object):
if display_method not in method_list:
continue
if not settings.MANAGE_ORGANIZATION_AUTH and isinstance(obj, (Team, User)):
user_capabilities[display_method] = self.user.is_superuser
continue
# Actions not possible for reason unrelated to RBAC
# Cannot copy with validation errors, or update a manual group/project
if display_method == 'copy' and isinstance(obj, JobTemplate):
if 'write' not in getattr(self.user, 'oauth_scopes', ['write']):
user_capabilities[display_method] = False # Read tokens cannot take any actions
continue
elif display_method == 'copy' and isinstance(obj, JobTemplate):
if obj.validation_errors:
user_capabilities[display_method] = False
continue
@@ -351,6 +355,10 @@ class BaseAccess(object):
elif display_method == 'copy' and isinstance(obj, WorkflowJobTemplate) and obj.organization_id is None:
user_capabilities[display_method] = self.user.is_superuser
continue
elif display_method == 'copy' and isinstance(obj, Project) and obj.scm_type == '':
# Cannot copy manual project without errors
user_capabilities[display_method] = False
continue
elif display_method in ['start', 'schedule'] and isinstance(obj, Group): # TODO: remove in 3.3
try:
if obj.deprecated_inventory_source and not obj.deprecated_inventory_source._can_update():
@@ -365,8 +373,8 @@ class BaseAccess(object):
continue
# Grab the answer from the cache, if available
if hasattr(obj, 'capabilities_cache') and display_method in obj.capabilities_cache:
user_capabilities[display_method] = obj.capabilities_cache[display_method]
if display_method in capabilities_cache:
user_capabilities[display_method] = capabilities_cache[display_method]
if self.user.is_superuser and not user_capabilities[display_method]:
# Cache override for models with bad orphaned state
user_capabilities[display_method] = True
@@ -384,10 +392,10 @@ class BaseAccess(object):
if display_method == 'schedule':
user_capabilities['schedule'] = user_capabilities['start']
continue
elif display_method == 'delete' and not isinstance(obj, (User, UnifiedJob)):
elif display_method == 'delete' and not isinstance(obj, (User, UnifiedJob, CustomInventoryScript)):
user_capabilities['delete'] = user_capabilities['edit']
continue
elif display_method == 'copy' and isinstance(obj, (Group, Host)):
elif display_method == 'copy' and isinstance(obj, (Group, Host, CustomInventoryScript)):
user_capabilities['copy'] = user_capabilities['edit']
continue
@@ -424,6 +432,18 @@ class InstanceAccess(BaseAccess):
return Instance.objects.filter(
rampart_groups__in=self.user.get_queryset(InstanceGroup)).distinct()
def can_attach(self, obj, sub_obj, relationship, data,
skip_sub_obj_read_check=False):
if relationship == 'rampart_groups' and isinstance(sub_obj, InstanceGroup):
return self.user.is_superuser
return super(InstanceAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
def can_unattach(self, obj, sub_obj, relationship, data=None):
if relationship == 'rampart_groups' and isinstance(sub_obj, InstanceGroup):
return self.user.is_superuser
return super(InstanceAccess, self).can_unattach(obj, sub_obj, relationship, data=data)
def can_add(self, data):
return False
@@ -444,19 +464,25 @@ class InstanceGroupAccess(BaseAccess):
organization__in=Organization.accessible_pk_qs(self.user, 'admin_role'))
def can_add(self, data):
return False
return self.user.is_superuser
def can_change(self, obj, data):
return False
return self.user.is_superuser
def can_delete(self, obj):
return False
return self.user.is_superuser
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
return self.user.is_superuser
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
return self.user.is_superuser
class UserAccess(BaseAccess):
'''
I can see user records when:
- I'm a useruser
- I'm a superuser
- I'm in a role with them (such as in an organization or team)
- They are in a role which includes a role of mine
- I am in a role that includes a role of theirs
@@ -495,6 +521,8 @@ class UserAccess(BaseAccess):
return False
if self.user.is_superuser:
return True
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return Organization.accessible_objects(self.user, 'admin_role').exists()
def can_change(self, obj, data):
@@ -507,10 +535,14 @@ class UserAccess(BaseAccess):
# A user can be changed if they are themselves, or by org admins or
# superusers. Change permission implies changing only certain fields
# that a user should be able to edit for themselves.
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return bool(self.user == obj or self.can_admin(obj, data))
@check_superuser
def can_admin(self, obj, data):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return Organization.objects.filter(Q(member_role__members=obj) | Q(admin_role__members=obj),
Q(admin_role__members=self.user)).exists()
@@ -527,19 +559,92 @@ class UserAccess(BaseAccess):
return False
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
"Reverse obj and sub_obj, defer to RoleAccess if this is a role assignment."
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# Reverse obj and sub_obj, defer to RoleAccess if this is a role assignment.
if relationship == 'roles':
role_access = RoleAccess(self.user)
return role_access.can_attach(sub_obj, obj, 'members', *args, **kwargs)
return super(UserAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
if relationship == 'roles':
role_access = RoleAccess(self.user)
return role_access.can_unattach(sub_obj, obj, 'members', *args, **kwargs)
return super(UserAccess, self).can_unattach(obj, sub_obj, relationship, *args, **kwargs)
class OAuth2ApplicationAccess(BaseAccess):
'''
I can read, change or delete OAuth applications when:
- I am a superuser.
- I am the admin of the organization of the user of the application.
- I am the user of the application.
I can create OAuth applications when:
- I am a superuser.
- I am the admin of the organization of the user of the application.
'''
model = OAuth2Application
select_related = ('user',)
def filtered_queryset(self):
accessible_users = User.objects.filter(
pk__in=self.user.admin_of_organizations.values('member_role__members')
) | User.objects.filter(pk=self.user.pk)
return self.model.objects.filter(user__in=accessible_users)
def can_change(self, obj, data):
return self.can_read(obj)
def can_delete(self, obj):
return self.can_read(obj)
def can_add(self, data):
if self.user.is_superuser:
return True
user = get_object_from_data('user', User, data)
if not user:
return False
return set(self.user.admin_of_organizations.all()) & set(user.organizations.all())
class OAuth2TokenAccess(BaseAccess):
'''
I can read, change or delete an OAuth2 token when:
- I am a superuser.
- I am the admin of the organization of the user of the token.
- I am the user of the token.
I can create an OAuth token when:
- I have the read permission of the related application.
'''
model = OAuth2AccessToken
select_related = ('user', 'application')
def filtered_queryset(self):
accessible_users = User.objects.filter(
pk__in=self.user.admin_of_organizations.values('member_role__members')
) | User.objects.filter(pk=self.user.pk)
return self.model.objects.filter(user__in=accessible_users)
def can_change(self, obj, data):
return self.can_read(obj)
def can_delete(self, obj):
return self.can_read(obj)
def can_add(self, data):
app = get_object_from_data('application', OAuth2Application, data)
if not app:
return True
return OAuth2ApplicationAccess(self.user).can_read(app)
class OrganizationAccess(BaseAccess):
'''
I can see organizations when:
@@ -567,15 +672,6 @@ class OrganizationAccess(BaseAccess):
is_change_possible = self.can_change(obj, None)
if not is_change_possible:
return False
active_jobs = []
active_jobs.extend([dict(type="job", id=o.id)
for o in Job.objects.filter(project__in=obj.projects.all(), status__in=ACTIVE_STATES)])
active_jobs.extend([dict(type="project_update", id=o.id)
for o in ProjectUpdate.objects.filter(project__in=obj.projects.all(), status__in=ACTIVE_STATES)])
active_jobs.extend([dict(type="inventory_update", id=o.id)
for o in InventoryUpdate.objects.filter(inventory_source__inventory__organization=obj, status__in=ACTIVE_STATES)])
if len(active_jobs) > 0:
raise ActiveJobConflict(active_jobs)
return True
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
@@ -596,6 +692,7 @@ class InventoryAccess(BaseAccess):
I can see inventory when:
- I'm a superuser.
- I'm an org admin of the inventory's org.
- I'm an inventory admin of the inventory's org.
- I have read, write or admin permissions on it.
I can change inventory when:
- I'm a superuser.
@@ -629,9 +726,9 @@ class InventoryAccess(BaseAccess):
def can_add(self, data):
# If no data is specified, just checking for generic add permission?
if not data:
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.accessible_objects(self.user, 'inventory_admin_role').exists()
return self.check_related('organization', Organization, data)
return self.check_related('organization', Organization, data, role_field='inventory_admin_role')
@check_superuser
def can_change(self, obj, data):
@@ -647,7 +744,7 @@ class InventoryAccess(BaseAccess):
# Verify that the user has access to the new organization if moving an
# inventory to a new organization. Otherwise, just check for admin permission.
return (
self.check_related('organization', Organization, data, obj=obj,
self.check_related('organization', Organization, data, obj=obj, role_field='inventory_admin_role',
mandatory=org_admin_mandatory) and
self.user in obj.admin_role
)
@@ -657,19 +754,7 @@ class InventoryAccess(BaseAccess):
return self.user in obj.update_role
def can_delete(self, obj):
is_can_admin = self.can_admin(obj, None)
if not is_can_admin:
return False
active_jobs = []
active_jobs.extend([dict(type="job", id=o.id)
for o in Job.objects.filter(inventory=obj, status__in=ACTIVE_STATES)])
active_jobs.extend([dict(type="inventory_update", id=o.id)
for o in InventoryUpdate.objects.filter(inventory_source__inventory=obj, status__in=ACTIVE_STATES)])
active_jobs.extend([dict(type="ad_hoc_command", id=o.id)
for o in AdHocCommand.objects.filter(inventory=obj, status__in=ACTIVE_STATES)])
if len(active_jobs) > 0:
raise ActiveJobConflict(active_jobs)
return True
return self.can_admin(obj, None)
def can_run_ad_hoc_commands(self, obj):
return self.user in obj.adhoc_role
@@ -786,15 +871,7 @@ class GroupAccess(BaseAccess):
return True
def can_delete(self, obj):
is_delete_allowed = bool(obj and self.user in obj.inventory.admin_role)
if not is_delete_allowed:
return False
active_jobs = []
active_jobs.extend([dict(type="inventory_update", id=o.id)
for o in InventoryUpdate.objects.filter(inventory_source__in=obj.inventory_sources.all(), status__in=ACTIVE_STATES)])
if len(active_jobs) > 0:
raise ActiveJobConflict(active_jobs)
return True
return bool(obj and self.user in obj.inventory.admin_role)
def can_start(self, obj, validate_license=True):
# TODO: Delete for 3.3, only used by v1 serializer
@@ -817,7 +894,9 @@ class InventorySourceAccess(BaseAccess):
'''
model = InventorySource
select_related = ('created_by', 'modified_by', 'inventory',)
select_related = ('created_by', 'modified_by', 'inventory')
prefetch_related = ('credentials__credential_type', 'last_job',
'source_script', 'source_project')
def filtered_queryset(self):
return self.model.objects.filter(inventory__in=Inventory.accessible_pk_qs(self.user, 'read_role'))
@@ -841,9 +920,6 @@ class InventorySourceAccess(BaseAccess):
if not self.user.is_superuser and \
not (obj and obj.inventory and self.user.can_access(Inventory, 'admin', obj.inventory, None)):
return False
active_jobs_qs = InventoryUpdate.objects.filter(inventory_source=obj, status__in=ACTIVE_STATES)
if active_jobs_qs.exists():
raise ActiveJobConflict([dict(type="inventory_update", id=o.id) for o in active_jobs_qs.all()])
return True
@check_superuser
@@ -865,6 +941,21 @@ class InventorySourceAccess(BaseAccess):
return self.user in obj.inventory.update_role
return False
@check_superuser
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return (
obj and obj.inventory and self.user in obj.inventory.admin_role and
self.user in sub_obj.use_role)
return super(InventorySourceAccess, self).can_attach(
obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return obj and obj.inventory and self.user in obj.inventory.admin_role
return super(InventorySourceAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
class InventoryUpdateAccess(BaseAccess):
'''
@@ -875,7 +966,7 @@ class InventoryUpdateAccess(BaseAccess):
model = InventoryUpdate
select_related = ('created_by', 'modified_by', 'inventory_source__inventory',)
prefetch_related = ('unified_job_template', 'instance_group',)
prefetch_related = ('unified_job_template', 'instance_group', 'credentials',)
def filtered_queryset(self):
return self.model.objects.filter(inventory_source__inventory__in=Inventory.accessible_pk_qs(self.user, 'read_role'))
@@ -933,8 +1024,12 @@ class CredentialAccess(BaseAccess):
- I'm a superuser.
- It's a user credential and it's my credential.
- It's a user credential and I'm an admin of an organization where that
user is a member of admin of the organization.
user is a member.
- It's a user credential and I'm a credential_admin of an organization
where that user is a member.
- It's a team credential and I'm an admin of the team's organization.
- It's a team credential and I'm a credential admin of the team's
organization.
- It's a team credential and I'm a member of the team.
I can change/delete when:
- I'm a superuser.
@@ -968,7 +1063,8 @@ class CredentialAccess(BaseAccess):
return check_user_access(self.user, Team, 'change', team_obj, None)
if data and data.get('organization', None):
organization_obj = get_object_from_data('organization', Organization, data)
return check_user_access(self.user, Organization, 'change', organization_obj, None)
return any([check_user_access(self.user, Organization, 'change', organization_obj, None),
self.user in organization_obj.credential_admin_role])
return False
@check_superuser
@@ -979,7 +1075,7 @@ class CredentialAccess(BaseAccess):
def can_change(self, obj, data):
if not obj:
return False
return self.user in obj.admin_role and self.check_related('organization', Organization, data, obj=obj)
return self.user in obj.admin_role and self.check_related('organization', Organization, data, obj=obj, role_field='credential_admin_role')
def can_delete(self, obj):
# Unassociated credentials may be marked deleted by anyone, though we
@@ -1010,6 +1106,8 @@ class TeamAccess(BaseAccess):
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return self.check_related('organization', Organization, data)
def can_change(self, obj, data):
@@ -1019,6 +1117,8 @@ class TeamAccess(BaseAccess):
raise PermissionDenied(_('Unable to change organization on a team.'))
if self.user.is_superuser:
return True
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
return self.user in obj.admin_role
def can_delete(self, obj):
@@ -1027,6 +1127,8 @@ class TeamAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
"""Reverse obj and sub_obj, defer to RoleAccess if this is an assignment
of a resource role to the team."""
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
if isinstance(sub_obj, Role):
if sub_obj.content_object is None:
raise PermissionDenied(_("The {} role cannot be assigned to a team").format(sub_obj.name))
@@ -1037,10 +1139,15 @@ class TeamAccess(BaseAccess):
role_access = RoleAccess(self.user)
return role_access.can_attach(sub_obj, obj, 'member_role.parents',
*args, **kwargs)
if self.user.is_superuser:
return True
return super(TeamAccess, self).can_attach(obj, sub_obj, relationship,
*args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
if isinstance(sub_obj, Role):
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1055,6 +1162,7 @@ class ProjectAccess(BaseAccess):
I can see projects when:
- I am a superuser.
- I am an admin in an organization associated with the project.
- I am a project admin in an organization associated with the project.
- I am a user in an organization associated with the project.
- I am on a team associated with the project.
- I have been explicitly granted permission to run/check jobs using the
@@ -1075,32 +1183,22 @@ class ProjectAccess(BaseAccess):
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
return self.check_related('organization', Organization, data, mandatory=True)
return Organization.accessible_objects(self.user, 'project_admin_role').exists()
return self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True)
@check_superuser
def can_change(self, obj, data):
if not self.check_related('organization', Organization, data, obj=obj):
if not self.check_related('organization', Organization, data, obj=obj, role_field='project_admin_role'):
return False
return self.user in obj.admin_role
def can_delete(self, obj):
is_change_allowed = self.can_change(obj, None)
if not is_change_allowed:
return False
active_jobs = []
active_jobs.extend([dict(type="job", id=o.id)
for o in Job.objects.filter(project=obj, status__in=ACTIVE_STATES)])
active_jobs.extend([dict(type="project_update", id=o.id)
for o in ProjectUpdate.objects.filter(project=obj, status__in=ACTIVE_STATES)])
if len(active_jobs) > 0:
raise ActiveJobConflict(active_jobs)
return True
@check_superuser
def can_start(self, obj, validate_license=True):
return obj and self.user in obj.update_role
def can_delete(self, obj):
return self.can_change(obj, None)
class ProjectUpdateAccess(BaseAccess):
'''
@@ -1162,6 +1260,7 @@ class JobTemplateAccess(BaseAccess):
a user can create a job template if
- they are a superuser
- an org admin of any org that the project is a member
- if they are a project_admin for any org that project is a member of
- if they have user or team
based permissions tying the project to the inventory source for the
given action as well as the 'create' deploy permission.
@@ -1208,9 +1307,6 @@ class JobTemplateAccess(BaseAccess):
else:
return False
def can_copy(self, obj):
return self.can_add({'reference_obj': obj})
def can_start(self, obj, validate_license=True):
# Check license.
if validate_license:
@@ -1269,14 +1365,7 @@ class JobTemplateAccess(BaseAccess):
return True
def can_delete(self, obj):
is_delete_allowed = self.user.is_superuser or self.user in obj.admin_role
if not is_delete_allowed:
return False
active_jobs = [dict(type="job", id=o.id)
for o in obj.jobs.filter(status__in=ACTIVE_STATES)]
if len(active_jobs) > 0:
raise ActiveJobConflict(active_jobs)
return True
return self.user.is_superuser or self.user in obj.admin_role
@check_superuser
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
@@ -1420,7 +1509,7 @@ class JobAccess(BaseAccess):
elif not jt_access:
return False
org_access = obj.inventory and self.user in obj.inventory.organization.admin_role
org_access = obj.inventory and self.user in obj.inventory.organization.inventory_admin_role
project_access = obj.project is None or self.user in obj.project.admin_role
credential_access = all([self.user in cred.use_role for cred in obj.credentials.all()])
@@ -1713,13 +1802,14 @@ class WorkflowJobTemplateAccess(BaseAccess):
Users who are able to create deploy jobs can also run normal and check (dry run) jobs.
'''
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
# will check this if surveys are added to WFJT
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
return self.check_related('organization', Organization, data, mandatory=True)
return self.check_related('organization', Organization, data, role_field='workflow_admin_role',
mandatory=True)
def can_copy(self, obj):
if self.save_messages:
@@ -1746,7 +1836,8 @@ class WorkflowJobTemplateAccess(BaseAccess):
if missing_inventories:
self.messages['inventories_unable_to_copy'] = missing_inventories
return self.check_related('organization', Organization, {'reference_obj': obj}, mandatory=True)
return self.check_related('organization', Organization, {'reference_obj': obj}, role_field='workflow_admin_role',
mandatory=True)
def can_start(self, obj, validate_license=True):
if validate_license:
@@ -1771,17 +1862,11 @@ class WorkflowJobTemplateAccess(BaseAccess):
if self.user.is_superuser:
return True
return self.check_related('organization', Organization, data, obj=obj) and self.user in obj.admin_role
return (self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.user in obj.admin_role)
def can_delete(self, obj):
is_delete_allowed = self.user.is_superuser or self.user in obj.admin_role
if not is_delete_allowed:
return False
active_jobs = [dict(type="workflow_job", id=o.id)
for o in obj.workflow_jobs.filter(status__in=ACTIVE_STATES)]
if len(active_jobs) > 0:
raise ActiveJobConflict(active_jobs)
return True
return self.user.is_superuser or self.user in obj.admin_role
class WorkflowJobAccess(BaseAccess):
@@ -1812,7 +1897,7 @@ class WorkflowJobAccess(BaseAccess):
def can_delete(self, obj):
return (obj.workflow_job_template and
obj.workflow_job_template.organization and
self.user in obj.workflow_job_template.organization.admin_role)
self.user in obj.workflow_job_template.organization.workflow_admin_role)
def get_method_capability(self, method, obj, parent_obj):
if method == 'start':
@@ -2147,13 +2232,9 @@ class ScheduleAccess(BaseAccess):
prefetch_related = ('unified_job_template', 'credentials',)
def filtered_queryset(self):
qs = self.model.objects.all()
unified_pk_qs = UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role')
inv_src_qs = InventorySource.objects.filter(inventory_id=Inventory._accessible_pk_qs(Inventory, self.user, 'read_role'))
return qs.filter(
Q(unified_job_template_id__in=unified_pk_qs) |
Q(unified_job_template_id__in=inv_src_qs.values_list('pk', flat=True)))
return self.model.objects.filter(
unified_job_template__in=UnifiedJobTemplateAccess(self.user).filtered_queryset()
)
@check_superuser
def can_add(self, data):
@@ -2196,7 +2277,7 @@ class NotificationTemplateAccess(BaseAccess):
def filtered_queryset(self):
return self.model.objects.filter(
Q(organization__in=self.user.admin_of_organizations) |
Q(organization__in=Organization.accessible_objects(self.user, 'notification_admin_role')) |
Q(organization__in=self.user.auditor_of_organizations)
).distinct()
@@ -2204,22 +2285,22 @@ class NotificationTemplateAccess(BaseAccess):
if self.user.is_superuser or self.user.is_system_auditor:
return True
if obj.organization is not None:
if self.user in obj.organization.admin_role or self.user in obj.organization.auditor_role:
if self.user in obj.organization.notification_admin_role or self.user in obj.organization.auditor_role:
return True
return False
@check_superuser
def can_add(self, data):
if not data:
return Organization.accessible_objects(self.user, 'admin_role').exists()
return self.check_related('organization', Organization, data, mandatory=True)
return Organization.accessible_objects(self.user, 'notification_admin_role').exists()
return self.check_related('organization', Organization, data, role_field='notification_admin_role', mandatory=True)
@check_superuser
def can_change(self, obj, data):
if obj.organization is None:
# only superusers are allowed to edit orphan notification templates
return False
return self.check_related('organization', Organization, data, obj=obj, mandatory=True)
return self.check_related('organization', Organization, data, obj=obj, role_field='notification_admin_role', mandatory=True)
def can_admin(self, obj, data):
return self.can_change(obj, data)
@@ -2231,7 +2312,7 @@ class NotificationTemplateAccess(BaseAccess):
def can_start(self, obj, validate_license=True):
if obj.organization is None:
return False
return self.user in obj.organization.admin_role
return self.user in obj.organization.notification_admin_role
class NotificationAccess(BaseAccess):
@@ -2243,7 +2324,7 @@ class NotificationAccess(BaseAccess):
def filtered_queryset(self):
return self.model.objects.filter(
Q(notification_template__organization__in=self.user.admin_of_organizations) |
Q(notification_template__organization__in=Organization.accessible_objects(self.user, 'notification_admin_role')) |
Q(notification_template__organization__in=self.user.auditor_of_organizations)
).distinct()
@@ -2430,6 +2511,10 @@ class RoleAccess(BaseAccess):
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, data=None, skip_sub_obj_read_check=False):
if isinstance(obj.content_object, Team):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
if not skip_sub_obj_read_check and relationship in ['members', 'member_role.parents', 'parents']:
# If we are unattaching a team Role, check the Team read access
if relationship == 'parents':
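
Taken together, the access changes above gate every non-read action behind the OAuth2 token scope before any RBAC check runs. A minimal illustration (request_user and jt are hypothetical objects, not from the diff):

from awx.main.access import check_user_access
from awx.main.models import JobTemplate

# A request authenticated with a read-only OAuth2 token gets oauth_scopes == ['read'];
# session or basic auth has no oauth_scopes attribute and keeps the ['write'] default.
request_user.oauth_scopes = ['read']
check_user_access(request_user, JobTemplate, 'read', jt)         # falls through to the usual RBAC checks
check_user_access(request_user, JobTemplate, 'change', jt, {})   # False: non-read action with a read-only token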

View File

@@ -43,6 +43,16 @@ register(
category_slug='system',
)
register(
'MANAGE_ORGANIZATION_AUTH',
field_class=fields.BooleanField,
label=_('Organization Admins Can Manage Users and Teams'),
help_text=_('Controls whether any Organization Admin has the privileges to create and manage users and teams. '
'You may want to disable this ability if you are using an LDAP or SAML integration.'),
category=_('System'),
category_slug='system',
)
register(
'TOWER_ADMIN_ALERTS',
field_class=fields.BooleanField,
@@ -411,7 +421,7 @@ register(
field_class=fields.BooleanField,
default=False,
label=_('Log System Tracking Facts Individually'),
help_text=_('If set, system tracking facts will be sent for each package, service, or'
help_text=_('If set, system tracking facts will be sent for each package, service, or '
'other item found in a scan, allowing for greater search query granularity. '
'If unset, facts will be sent as a single dictionary, allowing for greater '
'efficiency in fact processing.'),
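
The MANAGE_ORGANIZATION_AUTH setting registered above is the switch consulted by the UserAccess and TeamAccess changes in the earlier hunks. A minimal sketch of that pattern, with an illustrative helper name that is not part of the diff:

from django.conf import settings

def can_manage_users_and_teams(user, organization):
    # Superusers are never restricted by the toggle.
    if user.is_superuser:
        return True
    # With the setting disabled (e.g. LDAP or SAML owns users and teams),
    # organization admins lose these management privileges entirely.
    if not settings.MANAGE_ORGANIZATION_AUTH:
        return False
    # Otherwise fall back to the usual org-admin RBAC check.
    return user in organization.admin_role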

View File

@@ -5,9 +5,17 @@ import re
from django.utils.translation import ugettext_lazy as _
__all__ = [
'CLOUD_PROVIDERS', 'SCHEDULEABLE_PROVIDERS', 'PRIVILEGE_ESCALATION_METHODS',
'ANSI_SGR_PATTERN', 'CAN_CANCEL', 'ACTIVE_STATES'
]
CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'cloudforms', 'tower')
SCHEDULEABLE_PROVIDERS = CLOUD_PROVIDERS + ('custom', 'scm',)
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')), ('su', _('Su')), ('pbrun', _('Pbrun')), ('pfexec', _('Pfexec')),
('dzdo', _('DZDO')), ('pmrun', _('Pmrun')), ('runas', _('Runas'))]
ANSI_SGR_PATTERN = re.compile(r'\x1b\[[0-9;]*m')
CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL

View File

@@ -1,17 +1,11 @@
import json
import logging
import urllib
from channels import Group, channel_layers
from channels.sessions import channel_session
from channels.handler import AsgiRequest
from channels import Group
from channels.auth import channel_session_user_from_http, channel_session_user
from django.conf import settings
from django.core.serializers.json import DjangoJSONEncoder
from django.contrib.auth.models import User
from awx.main.models.organization import AuthToken
logger = logging.getLogger('awx.main.consumers')
@@ -22,51 +16,29 @@ def discard_groups(message):
Group(group).discard(message.reply_channel)
@channel_session
@channel_session_user_from_http
def ws_connect(message):
message.reply_channel.send({"accept": True})
message.content['method'] = 'FAKE'
request = AsgiRequest(message)
token = request.COOKIES.get('token', None)
if token is not None:
token = urllib.unquote(token).strip('"')
try:
auth_token = AuthToken.objects.get(key=token)
if auth_token.in_valid_tokens:
message.channel_session['user_id'] = auth_token.user_id
message.reply_channel.send({"text": json.dumps({"accept": True, "user": auth_token.user_id})})
return None
except AuthToken.DoesNotExist:
logger.error("auth_token provided was invalid.")
message.reply_channel.send({"close": True})
if message.user.is_authenticated():
message.reply_channel.send(
{"text": json.dumps({"accept": True, "user": message.user.id})}
)
else:
logger.error("Request user is not authenticated to use websocket.")
message.reply_channel.send({"close": True})
return None
@channel_session
@channel_session_user
def ws_disconnect(message):
discard_groups(message)
@channel_session
@channel_session_user
def ws_receive(message):
from awx.main.access import consumer_access
channel_layer_settings = channel_layers.configs[message.channel_layer.alias]
max_retries = channel_layer_settings.get('RECEIVE_MAX_RETRY', settings.CHANNEL_LAYER_RECEIVE_MAX_RETRY)
user_id = message.channel_session.get('user_id', None)
if user_id is None:
retries = message.content.get('connect_retries', 0) + 1
message.content['connect_retries'] = retries
message.reply_channel.send({"text": json.dumps({"error": "no valid user"})})
retries_left = max_retries - retries
if retries_left > 0:
message.channel_layer.send(message.channel.name, message.content)
else:
logger.error("No valid user found for websocket.")
return None
user = User.objects.get(pk=user_id)
user = message.user
raw_data = message.content['text']
data = json.loads(raw_data)

View File

@@ -34,3 +34,4 @@ class _AwxTaskError():
AwxTaskError = _AwxTaskError()

View File

@@ -14,7 +14,7 @@ from django.conf import settings
import awx
from awx.main.expect import run
from awx.main.utils import OutputEventFilter
from awx.main.utils import OutputEventFilter, get_system_task_capacity
from awx.main.queue import CallbackQueueDispatcher
logger = logging.getLogger('awx.isolated.manager')
@@ -381,10 +381,14 @@ class IsolatedManager(object):
logger.error(err_template.format(instance.hostname, instance.version, awx_application_version))
instance.capacity = 0
else:
if instance.capacity == 0 and task_result['capacity']:
if instance.capacity == 0 and task_result['capacity_cpu']:
logger.warning('Isolated instance {} has re-joined.'.format(instance.hostname))
instance.capacity = int(task_result['capacity'])
instance.save(update_fields=['capacity', 'version', 'modified'])
instance.cpu_capacity = int(task_result['capacity_cpu'])
instance.mem_capacity = int(task_result['capacity_mem'])
instance.capacity = get_system_task_capacity(scale=instance.capacity_adjustment,
cpu_capacity=int(task_result['capacity_cpu']),
mem_capacity=int(task_result['capacity_mem']))
instance.save(update_fields=['cpu_capacity', 'mem_capacity', 'capacity', 'version', 'modified'])
@classmethod
def health_check(cls, instance_qs, awx_application_version):
@@ -428,7 +432,7 @@ class IsolatedManager(object):
task_result = result['plays'][0]['tasks'][0]['hosts'][instance.hostname]
except (KeyError, IndexError):
task_result = {}
if 'capacity' in task_result:
if 'capacity_cpu' in task_result and 'capacity_mem' in task_result:
cls.update_capacity(instance, task_result, awx_application_version)
elif instance.capacity == 0:
logger.debug('Isolated instance {} previously marked as lost, could not re-join.'.format(
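
For reference, the effective capacity above is now derived from both figures through get_system_task_capacity(scale=instance.capacity_adjustment, ...). That helper is not shown in this diff; a plausible sketch, assuming capacity_adjustment linearly interpolates between the smaller (more conservative) and larger of the CPU- and memory-based capacities:

def get_system_task_capacity(scale, cpu_capacity, mem_capacity):
    # scale is the instance's capacity_adjustment, expected in [0, 1]:
    # 0 keeps the conservative figure, 1 uses the generous one.
    low, high = sorted((int(cpu_capacity), int(mem_capacity)))
    return int(low + (high - low) * float(scale))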

View File

@@ -47,7 +47,7 @@ def open_fifo_write(path, data):
This blocks the thread until an external process (such as ssh-agent)
reads data from the pipe.
'''
os.mkfifo(path, 0600)
os.mkfifo(path, 0o600)
thread.start_new_thread(lambda p, d: open(p, 'w').write(d), (path, data))

View File

@@ -74,7 +74,7 @@ class JSONField(upstream_JSONField):
class JSONBField(upstream_JSONBField):
def get_prep_lookup(self, lookup_type, value):
if isinstance(value, basestring) and value == "null":
if isinstance(value, six.string_types) and value == "null":
return 'null'
return super(JSONBField, self).get_prep_lookup(lookup_type, value)
@@ -356,7 +356,7 @@ class SmartFilterField(models.TextField):
value = urllib.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError, e:
except RuntimeError as e:
raise models.base.ValidationError(e)
return super(SmartFilterField, self).get_prep_value(value)
@@ -506,6 +506,12 @@ class CredentialInputField(JSONSchemaField):
v != '$encrypted$',
model_instance.pk
]):
if not isinstance(getattr(model_instance, k), six.string_types):
raise django_exceptions.ValidationError(
_('secret values must be of type string, not {}').format(type(v).__name__),
code='invalid',
params={'value': v},
)
decrypted_values[k] = utils.decrypt_field(model_instance, k)
else:
decrypted_values[k] = v
@@ -695,11 +701,10 @@ class CredentialTypeInjectorField(JSONSchemaField):
'properties': {
'file': {
'type': 'object',
'properties': {
'template': {'type': 'string'},
'patternProperties': {
'^template(\.[a-zA-Z_]+[a-zA-Z0-9_]*)?$': {'type': 'string'},
},
'additionalProperties': False,
'required': ['template'],
},
'env': {
'type': 'object',
@@ -749,8 +754,22 @@ class CredentialTypeInjectorField(JSONSchemaField):
class TowerNamespace:
filename = None
valid_namespace['tower'] = TowerNamespace()
# ensure either single file or multi-file syntax is used (but not both)
template_names = [x for x in value.get('file', {}).keys() if x.startswith('template')]
if 'template' in template_names and len(template_names) > 1:
raise django_exceptions.ValidationError(
_('Must use multi-file syntax when injecting multiple files'),
code='invalid',
params={'value': value},
)
if 'template' not in template_names:
valid_namespace['tower'].filename = TowerNamespace()
for template_name in template_names:
template_name = template_name.split('.')[1]
setattr(valid_namespace['tower'].filename, template_name, 'EXAMPLE')
for type_, injector in value.items():
for key, tmpl in injector.items():
try:
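
The schema change above swaps the single mandatory 'template' key for a patternProperties rule, so one credential type can inject several files and reference each of them through the tower.filename namespace. A hypothetical injector definition that the new validation accepts (variable and environment names are illustrative only):

injectors = {
    'file': {
        'template.cert': '{{ client_certificate }}',
        'template.key': '{{ client_key }}',
    },
    'env': {
        'MY_CERT_PATH': '{{ tower.filename.cert }}',
        'MY_KEY_PATH': '{{ tower.filename.key }}',
    },
}

The old single-file form ('file': {'template': ...} referenced as {{ tower.filename }}) still validates, but mixing it with dotted template names now raises the 'Must use multi-file syntax' error shown above.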

View File

@@ -5,6 +5,8 @@
import datetime
import logging
import six
# Django
from django.core.management.base import BaseCommand
from django.utils.timezone import now
@@ -41,7 +43,7 @@ class Command(BaseCommand):
n_deleted_items = 0
pks_to_delete = set()
for asobj in ActivityStream.objects.iterator():
asobj_disp = '"%s" id: %s' % (unicode(asobj), asobj.id)
asobj_disp = '"%s" id: %s' % (six.text_type(asobj), asobj.id)
if asobj.timestamp >= self.cutoff:
if self.dry_run:
self.logger.info("would skip %s" % asobj_disp)

View File

@@ -1,35 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.db import transaction
from django.core.management.base import BaseCommand
from django.utils.timezone import now
# AWX
from awx.main.models import * # noqa
class Command(BaseCommand):
'''
Management command to cleanup expired auth tokens
'''
help = 'Cleanup expired auth tokens.'
def init_logging(self):
self.logger = logging.getLogger('awx.main.commands.cleanup_authtokens')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(message)s'))
self.logger.addHandler(handler)
self.logger.propagate = False
@transaction.atomic
def handle(self, *args, **options):
self.init_logging()
tokens_removed = AuthToken.objects.filter(expires__lt=now())
self.logger.log(99, "Removing %d expired auth tokens" % tokens_removed.count())
tokens_removed.delete()

View File

@@ -5,6 +5,8 @@
import datetime
import logging
import six
# Django
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
@@ -66,7 +68,7 @@ class Command(BaseCommand):
jobs = Job.objects.filter(created__lt=self.cutoff)
for job in jobs.iterator():
job_display = '"%s" (%d host summaries, %d events)' % \
(unicode(job),
(six.text_type(job),
job.job_host_summaries.count(), job.job_events.count())
if job.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -87,7 +89,7 @@ class Command(BaseCommand):
ad_hoc_commands = AdHocCommand.objects.filter(created__lt=self.cutoff)
for ad_hoc_command in ad_hoc_commands.iterator():
ad_hoc_command_display = '"%s" (%d events)' % \
(unicode(ad_hoc_command),
(six.text_type(ad_hoc_command),
ad_hoc_command.ad_hoc_command_events.count())
if ad_hoc_command.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -107,7 +109,7 @@ class Command(BaseCommand):
skipped, deleted = 0, 0
project_updates = ProjectUpdate.objects.filter(created__lt=self.cutoff)
for pu in project_updates.iterator():
pu_display = '"%s" (type %s)' % (unicode(pu), unicode(pu.launch_type))
pu_display = '"%s" (type %s)' % (six.text_type(pu), six.text_type(pu.launch_type))
if pu.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s project update %s', action_text, pu.status, pu_display)
@@ -130,7 +132,7 @@ class Command(BaseCommand):
skipped, deleted = 0, 0
inventory_updates = InventoryUpdate.objects.filter(created__lt=self.cutoff)
for iu in inventory_updates.iterator():
iu_display = '"%s" (source %s)' % (unicode(iu), unicode(iu.source))
iu_display = '"%s" (source %s)' % (six.text_type(iu), six.text_type(iu.source))
if iu.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s inventory update %s', action_text, iu.status, iu_display)
@@ -153,7 +155,7 @@ class Command(BaseCommand):
skipped, deleted = 0, 0
system_jobs = SystemJob.objects.filter(created__lt=self.cutoff)
for sj in system_jobs.iterator():
sj_display = '"%s" (type %s)' % (unicode(sj), unicode(sj.job_type))
sj_display = '"%s" (type %s)' % (six.text_type(sj), six.text_type(sj.job_type))
if sj.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s system_job %s', action_text, sj.status, sj_display)
@@ -183,7 +185,7 @@ class Command(BaseCommand):
workflow_jobs = WorkflowJob.objects.filter(created__lt=self.cutoff)
for workflow_job in workflow_jobs.iterator():
workflow_job_display = '"{}" ({} nodes)'.format(
unicode(workflow_job),
six.text_type(workflow_job),
workflow_job.workflow_nodes.count())
if workflow_job.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -204,7 +206,7 @@ class Command(BaseCommand):
notifications = Notification.objects.filter(created__lt=self.cutoff)
for notification in notifications.iterator():
notification_display = '"{}" (started {}, {} type, {} sent)'.format(
unicode(notification), unicode(notification.created),
six.text_type(notification), six.text_type(notification.created),
notification.notification_type, notification.notifications_sent)
if notification.status in ('pending',):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -246,4 +248,3 @@ class Command(BaseCommand):
self.logger.log(99, '%s: %d would be deleted, %d would be skipped.', m.replace('_', ' '), deleted, skipped)
else:
self.logger.log(99, '%s: %d deleted, %d skipped.', m.replace('_', ' '), deleted, skipped)

View File

@@ -17,7 +17,7 @@ class Command(BaseCommand):
def handle(self, *args, **kwargs):
if getattr(settings, 'AWX_ISOLATED_PRIVATE_KEY', False):
print settings.AWX_ISOLATED_PUBLIC_KEY
print(settings.AWX_ISOLATED_PUBLIC_KEY)
return
key = rsa.generate_private_key(
@@ -41,4 +41,4 @@ class Command(BaseCommand):
) + " generated-by-awx@%s" % datetime.datetime.utcnow().isoformat()
)
pemfile.save()
print pemfile.value
print(pemfile.value)

View File

@@ -155,7 +155,7 @@ class AnsibleInventoryLoader(object):
if self.tmp_private_dir:
shutil.rmtree(self.tmp_private_dir, True)
if proc.returncode != 0 or 'file not found' in stderr:
if proc.returncode != 0:
raise RuntimeError('%s failed (rc=%d) with stdout:\n%s\nstderr:\n%s' % (
self.method, proc.returncode, stdout, stderr))

View File

@@ -17,6 +17,10 @@ class Command(BaseCommand):
help='Comma-Delimited Hosts to add to the Queue')
parser.add_argument('--controller', dest='controller', type=str,
default='', help='The controlling group (makes this an isolated group)')
parser.add_argument('--instance_percent', dest='instance_percent', type=int, default=0,
help='The percentage of active instances that will be assigned to this group'),
parser.add_argument('--instance_minimum', dest='instance_minimum', type=int, default=0,
help='The minimum number of instances that will be retained for this group from available instances')
def handle(self, **options):
queuename = options.get('queuename')
@@ -38,7 +42,9 @@ class Command(BaseCommand):
changed = True
else:
print("Creating instance group {}".format(queuename))
ig = InstanceGroup(name=queuename)
ig = InstanceGroup(name=queuename,
policy_instance_percentage=options.get('instance_percent'),
policy_instance_minimum=options.get('instance_minimum'))
if control_ig:
ig.controller = control_ig
ig.save()
@@ -60,5 +66,7 @@ class Command(BaseCommand):
sys.exit(1)
else:
print("Instance already registered {}".format(instance[0].hostname))
ig.policy_instance_list = instance_list
ig.save()
if changed:
print('(changed: True)')
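
With the two new options, a group's scaling policy can be seeded at registration time. A hedged example using Django's call_command (the hostnames option name is assumed from the surrounding command, and all values are placeholders):

from django.core.management import call_command

# Registers (or updates) the 'ha-group' instance group with
# policy_instance_percentage=50 and policy_instance_minimum=1,
# and records the listed hosts in policy_instance_list.
call_command('register_queue', queuename='ha-group', hostnames='node1,node2',
             instance_percent=50, instance_minimum=1)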

View File

@@ -161,14 +161,35 @@ class CallbackBrokerWorker(ConsumerMixin):
break
if body.get('event') == 'EOF':
# EOF events are sent when stdout for the running task is
# closed. don't actually persist them to the database; we
# just use them to report `summary` websocket events as an
# approximation for when a job is "done"
emit_channel_notification(
'jobs-summary',
dict(group_name='jobs', unified_job_id=job_identifier)
)
try:
logger.info('Event processing is finished for Job {}, sending notifications'.format(job_identifier))
# EOF events are sent when stdout for the running task is
# closed. don't actually persist them to the database; we
# just use them to report `summary` websocket events as an
# approximation for when a job is "done"
emit_channel_notification(
'jobs-summary',
dict(group_name='jobs', unified_job_id=job_identifier)
)
# Additionally, when we've processed all events, we should
# have all the data we need to send out success/failure
# notification templates
uj = UnifiedJob.objects.get(pk=job_identifier)
if hasattr(uj, 'send_notification_templates'):
retries = 0
while retries < 5:
if uj.finished:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
break
else:
# wait a few seconds to avoid a race where the
# events are persisted _before_ the UJ.status
# changes from running -> successful
retries += 1
time.sleep(1)
uj = UnifiedJob.objects.get(pk=job_identifier)
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
continue
retries = 0
@@ -208,7 +229,7 @@ class Command(BaseCommand):
help = 'Launch the job callback receiver'
def handle(self, *arg, **options):
with Connection(settings.CELERY_BROKER_URL) as conn:
with Connection(settings.BROKER_URL) as conn:
try:
worker = CallbackBrokerWorker(conn)
worker.run()

Some files were not shown because too many files have changed in this diff.