Compare commits

1144 Commits
2.1.0 ... 4.0.0

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
9479b1b824 Merge pull request #3535 from shanemcd/oops
Fix bug where init scripts didn't create the admin user correctly

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-28 00:39:05 +00:00
Shane McDonald
fcf6b4ae45 Fix bug where init scripts didn't create the admin user correctly 2019-03-27 19:43:47 -04:00
softwarefactory-project-zuul[bot]
7b4c63037a Merge pull request #3523 from ryanpetrello/iso-cancel
properly handle isolated cancellation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-27 22:44:10 +00:00
softwarefactory-project-zuul[bot]
37a90320ec Merge pull request #3532 from jbradberry/double-jeopardy
Log errors directly from inventory_import.py only if running by hand

Reviewed-by: awxbot
             https://github.com/awxbot
2019-03-27 22:27:58 +00:00
Jeff Bradberry
a803e86a95 Log errors directly from inventory_import.py only if running by hand 2019-03-27 18:02:46 -04:00
softwarefactory-project-zuul[bot]
196a6ff36c Merge pull request #3525 from shanemcd/make-things-work
Fix docker-compose installs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-27 17:38:36 +00:00
Shane McDonald
c3ba851908 Fix docker-compose installs
In a series of unfortunate events, my patch yesterday didn't actually work. This fixes that.
2019-03-27 13:06:55 -04:00
softwarefactory-project-zuul[bot]
11223472d3 Merge pull request #3524 from kdelee/update_install_docs
update install docs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-27 16:26:03 +00:00
softwarefactory-project-zuul[bot]
d0a996b139 Merge pull request #3520 from ryanpetrello/runner131
pin runner 1.3.1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-27 15:58:34 +00:00
Elijah DeLee
7dd635cd8d update install docs 2019-03-27 11:54:33 -04:00
softwarefactory-project-zuul[bot]
f715e4e410 Merge pull request #3519 from chrismeyersfsu/fix-iso_bwrap
runner expects process isolation flags in settings

Reviewed-by: awxbot
             https://github.com/awxbot
2019-03-27 15:49:24 +00:00
Ryan Petrello
a983d4bc1f properly handle isolated cancellation 2019-03-27 11:44:26 -04:00
Ryan Petrello
f7cffbfe5c pin runner 1.3.1 2019-03-27 11:23:25 -04:00
chris meyers
2329079326 runner expects process isolation flags in settings
* Towards the goal of converging the iso code path w/ the non-iso code
path. More process isolation control flags into settings.
2019-03-27 11:08:41 -04:00
softwarefactory-project-zuul[bot]
055e7b4974 Merge pull request #3515 from shanemcd/docker-compose-permissions
Fix permissions of sensitive files in docker-compose installation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-27 14:11:31 +00:00
Shane McDonald
c44bf6f903 Allow for platform specific variables in docker-compose install
This changes the default docker_compose_dir on macOS to a writable location
2019-03-27 09:32:04 -04:00
Shane McDonald
a6d031f46f Fix permissions of sensitive files in docker-compose installation 2019-03-27 09:31:10 -04:00
softwarefactory-project-zuul[bot]
2129f12085 Merge pull request #3505 from shanemcd/devel
Address CVE-2019-3869

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 19:38:28 +00:00
softwarefactory-project-zuul[bot]
23185ca22f Merge pull request #3232 from AlanCoding/truly_empty_groups
Surface empty groups as children of all group

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 19:17:36 +00:00
Shane McDonald
2b6cf97157 Do not set credentials via environment variables 2019-03-26 15:13:28 -04:00
Shane McDonald
07e5a00f14 Remove “standalone Docker” installation path
This has been a burden to maintain. docker-compose is now required
2019-03-26 15:13:28 -04:00
softwarefactory-project-zuul[bot]
1b0f5b05ad Merge pull request #3502 from marshmalien/feat-toolbar-sort-instance-modal-list
Feat toolbar sort instance modal list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 18:57:24 +00:00
softwarefactory-project-zuul[bot]
5ff4625eb1 Merge pull request #3280 from AlanCoding/playbook_dir
Set --playbook-dir in calls to ansible-inventory

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 18:53:12 +00:00
Marliana Lara
1829e7cad4 Add sort toolbar to instance modal 2019-03-26 14:34:33 -04:00
AlanCoding
e097f5a021 implement playbook-dir option in ansible-inventory calls 2019-03-26 14:09:08 -04:00
softwarefactory-project-zuul[bot]
caa5596386 Merge pull request #3320 from vismay-golwala/custom_venvs
Feature: custom virtual environment directories

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 16:26:50 +00:00
Vismay Golwala
ec390b049d Feature: custom virtual environment directories
Currently, users are allowed to define virtual environments in
`settings.BASE_VENV_PATH` only, because that's the only place
Tower looks for virtual environments. This feature allows users
to define custom directory paths, via the API or UI, in which to
look for virtual environments. Tower aggregates virtual environments
from all of these paths, except environments with the special name `awx`.

Signed-off-by: Vismay Golwala <vgolwala@redhat.com>
2019-03-26 12:04:17 -04:00
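A minimal sketch of the aggregation described above, with hypothetical helper and path names; only the exclusion of the special `awx` environment comes from the commit message:

```python
import os

def collect_custom_venvs(paths):
    """Gather virtualenv directories from every configured base path."""
    venvs = []
    for base in paths:
        if not os.path.isdir(base):
            continue
        for name in os.listdir(base):
            if name == 'awx':  # the special 'awx' environment is skipped
                continue
            candidate = os.path.join(base, name)
            # treat a directory as a venv if it contains bin/activate
            if os.path.exists(os.path.join(candidate, 'bin', 'activate')):
                venvs.append(candidate)
    return sorted(venvs)
```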
softwarefactory-project-zuul[bot]
0814a9c4a1 Merge pull request #3266 from ansible/inventory_plugins
Transition to inventory plugins

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2019-03-26 15:14:27 +00:00
Marliana Lara
0a1b220f56 Link to instances list via sref directive 2019-03-26 10:35:22 -04:00
AlanCoding
d39b3b3165 Remove compatibility_mode field, simplify jinja2 syntax
fix minor bug where credential not shown in API
2019-03-26 10:29:39 -04:00
AlanCoding
19ad7d3983 Inventory plugins data tweaks and finalization
Disable use of azure_rm inventory plugin
Disable use of ec2 inventory plugin
due to compatibility issues that are unresolved

Fix conflicts with ansible runner integration

Add additional content enabled by Ansible core changes
2019-03-26 10:29:39 -04:00
AlanCoding
cd7e358b73 Inventory plugins transition dev finishing work
Bump keystone auth to resolve problem with openstack script

Clarify code path, routing to template vs. managed injector
  behavior is also now reflected in test data files

Refactor test data layout for inventory injector logic

Add developer docs for inventory plugins transition

Memoize only get_ansible_version with no parameters

Make inventory plugin injector enablement a separate
  concept from the initial_version
  switch tests to look for plugin_name as well

Add plugin injectors for tower and foreman.

Add jinja2 native types compat feature

move tower source license compare logic to management command

introduce inventory source compat mode

pin jinja2 for native Ansible types

Add parent group keys, and additional translations

manual dash sanitization for un-region-like ec2 groups

nest zones under regions using Ansible core feature just merged
  implement conditionally only with BOTH group_by options

Make compat mode default be true
  in API models, UI add and edit controllers

Add several additional hostvars to translation
Add Azure tags null case translation

Make Azure group_by key off source_vars
  to be consistent with the script

support top-level ec2 boto_profile setting
2019-03-26 10:29:39 -04:00
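For context on the jinja2 "native types" items above, this is the upstream jinja2 feature being pinned (available in jinja2 >= 2.10): the native-types environment preserves Python types instead of flattening every rendered value to a string.

```python
from jinja2 import Environment
from jinja2.nativetypes import NativeEnvironment

tpl = '{{ ["us-east-1", "us-west-2"] }}'
print(type(Environment().from_string(tpl).render()))        # <class 'str'>
print(type(NativeEnvironment().from_string(tpl).render()))  # <class 'list'>
```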
AlanCoding
bc5881ad21 Primary development of inventory plugins, partial compat layer
Initialize some inventory plugin test data files
Implement openstack inventory plugin

This may be removed later:
- port non-JSON line strip method from core

Duplicate effort with AWX mainline devel
- Produce ansible_version related to venv

Refactor some of injector management, moving more
  of this overhead into tasks.py, when it comes to
  managing injector kwargs

Upgrade and move openstack inventory script
  sync up parameters

Add extremely detailed logic to inventory file creation
for ec2, Azure, and gce so that they are closer to a
genuine superset of what the contrib script used to give.
2019-03-26 10:29:39 -04:00
Jim Ladd
dd854baba2 Add support for Azure inventory plugin 2019-03-26 10:29:39 -04:00
Jim Ladd
7cce3cad06 Add support for ec2 inventory plugin 2019-03-26 10:29:38 -04:00
AlanCoding
622fbc116b move script injection logic to inventory file 2019-03-26 10:29:38 -04:00
AlanCoding
b9d489c788 Use randomized file names for injector credential files 2019-03-26 10:29:38 -04:00
AlanCoding
5cbcfbe0c6 Port inventory source injector tests to functional tests
This new batch of tests assures that the injector logic
for inventory sources in their old script version remains
untouched by the refactoring underway.

Plugins are also tested by the same means of comparing
to reference files; these will be used to assure that
all parameters that used to be respected are still
respected in the plugin system.
2019-03-26 10:29:38 -04:00
Jim Ladd
d46a403a49 GCE plugin should not set any regions when 'all' specified 2019-03-26 10:29:38 -04:00
Jim Ladd
de808d4911 Only install futures on py2 2019-03-26 10:29:38 -04:00
AlanCoding
43eff55fd4 fix bugs related to python3 2019-03-26 10:29:37 -04:00
AlanCoding
6c130fa6c3 Build-in inventory plugin code structure with gce working
supporting and related changes
 - Fix inconsistency between can_update / can_start
 - Avoid creating inventory file twice unnecessarily
 - Non-functional consolidation in Azure injection logic
 - Inject GCE creds as indented JSON for readability
 - Create new injector class structure, add gce
 - Reduce management command overrides of runtime environment
2019-03-26 10:29:35 -04:00
softwarefactory-project-zuul[bot]
90ea9a8cc4 Merge pull request #3500 from Spredzy/etc_ssh
bwrap/runner: Add /etc/ssh in bind mounted folder

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 12:27:43 +00:00
softwarefactory-project-zuul[bot]
b09bca54b7 Merge pull request #3499 from rooftopcellist/content-oauth
update content-type for oauth2 docs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-26 12:07:32 +00:00
Ryan Petrello
8e4a87d0af fix tests for add /etc/ssh in bind mounted folder 2019-03-26 08:04:16 -04:00
Yanis Guenane
fd50feb258 bwrap/runner: Add /etc/ssh in bind mounted folder
/etc/ssh is currently not bound when running inside bwrap; this leads to
errors like "Bad owner or permissions on /etc/ssh/ssh_config.d/05-redhat.conf"
since the process cannot access this file.

https://github.com/ansible/awx/pull/3391 was done pre runner
integration.

Fixes: https://github.com/ansible/awx/issues/3392

Signed-off-by: Yanis Guenane <yanis@guenane.org>
2019-03-26 12:43:59 +01:00
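For reference, AWX exposes a setting for extra read-only bind mounts inside the sandbox; listing `/etc/ssh` there is roughly the user-facing equivalent of what this change makes the default (the value shown is illustrative):

```python
# awx settings sketch: extra paths shown (read-only) inside the bwrap sandbox
AWX_PROOT_SHOW_PATHS = ['/etc/ssh']
```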
AlanCoding
f749a5d44d Surface empty groups as children of all group 2019-03-26 07:18:52 -04:00
Christian Adams
c3366db5ca update content-type for oauth2 docs 2019-03-25 23:55:56 -04:00
softwarefactory-project-zuul[bot]
07a9cd106e Merge pull request #3494 from ryanpetrello/more-iso
more iso cleanup and bug fixes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-25 23:21:06 +00:00
Ryan Petrello
b2a1824d21 store set_stat data for isolated job runs 2019-03-25 18:53:42 -04:00
softwarefactory-project-zuul[bot]
303443796e Merge pull request #3458 from marshmalien/feat-toolbar-sort-user-tokens
Add sort to User Tokens list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-25 22:25:55 +00:00
Ryan Petrello
495dc2202f more iso cleanup and bug fixes 2019-03-25 17:47:58 -04:00
softwarefactory-project-zuul[bot]
33f5200a20 Merge pull request #3497 from ryanpetrello/whoops-dot-js
remove an errant console.log

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-25 19:59:47 +00:00
Ryan Petrello
8674e3b4de remove an errant console.log 2019-03-25 15:24:34 -04:00
softwarefactory-project-zuul[bot]
ace459cf70 Merge pull request #3447 from beeankha/node_activity_stream
WFJT Node Activity Stream Bug Fix

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-25 19:23:27 +00:00
softwarefactory-project-zuul[bot]
d0c952692d Merge pull request #3481 from ryanpetrello/minor-runner-cleanup
Minor post-runner cleanup

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-25 16:57:15 +00:00
Ryan Petrello
af8e071840 remove old callback plugin code and tests 2019-03-25 12:26:51 -04:00
Ryan Petrello
e6abd77c96 remove py2 compatibility from awx.main.expect.run
this module is no longer run on isolated nodes (they use runner now)
2019-03-25 12:26:47 -04:00
Ryan Petrello
42bfff301c remove python-memcached as a base dependency for playbook execution 2019-03-25 12:26:44 -04:00
Marliana Lara
0aff1a2c75 Add sort to users tokens list 2019-03-25 11:18:18 -04:00
Keith Grant
685f4018f2 improve verbiage in activity stream for associating/disassociating wf nodes 2019-03-25 10:59:24 -04:00
softwarefactory-project-zuul[bot]
1dff691830 Merge pull request #3468 from kialam/fix-groups-list-view
Fix 'Groups' list item styling.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-25 14:54:25 +00:00
softwarefactory-project-zuul[bot]
525021214c Merge pull request #3483 from keithjgrant/inventory-vars-popout
Inventory vars popout round 2 (codemirror)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-22 20:27:13 +00:00
softwarefactory-project-zuul[bot]
c12c64f5e7 Merge pull request #3484 from shanemcd/devel
Fix python3 offline installs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-22 18:31:05 +00:00
Shane McDonald
0eaeadad87 Fix python3 offline installs 2019-03-22 13:30:09 -04:00
softwarefactory-project-zuul[bot]
eb5846d1be Merge pull request #3482 from chrismeyersfsu/fix-iso_ident
do not generate a random ident

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-22 16:43:04 +00:00
chris meyers
87e1ba4dea do not generate a random ident
* instead, set the ident passed to ansible runner to be the job id. That
way, we know what directory to look in for results when the directory
structure is created.
2019-03-22 12:19:42 -04:00
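A hedged sketch of the approach: `ident` is a real parameter of `ansible_runner.run()`, and artifacts for a run land under `<private_data_dir>/artifacts/<ident>/`; the job id and paths below are illustrative.

```python
import ansible_runner

job_id = 42  # illustrative AWX job primary key
r = ansible_runner.run(
    private_data_dir='/tmp/awx_job_%d' % job_id,
    playbook='site.yml',
    ident=str(job_id),  # deterministic: results go to artifacts/42/, not a random dir
)
print(r.status, r.rc)
```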
softwarefactory-project-zuul[bot]
a9427dbf1b Merge pull request #3459 from beeankha/email_timeout_option
Provide Default Email Timeout Value

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-22 15:37:02 +00:00
Keith Grant
e96e1e925c update e2e test for codemirror change 2019-03-22 11:30:31 -04:00
Keith Grant
7476fefd65 fix codemirror in add host, add inventory, and workflow job details 2019-03-22 10:11:18 -04:00
softwarefactory-project-zuul[bot]
8b2fc26219 Merge pull request #3041 from chrismeyersfsu/runnerpy3
ansible-runner integration

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 22:21:50 +00:00
softwarefactory-project-zuul[bot]
9480f911b2 Merge pull request #3471 from ansible/add_dev_supervisor
Install supervisor into the dev environment

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 20:39:03 +00:00
beeankha
d7fc3f53b8 Update indentation 2019-03-21 16:14:56 -04:00
Matthew Jones
91cbaa1096 Install supervisor into the dev environment 2019-03-21 15:58:24 -04:00
beeankha
3e13eff7f4 Change serializer to take in init param default value 2019-03-21 15:22:34 -04:00
softwarefactory-project-zuul[bot]
a562994b64 Merge pull request #3466 from ryanpetrello/libcloud-minus-pycrypto
pin apache-libcloud to a version that doesn't use PyCrypto

Reviewed-by: Matthew Jones <mat@matburt.net>
             https://github.com/matburt
2019-03-21 18:59:28 +00:00
Ryan Petrello
b02d9ae282 pin apache-libcloud to a version that doesn't use PyCrypto
once a new version lands on PyPI, we'll pin to _it_
2019-03-21 14:21:04 -04:00
Kia Lam
57820b7056 Fix 'Groups' list item styling. 2019-03-21 13:44:56 -04:00
softwarefactory-project-zuul[bot]
e3bbd436b4 Merge pull request #3215 from vismay-golwala/survey_allow_empty_defaults
Allow empty default values for numerical survey answers.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 16:37:02 +00:00
softwarefactory-project-zuul[bot]
9aa9524257 Merge pull request #3440 from marshmalien/feat-toolbar-sort-instances-list
Add sort and pagination to instances list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 16:19:50 +00:00
softwarefactory-project-zuul[bot]
af5a898919 Merge pull request #3433 from jlmitch5/lookupEnhancement
hide rows from lists where adding would be redundant

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 15:09:17 +00:00
softwarefactory-project-zuul[bot]
a04329efed Merge pull request #3453 from marshmalien/feat-toolbar-sort-application-tokens-list
Add sort to application tokens list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 14:58:52 +00:00
Keith Grant
a0b2ce3ef1 Merge pull request #3451 from keithjgrant/inventory-vars-popout
Host/Inventory vars popout (code mirror)
2019-03-21 10:55:12 -04:00
softwarefactory-project-zuul[bot]
cd62f39bce Merge pull request #3219 from mickfeech/devel
update documentation to include Kubernetes initContainers

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 14:36:17 +00:00
chris meyers
b7b97dd58d doc update fix 2019-03-21 09:26:59 -04:00
beeankha
1d03625b27 Remove comment from serializer 2019-03-21 09:08:36 -04:00
softwarefactory-project-zuul[bot]
af55c4c05e Merge pull request #3455 from ansible/websockets-again
e2e fixtures and WS

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-21 12:51:26 +00:00
chris meyers
0a670e8db1 change from runner master to runner 1.3 2019-03-21 07:46:11 -04:00
Keith Grant
cf62fa67bd add links to activity stream for workflow_job_template_node 2019-03-20 17:00:42 -04:00
mickfeech
3c382322b0 Fix misspelled word 2019-03-20 16:33:02 -04:00
mickfeech
f4ef3024fd Merge branch 'devel' of https://github.com/mickfeech/awx into devel 2019-03-20 16:30:55 -04:00
Unknown
67ca2fa335 update documentation to include Kubernetes initContainers
Update documentation to include Kubernetes initContainers in custom virtualenvs
2019-03-20 16:25:49 -04:00
beeankha
c9ac805eed [WIP] Provide Default Email Timeout Value 2019-03-20 16:17:48 -04:00
softwarefactory-project-zuul[bot]
f40b637efc Merge pull request #3457 from AlexSCorey/3300-JTCreationPermissionOnDashboard
Shows the button to add a JT to users with permissions to make JTs.

Reviewed-by: Alex Corey <Alex.swansboro@gmail.com>
             https://github.com/AlexSCorey
2019-03-20 20:16:37 +00:00
chris meyers
60ef160e85 flake8 fix 2019-03-20 16:12:45 -04:00
Alex Corey
be507dbefb Shows the button to add a JT to users with permissions to make JTs. 2019-03-20 14:55:28 -04:00
chris meyers
8c26f20188 add license files for python modules
* python-daemon
* ansible-runner
2019-03-20 14:51:41 -04:00
chris meyers
2c52a7d9a8 fix more unit tests for runner
* isolated will be fixed in the future, so pytest skips those
* fact cache moved one directory level up, account for that
2019-03-20 14:32:52 -04:00
chris meyers
b006510035 do not save sensitive env vars
* job_env gets exposed via the API. Sensitive env variables should be
redacted before being saved into job_env.
2019-03-20 14:00:22 -04:00
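A minimal sketch of the redaction described above; the placeholder string and the key list are assumptions, not AWX's exact values:

```python
SENSITIVE_KEYS = {'AWS_SECRET_ACCESS_KEY', 'ANSIBLE_NET_PASSWORD'}  # illustrative

def redacted(env):
    """Return a copy of env that is safe to persist and expose via the API."""
    return {k: ('HIDDEN' if k in SENSITIVE_KEYS else v) for k, v in env.items()}

print(redacted({'PATH': '/usr/bin', 'AWS_SECRET_ACCESS_KEY': 'hunter2'}))
# {'PATH': '/usr/bin', 'AWS_SECRET_ACCESS_KEY': 'HIDDEN'}
```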
Keith Grant
8e48a3a523 interpret empty codemirror json content as empty object 2019-03-20 13:51:10 -04:00
Daniel Sami
b26c8f6b62 e2e adjustments for fixtures and WS
lint adjustment
2019-03-20 12:55:37 -04:00
softwarefactory-project-zuul[bot]
68d7532d01 Merge pull request #3452 from wenottingham/where-did-you-come-from
Add originating address for the failed login message

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-20 16:18:42 +00:00
Keith Grant
e9cf1475ca de-lint 2019-03-20 11:45:51 -04:00
Bill Nottingham
1b3ae50076 Add originating address for the failed login message 2019-03-20 11:34:35 -04:00
John Mitchell
7791c5f5ba hide groups that host is already associated with on relevant lists 2019-03-20 11:34:05 -04:00
Marliana Lara
19abd24c91 Add sort to application tokens list 2019-03-20 11:30:56 -04:00
softwarefactory-project-zuul[bot]
b2ae68850c Merge pull request #3449 from elyezer/users-e2e
users e2e

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-20 15:28:57 +00:00
chris meyers
1a6ae6e107 allow for runner setting parent_uuid
* Previously, parent_uuid was expected only on events generated for a
Job run. Now, there may be a parent_uuid for any job type. AWX does not
support parenting events for any job type other than Job.
2019-03-20 11:05:01 -04:00
Keith Grant
86c7fd3b5d codemirror: sync data when closing modal by clicking outside 2019-03-20 11:01:42 -04:00
Elyézer Rezende
46ad3fa7b1 users e2e 2019-03-20 09:48:25 -04:00
chris meyers
060585434a update tests 2019-03-20 09:44:38 -04:00
Keith Grant
2657779eda cleanup comments 2019-03-20 08:56:42 -04:00
Keith Grant
ac890b8cda don't show code-mirror tooltip icon if no tooltip provided 2019-03-20 08:36:53 -04:00
Keith Grant
b6d8f9c6f6 use code-mirror directive for host facts 2019-03-20 08:36:52 -04:00
Keith Grant
5583af2a58 form generator: add ng-disabled support to code mirror fields 2019-03-20 08:36:52 -04:00
Keith Grant
2ee6713050 centralize variable parsing logic in code mirror directive 2019-03-20 08:36:52 -04:00
Keith Grant
b28409c1c7 fix saving inventory variables 2019-03-20 08:36:52 -04:00
Keith Grant
ac5dec272b code-mirror: keep yaml/json in sync when opening modal 2019-03-20 08:36:52 -04:00
Keith Grant
33b19ebe1f code-mirror: keep yaml/json setting in sync with modal 2019-03-20 08:36:52 -04:00
Keith Grant
5d3e39beac add use code-mirror directive for host variables; fix multiple code-mirrors on page at once 2019-03-20 08:36:52 -04:00
mabashian
bed63b3690 Work on getting extra vars popout working on inv form 2019-03-20 08:36:52 -04:00
Keith Grant
43ef4183df trying to make codemirror editable 2019-03-20 08:36:52 -04:00
softwarefactory-project-zuul[bot]
74869494f9 Merge pull request #3441 from AlexSCorey/3258-workflowDisplay
makes the card for the workflow display taller

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-19 19:25:27 +00:00
Alex Corey
bd4337976e makes the card for the workflow display taller 2019-03-19 15:02:05 -04:00
John Mitchell
50079c0441 fix issue where pagination would lose role filter 2019-03-19 12:52:59 -04:00
John Mitchell
f3173dbe26 address pr and product feedback 2019-03-19 12:08:36 -04:00
beeankha
a1dd5a4e19 WIP WFJT Node Activity Stream Bug Fix 2019-03-19 11:29:19 -04:00
John Mitchell
52e86cf0c3 hide rows from lists where adding would be redundant 2019-03-19 10:30:31 -04:00
softwarefactory-project-zuul[bot]
3d9a47f0d9 Merge pull request #3424 from falencastro/devel
Makes daphne websocket_timeout infinite.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-19 12:08:50 +00:00
chris meyers
5135b8a969 fixup unit tests for tasks 2019-03-18 14:21:47 -04:00
chris meyers
8a04c22b2b point at another runner branch
* revert parent_uuid because it causes problems with an unexpected
parameter on event creation for some event types.
2019-03-18 14:21:47 -04:00
chris meyers
f7842cf283 refactor and fix unit tests
* fixup task TestGenericRun
* make runner callback functions accessible to testing
* reduce isinstance() usage in run() by using build_ pattern
* move process_isolation param building to build_ function so it can be
tested
2019-03-18 14:21:47 -04:00
chris meyers
827ad0fa75 remove safe_args and add status_handler
* safe_args no longer makes sense. We have moved extra_vars to a file
and thus do not pass sensitive content on the cmdline
2019-03-18 14:21:47 -04:00
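A sketch of the pattern the commit relies on: `ansible-playbook -e @<file>` is a real CLI form that reads extra vars from a file, so sensitive values never appear in the process table; the file handling below is illustrative.

```python
import json
import tempfile

extra_vars = {'admin_password': 'secret'}  # illustrative sensitive content

with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump(extra_vars, f)

# -e @<path> makes ansible read vars from the file instead of the cmdline
cmdline = ['ansible-playbook', 'site.yml', '-e', '@%s' % f.name]
```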
Ryan Petrello
602ef9750f update isolated task execution for ansible-runner 2019-03-18 14:21:47 -04:00
chris meyers
8fb65b40de use ansible runner to run playbooks
* Project Updates
* Jobs
* Inventory Updates
* System Jobs
* AdHoc Commands

* Notifications
* Fact Cache
* proot
2019-03-18 14:21:47 -04:00
chris meyers
a7cda95803 init ansible-runner requirements 2019-03-18 14:21:47 -04:00
Marliana Lara
bb33ed6415 Add sort and pagination to instances list 2019-03-18 13:36:58 -04:00
softwarefactory-project-zuul[bot]
358ad05e51 Merge pull request #3439 from beeankha/notification_doc
Make edit to Notification doc

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-18 15:54:25 +00:00
beeankha
74c84bd7df Update email section of Notification doc 2019-03-18 11:28:57 -04:00
softwarefactory-project-zuul[bot]
62e1d5fdd2 Merge pull request #3435 from AlexSCorey/3421-fixInventoryGroupColumnV2
Fix inventory column width issues to make their contents readable

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-18 14:26:17 +00:00
Alex Corey
b8beb1c64e Fix inventory column width issues to make their contents readable 2019-03-18 09:57:42 -04:00
Shane McDonald
14d86ef5d3 Merge pull request #3426 from sugitk/patch-1
propose the change translation in Japanese
2019-03-18 09:36:42 -04:00
softwarefactory-project-zuul[bot]
9ab7752d32 Merge pull request #3416 from Spredzy/urllib_parse
plugins/tower.py: Use urllib.parse rather than urlparse

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-18 13:16:29 +00:00
Yanis Guenane
3a4f56bb2b plugins/tower.py: Use urllib.parse rather than urlparse
urlparse does not exist in Python 3; it has been replaced by urllib.parse

Signed-off-by: Yanis Guenane <yguenane@redhat.com>
2019-03-18 09:43:24 +01:00
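The import change the commit describes, shown here alongside the common compatibility pattern for code that still had to run on Python 2:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

print(urlparse('https://tower.example.com/api/v2/').netloc)
```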
softwarefactory-project-zuul[bot]
8f1c20423b Merge pull request #3222 from beeankha/timeout_config
Add Timeout Feature to Email Notification Template

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-16 12:46:17 +00:00
softwarefactory-project-zuul[bot]
6fd5f9c6d8 Merge pull request #3425 from ryanpetrello/dispatcher-stop-log
send callback receiver log lines to the correct logger

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-15 15:20:50 +00:00
softwarefactory-project-zuul[bot]
6b187946fb Merge pull request #3370 from vismay-golwala/scroll_top
UI - scroll to top in pagination

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-15 13:42:04 +00:00
softwarefactory-project-zuul[bot]
d54f633a7b Merge pull request #3401 from marshmalien/feat-toolbar-sort-applications-list
Add sort toolbar to applications list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-15 13:32:17 +00:00
softwarefactory-project-zuul[bot]
d0571c2cab Merge pull request #3408 from marshmalien/feat-toolbar-sort-instance-groups-list
Add sort toolbar to instance groups list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-15 13:21:08 +00:00
Ryan Petrello
32ee9838af use the correct logger for the callback receiver
the callback receiver and dispatcher share several modules, so add logic
to use the correct logger
2019-03-15 08:09:47 -04:00
softwarefactory-project-zuul[bot]
c41068edc4 Merge pull request #3420 from baxeno/https_download_postgresql_rpm
docker: yum: use https for postgresql pgdg rpm download

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-15 12:04:21 +00:00
Takashi Sugimura
f2548c5e66 #3415 propose the change translation in Japanese
regarding https://github.com/ansible/awx/issues/3415
2019-03-15 08:27:41 +09:00
beeankha
66a52655df Change email notification success/fail messages and add a timeout feature 2019-03-14 16:37:22 -04:00
Vismay Golwala
32dbe3f86a UI - scroll to top in pagination
Currently in pagination, when switching from one page to another,
the view stays at the bottom, and it is slightly inconvenient to
scroll all the way back to the top of a potentially large list of
records. To prevent that, this commit automatically scrolls to the
top when switching between pages, using jQuery's animate and
scrollTop methods.

Signed-off-by: Vismay Golwala <vgolwala@redhat.com>
2019-03-14 16:35:30 -04:00
softwarefactory-project-zuul[bot]
c6ae7d84a2 Merge pull request #3417 from AlexSCorey/3395-deleteLongGroupName
Fixes issue with inventory group names that are extremely long.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-14 20:17:49 +00:00
Felipe Alencastro
7d384262e4 Makes daphne websocket_timeout infinite.
Daphne has a default timeout of 86400 seconds, so one day after the
awx_web container starts, stdout stops refreshing automatically in the web UI.
This fixes the issue by making the timeout infinite, so the connection
between nginx and daphne's websocket never closes.
2019-03-14 17:17:09 -03:00
softwarefactory-project-zuul[bot]
64debd7230 Merge pull request #3423 from kdelee/update_git_repo_for_tests
Update reference to test playbooks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-14 20:13:12 +00:00
Elijah DeLee
d39cfd1778 Update reference to test playbooks 2019-03-14 14:33:16 -04:00
Bruno Thomsen
2e0edcbabd docker: yum: use https for postgresql rpm download.
Signed-off-by: Bruno Thomsen <bruno.thomsen@gmail.com>
2019-03-14 17:14:17 +01:00
Alex Corey
2650cbfc87 Fixes issue with inventory group names that are extremely long. 2019-03-14 11:16:08 -04:00
softwarefactory-project-zuul[bot]
df72a01f27 Merge pull request #3412 from AlanCoding/put_down_the_dispatcher
Run computed fields once for bulk delete requests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-14 00:05:09 +00:00
AlanCoding
7cf2bc2410 Run computed fields once for bulk delete requests 2019-03-13 15:37:01 -04:00
softwarefactory-project-zuul[bot]
a63a204a21 Merge pull request #3409 from ansible/thedoubl3j-patch-1
typo in inventory

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-13 18:43:59 +00:00
softwarefactory-project-zuul[bot]
928de6127b Merge pull request #3411 from AlexSCorey/3337-brokenDockLink
fixes broken documentation link

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-13 14:47:48 +00:00
Alex Corey
85eca47a93 fixes broken documentation link 2019-03-12 16:37:15 -04:00
softwarefactory-project-zuul[bot]
99c8c4bf2b Merge pull request #3410 from ryanpetrello/fix-swagger-doc-builds
fix a failing test in the unit-test target used to generate swagger docs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-12 19:23:40 +00:00
Ryan Petrello
50d8eb30e1 fix a failing test in the unit-test target used to generate swagger docs 2019-03-12 14:59:24 -04:00
softwarefactory-project-zuul[bot]
b1d9b14ab1 Merge pull request #3403 from shanemcd/ootpa
Working out some python3 kinks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-12 16:00:45 +00:00
Marliana Lara
56b3d6c79b Add sort toolbar to instance groups list 2019-03-12 11:56:26 -04:00
Jake Jackson
9e528ea898 typo in inventory
simple typo fix `this` -> `these`
2019-03-12 11:48:41 -04:00
Shane McDonald
e09684462c Working out some python3 kinks 2019-03-12 11:27:50 -04:00
softwarefactory-project-zuul[bot]
22c4b28917 Merge pull request #3404 from ryanpetrello/fix-missing-swagger-endpoints
generate Swagger schemas as if view permissions didn't matter

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-12 15:09:52 +00:00
softwarefactory-project-zuul[bot]
9d0a8d2047 Merge pull request #3377 from mabashian/root-all-groups-toggle
Adds toggle for all/root groups to inventory groups view

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-12 14:09:45 +00:00
Ryan Petrello
023fbc931d generate Swagger schemas as if view permissions didn't matter
this fixes several scenarios where certain POST endpoints don't show up
in our generated Swagger doc; /api/swagger is only
discoverable/accessible in the development environment where we generate
the schema
2019-03-11 21:26:32 -04:00
softwarefactory-project-zuul[bot]
c3ae700888 Merge pull request #3402 from AlexSCorey/3238-addFirstNameLastNameonUsersTeamList
add first name and last name to the headers row of the teams users list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-11 20:25:36 +00:00
Marliana Lara
ee6445d620 Update dataset from event listener instead of queryset search method 2019-03-11 15:08:53 -04:00
Alex Corey
8fb7cb6e82 add first name and last name to the headers row of the teams users list 2019-03-11 14:16:26 -04:00
Marliana Lara
b55212368b Add sort toolbar to applications list 2019-03-11 13:44:21 -04:00
softwarefactory-project-zuul[bot]
649d854225 Merge pull request #3394 from jbradberry/update-become-methods-list
Add the ksu, machinectl, and sesu methods to the builtin list of become methods

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-11 15:25:31 +00:00
softwarefactory-project-zuul[bot]
fb1d918c2d Merge pull request #3391 from Spredzy/add_etc_ssh_in_ro_bind
bwrap: Add /etc/ssh in bind mounted folder

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-08 17:46:17 +00:00
Jeff Bradberry
e8d93c99a6 Add the ksu, machinectl, and sesu methods to the builtin list of become methods 2019-03-08 11:14:18 -05:00
softwarefactory-project-zuul[bot]
b54ec6b9c8 Merge pull request #3389 from keithjgrant/launch-tooltips
add tooltips for fields prompted on launch

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-08 16:12:59 +00:00
Keith Grant
3acb474b19 de-dupe element ids 2019-03-08 10:16:42 -05:00
softwarefactory-project-zuul[bot]
2e0d381f8f Merge pull request #3386 from jbradberry/org-limits-message
Update the error message when exceeding the organization hosts limit

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-08 15:09:04 +00:00
Yanis Guenane
7eb483d810 bwrap: Add /etc/ssh in bind mounted folder
/etc/ssh is currently not bound when running inside bwrap; this leads to
errors like "Bad owner or permissions on /etc/ssh/ssh_config.d/05-redhat.conf"
since the process cannot access this file.
2019-03-08 15:20:53 +01:00
softwarefactory-project-zuul[bot]
7b570b59c6 Merge pull request #3367 from marshmalien/feat-toolbar-sort-jobs-list
Add toolbar sort to Jobs lists

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-08 14:18:15 +00:00
Keith Grant
09f9204917 add tooltips for fields prompted on launch 2019-03-07 16:27:01 -05:00
Jeff Bradberry
2a8e6ecba1 Update the error message when exceeding the organization hosts limit 2019-03-07 14:13:54 -05:00
Marliana Lara
abb221d942 Stretch Layout container to full height 2019-03-07 13:44:15 -05:00
softwarefactory-project-zuul[bot]
c8fdf46dda Merge pull request #3360 from jlmitch5/addUnsavedWorkflowChanges
add unsaved workflow changes flow

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-07 18:37:19 +00:00
softwarefactory-project-zuul[bot]
a9226fc25f Merge pull request #3375 from shanemcd/devel
Fix dev environment when running as root on the host

Reviewed-by: Matthew Jones <mat@matburt.net>
             https://github.com/matburt
2019-03-07 17:20:32 +00:00
softwarefactory-project-zuul[bot]
67eba3cf5c Merge pull request #3381 from matburt/revert_xlib_deps
Revert xlib deps

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-07 16:03:33 +00:00
softwarefactory-project-zuul[bot]
20b8cdfb3d Merge pull request #3382 from jakemcdermott/devdocs
update docs for development environment

Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
             https://github.com/rooftopcellist
2019-03-07 15:58:07 +00:00
softwarefactory-project-zuul[bot]
1dcb7591c5 Merge pull request #3363 from mabashian/dashboard-dropdowns
Fix dashboard dropdowns after bootstrap upgrade

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-07 15:38:11 +00:00
Jake McDermott
bca9735534 update docs for development environment 2019-03-07 10:30:31 -05:00
softwarefactory-project-zuul[bot]
f95576764d Merge pull request #3373 from AlexSCorey/2960-addTemplateTitlePromptDiag
add template name to launch prompt modal.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-07 15:21:45 +00:00
softwarefactory-project-zuul[bot]
92a600aaa9 Merge pull request #3359 from mabashian/yaml-comments
Show yaml comments when possible on launch prompt

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-07 15:14:19 +00:00
Matthew Jones
5cf7cc21c8 Revert "Fix chrome can not be started with unit-tests due to missing shared libraries"
This reverts commit d558ffd699.
2019-03-07 10:07:08 -05:00
Matthew Jones
499fd7b2f1 Revert "Fix chrome can not be started with unit-tests due to missing shared libraries"
This reverts commit 3e5f328b52.
2019-03-07 10:06:30 -05:00
Alex Corey
0593ac197c add template name to launch prompt modal. 2019-03-07 09:42:59 -05:00
softwarefactory-project-zuul[bot]
4fc0d220cc Merge pull request #3376 from keithjgrant/fix-save-confirmation-modal
fix save success modal when adding new application

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-07 14:23:34 +00:00
softwarefactory-project-zuul[bot]
d309acfddb Merge pull request #3372 from beeankha/jt_strings2
Update fields.py to Display Correct Error Message When JT Credential Is Not an Integer

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-06 23:32:38 +00:00
beeankha
2196089216 Update fields.py to convert JT Credential input into integers 2019-03-06 18:05:53 -05:00
mabashian
b6bf68427a Remove unused variable 2019-03-06 17:40:33 -05:00
mabashian
502decf8fe Adds toggle for all/root groups to inventory groups view 2019-03-06 17:31:54 -05:00
Keith Grant
20347420ca fix save success modal when adding new application 2019-03-06 17:22:25 -05:00
Shane McDonald
b29a9cd86e Fix dev environment when running as root on the host
Without this, CURRENT_UID isn't actually passed in from the host, and wipes out /etc/passwd even when we’re actually running as root.

I tested this as a non-root user on Linux, and on Docker for Mac
2019-03-06 17:08:56 -05:00
Marliana Lara
2d15d13359 Refactor styles and add queryset to updateDataset event emitters 2019-03-06 15:09:46 -05:00
Marliana Lara
31f5d13a69 Initialize paginate queryset with an empty object 2019-03-06 12:19:55 -05:00
Marliana Lara
67753b790c Move toolbar default variable above the toolbar setting function 2019-03-06 10:48:14 -05:00
Marliana Lara
661a54d356 Add toolbar sort to Jobs list 2019-03-05 15:09:12 -05:00
Marliana Lara
970a714291 Update URL params when sorting on projects list 2019-03-05 15:08:24 -05:00
John Mitchell
dc206c9ad6 fix wf unsaved changes exit functionality 2019-03-05 14:20:38 -05:00
softwarefactory-project-zuul[bot]
658bdddac3 Merge pull request #3354 from ansible/404_test
404 validation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-05 17:19:52 +00:00
John Mitchell
9f5f86c6a7 remove unused service and fix lint issue 2019-03-05 12:00:12 -05:00
mabashian
5e37882267 Remove errant console 2019-03-05 11:29:18 -05:00
mabashian
0fc0106cc7 Fix dashboard dropdowns after bootstrap upgrade 2019-03-05 11:25:05 -05:00
Daniel Sami
bcdb590a29 404 tests for varied resources 2019-03-05 10:55:39 -05:00
mabashian
6b46c7db8f Fixes linting errors 2019-03-05 08:50:43 -05:00
John Mitchell
8c5bcffd42 add unsaved workflow changes flow 2019-03-04 16:58:10 -05:00
softwarefactory-project-zuul[bot]
aad185e785 Merge pull request #3356 from marshmalien/feat-toolbar-sort-projects-list
Add toolbar sort to projects list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-04 21:31:44 +00:00
mabashian
2ff22bd681 Show yaml comments when possible on launch prompt 2019-03-04 16:18:23 -05:00
softwarefactory-project-zuul[bot]
2934fabd98 Merge pull request #3357 from mabashian/jobs-list-socket-refresh
Fixes bug where jobs list search was lost on socket message

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-04 20:52:29 +00:00
mabashian
2359231bda Remove console.logs 2019-03-04 15:14:27 -05:00
mabashian
ca5f27aa9e Removes vm.querySet from jobs list and ensures that state params are updated when search is performed to prevent context loss on data refresh. 2019-03-04 15:11:54 -05:00
Marliana Lara
80adcaab81 Set sort dropdown to state param order_by value 2019-03-04 14:40:15 -05:00
Marliana Lara
8100fc1cfb Add toolbar sort configuration to project list 2019-03-04 12:32:25 -05:00
Daniel Sami
006797014c fix url 2019-03-04 11:58:32 -05:00
Daniel Sami
38934dc8d0 lint fixes 2019-03-04 11:56:48 -05:00
Daniel Sami
0ec6d652f7 check for 404s 2019-03-04 11:53:35 -05:00
Marliana Lara
1525c6d97e Extend Toolbar directive to include sort dropdown 2019-03-04 11:25:58 -05:00
softwarefactory-project-zuul[bot]
bf1769af6c Merge pull request #3322 from mopahle/docker_install_ssl_default
Add SSL support for docker install

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-01 19:54:05 +00:00
softwarefactory-project-zuul[bot]
6384e638f5 Merge pull request #3340 from AlexSCorey/3297-smartInventoryLabel
Add padding to label list tag.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-03-01 16:20:09 +00:00
Alex Corey
1df5e55a4e add padding to label list tag. 2019-03-01 10:56:08 -05:00
softwarefactory-project-zuul[bot]
d1005f91e7 Merge pull request #3174 from jbradberry/org_hosts_limit
[WIP] Org hosts limit

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-28 21:27:50 +00:00
softwarefactory-project-zuul[bot]
d9451ac12c Merge pull request #3335 from AlexSCorey/3309-mgmtJobTitleStyling
Add margin to Management Jobs Notifications list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-28 21:12:59 +00:00
mabashian
a82304765d Adds help text to host limit error shown in the output details of inventory sync jobs 2019-02-28 15:54:10 -05:00
mabashian
cc3f2e0819 Properly pass path to error message 2019-02-28 15:54:10 -05:00
mabashian
03c07c0843 Adds error handling to launch api calls 2019-02-28 15:54:10 -05:00
mabashian
1b94b616f0 Default empty max hosts to 0 2019-02-28 15:54:09 -05:00
mabashian
98c5cb1c4c Adds max value to host limit input 2019-02-28 15:54:09 -05:00
mabashian
ce5a85a53b Add support for max hosts on org 2019-02-28 15:54:09 -05:00
Jeff Bradberry
046385d72e Make the inventory_import command log an error message on exception
when it happens in the big try/except block in the middle of handle().
Previously it wasn't doing anything with it, except exiting with a
code of 1.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
7eba55fbde Change the wording of the error when adding a host
to "Organization host limit of %s would be exceeded...", since the
host will probably not actually be made active.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
6ac51b7b13 Update the permission error to include max_hosts and the current host count 2019-02-28 15:54:09 -05:00
Jeff Bradberry
97cc467ae1 Restrict edit permissions on the Organization.max_hosts field to superusers 2019-02-28 15:54:09 -05:00
Jeff Bradberry
3312ebcb05 Remove the hosts count from related_field_counts in the org api endpoints
It is probably not needed, and adds an additional db query.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
4d06ae48d3 Deal with the (erroneous) case where a job is missing the inventory
by bailing out of check_org_host_limit early.  Validation catches this
situation later on.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
cf75ea91a1 Properly use the inventory in the can_start permissions checks 2019-02-28 15:54:09 -05:00
Jeff Bradberry
5e34f6582b Tests for start permissions for JobTemplate and WorkflowJob
when max_hosts is set.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
60008dbd74 Add a test that AWX doesn't care about max_hosts when editing hosts
on an inventory.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
4afd0672a1 Add test that AWX doesn't care about the limit when creating new hosts 2019-02-28 15:54:09 -05:00
Jeff Bradberry
875a1c0b5f Remove the mention of the max_hosts value from the limit check messages 2019-02-28 15:54:09 -05:00
Jeff Bradberry
f5d7ca6913 Update the Organization API list and detail tests to check the host counts 2019-02-28 15:54:09 -05:00
Jeff Bradberry
df8a66e504 Correct the org limit check for changing hosts to use the host's org
instead of an inventory passed in from the user data, which is not allowed.
2019-02-28 15:54:09 -05:00
Jeff Bradberry
43a0a15f6f Expose the new InventoryUpdate.org_host_limit_error field in the API 2019-02-28 15:54:09 -05:00
Jeff Bradberry
36ed890c14 Add permissions checks for the organization host limit 2019-02-28 15:54:09 -05:00
Jeff Bradberry
0e8e5f65e1 Update to fix tests 2019-02-28 15:54:09 -05:00
Jeff Bradberry
6399ec59c9 Include in the API the count of hosts used by an organization 2019-02-28 15:54:09 -05:00
Jeff Bradberry
e44c73883e Expose Organization.max_hosts in the API 2019-02-28 15:54:09 -05:00
Jeff Bradberry
c3406748de Add the new fields to the database 2019-02-28 15:54:03 -05:00
Jeff Bradberry
6d70651611 Update the inventory_import management command
to respect the new Organization.max_hosts limit.
2019-02-28 15:51:50 -05:00
Jeff Bradberry
5e13da62a4 Added a max_hosts field to Organization
in order to optionally set a limit to the number of hosts an
organization that is sharing a license is allowed to manage.

related #1542
2019-02-28 15:51:50 -05:00
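An illustrative Django model sketch of the field this commit adds; the exact field options beyond "optional limit" are assumptions:

```python
from django.db import models

class Organization(models.Model):
    # assumption: 0 means "no limit"
    max_hosts = models.PositiveIntegerField(
        default=0,
        help_text='Maximum number of hosts this organization is allowed to manage.',
    )
```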
softwarefactory-project-zuul[bot]
658e5f0fc8 Merge pull request #3272 from jladdjr/mo_stats
Add support for new playbook stats

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-28 18:24:03 +00:00
softwarefactory-project-zuul[bot]
cebd918e49 Merge pull request #3331 from mabashian/IG-instances-modal
Fixes bug where all instances were preselected when modal was initially opened

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-28 14:30:43 +00:00
Alex Corey
a8c4e92804 Add margin to Management Jobs Notifications list 2019-02-28 09:17:27 -05:00
softwarefactory-project-zuul[bot]
dea71d2682 Merge pull request #3333 from ryanpetrello/more-py3-bugs
fix a few additional py3 issues

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-28 13:31:05 +00:00
Markus Opahle
ed568f569c only use ssl if certificate is specified
Signed-off-by: Markus Opahle <3225748+mopahle@users.noreply.github.com>
2019-02-28 14:06:59 +01:00
Ryan Petrello
13c05c68fc fix a few additional py3 issues
related: https://github.com/ansible/awx/issues/3329
2019-02-27 16:56:36 -05:00
mabashian
2be7f853f3 Fixes bug where all instances were preselected when modal was initially opened 2019-02-27 15:24:29 -05:00
softwarefactory-project-zuul[bot]
3b5681465a Merge pull request #3308 from AlexSCorey/3283-BorderIssues
Removes top left border radius on lists

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-27 16:55:01 +00:00
softwarefactory-project-zuul[bot]
a6c7502217 Merge pull request #3294 from ansible/form-error-handling
Tests for form validation and error checks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-27 15:58:39 +00:00
softwarefactory-project-zuul[bot]
50a87843ee Merge pull request #3315 from mabashian/elapsed-timer
Show elapsed timer while job is running

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-27 15:54:20 +00:00
Daniel Sami
238b6cbb61 E2E tests for form validation 2019-02-27 10:33:09 -05:00
walkafwalka
3a7bf6a8ac Add SSL support for docker install
Signed-off-by: walkafwalka <41709139+walkafwalka@users.noreply.github.com>
2019-02-27 10:45:34 +01:00
mabashian
6e64aa81fd Always show elapsed timer regardless of running state 2019-02-26 11:38:46 -05:00
Alex Corey
445612315b Removes top left border radius on list item where the list has a tool bar
Removes redundant styling.
2019-02-26 09:52:07 -05:00
softwarefactory-project-zuul[bot]
bb276a8fcb Merge pull request #3260 from jlmitch5/launchJTWFForm
add launch button to jt and wf forms

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-25 17:52:01 +00:00
Jim Ladd
b7b0bdaeca Ansible 2.8 deprecates use of -U 2019-02-25 00:42:19 -08:00
Jim Ladd
cc1a97b6d8 Update JobHostSummary.__str__ and corresponding tests 2019-02-25 00:42:19 -08:00
Jim Ladd
c6227797b4 Make new host summary fields backwards compatible 2019-02-22 14:07:07 -08:00
Chris Meyers
b383144b69 Merge pull request #3293 from chrismeyersfsu/fix-serial
fix handling of serial strategy
2019-02-22 13:30:24 -05:00
softwarefactory-project-zuul[bot]
d1bc013da9 Merge pull request #3292 from kialam/fix-dummy-data-generator-script-errors
Fix various errors when trying to run `make bulk_data`.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-22 18:30:06 +00:00
chris meyers
723f581fd0 fix handling of serial strategy
* v2_playbook_on_play_start is called multiple times for the same UUID.
Specifically, once for each host in the play. This change makes the
uuid unique before going to the dispatcher.
2019-02-22 13:14:35 -05:00
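A hypothetical sketch of the de-duplication just described: the first time a play UUID is seen it passes through unchanged, and repeats from later serial batches get a unique suffix before reaching the dispatcher.

```python
import uuid

seen_play_uuids = set()

def unique_play_uuid(play_uuid):
    if play_uuid in seen_play_uuids:
        # same play, next serial batch: suffix it so events stay distinct
        return '%s_%s' % (play_uuid, uuid.uuid4().hex[:8])
    seen_play_uuids.add(play_uuid)
    return play_uuid
```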
Kia Lam
d2c345a374 Fix API lint error. 2019-02-22 13:06:20 -05:00
softwarefactory-project-zuul[bot]
22677029e1 Merge pull request #3282 from AlexSCorey/2261-LimitHelper
Add helper text to the limit field prompt modal

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-22 17:55:08 +00:00
Kia Lam
0943f989ce Fix various errors when trying to run make bulk_data.
- Properly import PrimordialModel.
- Use floor division operator for range() method to avoid float vs int errors.
2019-02-22 11:59:18 -05:00
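The float-vs-int error the second bullet fixes, in miniature: Python 3's `/` always produces a float, which `range()` rejects, while `//` keeps an int.

```python
n = 25
# range(n / 2) raises TypeError: 'float' object cannot be interpreted as an integer
rows = range(n // 2)  # floor division yields an int, so this works
```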
softwarefactory-project-zuul[bot]
c1698fff8e Merge pull request #3264 from kialam/status-service-on-success
Call `this.sync` when we receive a ws complete message.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-21 21:16:33 +00:00
John Mitchell
200028b269 updated initial tooltip setting 2019-02-21 16:07:27 -05:00
Alex Corey
0cae612159 Add helper text to the limit field prompt modal 2019-02-21 16:06:18 -05:00
softwarefactory-project-zuul[bot]
5584f6a98b Merge pull request #3279 from mabashian/prompt-extra-var-wf
Add prompt for extra vars to wfjt

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-21 18:36:11 +00:00
softwarefactory-project-zuul[bot]
e5acf93c66 Merge pull request #3265 from mabashian/socket-time-band
Prevent jobs/templates lists from refreshing list data more than once every 5 seconds

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-21 18:06:41 +00:00
mabashian
01b4b47087 Fixes unit test failure after adding ui support for extra var prompting on workflows 2019-02-21 12:55:30 -05:00
John Mitchell
14e9923037 remove select2count that wasn't being used 2019-02-21 12:19:45 -05:00
John Mitchell
7e47a924c5 break out credential watch statement in order to disable launch when creds are removed 2019-02-21 12:18:34 -05:00
John Mitchell
635aa9fd56 promise-ify createselect2 and use that instead for deferring launch button enabling 2019-02-21 12:18:34 -05:00
John Mitchell
2e4eb1885f add launch button to jt and wf forms 2019-02-21 12:18:33 -05:00
mabashian
0ce70c08bd Add prompt for extra vars to wfjt 2019-02-21 11:37:53 -05:00
mabashian
e3c9a42741 Hook up status update sockets on completed jobs routes and project templates 2019-02-21 11:24:37 -05:00
mabashian
d5d45e644d Prevent jobs/templates lists from refreshing list data more than once every 5 seconds 2019-02-21 11:24:37 -05:00
softwarefactory-project-zuul[bot]
09684e2c41 Merge pull request #3269 from ryanpetrello/dep-update
remove redbaron and update minor dependencies

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-21 15:56:02 +00:00
Ryan Petrello
04622d5786 remove redbaron and update dependencies 2019-02-21 10:08:24 -05:00
Jim Ladd
8c9544e5ed Add support for new ansible stats 2019-02-20 17:13:29 -08:00
softwarefactory-project-zuul[bot]
ca043d9bfd Merge pull request #3275 from AlanCoding/i_love_caches
Clear the test cache at end of every test

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-20 20:19:16 +00:00
AlanCoding
711937b104 fix some patches that were never unapplied 2019-02-20 14:40:25 -05:00
softwarefactory-project-zuul[bot]
a9a2c1fa7b Merge pull request #3245 from wenottingham/we-meet-again-for-the-last-login-time
[WIP] Show last login along with the badges on the user view screen.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-20 19:38:07 +00:00
kialam
ec8a452f1d Call this.sync when we receive a ws complete message.
- There can be a race condition where a job run finishes event processing before sending off a "successful/failed" websocket message. In that case, we would miss gathering known artifacts for the job. The proposed fix is to detect when a job status updates to "successful/failed/unknown" and perform a GET request to the job endpoint.
2019-02-20 14:11:13 -05:00
AlanCoding
07def62373 clear the test cache at end of every test 2019-02-20 14:02:36 -05:00
softwarefactory-project-zuul[bot]
30352e375f Merge pull request #3271 from kialam/compact-expand-lists-responsive
Fix Compact/Expand Lists responsive issues

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-20 16:59:40 +00:00
softwarefactory-project-zuul[bot]
6abe9d5c0f Merge pull request #3270 from saito-hideki/issue/tower_3344
Add action to output login failure event to logfile

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2019-02-20 16:01:31 +00:00
softwarefactory-project-zuul[bot]
9b04e93765 Merge pull request #3262 from ryanpetrello/openstack_cred_verify_ssl
add verify_ssl to OpenStack credential type

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-20 15:35:10 +00:00
kialam
728fb1aaef Fix hidden relaunch dropdown in new lists. 2019-02-20 10:05:21 -05:00
kialam
2596ad26b9 Make compact rows responsive. 2019-02-20 09:35:22 -05:00
Hideki Saito
ef3b1ee195 Add action to output login failure to logger
Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-02-20 14:27:44 +00:00
Ryan Petrello
b1a33869dc convey OpenStack verify_ssl defaults in the CredentialType schema 2019-02-20 09:02:48 -05:00
Hideki Saito
9f04fbe4a4 Add verify_ssl to OpenStack credential type
To avoid verification failures when using a self-signed certificate file,
a "Verify SSL" checkbox was added to the OpenStack credential type edit page.

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-02-19 12:53:13 -05:00
softwarefactory-project-zuul[bot]
1ece764547 Merge pull request #3257 from ryanpetrello/native_credential_types
define native CredentialType inputs/injectors in code, not in the DB

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-19 17:27:19 +00:00
softwarefactory-project-zuul[bot]
2358e306c1 Merge pull request #3261 from elyezer/resize-window
Resize window to avoid breaking the UI

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-19 17:21:12 +00:00
Elyézer Rezende
82b9f8ebb0 Resize window to avoid breaking the UI
Update test-jobs-portal-list-actions to resize the browser window and
avoid a flaky test caused by broken layout.
2019-02-19 13:53:12 -03:00
Ryan Petrello
43ca4526b1 define native CredentialType inputs/injectors in code, not in the DB
This has a few benefits:

1.  It makes adding new fields to built-in CredentialTypes _much_
    simpler.  In the past, we've had to write a migration every time we
    want to modify an existing type (changing a label/help text,
    changing options like the recent become_method changes) or
    when adding a new field entirely

2.  It paves the way for third party credential plugins support, where
    importable libraries will define their own source code-based schema
2019-02-19 10:22:26 -05:00
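An illustrative sketch of what a source-code-defined credential type schema might look like; the dict shape is an assumption modeled on the inputs/injectors vocabulary in the message, not AWX's exact structure:

```python
OPENSTACK = {
    'kind': 'cloud',
    'name': 'OpenStack',
    'inputs': {
        'fields': [
            {'id': 'username', 'label': 'Username', 'type': 'string'},
            {'id': 'password', 'label': 'Password', 'type': 'string', 'secret': True},
            {'id': 'verify_ssl', 'label': 'Verify SSL', 'type': 'boolean', 'default': True},
        ],
        'required': ['username', 'password'],
    },
}
```

Defined this way, adding a field is a code change plus release rather than a database migration, which is the first benefit the commit message lists.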
softwarefactory-project-zuul[bot]
4174fc22b0 Merge pull request #3254 from vismay-golwala/job_host_count
Fixed # of hosts - job details UI

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-18 21:16:32 +00:00
Vismay Golwala
16e135249c Fixed # of hosts - job details UI
In the UI job details page, the number of hosts for any job was always displayed as '1'
irrespective of the actual count. This was caused by a faulty initialization of a
variable followed by unreachable code. It has been fixed by updating the initial value.

Signed-off-by: Vismay Golwala <vgolwala@redhat.com>
2019-02-18 13:57:58 -05:00
softwarefactory-project-zuul[bot]
889dae357b Merge pull request #3235 from ryanpetrello/sql-profiling
add a custom DB backend that provides system-level SQL profiling

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-15 21:56:28 +00:00
softwarefactory-project-zuul[bot]
0063668582 Merge pull request #3247 from ryanpetrello/fix-serial-display-error
fix a bug in the display of playbooks using serial or free strategy

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-15 21:51:54 +00:00
Ryan Petrello
1e4cd9ea8f document the SQL profiler 2019-02-15 16:34:34 -05:00
Ryan Petrello
954ccccbc5 fix a bug in the display of playbooks using serial or free strategy 2019-02-15 16:13:12 -05:00
Ryan Petrello
6f43875e80 record profile data in /var/log/tower, not /var/lib/awx 2019-02-15 14:34:55 -05:00
Christian Adams
80cccab919 Merge pull request #3244 from e-tienne/fix_wording_wf
Fix ambiguous workflow visualizer 5000 wording
2019-02-15 11:23:58 -05:00
softwarefactory-project-zuul[bot]
088673ceb0 Merge pull request #3246 from Klaas-/Klaas--patch-2
Avoid pg password ending up in syslog/shell output

Reviewed-by: awxbot
             https://github.com/awxbot
2019-02-15 15:48:39 +00:00
Vismay Golwala
4f13255f35 Allow empty default values for numerical survey answers.
Signed-off-by: Vismay Golwala <vgolwala@redhat.com>
2019-02-15 10:41:19 -05:00
Klaas Demter
8f36e21c97 Avoid pg password ending up in syslog/shell output
Currently, if an error occurs, the pgpassword would be exposed to syslog / shell output during the backup.yml playbook
2019-02-15 16:15:33 +01:00
Bill Nottingham
6a11281355 Show last login along with the badges on the user view screen. 2019-02-14 20:25:50 -05:00
Etienne Simard
df4e4f80ad Fix ambiguous workflow visualizer wording
Resolves: #2998

Signed-off-by: Etienne Simard <etienne@redhat.com>
2019-02-14 18:01:23 -05:00
softwarefactory-project-zuul[bot]
5682fb1503 Merge pull request #3243 from mabashian/more-strings-to-translate
Translate job status in smart status tooltip.  Mark strings for translation in project form

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 22:46:25 +00:00
mabashian
640d2a2797 Fix linting error 2019-02-14 16:32:54 -05:00
softwarefactory-project-zuul[bot]
b173880766 Merge pull request #3240 from wenottingham/when-last-we-met
Add django last_login information to user object.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 21:32:09 +00:00
mabashian
30ce85b80a Translate job status in smart status tooltip. Mark strings for translation in project form 2019-02-14 16:20:33 -05:00
softwarefactory-project-zuul[bot]
003ec64413 Merge pull request #3241 from ryanpetrello/workflow-convergence-i18n
mark a workflow convergence error message for i18n

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 21:16:54 +00:00
softwarefactory-project-zuul[bot]
eda6d729d6 Merge pull request #3239 from wenottingham/did-this-ever-work
Remove `awx-manage user_info` command.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 21:12:50 +00:00
Ryan Petrello
4f83d44142 mark a workflow convergence error message for i18n 2019-02-14 15:55:20 -05:00
Bill Nottingham
7452eb2fa1 Remove broken user_info command.
This has not worked in a long time, and does not serve much purpose.
2019-02-14 15:34:24 -05:00
Bill Nottingham
8300f7f51b Add django last_login information to user object. 2019-02-14 15:17:37 -05:00
Ryan Petrello
eed94b641e add a custom DB backend that provides system-level SQL profiling
run this command on _any_ node in an awx cluster:

$ awx-manage profile_sql --threshold=2.0 --minutes=1

...and for 1 minute, the timing for _every_ SQL query in _every_ awx
Python process that uses the Django ORM will be measured

queries that run longer than (in this example) 2 seconds will be
written to a per-process sqlite database in /var/lib/awx/profile, and
the file will contain an EXPLAIN VERBOSE for the query and the full
Python stack that led to that SQL query's execution (this includes not
just WSGI requests, but background processes like the runworker and
dispatcher)

$ awx-manage profile_sql --threshold=0

...can be used to disable profiling again (if you don't want to wait for
the minute to expire)
2019-02-14 15:04:46 -05:00
Unknown
0138e92ddc update documentation to include Kubernetes initContainers
Update documentation to include Kubernetes initContainers in custom virtualenvs
2019-02-14 14:07:26 -05:00
softwarefactory-project-zuul[bot]
456ef49ee3 Merge pull request #3229 from mabashian/popover-quote-entity
Replace single quote with appropriate entity when generating new attribute

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 16:32:57 +00:00
softwarefactory-project-zuul[bot]
b91dee68ac Merge pull request #3221 from mabashian/workflow-results-inv-tooltip
Show the proper tooltip on workflow results inventory

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 16:28:34 +00:00
softwarefactory-project-zuul[bot]
781d36ef83 Merge pull request #3220 from mabashian/credential-modal-400
Prevent extra fetch of cred list as cred modal is closing

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 16:06:52 +00:00
softwarefactory-project-zuul[bot]
a1cef744a7 Merge pull request #3230 from impca/patch-1
Update compose configuration

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 15:45:41 +00:00
softwarefactory-project-zuul[bot]
ba5319f479 Merge pull request #3213 from mabashian/3158-related-groups-style
Fix styling on related groups labels after bootstrap upgrade

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 15:40:56 +00:00
softwarefactory-project-zuul[bot]
0dbf21a15c Merge pull request #3176 from digipok/issue-3010-ca-trust-awx_task
update-ca-trust: Ensure CA trust is updated in awx_task container

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 15:36:55 +00:00
softwarefactory-project-zuul[bot]
45d522829a Merge pull request #3190 from mabashian/i18n-strings
Mark various strings for translation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 15:26:50 +00:00
softwarefactory-project-zuul[bot]
8b1c358dc6 Merge pull request #3165 from mabashian/2630-become-plugins
Makes priv escalation method a dynamic select element

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 15:08:09 +00:00
softwarefactory-project-zuul[bot]
ebd9d3dc67 Merge pull request #3234 from wenottingham/the-only-good-code-is-a-dead-code
Delete some unused functions.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 14:10:31 +00:00
softwarefactory-project-zuul[bot]
80cf154fb7 Merge pull request #3233 from ryanpetrello/F405
remove usage of import * and enforce F405 in our linter

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-14 13:32:18 +00:00
impca
9add96a0d3 update docker compose installer
Only run commands to update certs when config changes.
2019-02-14 08:29:47 +01:00
mickfeech
ed2ad1e210 update documentation to include Kubernetes initContainers 2019-02-13 20:01:30 -05:00
softwarefactory-project-zuul[bot]
808ed74700 Merge pull request #3177 from wenottingham/i-am-going-to-need-a-diagram-for-this
Fix project updates to properly pull in role requirements.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 23:16:34 +00:00
Ryan Petrello
9bebf3217e remove usage of import * and enforce F405 in our linter
import * is a scourge upon the earth
2019-02-13 17:10:33 -05:00
softwarefactory-project-zuul[bot]
ae7d26fab0 Merge pull request #3186 from vismay-golwala/update_schedule_constraint
Update standalone schedule name uniqueness combining it with unified …

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 22:03:54 +00:00
Bill Nottingham
0f54d30f2c Remove some unused functions. 2019-02-13 16:29:59 -05:00
softwarefactory-project-zuul[bot]
631d3515f2 Merge pull request #3217 from mabashian/3164-cred-checkbox-alignment
Fixes cred form checkbox input styling

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 20:08:57 +00:00
softwarefactory-project-zuul[bot]
551218fd44 Merge pull request #3206 from AlanCoding/learn_to_share
Do not remove edges from other inventory sources

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 18:09:18 +00:00
Vismay Golwala
4af54517d2 Update standalone schedule name uniqueness by combining it with the unified job template.
Signed-off-by: Vismay Golwala <vgolwala@redhat.com>
2019-02-13 12:46:28 -05:00
mabashian
334f571ad3 Fixes bug where extra blank option was being added to select input 2019-02-13 11:48:43 -05:00
softwarefactory-project-zuul[bot]
295afa805c Merge pull request #3212 from AlanCoding/model_star_imports
Remove star imports in tasks and non-base models

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 15:05:01 +00:00
softwarefactory-project-zuul[bot]
ad4d286db5 Merge pull request #3225 from ryanpetrello/take-it-to-the-limit-one-more-time
remove field size limit on adhoc `limit`

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 14:50:56 +00:00
softwarefactory-project-zuul[bot]
eea97c8928 Merge pull request #3228 from jakemcdermott/dev-migrations-page
load migrations page in dev environment

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 14:44:53 +00:00
impca
c29275315e Update compose configuration
When running awx via docker-compose and using custom certificates (for LDAP auth or whatever else...), update-ca-trust has to be called after starting the container to actually use new certificates (just as it is called when using docker to run - https://github.com/ansible/awx/blob/devel/installer/roles/local_docker/tasks/standalone.yml#L119-L120 ).
2019-02-13 15:39:52 +01:00
mabashian
ef89195e6c Replace single quote with appropriate entity when generating new attribute 2019-02-13 09:32:46 -05:00
softwarefactory-project-zuul[bot]
06ff26752a Merge pull request #3227 from ryanpetrello/fix-migration-tran-view
fix `/migrations_notran`

Reviewed-by: awxbot
             https://github.com/awxbot
2019-02-13 14:25:29 +00:00
softwarefactory-project-zuul[bot]
58f5e1882e Merge pull request #3218 from kialam/fix-3081-copy-project-expand-collapse
Fix expanded view not persisting when a user copies a project.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-13 14:24:50 +00:00
Jake McDermott
2765367308 load migrations page in dev environment 2019-02-13 09:14:11 -05:00
Ryan Petrello
2a94611801 fix /migrations_notran 2019-02-13 08:59:20 -05:00
Ryan Petrello
e4eda3ef0d remove field size limit on adhoc limit
related: 4b669fb16d
2019-02-13 08:34:10 -05:00
AlanCoding
fbf6315a8c remove star imports in tasks and non-base models 2019-02-12 19:50:30 -05:00
softwarefactory-project-zuul[bot]
8a3c10686e Merge pull request #3191 from chrismeyersfsu/fix-job_event_smart_inv_slow_take_two_devel
do not observe queries when constructing them

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-12 22:27:11 +00:00
chris meyers
c121565209 add tests for host_filter
* Ensure that building the smart inventory query string doesn't invoke
the database.
2019-02-12 16:11:54 -05:00
mabashian
3c3e659042 Show the proper tooltip on workflow results inventory 2019-02-12 15:50:52 -05:00
mabashian
406cb07018 Prevent extra fetch of cred list as cred modal is closing 2019-02-12 15:37:45 -05:00
kialam
099a82fdf8 Fix expanded view not persisting when a user copies a project. 2019-02-12 15:24:17 -05:00
softwarefactory-project-zuul[bot]
52b88d839e Merge pull request #3216 from ryanpetrello/inv-gen-speedup
optimize a slow query in inventory script generation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-12 18:48:30 +00:00
Ryan Petrello
e245e50ee4 optimize a slow query in inventory script generation
if we don't preload this column, Django still needs it, and so it generates
one query per host (!!!) to get it.  For large (10k+ host) inventories,
this is incredibly slow.

see: https://github.com/ansible/awx/issues/3214
2019-02-12 12:55:38 -05:00
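
The N+1 pattern this commit removes is easy to reproduce with the Django ORM; a minimal before/after sketch, assuming an illustrative Host model rather than AWX's actual queryset:

    # Before: the column is deferred, so Django lazily fetches it,
    # issuing one extra SELECT per host the moment it is accessed.
    for host in Host.objects.defer('variables'):
        data = host.variables        # triggers a query per host

    # After: include the column in the initial SELECT so iterating
    # a 10k-host inventory costs a single query.
    for host in Host.objects.all():
        data = host.variables        # already loaded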
mabashian
a52c0415d9 Copied old bootstrap styles to display cred form checkboxes as they were before bootstrap upgrade 2019-02-12 11:48:48 -05:00
mabashian
98c7df3399 Fix styling on related groups labels after bootstrap upgrade 2019-02-12 11:18:28 -05:00
softwarefactory-project-zuul[bot]
570283fba2 Merge pull request #3207 from mabashian/pull-strings-ejs
Extract translations from ejs files

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-11 18:29:55 +00:00
AlanCoding
1bf2a455c6 Do not remove edges from other inventory sources 2019-02-11 13:08:19 -05:00
mabashian
cb222aaa40 Extract translations from ejs files 2019-02-11 12:59:47 -05:00
softwarefactory-project-zuul[bot]
6c9fc4a592 Merge pull request #3198 from mabashian/workflow-link-tooltip-overflow
Dynamically place link hover tooltip based on its size

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-11 17:01:08 +00:00
softwarefactory-project-zuul[bot]
53a6341320 Merge pull request #3148 from elyezer/e2e-screenshots
Add settings to control screenshot capturing

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-11 16:47:06 +00:00
mabashian
c0f9ee5e6e Fixes linting errors 2019-02-11 11:38:28 -05:00
mabashian
3321f3e34d Fixes bug where tooltip was clipped when graph is zoomed out. Fixes linting error 2019-02-11 11:30:20 -05:00
Elyézer Rezende
953b6679ef Add settings to control screenshot capturing 2019-02-11 14:05:11 -02:00
mabashian
d285261697 Fixes split job unit test confirming string match 2019-02-11 10:28:35 -05:00
softwarefactory-project-zuul[bot]
d84b58c857 Merge pull request #3196 from kialam/detect-empty-artifacts
Job Artifacts: check for empty object rather than string value.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-11 15:21:07 +00:00
softwarefactory-project-zuul[bot]
4d3cacf87f Merge pull request #3185 from mabashian/edit-node-arrow
Use arrow instead of 'to' in workflow edit link title

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-11 14:51:05 +00:00
softwarefactory-project-zuul[bot]
b9b2affe44 Merge pull request #3202 from Spredzy/fix_3200
awx.main.tasks: Remove reference to unimported six

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-11 14:42:08 +00:00
Yanis Guenane
f61b6f9615 awx.main.tasks: Remove reference to unimported six
d4c3c08 reintroduced the use of six, which had been removed by daeeaf4.
This led to "NameError: name 'six' is not defined". This commit fixes
the issue.

Signed-off-by: Yanis Guenane <yguenane@redhat.com>
2019-02-11 09:41:09 +01:00
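
The shape of the fix is the usual py3-only cleanup; a hedged sketch (not the literal AWX diff) of swapping a six helper for the builtin once `import six` is gone:

    def format_error(exc):
        # before (broken once the `import six` line was removed):
        #     return six.text_type(exc)
        # after: the py3-only builtin equivalent, no six required
        return str(exc)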
kialam
fcba02cd86 Linter fix. 2019-02-08 14:52:15 -07:00
mabashian
205dc93e65 Dynamically place link hover tooltip based on its size 2019-02-08 16:44:26 -05:00
kialam
28a29293c7 Check for empty object rather than string value. 2019-02-08 14:31:06 -07:00
softwarefactory-project-zuul[bot]
3b259de200 Merge pull request #3192 from chrismeyersfsu/awx-3.0.1
AWX 3.0.1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-08 19:38:37 +00:00
chris meyers
63e3e733e0 AWX 3.0.1 2019-02-08 14:17:33 -05:00
chris meyers
ed78978b5f do not observe queries when constructing them
* While parsing host_filter, the smart inventory code was
triggering SQL queries. This changeset avoids executing the query that is
being constructed.
2019-02-08 12:30:51 -05:00
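
Django querysets are lazy, which is what makes this fix possible: composing a filter does not hit the database, only evaluating it does. A sketch of the distinction (model names illustrative):

    qs = Host.objects.filter(name__icontains='web')  # no SQL executed yet
    qs = qs.filter(enabled=True)                     # still no SQL
    sql = str(qs.query)                              # renders SQL without running it
    hosts = list(qs)                                 # only this evaluates the query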
mabashian
6e1457607e Mark various strings for translation 2019-02-08 11:28:04 -05:00
softwarefactory-project-zuul[bot]
844b0f86b8 Merge pull request #3187 from chrismeyersfsu/fix-schedule_too_damn_fast_devel
fix scheduled jobs race condition

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-08 14:39:43 +00:00
chris meyers
d4c3c089df fix scheduled jobs race condition
* The periodic scheduler that runs and spawns jobs from Schedule()s can
end up spawning more jobs than intended for a single Schedule,
specifically when Tower clustering is involved. This change adds a
"global" database lock around this critical code. If another process is
already doing the scheduling, short-circuit.
2019-02-08 08:43:11 -05:00
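
One way to implement a "global" database lock that short-circuits across a cluster is a PostgreSQL advisory lock; a sketch of the pattern under that assumption (the lock key and helper names are illustrative, not the AWX implementation):

    from django.db import connection

    SCHEDULER_LOCK_KEY = 101  # arbitrary cluster-wide lock id

    def run_schedules_once():
        with connection.cursor() as cursor:
            cursor.execute('SELECT pg_try_advisory_lock(%s)', [SCHEDULER_LOCK_KEY])
            if not cursor.fetchone()[0]:
                return  # another node is already scheduling; short-circuit
            try:
                spawn_due_jobs()  # hypothetical: spawn jobs for due Schedules
            finally:
                cursor.execute('SELECT pg_advisory_unlock(%s)', [SCHEDULER_LOCK_KEY])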
softwarefactory-project-zuul[bot]
1328fb80a0 Merge pull request #3181 from mabashian/notif-related-tab
Show notification tab to notif admin users on jt/wf/project/inv source forms

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-07 21:39:33 +00:00
softwarefactory-project-zuul[bot]
1fbcd1b10b Merge pull request #3183 from mabashian/3018-wf-save
Cancel node form when user deletes node being edited

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-07 20:59:45 +00:00
softwarefactory-project-zuul[bot]
11b26c199b Merge pull request #3173 from mabashian/workflow-403
Adds proper error handling to workflow save related promises

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-07 19:25:40 +00:00
mabashian
b4d54895ff Use arrow instead of 'to' in workflow edit link title 2019-02-07 13:46:22 -05:00
softwarefactory-project-zuul[bot]
463c4c1f7e Merge pull request #3182 from kialam/artifacts-code-mirror
Artifacts Code Mirror

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-07 16:46:20 +00:00
kialam
76a16b329e Conditionally show the Artifacts field. 2019-02-07 08:59:44 -07:00
mabashian
123f646cea Cancel node form when user deletes node being edited 2019-02-07 10:47:21 -05:00
softwarefactory-project-zuul[bot]
d99c9c8dce Merge pull request #3179 from ryanpetrello/failed_to_change
don't update parent event changed|failed in bulk (it's expensive for large tables)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-07 15:16:22 +00:00
mabashian
4f3a8ef766 Show notification tab to notif admin users on jt/wf/project/inv source forms 2019-02-07 10:02:05 -05:00
kialam
c114243082 Add artifacts as a subscriber to job details status service.
- This will surface the artifacts field values if they become available to the UI from the API once a job has completed successfully.
2019-02-07 07:59:45 -07:00
Ryan Petrello
229e997e7e don't update parent event changed|failed in bulk (it's expensive) 2019-02-06 20:02:52 -05:00
kialam
dc7ec9dfe0 Adjust WF results to account for codemirror changes. 2019-02-06 12:11:15 -07:00
Bill Nottingham
5df384edd6 Fix project updates to properly pull in role requirements.
"check" runs check out the version that is saved in the database,
so for git repos, any subsequent "checkout" run on the same node
would always report that we have the proper version, and we would
not properly force a role update when requirements.yml changes.

Also, don't use `scm_result` for all SCMs, as the skipped tasks will
overwrite earlier `scm_result` variables.
2019-02-06 13:20:09 -05:00
kialam
07aae8cefc Add Artifacts CodeMirror field to job details. 2019-02-06 10:52:18 -07:00
softwarefactory-project-zuul[bot]
902fb83493 Merge pull request #3172 from bverschueren/fix_inventory_sync_virtualenv
use source_project custom_virtualenv if configured

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-06 16:55:49 +00:00
Mathieu Mallet
dce3795e0c update-ca-trust: Ensure CA trust is updated in awx_task container
Related #3010

Both awx_web and awx_task containers can have a volume mounted in
specified by the ca_trust_dir variable. Unfortunately only the
awx_web container's trust is updated. This patch makes sure the
awx_task container's trust is updated as well

Testing Done: ansible-playbook --syntax-check installer/install.yml

Signed-off-by: Mathieu Mallet <mmallet@digipok.io>
2019-02-06 16:51:14 +00:00
softwarefactory-project-zuul[bot]
1ef2d4cdad Merge pull request #3175 from ryanpetrello/exact_ansible_facts
fix a subtle bug in ansible_facts lookup filtering

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-06 16:51:13 +00:00
Ryan Petrello
a6b362e455 fix a subtle bug in ansible_facts lookup filtering 2019-02-06 11:02:47 -05:00
mabashian
2c3549331c Adds proper error handling to workflow save related promises. Fixes a bug when watching for prompt changes after the node has been edited once. 2019-02-06 10:35:00 -05:00
Bram Verschueren
016fc7f6bf use source_project custom_virtualenv if configured
Signed-off-by: Bram Verschueren <verschueren.bram@gmail.com>
2019-02-06 10:51:52 +01:00
softwarefactory-project-zuul[bot]
e8eda28ce5 Merge pull request #3162 from wenottingham/cady-heron-was-right
Remove limit on `limit` field.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-06 01:22:34 +00:00
softwarefactory-project-zuul[bot]
83c232eb20 Merge pull request #3112 from saito-hideki/pr/fix_ui-test-ci
Fix Chrome cannot be started with unit-tests due to missing shared libraries

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-06 01:20:19 +00:00
softwarefactory-project-zuul[bot]
c30639c4e6 Merge pull request #3166 from ryanpetrello/old-ansible-inventory-error-msg
provide a better ansible-inventory fallback error message for old ansibles

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-05 22:32:13 +00:00
Ryan Petrello
5e84782b9c provide a better ansible-inventory fallback error message for old ansibles
see: https://github.com/ansible/awx/issues/3139
2019-02-05 17:12:48 -05:00
mabashian
1a619de91f Makes priv escalation method a dynamic select element 2019-02-05 14:01:29 -05:00
softwarefactory-project-zuul[bot]
d134291097 Merge pull request #3123 from elyezer/users-crud-e2e
Add e2e tests for user creating and editing

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-05 18:54:49 +00:00
Bill Nottingham
4b669fb16d Remove limit on limit field.
This allows 'relaunch-on-failed' on an arbitrary number of failed hosts.
2019-02-05 13:30:51 -05:00
softwarefactory-project-zuul[bot]
b53621e74c Merge pull request #3152 from ryanpetrello/jsonb_prevent_field_lookups
prevent field lookups on Host.ansible_facts keys (it doesn't work)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-05 17:24:06 +00:00
Elyézer Rezende
925c6543c4 Add users CRUD e2e tests 2019-02-05 14:59:59 -02:00
Ryan Petrello
bb5312f4fc prevent field lookups on Host.ansible_facts keys (it doesn't work)
under the hood, Host.ansible_facts is a postgres jsonb field which
performs match operations using the JSON containment operator (@>)

this operator _only_ works on exact matches on containment (i.e.,
"does the `ansible_distribution` jsonb value contain _this exact_ JSON
structure"):

SELECT ...
FROM main_host
WHERE ansible_facts @> '{"ansible_distribution": "centos"}'

SELECT ...
FROM main_host
WHERE ansible_facts @> '{"packages": {"dnsmasq": [{"version": 2}]}}'

postgres does _not_ expose any operator for fuzzy or lookup-based
matches with this operator, so host filter values like these don't
really make sense (postgres can't _filter_ in the way intended in these
examples):

ansible_distribution__startswith=\"Cent\"
ansible_distribution__icontains=\"CentOS\"
ansible_facts__packages__dnsmasq[]__version__startswith=\"2\"
2019-02-05 10:43:51 -05:00
softwarefactory-project-zuul[bot]
7333e55748 Merge pull request #3156 from Spredzy/schedule_max_jobs_conditiion
jt, wfjt: Ensure SCHEDULE_MAX_JOBS is accurately respected

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-05 13:35:35 +00:00
Yanis Guenane
5e20dcb6ca jt, wfjt: Ensure SCHEDULE_MAX_JOBS is accurately respected
Currently SCHEDULE_MAX_JOBS+1 jobs can be scheduled rather than
SCHEDULE_MAX_JOBS. This is due to the fact that we use strictly
greater-than rather than greater-than-or-equal.

Imagine we set SCHEDULE_MAX_JOBS=1; current logic (count > 1 prevents):

  * First time (count = 0), 0 > 1 is false -> proceed
  * Second time (count = 1), 1 > 1 is false -> proceed (one job too many)
  * Third time (count = 2), 2 > 1 is true -> prevented

Imagine we set SCHEDULE_MAX_JOBS=1; new logic (count >= 1 prevents):

  * First time (count = 0), 0 >= 1 is false -> proceed
  * Second time (count = 1), 1 >= 1 is true -> prevented

Signed-off-by: Yanis Guenane <yguenane@redhat.com>
2019-02-05 13:44:39 +01:00
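
The off-by-one reduces to a single comparison; a runnable sketch (illustrative, not the actual AWX code):

    SCHEDULE_MAX_JOBS = 1

    def may_spawn_old(count):
        return not (count > SCHEDULE_MAX_JOBS)   # strictly greater-than

    def may_spawn_new(count):
        return not (count >= SCHEDULE_MAX_JOBS)  # greater-than-or-equal

    assert may_spawn_old(1) is True   # the bug: a second job slips through
    assert may_spawn_new(1) is False  # the fix: capped at SCHEDULE_MAX_JOBS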
softwarefactory-project-zuul[bot]
cab6b8b333 Merge pull request #3147 from jbradberry/become_method_list
Expose the known privilege escalation methods in the API config view

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2019-02-04 17:12:28 +00:00
Jeff Bradberry
46020379aa Expose the known privilege escalation methods in the API config view
so that the UI can obtain them and make use of them for an autocomplete widget.
2019-02-04 11:45:27 -05:00
softwarefactory-project-zuul[bot]
e23fb31a4a Merge pull request #3140 from AlanCoding/isolate_venv
Use custom venv in inventory proot-ing

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-02 12:17:25 +00:00
Marliana Lara
17c95f200a Merge pull request #3136 from marshmalien/fix-inv-host-tab
Disable hosts toggle in smart inventory hosts tab
2019-02-01 16:36:02 -05:00
AlanCoding
7676ccdbac use custom venv in inventory prooting 2019-02-01 13:29:45 -05:00
softwarefactory-project-zuul[bot]
4626aa0144 Merge pull request #3093 from jbradberry/become_plugins
Support become plugins

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-01 17:48:09 +00:00
Marliana Lara
fb7596929f Disable hosts toggle in smart inventory hosts tab 2019-02-01 11:28:00 -05:00
softwarefactory-project-zuul[bot]
d63518d789 Merge pull request #3005 from b0urn3/devel
Add SCHEDULE_MAX_JOBS implementation for WFJTs for #2975

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-02-01 13:54:21 +00:00
Robert Zahradníček
6f1cbac324 Add SCHEDULE_MAX_JOBS implementation for WFJTs for #2975 2019-01-31 21:52:36 +01:00
softwarefactory-project-zuul[bot]
2b80f0f7b6 Merge pull request #3121 from ryanpetrello/busted-isolated
properly detect ansible-playbook vs ansible runs in bwrap arg building

Reviewed-by: Elijah DeLee <kdelee@redhat.com>
             https://github.com/kdelee
2019-01-31 16:27:06 +00:00
Ryan Petrello
10945faba1 properly detect ansible-playbook vs ansible runs in bwrap arg building
a recent change (65641c7) made it so that we call
ansible-playbook using its _absolute path_ (e.g.,
/usr/bin/ansible-playbook, /var/lib/venv/xyz/bin/ansible-playbook), so
this logic is no longer correct
2019-01-31 11:06:50 -05:00
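
Once the executable is invoked by absolute path, a naive equality check against 'ansible-playbook' no longer matches; comparing the basename works for both spellings. A sketch of that idea (not the actual bwrap argument-building code):

    import os

    def is_playbook_run(args):
        return os.path.basename(args[0]) == 'ansible-playbook'

    assert is_playbook_run(['/var/lib/venv/xyz/bin/ansible-playbook', 'site.yml'])
    assert is_playbook_run(['ansible-playbook', 'site.yml'])
    assert not is_playbook_run(['/usr/bin/ansible', 'all', '-m', 'ping'])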
softwarefactory-project-zuul[bot]
d4ccb00338 Merge pull request #3118 from ryanpetrello/fix-3108
work around a bug in Django that breaks the stdout HTML view in py3

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-31 14:43:42 +00:00
Ryan Petrello
d10d5f1539 work around a bug in Django that breaks the stdout HTML view in py3
see: https://github.com/ansible/awx/issues/3108
2019-01-31 01:01:40 -05:00
Hideki Saito
3e5f328b52 Fix Chrome cannot be started with unit-tests due to missing shared libraries
- Added installation packages conforming to the unit-tests/Dockerfile

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-01-31 13:02:23 +09:00
Hideki Saito
d558ffd699 Fix Chrome cannot be started with unit-tests due to missing shared libraries
- Modify Dockerfile to install necessary shared libraries for Chrome

Signed-off-by: Hideki Saito <saito@fgrep.org>
2019-01-31 09:54:55 +09:00
softwarefactory-project-zuul[bot]
b64d401e74 Merge pull request #3086 from kialam/fix-3081-duplicate-template-expanded
Fix expanded view not persisting when a user copies a JT/WF.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-30 18:46:24 +00:00
softwarefactory-project-zuul[bot]
0a3f131adc Merge pull request #3100 from ryanpetrello/die_settings_migration_die
remove awx-manage migrate_to_database_settings

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-30 18:33:14 +00:00
Kia Lam
6f9cf6a649 Fix expanded view not persisting when a user copies a JT/WF. 2019-01-30 13:23:21 -05:00
softwarefactory-project-zuul[bot]
5db43b8283 Merge pull request #3101 from jakemcdermott/bump-prompt-credential-type-page-size
increase request page size for credential types in launch prompts

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-30 18:05:53 +00:00
softwarefactory-project-zuul[bot]
aa9e60c508 Merge pull request #3087 from kialam/fix-3082-sidenav-on-resize
Some sidebar nav fixes.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-30 17:40:46 +00:00
softwarefactory-project-zuul[bot]
2162e8e0cc Merge pull request #3109 from ryanpetrello/overindent
fix overindent lint failures

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-30 17:36:52 +00:00
Ryan Petrello
1eeffe4ae2 remove awx-manage migrate_to_database_settings 2019-01-30 12:23:36 -05:00
Ryan Petrello
2927803a82 fix overindent lint failures 2019-01-30 12:12:39 -05:00
Jake McDermott
1b50b26901 support up to 200 credential types in launch prompts 2019-01-29 20:10:13 -05:00
softwarefactory-project-zuul[bot]
44819987f7 Merge pull request #3097 from ansible/jakemcdermott-sanitize-jdetails
sanitize reflected user input on job details page

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-29 21:34:32 +00:00
softwarefactory-project-zuul[bot]
9bf0d052ab Merge pull request #3089 from marshmalien/2330-execution-node
Add execution node field to job details panel

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-29 21:17:37 +00:00
Marliana Lara
5c98d04e09 Update execution node field from job status subscriber 2019-01-29 15:48:18 -05:00
Jeff Bradberry
6560ab0fab Migrated the inputs schema on existing CredentialTypes
to convert the custom become_method into a plain string.

related #2630

Signed-off-by: Jeff Bradberry <jeff.bradberry@gmail.com>
2019-01-29 15:04:35 -05:00
softwarefactory-project-zuul[bot]
efb7a729c7 Merge pull request #3091 from beeankha/dupe_logout_msg
Remove error message for regular logout

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-29 19:51:52 +00:00
Jeff Bradberry
6e1deed79e Removed the special-case logic for maintaining the schema of the become_method field
related #2630

Signed-off-by: Jeff Bradberry <jeff.bradberry@gmail.com>
2019-01-29 14:06:26 -05:00
beeankha
ad3721bdb2 Ensure all other error messages don't double up with the logout message 2019-01-29 13:09:28 -05:00
Jake McDermott
ca64630740 sanitize reflected user input on job details page
This makes sure we're applying the 'sanitize' filter to reflected user input for some of the new information we're displaying on the job details page.
2019-01-29 09:35:57 -05:00
softwarefactory-project-zuul[bot]
8686575311 Merge pull request #3092 from ansible/jakemcdermott-patch-1
Install dependencies for Chromium in unit test image

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-28 22:26:59 +00:00
Jeff Bradberry
0ecd6542bf Changed the become_method field into one that takes arbitrary input
related #2630

Signed-off-by: Jeff Bradberry <jeff.bradberry@gmail.com>
2019-01-28 16:53:54 -05:00
beeankha
5e3d47683d Ensure error messages are showing up, but not doubled 2019-01-28 16:12:20 -05:00
Jake McDermott
73f617d811 Install dependencies for Chromium in unit test image 2019-01-28 15:57:17 -05:00
beeankha
ea7e15bfc4 Remove error message for regular logout 2019-01-28 15:16:28 -05:00
Marliana Lara
eca530c788 Add execution node field to job details panel 2019-01-28 13:48:50 -05:00
Kia Lam
c1b48e2c9c Some sidebar nav fixes.
- Set isExpanded state to false when window width is < 700px.
- Set `pointer-events` to `none` for logo to allow users to easily click on hamburger icon.
2019-01-28 13:25:44 -05:00
softwarefactory-project-zuul[bot]
9d501327fc Merge pull request #3078 from shawnmolnar/fixtypos
Fixed role download typo in help text

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-28 17:53:23 +00:00
Molnar, Shawn
31b3bad658 Fixed role download typo in help text 2019-01-25 16:37:51 -08:00
softwarefactory-project-zuul[bot]
155c214df0 Merge pull request #3077 from mabashian/i18n-sweep
Mark strings for translation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 21:11:30 +00:00
mabashian
5931c13b04 Mark strings for translation 2019-01-25 14:21:35 -05:00
softwarefactory-project-zuul[bot]
5d7b7d5888 Merge pull request #3074 from kialam/projects-expand-collapse
Add expand/collapse toolbar to Projects List.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 17:41:01 +00:00
softwarefactory-project-zuul[bot]
53ad819d65 Merge pull request #3068 from kialam/job-template-expand-collapse
Add expand/collapse toolbar for Job Templates list.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 17:40:54 +00:00
softwarefactory-project-zuul[bot]
3ce3786303 Merge pull request #3071 from saito-hideki/pr/fix_doc_saml_team_attr_map
Fixed incorrect setting item name for SAML Team Attribute Mapping

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 16:45:29 +00:00
softwarefactory-project-zuul[bot]
46bc146e26 Merge pull request #3076 from ryanpetrello/test-data-cleanup
move awx.main.utils.ansible tests data into the correct location

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 16:38:38 +00:00
Ryan Petrello
88eaf1154a move awx.main.utils.ansible tests data into the correct location 2019-01-25 11:11:12 -05:00
Kia Lam
5421c243d7 Add expand/collapse toolbar to Projects List.
Add some responsive styling.
2019-01-25 10:31:38 -05:00
softwarefactory-project-zuul[bot]
5cdab1b57a Merge pull request #3070 from ryanpetrello/bye-bye-six
clean up unnecessary usage of the six library (awx only supports py3)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 15:20:38 +00:00
softwarefactory-project-zuul[bot]
2a86c5b944 Merge pull request #3061 from jakemcdermott/inventory-scm-job-linkage
add linked status indicator for scm inventory project updates

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-25 13:05:36 +00:00
Ryan Petrello
daeeaf413a clean up unnecessary usage of the six library (awx only supports py3) 2019-01-25 00:19:48 -05:00
Hideki Saito
2d119f7b02 Fixed incorrect setting item name for SAML Team Attribute Mapping 2019-01-25 09:24:12 +09:00
softwarefactory-project-zuul[bot]
68950d56ca Merge pull request #3065 from ryanpetrello/dispatcher-reap-refactor
clean up some unnecessary dispatcher reaping code

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-24 22:30:45 +00:00
Jake McDermott
477c5df022 add linked status indicator for scm inventory project updates
Signed-off-by: Jake McDermott <yo@jakemcdermott.me>
2019-01-24 16:24:42 -05:00
softwarefactory-project-zuul[bot]
4c8f4f4cc5 Merge pull request #3067 from mabashian/2264-custom-venv
Adds environment to output details for jts and inv syncs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-24 20:35:49 +00:00
softwarefactory-project-zuul[bot]
6726e203b9 Merge pull request #3050 from jlmitch5/jobTimeoutInUi
Job timeout in ui

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-24 19:16:48 +00:00
Kia Lam
9a10811366 Add expand/collapse toolbar for Job Templates list. 2019-01-24 13:50:32 -05:00
mabashian
62bffaa7e6 Make environment subscribe-able 2019-01-24 13:08:19 -05:00
mabashian
14423c4f3f Add param to getEnvironmentDetails allowing us to pass in a value rather than pulling it from the model 2019-01-24 12:58:29 -05:00
mabashian
8037cddfe5 Adds environment to output details for jts and inv syncs 2019-01-24 12:19:24 -05:00
softwarefactory-project-zuul[bot]
be4b3c75b4 Merge pull request #3052 from jakemcdermott/inputs_enc
explicitly handle .inputs in decrypt_field, encrypt_field

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2019-01-24 17:04:30 +00:00
softwarefactory-project-zuul[bot]
818b261bea Merge pull request #3057 from kialam/add-list-toolbar
Expand and Collapse Jobs List

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-24 16:45:57 +00:00
Ryan Petrello
4707dc2a05 clean up some unnecessary dispatcher reaping code 2019-01-24 11:11:05 -05:00
Kia Lam
ebc2b821be Only show toolbar if the job list is not empty. 2019-01-24 10:44:58 -05:00
Kia Lam
85a875bbfe Styling changes. 2019-01-24 10:44:58 -05:00
Kia Lam
a9663c2900 Add expand/collapse toolbar to Jobs List view. 2019-01-24 10:44:58 -05:00
Kia Lam
05c24df9e3 Add list toolbar component. 2019-01-24 10:44:58 -05:00
Ryan Petrello
1becd4c39d Merge pull request #3066 from ryanpetrello/frosted-flakes-are-more-than-good
clean up some minor linting issues
2019-01-24 10:40:37 -05:00
Ryan Petrello
9817ab14d0 clean up some minor linting issues 2019-01-24 10:23:56 -05:00
softwarefactory-project-zuul[bot]
51b51a9bf7 Merge pull request #3051 from ryanpetrello/record-venv-usage
record the virtualenv used when a job or inventory sync runs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-23 21:33:31 +00:00
softwarefactory-project-zuul[bot]
55c5dd06cf Merge pull request #3058 from ryanpetrello/ansible-inventory-custom-venv
use the correct ansible-inventory executable for custom venvs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-23 19:30:13 +00:00
Ryan Petrello
0e5e23372d use the correct ansible-inventory executable for custom venvs 2019-01-23 14:07:48 -05:00
softwarefactory-project-zuul[bot]
1e44d5c833 Merge pull request #2899 from AlanCoding/copy_created_by
Copy WFJT created_by to jobs it spawns

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-23 16:53:49 +00:00
Ryan Petrello
f7bc8fb662 record the virtualenv used when a job or inventory sync runs
see: https://github.com/ansible/awx/issues/2264
2019-01-23 11:48:06 -05:00
softwarefactory-project-zuul[bot]
416dcc83c9 Merge pull request #3020 from beeankha/fix_unicode_error
Fix unicode error

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2019-01-23 15:26:07 +00:00
AlanCoding
13ed656506 Copy WFJT created_by to jobs it spawns 2019-01-23 09:33:14 -05:00
softwarefactory-project-zuul[bot]
f683f87ce3 Merge pull request #3053 from girishramnani/bug/jt-patch
resolved the error message for JT Patch

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-23 13:39:09 +00:00
Girish Ramnani
c15cbe0f6e resolved the error message for JT Patch
Signed-off-by: Girish Ramnani <girishramnani95@gmail.com>
2019-01-23 11:53:27 +05:30
Jake McDermott
a8728670e1 handle credential.inputs in decryption utils 2019-01-22 22:56:24 -05:00
softwarefactory-project-zuul[bot]
cf9dffbaf8 Merge pull request #3035 from ansible/e2e-websockets
E2E tests for websockets (and other things)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-23 02:34:19 +00:00
Daniel Sami
3d1b32c72f websocket tests and fixture updates 2019-01-23 11:06:42 +09:00
John Mitchell
e95da84e5a make forks field fill for a value of 0 and update timeout help text based on docs feedback 2019-01-22 13:48:57 -05:00
John Mitchell
fcd759fa1f add timeout field to ui 2019-01-22 13:33:51 -05:00
softwarefactory-project-zuul[bot]
6772c81927 Merge pull request #3047 from jakemcdermott/npm-ci
use npm ci to install ui dependencies

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-22 17:28:40 +00:00
softwarefactory-project-zuul[bot]
774ec40989 Merge pull request #3048 from matburt/schema_detector_regex
Make schema generator handle alternative iso 8601 datetimes

Reviewed-by: awxbot
             https://github.com/awxbot
2019-01-22 17:09:23 +00:00
softwarefactory-project-zuul[bot]
b7ba280da3 Merge pull request #3040 from jiuka/709-support-ssl-connections-for-postgresql
Add pg_sslmode option to access PostgreSQL by ssl

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-22 16:51:41 +00:00
Matthew Jones
058e2c0d81 Make schema generator handle alternative iso 8601 datetimes 2019-01-22 11:44:56 -05:00
Marius Rieder
072919040b Omit DATABASE_SSLMODE if not set. 2019-01-22 17:24:44 +01:00
softwarefactory-project-zuul[bot]
91ae343e3b Merge pull request #2847 from marshmalien/testing-smart-inv-host-filter
Fix host_filter smart search bug when searching by "or" operator

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-22 16:10:26 +00:00
Jake McDermott
5afabc7a19 use npm ci to install ui dependencies 2019-01-22 11:04:58 -05:00
softwarefactory-project-zuul[bot]
4788f0814f Merge pull request #3045 from jakemcdermott/regenerate-package-lock
updating package-lock.json

Reviewed-by: Shane McDonald <me@shanemcd.com>
             https://github.com/shanemcd
2019-01-22 15:01:43 +00:00
softwarefactory-project-zuul[bot]
c528ece5df Merge pull request #3046 from shanemcd/3.0.0
AWX 3.0.0

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-22 14:58:33 +00:00
Jake McDermott
a1c03cd6a1 update license files 2019-01-22 09:33:13 -05:00
Shane McDonald
42fbb81337 AWX 3.0.0 2019-01-22 09:32:39 -05:00
softwarefactory-project-zuul[bot]
5286e24721 Merge pull request #3044 from ryanpetrello/dispatcher-reap-db-error
detect dead DB connections in the dispatcher when reaping jobs

Reviewed-by: Chris Meyers
             https://github.com/chrismeyersfsu
2019-01-22 14:32:34 +00:00
Jake McDermott
0bde309d23 updating package-lock.json 2019-01-22 09:04:27 -05:00
Ryan Petrello
b2442d42a3 detect dead DB connections in the dispatcher when reaping jobs 2019-01-22 08:40:26 -05:00
softwarefactory-project-zuul[bot]
18409f89c5 Merge pull request #3042 from ryanpetrello/py3-sso-complete
fix a py3 bug that breaks the SSO complete endpoint

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-22 13:12:41 +00:00
softwarefactory-project-zuul[bot]
88d5fb0420 Merge pull request #3039 from AlanCoding/inventory_venv
Use custom virtual environment in inventory updates

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2019-01-22 13:12:37 +00:00
softwarefactory-project-zuul[bot]
1cc0f81913 Merge pull request #3034 from rooftopcellist/upgrade_social_auth_core
upgrade social-auth-core to v3.0.0

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-21 22:10:50 +00:00
Ryan Petrello
8cb8e63db5 fix a py3 bug that breaks the SSO complete endpoint 2019-01-21 17:04:13 -05:00
Christian Adams
8597670299 upgrade social-auth-core to v3.0.0 2019-01-21 16:35:47 -05:00
softwarefactory-project-zuul[bot]
c0ff4dad59 Merge pull request #3021 from jakemcdermott/credential_input_access_methods
add input access methods to credentials

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-21 21:10:12 +00:00
Marliana Lara
9b2ca04118 Fix host_filter smart search bug when searching by "or" operator 2019-01-21 15:50:33 -05:00
softwarefactory-project-zuul[bot]
d98c60519e Merge pull request #2895 from AlanCoding/scm_vars
Allow SCM overwrite vars in the UI

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-21 19:13:47 +00:00
Marius Rieder
589531163a Add pg_sslmode option.
Allows using PostgreSQL over SSL. #709
2019-01-21 19:47:34 +01:00
AlanCoding
5dd8c3ace2 Allow SCM overwrite vars in the UI 2019-01-21 13:27:57 -05:00
softwarefactory-project-zuul[bot]
d021c253aa Merge pull request #2959 from crab86/devel
Add Grafana notification type

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-21 17:21:14 +00:00
softwarefactory-project-zuul[bot]
c48c8c04f4 Merge pull request #3038 from Spredzy/x_frame_options
Nginx: Specify X-Frame-Options "DENY" header

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-21 15:31:44 +00:00
softwarefactory-project-zuul[bot]
a2102c92ec Merge pull request #3030 from ryanpetrello/fix-non-utf8-scm
add robust handling of non-UTF8 when detecting inventory/playbooks

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2019-01-21 13:52:55 +00:00
AlanCoding
99288a5e18 Use custom virtual environment in inventory updates 2019-01-21 06:45:55 -05:00
Yanis Guenane
44c48d1d66 Nginx: Specify X-Frame-Options "DENY" header
Add the X-Frame-Options "DENY" header to avoid possible clickjacking
attacks.

More info on the why is available here:
https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)

Signed-off-by: Yanis Guenane <yguenane@redhat.com>
2019-01-21 12:34:17 +01:00
Daniel Sami
c785c38748 websocket tests initial commit 2019-01-21 13:54:03 +09:00
Sebastian
ebe0ded9c2 Add grafana notification type unit tests 2019-01-20 22:42:03 +01:00
Jake McDermott
2dadfbcc14 use credential input access methods in injectors.py 2019-01-20 14:02:01 -05:00
Jake McDermott
3a58a5b772 use credential input access methods in views/__init__.py 2019-01-20 13:08:41 -05:00
Jake McDermott
5010e98b8f use credential input access methods in projects.py 2019-01-20 13:08:38 -05:00
Jake McDermott
3ef4cc9bfa use credential input access methods in serializers.py 2019-01-20 13:08:34 -05:00
Jake McDermott
c01c671642 use credential input access methods in tasks.py 2019-01-20 13:08:30 -05:00
Jake McDermott
a86e270905 add credential input access methods 2019-01-20 13:08:23 -05:00
Sebastian
4058d18593 Add grafana notification type 2019-01-20 13:51:23 +01:00
Ryan Petrello
caa55f112f add robust handling of non-UTF8 when detecting inventory/playbooks 2019-01-19 13:25:41 -05:00
softwarefactory-project-zuul[bot]
d0af952685 Merge pull request #3027 from rooftopcellist/amend_auth_code_help_txt
correct authorization code expiration help-text

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-18 20:37:21 +00:00
softwarefactory-project-zuul[bot]
fbc7f496c5 Merge pull request #3024 from ryanpetrello/ldap-gc-deadlock
fix a deadlock when Python garbage collects LDAPBackend objects

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-18 19:25:11 +00:00
John Mitchell
bb19a4234e Merge pull request #3003 from jlmitch5/newSettingsInUI
add new settings to ui
2019-01-18 14:20:31 -05:00
softwarefactory-project-zuul[bot]
11f7e90f6a Merge pull request #3025 from ryanpetrello/django-cors-headers
add Django CORS middleware

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-18 18:47:14 +00:00
Christian Adams
5c080678a6 correct authorization code expiration help-text 2019-01-18 13:27:01 -05:00
John Mitchell
2df51a923d change grant reference to code in ui help text 2019-01-18 13:13:15 -05:00
Tyler Cross
0da0a8e67b CORS Support
Added the django-cors-headers app and middleware to make CORS possible.
2019-01-18 12:49:00 -05:00
John Mitchell
b75ba7ebea remove auth misc form and move fields under system misc form 2019-01-18 12:43:21 -05:00
John Mitchell
24de951f6c add access token and authorization code expiration settings to ui 2019-01-18 12:43:21 -05:00
John Mitchell
974306541e add isolated settings to ui 2019-01-18 12:43:21 -05:00
softwarefactory-project-zuul[bot]
d2fa5cc182 Merge pull request #3013 from ryanpetrello/ansible-py3-venv
add support for custom py3 ansible virtualenvs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-18 16:29:27 +00:00
Ryan Petrello
e45e4b3cda fix a deadlock when Python garbage collects LDAPBackend objects
we shouldn't call signal.disconnect in __del__ because it can lead to
deadlocks in Django signal dispatch code

The Signal.connect, Signal.disconnect, and Signal._live_receivers
methods all share a threading.Lock():

22a60f8d0b/django/dispatch/dispatcher.py (L49)

It's possible for this to lead to a deadlock:

1.  Have code that calls Signal._live_receivers and enter the critical
    path inside the shared threading.Lock()
2.  Python garbage collection occurs and finds one or more LDAPBackend
    objects with no more references
3.  This __del__ is called, which calls Signal.disconnect
4.  Code in Signal._disconnect attempts to obtain the (already held)
    threading.Lock
5.  Python hangs forever while attempting to garbage collect
2019-01-18 11:27:50 -05:00
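
A minimal sketch of the hazard described above, using a plain non-reentrant lock in place of Django's signal machinery (illustrative only):

    import threading

    lock = threading.Lock()  # stands in for the lock shared by
                             # Signal.connect/disconnect/_live_receivers

    class Backend:
        def __del__(self):
            with lock:       # disconnect-in-__del__: may run during GC
                pass         # while the lock is already held -> deadlock

    def live_receivers():
        with lock:
            # if garbage collection frees a Backend instance right here,
            # __del__ blocks on the already-held, non-reentrant lock and
            # the interpreter hangs forever
            pass

Hence the fix: don't call Signal.disconnect from __del__ at all.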
Ryan Petrello
65641c7edd add support for custom py3 ansible virtualenvs 2019-01-18 10:55:53 -05:00
softwarefactory-project-zuul[bot]
5f01c3f5a8 Merge pull request #2994 from coreywan/pod-limits
Add POD Limits

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-18 04:28:11 +00:00
softwarefactory-project-zuul[bot]
7b39198f26 Merge pull request #2995 from coreywan/postgres_helm
adds persistence.storageClass and limits to postgres helm install

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-18 04:24:18 +00:00
Bianca
f1e3be5ec8 Removing unicode fix-related lines in ha.py 2019-01-17 14:42:38 -05:00
Bianca
bf5657a06a Update register_queue.py file to enable registration of instance groups with unicode 2019-01-17 14:36:46 -05:00
softwarefactory-project-zuul[bot]
f583dd73e8 Merge pull request #3008 from AlanCoding/inv_cleanup2
Remove deprecated logic & components from inventory import command

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-17 19:23:02 +00:00
softwarefactory-project-zuul[bot]
57b8aa4892 Merge pull request #3002 from themr0c/pg_password_10_character_limit
pg_password should be random 10 character alphanumeric string, when p…

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-17 18:15:38 +00:00
softwarefactory-project-zuul[bot]
474876872e Merge pull request #2999 from themr0c/issue-2991
related #2991 - Helm creation of postgresql on multiple namespaces

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-17 14:28:05 +00:00
softwarefactory-project-zuul[bot]
3eaed52b83 Merge pull request #3017 from ryanpetrello/beat-shelve-error
close the persistent shelve when we're done checking it

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2019-01-17 13:46:09 +00:00
AlanCoding
28822d891c remove unneeded steps in inventory import
Delete some cases that directly load scripts due
to the ansible-inventory group_vars problem (now fixed)

Delete the intermediate method that was a go-between for the
command and the loader class

Change the return type of the loader from MemInventory to
a simple Python dict

remove the backport script and star imports
2019-01-17 08:44:55 -05:00
Ryan Petrello
37dbfa88f9 close the persistent shelve when we're done checking it
this code was added to detect celerybeat shelve .db corruption, but it caused a different issue; opening the shelve in this way puts a write lock on it, which means that when we attempt to open it _again_ moments later, we can't.

when we're done checking the validity of the file, we need to close it
2019-01-17 08:20:21 -05:00
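
The fix pattern is simply to bound the shelve's lifetime; a sketch under the assumption of a validity check like the one described (the path argument and function name are illustrative):

    import shelve
    from contextlib import closing

    def shelve_is_valid(path):
        try:
            with closing(shelve.open(path)) as db:  # closing releases the write lock
                list(db.keys())                     # touch the data to surface corruption
            return True
        except Exception:
            return False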
Fabrice Flore-Thebault
b6c30e8ef5 it's a limitation of the official postgres helm chart
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-17 12:56:17 +01:00
Fabrice Flore-Thebault
d938c96a76 pg_password should be a random 10-character alphanumeric string when postgresql is running on kubernetes
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-17 12:56:06 +01:00
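
For reference, a random 10-character alphanumeric string of the kind described is a one-liner in Python (the installer itself presumably does this with an Ansible password lookup; this is just an equivalent sketch):

    import secrets
    import string

    pg_password = ''.join(secrets.choice(string.ascii_letters + string.digits)
                          for _ in range(10))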
softwarefactory-project-zuul[bot]
4ce18618cb Merge pull request #3014 from jakemcdermott/skip-ui-release-chromium-download
skip chromium download when using ui-release make target

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-17 04:58:22 +00:00
Jake McDermott
6c7f11395b skip chromium download when building release 2019-01-16 20:48:12 -05:00
softwarefactory-project-zuul[bot]
134950ade1 Merge pull request #3006 from ansible/documentation
Documentation of E2E test fixtures, etc.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-16 23:43:46 +00:00
Daniel Sami
7258a43bad rewording, typo corrections 2019-01-17 08:28:22 +09:00
Ryan Petrello
27f98163ff Merge pull request #3012 from ryanpetrello/fix-swagger-key-ordering
enforce key order when writing swagger docs JSON
2019-01-16 16:43:43 -05:00
Ryan Petrello
6d04bd34ce enforce key order when writing swagger docs JSON 2019-01-16 16:00:27 -05:00
softwarefactory-project-zuul[bot]
584ec9cf75 Merge pull request #2538 from ryanpetrello/py3
port awx to run natively on python3.6+

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-16 19:39:14 +00:00
Corey Wanless
aebeeb170e adds pod limits
Signed-off-by: Corey Wanless <corey.wanless@wwt.com>
2019-01-16 09:23:18 -06:00
Fabrice Flore-Thebault
c434d38876 adding helm chart version for postgresql
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-16 09:40:49 +01:00
Daniel Sami
ae3ab89515 lint fix 2019-01-16 11:06:25 +09:00
Daniel Sami
0c250cd6af Updated parameter info 2019-01-16 10:44:22 +09:00
Ryan Petrello
33c1416f6c work around a py3 bug in celerybeat 2019-01-15 15:19:02 -05:00
Ryan Petrello
3d7fcb3835 fix a few isolated issues related to the py2 -> py3 move 2019-01-15 14:09:05 -05:00
Shane McDonald
04da4503db Python 3 / Upstream Kubernetes 2019-01-15 14:09:05 -05:00
Ryan Petrello
2016798e0f fix a few UTF-8 bugs on Ubuntu related to stdout text downloads 2019-01-15 14:09:05 -05:00
Ryan Petrello
39d119534c support isolated runs in py2 *and* py3 (for now)
once we merge in runner support for isolated environments, we can
revert this commit (because we'll always run isolated code using python3
executables)
2019-01-15 14:09:05 -05:00
Shane McDonald
d273472927 Enable py3 SCL if needed 2019-01-15 14:09:05 -05:00
Shane McDonald
5aa99b2ca1 Dependency updates for Python 3 2019-01-15 14:09:05 -05:00
Ryan Petrello
96b9bd6ab6 make py3 packaging work for k8s 2019-01-15 14:09:05 -05:00
Jim Ladd
2c5bdf3611 fix some isolated py3 bugs 2019-01-15 14:09:05 -05:00
Ryan Petrello
af4234556e remove dm.xmlsec.binding
python-saml uses dm.xmlsec.binding, which only supports python2.
By moving to py3, we now use python3-saml (which uses python-xmlsec
instead).

see: https://github.com/onelogin/python-saml/issues/145#issuecomment-222021691
2019-01-15 14:09:05 -05:00
Ryan Petrello
c6482137d1 parametrize PYTHON for Ubuntu py35 support 2019-01-15 14:09:05 -05:00
Ryan Petrello
f223df303f convert py2 -> py3 2019-01-15 14:09:01 -05:00
Ryan Petrello
f132ce9b64 switch image builds to py3 2019-01-15 13:25:13 -05:00
softwarefactory-project-zuul[bot]
f22fd58392 Merge pull request #3007 from AlanCoding/test_logging
Updates to logging, specifically for unit tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-15 18:19:58 +00:00
AlanCoding
cccc038600 Updates to logging, specifically for unit tests 2019-01-15 11:34:54 -05:00
softwarefactory-project-zuul[bot]
b9607dd415 Merge pull request #2983 from mabashian/upgrade-jquery-mabashian
Upgrades jQuery and Bootstrap

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-15 14:06:33 +00:00
Fabrice Flore-Thebault
7b32262f75 revert pg_hostname
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-15 14:59:17 +01:00
Fabrice Flore-Thebault
d69f6acf64 add helm repo update and fix helm upgrade
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-15 14:48:26 +01:00
mabashian
66a859872e Fixes for numerous bootstrap upgrade bugs, uses variables for colors in bootstrap override style file 2019-01-15 08:40:35 -05:00
Fabrice Flore-Thebault
ef3aab1357 related #2991 - unify postgresql_service_name
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-15 11:44:08 +01:00
Daniel Sami
62ebf85b96 Documentation of functions 2019-01-14 19:18:47 -05:00
Corey Wanless
0c074e0988 * adds persistence.storageClass and limits to postgres helm install
* adds new variables to the inventory

Signed-off-by: Corey Wanless <corey.wanless@wwt.com>
2019-01-14 11:28:21 -06:00
softwarefactory-project-zuul[bot]
32c705a62a Merge pull request #2996 from coreywan/setup-postgress-activation-wait
adds wait time for postgres setup as a variable

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-14 17:22:54 +00:00
mabashian
7194338653 Reduces flake on launch job e2e test 2019-01-14 10:54:37 -05:00
Fabrice Flore-Thebault
d43521bb77 fix #2991 - make Helm creation of postgresql succeed when installing multiple AWX instances on different namespaces on the same kubernetes
Signed-off-by: Fabrice Flore-Thebault <themr0c@users.noreply.github.com>
2019-01-14 10:32:21 +01:00
Corey Wanless
b1710f9523 adds wait time for postgres setup as a variable 2019-01-11 22:23:43 -06:00
mabashian
3b456d3e72 Fix credential list e2e test 2019-01-11 16:44:59 -05:00
softwarefactory-project-zuul[bot]
12a04a6da6 Merge pull request #2956 from ansible/swap-diff-order-schema-change
swap file order so diff of schema makes more sense

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-11 20:11:42 +00:00
mabashian
99205fde16 Fixes linting errors 2019-01-11 13:06:52 -05:00
mabashian
8539eae114 Fixes for e2e tests 2019-01-11 12:50:01 -05:00
softwarefactory-project-zuul[bot]
3e2dd4f86b Merge pull request #2993 from ryanpetrello/fix-ctint-db-failure
catch _all_ types of django.db.utils.Error on CTinT key lookups

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-11 14:56:11 +00:00
Ryan Petrello
32c14d6eab catch _all_ types of django.db.utils.Error on CTinT key lookups 2019-01-11 08:49:47 -05:00
softwarefactory-project-zuul[bot]
4f9901db38 Merge pull request #2867 from AlanCoding/supervisor_names
Make docker environment interoperable with supervisorctl commands

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-11 13:28:35 +00:00
softwarefactory-project-zuul[bot]
a77c981e0c Merge pull request #2992 from AlanCoding/opt_dashboard
Optimize dashboard using Django annotation for sum

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2019-01-11 13:15:42 +00:00
AlanCoding
77d2364022 Make docker environment interoperable with supervisorctl commands 2019-01-10 13:41:15 -05:00
AlanCoding
d1b42fd583 Optimize dashboard using Django annotation for sum 2019-01-10 12:22:39 -05:00
mabashian
2dfb0abb69 Fixes jshint errors 2019-01-09 11:45:51 -05:00
mabashian
7bcbaabd71 Removed extraneous comments 2019-01-09 11:03:51 -05:00
mabashian
9c20e1b494 Upgrades jquery and bootstrap 2019-01-09 11:03:51 -05:00
softwarefactory-project-zuul[bot]
2b5210842d Merge pull request #2977 from MarBra/devel
Fix typo in ca_trust_dir

Reviewed-by: Bill Nottingham
             https://github.com/wenottingham
2019-01-07 19:42:50 +00:00
marcel
0b3e51458d Fix typo in ca_trust_dir
The correct path is used in the docker-compose template:
- "{{ ca_trust_dir +':/etc/pki/ca-trust/source/anchors:ro' }}"
2019-01-07 19:29:34 +01:00
Chris Meyers
d57fc998d5 Merge pull request #2972 from wwitzel3/devel
update to the latest asgi-amqp
2019-01-03 08:22:05 -05:00
Wayne Witzel III
1079051b12 update to the latest asgi-amqp 2019-01-03 07:52:16 -05:00
Chris Meyers
4641056829 Merge pull request #2942 from chrismeyersfsu/doc-tm_affinity
add docs for task manager node decider
2019-01-02 13:57:58 -05:00
chris meyers
db2bb19d65 add docs for task manager node decider 2019-01-02 12:17:28 -05:00
Elijah DeLee
e5ad2e44fb swap file order so diff of schema makes more sense
This way we will get +'s next to new content :)
2018-12-20 15:05:01 -05:00
softwarefactory-project-zuul[bot]
aa8cda0001 Merge pull request #2801 from ryanpetrello/more-robust-isolated-capacity
collect isolated capacity using a cache plugin, not stdout parsing

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-20 20:01:00 +00:00
softwarefactory-project-zuul[bot]
949f383564 Merge pull request #2947 from wenottingham/spelling-is-gud
Fix 'credential' typo.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-19 13:54:47 +00:00
Bill Nottingham
479ad13630 Fix some more typos while here. 2018-12-18 16:23:17 -05:00
Bill Nottingham
23c2e1be31 Fix 'credential' typo. 2018-12-18 16:12:10 -05:00
softwarefactory-project-zuul[bot]
7628ef01f1 Merge pull request #2938 from mabashian/wf-details-click
Updates to workflow node details link

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-18 12:06:33 +00:00
mabashian
c0730aa562 Prevent mousedown on details link from triggering pan functionality 2018-12-17 15:20:34 -05:00
mabashian
67d6a9f9ea Fixes display of wf node details link in FF by adding height and width 2018-12-17 14:33:48 -05:00
mabashian
f9854abfa1 Fixed linting error 2018-12-17 10:40:20 -05:00
mabashian
2697615dbf Undo GET request that was made for workflow node jobs missing a type and instead leverage the redirect route. The workflow node results redirect now works for all job types. 2018-12-17 10:26:01 -05:00
softwarefactory-project-zuul[bot]
a131250dc1 Merge pull request #2917 from AlanCoding/relaunch_sjt
Fix bug where some SJs could not be relaunched

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-17 15:07:32 +00:00
Michael Abashian
2d237b6dbb Merge pull request #2 from jakemcdermott/wf-details-click
add output redirect route for workflow playbook nodes
2018-12-17 09:41:41 -05:00
Jake McDermott
c8b15005b4 link to workflow playbook node route from workflow viewer 2018-12-14 19:22:23 -05:00
Jake McDermott
a5c4350695 add redirect route for workflow viewer 2018-12-14 19:20:01 -05:00
mabashian
9f18f8dbdb Fixes split job inside workflow details link bug 2018-12-14 18:04:39 -05:00
mabashian
a8e1c8960f Remove details function 2018-12-14 16:27:35 -05:00
mabashian
7f66053654 Changed workflow node details link to href so that it can be opened in a new tab 2018-12-14 14:59:19 -05:00
softwarefactory-project-zuul[bot]
1d6c88b7e2 Merge pull request #2933 from ryanpetrello/openshift-ha-policy
configure an HA policy for openshift/k8s installs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-14 19:45:26 +00:00
softwarefactory-project-zuul[bot]
049f85f3c9 Merge pull request #2937 from mabashian/2932-schedule-prompts
Fixes bug scheduling jt where first survey question is optional

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-14 19:37:03 +00:00
Ryan Petrello
4858868428 configure an HA policy for openshift/k8s installs 2018-12-14 14:08:30 -05:00
mabashian
4e37076955 Fixes bug scheduling jt where first survey question is optional 2018-12-14 14:08:06 -05:00
softwarefactory-project-zuul[bot]
e6f654b568 Merge pull request #2924 from AlanCoding/run_as_root
Catch python error when unable to find user

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-13 18:01:55 +00:00
AlanCoding
65e110cdbf catch python error when unable to find user 2018-12-13 11:47:54 -05:00
softwarefactory-project-zuul[bot]
10e99c76a8 Merge pull request #2921 from GitStorm/patch-1
tower-cli config: host value needs to be URL

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-13 13:13:42 +00:00
GitStorm
b0e3bc96dd tower-cli config: host value needs to be URL
Since the key name "host" is slightly misleading, it would help to point out in this documentation that a URL is in fact required for the "host" key/value pair in the tower-cli config. Failing to do so produces the following error:

Error: There was a network error of some kind trying to connect to Tower.
The most common reason for this is a settings issue; is your "host" value in `tower-cli config` correct?
2018-12-13 11:52:56 +01:00
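As the commit message suggests, the value must be a full URL, e.g. (hostname illustrative):

    $ tower-cli config host https://awx.example.com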
softwarefactory-project-zuul[bot]
59df54b363 Merge pull request #2919 from ryanpetrello/dispatcher-task-import-hardening
only allow the task dispatch worker to import and run decorated tasks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-13 01:12:48 +00:00
Ryan Petrello
5950f26c69 only allow the task dispatch worker to import and run decorated tasks
this _technically_ prevents a remote code exploit where a user who has
access to publish AMQP messages to the dispatch queue could craft
a special message that would import and run arbitrary Python functions;
that said, the types of user with this privilege level are generally
_already_ the awx user (so they can already do this by hand if they
want)
2018-12-12 17:46:41 -05:00
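A minimal sketch of the guard described above, assuming a hypothetical decorator and resolver (not AWX's actual dispatcher internals): the worker only runs a named function if it was explicitly marked as a task.

    import importlib

    def task(fn):
        # tag functions the dispatch worker is allowed to run
        fn._is_dispatch_task = True
        return fn

    def resolve(task_name):
        # 'pkg.module.func' -> callable, but only if it was decorated
        module_name, _, func_name = task_name.rpartition('.')
        func = getattr(importlib.import_module(module_name), func_name)
        if not getattr(func, '_is_dispatch_task', False):
            raise ValueError('refusing to run undecorated task %s' % task_name)
        return func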
AlanCoding
a3a5c6bf9f fix bug where some SJs could not be relaunched 2018-12-12 11:56:57 -05:00
softwarefactory-project-zuul[bot]
ca16787e7c Merge pull request #2697 from ryanpetrello/smarter-websocket-expiry
stop various async background requests from bumping the session expiry

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-12 15:39:20 +00:00
softwarefactory-project-zuul[bot]
271bd10b47 Merge pull request #2907 from shanemcd/devel
Bump version to 2.1.2

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-11 17:59:17 +00:00
Shane McDonald
e3872ebd58 Bump version to 2.1.2 2018-12-11 12:30:10 -05:00
softwarefactory-project-zuul[bot]
eee716644b Merge pull request #2875 from MrMEEE/patch-1
Bumped Version to 2.1.1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-11 17:25:26 +00:00
Ryan Petrello
c2660af60d stop various async background requests from bumping the session expiry
if a user has an active session that just sits on the dashboard or job
list, websocket messages that come in (e.g., job status changes)
will trigger AJAX requests for more data; this process causes a user
with an idle login to continue to generate API requests, which in turn
ticks their expiry timer.  As a result, users with active sessions
sitting on these two (popular) pages will never be automatically logged
out via SESSION_MAX_AGE.

this change introduces a special header that the UI can use to signify
that a request shouldn't bump the expiry timer
2018-12-11 09:15:58 -05:00
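A minimal sketch of the idea as Django middleware, assuming a hypothetical header name (AWX's actual header may differ): when the UI flags a request as background noise, the session is left unmodified so its rolling expiry is not refreshed.

    class SessionQuietMiddleware(object):

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            response = self.get_response(request)
            if request.META.get('HTTP_X_API_SESSION_QUIET'):
                # background polling: don't let this request bump expiry
                request.session.modified = False
            return response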
softwarefactory-project-zuul[bot]
2758a38485 Merge pull request #2898 from mabashian/2851-perms
Fixes bug where admin/member roles weren't showing up when adding user to org

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-10 16:51:03 +00:00
softwarefactory-project-zuul[bot]
9104f485e6 Merge pull request #2856 from wenottingham/one-small-step-foreman
Update foreman.py from Ansible devel, primarily for unicode fixes.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-10 16:27:52 +00:00
mabashian
ae7361f82d Fixes bug where admin/member roles weren't showing up when adding user to org 2018-12-10 11:12:09 -05:00
softwarefactory-project-zuul[bot]
42562e86e4 Merge pull request #2888 from mabashian/sanitize-app-token-list
Sanitize username and description in application tokens list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 21:42:38 +00:00
softwarefactory-project-zuul[bot]
982ed37b06 Merge pull request #2890 from mabashian/3198-survey
Fixes bug launching jt where first survey question is optional and empty

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 20:40:07 +00:00
mabashian
c0c666cc87 Fixes bug launching jt where first survey question is optional and empty 2018-12-07 15:12:22 -05:00
softwarefactory-project-zuul[bot]
8a284889f5 Merge pull request #2889 from mabashian/2887-workflow-cred
Properly POST credentials to workflow nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 20:08:36 +00:00
mabashian
b891e2c204 Properly POST credentials to workflow nodes 2018-12-07 14:37:56 -05:00
mabashian
a8bf7366cf Sanitize username and description in application tokens list 2018-12-07 14:24:12 -05:00
softwarefactory-project-zuul[bot]
e517f81b8f Merge pull request #2880 from AlanCoding/fix_v1_links
Fix links to some resources that lack v1 pages

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-07 14:33:15 +00:00
softwarefactory-project-zuul[bot]
c4c99332fc Merge pull request #2873 from ansible/related_slices
Show type in related_jobs, link based on type

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-06 20:51:00 +00:00
AlanCoding
40b5ce4b2e link v1 pages to v2 credential type page 2018-12-06 15:41:26 -05:00
AlanCoding
d2cd337c1f fix links to some resources that lack v1 pages 2018-12-06 08:29:23 -05:00
softwarefactory-project-zuul[bot]
b9913fb4f9 Merge pull request #2877 from chrismeyersfsu/improvement-better_default_loggers
more sane default log handlers

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-05 15:59:12 +00:00
chris meyers
d1705dd0cc more sane default log handlers
* Removed the emailing of admins on request error. When turned on, the
handler will include all django settings in the email. This is not
desirable from a security standpoint.
2018-12-05 09:38:27 -05:00
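In Django settings terms, the change amounts to routing request errors away from django.utils.log.AdminEmailHandler; a sketch (illustrative, not AWX's exact config):

    LOGGING = {
        'version': 1,
        'handlers': {
            'console': {'class': 'logging.StreamHandler'},
            # no 'mail_admins' handler: AdminEmailHandler error mails
            # can expose settings values, which is undesirable
        },
        'loggers': {
            'django.request': {'handlers': ['console'], 'level': 'ERROR'},
        },
    }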
Martin Juhl
816cc29132 Bumped Version to 2.1.1 2018-12-05 00:33:04 +01:00
AlanCoding
f09b8efa87 tests and optimizations for UJT list with non-joblet recent_jobs 2018-12-04 16:16:05 -05:00
softwarefactory-project-zuul[bot]
6ebc6809eb Merge pull request #2869 from AlanCoding/syncs_do_not_update_project
Do not update project details after sync jobs

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 20:10:01 +00:00
AlanCoding
4b31367945 Do not update project details after sync jobs 2018-12-04 13:00:23 -05:00
kialam
2a62e300a2 UI update to check recent job type for routing to detail pages. 2018-12-04 11:19:25 -05:00
softwarefactory-project-zuul[bot]
f6b075843e Merge pull request #2845 from marshmalien/fix-org-ig-modal
Fix instance group modal selection

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 15:42:49 +00:00
Marliana Lara
4723773354 Fix instance group modal selection 2018-12-04 10:15:38 -05:00
softwarefactory-project-zuul[bot]
70be95cec5 Merge pull request #2861 from wenottingham/tighten-up-the-slack
Fix tooltip for slack channel list to note '#' is required.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 04:23:20 +00:00
softwarefactory-project-zuul[bot]
201b17012d Merge pull request #2865 from ansible/output-search-docslink
point output search doc link to latest

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-04 04:17:57 +00:00
Jake McDermott
47264b0809 point output search doc link to latest 2018-12-03 17:09:00 -05:00
Bill Nottingham
c51f235fab Fix tooltip for slack channel list to note '#' is required. 2018-12-03 14:22:59 -05:00
Bill Nottingham
f1b1224a27 Update foreman.py from Ansible devel, primarily for unicode fixes. 2018-12-03 10:23:28 -05:00
softwarefactory-project-zuul[bot]
63b0796738 Merge pull request #2852 from jlmitch5/updateOrgCardsCountWhenDatasetChanges
update org cards count when dataset changes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-02 20:05:19 +00:00
softwarefactory-project-zuul[bot]
5961e3ef2e Merge pull request #2846 from kialam/fix-3016-missing-job-events
Fix 3016 missing job events

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-12-01 05:12:05 +00:00
softwarefactory-project-zuul[bot]
246d80f177 Merge pull request #2850 from jlmitch5/addUserTokenPagination
add pagination to user tokens list

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 21:29:10 +00:00
softwarefactory-project-zuul[bot]
8005b47c14 Merge pull request #2849 from ryanpetrello/fix-custom-cred-encryption-nit
allow encrypted fields in custom credentials to be empty

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 21:07:30 +00:00
John Mitchell
1317572979 update org cards count when dataset changes 2018-11-30 15:54:15 -05:00
AlanCoding
b763c51f8a add type to recent_jobs 2018-11-30 15:16:09 -05:00
softwarefactory-project-zuul[bot]
e70055a333 Merge pull request #2848 from wenottingham/more-fields-for-the-fields-god
Add timeout & slice count to the job field whitelist.

Reviewed-by: Bill Nottingham
             https://github.com/wenottingham
2018-11-30 20:02:38 +00:00
John Mitchell
52f86a206a add pagination to user tokens list 2018-11-30 14:27:23 -05:00
Ryan Petrello
7252883094 allow encrypted fields in custom credentials to be empty 2018-11-30 14:07:56 -05:00
kialam
afa7c2d69f Update unit tests. 2018-11-30 13:58:13 -05:00
Bill Nottingham
9c44d1f526 Add timeout & slice count to the job field whitelist. 2018-11-30 13:43:21 -05:00
kialam
473ce95c86 Fix failing unit tests. 2018-11-30 12:16:28 -05:00
kialam
3e1e068013 Add boundary checks for getReadyCount method. 2018-11-30 12:03:26 -05:00
kialam
746a154f2b Address missing job events.
- Fix off-by-one error.
- Add unit tests for Stream Service.
2018-11-30 11:23:15 -05:00
softwarefactory-project-zuul[bot]
28733800c4 Merge pull request #2842 from mabashian/2839-workflow-key
Changes workflow key icon from fa-key to fa-compass

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 13:21:45 +00:00
softwarefactory-project-zuul[bot]
cf2deefa41 Merge pull request #2843 from ansible/update-e2e-tests-2
Updating e2e tests to match change in order layout

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-30 02:04:21 +00:00
John Hill
e5645dd798 one more 2018-11-29 19:35:11 -05:00
John Hill
6205a5db83 Updating to fix linting error 2018-11-29 19:11:27 -05:00
John Hill
e50dd92425 Cannot depend on the id and order, reverting to workflow node names 2018-11-29 18:16:53 -05:00
softwarefactory-project-zuul[bot]
4955fc8bc4 Merge pull request #2840 from ryanpetrello/project-update-bug
resolve a nuanced traceback for JTs that run w/ a failed project

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 22:35:59 +00:00
mabashian
15adb1e828 Changes workflow key icon from fa-key to fa-compass 2018-11-29 17:16:25 -05:00
Ryan Petrello
c90d81b914 resolve a nuanced traceback for JTs that run w/ a failed project
related: https://github.com/ansible/awx/pull/2719
2018-11-29 17:10:23 -05:00
softwarefactory-project-zuul[bot]
8e9c28701e Merge pull request #2836 from ryanpetrello/better-dispatch-retries
add additional DB retry logic to the callback receiver

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 21:18:24 +00:00
softwarefactory-project-zuul[bot]
abc74fc9b8 Merge pull request #2824 from chrismeyersfsu/workflow-convergence_enforce2
enforce 1 edge between 2 nodes constraint

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 19:34:25 +00:00
chris meyers
21fce00102 python3 compliance
* This one's for you, rydog
2018-11-29 14:07:43 -05:00
chris meyers
d347a06e3d do not deny existing workflow node relationships 2018-11-29 13:29:16 -05:00
Ryan Petrello
0391dbc292 add additional DB retry logic to the callback receiver
initially, I implemented this for _only_ the task worker, but it's
probably needed for callback event workers, too
2018-11-29 11:57:46 -05:00
softwarefactory-project-zuul[bot]
349c7efa69 Merge pull request #2792 from AlanCoding/how_many_slices
Prohibit relaunching sliced jobs with changed count

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 15:57:40 +00:00
softwarefactory-project-zuul[bot]
0f451595d7 Merge pull request #2826 from ryanpetrello/remove-deprovision-node
remove the deprecated `awx-manage deprovision_node` command

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 15:52:13 +00:00
Ryan Petrello
fcb6ce2907 remove a few deprecated awx-manage commands 2018-11-29 10:09:57 -05:00
softwarefactory-project-zuul[bot]
273d7a83f2 Merge pull request #2825 from ryanpetrello/dont-fear-the-reaper
don't reap jobs that aren't running

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-29 14:31:37 +00:00
chris meyers
916c92ffc7 save state 2018-11-29 08:53:46 -05:00
Ryan Petrello
38bf174bda don't reap jobs that aren't running
this is a simple sanity check, but it should help us avoid shooting
ourselves in the foot in complicated scenarios, such as:

1.  A dispatcher worker is running a job, and it's killed with `kill -9`
2.  The dispatcher attempts to reap jobs with a matching celery_task_id
3.  The associated sync project update has the *same* celery_task_id
    (an implementation detail), and it ends
    up getting reaped _even though_ it's already finished and has
    status=successful
2018-11-28 18:11:12 -05:00
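The check itself is tiny; a sketch with illustrative model fields:

    def reap(job):
        # never reap a job that isn't actually running; e.g. a project
        # sync sharing the celery_task_id but already status=successful
        # must be left alone
        if job.status != 'running':
            return
        job.status = 'failed'
        job.job_explanation = 'reaped: worker exited unexpectedly'
        job.save()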
chris meyers
09dff99340 enforce 1 edge between 2 nodes constraint 2018-11-28 16:57:50 -05:00
softwarefactory-project-zuul[bot]
7f178ef28b Merge pull request #2822 from ansible/workflow-e2e-update
Small update to the node order to reflect e2e test workflow changes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 21:08:36 +00:00
softwarefactory-project-zuul[bot]
68328109d7 Merge pull request #2807 from AlanCoding/yuck_artifacts
Do not pass artifacts to non-job nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 21:05:24 +00:00
John Hill
d573a9a346 Small update to the node order to reflect workflow changes 2018-11-28 15:37:30 -05:00
AlanCoding
d6e89689ae do not pass artifacts to non-job nodes 2018-11-28 15:19:47 -05:00
softwarefactory-project-zuul[bot]
d1d97598e2 Merge pull request #2821 from AlanCoding/clean_nodes
Clean up unwanted data in activity stream of nodes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 19:13:45 +00:00
softwarefactory-project-zuul[bot]
f57fa9d1fb Merge pull request #2810 from chrismeyersfsu/feature-replay_job_status
emit job status lifecycle in event replayer

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 18:57:56 +00:00
chris meyers
83760deb9d align tests with new replay get_job interface 2018-11-28 13:33:44 -05:00
softwarefactory-project-zuul[bot]
3893e29a33 Merge pull request #2815 from ryanpetrello/fix-iso-nodes-dev
fix isolated nodes in the dev environment

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 17:06:16 +00:00
softwarefactory-project-zuul[bot]
feeaa0bf5c Merge pull request #2747 from kialam/remove-md5
Remove MD5 usage and dependency from UI

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 16:15:59 +00:00
kialam
22802e7a64 Update README to include needed npm version. 2018-11-28 10:43:53 -05:00
Ryan Petrello
1ac5bc5e2b remove angular-md5 license 2018-11-28 10:43:53 -05:00
kialam
362a3753d0 Remove 'angular-md5' from our dependencies. 2018-11-28 10:43:53 -05:00
kialam
71ee9d28b9 Add link to original gist and rename file. 2018-11-28 10:43:53 -05:00
kialam
d8d89d253d Remove instances of "md5" from the UI. 2018-11-28 10:43:53 -05:00
Ryan Petrello
a72f3d2f2f generate host_config_key using random UUIDs, not a time-based md5 hash 2018-11-28 10:43:45 -05:00
AlanCoding
1adeb833fb clean up unwanted data in activity stream of nodes 2018-11-28 10:41:32 -05:00
Ryan Petrello
a810aaf319 fix isolated nodes in the dev environment 2018-11-28 09:54:39 -05:00
softwarefactory-project-zuul[bot]
d9866c35b4 Merge pull request #2819 from ryanpetrello/fix-busted-tests
mock an HTTP call to fix busted unit tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 14:50:38 +00:00
Ryan Petrello
4e45c3a66c mock an HTTP call to fix busted unit tests 2018-11-28 09:17:50 -05:00
softwarefactory-project-zuul[bot]
87b55dc413 Merge pull request #2816 from ansible/jakemcdermott-smoke-break
update expected color vals of active tab in smoke test

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2018-11-28 04:02:31 +00:00
Jake McDermott
2e3949d612 update expected color vals of active tab in smoke test 2018-11-27 22:46:16 -05:00
softwarefactory-project-zuul[bot]
d928ccd922 Merge pull request #2799 from ansible/jakemcdermott-readonly-viz
don't conditionally hide workflow viz templates list button

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-28 03:39:51 +00:00
softwarefactory-project-zuul[bot]
a9c51b737c Merge pull request #2389 from ansible/workflow-convergence
Workflow convergence

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 22:04:50 +00:00
mabashian
51669c9765 Fixes hint/lint errors in workflow viz test 2018-11-27 16:12:42 -05:00
mabashian
17cc82d946 Ensure that selected row is cleared when adding new node after editing existing node 2018-11-27 16:12:42 -05:00
mabashian
10de5b6866 Fixes clicking on a wf in wf node. Also fixes editing wf in wf node with inv prompt 2018-11-27 16:12:42 -05:00
mabashian
55dc27f243 Set active tab to jobs when initially clicking a workflow_job_template type node 2018-11-27 16:12:42 -05:00
mabashian
6fc2ba3495 Fixes delete node shifting e2e test 2018-11-27 16:12:42 -05:00
mabashian
7bad01e193 Fixes e2e workflow visualizer tests 2018-11-27 16:12:42 -05:00
mabashian
62a1f10c42 Fix node pagination for project/inv 2018-11-27 16:12:42 -05:00
mabashian
3975a2ecdb fix linkpath class 2018-11-27 16:12:42 -05:00
Jake McDermott
bfa361c87f hide prompt button when not on jobs tab 2018-11-27 16:12:42 -05:00
Jake McDermott
d5f07a9652 hide inventory help message when not on jobs tab 2018-11-27 16:12:42 -05:00
Jake McDermott
65ec1d18ad skip missing inventory prompt value check when selecting workflow node 2018-11-27 16:12:42 -05:00
Jake McDermott
7b4521f980 workflow node prompt fixup
* use workflow model and endpoint when node is workflow
* always include template type in prompt data
* skip missing inventory checks when node is workflow
* skip checks for required credential fields when node is workflow
2018-11-27 16:12:42 -05:00
John Mitchell
3762ba7b24 add back in workflow_nodes in order to be able to use it for count of nodes 2018-11-27 16:12:42 -05:00
John Mitchell
762c882cd7 consume workflow maker total nodes label change 2018-11-27 16:12:42 -05:00
John Mitchell
343639d4b7 fix workflow maker total templates header to total nodes 2018-11-27 16:12:42 -05:00
John Mitchell
38dc0b8e90 fix workflow total jobs header to total nodes 2018-11-27 16:12:42 -05:00
mabashian
ed40ba6267 Fix searching on related fields 2018-11-27 16:12:42 -05:00
mabashian
54d56f2284 Fix node jobs column sorting. Adds arrows to potential workflow node links 2018-11-27 16:12:42 -05:00
mabashian
1477bbae30 Fixed error Cannot read property 'type' of undefined in console when selecting a project or inventory node 2018-11-27 16:12:42 -05:00
mabashian
625c6c30fc Fixed edge dropdown id 2018-11-27 16:12:42 -05:00
chris meyers
228e412478 simplify workflow job failure reason
* Log the more detailed reason for a workflow job failing but expose a
simplified reason to users via job_explanation
2018-11-27 16:12:42 -05:00
chris meyers
f8f2e005ba better comment for deciding parent's status 2018-11-27 16:12:42 -05:00
chris meyers
d8bf82a8cb add help_text to do_not_run workflow field 2018-11-27 16:12:41 -05:00
chris meyers
2eeca3cfd7 add example workflow run to docs 2018-11-27 16:12:41 -05:00
mabashian
28a4bbbe8a Fixed jshint errors that fell out of merge conflict 2018-11-27 16:12:41 -05:00
mabashian
1cfcaa72ad Fixed editNodeHelpMessage logic that was broken during merge conflict 2018-11-27 16:12:41 -05:00
AlanCoding
4c14727762 bump migration number 2018-11-27 16:12:41 -05:00
chris meyers
0c8dde9718 fix dfs_run_nodes()
* Tried to re-use the topological sort order to crawl the graph to find
the next node(s) to run. This is incorrect; we need to take into account
the fail/success of jobs and directionally crawl the graph.
2018-11-27 16:12:41 -05:00
chris meyers
febf051748 do not mark ujt None nodes dnr
* Leave workflow nodes with no related unified job template as
do_not_run = False. If we mark it True, we can't differentiate between
an intentional decision not to take that path and a node skipped because
it has no valid related unified job template.
2018-11-27 16:12:41 -05:00
mabashian
56885a5da1 Remove reference to isStartNode and just check the id of the node to determine if it's our start node or not 2018-11-27 16:12:41 -05:00
mabashian
623cf54766 Added dagre and graphlib licenses 2018-11-27 16:12:41 -05:00
mabashian
a804c854bf Fix test failures and jshint errors 2018-11-27 16:12:41 -05:00
chris meyers
7b087d4a6c loop over dnr nodes by topological sort
* Perform topological sort on graph nodes before looping over them to
mark do not run. This guarantees that parent nodes will be processed
before their dependent child nodes. The complexity of the sorting is
N. The complexity of marking the nodes is N*V
2018-11-27 16:12:41 -05:00
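For reference, a generic topological sort of the kind described (Kahn's algorithm; a standalone sketch, not the WorkflowDAG implementation):

    from collections import deque

    def topological_sort(nodes, edges):
        # edges: iterable of (parent, child) pairs
        children = {n: [] for n in nodes}
        indegree = {n: 0 for n in nodes}
        for parent, child in edges:
            children[parent].append(child)
            indegree[child] += 1
        queue = deque(n for n in nodes if indegree[n] == 0)
        order = []
        while queue:
            n = queue.popleft()
            order.append(n)
            for c in children[n]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    queue.append(c)
        return order  # parents always appear before their children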
chris meyers
cfa098479e Revert "optimize mark dnr nodes algorithm"
This reverts commit 6372c52772.
2018-11-27 16:12:41 -05:00
mabashian
3c510e6344 Fixed bug where root link became clickable. Fix workflow key on results page. 2018-11-27 16:12:41 -05:00
chris meyers
4c9a1d6b90 optimize mark dnr nodes algorithm
* Compute largest depth of each node and traverse graph by depth. This
allows us to check a node once, and only once, to determine if it needs
to be marked for do not run.
2018-11-27 16:12:41 -05:00
chris meyers
d1aa52a2a6 fix up mark dnr logic 2018-11-27 16:12:41 -05:00
chris meyers
f30f52a0a8 handle missing unified job template in workflow
* Workflow Node without unified_job_template is treated as a job marked
as failure; when deciding what path to execute.
* Remove the optimization for marking dnr nodes because it made the
algorithm incorrect.
2018-11-27 16:12:41 -05:00
mabashian
5b459e3c5d Code cleanup. Fixed bugs with workflow results page including details links 2018-11-27 16:12:41 -05:00
chris meyers
676c068b71 add job_description to failed workflow node
* When a workflow job fails because a workflow job node doesn't have a
related unified_job_template, note that with an error on the workflow
job's job_description
* When a workflow job fails because a failure path isn't defined, note
that on the workflow job's job_description
2018-11-27 16:12:41 -05:00
chris meyers
00d71cea50 detect workflow nodes without job templates
* Fail the workflow job run when encountering a Workflow Job Node with
no related job template.
2018-11-27 16:12:41 -05:00
mabashian
72263c5c7b Addresses a number of workflow related bugs 2018-11-27 16:12:41 -05:00
chris meyers
281345dd67 flake8 fix 2018-11-27 16:12:41 -05:00
chris meyers
1a85fcd2d5 update docs to include workflow failure semantic 2018-11-27 16:12:41 -05:00
chris meyers
c1171fe4ff treat canceled nodes as failed when processing wf
* When deciding what jobs to run next, treat canceled as failed.
* Also add tests.
2018-11-27 16:12:41 -05:00
chris meyers
d6a8ad0b33 treat canceled jobs in wf the same as failed jobs
* Also fix spelling mistake that caused workflows to be falsely marked
successful in the case of a canceled job.
2018-11-27 16:12:41 -05:00
mabashian
4a6a3b27fa Fixed a number of workflow visualizer bugs. Added loading spinners while data is being loaded/processed. 2018-11-27 16:12:41 -05:00
chris meyers
266831e26d add cycle unit test 2018-11-27 16:12:41 -05:00
chris meyers
a6e20eeaaa update wf done and failed tests 2018-11-27 16:12:41 -05:00
chris meyers
6529c1bb46 update done and fail detection for workflow
* Instead of traversing the workflow graph to determine if a workflow is
done or has failed, loop through all the nodes in the graph and grab
only the relevant nodes.
2018-11-27 16:12:41 -05:00
mabashian
ae0d0db62c Added dagre to handle our workflow graph layout. Fixed various workflow related bugs. 2018-11-27 16:12:41 -05:00
chris meyers
b81d795c00 fix up dot graph generator
* Update graph dot generator to use the new efficient graph
2018-11-27 16:12:41 -05:00
chris meyers
1b87e11d8f flake8 2018-11-27 16:12:41 -05:00
chris meyers
8bb9cfd62a add dag tests 2018-11-27 16:12:41 -05:00
chris meyers
a176a4b8cf remove unused code 2018-11-27 16:12:41 -05:00
chris meyers
3f4d14e48d crawl entire graph when marking DNR
* From the root, the code was only going down the did-run path to find
nodes to mark DNR. This is incorrect. Now we traverse the entire graph
each time to find nodes to mark DNR.
2018-11-27 16:12:41 -05:00
chris meyers
0499d419c3 more efficient graph processing
* Getting parent nodes from a child was inefficient. Optimize it with a
hash table, as we did for getting children.
* Getting leaf nodes was inefficient. Optimize it as we did for getting
root nodes. A node is assumed to be a leaf node until it gets a child.
2018-11-27 16:12:41 -05:00
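The optimization described boils down to precomputing adjacency maps in one pass, so later parent/child lookups are O(1) instead of a scan over every edge; a sketch:

    def build_maps(edges):
        parents, children = {}, {}
        for parent, child in edges:
            children.setdefault(parent, []).append(child)
            parents.setdefault(child, []).append(parent)
        # a node absent from `children` is a leaf; absent from `parents`, a root
        return parents, children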
mabashian
700860e040 Fix long name tooltip. Fix bug adding a new node before finishing adding a new link.
Fix template list column layout. Ensure that we're getting 200 workflow nodes per GET request
2018-11-27 16:12:41 -05:00
chris meyers
3dadeb3037 remove print statements 2018-11-27 16:12:41 -05:00
chris meyers
16a60412cf optimization fix
* WorkflowDAG accepts a workflow job template or a workflow job from
which to build a graph out of the nodes. The optimized query for each is
different. This changeset adds the differing queries for a workflow job.
2018-11-27 16:12:41 -05:00
chris meyers
9f3e272665 optimize cycle detection 2018-11-27 16:12:41 -05:00
mabashian
b84fc3b111 Fixes for post-rebase bugs 2018-11-27 16:12:41 -05:00
chris meyers
e1e8d3b372 bump migration 2018-11-27 16:12:40 -05:00
mabashian
05f4d94db2 Fixed several bugs, including credential prompting. Added logic to bring links/nodes to the forefront when you hover over them in case there's some overlap 2018-11-27 16:12:40 -05:00
mabashian
61fb3eb390 First pass at implementing better node placement in the workflow graph 2018-11-27 16:12:40 -05:00
mabashian
7b95d2114d Implements workflow convergence without proper layout 2018-11-27 16:12:40 -05:00
chris meyers
07db7a41b3 more flake8 2018-11-27 16:12:40 -05:00
chris meyers
1120f8b1e1 try2 at the devil flake8 2018-11-27 16:12:40 -05:00
chris meyers
17b3996568 fix flake8 anyway I can 2018-11-27 16:12:40 -05:00
chris meyers
584b3f4e3d remove workflow test
* We now handle workflows with jobs that have errored. We treat them the
same as a failure result. Before, we would abort the workflow when we
encountered an error.
2018-11-27 16:12:40 -05:00
chris meyers
f8c53f4933 handle job error state in convergence 2018-11-27 16:12:40 -05:00
chris meyers
6e40e9c856 handle edge case ring cycle 2018-11-27 16:12:40 -05:00
chris meyers
2f9dc4d075 remove relationship in view if cycle detected 2018-11-27 16:12:40 -05:00
chris meyers
9afc38b714 fixup migrations 2018-11-27 16:12:40 -05:00
chris meyers
dfccc9e07d rework wf cycle detection for convergence 2018-11-27 16:12:40 -05:00
chris meyers
7b22d1b874 cycle detection when multiple parents 2018-11-27 16:12:40 -05:00
mabashian
29b4979736 Completed work necessary to support editing workflow links and nodes separately. Added hover and tooltip to links 2018-11-27 16:12:40 -05:00
mabashian
87d6253176 Decouple editing a wf node with editing a node link 2018-11-27 16:12:40 -05:00
chris meyers
1e10d4323f update docs 2018-11-27 16:12:40 -05:00
chris meyers
4111e53113 correctly name migration to align with 3.4.0 2018-11-27 16:12:40 -05:00
chris meyers
02df0c29e9 merge artifacts deterministically 2018-11-27 16:12:40 -05:00
chris meyers
475c90fd00 prevent job launching twice 2018-11-27 16:12:40 -05:00
chris meyers
2742b00a65 flake8 2018-11-27 16:12:40 -05:00
chris meyers
ea29e66a41 fix workflow finish state detector
* Take into account the new do_not_run field when determining whether a
workflow is finished. If do_not_run is True then the node is considered
finished.
2018-11-27 16:12:40 -05:00
chris meyers
6ef6b649e8 cleaner code 2018-11-27 16:12:40 -05:00
chris meyers
9bf2a49e0f save state 2018-11-27 16:12:40 -05:00
chris meyers
914892c3ac all parents should finish before start child 2018-11-27 16:12:40 -05:00
chris meyers
77661c6032 short circuit performance optimization 2018-11-27 16:12:40 -05:00
chris meyers
b4fc585495 stop DNR propagation on always path
* This makes sure DNR propagation stops when a job is successful, down
an always path
2018-11-27 16:12:40 -05:00
chris meyers
ff6db37a95 correct stop DNR propagation
* If a child has a parent that is not in the finished state, then do not
propagate the DNR to the child in question.
* If a parent is in a finished state, do not propagate the DNR to the
child if the path to the child is traversed (based on the parent job
status).
2018-11-27 16:12:40 -05:00
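Restated as code, those two rules look roughly like the sketch below (node attributes and the edge-traversal helper are hypothetical, not the task manager's actual implementation):

    FINISHED = ('successful', 'failed', 'error', 'canceled')

    def edge_is_traversed(parent, child):
        # does the parent's final status select the edge type
        # (success/failure/always) leading to this child?
        if child in parent.always_children:
            return True
        if parent.job_status == 'successful':
            return child in parent.success_children
        return child in parent.failure_children

    def should_mark_dnr(child, parents):
        # rule 1: any unfinished parent means we can't decide yet
        if any(p.job_status not in FINISHED for p in parents):
            return False
        # rule 2: if any finished parent's path to this child is
        # traversed, the child will run, so don't mark it DNR
        if any(edge_is_traversed(p, child) for p in parents):
            return False
        return True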
chris meyers
1a064bdc59 satisfy flake8 2018-11-27 16:12:40 -05:00
chris meyers
ebabec0dad always find and mark dnr nodes 2018-11-27 16:12:40 -05:00
chris meyers
3506b9a7d8 Revert "mark dnr field read only"
This reverts commit 3dbc52d91223167683fd01174222bd6c22813dbd.

Workflow Job Nodes are read only already
2018-11-27 16:12:40 -05:00
chris meyers
cc374ca705 update debug dot graph to output dnr data 2018-11-27 16:12:40 -05:00
chris meyers
ad56a27cc0 mark dnr field read only 2018-11-27 16:12:40 -05:00
chris meyers
779e1a34db remove dnr field from jt wf node 2018-11-27 16:12:40 -05:00
chris meyers
447dfbb64d only visit nodes once for dnr 2018-11-27 16:12:40 -05:00
chris meyers
a9365a3967 code cleanup 2018-11-27 16:12:40 -05:00
chris meyers
f5c10f99b0 support workflow convergence nodes
* remove convergence restriction in API
* change task manager logic to be aware of and support convergence nodes
2018-11-27 16:12:40 -05:00
softwarefactory-project-zuul[bot]
c53ccc8d4a Merge pull request #2813 from shanemcd/update-translation-templates
Extract latest strings from source code for translations

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 17:53:49 +00:00
chris meyers
e214dcac85 add slowdown, final status delay, and debug
* slow down playback by using --speed 0.1 (accepts decimal values)
* optionally specify a delay between the event and the final status
* debug mode where you can step through emitting the job events
2018-11-27 12:45:49 -05:00
softwarefactory-project-zuul[bot]
de77f6bd1f Merge pull request #2809 from jlmitch5/fixManagmentJobScheduleEditLink
fix management job schedule edit link button

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 17:33:07 +00:00
Shane McDonald
18c4771a38 Extract latest strings from source code for translations 2018-11-27 12:31:52 -05:00
chris meyers
042c7ffe5b emit job status lifecycle in event replayer 2018-11-27 11:54:39 -05:00
John Mitchell
f25c6effa3 fix management job schedule edit link button 2018-11-27 11:35:13 -05:00
softwarefactory-project-zuul[bot]
46d303ceee Merge pull request #2805 from ryanpetrello/wcag
raise contrast on a few key page elements to pass WCAG contrast checks

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-27 15:45:15 +00:00
Ryan Petrello
b1bd87bcd2 raise contrast on a few key page elements to pass WCAG contrast checks 2018-11-27 10:20:24 -05:00
softwarefactory-project-zuul[bot]
50b0a5a54d Merge pull request #2756 from wenottingham/logged-out-damn-spot
Add a message to the resulting login dialog when a user explicitly logs out.

Reviewed-by: John Hill <johill@redhat.com>
             https://github.com/unlikelyzero
2018-11-27 15:13:45 +00:00
Ryan Petrello
d5c6c589b2 add an AWX_ISOLATED_VERBOSITY setting for debugging isolated connections 2018-11-26 23:47:16 -05:00
Ryan Petrello
fc0a039097 collect isolated capacity using a cache plugin, not stdout parsing
reading capacity values using the jsonfile cache plugin is more robust
in scenarios where ansible-playbook may print non-JSON output (such as
-vvv or when a custom callback plugin like timer is enabled)
2018-11-26 17:08:42 -05:00
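The jsonfile cache plugin is configured in ansible.cfg, roughly like the following (cache path illustrative):

    [defaults]
    fact_caching = jsonfile
    fact_caching_connection = /tmp/awx_capacity_facts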
Jake McDermott
4f731017ea don't conditionally hide workflow viz templates list button 2018-11-26 16:21:44 -05:00
softwarefactory-project-zuul[bot]
9eb2c02e92 Merge pull request #2788 from AlanCoding/container_names
Set fixed container names

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-26 18:53:41 +00:00
softwarefactory-project-zuul[bot]
55e5432027 Merge pull request #2794 from shanemcd/devel
Add a note about `--force-with-lease` to contributing documentation.

Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
             https://github.com/marshmalien
2018-11-26 16:55:31 +00:00
Shane McDonald
a9ae4dc5a8 Clean up after yourselves, people! 2018-11-26 11:31:58 -05:00
Shane McDonald
0398b744a1 Add a note about --force-with-lease to contributing documentation. 2018-11-26 11:31:36 -05:00
AlanCoding
012511e4f0 prohibit relaunching sliced jobs with changed count 2018-11-26 10:54:19 -05:00
softwarefactory-project-zuul[bot]
4483d0320f Merge pull request #2791 from ryanpetrello/fix-iso-installs
only override django for FIPS in environments where Django is installed

Reviewed-by: Yanis Guenane
             https://github.com/Spredzy
2018-11-26 15:36:23 +00:00
Ryan Petrello
32e7ddd43a only override django for FIPS in environments where Django is installed
isolated awx installs don't have this tooling, and so they don't need
this specific monkey-patch
2018-11-26 09:17:48 -05:00
AlanCoding
0b32733dc8 set fixed container names 2018-11-26 08:26:57 -05:00
Christian Adams
d310c48988 Merge pull request #2758 from rooftopcellist/secure_current_user
make current_user cookie secure and httponly
2018-11-21 15:26:35 -05:00
softwarefactory-project-zuul[bot]
5a6eefaf2c Merge pull request #2780 from kialam/update-readme-install-exact
Update README to include saving exact dependencies using npm.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 19:12:50 +00:00
kialam
8c5a94fa64 Update readme to include saving exact dependencies using npm. 2018-11-21 13:35:56 -05:00
adamscmRH
05d988349c make current_user cookie secure and httponly 2018-11-21 10:36:35 -05:00
softwarefactory-project-zuul[bot]
bb1473f67f Merge pull request #2762 from kialam/fix-2554-converted-jts-from-sjt
UI WF results relaunch: handle any relaunch errors.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 15:01:17 +00:00
softwarefactory-project-zuul[bot]
79f483a66d Merge pull request #2772 from ansible/jakemcdermott-update
add package uninstall example to readme

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-21 03:11:45 +00:00
Jake McDermott
1b09a0230d Update README.md 2018-11-20 21:26:11 -05:00
kialam
f3344e9816 Fix failing tests. 2018-11-20 18:35:29 -05:00
softwarefactory-project-zuul[bot]
554e4d45aa Merge pull request #2763 from kialam/add-npm-precheck-script
Add precheck script.

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-11-20 21:12:58 +00:00
softwarefactory-project-zuul[bot]
bfe86cbc95 Merge pull request #2753 from jlmitch5/fixInstanceGroupLookup
fix instance group lookup in orgs scrolling behavior

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 20:05:14 +00:00
softwarefactory-project-zuul[bot]
29e4160d3e Merge pull request #2764 from ryanpetrello/dispatcher-sos
add dispatcher status to the sosreport

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2018-11-20 18:06:05 +00:00
Ryan Petrello
b4f906ceb1 add dispatcher status to the sosreport 2018-11-20 12:12:02 -05:00
kialam
e099fc58c7 Add precheck script. 2018-11-20 11:48:25 -05:00
softwarefactory-project-zuul[bot]
e342ef5cfa Merge pull request #2719 from AlanCoding/project_is_failed
Fail job run if project is failed

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 16:45:56 +00:00
John Mitchell
8997fca457 add sanitize 2018-11-20 11:41:58 -05:00
kialam
435ab4ad67 Handle any relaunch errors. 2018-11-20 11:25:36 -05:00
softwarefactory-project-zuul[bot]
d7a28dcea4 Merge pull request #2755 from kialam/fix-invalid-start-date
Scheduler: Start Date begins 1 day ahead.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 15:36:47 +00:00
softwarefactory-project-zuul[bot]
5f3024d395 Merge pull request #2703 from kialam/fix-2552-org-jt-list-websocket
Fix Organizations TemplateList websockets

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 15:01:05 +00:00
softwarefactory-project-zuul[bot]
3126480d1e Merge pull request #2754 from kdelee/rename-schema-change-detector
Rename schema job to be more clear about its purpose

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 14:22:08 +00:00
Elijah DeLee
ca84d312ce Rename schema job to be more clear about its purpose
The make target fails when it detects schema changes, not when schema is invalid.

Also update CONTRIBUTING.md to include information about zuul jobs.
2018-11-20 07:42:10 -06:00
AlanCoding
b790b50a1a Fail job run if project is failed
provide message in traceback and explanation fields

add log messages for dependency spawns
2018-11-20 07:55:37 -05:00
softwarefactory-project-zuul[bot]
6df26eb7a3 Merge pull request #2750 from ryanpetrello/timeout-modal-remove-close-button
remove the x icon on the session timeout modal (it doesn't work)

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 08:50:49 +00:00
softwarefactory-project-zuul[bot]
fccaebdc8e Merge pull request #2342 from ansible/workflow_inventory
Workflow level inventory

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-20 03:19:40 +00:00
Jake McDermott
45728dc1bb update workflow docs 2018-11-19 17:35:52 -05:00
AlanCoding
9cd8aa1667 further update of workflow docs for inventory feature 2018-11-19 17:35:39 -05:00
Jake McDermott
b74597f4dd fix bug when reverting non-default inventory prompts 2018-11-19 17:35:26 -05:00
Bill Nottingham
ce3d3c3490 Add a message to the resulting login dialog when a user explicitly logs out. 2018-11-19 17:16:55 -05:00
kialam
a9fe1ad9c1 Start Date begins 1 day ahead. 2018-11-19 16:20:46 -05:00
John Mitchell
22e7083d71 fix instance group lookup 2018-11-19 15:38:11 -05:00
Jake McDermott
951515da2f disable next and show warning when default workflow inventory is removed 2018-11-19 15:16:46 -05:00
Ryan Petrello
9e2f4cff08 remove the x icon on the session timeout modal (it doesn't work) 2018-11-19 14:37:49 -05:00
softwarefactory-project-zuul[bot]
0c1a4439ba Merge pull request #2745 from mabashian/2428-workflow-edit-patch
Use patch instead of put when updating a wfjt

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 19:23:26 +00:00
John Mitchell
c2a1603a56 Merge pull request #2746 from jlmitch5/navColorContrast
fix color contrast of nav
2018-11-19 13:50:04 -05:00
Jake McDermott
13e715aeb9 handle null inventory value on workflow launch 2018-11-19 12:53:21 -05:00
Jake McDermott
2bc75270e7 dry up org permissions test 2018-11-19 12:53:18 -05:00
Jake McDermott
fabe56088d fix workflow e2e tests again 2018-11-19 12:53:14 -05:00
Jake McDermott
0e3bf6db09 open workflow visualizer from form 2018-11-19 12:53:10 -05:00
Jake McDermott
c6a7d0859d add workflow jobs to inventory list status popup 2018-11-19 12:53:05 -05:00
Jake McDermott
fed00a18ad show workflow jobs on inventory completed jobs view 2018-11-19 12:53:02 -05:00
Jake McDermott
ecbdc55955 show related workflow counts on inventory deletion warning prompt 2018-11-19 12:52:57 -05:00
AlanCoding
bca9bcf6dd fix prompts contradiction: should be non-functional change 2018-11-19 12:52:54 -05:00
Jake McDermott
018a8e12de fix lookup message 2018-11-19 12:52:50 -05:00
AlanCoding
e0a28e32eb Tweak of error message wording for model-specific name 2018-11-19 12:52:47 -05:00
AlanCoding
c105885c7b Do not count template variables as prompted 2018-11-19 12:52:43 -05:00
Jake McDermott
89a0be64af fix bug with opening visualizer from list page 2018-11-19 12:52:38 -05:00
AlanCoding
c1d85f568c fix survey vars bug and inventory defaults display 2018-11-19 12:52:35 -05:00
Jake McDermott
75566bad39 fix workflow e2e tests 2018-11-19 12:52:32 -05:00
Jake McDermott
75c2d1eda1 add inventory help messages for workflow node edit 2018-11-19 12:52:29 -05:00
Jake McDermott
9a4667c6c7 add static messages to workflow inventory lookups 2018-11-19 12:52:26 -05:00
Jake McDermott
9917841585 open and close workflow visualizer from list 2018-11-19 12:52:23 -05:00
Jake McDermott
fbc3cd3758 redirect to workflow visualizer on workflow creation 2018-11-19 12:52:19 -05:00
Jake McDermott
d65687f14a add workflow inventory prompt to scheduler 2018-11-19 12:52:16 -05:00
Jake McDermott
4ea7511ae8 make workflow prompt inventory step optional 2018-11-19 12:52:12 -05:00
Jake McDermott
a8d22b9459 show correct ask_inventory state 2018-11-19 12:52:08 -05:00
Jake McDermott
f8453ffe68 accept inventory_id in workflow launch requests 2018-11-19 12:52:05 -05:00
Jake McDermott
38f43c147a fix exploding unit test 2018-11-19 12:52:01 -05:00
Jake McDermott
38fbcf8ee6 add missing api fields 2018-11-19 12:51:58 -05:00
Jake McDermott
2bd25b1fba add inventory prompt to wf editor 2018-11-19 12:51:54 -05:00
AlanCoding
7178fb83b0 migration number bumped again 2018-11-19 12:51:51 -05:00
Jake McDermott
2376013d49 add prompt on launch for workflow inventory 2018-11-19 12:51:47 -05:00
Jake McDermott
a94042def5 display inventory on workflow job details 2018-11-19 12:51:44 -05:00
Jake McDermott
2d2164a4ba add inventory lookup to workflow detail view 2018-11-19 12:51:41 -05:00
AlanCoding
3c980d373c bump migration number 2018-11-19 12:51:37 -05:00
AlanCoding
5b3ce1e999 add test for WFJT schedule inventory prompting 2018-11-19 12:51:33 -05:00
AlanCoding
6d4469ebbd handle inventory for WFJT editing RBAC 2018-11-19 12:51:29 -05:00
AlanCoding
eb58a6cc0e add test for launching with deleted inventory 2018-11-19 12:51:26 -05:00
AlanCoding
a60401abb9 fix bug with WFJT launch validation 2018-11-19 12:51:22 -05:00
AlanCoding
1203c8c0ee feature docs for workflow-level inventory 2018-11-19 12:51:18 -05:00
AlanCoding
0c52d17951 fix bug, handle RBAC, add test 2018-11-19 12:51:13 -05:00
AlanCoding
44fa3b18a9 Adjust prompt logic and views to accept workflow inventory 2018-11-19 12:50:57 -05:00
AlanCoding
33328c4ad7 initial model changes for workflow inventory 2018-11-19 12:50:29 -05:00
John Mitchell
11adcb9800 fix color contrast of nav 2018-11-19 12:35:55 -05:00
softwarefactory-project-zuul[bot]
932a1c6386 Merge pull request #2743 from ryanpetrello/survey_spec_stream
include survey_spec in activity stream

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 17:33:33 +00:00
softwarefactory-project-zuul[bot]
d1791fc48c Merge pull request #2585 from wenottingham/licensed-to-illegibility
Add a test that checks the included license files against included dependencies

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 17:22:49 +00:00
AlanCoding
5b274cfc2a include survey_spec in activity stream 2018-11-19 12:07:48 -05:00
mabashian
1d7d2820fd Use patch instead of put when updating a wfjt 2018-11-19 12:04:35 -05:00
Bill Nottingham
605c1355a8 Add updates to UI license grabber from jlmitch5. 2018-11-19 12:00:00 -05:00
Bill Nottingham
a6e00df041 Clean up included licenses such that tests pass.
Rename ui licenses to '.txt' for consistency.
Update bundled code as appropriate.
Remove dead licenses and dev-only UI licenses.
Add additional python licenses from Azure & related updates.
2018-11-19 12:00:00 -05:00
Bill Nottingham
67219e743f Add a test that we are including proper license files for all requirements. 2018-11-19 11:59:59 -05:00
softwarefactory-project-zuul[bot]
67273ff8c3 Merge pull request #2742 from jlmitch5/fixScrollTop
fix scroll top

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 16:56:27 +00:00
softwarefactory-project-zuul[bot]
a3bbe308a8 Merge pull request #2741 from matburt/fix_project_admin_add
Fix a bug that did not allow project_admins to create a project.

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-19 16:53:16 +00:00
kialam
f1b5bbb1f6 Add websocket listener to Org > JT list view. 2018-11-19 11:46:07 -05:00
Matthew Jones
7330102961 Remove a warning message for dispatcher pool for tests 2018-11-19 11:19:57 -05:00
Matthew Jones
61916b86b5 Fix a bug that did not allow project_admins to create a project.
This was a regression from previous functionality.
2018-11-19 11:05:48 -05:00
John Mitchell
35d5bde690 fix scroll top 2018-11-19 11:03:03 -05:00
softwarefactory-project-zuul[bot]
b17c477af7 Merge pull request #2737 from AlanCoding/death_to_groups
Implement deprecation of groups_with_active_failures

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 15:31:26 +00:00
softwarefactory-project-zuul[bot]
01d891cd6e Merge pull request #2738 from ryanpetrello/inv-update-as
don't send activity stream create for unregistered models

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 15:02:33 +00:00
Ryan Petrello
e36335f68c only send activity stream create for registered unified jobs
see https://github.com/ansible/awx/issues/2733
2018-11-19 09:44:12 -05:00
softwarefactory-project-zuul[bot]
39369c7721 Merge pull request #2702 from kdelee/schema_validation
Pre-Merge Schema validation

Reviewed-by: Matthew Jones <mat@matburt.net>
             https://github.com/matburt
2018-11-19 14:26:42 +00:00
softwarefactory-project-zuul[bot]
4fbc39991d Merge pull request #2730 from AlanCoding/slice_license
License check for slicing >1

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 14:18:10 +00:00
softwarefactory-project-zuul[bot]
91075e8332 Merge pull request #2732 from AlanCoding/middle
Include migration middleware in timings and profiling

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-19 13:59:34 +00:00
AlanCoding
0ed50b380a Implement deprecation of groups_with_active_failures 2018-11-19 08:29:46 -05:00
AlanCoding
53716a4c5a include migration middleware in timings and profiling 2018-11-18 10:55:37 -05:00
AlanCoding
f30bbad07d License check for slicing >1 2018-11-17 22:48:46 -05:00
softwarefactory-project-zuul[bot]
b923efad37 Merge pull request #2717 from wenottingham/make-that-network-work
Add ncclient for use by networking modules.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 22:02:29 +00:00
softwarefactory-project-zuul[bot]
3b36372880 Merge pull request #2716 from ryanpetrello/insights-user-agent
add a user agent for requests to Insights

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 21:39:15 +00:00
Ryan Petrello
661cc896a9 add a user agent for requests to Insights 2018-11-16 16:25:08 -05:00
Bill Nottingham
e9c3623dfd specify a version 2018-11-16 15:34:42 -05:00
Bill Nottingham
c65b362841 Add ncclient for use by networking modules. 2018-11-16 15:21:04 -05:00
softwarefactory-project-zuul[bot]
6a0e11a233 Merge pull request #2713 from AlanCoding/system_env_vars
Minor cleanup of task environment vars

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 19:35:05 +00:00
AlanCoding
7417f9925f Minor cleanup of task environment vars 2018-11-16 13:28:42 -05:00
softwarefactory-project-zuul[bot]
2f669685d8 Merge pull request #2707 from ryanpetrello/min-max-workers
prevent the dispatcher from using a nonsensical max_workers value

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 15:41:42 +00:00
softwarefactory-project-zuul[bot]
86510029e1 Merge pull request #2692 from kialam/fix-2586-ldap-dropdown-fix
Fix LDAP and TACACS+ dropdowns

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-16 15:28:09 +00:00
Ryan Petrello
37234ca66e prevent the dispatcher from using a nonsensical max_workers value 2018-11-16 10:16:39 -05:00
Elijah DeLee
4ae1fdef05 Ignore differences in whitespace for schema validation 2018-11-16 09:47:33 -05:00
kialam
95e94a8ab5 Some styling changes; fix Server dropdown.
- Left align first dropdown on Github and LDAP tabs.
- Add border to give some whitespace
2018-11-15 20:46:34 -05:00
kialam
ea35d9713a Fix empty dropdowns for both LDAP and TACACS tabs. 2018-11-15 20:46:34 -05:00
Elijah DeLee
949cf53b89 Use r in front of regex string to make flake8 happy
This means we should not escape the \ character in the same way
2018-11-15 17:29:27 -05:00
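For example:

    import re

    # with the r prefix, '\d' passes through to the regex engine as-is;
    # without it, the backslash would have to be doubled: '\\d+'
    pattern = re.compile(r'\d+')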
Elijah DeLee
a68e22b114 Add tox target to detect schema changes
Fetches reference schema from public bucket
Still need to define a method for updating the reference schema on merge.
2018-11-15 16:25:13 -05:00
Elijah DeLee
d70cd113e1 Reduce duplicated logic for genschema target 2018-11-15 15:29:35 -05:00
Matthew Jones
f737fc066f Generate schema suitable for comparing for schema changes 2018-11-15 15:29:35 -05:00
softwarefactory-project-zuul[bot]
9b992c971e Merge pull request #2672 from AlanCoding/sliceanator
Fix bug with non-sliced JT job spawn

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 20:16:19 +00:00
softwarefactory-project-zuul[bot]
e0d59766e0 Merge pull request #2696 from AlanCoding/bulk_del_inv
Pre-delete bulk delete related, fix parallel request conflicts

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-15 19:58:49 +00:00
AlanCoding
a9d88f728d Pre-delete bulk delete related, fix parallel request conflicts 2018-11-15 11:39:48 -05:00
softwarefactory-project-zuul[bot]
e24d63ea9a Merge pull request #2665 from ryanpetrello/fips
add support for running awx w/ FIPS enabled

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 15:18:07 +00:00
softwarefactory-project-zuul[bot]
1833a5b78b Merge pull request #2667 from saito-hideki/issue/admin_with_docker-compose
Fixed issue where admin_user and password changes are not reflected

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 14:24:43 +00:00
softwarefactory-project-zuul[bot]
4213a00548 Merge pull request #2686 from AlanCoding/fast_workflows
Add task manager rescheduling hooks, de-duplication, lifecycle tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-15 14:19:09 +00:00
softwarefactory-project-zuul[bot]
7345512785 Merge pull request #2690 from ryanpetrello/ldap-long-name
truncate user first/last name if it exceeds 30 chars on LDAP auth

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-14 22:29:43 +00:00
Ryan Petrello
d3dc126d45 truncate user first/last name if it exceeds 30 chars on LDAP auth 2018-11-14 15:51:43 -05:00
AlanCoding
758a488aee Add task manager rescheduling hooks, de-duplication, lifecycle tests 2018-11-14 11:31:34 -05:00
softwarefactory-project-zuul[bot]
c0c358b640 Merge pull request #2682 from ryanpetrello/job-m2m-activity-stream
include M2M labels and credentials in Job creation activity stream

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-14 16:05:30 +00:00
Ryan Petrello
49f4ed10ca include M2M labels and credentials in Job creation activity stream 2018-11-14 10:36:01 -05:00
softwarefactory-project-zuul[bot]
5e18eccd19 Merge pull request #2671 from wenottingham/you-want-to-change-things---well---you-can't
Fix tooltip referring to PROJECTS_ROOT.

Reviewed-by: Bill Nottingham
             https://github.com/wenottingham
2018-11-13 21:01:19 +00:00
softwarefactory-project-zuul[bot]
c4e9daca4e Merge pull request #2669 from paradegoat/issue-2370
Updated JT project-related error message

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-13 20:22:31 +00:00
AlanCoding
5443e10697 Fix bug with non-sliced JT job spawn 2018-11-13 15:20:40 -05:00
Ryan Petrello
a3f9c0b012 warn about FIPS mode if the Django version changes 2018-11-13 15:04:36 -05:00
Geoff Humphreys
8517053934 Updated JT project-related error message
Signed-off-by: Geoff Humphreys <humphreys.geoff@gmail.com>
2018-11-13 14:56:22 -05:00
Bill Nottingham
0506968d4f Fix tooltip referring to PROJECTS_ROOT.
This can't be changed in AWX settings; it needs to be done in settings.py
either on the filesystem, or overridden in a k8s/openshift configmap.
2018-11-13 12:14:24 -05:00
softwarefactory-project-zuul[bot]
3bb91b20d0 Merge pull request #2663 from AlanCoding/unified_jobs_flake
Get rid of star import in unified_jobs.py

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-13 15:37:04 +00:00
softwarefactory-project-zuul[bot]
268b1ff436 Merge pull request #2662 from wwitzel3/devel
move root views to a new file

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-13 15:33:23 +00:00
Hideki Saito
f16a72081a Fixed issue where admin_user and password change are not reflected
- No effect of changing admin_user and admin_password when using docker-compose #2666
2018-11-13 18:21:18 +09:00
Ryan Petrello
cceac8d907 support PKCS8-formatted keys to enable FIPS compliance
see: https://access.redhat.com/solutions/1519083
2018-11-12 16:21:57 -05:00
adamscmRH
8d012de3e2 monkey-patch _digest for fips 2018-11-12 15:32:23 -05:00
AlanCoding
cee7ac9511 get rid of star import in unified_jobs.py 2018-11-12 13:38:58 -05:00
softwarefactory-project-zuul[bot]
36faaf4720 Merge pull request #2656 from iplaman/fix-postgres96
Fix default Postgresql version to 9.6

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-12 18:34:48 +00:00
Wayne Witzel III
33b8e7624b move root views to a new file 2018-11-12 12:36:16 -05:00
Idan Bidani
a213e01491 updating default Postgresql version to 9.6 2018-11-10 18:27:22 -05:00
softwarefactory-project-zuul[bot]
a00ed8e297 Merge pull request #2646 from jlmitch5/fixCredentialKindTranslation
Fix credential kind translation

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-10 03:06:38 +00:00
softwarefactory-project-zuul[bot]
aeaebcd81a Merge pull request #2572 from AlanCoding/coalesce
Coalesce host and group Activity Stream deletion entries

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-09 21:04:07 +00:00
softwarefactory-project-zuul[bot]
b5849f3712 Merge pull request #2639 from ansible/enhancement-inv_summary
Update UJT with inv summary field and update schedule base route to include object to be scheduled

Reviewed-by: Chris Meyers
             https://github.com/chrismeyersfsu
2018-11-09 19:06:36 +00:00
AlanCoding
658f87953e coalesce data without setting 2018-11-09 14:00:45 -05:00
AlanCoding
5562e636ea Coalesce host and group A.S. deletion entries 2018-11-09 13:58:31 -05:00
John Mitchell
80fcdae50b update credential type usage to kind instead of name 2018-11-09 10:36:09 -05:00
softwarefactory-project-zuul[bot]
2c5f209996 Merge pull request #2631 from kialam/fix/2533-dashboard-hover
Fix Dashboard hover-over action so it shows the correct info

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-09 05:27:11 +00:00
softwarefactory-project-zuul[bot]
692b55311e Merge pull request #2645 from ryanpetrello/unbound
fix an unbound variable

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 20:40:57 +00:00
Ryan Petrello
10667fc855 fix an unbound variable
see: https://github.com/ansible/awx/issues/2642
2018-11-08 15:14:47 -05:00
softwarefactory-project-zuul[bot]
1fc33b551d Merge pull request #2643 from kialam/fix/2606
Fix DETAILS link in WF viz not working until after job has run

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 20:14:45 +00:00
kialam
729256c3d1 Fix DETAILS link in WF viz not working until after job has run
- Make additional GET request if we need it in order to surface the job type so that we can properly redirect the user to the detail page.
2018-11-08 13:23:32 -05:00
chris meyers
23e1feba96 fill in summary inv for sched only when needed
* If a schedule inventory exists, then the inv summary will be filled in
by our generic summary filler. Otherwise, if the related unified job has
an inventory, fill in the inv summary with that, explicitly.
2018-11-08 12:43:47 -05:00
John Mitchell
e3614c3012 update to using new inventory id from summary fields of UJT if applicable 2018-11-08 12:22:02 -05:00
chris meyers
f37391397e add inventory to schedule summary fields
* Use the same logic that related inventory uses. If there is an
inventory that overrides the inventory on the unified job template then
summarize that field. Else, use the inventory on the unified job template
being scheduled.
2018-11-08 11:47:31 -05:00
softwarefactory-project-zuul[bot]
6a8454f748 Merge pull request #2352 from ansible/workflows_squared
[Feature] Allow use of workflows inside of workflows

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 16:28:37 +00:00
softwarefactory-project-zuul[bot]
d1328c7625 Merge pull request #2044 from cdvv7788/AWX-2030
Add command to revoke tokens (https://github.com/ansible/awx/issues/2030)

Reviewed-by: Cristian Vargas
             https://github.com/cdvv7788
2018-11-08 15:15:44 +00:00
John Mitchell
d1cce109fb update schedule base route to include resource being scheduled 2018-11-08 09:41:06 -05:00
kialam
32bd8b6473 Fix Dashboard hover-over action so it shows the correct info.
- Refactor our Dashboard directive to display the dashboard graph's x-axis according to the new changes in NVD3 v1.8.1. See https://github.com/nvd3-community/nvd3/blob/gh-pages/examples/lineChart.html for reference.
2018-11-08 09:15:54 -05:00
Ryan Petrello
001bd4ca59 resolve a few token revocation issues, and add tests 2018-11-08 08:15:24 -05:00
softwarefactory-project-zuul[bot]
541b503e06 Merge pull request #2634 from wwitzel3/views-inventory
inventory views

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 12:31:14 +00:00
Cristian Vargas
093c29e315 Add command to revoke tokens
Signed-off-by: Cristian Vargas <cristian@swapps.co>
2018-11-08 07:28:21 -05:00
softwarefactory-project-zuul[bot]
eec7d7199b Merge pull request #2614 from stokkie90/devel
Fixed bug where all group variables are not included

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 12:27:30 +00:00
Rick Stokkingreef
7dbb862673 Fixed test cases 2018-11-08 12:07:07 +01:00
softwarefactory-project-zuul[bot]
be1422d021 Merge pull request #2629 from ansible/org-view-ui-tests
Org view ui tests

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-08 06:32:59 +00:00
Wayne Witzel III
459ac0e5d9 inventory views 2018-11-07 22:06:33 -05:00
Wayne Witzel III
16c9d043c0 Merge pull request #2344 from wwitzel3/views-organization
organization views
2018-11-07 21:33:14 -05:00
Wayne Witzel III
1b465c4ed9 remove duplicate BaseUsersList 2018-11-07 18:18:41 -05:00
Wayne Witzel III
198a0db808 move organization views to their own file 2018-11-07 18:18:41 -05:00
softwarefactory-project-zuul[bot]
91dda0a164 Merge pull request #2625 from abedwardsw/feature/vmware_groupby_custom_field_excludes
update to latest vmware_inventory.py with support for groupby_custom_field_excludes

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 21:03:05 +00:00
softwarefactory-project-zuul[bot]
9341480209 Merge pull request #2627 from jakemcdermott/fix-form-cred-lookups
fix project and adhoc command form lookups when many credential types exist

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 20:43:36 +00:00
Adam Edwards
21877b3378 update to latest vmware_inventory.py with support for groupby_custom_field_excludes
e364d717cb/contrib/inventory/vmware_inventory.py

Signed-off-by: Adam Edwards <adam@middleware360.com>
2018-11-07 15:43:15 -05:00
softwarefactory-project-zuul[bot]
03169a96ef Merge pull request #2626 from jakemcdermott/fix-multicred
always recompile multicredential lists

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 20:27:20 +00:00
Marliana Lara
ebc3dbe7b6 Add source WF label to job details template
* Change label from "Parent WF" to "Source WF"
* Fix WF job result Firefox responsive style bugs
2018-11-07 13:22:41 -05:00
AlanCoding
1bed5d4af2 avoid nested on_commit use 2018-11-07 13:22:41 -05:00
AlanCoding
0783d86c6c adjust recursion error text 2018-11-07 13:22:41 -05:00
Marliana Lara
a3d5705cea Fix bug with workflow maker templates pagination and smart search 2018-11-07 13:22:40 -05:00
Marliana Lara
2ae8583a86 Fix Firefox labels bug 2018-11-07 13:22:40 -05:00
Marliana Lara
edda4bb265 Address PR review 2018-11-07 13:22:40 -05:00
AlanCoding
d068481aec link workflow job node based on job type, not UJT type 2018-11-07 13:22:40 -05:00
Marliana Lara
e20d8c8e81 Show workflow badge
Add Workflow tags

Hookup workflow details link

Add parent workflow and job explanation fields

Add workflow key icon to WF maker and WF results

Hookup wf prompting

Add wf key dropdown and hide wf info badge
2018-11-07 13:22:39 -05:00
Marliana Lara
f6cc351f7f Format workflow-chart directive code 2018-11-07 13:22:39 -05:00
Marliana Lara
c2d4887043 Include workflow jobs in workflow maker job templates list 2018-11-07 13:22:39 -05:00
AlanCoding
4428dbf1ff Allow use of role_level filter in UJT list 2018-11-07 13:22:39 -05:00
AlanCoding
e225489f43 workflows-in-workflows add docs and tests 2018-11-07 13:22:39 -05:00
AlanCoding
5169fe3484 safeguard against infinite loop if jobs have cycles 2018-11-07 13:22:38 -05:00
AlanCoding
01d1470544 workflow variables processing, recursion detection 2018-11-07 13:22:38 -05:00
AlanCoding
faa6ee47c5 allow use of workflows in workflows 2018-11-07 13:22:38 -05:00
softwarefactory-project-zuul[bot]
aa1d71148c Merge pull request #2583 from AlanCoding/superusers_manage
Do not block superusers with MANAGE_ORGANIZATION_AUTH setting

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-07 17:38:04 +00:00
Daniel Sami
e86ded6c68 removed namespace from user fixture 2018-11-07 09:02:00 -05:00
Daniel Sami
365bf4eb53 UI tests for org permission views 2018-11-07 09:01:23 -05:00
Jake McDermott
ceb9bfe486 fix adhoc command cred lookup when many credential types exist 2018-11-06 22:47:07 -05:00
Jake McDermott
0f85c867a0 fix project cred lookups when many credential types exist 2018-11-06 22:46:50 -05:00
Jake McDermott
f8a8186bd1 always recompile multicred lists 2018-11-06 20:58:08 -05:00
softwarefactory-project-zuul[bot]
33dfb6bf76 Merge pull request #2623 from ryanpetrello/this-is-the-end
properly validate cert data that happens to contain an END substring

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 21:14:26 +00:00
Ryan Petrello
28cd762dd7 properly validate cert data that happens to contain an END substring 2018-11-06 15:57:35 -05:00
softwarefactory-project-zuul[bot]
217cca47f5 Merge pull request #2620 from ansible/chrismeyersfsu-patch-1
Update saml.md

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 20:16:20 +00:00
softwarefactory-project-zuul[bot]
8faa5d8b7a Merge pull request #2591 from AlanCoding/no_more_ask
Implement deprecation of duplicated ask_ fields in job view

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 20:13:30 +00:00
softwarefactory-project-zuul[bot]
2c6711e183 Merge pull request #2618 from ryanpetrello/json-activity-stream-changes
send activity stream changes as raw JSON, not a JSON-ified string

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 19:45:19 +00:00
Chris Meyers
45328b6e6d Update saml.md 2018-11-06 14:45:10 -05:00
Ryan Petrello
1523feee91 send activity stream changes as raw JSON, not a JSON-ified string
see: https://github.com/ansible/awx/issues/2005
2018-11-06 14:28:57 -05:00
softwarefactory-project-zuul[bot]
856dc3645e Merge pull request #2605 from jakemcdermott/fix-2601
remove admin and member roles from organization->team permissions 

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 17:08:43 +00:00
Jake McDermott
5e4dd54112 remove admin and member roles from organization->team role assignment options 2018-11-06 11:52:19 -05:00
softwarefactory-project-zuul[bot]
5860689619 Merge pull request #2596 from jlmitch5/fixPermIssue
fix permission issue with regular user jt admins

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 16:40:58 +00:00
John Mitchell
da7834476b remove inadvertent scope variable that was added 2018-11-06 10:52:16 -05:00
John Mitchell
d5ba981515 remove inadvertent log statement 2018-11-06 10:50:15 -05:00
softwarefactory-project-zuul[bot]
afe07bd874 Merge pull request #2595 from ryanpetrello/fix-gce-json-creds
move from GCE_PEM_FILE_PATH to GCE_CREDENTIALS_FILE_PATH

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 15:09:11 +00:00
Rick Stokkingreef
f916bd7994 Fixed bug when all group vars are not included 2018-11-06 14:26:36 +01:00
softwarefactory-project-zuul[bot]
3fef7acaa8 Merge pull request #2457 from jakemcdermott/output-line-fixup
dont render blank output lines for some event types + adjust line wrapping

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-06 01:54:38 +00:00
Jake McDermott
95190c5509 remove unused constants 2018-11-05 20:13:00 -05:00
Jake McDermott
76e887f46d highlight entire row on hover 2018-11-05 20:12:52 -05:00
Jake McDermott
0c2b1b7747 don't compile html in real time 2018-11-05 20:12:44 -05:00
Jake McDermott
4c74c8c40c delete contents of slide array before reassigning 2018-11-05 20:12:37 -05:00
Jake McDermott
3a929919a3 enable expanded details for dynamic host events 2018-11-05 20:12:29 -05:00
Jake McDermott
c25af96c56 don't render events if stdout is zero-length string 2018-11-05 20:12:21 -05:00
Jake McDermott
f28f1e434d adjust output line wrapping 2018-11-05 20:12:12 -05:00
Jake McDermott
b3c5df193a don't render playbook_on_notify or runner_on_ok events if they have no stdout 2018-11-05 20:11:53 -05:00
John Mitchell
8645602b0a fix permission issue where regular users assigned jt admin could not add user jt roles they couldn't edit 2018-11-05 16:45:35 -05:00
Ryan Petrello
05156a5991 move from GCE_PEM_FILE_PATH to GCE_CREDENTIALS_FILE_PATH 2018-11-05 15:44:31 -05:00
softwarefactory-project-zuul[bot]
049d642df8 Merge pull request #1894 from AlanCoding/rm_sdonu
Remove unused project field

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-05 16:14:56 +00:00
AlanCoding
951ebf146a remove unused project field 2018-11-05 10:40:53 -05:00
AlanCoding
7a67e0f3d6 Implement deprecation of duplicated ask_ fields in job view 2018-11-05 10:38:55 -05:00
softwarefactory-project-zuul[bot]
37def8cf7c Merge pull request #2587 from jakemcdermott/fix-2563
remove admin and member roles from team->organizations role assignment options

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-05 15:13:18 +00:00
Jake McDermott
e4c28fed03 remove admin and member roles from team->organizations role assignment options 2018-11-04 22:59:03 -05:00
softwarefactory-project-zuul[bot]
f8b7259d7f Merge pull request #2543 from westfood/fix-helm-postgresql
Using new Helm parameters for PostgreSQL access.

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-02 18:56:15 +00:00
AlanCoding
6ae1e156c8 do not block superusers with MANAGE_ORGANIZATION_AUTH setting 2018-11-02 14:13:05 -04:00
softwarefactory-project-zuul[bot]
9a055dbf78 Merge pull request #2550 from matburt/runner_on_start
Add support for runner_on_start

Reviewed-by: Alan Rominger <arominge@redhat.com>
             https://github.com/AlanCoding
2018-11-02 15:10:07 +00:00
Matthew Jones
80ac44565a Make sure we reference the actual hostname 2018-11-02 10:37:58 -04:00
softwarefactory-project-zuul[bot]
a338199198 Merge pull request #2577 from AlanCoding/fix_grandparents
Fix bug where grandparent groups were excluded

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-02 14:36:48 +00:00
AlanCoding
47fc0a759f fix bug where grandparent groups were excluded 2018-11-02 10:10:38 -04:00
softwarefactory-project-zuul[bot]
5eb4b35508 Merge pull request #2547 from AlanCoding/decorator
Get rid of decorator dependency

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 20:39:30 +00:00
softwarefactory-project-zuul[bot]
d93eedaedb Merge pull request #2571 from ryanpetrello/devel
Merge remote-tracking branch 'release_3.3.1' into devel

Reviewed-by: https://github.com/softwarefactory-project-zuul[bot]
2018-11-01 16:48:46 +00:00
Ryan Petrello
a748a272fb Merge remote-tracking branch 'tower/release_3.3.1' into devel 2018-11-01 12:07:02 -04:00
AlanCoding
d8d710a83d get rid of decorator dependency 2018-10-31 11:37:10 -04:00
Matthew Jones
673068464a Add support for runner_on_start
This will be available in ansible 2.8
2018-10-30 13:23:55 -04:00
westfood
694e494484 Using new Helm parameters for PostgreSQL access. 2018-10-28 11:55:36 +01:00
Wayne Witzel III
c8e208dea7 Merge pull request #3074 from wwitzel3/release_3.3.1
fix typo in length
2018-10-17 17:10:41 -04:00
Wayne Witzel III
f2cec03900 fix typo in length 2018-10-17 16:34:24 -04:00
Wayne Witzel III
0b4e0678e9 Merge pull request #3070 from wwitzel3/release_3.3.1
better error handling when over limit
2018-10-17 09:09:01 -04:00
Wayne Witzel III
6e3b2a5c2d better error handling when over limit 2018-10-15 16:07:14 -04:00
Ryan Petrello
3d378077d9 Merge pull request #3066 from saito-hideki/issue/3064
[3.3.1] Add files and output of commands to gather with sosreport
2018-10-15 12:28:08 -04:00
Ryan Petrello
716d440a76 Merge pull request #3068 from ryanpetrello/fix-3039
fix a typo on the JT add page that breaks the custom venv field
2018-10-15 11:40:33 -04:00
Ryan Petrello
9d81727d16 fix a typo on the JT add page that breaks the custom venv field 2018-10-15 11:19:58 -04:00
Hideki Saito
d5626a4f3e [3.3.1] Add files and output of commands to gather with sosreport
- Fixed issue #3064
2018-10-15 11:40:51 +09:00
Ryan Petrello
6073e8e3b6 Merge pull request #3062 from ryanpetrello/fix-3043
minor nit for https://github.com/ansible/tower/pull/3060
2018-10-12 16:56:23 -04:00
Ryan Petrello
867ff5da71 minor nit for https://github.com/ansible/tower/pull/3060 2018-10-12 16:17:14 -04:00
Ryan Petrello
c8b2ca7fed Merge pull request #3060 from ryanpetrello/fix-3043
properly handle AnsibleVaultEncryptedUnicode objects in the callback
2018-10-12 15:42:11 -04:00
Ryan Petrello
d4e3127fb4 properly handle AnsibleVaultEncryptedUnicode objects in the callback 2018-10-12 12:29:46 -04:00
Chris Meyers
503a47c509 Merge pull request #3054 from chrismeyersfsu/fix-ldap_posix_group_type
fix issue with ldap queries containing unicode
2018-10-12 09:14:09 -05:00
Ryan Petrello
71577bb00d Merge pull request #3052 from wwitzel3/bump-asgi_amqp
Use latest version of asgi_amqp
2018-10-10 16:07:57 -04:00
chris meyers
cfb58eb145 fix issue with ldap queries containing unicode 2018-10-10 12:32:27 -04:00
Wayne Witzel III
5994c35975 Use latest version of asgi_amqp 2018-10-10 11:33:11 -04:00
Michael Abashian
3aa07baf26 Merge pull request #3035 from mabashian/3010-extra-vars
Fixes bug where schedule extra vars were not being displayed in the edit form
2018-09-28 09:24:41 -04:00
Chris Meyers
f1c53fcd85 Merge pull request #3034 from chrismeyersfsu/fix-ldap_params
at migration time, validate ldap group type params
2018-09-28 08:53:42 -04:00
mabashian
6c98e6c3a0 Actually fix extra vars on edit schedule. This commit takes into account survey question answers which need to get pulled out of extra vars and displayed in the prompt. 2018-09-27 16:49:23 -04:00
mabashian
8aec4ed72e Fixes bug where schedule extra vars were not being displayed in the edit form 2018-09-27 16:30:10 -04:00
chris meyers
0a0cdc2e21 at migration time, validate ldap group type params
* Previously, we had logic in the API to ensure that ldap group type
params, when changed, align with ldap group type class init
expectations. However, we did not have this logic in the migrations.
This PR adds the validation check to migrations.
2018-09-27 12:18:39 -04:00
Ryan Petrello
f0776d6838 Merge pull request #3026 from ryanpetrello/fix-3004
properly support deprecated `Authorization: Token xyz`
2018-09-24 15:33:56 -04:00
Ryan Petrello
9de63832ce properly support deprecated Authorization: Token xyz 2018-09-24 15:16:09 -04:00
John Mitchell
70629ef7f3 Merge pull request #2997 from jlmitch5/fixPageSelector
fix filter page size selector
2018-09-13 10:42:50 -04:00
John Mitchell
1d8bb47726 fix filter page size selector 2018-09-12 17:31:10 -04:00
Matthew Jones
5e16c72d30 Merge pull request #2988 from mabashian/2982-wfjt-list-select
Fixes bug in wfjt node form where rows weren't remaining selected after being clicked
2018-09-12 14:00:32 -04:00
Matthew Jones
02f709f8d1 Merge pull request #2995 from jlmitch5/lodashFindUpdate
update syntax of lodash find call
2018-09-12 13:59:59 -04:00
Shane McDonald
90bd27f5a8 Whitespace fix
I’m not actually this pedantic, I just need something to tag.
2018-09-12 13:41:56 -04:00
John Mitchell
593ab90f92 update syntax of lodash find call 2018-09-12 10:54:17 -04:00
mabashian
27c06a7285 Fixes bug in wfjt node form where rows weren't remaining selected after being clicked 2018-09-11 16:34:02 -04:00
Ryan Petrello
b2c755ba76 Merge pull request #2980 from rooftopcellist/amend_changelog_networkui
rm network ui from changelog
2018-09-11 10:03:00 -04:00
Ryan Petrello
c88cab7d31 Merge pull request #2983 from ansible/deprecated_facts
deprecate fact endpoints
2018-09-11 10:02:25 -04:00
chris meyers
f82f4a9993 deprecate fact endpoints and commands 2018-09-07 17:46:33 -04:00
adamscmRH
5a6f1a342f rm network ui from changelog 2018-09-07 15:04:34 -04:00
1055 changed files with 27285 additions and 22270 deletions

.gitignore vendored

@@ -1,3 +1,7 @@
# Ignore generated schema
swagger.json
schema.json
reference-schema.json
# Tags
.tags
@@ -58,6 +62,7 @@ __pycache__
# UI build flag files
awx/ui/.deps_built
awx/ui/.release_built
awx/ui/.release_deps_built
# Testing
.cache


@@ -34,10 +34,11 @@ Have questions about this document or anything not covered here? Come chat with
## Things to know prior to submitting code
- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
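A minimal sketch of the submission workflow described in the list above, assuming your fork is the `origin` remote and the upstream AWX repository has been added as a remote named `upstream` (the remote names and the branch name are assumptions for illustration):

```bash
# Keep the branch current with devel via rebase, not merge,
# so no merge commits end up in the pull request.
git fetch upstream
git rebase upstream/devel

# Sign off each commit to satisfy the DCO check.
git commit --signoff -m "fix dispatcher worker count"

# Safer than --force: this refuses to push if someone else
# has updated the remote branch in the meantime.
git push --force-with-lease origin my-feature-branch
```

If `--force-with-lease` rejects the push, fetch and review what changed on the remote branch before forcing anything.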
## Setting up your development environment
@@ -49,7 +50,7 @@ The AWX development environment workflow and toolchain is based on Docker, and t
Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker themselves maintain a separate repo that tracks more closely to the latest releases.
For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows)
respectively.
For Linux platforms, refer to the following from Docker:
@@ -82,12 +83,10 @@ If you're not using Docker for Mac, or Docker for Windows, you may need, or choo
(host)$ pip install docker-compose
```
#### Node and npm
#### Frontend Development
The AWX UI requires the following:
See [the ui development documentation](awx/ui/README.md).
- Node 8.x LTS
- NPM 6.x LTS
### Build the environment
@@ -137,21 +136,21 @@ Run the following to build the AWX UI:
```
### Running the environment
#### Start the containers
Start the development containers by running the following:
```bash
(host)$ make docker-compose
```
The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django and the front end build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:
```bash
# List running containers
(host)$ docker ps
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@@ -219,7 +218,7 @@ If you want to start and use the development environment, you'll first need to b
```
The above will do all the setup tasks, including running database migrations, so it may take a couple minutes.
Now you can start each service individually, or start all services in a pre-configured tmux session like so:
```bash
@@ -248,9 +247,9 @@ Before you can log into AWX, you need to create an admin user. With this user yo
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
@@ -276,7 +275,7 @@ in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
### Accessing the AWX web interface
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).
@@ -289,7 +288,7 @@ When necessary, remove any AWX containers and images by running the following:
```bash
(host)$ make docker-clean
```
## What should I work on?
For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).
@@ -331,6 +330,23 @@ Sometimes it might take us a while to fully review your PR. We try to keep the `
All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.
## PR Checks run by Zuul
Zuul jobs for awx are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.
Zuul runs the following checks that must pass:
1) `tox-awx-api-lint`
2) `tox-awx-ui-lint`
3) `tox-awx-api`
4) `tox-awx-ui`
5) `tox-awx-swagger`
Zuul runs the following checks that are non-voting (they can fail without blocking the PR, but serve to inform reviewers):
1) `tox-awx-detect-schema-change`
This check generates the schema and diffs it against a reference copy of the `devel` version of the schema.
Reviewers should inspect the `job-output.txt.gz` related to the check if there is a failure (grep for `diff -u -b` to find the beginning of the diff).
If the schema change is expected and makes sense in relation to the changes made by the PR, then you are good to go!
If not, the schema changes should be fixed, but this decision must be enforced by reviewers.
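A hedged sketch of that inspection step; the build-log URL below is a placeholder, so substitute the `job-output.txt.gz` link from the failed check's results page:

```bash
# Placeholder URL: copy the real job-output.txt.gz link from the Zuul results page.
curl -sSL -o job-output.txt.gz "https://zuul.example.org/build/1234/job-output.txt.gz"

# Locate the beginning of the schema diff
# (-b ignored whitespace differences when the diff was generated).
zgrep -n 'diff -u -b' job-output.txt.gz

# Page through the diff hunks that follow the match.
zgrep -A 40 'diff -u -b' job-output.txt.gz | less
```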
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issues, we ask that you please view our [Issues guide](./ISSUES.md).


@@ -18,7 +18,7 @@ $ pip install --upgrade ansible-tower-cli
The AWX host URL, user, and password must be set for the AWX instance to be exported:
```
$ tower-cli config host <old-awx-host.example.com>
$ tower-cli config host http://<old-awx-host.example.com>
$ tower-cli config username <user>
$ tower-cli config password <pass>
```
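The export command itself falls outside this hunk. As a sketch only: tower-cli pairs `send` with a `receive` subcommand in recent releases; confirm it exists in your installed version (`tower-cli receive --help`) before relying on it:

```bash
# Dump all exportable assets from the configured (old) host into assets.json.
# Assumes a tower-cli version that ships the `receive` subcommand.
tower-cli receive --all > assets.json
```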
@@ -62,7 +62,7 @@ For other install methods, refer to the [Install.md](https://github.com/ansible/
Configure tower-cli for your new AWX host as shown earlier. Import from a JSON file named assets.json
```
$ tower-cli config host <new-awx-host.example.com>
$ tower-cli config host http://<new-awx-host.example.com>
$ tower-cli config username <user>
$ tower-cli config password <pass>
$ tower-cli send assets.json


@@ -27,7 +27,7 @@ This document provides a guide for installing AWX.
- [Start the build](#start-the-build-1)
- [Accessing AWX](#accessing-awx-1)
- [SSL Termination](#ssl-termination)
- [Docker or Docker Compose](#docker-or-docker-compose)
- [Docker Compose](#docker-compose)
- [Prerequisites](#prerequisites-3)
- [Pre-build steps](#pre-build-steps-2)
- [Deploying to a remote host](#deploying-to-a-remote-host)
@@ -73,7 +73,7 @@ The system that runs the AWX service will need to satisfy the following requirem
- At least 2 cpu cores
- At least 20GB of space
- Running Docker, Openshift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 9.4.
- If you choose to use an external PostgreSQL database, please note that the minimum version is 9.6+.
### AWX Tunables
@@ -81,14 +81,14 @@ The system that runs the AWX service will need to satisfy the following requirem
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster, docker-compose or a standalone Docker daemon. The remainder of this document will walk you through the process of building the images, and deploying them to either platform.
We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster or docker-compose. The remainder of this document will walk you through the process of building the images, and deploying them to either platform.
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
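Concretely, the build-and-deploy step described above reduces to a single playbook run from the installer directory; a sketch, assuming you have already set the platform variables in the inventory file:

```bash
cd installer
# Select OpenShift, Kubernetes, or docker-compose by editing ./inventory, then:
ansible-playbook -i inventory install.yml
```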
In the sections below, you'll find deployment details and instructions for each platform:
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
- [Docker or Docker Compose](#docker-or-docker-compose).
- [Docker Compose](#docker-compose).
### Official vs Building Images
@@ -391,14 +391,13 @@ If your provider is able to allocate an IP Address from the Ingress controller t
Unlike Openshift's `Route` the Kubernetes `Ingress` doesn't yet handle SSL termination. As such the default configuration will only expose AWX through HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).
## Docker or Docker-Compose
## Docker-Compose
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group, refer to the documentation for details)
- [docker-py](https://github.com/docker/docker-py) Python module.
If you're installing using Docker Compose, you'll need [Docker Compose](https://docs.docker.com/compose/install/).
- [Docker Compose](https://docs.docker.com/compose/install/).
### Pre-build steps
@@ -441,13 +440,13 @@ Before starting the build process, review the [inventory](./installer/inventory)
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. Defaults to *80*.
*use_docker_compose*
*ssl_certificate*
> Switch to ``true`` to use Docker Compose instead of the standalone Docker install.
> Optionally, provide the path to a file that contains a certificate and its private key.
*docker_compose_dir*
When using docker-compose, the `docker-compose.yml` file will be created there (default `/var/lib/awx`).
When using docker-compose, the `docker-compose.yml` file will be created there (default `/tmp/awxcompose`).
*ca_trust_dir*
@@ -527,7 +526,7 @@ After the playbook run completes, Docker will report up to 5 running containers.
```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e240ed8209cd awx_task:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 8052/tcp awx_task
1cfd02601690 awx_web:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 0.0.0.0:80->8052/tcp awx_web
1cfd02601690 awx_web:1.0.0.8 "/tini -- /bin/sh ..." 2 minutes ago Up About a minute 0.0.0.0:443->8052/tcp awx_web
55a552142bcd memcached:alpine "docker-entrypoint..." 2 minutes ago Up 2 minutes 11211/tcp memcached
84011c072aad rabbitmq:3 "docker-entrypoint..." 2 minutes ago Up 2 minutes 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
97e196120ab3 postgres:9.6 "docker-entrypoint..." 2 minutes ago Up 2 minutes 5432/tcp postgres

Makefile

@@ -1,4 +1,4 @@
PYTHON ?= python
PYTHON ?= python3
PYTHON_VERSION = $(shell $(PYTHON) -c "from distutils.sysconfig import get_python_version; print(get_python_version())")
SITELIB=$(shell $(PYTHON) -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")
OFFICIAL ?= no
@@ -11,7 +11,6 @@ GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
IMAGE_REPOSITORY_AUTH ?=
IMAGE_REPOSITORY_BASE ?= https://gcr.io
VERSION := $(shell cat VERSION)
# NOTE: This defaults the container image version to the branch that's active
@@ -54,6 +53,7 @@ WHEEL_FILE ?= $(WHEEL_NAME)-py2-none-any.whl
# UI flag files
UI_DEPS_FLAG_FILE = awx/ui/.deps_built
UI_RELEASE_DEPS_FLAG_FILE = awx/ui/.release_deps_built
UI_RELEASE_FLAG_FILE = awx/ui/.release_built
I18N_FLAG_FILE = .i18n_built
@@ -74,6 +74,7 @@ clean-ui:
rm -rf awx/ui/test/e2e/reports/
rm -rf awx/ui/client/languages/
rm -f $(UI_DEPS_FLAG_FILE)
rm -f $(UI_RELEASE_DEPS_FLAG_FILE)
rm -f $(UI_RELEASE_FLAG_FILE)
clean-tmp:
@@ -85,6 +86,11 @@ clean-venv:
clean-dist:
rm -rf dist
clean-schema:
rm -rf swagger.json
rm -rf schema.json
rm -rf reference-schema.json
# Remove temporary build files, compiled Python files.
clean: clean-ui clean-dist
rm -rf awx/public
@@ -116,23 +122,31 @@ virtualenv_ansible:
mkdir $(VENV_BASE); \
fi; \
if [ ! -d "$(VENV_BASE)/ansible" ]; then \
virtualenv --system-site-packages $(VENV_BASE)/ansible && \
virtualenv -p python --system-site-packages $(VENV_BASE)/ansible && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed six packaging appdirs && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==36.0.1 && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
fi; \
fi
virtualenv_ansible_py3:
if [ "$(VENV_BASE)" ]; then \
if [ ! -d "$(VENV_BASE)" ]; then \
mkdir $(VENV_BASE); \
fi; \
if [ ! -d "$(VENV_BASE)/ansible" ]; then \
$(PYTHON) -m venv --system-site-packages $(VENV_BASE)/ansible; \
fi; \
fi
virtualenv_awx:
if [ "$(VENV_BASE)" ]; then \
if [ ! -d "$(VENV_BASE)" ]; then \
mkdir $(VENV_BASE); \
fi; \
if [ ! -d "$(VENV_BASE)/awx" ]; then \
virtualenv --system-site-packages $(VENV_BASE)/awx && \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed six packaging appdirs && \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==36.0.1 && \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
$(PYTHON) -m venv --system-site-packages $(VENV_BASE)/awx; \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed docutils==0.14; \
fi; \
fi
@@ -144,20 +158,19 @@ requirements_ansible: virtualenv_ansible
fi
$(VENV_BASE)/ansible/bin/pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt
requirements_ansible_py3: virtualenv_ansible_py3
if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) --ignore-installed -r /dev/stdin ; \
else \
cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) --ignore-installed -r /dev/stdin ; \
fi
$(VENV_BASE)/ansible/bin/pip3 uninstall --yes -r requirements/requirements_ansible_uninstall.txt
requirements_ansible_dev:
if [ "$(VENV_BASE)" ]; then \
$(VENV_BASE)/ansible/bin/pip install pytest mock; \
fi
requirements_isolated:
if [ ! -d "$(VENV_BASE)/awx" ]; then \
virtualenv --system-site-packages $(VENV_BASE)/awx && \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed six packaging appdirs && \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
fi;
$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_isolated.txt
# Install third-party requirements needed for AWX's environment.
requirements_awx: virtualenv_awx
if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
@@ -165,6 +178,7 @@ requirements_awx: virtualenv_awx
else \
cat requirements/requirements.txt requirements/requirements_git.txt | $(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) --ignore-installed -r /dev/stdin ; \
fi
echo "include-system-site-packages = true" >> $(VENV_BASE)/awx/lib/python$(PYTHON_VERSION)/pyvenv.cfg
#$(VENV_BASE)/awx/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt
requirements_awx_dev:
@@ -191,7 +205,7 @@ version_file:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
python -c "import awx as awx; print awx.__version__" > /var/lib/awx/.awx_version; \
python -c "import awx; print(awx.__version__)" > /var/lib/awx/.awx_version; \
# Do any one-time init tasks.
comma := ,
@@ -204,7 +218,7 @@ init:
if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat >> /root/.ssh/authorized_keys'; \
$(MANAGEMENT_COMMAND) generate_isolated_key > /awx_devel/awx/main/expect/authorized_keys; \
fi;
# Refresh development environment after pulling new code.
@@ -255,7 +269,7 @@ supervisor:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
supervisord --configuration /supervisor.conf --pidfile=/tmp/supervisor_pid
supervisord --pidfile=/tmp/supervisor_pid
# Alternate approach to tmux to run all development tasks specified in
# Procfile.
@@ -338,18 +352,21 @@ pyflakes: reports
pylint: reports
@(set -o pipefail && $@ | reports/$@.report)
genschema: reports
$(MAKE) swagger PYTEST_ARGS="--genschema"
swagger: reports
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
(set -o pipefail && py.test awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
(set -o pipefail && py.test $(PYTEST_ARGS) awx/conf/tests/functional awx/main/tests/functional/api awx/main/tests/docs --release=$(VERSION_TARGET) | tee reports/$@.report)
check: flake8 pep8 # pyflakes pylint
awx-link:
cp -R /tmp/awx.egg-info /awx_devel/ || true
sed -i "s/placeholder/$(shell git describe --long | sed 's/\./\\./g')/" /awx_devel/awx.egg-info/PKG-INFO
cp -f /tmp/awx.egg-link /venv/awx/lib/python2.7/site-packages/awx.egg-link
cp -f /tmp/awx.egg-link /venv/awx/lib/python$(PYTHON_VERSION)/site-packages/awx.egg-link
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
@@ -446,7 +463,7 @@ messages:
# generate l10n .json .mo
languages: $(I18N_FLAG_FILE)
$(I18N_FLAG_FILE): $(UI_DEPS_FLAG_FILE)
$(I18N_FLAG_FILE): $(UI_RELEASE_DEPS_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui run languages
$(PYTHON) tools/scripts/compilemessages.py
touch $(I18N_FLAG_FILE)
@@ -454,13 +471,31 @@ $(I18N_FLAG_FILE): $(UI_DEPS_FLAG_FILE)
# End l10n TASKS
# --------------------------------------
# UI TASKS
# UI RELEASE TASKS
# --------------------------------------
ui-release: $(UI_RELEASE_FLAG_FILE)
$(UI_RELEASE_FLAG_FILE): $(I18N_FLAG_FILE) $(UI_RELEASE_DEPS_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui run build-release
touch $(UI_RELEASE_FLAG_FILE)
$(UI_RELEASE_DEPS_FLAG_FILE):
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=1 $(NPM_BIN) --unsafe-perm --prefix awx/ui ci --no-save awx/ui
touch $(UI_RELEASE_DEPS_FLAG_FILE)
# END UI RELEASE TASKS
# --------------------------------------
# UI TASKS
# --------------------------------------
ui-deps: $(UI_DEPS_FLAG_FILE)
$(UI_DEPS_FLAG_FILE):
$(NPM_BIN) --unsafe-perm --prefix awx/ui install --no-save awx/ui
@if [ -f ${UI_RELEASE_DEPS_FLAG_FILE} ]; then \
rm -rf awx/ui/node_modules; \
rm -f ${UI_RELEASE_DEPS_FLAG_FILE}; \
fi; \
$(NPM_BIN) --unsafe-perm --prefix awx/ui ci --no-save awx/ui
touch $(UI_DEPS_FLAG_FILE)
ui-docker-machine: $(UI_DEPS_FLAG_FILE)
@@ -474,12 +509,6 @@ ui-docker: $(UI_DEPS_FLAG_FILE)
ui-devel: $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui run build-devel -- $(MAKEFLAGS)
ui-release: $(UI_RELEASE_FLAG_FILE)
$(UI_RELEASE_FLAG_FILE): $(I18N_FLAG_FILE) $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui run build-release
touch $(UI_RELEASE_FLAG_FILE)
ui-test: $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui run test
@@ -490,9 +519,6 @@ ui-test-ci: $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui run test:ci
$(NPM_BIN) --prefix awx/ui run unit
testjs_ci:
echo "Update UI unittests later" #ui-test-ci
jshint: $(UI_DEPS_FLAG_FILE)
$(NPM_BIN) run --prefix awx/ui jshint
$(NPM_BIN) run --prefix awx/ui lint
@@ -540,12 +566,6 @@ docker-isolated:
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml create
docker start tools_awx_1
docker start tools_isolated_1
echo "__version__ = '`git describe --long | cut -d - -f 1-1`'" | docker exec -i tools_isolated_1 /bin/bash -c "cat > /venv/awx/lib/python2.7/site-packages/awx.py"
if [ "`docker exec -i -t tools_isolated_1 cat /root/.ssh/authorized_keys`" == "`docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub`" ]; then \
echo "SSH keys already copied to isolated instance"; \
else \
docker exec "tools_isolated_1" bash -c "mkdir -p /root/.ssh && rm -f /root/.ssh/authorized_keys && echo $$(docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"; \
fi
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
# Docker Compose Development environment
@@ -564,6 +584,16 @@ docker-compose-runtest:
docker-compose-build-swagger:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh swagger
docker-compose-genschema:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh genschema
mv swagger.json schema.json
docker-compose-detect-schema-change:
$(MAKE) docker-compose-genschema
curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
# Ignore differences in whitespace with -b
diff -u -b reference-schema.json schema.json
docker-compose-clean:
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm -w /awx_devel --service-ports awx make clean
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose rm -sf
@@ -600,7 +630,6 @@ docker-compose-cluster-elk: docker-auth
minishift-dev:
ansible-playbook -i localhost, -e devtree_directory=$(CURDIR) tools/clusterdevel/start_minishift_dev.yml
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1


@@ -1 +1 @@
2.1.0
4.0.0


@@ -21,6 +21,48 @@ except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
import django
from django.utils.encoding import force_bytes
from django.db.backends.base.schema import BaseDatabaseSchemaEditor
from django.db.backends.base import schema
HAS_DJANGO = True
except ImportError:
HAS_DJANGO = False
if HAS_DJANGO is True:
# This line exists to make sure we don't regress on FIPS support if we
# upgrade Django; if you're upgrading Django and see this error,
# update the version check below, and confirm that FIPS still works.
if django.__version__ != '1.11.16':
raise RuntimeError("Django version other than 1.11.16 detected {}. \
Subclassing BaseDatabaseSchemaEditor is known to work for Django 1.11.16 \
and may not work in newer Django versions.".format(django.__version__))
class FipsBaseDatabaseSchemaEditor(BaseDatabaseSchemaEditor):
@classmethod
def _digest(cls, *args):
"""
Generates a 32-bit digest of a set of arguments that can be used to
shorten identifying names.
"""
try:
h = hashlib.md5()
except ValueError:
h = hashlib.md5(usedforsecurity=False)
for arg in args:
h.update(force_bytes(arg))
return h.hexdigest()[:8]
schema.BaseDatabaseSchemaEditor = FipsBaseDatabaseSchemaEditor
def find_commands(management_dir):
# Modified version of function from django/core/management/__init__.py.
command_dir = os.path.join(management_dir, 'commands')
@@ -49,16 +91,6 @@ def prepare_env():
# Monkeypatch Django find_commands to also work with .pyc files.
import django.core.management
django.core.management.find_commands = find_commands
# Fixup sys.modules reference to django.utils.six to allow jsonfield to
# work when using Django 1.4.
import django.utils
try:
import django.utils.six
except ImportError: # pragma: no cover
import six
sys.modules['django.utils.six'] = sys.modules['six']
django.utils.six = sys.modules['django.utils.six']
from django.utils import six # noqa
# Use the AWX_TEST_DATABASE_* environment variables to specify the test
# database settings to use when management command is run as an external
# program via unit tests.


@@ -43,7 +43,7 @@ register(
help_text=_('Dictionary for customizing OAuth 2 timeouts, available items are '
'`ACCESS_TOKEN_EXPIRE_SECONDS`, the duration of access tokens in the number '
'of seconds, and `AUTHORIZATION_CODE_EXPIRE_SECONDS`, the duration of '
'authorization grants in the number of seconds.'),
'authorization codes in the number of seconds.'),
category=_('Authentication'),
category_slug='authentication',
)


@@ -101,6 +101,10 @@ class DeprecatedCredentialField(serializers.IntegerField):
super(DeprecatedCredentialField, self).__init__(**kwargs)
def to_internal_value(self, pk):
try:
pk = int(pk)
except ValueError:
self.fail('invalid')
try:
Credential.objects.get(pk=pk)
except ObjectDoesNotExist:


@@ -25,7 +25,6 @@ from rest_framework.filters import BaseFilterBackend
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.utils.db import get_all_field_names
from awx.main.models.credential import CredentialType
from awx.main.models.rbac import RoleAncestorEntry
class V1CredentialFilterBackend(BaseFilterBackend):
@@ -66,7 +65,7 @@ class TypeFilterBackend(BaseFilterBackend):
model = queryset.model
model_type = get_type_for_model(model)
if 'polymorphic_ctype' in get_all_field_names(model):
types_pks = set([v for k,v in types_map.items() if k in types])
types_pks = set([v for k, v in types_map.items() if k in types])
queryset = queryset.filter(polymorphic_ctype_id__in=types_pks)
elif model_type in types:
queryset = queryset
@@ -193,7 +192,7 @@ class FieldLookupBackend(BaseFilterBackend):
def value_to_python(self, model, lookup, value):
try:
lookup = lookup.encode("ascii")
lookup.encode("ascii")
except UnicodeEncodeError:
raise ValueError("%r is not an allowed field name. Must be ascii encodable." % lookup)
@@ -347,12 +346,12 @@ class FieldLookupBackend(BaseFilterBackend):
else:
args.append(Q(**{k:v}))
for role_name in role_filters:
if not hasattr(queryset.model, 'accessible_pk_qs'):
raise ParseError(_(
'Cannot apply role_level filter to this list because its model '
'does not use roles for access control.'))
args.append(
Q(pk__in=RoleAncestorEntry.objects.filter(
ancestor__in=request.user.roles.all(),
content_type_id=ContentType.objects.get_for_model(queryset.model).id,
role_field=role_name
).values_list('object_id').distinct())
Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name))
)
if or_filters:
q = Q()
@@ -364,12 +363,12 @@ class FieldLookupBackend(BaseFilterBackend):
args.append(q)
if search_filters and search_filter_relation == 'OR':
q = Q()
for term, constrains in search_filters.iteritems():
for term, constrains in search_filters.items():
for constrain in constrains:
q |= Q(**{constrain: term})
args.append(q)
elif search_filters and search_filter_relation == 'AND':
for term, constrains in search_filters.iteritems():
for term, constrains in search_filters.items():
q_chain = Q()
for constrain in constrains:
q_chain |= Q(**{constrain: term})


@@ -5,8 +5,7 @@
import inspect
import logging
import time
import six
import urllib
import urllib.parse
# Django
from django.conf import settings
@@ -32,14 +31,19 @@ from rest_framework.permissions import AllowAny
from rest_framework.renderers import StaticHTMLRenderer, JSONRenderer
from rest_framework.negotiation import DefaultContentNegotiation
# cryptography
from cryptography.fernet import InvalidToken
# AWX
from awx.api.filters import FieldLookupBackend
from awx.main.models import * # noqa
from awx.main.models import (
UnifiedJob, UnifiedJobTemplate, User, Role
)
from awx.main.access import access_registry
from awx.main.utils import * # noqa
from awx.main.utils import (
camelcase_to_underscore,
get_search_fields,
getattrd,
get_object_or_400,
decrypt_field
)
from awx.main.utils.db import get_all_field_names
from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer, UserSerializer
from awx.api.versioning import URLPathVersioning, get_request_version
@@ -56,7 +60,7 @@ __all__ = ['APIView', 'GenericAPIView', 'ListAPIView', 'SimpleListAPIView',
'ParentMixin',
'DeleteLastUnattachLabelMixin',
'SubListAttachDetachAPIView',
'CopyAPIView']
'CopyAPIView', 'BaseUsersList',]
logger = logging.getLogger('awx.api.generics')
analytics_logger = logging.getLogger('awx.analytics.performance')
@@ -90,12 +94,14 @@ class LoggedLoginView(auth_views.LoginView):
logger.info(smart_text(u"User {} logged in.".format(self.request.user.username)))
ret.set_cookie('userLoggedIn', 'true')
current_user = UserSerializer(self.request.user)
current_user = JSONRenderer().render(current_user.data)
current_user = urllib.quote('%s' % current_user, '')
ret.set_cookie('current_user', current_user)
current_user = smart_text(JSONRenderer().render(current_user.data))
current_user = urllib.parse.quote('%s' % current_user, '')
ret.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
return ret
else:
if 'username' in self.request.POST:
logger.warn(smart_text(u"Login failed for user {} from {}".format(self.request.POST.get('username'),request.META.get('REMOTE_ADDR', None))))
ret.status_code = 401
return ret
@@ -305,7 +311,7 @@ class APIView(views.APIView):
# submitted data was rejected.
request_method = getattr(self, '_raw_data_request_method', None)
response_status = getattr(self, '_raw_data_response_status', 0)
if request_method in ('POST', 'PUT', 'PATCH') and response_status in xrange(400, 500):
if request_method in ('POST', 'PUT', 'PATCH') and response_status in range(400, 500):
return self.request.data.copy()
return data
@@ -348,7 +354,7 @@ class GenericAPIView(generics.GenericAPIView, APIView):
# form.
if hasattr(self, '_raw_data_form_marker'):
# Always remove read only fields from serializer.
for name, field in serializer.fields.items():
for name, field in list(serializer.fields.items()):
if getattr(field, 'read_only', None):
del serializer.fields[name]
serializer._data = self.update_raw_data(serializer.data)
@@ -552,9 +558,8 @@ class SubListDestroyAPIView(DestroyAPIView, SubListAPIView):
def perform_list_destroy(self, instance_list):
if self.check_sub_obj_permission:
# Check permissions for all before deleting, avoiding half-deleted lists
for instance in instance_list:
if self.has_delete_permission(instance):
if not self.has_delete_permission(instance):
raise PermissionDenied()
for instance in instance_list:
self.perform_destroy(instance, check_permission=False)
@@ -749,7 +754,7 @@ class SubListAttachDetachAPIView(SubListCreateAttachDetachAPIView):
def update_raw_data(self, data):
request_method = getattr(self, '_raw_data_request_method', None)
response_status = getattr(self, '_raw_data_response_status', 0)
if request_method == 'POST' and response_status in xrange(400, 500):
if request_method == 'POST' and response_status in range(400, 500):
return super(SubListAttachDetachAPIView, self).update_raw_data(data)
return {'id': None}
@@ -855,15 +860,18 @@ class CopyAPIView(GenericAPIView):
return field_val
if isinstance(field_val, dict):
for sub_field in field_val:
if isinstance(sub_field, six.string_types) \
and isinstance(field_val[sub_field], six.string_types):
if isinstance(sub_field, str) \
and isinstance(field_val[sub_field], str):
try:
field_val[sub_field] = decrypt_field(obj, field_name, sub_field)
except InvalidToken:
except AttributeError:
# Catching the corner case with v1 credential fields
field_val[sub_field] = decrypt_field(obj, sub_field)
elif isinstance(field_val, six.string_types):
field_val = decrypt_field(obj, field_name)
elif isinstance(field_val, str):
try:
field_val = decrypt_field(obj, field_name)
except AttributeError:
return field_val
return field_val
def _build_create_dict(self, obj):
@@ -917,7 +925,7 @@ class CopyAPIView(GenericAPIView):
obj, field.name, field_val
)
new_obj = model.objects.create(**create_kwargs)
logger.debug(six.text_type('Deep copy: Created new object {}({})').format(
logger.debug('Deep copy: Created new object {}({})'.format(
new_obj, model
))
# Need to save separately because django-crum get_current_user would
@@ -990,3 +998,22 @@ class CopyAPIView(GenericAPIView):
serializer = self._get_copy_return_serializer(new_obj)
headers = {'Location': new_obj.get_absolute_url(request=request)}
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class BaseUsersList(SubListCreateAttachDetachAPIView):
def post(self, request, *args, **kwargs):
ret = super(BaseUsersList, self).post( request, *args, **kwargs)
if ret.status_code != 201:
return ret
try:
if ret.data is not None and request.data.get('is_system_auditor', False):
# This is a faux-field that just maps to checking the system
# auditor role member list.. unfortunately this means we can't
# set it on creation, and thus needs to be set here.
user = User.objects.get(id=ret.data['id'])
user.is_system_auditor = request.data['is_system_auditor']
ret.data['is_system_auditor'] = request.data['is_system_auditor']
except AttributeError as exc:
print(exc)
pass
return ret


@@ -157,7 +157,7 @@ class Metadata(metadata.SimpleMetadata):
finally:
view.request = request
for field, meta in actions[method].items():
for field, meta in list(actions[method].items()):
if not isinstance(meta, dict):
continue
@@ -234,17 +234,15 @@ class RoleMetadata(Metadata):
# TODO: Tower 3.3 remove class and all uses in views.py when API v1 is removed
class JobTypeMetadata(Metadata):
def get_field_info(self, field):
res = super(JobTypeMetadata, self).get_field_info(field)
def get_field_info(self, field):
res = super(JobTypeMetadata, self).get_field_info(field)
if field.field_name == 'job_type':
index = 0
for choice in res['choices']:
if choice[0] == 'scan':
res['choices'].pop(index)
break
index += 1
return res
if field.field_name == 'job_type':
res['choices'] = [
choice for choice in res['choices']
if choice[0] != 'scan'
]
return res
class SublistAttachDetatchMetadata(Metadata):
@@ -253,7 +251,7 @@ class SublistAttachDetatchMetadata(Metadata):
actions = super(SublistAttachDetatchMetadata, self).determine_actions(request, view)
method = 'POST'
if method in actions:
for field in actions[method]:
for field in list(actions[method].keys()):
if field == 'id':
continue
actions[method].pop(field)
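The list() wrappers in this file guard against a Python 3 behavior change: dict views are live, and mutating the dict while iterating one raises RuntimeError. Illustration:

```python
actions = {'id': 1, 'name': 2, 'description': 3}
# Iterating actions directly while popping raises, on Python 3:
#   RuntimeError: dictionary changed size during iteration
for field in list(actions.keys()):  # snapshot the keys first
    if field != 'id':
        actions.pop(field)
assert actions == {'id': 1}
```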


@@ -4,7 +4,7 @@ import json
# Django
from django.conf import settings
from django.utils import six
from django.utils.encoding import smart_str
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
@@ -25,7 +25,7 @@ class JSONParser(parsers.JSONParser):
encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)
try:
data = stream.read().decode(encoding)
data = smart_str(stream.read(), encoding=encoding)
if not data:
return {}
obj = json.loads(data, object_pairs_hook=OrderedDict)
@@ -33,4 +33,4 @@ class JSONParser(parsers.JSONParser):
raise ParseError(_('JSON parse error - not a JSON object'))
return obj
except ValueError as exc:
raise ParseError(_('JSON parse error - %s\nPossible cause: trailing comma.' % six.text_type(exc)))
raise ParseError(_('JSON parse error - %s\nPossible cause: trailing comma.' % str(exc)))
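The parser now leans on Django's smart_str, which decodes bytes and passes str through untouched, instead of calling .decode() by hand. A minimal sketch, assuming Django is installed:

```python
from django.utils.encoding import smart_str

raw = '{"a": 1}'.encode('utf-8')
assert smart_str(raw, encoding='utf-8') == '{"a": 1}'         # bytes get decoded
assert smart_str('{"a": 1}', encoding='utf-8') == '{"a": 1}'  # str passes through
```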


@@ -9,8 +9,8 @@ from rest_framework.exceptions import MethodNotAllowed, PermissionDenied
from rest_framework import permissions
# AWX
from awx.main.access import * # noqa
from awx.main.models import * # noqa
from awx.main.access import check_user_access
from awx.main.models import Inventory, UnifiedJob
from awx.main.utils import get_object_or_400
logger = logging.getLogger('awx.api.permissions')


@@ -1,12 +1,12 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
from django.utils.safestring import SafeText
# Django REST Framework
from rest_framework import renderers
from rest_framework.request import override_method
import six
class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
'''
@@ -20,6 +20,19 @@ class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
return renderers.JSONRenderer()
return renderer
def get_content(self, renderer, data, accepted_media_type, renderer_context):
if isinstance(data, SafeText):
# Older versions of Django (pre-2.0) have a py3 bug which causes
# bytestrings marked as "safe" to not actually get _treated_ as
# safe; this causes certain embedded strings (like the stdout HTML
# view) to be improperly escaped
# see: https://github.com/ansible/awx/issues/3108
# https://code.djangoproject.com/ticket/28121
return data
return super(BrowsableAPIRenderer, self).get_content(renderer, data,
accepted_media_type,
renderer_context)
def get_context(self, data, accepted_media_type, renderer_context):
# Store the associated response status to know how to populate the raw
# data form.
@@ -71,8 +84,8 @@ class PlainTextRenderer(renderers.BaseRenderer):
format = 'txt'
def render(self, data, media_type=None, renderer_context=None):
if not isinstance(data, six.string_types):
data = six.text_type(data)
if not isinstance(data, str):
data = str(data)
return data.encode(self.charset)
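This renderer hunk is part of the broader six removal: Python 3 has a single text type, so six.string_types and six.text_type both collapse to str. Illustration:

```python
# Python 2 needed six.string_types to cover both str and unicode;
# on Python 3 one isinstance check suffices.
for value in ('plain', b'bytes'.decode(), 42):
    coerced = value if isinstance(value, str) else str(value)
    assert isinstance(coerced, str)
```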


@@ -5,10 +5,8 @@
import copy
import json
import logging
import operator
import re
import six
import urllib
import urllib.parse
from collections import OrderedDict
from datetime import timedelta
@@ -40,17 +38,30 @@ from rest_framework.utils.serializer_helpers import ReturnList
from polymorphic.models import PolymorphicModel
# AWX
from awx.main.access import get_user_capabilities
from awx.main.constants import (
SCHEDULEABLE_PROVIDERS,
ANSI_SGR_PATTERN,
ACTIVE_STATES,
CENSOR_VALUE,
CHOICES_PRIVILEGE_ESCALATION_METHODS,
)
from awx.main.models import * # noqa
from awx.main.models.base import NEW_JOB_TYPE_CHOICES
from awx.main.access import get_user_capabilities
from awx.main.fields import ImplicitRoleField
from awx.main.models import (
ActivityStream, AdHocCommand, AdHocCommandEvent, Credential,
CredentialType, CustomInventoryScript, Fact, Group, Host, Instance,
InstanceGroup, Inventory, InventorySource, InventoryUpdate,
InventoryUpdateEvent, Job, JobEvent, JobHostSummary, JobLaunchConfig,
JobTemplate, Label, Notification, NotificationTemplate, OAuth2AccessToken,
OAuth2Application, Organization, Project, ProjectUpdate,
ProjectUpdateEvent, RefreshToken, Role, Schedule, SystemJob,
SystemJobEvent, SystemJobTemplate, Team, UnifiedJob, UnifiedJobTemplate,
UserSessionMembership, V1Credential, WorkflowJob, WorkflowJobNode,
WorkflowJobTemplate, WorkflowJobTemplateNode, StdoutMaxBytesExceeded
)
from awx.main.models.base import VERBOSITY_CHOICES, NEW_JOB_TYPE_CHOICES
from awx.main.models.rbac import (
get_roles_on_resource, role_summary_fields_generator
)
from awx.main.fields import ImplicitRoleField, JSONBField
from awx.main.utils import (
get_type_for_model, get_model_for_type, timestamp_apiformat,
camelcase_to_underscore, getattrd, parse_yaml_or_json,
@@ -61,7 +72,7 @@ from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.validators import vars_validate_or_raise
from awx.conf.license import feature_enabled
from awx.conf.license import feature_enabled, LicenseForbids
from awx.api.versioning import reverse, get_request_version
from awx.api.fields import (BooleanNullField, CharNullField, ChoiceNullField,
VerbatimField, DeprecatedCredentialField)
@@ -104,7 +115,7 @@ SUMMARIZABLE_FK_FIELDS = {
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'vault_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed', 'type'),
'job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job': DEFAULT_SUMMARY_FIELDS,
@@ -203,11 +214,11 @@ class BaseSerializerMetaclass(serializers.SerializerMetaclass):
@staticmethod
def _is_list_of_strings(x):
return isinstance(x, (list, tuple)) and all([isinstance(y, basestring) for y in x])
return isinstance(x, (list, tuple)) and all([isinstance(y, str) for y in x])
@staticmethod
def _is_extra_kwargs(x):
return isinstance(x, dict) and all([isinstance(k, basestring) and isinstance(v, dict) for k,v in x.items()])
return isinstance(x, dict) and all([isinstance(k, str) and isinstance(v, dict) for k,v in x.items()])
@classmethod
def _update_meta(cls, base, meta, other=None):
@@ -259,9 +270,7 @@ class BaseSerializerMetaclass(serializers.SerializerMetaclass):
return super(BaseSerializerMetaclass, cls).__new__(cls, name, bases, attrs)
class BaseSerializer(serializers.ModelSerializer):
__metaclass__ = BaseSerializerMetaclass
class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetaclass):
class Meta:
fields = ('id', 'type', 'url', 'related', 'summary_fields', 'created',
@@ -284,7 +293,7 @@ class BaseSerializer(serializers.ModelSerializer):
# The following lines fix the problem of being able to pass JSON dict into PrimaryKeyRelatedField.
data = kwargs.get('data', False)
if data:
for field_name, field_instance in six.iteritems(self.fields):
for field_name, field_instance in self.fields.items():
if isinstance(field_instance, ManyRelatedField) and not field_instance.read_only:
if isinstance(data.get(field_name, False), dict):
raise serializers.ValidationError(_('Cannot use dictionary for %s' % field_name))
@@ -294,7 +303,7 @@ class BaseSerializer(serializers.ModelSerializer):
"""
The request version component of the URL as an integer i.e., 1 or 2
"""
return get_request_version(self.context.get('request'))
return get_request_version(self.context.get('request')) or 1
def get_type(self, obj):
return get_type_for_model(self.Meta.model)
@@ -612,7 +621,7 @@ class BaseSerializer(serializers.ModelSerializer):
v2.extend(e)
else:
v2.append(e)
d[k] = map(force_text, v2)
d[k] = list(map(force_text, v2))
raise ValidationError(d)
return attrs
@@ -632,9 +641,7 @@ class EmptySerializer(serializers.Serializer):
pass
class BaseFactSerializer(BaseSerializer):
__metaclass__ = BaseSerializerMetaclass
class BaseFactSerializer(BaseSerializer, metaclass=BaseSerializerMetaclass):
def get_fields(self):
ret = super(BaseFactSerializer, self).get_fields()
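These serializer hunks swap the Python 2 __metaclass__ attribute, which Python 3 silently ignores, for the metaclass= keyword in the class header. A runnable illustration with a toy metaclass:

```python
class UpperAttrs(type):
    def __new__(mcls, name, bases, attrs):
        # uppercase every non-dunder attribute name
        attrs = {k if k.startswith('__') else k.upper(): v for k, v in attrs.items()}
        return super().__new__(mcls, name, bases, attrs)

# Python 2 only:   class Modern: __metaclass__ = UpperAttrs   (a no-op on Python 3)
class Modern(metaclass=UpperAttrs):  # Python 3 syntax
    greeting = 'hi'

assert Modern.GREETING == 'hi'
```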
@@ -886,7 +893,7 @@ class UserSerializer(BaseSerializer):
model = User
fields = ('*', '-name', '-description', '-modified',
'username', 'first_name', 'last_name',
'email', 'is_superuser', 'is_system_auditor', 'password', 'ldap_dn', 'external_account')
'email', 'is_superuser', 'is_system_auditor', 'password', 'ldap_dn', 'last_login', 'external_account')
def to_representation(self, obj): # TODO: Remove in 3.3
ret = super(UserSerializer, self).to_representation(obj)
@@ -1050,7 +1057,7 @@ class BaseOAuth2TokenSerializer(BaseSerializer):
return ret
def _is_valid_scope(self, value):
if not value or (not isinstance(value, six.string_types)):
if not value or (not isinstance(value, str)):
return False
words = value.split()
for word in words:
@@ -1222,7 +1229,7 @@ class OrganizationSerializer(BaseSerializer):
class Meta:
model = Organization
fields = ('*', 'custom_virtualenv',)
fields = ('*', 'max_hosts', 'custom_virtualenv',)
def get_related(self, obj):
res = super(OrganizationSerializer, self).get_related(obj)
@@ -1258,6 +1265,20 @@ class OrganizationSerializer(BaseSerializer):
summary_dict['related_field_counts'] = counts_dict[obj.id]
return summary_dict
def validate(self, attrs):
obj = self.instance
view = self.context['view']
obj_limit = getattr(obj, 'max_hosts', None)
api_limit = attrs.get('max_hosts')
if not view.request.user.is_superuser:
if api_limit is not None and api_limit != obj_limit:
# Only allow superusers to edit the max_hosts field
raise serializers.ValidationError(_('Cannot change max_hosts.'))
return super(OrganizationSerializer, self).validate(attrs)
class ProjectOptionsSerializer(BaseSerializer):
@@ -1311,16 +1332,12 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
'admin', 'update',
{'copy': 'organization.project_admin'}
]
scm_delete_on_next_update = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
class Meta:
model = Project
fields = ('*', 'organization', 'scm_delete_on_next_update', 'scm_update_on_launch',
fields = ('*', 'organization', 'scm_update_on_launch',
'scm_update_cache_timeout', 'scm_revision', 'custom_virtualenv',) + \
('last_update_failed', 'last_updated') # Backwards compatibility
read_only_fields = ('scm_delete_on_next_update',)
def get_related(self, obj):
res = super(ProjectSerializer, self).get_related(obj)
@@ -1503,6 +1520,12 @@ class InventorySerializer(BaseSerializerWithVariables):
'admin', 'adhoc',
{'copy': 'organization.inventory_admin'}
]
groups_with_active_failures = serializers.IntegerField(
read_only=True,
min_value=0,
help_text=_('This field has been deprecated and will be removed in a future release')
)
class Meta:
model = Inventory
@@ -1547,6 +1570,18 @@ class InventorySerializer(BaseSerializerWithVariables):
def validate_host_filter(self, host_filter):
if host_filter:
try:
for match in JSONBField.get_lookups().keys():
if match == 'exact':
# __exact is allowed
continue
match = '__{}'.format(match)
if re.match(
'ansible_facts[^=]+{}='.format(match),
host_filter
):
raise models.base.ValidationError({
'host_filter': 'ansible_facts does not support searching with {}'.format(match)
})
SmartFilter().query_from_string(host_filter)
except RuntimeError as e:
raise models.base.ValidationError(e)
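The validator above refuses ansible_facts searches that use any non-exact lookup. A self-contained sketch of the same regex gating, with an illustrative lookup list standing in for JSONBField.get_lookups():

```python
import re

REJECTED_LOOKUPS = ('contains', 'icontains', 'startswith')  # illustrative subset only

def validate_host_filter(host_filter):
    for lookup in REJECTED_LOOKUPS:
        if re.match(r'ansible_facts[^=]+__{}='.format(lookup), host_filter):
            raise ValueError(
                'ansible_facts does not support searching with __{}'.format(lookup))

validate_host_filter('ansible_facts__ansible_distribution=Fedora')  # exact: allowed
# validate_host_filter('ansible_facts__pkgs__contains=nginx')  # would raise ValueError
```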
@@ -1724,6 +1759,11 @@ class AnsibleFactsSerializer(BaseSerializer):
class GroupSerializer(BaseSerializerWithVariables):
capabilities_prefetch = ['inventory.admin', 'inventory.adhoc']
groups_with_active_failures = serializers.IntegerField(
read_only=True,
min_value=0,
help_text=_('This field has been deprecated and will be removed in a future release')
)
class Meta:
model = Group
@@ -2132,10 +2172,10 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
if get_field_from_model_or_attrs('source') != 'scm':
redundant_scm_fields = filter(
redundant_scm_fields = list(filter(
lambda x: attrs.get(x, None),
['source_project', 'source_path', 'update_on_project_update']
)
))
if redundant_scm_fields:
raise serializers.ValidationError(
{"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))}
@@ -2172,10 +2212,12 @@ class InventorySourceUpdateSerializer(InventorySourceSerializer):
class InventoryUpdateSerializer(UnifiedJobSerializer, InventorySourceOptionsSerializer):
custom_virtualenv = serializers.ReadOnlyField()
class Meta:
model = InventoryUpdate
fields = ('*', 'inventory', 'inventory_source', 'license_error', 'source_project_update',
'-controller_node',)
fields = ('*', 'inventory', 'inventory_source', 'license_error', 'org_host_limit_error',
'source_project_update', 'custom_virtualenv', '-controller_node',)
def get_related(self, obj):
res = super(InventoryUpdateSerializer, self).get_related(obj)
@@ -2204,6 +2246,44 @@ class InventoryUpdateSerializer(UnifiedJobSerializer, InventorySourceOptionsSeri
return res
class InventoryUpdateDetailSerializer(InventoryUpdateSerializer):
source_project = serializers.SerializerMethodField(
help_text=_('The project used for this job.'),
method_name='get_source_project_id'
)
class Meta:
model = InventoryUpdate
fields = ('*', 'source_project',)
def get_source_project(self, obj):
return getattrd(obj, 'source_project_update.unified_job_template', None)
def get_source_project_id(self, obj):
return getattrd(obj, 'source_project_update.unified_job_template.id', None)
def get_related(self, obj):
res = super(InventoryUpdateDetailSerializer, self).get_related(obj)
source_project_id = self.get_source_project_id(obj)
if source_project_id:
res['source_project'] = self.reverse('api:project_detail', kwargs={'pk': source_project_id})
return res
def get_summary_fields(self, obj):
summary_fields = super(InventoryUpdateDetailSerializer, self).get_summary_fields(obj)
summary_obj = self.get_source_project(obj)
if summary_obj:
summary_fields['source_project'] = {}
for field in SUMMARIZABLE_FK_FIELDS['project']:
value = getattr(summary_obj, field, None)
if value is not None:
summary_fields['source_project'][field] = value
return summary_fields
class InventoryUpdateListSerializer(InventoryUpdateSerializer, UnifiedJobListSerializer):
class Meta:
@@ -2456,25 +2536,21 @@ class CredentialTypeSerializer(BaseSerializer):
field['label'] = _(field['label'])
if 'help_text' in field:
field['help_text'] = _(field['help_text'])
if field['type'] == 'become_method':
field.pop('type')
field['choices'] = map(operator.itemgetter(0), CHOICES_PRIVILEGE_ESCALATION_METHODS)
return value
def filter_field_metadata(self, fields, method):
# API-created/modified CredentialType kinds are limited to
# `cloud` and `net`
if method in ('PUT', 'POST'):
fields['kind']['choices'] = filter(
fields['kind']['choices'] = list(filter(
lambda choice: choice[0] in ('cloud', 'net'),
fields['kind']['choices']
)
))
return fields
# TODO: remove when API v1 is removed
@six.add_metaclass(BaseSerializerMetaclass)
class V1CredentialFields(BaseSerializer):
class V1CredentialFields(BaseSerializer, metaclass=BaseSerializerMetaclass):
class Meta:
model = Credential
@@ -2492,8 +2568,7 @@ class V1CredentialFields(BaseSerializer):
return super(V1CredentialFields, self).build_field(field_name, info, model_class, nested_depth)
@six.add_metaclass(BaseSerializerMetaclass)
class V2CredentialFields(BaseSerializer):
class V2CredentialFields(BaseSerializer, metaclass=BaseSerializerMetaclass):
class Meta:
model = Credential
@@ -2619,8 +2694,8 @@ class CredentialSerializer(BaseSerializer):
raise serializers.ValidationError({"kind": _('"%s" is not a valid choice' % kind)})
data['credential_type'] = credential_type.pk
value = OrderedDict(
{'credential_type': credential_type}.items() +
super(CredentialSerializer, self).to_internal_value(data).items()
list({'credential_type': credential_type}.items()) +
list(super(CredentialSerializer, self).to_internal_value(data).items())
)
# Make a set of the keys in the POST/PUT payload
@@ -2781,8 +2856,7 @@ class LabelsListMixin(object):
# TODO: remove when API v1 is removed
@six.add_metaclass(BaseSerializerMetaclass)
class V1JobOptionsSerializer(BaseSerializer):
class V1JobOptionsSerializer(BaseSerializer, metaclass=BaseSerializerMetaclass):
class Meta:
model = Credential
@@ -2796,8 +2870,7 @@ class V1JobOptionsSerializer(BaseSerializer):
return super(V1JobOptionsSerializer, self).build_field(field_name, info, model_class, nested_depth)
@six.add_metaclass(BaseSerializerMetaclass)
class LegacyCredentialFields(BaseSerializer):
class LegacyCredentialFields(BaseSerializer, metaclass=BaseSerializerMetaclass):
class Meta:
model = Credential
@@ -2976,12 +3049,16 @@ class JobTemplateMixin(object):
'''
def _recent_jobs(self, obj):
if hasattr(obj, 'workflow_jobs'):
job_mgr = obj.workflow_jobs
else:
job_mgr = obj.jobs
return [{'id': x.id, 'status': x.status, 'finished': x.finished}
for x in job_mgr.all().order_by('-created')[:10]]
# Exclude "joblets", jobs that ran as part of a sliced workflow job
uj_qs = obj.unifiedjob_unified_jobs.exclude(job__job_slice_count__gt=1).order_by('-created')
# Would like to apply an .only, but does not play well with non_polymorphic
# .only('id', 'status', 'finished', 'polymorphic_ctype_id')
optimized_qs = uj_qs.non_polymorphic()
return [{
'id': x.id, 'status': x.status, 'finished': x.finished,
# Make type consistent with API top-level key, for instance workflow_job
'type': x.get_real_instance_class()._meta.verbose_name.replace(' ', '_')
} for x in optimized_qs[:10]]
def get_summary_fields(self, obj):
d = super(JobTemplateMixin, self).get_summary_fields(obj)
@@ -3050,7 +3127,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
if project is None:
raise serializers.ValidationError({'project': _("Job types 'run' and 'check' must have assigned a project.")})
raise serializers.ValidationError({'project': _("Job Templates must have a project assigned.")})
elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
raise serializers.ValidationError({'inventory': prompting_error_message})
@@ -3059,6 +3136,13 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
def validate_extra_vars(self, value):
return vars_validate_or_raise(value)
def validate_job_slice_count(self, value):
if value > 1 and not feature_enabled('workflows'):
raise LicenseForbids({'job_slice_count': [_(
"Job slicing is a workflows-based feature and your license does not allow use of workflows."
)]})
return value
def get_summary_fields(self, obj):
summary_fields = super(JobTemplateSerializer, self).get_summary_fields(obj)
all_creds = []
@@ -3103,19 +3187,46 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
return summary_fields
class JobTemplateWithSpecSerializer(JobTemplateSerializer):
'''
Used for activity stream entries.
'''
class Meta:
model = JobTemplate
fields = ('*', 'survey_spec')
class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
passwords_needed_to_start = serializers.ReadOnlyField()
ask_diff_mode_on_launch = serializers.ReadOnlyField()
ask_variables_on_launch = serializers.ReadOnlyField()
ask_limit_on_launch = serializers.ReadOnlyField()
ask_skip_tags_on_launch = serializers.ReadOnlyField()
ask_tags_on_launch = serializers.ReadOnlyField()
ask_job_type_on_launch = serializers.ReadOnlyField()
ask_verbosity_on_launch = serializers.ReadOnlyField()
ask_inventory_on_launch = serializers.ReadOnlyField()
ask_credential_on_launch = serializers.ReadOnlyField()
ask_diff_mode_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_variables_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_limit_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_skip_tags_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_tags_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_job_type_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_verbosity_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_inventory_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
ask_credential_on_launch = serializers.BooleanField(
read_only=True,
help_text=_('This field has been deprecated and will be removed in a future release'))
artifacts = serializers.SerializerMethodField()
class Meta:
@@ -3252,10 +3363,11 @@ class JobDetailSerializer(JobSerializer):
playbook_counts = serializers.SerializerMethodField(
help_text=_('A count of all plays and tasks for the job run.'),
)
custom_virtualenv = serializers.ReadOnlyField()
class Meta:
model = Job
fields = ('*', 'host_status_counts', 'playbook_counts',)
fields = ('*', 'host_status_counts', 'playbook_counts', 'custom_virtualenv')
def get_playbook_counts(self, obj):
task_count = obj.job_events.filter(event='playbook_on_task_start').count()
@@ -3442,12 +3554,16 @@ class AdHocCommandSerializer(UnifiedJobSerializer):
ret['name'] = obj.module_name
return ret
def validate(self, attrs):
ret = super(AdHocCommandSerializer, self).validate(attrs)
return ret
def validate_extra_vars(self, value):
redacted_extra_vars, removed_vars = extract_ansible_vars(value)
if removed_vars:
raise serializers.ValidationError(_(
"{} are prohibited from use in ad hoc commands."
).format(", ".join(removed_vars)))
).format(", ".join(sorted(removed_vars, reverse=True))))
return vars_validate_or_raise(value)
@@ -3558,7 +3674,7 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'extra_vars', 'organization', 'survey_enabled', 'allow_simultaneous',
'ask_variables_on_launch',)
'ask_variables_on_launch', 'inventory', 'ask_inventory_on_launch',)
def get_related(self, obj):
res = super(WorkflowJobTemplateSerializer, self).get_related(obj)
@@ -3586,13 +3702,24 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
return vars_validate_or_raise(value)
class WorkflowJobTemplateWithSpecSerializer(WorkflowJobTemplateSerializer):
'''
Used for activity stream entries.
'''
class Meta:
model = WorkflowJobTemplate
fields = ('*', 'survey_spec')
class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
class Meta:
model = WorkflowJob
fields = ('*', 'workflow_job_template', 'extra_vars', 'allow_simultaneous',
'job_template', 'is_sliced_job',
'-execution_node', '-event_processing_finished', '-controller_node',)
'-execution_node', '-event_processing_finished', '-controller_node',
'inventory',)
def get_related(self, obj):
res = super(WorkflowJobSerializer, self).get_related(obj)
@@ -3664,7 +3791,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
for field in self.instance._meta.fields:
setattr(mock_obj, field.name, getattr(self.instance, field.name))
field_names = set(field.name for field in self.Meta.model._meta.fields)
for field_name, value in attrs.items():
for field_name, value in list(attrs.items()):
setattr(mock_obj, field_name, value)
if field_name not in field_names:
attrs.pop(field_name)
@@ -3675,7 +3802,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
if obj is None:
return ret
if 'extra_data' in ret and obj.survey_passwords:
ret['extra_data'] = obj.display_extra_data()
ret['extra_data'] = obj.display_extra_vars()
return ret
def get_summary_fields(self, obj):
@@ -3816,9 +3943,6 @@ class WorkflowJobTemplateNodeSerializer(LaunchConfigurationBaseSerializer):
ujt_obj = attrs['unified_job_template']
elif self.instance:
ujt_obj = self.instance.unified_job_template
if isinstance(ujt_obj, (WorkflowJobTemplate)):
raise serializers.ValidationError({
"unified_job_template": _("Cannot nest a %s inside a WorkflowJobTemplate") % ujt_obj.__class__.__name__})
if 'credential' in deprecated_fields: # TODO: remove when v2 API is deprecated
cred = deprecated_fields['credential']
attrs['credential'] = cred
@@ -3867,7 +3991,8 @@ class WorkflowJobNodeSerializer(LaunchConfigurationBaseSerializer):
class Meta:
model = WorkflowJobNode
fields = ('*', 'credential', 'job', 'workflow_job', '-name', '-description', 'id', 'url', 'related',
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',)
'unified_job_template', 'success_nodes', 'failure_nodes', 'always_nodes',
'do_not_run',)
def get_related(self, obj):
res = super(WorkflowJobNodeSerializer, self).get_related(obj)
@@ -3896,7 +4021,7 @@ class WorkflowJobTemplateNodeDetailSerializer(WorkflowJobTemplateNodeSerializer)
Influence the api browser sample data to not include workflow_job_template
when editing a WorkflowNode.
Note: I was not able to accomplish this trough the use of extra_kwargs.
Note: I was not able to accomplish this through the use of extra_kwargs.
Maybe something to do with workflow_job_template being a relational field?
'''
def build_relational_field(self, field_name, relation_info):
@@ -3927,7 +4052,8 @@ class JobHostSummarySerializer(BaseSerializer):
class Meta:
model = JobHostSummary
fields = ('*', '-name', '-description', 'job', 'host', 'host_name', 'changed',
'dark', 'failures', 'ok', 'processed', 'skipped', 'failed')
'dark', 'failures', 'ok', 'processed', 'skipped', 'failed',
'ignored', 'rescued')
def get_related(self, obj):
res = super(JobHostSummarySerializer, self).get_related(obj)
@@ -4280,7 +4406,7 @@ class JobLaunchSerializer(BaseSerializer):
passwords_needed=cred.passwords_needed
)
if cred.credential_type.managed_by_tower and 'vault_id' in cred.credential_type.defined_fields:
cred_dict['vault_id'] = cred.inputs.get('vault_id') or None
cred_dict['vault_id'] = cred.get_input('vault_id', default=None)
defaults_dict.setdefault(field_name, []).append(cred_dict)
else:
defaults_dict[field_name] = getattr(obj, field_name)
@@ -4330,7 +4456,7 @@ class JobLaunchSerializer(BaseSerializer):
errors.setdefault('credentials', []).append(_(
'Removing {} credential at launch time without replacement is not supported. '
'Provided list lacked credential(s): {}.'
).format(cred.unique_hash(display=True), ', '.join([six.text_type(c) for c in removed_creds])))
).format(cred.unique_hash(display=True), ', '.join([str(c) for c in removed_creds])))
# verify that credentials (either provided or existing) don't
# require launch-time passwords that have not been provided
@@ -4369,37 +4495,63 @@ class JobLaunchSerializer(BaseSerializer):
class WorkflowJobLaunchSerializer(BaseSerializer):
can_start_without_user_input = serializers.BooleanField(read_only=True)
defaults = serializers.SerializerMethodField()
variables_needed_to_start = serializers.ReadOnlyField()
survey_enabled = serializers.SerializerMethodField()
extra_vars = VerbatimField(required=False, write_only=True)
inventory = serializers.PrimaryKeyRelatedField(
queryset=Inventory.objects.all(),
required=False, write_only=True
)
workflow_job_template_data = serializers.SerializerMethodField()
class Meta:
model = WorkflowJobTemplate
fields = ('can_start_without_user_input', 'extra_vars',
'survey_enabled', 'variables_needed_to_start',
fields = ('ask_inventory_on_launch', 'can_start_without_user_input', 'defaults', 'extra_vars',
'inventory', 'survey_enabled', 'variables_needed_to_start',
'node_templates_missing', 'node_prompts_rejected',
'workflow_job_template_data')
'workflow_job_template_data', 'survey_enabled', 'ask_variables_on_launch')
read_only_fields = ('ask_inventory_on_launch', 'ask_variables_on_launch')
def get_survey_enabled(self, obj):
if obj:
return obj.survey_enabled and 'spec' in obj.survey_spec
return False
def get_defaults(self, obj):
defaults_dict = {}
for field_name in WorkflowJobTemplate.get_ask_mapping().keys():
if field_name == 'inventory':
defaults_dict[field_name] = dict(
name=getattrd(obj, '%s.name' % field_name, None),
id=getattrd(obj, '%s.pk' % field_name, None))
else:
defaults_dict[field_name] = getattr(obj, field_name)
return defaults_dict
def get_workflow_job_template_data(self, obj):
return dict(name=obj.name, id=obj.id, description=obj.description)
def validate(self, attrs):
obj = self.instance
template = self.instance
accepted, rejected, errors = obj._accept_or_ignore_job_kwargs(
_exclude_errors=['required'],
**attrs)
accepted, rejected, errors = template._accept_or_ignore_job_kwargs(**attrs)
self._ignored_fields = rejected
WFJT_extra_vars = obj.extra_vars
attrs = super(WorkflowJobLaunchSerializer, self).validate(attrs)
obj.extra_vars = WFJT_extra_vars
return attrs
if template.inventory and template.inventory.pending_deletion is True:
errors['inventory'] = _("The inventory associated with this Workflow is being deleted.")
elif 'inventory' in accepted and accepted['inventory'].pending_deletion:
errors['inventory'] = _("The provided inventory is being deleted.")
if errors:
raise serializers.ValidationError(errors)
WFJT_extra_vars = template.extra_vars
WFJT_inventory = template.inventory
super(WorkflowJobLaunchSerializer, self).validate(attrs)
template.extra_vars = WFJT_extra_vars
template.inventory = WFJT_inventory
return accepted
class NotificationTemplateSerializer(BaseSerializer):
@@ -4410,11 +4562,11 @@ class NotificationTemplateSerializer(BaseSerializer):
model = NotificationTemplate
fields = ('*', 'organization', 'notification_type', 'notification_configuration')
type_map = {"string": (str, unicode),
type_map = {"string": (str,),
"int": (int,),
"bool": (bool,),
"list": (list,),
"password": (str, unicode),
"password": (str,),
"object": (dict, OrderedDict)}
def to_representation(self, obj):
@@ -4466,12 +4618,15 @@ class NotificationTemplateSerializer(BaseSerializer):
object_actual = self.context['view'].get_object()
else:
object_actual = None
for field in notification_class.init_parameters:
for field, params in notification_class.init_parameters.items():
if field not in attrs['notification_configuration']:
missing_fields.append(field)
continue
if 'default' in params:
attrs['notification_configuration'][field] = params['default']
else:
missing_fields.append(field)
continue
field_val = attrs['notification_configuration'][field]
field_type = notification_class.init_parameters[field]['type']
field_type = params['type']
expected_types = self.type_map[field_type]
if not type(field_val) in expected_types:
incorrect_type_fields.append((field, field_type))
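The rewritten loop above applies declared defaults before type-checking against type_map. A condensed sketch of the same flow (using isinstance where the original compares exact types, and a hypothetical parameter spec):

```python
from collections import OrderedDict

TYPE_MAP = {"string": (str,), "int": (int,), "bool": (bool,), "object": (dict, OrderedDict)}

def check_configuration(config, init_parameters):
    missing, wrong_type = [], []
    for field, params in init_parameters.items():
        if field not in config:
            if 'default' in params:
                config[field] = params['default']   # fall through to the type check
            else:
                missing.append(field)
                continue
        if not isinstance(config[field], TYPE_MAP[params['type']]):
            wrong_type.append((field, params['type']))
    return missing, wrong_type

params = {'url': {'type': 'string'}, 'port': {'type': 'int', 'default': 443}}
assert check_configuration({'url': 'https://example.invalid'}, params) == ([], [])
```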
@@ -4618,6 +4773,23 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
res['inventory'] = obj.unified_job_template.inventory.get_absolute_url(self.context.get('request'))
return res
def get_summary_fields(self, obj):
summary_fields = super(ScheduleSerializer, self).get_summary_fields(obj)
if 'inventory' in summary_fields:
return summary_fields
inventory = None
if obj.unified_job_template and getattr(obj.unified_job_template, 'inventory', None):
inventory = obj.unified_job_template.inventory
else:
return summary_fields
summary_fields['inventory'] = dict()
for field in SUMMARIZABLE_FK_FIELDS['inventory']:
summary_fields['inventory'][field] = getattr(inventory, field, None)
return summary_fields
def validate_unified_job_template(self, value):
if type(value) == InventorySource and value.source not in SCHEDULEABLE_PROVIDERS:
raise serializers.ValidationError(_('Inventory Source must be a cloud resource.'))
@@ -4625,8 +4797,8 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
raise serializers.ValidationError(_('Manual Project cannot have a schedule set.'))
elif type(value) == InventorySource and value.source == 'scm' and value.update_on_project_update:
raise serializers.ValidationError(_(
six.text_type('Inventory sources with `update_on_project_update` cannot be scheduled. '
'Schedule its source project `{}` instead.').format(value.source_project.name)))
'Inventory sources with `update_on_project_update` cannot be scheduled. '
'Schedule its source project `{}` instead.'.format(value.source_project.name)))
return value
@@ -4780,7 +4952,7 @@ class ActivityStreamSerializer(BaseSerializer):
for key in summary_dict.keys():
if 'id' not in summary_dict[key]:
summary_dict[key] = summary_dict[key] + ('id',)
field_list = summary_dict.items()
field_list = list(summary_dict.items())
# Needed related fields that are not in the default summary fields
field_list += [
('workflow_job_template_node', ('id', 'unified_job_template_id')),
@@ -4800,7 +4972,7 @@ class ActivityStreamSerializer(BaseSerializer):
def get_fields(self):
ret = super(ActivityStreamSerializer, self).get_fields()
for key, field in ret.items():
for key, field in list(ret.items()):
if key == 'changes':
field.help_text = _('A summary of the new and changed values when an object is created, updated, or deleted')
if key == 'object1':
@@ -4841,10 +5013,6 @@ class ActivityStreamSerializer(BaseSerializer):
def get_related(self, obj):
rel = {}
VIEW_NAME_EXCEPTIONS = {
'custom_inventory_script': 'inventory_script_detail',
'o_auth2_access_token': 'o_auth2_token_detail'
}
if obj.actor is not None:
rel['actor'] = self.reverse('api:user_detail', kwargs={'pk': obj.actor.pk})
for fk, __ in self._local_summarizable_fk_fields:
@@ -4858,11 +5026,12 @@ class ActivityStreamSerializer(BaseSerializer):
if getattr(thisItem, 'id', None) in id_list:
continue
id_list.append(getattr(thisItem, 'id', None))
if fk in VIEW_NAME_EXCEPTIONS:
view_name = VIEW_NAME_EXCEPTIONS[fk]
if hasattr(thisItem, 'get_absolute_url'):
rel_url = thisItem.get_absolute_url(self.context.get('request'))
else:
view_name = fk + '_detail'
rel[fk].append(self.reverse('api:' + view_name, kwargs={'pk': thisItem.id}))
rel_url = self.reverse('api:' + view_name, kwargs={'pk': thisItem.id})
rel[fk].append(rel_url)
if fk == 'schedule':
rel['unified_job_template'] = thisItem.unified_job_template.get_absolute_url(self.context.get('request'))
@@ -4904,6 +5073,17 @@ class ActivityStreamSerializer(BaseSerializer):
if fval is not None:
job_template_item[field] = fval
summary_fields['job_template'].append(job_template_item)
if fk == 'workflow_job_template_node':
summary_fields['workflow_job_template'] = []
workflow_job_template_item = {}
workflow_job_template_fields = SUMMARIZABLE_FK_FIELDS['workflow_job_template']
workflow_job_template = getattr(thisItem, 'workflow_job_template', None)
if workflow_job_template is not None:
for field in workflow_job_template_fields:
fval = getattr(workflow_job_template, field, None)
if fval is not None:
workflow_job_template_item[field] = fval
summary_fields['workflow_job_template'].append(workflow_job_template_item)
if fk == 'schedule':
unified_job_template = getattr(thisItem, 'unified_job_template', None)
if unified_job_template is not None:
@@ -4945,7 +5125,7 @@ class FactVersionSerializer(BaseFactSerializer):
}
res['fact_view'] = '%s?%s' % (
reverse('api:host_fact_compare_view', kwargs={'pk': obj.host.pk}, request=self.context.get('request')),
urllib.urlencode(params)
urllib.parse.urlencode(params)
)
return res
@@ -4967,6 +5147,6 @@ class FactSerializer(BaseFactSerializer):
ret = super(FactSerializer, self).to_representation(obj)
if obj is None:
return ret
if 'facts' in ret and isinstance(ret['facts'], six.string_types):
if 'facts' in ret and isinstance(ret['facts'], str):
ret['facts'] = json.loads(ret['facts'])
return ret


@@ -13,6 +13,17 @@ from rest_framework.views import APIView
from rest_framework_swagger import renderers
class SuperUserSchemaGenerator(SchemaGenerator):
def has_view_permissions(self, path, method, view):
#
# Generate the Swagger schema as if you were a superuser and
# permissions didn't matter; this short-circuits the schema path
# discovery to include _all_ potential paths in the API.
#
return True
class AutoSchema(DRFAuthSchema):
def get_link(self, path, method, base_url):
@@ -59,7 +70,7 @@ class SwaggerSchemaView(APIView):
]
def get(self, request):
generator = SchemaGenerator(
generator = SuperUserSchemaGenerator(
title='Ansible Tower API',
patterns=None,
urlconf=None
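For context: SchemaGenerator normally drops endpoints the requesting user cannot access, so overriding has_view_permissions to return True is what makes the generated document complete. A usage sketch, assuming DRF's coreapi-based schemas module inside a configured Django project:

```python
from rest_framework.schemas import SchemaGenerator

class SuperUserSchemaGenerator(SchemaGenerator):
    def has_view_permissions(self, path, method, view):
        return True  # include every endpoint, ignoring per-user permissions

schema = SuperUserSchemaGenerator(title='Ansible Tower API').get_schema()
```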


@@ -56,6 +56,7 @@ For example:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=password&username=<username>&password=<password>&scope=read" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569e
IaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
@@ -85,6 +86,7 @@ format:
The `/api/o/token/` endpoint is used for refreshing access token:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/token/ -i
@@ -114,6 +116,7 @@ Revoking is done by POSTing to `/api/o/revoke_token/` with the token to revoke a
```bash
curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \
-H "Content-Type: application/x-www-form-urlencoded" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://localhost:8013/api/o/revoke_token/ -i
```
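The same token operations can be scripted; a minimal sketch using the requests library, reusing the placeholder client id/secret and host from the curl examples above:

```python
import requests

resp = requests.post(
    'http://localhost:8013/api/o/token/',
    data={'grant_type': 'password', 'username': '<username>',
          'password': '<password>', 'scope': 'read'},
    auth=('gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l', '<client_secret>'),
)  # requests sets Content-Type: application/x-www-form-urlencoded for `data`
tokens = resp.json()  # access_token, refresh_token, expires_in, ...
```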

File diff suppressed because it is too large

awx/api/views/inventory.py Normal file

@@ -0,0 +1,211 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.conf import settings
from django.db.models import Q
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
# AWX
from awx.main.models import (
ActivityStream,
Inventory,
JobTemplate,
Role,
User,
InstanceGroup,
InventoryUpdateEvent,
InventoryUpdate,
InventorySource,
CustomInventoryScript,
)
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
CopyAPIView,
)
from awx.api.serializers import (
InventorySerializer,
ActivityStreamSerializer,
RoleSerializer,
InstanceGroupSerializer,
InventoryUpdateEventSerializer,
CustomInventoryScriptSerializer,
InventoryDetailSerializer,
JobTemplateSerializer,
)
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
RelatedJobsPreventDeleteMixin,
ControlledByScmMixin,
)
logger = logging.getLogger('awx.api.views.inventory')
class InventoryUpdateEventsList(SubListAPIView):
model = InventoryUpdateEvent
serializer_class = InventoryUpdateEventSerializer
parent_model = InventoryUpdate
relationship = 'inventory_update_events'
view_name = _('Inventory Update Events List')
search_fields = ('stdout',)
def finalize_response(self, request, response, *args, **kwargs):
response['X-UI-Max-Events'] = settings.MAX_UI_JOB_EVENTS
return super(InventoryUpdateEventsList, self).finalize_response(request, response, *args, **kwargs)
class InventoryScriptList(ListCreateAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
class InventoryScriptDetail(RetrieveUpdateDestroyAPIView):
model = CustomInventoryScript
serializer_class = CustomInventoryScriptSerializer
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
can_delete = request.user.can_access(self.model, 'delete', instance)
if not can_delete:
raise PermissionDenied(_("Cannot delete inventory script."))
for inv_src in InventorySource.objects.filter(source_script=instance):
inv_src.source_script = None
inv_src.save()
return super(InventoryScriptDetail, self).destroy(request, *args, **kwargs)
class InventoryScriptObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = CustomInventoryScript
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryScriptCopy(CopyAPIView):
model = CustomInventoryScript
copy_return_serializer_class = CustomInventoryScriptSerializer
class InventoryList(ListCreateAPIView):
model = Inventory
serializer_class = InventorySerializer
def get_queryset(self):
qs = Inventory.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'read_role', 'update_role', 'use_role', 'adhoc_role')
qs = qs.prefetch_related('created_by', 'modified_by', 'organization')
return qs
class InventoryDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventoryDetailSerializer
def update(self, request, *args, **kwargs):
obj = self.get_object()
kind = self.request.data.get('kind') or kwargs.get('kind')
# Do not allow changes to an Inventory kind.
if kind is not None and obj.kind != kind:
return self.http_method_not_allowed(request, *args, **kwargs)
return super(InventoryDetail, self).update(request, *args, **kwargs)
def destroy(self, request, *args, **kwargs):
obj = self.get_object()
if not request.user.can_access(self.model, 'delete', obj):
raise PermissionDenied()
self.check_related_active_jobs(obj) # related jobs mixin
try:
obj.schedule_deletion(getattr(request.user, 'id', None))
return Response(status=status.HTTP_202_ACCEPTED)
except RuntimeError as e:
return Response(dict(error=_("{0}".format(e))), status=status.HTTP_400_BAD_REQUEST)
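Note that InventoryDetail.destroy schedules the deletion and answers 202 Accepted rather than deleting inline. A hypothetical client-side sketch with requests (host and credentials are placeholders):

```python
import time
import requests

URL = 'https://awx.example.invalid/api/v2/inventories/42/'  # placeholder
AUTH = ('admin', 'password')                                # placeholder

resp = requests.delete(URL, auth=AUTH)
assert resp.status_code == 202      # deletion was scheduled in the background
while requests.get(URL, auth=AUTH).status_code != 404:
    time.sleep(5)                   # poll until the background delete completes
```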
class InventoryActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Inventory
relationship = 'activitystream_set'
search_fields = ('changes',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(Q(inventory=parent) | Q(host__in=parent.hosts.all()) | Q(group__in=parent.groups.all()))
class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Inventory
relationship = 'instance_groups'
class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessList's
parent_model = Inventory
class InventoryObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Inventory
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)
class InventoryJobTemplateList(SubListAPIView):
model = JobTemplate
serializer_class = JobTemplateSerializer
parent_model = Inventory
relationship = 'jobtemplates'
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(inventory=parent)
class InventoryCopy(CopyAPIView):
model = Inventory
copy_return_serializer_class = InventorySerializer


@@ -13,12 +13,16 @@ from django.shortcuts import get_object_or_404
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
from rest_framework.permissions import SAFE_METHODS
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.utils import get_object_or_400
from awx.main.utils import (
get_object_or_400,
parse_yaml_or_json,
)
from awx.main.models.ha import (
Instance,
InstanceGroup,
@@ -273,3 +277,33 @@ class OrganizationCountsMixin(object):
full_context['related_field_counts'] = count_context
return full_context
class ControlledByScmMixin(object):
'''
Special method to reset SCM inventory commit hash
if anything that it manages changes.
'''
def _reset_inv_src_rev(self, obj):
if self.request.method in SAFE_METHODS or not obj:
return
project_following_sources = obj.inventory_sources.filter(
update_on_project_update=True, source='scm')
if project_following_sources:
# Allow inventory changes unrelated to variables
if self.model == Inventory and (
not self.request or not self.request.data or
parse_yaml_or_json(self.request.data.get('variables', '')) == parse_yaml_or_json(obj.variables)):
return
project_following_sources.update(scm_last_revision='')
def get_object(self):
obj = super(ControlledByScmMixin, self).get_object()
self._reset_inv_src_rev(obj)
return obj
def get_parent_object(self):
obj = super(ControlledByScmMixin, self).get_parent_object()
self._reset_inv_src_rev(obj)
return obj


@@ -0,0 +1,247 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
# Python
import logging
# Django
from django.db.models import Count
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import ugettext_lazy as _
# AWX
from awx.conf.license import (
feature_enabled,
LicenseForbids,
)
from awx.main.models import (
ActivityStream,
Inventory,
Project,
JobTemplate,
WorkflowJobTemplate,
Organization,
NotificationTemplate,
Role,
User,
Team,
InstanceGroup,
)
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListCreateAttachDetachAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
BaseUsersList,
)
from awx.api.serializers import (
OrganizationSerializer,
InventorySerializer,
ProjectSerializer,
UserSerializer,
TeamSerializer,
ActivityStreamSerializer,
RoleSerializer,
NotificationTemplateSerializer,
WorkflowJobTemplateSerializer,
InstanceGroupSerializer,
)
from awx.api.views.mixin import (
ActivityStreamEnforcementMixin,
RelatedJobsPreventDeleteMixin,
OrganizationCountsMixin,
)
logger = logging.getLogger('awx.api.views.organization')
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_queryset(self):
qs = Organization.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role')
qs = qs.prefetch_related('created_by', 'modified_by')
return qs
def create(self, request, *args, **kwargs):
"""Create a new organzation.
If there is already an organization and the license of this
instance does not permit multiple organizations, then raise
LicenseForbids.
"""
# Sanity check: If the multiple organizations feature is disallowed
# by the license, then we are only willing to create this organization
# if no organizations exist in the system.
if (not feature_enabled('multiple_organizations') and
self.model.objects.exists()):
raise LicenseForbids(_('Your license only permits a single '
'organization to exist.'))
# Okay, create the organization as usual.
return super(OrganizationList, self).create(request, *args, **kwargs)
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization
serializer_class = OrganizationSerializer
def get_serializer_context(self, *args, **kwargs):
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
if not hasattr(self, 'kwargs') or 'pk' not in self.kwargs:
return full_context
org_id = int(self.kwargs['pk'])
org_counts = {}
access_kwargs = {'accessor': self.request.user, 'role_field': 'read_role'}
direct_counts = Organization.objects.filter(id=org_id).annotate(
users=Count('member_role__members', distinct=True),
admins=Count('admin_role__members', distinct=True)
).values('users', 'admins')
if not direct_counts:
return full_context
org_counts = direct_counts[0]
org_counts['inventories'] = Inventory.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['teams'] = Team.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['projects'] = Project.accessible_objects(**access_kwargs).filter(
organization__id=org_id).count()
org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
project__organization__id=org_id).count()
full_context['related_field_counts'] = {}
full_context['related_field_counts'][org_id] = org_counts
return full_context
class OrganizationInventoriesList(SubListAPIView):
model = Inventory
serializer_class = InventorySerializer
parent_model = Organization
relationship = 'inventories'
class OrganizationUsersList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'member_role.members'
class OrganizationAdminsList(BaseUsersList):
model = User
serializer_class = UserSerializer
parent_model = Organization
relationship = 'admin_role.members'
class OrganizationProjectsList(SubListCreateAttachDetachAPIView):
model = Project
serializer_class = ProjectSerializer
parent_model = Organization
relationship = 'projects'
parent_key = 'organization'
class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):
model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization
relationship = 'workflows'
parent_key = 'organization'
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team
serializer_class = TeamSerializer
parent_model = Organization
relationship = 'teams'
parent_key = 'organization'
class OrganizationActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView):
model = ActivityStream
serializer_class = ActivityStreamSerializer
parent_model = Organization
relationship = 'activitystream_set'
search_fields = ('changes',)
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates'
parent_key = 'organization'
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_any'
class OrganizationNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_error'
class OrganizationNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate
serializer_class = NotificationTemplateSerializer
parent_model = Organization
relationship = 'notification_templates_success'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup
serializer_class = InstanceGroupSerializer
parent_model = Organization
relationship = 'instance_groups'
class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessList's
parent_model = Organization
class OrganizationObjectRolesList(SubListAPIView):
model = Role
serializer_class = RoleSerializer
parent_model = Organization
search_fields = ('role_field', 'content_type__model',)
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return Role.objects.filter(content_type=content_type, object_id=po.pk)

awx/api/views/root.py Normal file

@@ -0,0 +1,281 @@
# Copyright (c) 2018 Ansible, Inc.
# All Rights Reserved.
import logging
import operator
import json
from collections import OrderedDict
from django.conf import settings
from django.utils.encoding import smart_text
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import ensure_csrf_cookie
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
from rest_framework import status
from awx.api.generics import APIView
from awx.main.ha import is_ha_environment
from awx.main.utils import (
get_awx_version,
get_ansible_version,
get_custom_venv_choices,
to_python_boolean,
)
from awx.api.versioning import reverse, get_request_version, drf_reverse
from awx.conf.license import get_license, feature_enabled
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS
from awx.main.models import (
Project,
Organization,
Instance,
InstanceGroup,
JobTemplate,
)
logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView):
permission_classes = (AllowAny,)
view_name = _('REST API')
versioning_class = None
swagger_topic = 'Versioning'
@method_decorator(ensure_csrf_cookie)
def get(self, request, format=None):
''' List supported API versions '''
v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
data = OrderedDict()
data['description'] = _('AWX REST API')
data['current_version'] = v2
data['available_versions'] = dict(v1 = v1, v2 = v2)
data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
if feature_enabled('rebranding'):
data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
return Response(data)
class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,)
view_name = _("API OAuth 2 Authorization Root")
versioning_class = None
swagger_topic = 'Authentication'
def get(self, request, format=None):
data = OrderedDict()
data['authorize'] = drf_reverse('api:authorize')
data['token'] = drf_reverse('api:token')
data['revoke_token'] = drf_reverse('api:revoke-token')
return Response(data)
class ApiVersionRootView(APIView):
permission_classes = (AllowAny,)
swagger_topic = 'Versioning'
def get(self, request, format=None):
''' List top level resources '''
data = OrderedDict()
data['ping'] = reverse('api:api_v1_ping_view', request=request)
data['instances'] = reverse('api:instance_list', request=request)
data['instance_groups'] = reverse('api:instance_group_list', request=request)
data['config'] = reverse('api:api_v1_config_view', request=request)
data['settings'] = reverse('api:setting_category_list', request=request)
data['me'] = reverse('api:user_me_list', request=request)
data['dashboard'] = reverse('api:dashboard_view', request=request)
data['organizations'] = reverse('api:organization_list', request=request)
data['users'] = reverse('api:user_list', request=request)
data['projects'] = reverse('api:project_list', request=request)
data['project_updates'] = reverse('api:project_update_list', request=request)
data['teams'] = reverse('api:team_list', request=request)
data['credentials'] = reverse('api:credential_list', request=request)
if get_request_version(request) > 1:
data['credential_types'] = reverse('api:credential_type_list', request=request)
data['applications'] = reverse('api:o_auth2_application_list', request=request)
data['tokens'] = reverse('api:o_auth2_token_list', request=request)
data['inventory'] = reverse('api:inventory_list', request=request)
data['inventory_scripts'] = reverse('api:inventory_script_list', request=request)
data['inventory_sources'] = reverse('api:inventory_source_list', request=request)
data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['job_events'] = reverse('api:job_event_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
data['system_job_templates'] = reverse('api:system_job_template_list', request=request)
data['system_jobs'] = reverse('api:system_job_list', request=request)
data['schedules'] = reverse('api:schedule_list', request=request)
data['roles'] = reverse('api:role_list', request=request)
data['notification_templates'] = reverse('api:notification_template_list', request=request)
data['notifications'] = reverse('api:notification_list', request=request)
data['labels'] = reverse('api:label_list', request=request)
data['unified_job_templates'] = reverse('api:unified_job_template_list', request=request)
data['unified_jobs'] = reverse('api:unified_job_list', request=request)
data['activity_stream'] = reverse('api:activity_stream_list', request=request)
data['workflow_job_templates'] = reverse('api:workflow_job_template_list', request=request)
data['workflow_jobs'] = reverse('api:workflow_job_list', request=request)
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
return Response(data)
class ApiV1RootView(ApiVersionRootView):
view_name = _('Version 1')
class ApiV2RootView(ApiVersionRootView):
view_name = _('Version 2')
class ApiV1PingView(APIView):
"""A simple view that reports very basic information about this
instance, which is acceptable to be public information.
"""
permission_classes = (AllowAny,)
authentication_classes = ()
view_name = _('Ping')
swagger_topic = 'System Configuration'
def get(self, request, format=None):
"""Return some basic information about this instance
Everything returned here should be considered public / insecure, as
this requires no auth and is intended for use by the installer process.
"""
response = {
'ha': is_ha_environment(),
'version': get_awx_version(),
'active_node': settings.CLUSTER_HOST_ID,
}
response['instances'] = []
for instance in Instance.objects.all():
response['instances'].append(dict(node=instance.hostname, heartbeat=instance.modified,
capacity=instance.capacity, version=instance.version))
response['instances'].sort(key=operator.itemgetter('node'))
response['instance_groups'] = []
for instance_group in InstanceGroup.objects.all():
response['instance_groups'].append(dict(name=instance_group.name,
capacity=instance_group.capacity,
instances=[x.hostname for x in instance_group.instances.all()]))
return Response(response)
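Since the ping view requires no authentication, it can be probed directly; a hedged sketch of what a client sees (hostname assumed):

import requests

# AllowAny plus empty authentication_classes: no credentials needed.
ping = requests.get('https://awx.example.com/api/v1/ping/').json()
ping['version']                               # AWX version string
ping['ha']                                    # True when multiple active nodes exist
sorted(i['node'] for i in ping['instances'])  # hostnames of all registered instances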
class ApiV1ConfigView(APIView):
permission_classes = (IsAuthenticated,)
view_name = _('Configuration')
swagger_topic = 'System Configuration'
def check_permissions(self, request):
super(ApiV1ConfigView, self).check_permissions(request)
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
self.permission_denied(request) # Raises PermissionDenied exception.
def get(self, request, format=None):
'''Return various sitewide configuration settings'''
if request.user.is_superuser or request.user.is_system_auditor:
license_data = get_license(show_key=True)
else:
license_data = get_license(show_key=False)
if not license_data.get('valid_key', False):
license_data = {}
if license_data and 'features' in license_data and 'activity_streams' in license_data['features']:
# FIXME: Make the final setting value dependent on the feature?
license_data['features']['activity_streams'] &= settings.ACTIVITY_STREAM_ENABLED
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
version=get_awx_version(),
ansible_version=get_ansible_version(),
eula=render_to_string("eula.md") if license_data.get('license_type', 'UNLICENSED') != 'open' else '',
analytics_status=pendo_state,
become_methods=PRIVILEGE_ESCALATION_METHODS,
)
# If LDAP is enabled, user_ldap_fields will return a list of field
# names that are managed by LDAP and should be read-only for users with
# a non-empty ldap_dn attribute.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None) and feature_enabled('ldap'):
user_ldap_fields = ['username', 'password']
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_ATTR_MAP', {}).keys())
user_ldap_fields.extend(getattr(settings, 'AUTH_LDAP_USER_FLAGS_BY_GROUP', {}).keys())
data['user_ldap_fields'] = user_ldap_fields
if request.user.is_superuser \
or request.user.is_system_auditor \
or Organization.accessible_objects(request.user, 'admin_role').exists() \
or Organization.accessible_objects(request.user, 'auditor_role').exists():
data.update(dict(
project_base_dir = settings.PROJECTS_ROOT,
project_local_paths = Project.get_local_path_choices(),
custom_virtualenvs = get_custom_venv_choices()
))
elif JobTemplate.accessible_objects(request.user, 'admin_role').exists():
data['custom_virtualenvs'] = get_custom_venv_choices()
return Response(data)
def post(self, request):
if not isinstance(request.data, dict):
return Response({"error": _("Invalid license data")}, status=status.HTTP_400_BAD_REQUEST)
if "eula_accepted" not in request.data:
return Response({"error": _("Missing 'eula_accepted' property")}, status=status.HTTP_400_BAD_REQUEST)
try:
eula_accepted = to_python_boolean(request.data["eula_accepted"])
except ValueError:
return Response({"error": _("'eula_accepted' value is invalid")}, status=status.HTTP_400_BAD_REQUEST)
if not eula_accepted:
return Response({"error": _("'eula_accepted' must be True")}, status=status.HTTP_400_BAD_REQUEST)
request.data.pop("eula_accepted")
try:
data_actual = json.dumps(request.data)
except Exception:
logger.info(smart_text(u"Invalid JSON submitted for license."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid JSON")}, status=status.HTTP_400_BAD_REQUEST)
try:
from awx.main.utils.common import get_licenser
license_data = json.loads(data_actual)
license_data_validated = get_licenser(**license_data).validate()
except Exception:
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid License")}, status=status.HTTP_400_BAD_REQUEST)
# If the license is valid, write it to the database.
if license_data_validated['valid_key']:
settings.LICENSE = license_data
settings.TOWER_URL_BASE = "{}://{}".format(request.scheme, request.get_host())
return Response(license_data_validated)
logger.warning(smart_text(u"Invalid license submitted."),
extra=dict(actor=request.user.username))
return Response({"error": _("Invalid license")}, status=status.HTTP_400_BAD_REQUEST)
def delete(self, request):
try:
settings.LICENSE = {}
return Response(status=status.HTTP_204_NO_CONTENT)
except Exception:
# FIX: Log
return Response({"error": _("Failed to remove license.")}, status=status.HTTP_400_BAD_REQUEST)

View File

@@ -1,6 +1,7 @@
# Python
import os
import logging
import urlparse
import urllib.parse as urlparse
from collections import OrderedDict
# Django
@@ -8,9 +9,10 @@ from django.core.validators import URLValidator
from django.utils.translation import ugettext_lazy as _
# Django REST Framework
from rest_framework.fields import * # noqa
import six
from rest_framework.fields import ( # noqa
BooleanField, CharField, ChoiceField, DictField, EmailField, IntegerField,
ListField, NullBooleanField
)
logger = logging.getLogger('awx.conf.fields')
@@ -71,7 +73,7 @@ class StringListBooleanField(ListField):
return False
elif value in NullBooleanField.NULL_VALUES:
return None
elif isinstance(value, basestring):
elif isinstance(value, str):
return self.child.to_representation(value)
except TypeError:
pass
@@ -88,13 +90,33 @@ class StringListBooleanField(ListField):
return False
elif data in NullBooleanField.NULL_VALUES:
return None
elif isinstance(data, basestring):
elif isinstance(data, str):
return self.child.run_validation(data)
except TypeError:
pass
self.fail('type_error', input_type=type(data))
class StringListPathField(StringListField):
default_error_messages = {
'type_error': _('Expected list of strings but got {input_type} instead.'),
'path_error': _('{path} is not a valid path choice.'),
}
def to_internal_value(self, paths):
if isinstance(paths, (list, tuple)):
for p in paths:
if not isinstance(p, str):
self.fail('type_error', input_type=type(p))
if not os.path.exists(p):
self.fail('path_error', path=p)
return super(StringListPathField, self).to_internal_value(sorted({os.path.normpath(path) for path in paths}))
else:
self.fail('type_error', input_type=type(paths))
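After per-entry validation, to_internal_value() de-duplicates and normalizes the paths in a single expression; the behavior in isolation:

import os

paths = ['/home/', '/home/./.', '/opt//opt/..', '///var']
sorted({os.path.normpath(p) for p in paths})
# -> ['/home', '/opt', '/var']  (duplicates collapse, extra slashes and dots drop)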
class URLField(CharField):
def __init__(self, **kwargs):
@@ -139,7 +161,7 @@ class KeyValueField(DictField):
def to_internal_value(self, data):
ret = super(KeyValueField, self).to_internal_value(data)
for value in data.values():
if not isinstance(value, six.string_types + six.integer_types + (float,)):
if not isinstance(value, (str, int, float)):
if isinstance(value, OrderedDict):
value = dict(value)
self.fail('invalid_child', input=value)

View File

@@ -1,480 +0,0 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.
# Python
import base64
import collections
import difflib
import json
import os
import shutil
# Django
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from django.utils.text import slugify
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
# Tower
from awx import MODE
from awx.conf import settings_registry
from awx.conf.fields import empty, SkipField
from awx.conf.models import Setting
from awx.conf.utils import comment_assignments
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
'category',
nargs='*',
type=str,
)
parser.add_argument(
'--dry-run',
action='store_true',
dest='dry_run',
default=False,
help=_('Only show which settings would be commented/migrated.'),
)
parser.add_argument(
'--skip-errors',
action='store_true',
dest='skip_errors',
default=False,
help=_('Skip over settings that would raise an error when commenting/migrating.'),
)
parser.add_argument(
'--no-comment',
action='store_true',
dest='no_comment',
default=False,
help=_('Skip commenting out settings in files.'),
)
parser.add_argument(
'--comment-only',
action='store_true',
dest='comment_only',
default=False,
help=_('Skip migrating and only comment out settings in files.'),
)
parser.add_argument(
'--backup-suffix',
dest='backup_suffix',
default=now().strftime('.%Y%m%d%H%M%S'),
help=_('Backup existing settings files with this suffix.'),
)
@transaction.atomic
def handle(self, *args, **options):
self.verbosity = int(options.get('verbosity', 1))
self.dry_run = bool(options.get('dry_run', False))
self.skip_errors = bool(options.get('skip_errors', False))
self.no_comment = bool(options.get('no_comment', False))
self.comment_only = bool(options.get('comment_only', False))
self.backup_suffix = options.get('backup_suffix', '')
self.categories = options.get('category', None) or ['all']
self.style.HEADING = self.style.MIGRATE_HEADING
self.style.LABEL = self.style.MIGRATE_LABEL
self.style.OK = self.style.SQL_FIELD
self.style.SKIP = self.style.WARNING
self.style.VALUE = self.style.SQL_KEYWORD
# Determine if any categories provided are invalid.
category_slugs = []
invalid_categories = []
for category in self.categories:
category_slug = slugify(category)
if category_slug in settings_registry.get_registered_categories():
if category_slug not in category_slugs:
category_slugs.append(category_slug)
else:
if category not in invalid_categories:
invalid_categories.append(category)
if len(invalid_categories) == 1:
raise CommandError('Invalid setting category: {}'.format(invalid_categories[0]))
elif len(invalid_categories) > 1:
raise CommandError('Invalid setting categories: {}'.format(', '.join(invalid_categories)))
# Build a list of all settings to be migrated.
registered_settings = []
for category_slug in category_slugs:
for registered_setting in settings_registry.get_registered_settings(category_slug=category_slug, read_only=False):
if registered_setting not in registered_settings:
registered_settings.append(registered_setting)
self._migrate_settings(registered_settings)
def _get_settings_file_patterns(self):
if MODE == 'development':
return [
'/etc/tower/settings.py',
'/etc/tower/conf.d/*.py',
os.path.join(os.path.dirname(__file__), '..', '..', '..', 'settings', 'local_*.py')
]
else:
return [
os.environ.get('AWX_SETTINGS_FILE', '/etc/tower/settings.py'),
os.path.join(os.environ.get('AWX_SETTINGS_DIR', '/etc/tower/conf.d/'), '*.py'),
]
def _get_license_file(self):
return os.environ.get('AWX_LICENSE_FILE', '/etc/tower/license')
def _comment_license_file(self, dry_run=True):
license_file = self._get_license_file()
diff_lines = []
if os.path.exists(license_file):
try:
raw_license_data = open(license_file).read()
json.loads(raw_license_data)
except Exception as e:
raise CommandError('Error reading license from {0}: {1!r}'.format(license_file, e))
if self.backup_suffix:
backup_license_file = '{}{}'.format(license_file, self.backup_suffix)
else:
backup_license_file = '{}.old'.format(license_file)
diff_lines = list(difflib.unified_diff(
raw_license_data.splitlines(),
[],
fromfile=backup_license_file,
tofile=license_file,
lineterm='',
))
if not dry_run:
if self.backup_suffix:
shutil.copy2(license_file, backup_license_file)
os.remove(license_file)
return diff_lines
def _get_local_settings_file(self):
if MODE == 'development':
static_root = os.path.join(os.path.dirname(__file__), '..', '..', '..', 'ui', 'static')
else:
static_root = settings.STATIC_ROOT
return os.path.join(static_root, 'local_settings.json')
def _comment_local_settings_file(self, dry_run=True):
local_settings_file = self._get_local_settings_file()
diff_lines = []
if os.path.exists(local_settings_file):
try:
raw_local_settings_data = open(local_settings_file).read()
json.loads(raw_local_settings_data)
except Exception as e:
if not self.skip_errors:
raise CommandError('Error reading local settings from {0}: {1!r}'.format(local_settings_file, e))
return diff_lines
if self.backup_suffix:
backup_local_settings_file = '{}{}'.format(local_settings_file, self.backup_suffix)
else:
backup_local_settings_file = '{}.old'.format(local_settings_file)
diff_lines = list(difflib.unified_diff(
raw_local_settings_data.splitlines(),
[],
fromfile=backup_local_settings_file,
tofile=local_settings_file,
lineterm='',
))
if not dry_run:
if self.backup_suffix:
shutil.copy2(local_settings_file, backup_local_settings_file)
os.remove(local_settings_file)
return diff_lines
def _get_custom_logo_file(self):
if MODE == 'development':
static_root = os.path.join(os.path.dirname(__file__), '..', '..', '..', 'ui', 'static')
else:
static_root = settings.STATIC_ROOT
return os.path.join(static_root, 'assets', 'custom_console_logo.png')
def _comment_custom_logo_file(self, dry_run=True):
custom_logo_file = self._get_custom_logo_file()
diff_lines = []
if os.path.exists(custom_logo_file):
try:
raw_custom_logo_data = open(custom_logo_file).read()
except Exception as e:
if not self.skip_errors:
raise CommandError('Error reading custom logo from {0}: {1!r}'.format(custom_logo_file, e))
return diff_lines
if self.backup_suffix:
backup_custom_logo_file = '{}{}'.format(custom_logo_file, self.backup_suffix)
else:
backup_custom_logo_file = '{}.old'.format(custom_logo_file)
diff_lines = list(difflib.unified_diff(
['<PNG Image ({} bytes)>'.format(len(raw_custom_logo_data))],
[],
fromfile=backup_custom_logo_file,
tofile=custom_logo_file,
lineterm='',
))
if not dry_run:
if self.backup_suffix:
shutil.copy2(custom_logo_file, backup_custom_logo_file)
os.remove(custom_logo_file)
return diff_lines
def _check_if_needs_comment(self, patterns, setting):
files_to_comment = []
# If any diffs are returned, this setting needs to be commented.
diffs = comment_assignments(patterns, setting, dry_run=True)
if setting == 'LICENSE':
diffs.extend(self._comment_license_file(dry_run=True))
elif setting == 'CUSTOM_LOGIN_INFO':
diffs.extend(self._comment_local_settings_file(dry_run=True))
elif setting == 'CUSTOM_LOGO':
diffs.extend(self._comment_custom_logo_file(dry_run=True))
for diff in diffs:
for line in diff.splitlines():
if line.startswith('+++ '):
files_to_comment.append(line[4:])
return files_to_comment
def _check_if_needs_migration(self, setting):
# Check whether the current value differs from the default.
default_value = settings.DEFAULTS_SNAPSHOT.get(setting, empty)
if default_value is empty and setting != 'LICENSE':
field = settings_registry.get_setting_field(setting, read_only=True)
try:
default_value = field.get_default()
except SkipField:
pass
current_value = getattr(settings, setting, empty)
if setting == 'CUSTOM_LOGIN_INFO' and current_value in {empty, ''}:
local_settings_file = self._get_local_settings_file()
try:
if os.path.exists(local_settings_file):
local_settings = json.load(open(local_settings_file))
current_value = local_settings.get('custom_login_info', '')
except Exception as e:
if not self.skip_errors:
raise CommandError('Error reading custom login info from {0}: {1!r}'.format(local_settings_file, e))
if setting == 'CUSTOM_LOGO' and current_value in {empty, ''}:
custom_logo_file = self._get_custom_logo_file()
try:
if os.path.exists(custom_logo_file):
custom_logo_data = open(custom_logo_file).read()
if custom_logo_data:
current_value = 'data:image/png;base64,{}'.format(base64.b64encode(custom_logo_data))
else:
current_value = ''
except Exception as e:
if not self.skip_errors:
raise CommandError('Error reading custom logo from {0}: {1!r}'.format(custom_logo_file, e))
if current_value != default_value:
if current_value is empty:
current_value = None
return current_value
return empty
def _display_tbd(self, setting, files_to_comment, migrate_value, comment_error=None, migrate_error=None):
if self.verbosity >= 1:
if files_to_comment:
if migrate_value is not empty:
action = 'Migrate + Comment'
else:
action = 'Comment'
if comment_error or migrate_error:
action = self.style.ERROR('{} (skipped)'.format(action))
else:
action = self.style.OK(action)
self.stdout.write(' {}: {}'.format(
self.style.LABEL(setting),
action,
))
if self.verbosity >= 2:
if migrate_error:
self.stdout.write(' - Migrate value: {}'.format(
self.style.ERROR(migrate_error),
))
elif migrate_value is not empty:
self.stdout.write(' - Migrate value: {}'.format(
self.style.VALUE(repr(migrate_value)),
))
if comment_error:
self.stdout.write(' - Comment: {}'.format(
self.style.ERROR(comment_error),
))
elif files_to_comment:
for file_to_comment in files_to_comment:
self.stdout.write(' - Comment in: {}'.format(
self.style.VALUE(file_to_comment),
))
else:
if self.verbosity >= 2:
self.stdout.write(' {}: {}'.format(
self.style.LABEL(setting),
self.style.SKIP('No Migration'),
))
def _display_migrate(self, setting, action, display_value):
if self.verbosity >= 1:
if action == 'No Change':
action = self.style.SKIP(action)
else:
action = self.style.OK(action)
self.stdout.write(' {}: {}'.format(
self.style.LABEL(setting),
action,
))
if self.verbosity >= 2:
for line in display_value.splitlines():
self.stdout.write(' {}'.format(
self.style.VALUE(line),
))
def _display_diff_summary(self, filename, added, removed):
self.stdout.write(' {} {}{} {}{}'.format(
self.style.LABEL(filename),
self.style.ERROR('-'),
self.style.ERROR(int(removed)),
self.style.OK('+'),
self.style.OK(str(added)),
))
def _display_comment(self, diffs):
for diff in diffs:
if self.verbosity >= 2:
for line in diff.splitlines():
display_line = line
if line.startswith('--- ') or line.startswith('+++ '):
display_line = self.style.LABEL(line)
elif line.startswith('-'):
display_line = self.style.ERROR(line)
elif line.startswith('+'):
display_line = self.style.OK(line)
elif line.startswith('@@'):
display_line = self.style.VALUE(line)
if line.startswith('--- ') or line.startswith('+++ '):
self.stdout.write(' ' + display_line)
else:
self.stdout.write(' ' + display_line)
elif self.verbosity >= 1:
filename, lines_added, lines_removed = None, 0, 0
for line in diff.splitlines():
if line.startswith('+++ '):
if filename:
self._display_diff_summary(filename, lines_added, lines_removed)
filename, lines_added, lines_removed = line[4:], 0, 0
elif line.startswith('+'):
lines_added += 1
elif line.startswith('-'):
lines_removed += 1
if filename:
self._display_diff_summary(filename, lines_added, lines_removed)
def _discover_settings(self, registered_settings):
if self.verbosity >= 1:
self.stdout.write(self.style.HEADING('Discovering settings to be migrated and commented:'))
# Determine which settings need to be commented/migrated.
to_migrate = collections.OrderedDict()
to_comment = collections.OrderedDict()
patterns = self._get_settings_file_patterns()
for name in registered_settings:
comment_error, migrate_error = None, None
files_to_comment = []
try:
files_to_comment = self._check_if_needs_comment(patterns, name)
except Exception as e:
comment_error = 'Error commenting {0}: {1!r}'.format(name, e)
if not self.skip_errors:
raise CommandError(comment_error)
if files_to_comment:
to_comment[name] = files_to_comment
migrate_value = empty
if files_to_comment:
migrate_value = self._check_if_needs_migration(name)
if migrate_value is not empty:
field = settings_registry.get_setting_field(name)
assert not field.read_only
try:
data = field.to_representation(migrate_value)
setting_value = field.run_validation(data)
db_value = field.to_representation(setting_value)
to_migrate[name] = db_value
except Exception as e:
to_comment.pop(name)
migrate_error = 'Unable to assign value {0!r} to setting "{1}: {2!s}".'.format(migrate_value, name, e)
if not self.skip_errors:
raise CommandError(migrate_error)
self._display_tbd(name, files_to_comment, migrate_value, comment_error, migrate_error)
if self.verbosity == 1 and not to_migrate and not to_comment:
self.stdout.write(' No settings found to migrate or comment!')
return (to_migrate, to_comment)
def _migrate(self, to_migrate):
if self.verbosity >= 1:
if self.dry_run:
self.stdout.write(self.style.HEADING('Migrating settings to database (dry-run):'))
else:
self.stdout.write(self.style.HEADING('Migrating settings to database:'))
if not to_migrate:
self.stdout.write(' No settings to migrate!')
# Now migrate those settings to the database.
for name, db_value in to_migrate.items():
display_value = json.dumps(db_value, indent=4)
setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
action = 'No Change'
if not setting:
action = 'Migrated'
if not self.dry_run:
Setting.objects.create(key=name, user=None, value=db_value)
elif setting.value != db_value or type(setting.value) != type(db_value):
action = 'Updated'
if not self.dry_run:
setting.value = db_value
setting.save(update_fields=['value'])
self._display_migrate(name, action, display_value)
def _comment(self, to_comment):
if self.verbosity >= 1:
if bool(self.dry_run or self.no_comment):
self.stdout.write(self.style.HEADING('Commenting settings in files (dry-run):'))
else:
self.stdout.write(self.style.HEADING('Commenting settings in files:'))
if not to_comment:
self.stdout.write(' No settings to comment!')
# Now comment settings in settings files.
if to_comment:
to_comment_patterns = []
license_file_to_comment = None
local_settings_file_to_comment = None
custom_logo_file_to_comment = None
for files_to_comment in to_comment.values():
for file_to_comment in files_to_comment:
if file_to_comment == self._get_license_file():
license_file_to_comment = file_to_comment
elif file_to_comment == self._get_local_settings_file():
local_settings_file_to_comment = file_to_comment
elif file_to_comment == self._get_custom_logo_file():
custom_logo_file_to_comment = file_to_comment
elif file_to_comment not in to_comment_patterns:
to_comment_patterns.append(file_to_comment)
# Run once in dry-run mode to catch any errors from updating the files.
diffs = comment_assignments(to_comment_patterns, to_comment.keys(), dry_run=True, backup_suffix=self.backup_suffix)
# Then, if really updating, run again.
if not self.dry_run and not self.no_comment:
diffs = comment_assignments(to_comment_patterns, to_comment.keys(), dry_run=False, backup_suffix=self.backup_suffix)
if license_file_to_comment:
diffs.extend(self._comment_license_file(dry_run=False))
if local_settings_file_to_comment:
diffs.extend(self._comment_local_settings_file(dry_run=False))
if custom_logo_file_to_comment:
diffs.extend(self._comment_custom_logo_file(dry_run=False))
self._display_comment(diffs)
def _migrate_settings(self, registered_settings):
to_migrate, to_comment = self._discover_settings(registered_settings)
if not bool(self.comment_only):
self._migrate(to_migrate)
self._comment(to_comment)

View File

@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
# AWX
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('conf', '0005_v330_rename_two_session_settings'),
]
operations = [
migrations.RunPython(fill_ldap_group_type_params),
]

View File

@@ -0,0 +1,30 @@
import inspect
from django.conf import settings
from django.utils.timezone import now
def fill_ldap_group_type_params(apps, schema_editor):
group_type = settings.AUTH_LDAP_GROUP_TYPE
Setting = apps.get_model('conf', 'Setting')
group_type_params = {'name_attr': 'cn', 'member_attr': 'member'}
qs = Setting.objects.filter(key='AUTH_LDAP_GROUP_TYPE_PARAMS')
entry = None
if qs.exists():
entry = qs[0]
group_type_params = entry.value
else:
entry = Setting(key='AUTH_LDAP_GROUP_TYPE_PARAMS',
value=group_type_params,
created=now(),
modified=now())
init_attrs = set(inspect.getargspec(group_type.__init__).args[1:])
for k in list(group_type_params.keys()):  # copy the keys; entries are deleted below
if k not in init_attrs:
del group_type_params[k]
entry.value = group_type_params
entry.save()
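The pruning loop keeps only keyword arguments that the configured group-type class actually accepts in its __init__; the same filter in isolation (using getfullargspec, since getargspec was removed in later Pythons; the class here is a stand-in):

import inspect

class PosixGroupType(object):                  # stand-in for an LDAP group type
    def __init__(self, name_attr='cn'):
        self.name_attr = name_attr

params = {'name_attr': 'cn', 'member_attr': 'member'}
accepted = set(inspect.getfullargspec(PosixGroupType.__init__).args[1:])
params = {k: v for k, v in params.items() if k in accepted}
# -> {'name_attr': 'cn'}; 'member_attr' is dropped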

View File

@@ -1,7 +1,6 @@
import base64
import hashlib
import six
from django.utils.encoding import smart_str
from cryptography.hazmat.backends import default_backend
@@ -91,7 +90,7 @@ def encrypt_field(instance, field_name, ask=False, subfield=None, skip_utf8=Fals
if skip_utf8:
utf8 = False
else:
utf8 = type(value) == six.text_type
utf8 = type(value) == str
value = smart_str(value)
key = get_encryption_key(field_name, getattr(instance, 'pk', None))
encryptor = Cipher(AES(key), ECB(), default_backend()).encryptor()

View File

@@ -33,7 +33,7 @@ class Setting(CreatedModifiedModel):
on_delete=models.CASCADE,
))
def __unicode__(self):
def __str__(self):
try:
json_value = json.dumps(self.value)
except ValueError:

View File

@@ -1,8 +1,6 @@
# Django REST Framework
from rest_framework import serializers
import six
# Tower
from awx.api.fields import VerbatimField
from awx.api.serializers import BaseSerializer
@@ -47,12 +45,12 @@ class SettingFieldMixin(object):
"""Mixin to use a registered setting field class for API display/validation."""
def to_representation(self, obj):
if getattr(self, 'encrypted', False) and isinstance(obj, six.string_types) and obj:
if getattr(self, 'encrypted', False) and isinstance(obj, str) and obj:
return '$encrypted$'
return obj
def to_internal_value(self, value):
if getattr(self, 'encrypted', False) and isinstance(value, six.string_types) and value.startswith('$encrypted$'):
if getattr(self, 'encrypted', False) and isinstance(value, str) and value.startswith('$encrypted$'):
raise serializers.SkipField()
obj = super(SettingFieldMixin, self).to_internal_value(value)
return super(SettingFieldMixin, self).to_representation(obj)

View File

@@ -6,18 +6,17 @@ import re
import sys
import threading
import time
import StringIO
import traceback
import urllib
import six
import urllib.parse
from io import StringIO
# Django
from django.conf import LazySettings
from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured
from django.db import ProgrammingError, OperationalError, transaction, connection
from django.db import transaction, connection
from django.db.utils import Error as DBError
from django.utils.functional import cached_property
# Django REST Framework
@@ -67,7 +66,7 @@ def normalize_broker_url(value):
match = re.search('(amqp://[^:]+:)(.*)', parts[0])
if match:
prefix, password = match.group(1), match.group(2)
parts[0] = prefix + urllib.quote(password)
parts[0] = prefix + urllib.parse.quote(password)
return '@'.join(parts)
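The switch to urllib.parse.quote keeps the old behavior of percent-encoding only the password segment of the broker URL; for example:

import urllib.parse

urllib.parse.quote('p@ss:word')
# -> 'p%40ss%3Aword'; note quote() leaves '/' unescaped by default
# (pass safe='' if the password may contain slashes)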
@@ -90,21 +89,21 @@ def _ctit_db_wrapper(trans_safe=False):
logger.debug('Obtaining database settings in spite of broken transaction.')
transaction.set_rollback(False)
yield
except (ProgrammingError, OperationalError):
except DBError:
if 'migrate' in sys.argv and get_tower_migration_version() < '310':
logger.info('Using default settings until version 3.1 migration.')
else:
# We want the _full_ traceback with the context
# First we get the current call stack, which constitutes the "top",
# it has the context up to the point where the context manager is used
top_stack = StringIO.StringIO()
top_stack = StringIO()
traceback.print_stack(file=top_stack)
top_lines = top_stack.getvalue().strip('\n').split('\n')
top_stack.close()
# Get "bottom" stack from the local error that happened
# inside of the "with" block this wraps
exc_type, exc_value, exc_traceback = sys.exc_info()
bottom_stack = StringIO.StringIO()
bottom_stack = StringIO()
traceback.print_tb(exc_traceback, file=bottom_stack)
bottom_lines = bottom_stack.getvalue().strip('\n').split('\n')
# Glue together top and bottom where overlap is found
@@ -168,15 +167,6 @@ class EncryptedCacheProxy(object):
def get(self, key, **kwargs):
value = self.cache.get(key, **kwargs)
value = self._handle_encryption(self.decrypter, key, value)
# python-memcached auto-encodes unicode on cache set in python2
# https://github.com/linsomniac/python-memcached/issues/79
# https://github.com/linsomniac/python-memcached/blob/288c159720eebcdf667727a859ef341f1e908308/memcache.py#L961
if six.PY2 and isinstance(value, six.binary_type):
try:
six.text_type(value)
except UnicodeDecodeError:
value = value.decode('utf-8')
logger.debug('cache get(%r, %r) -> %r', key, empty, filter_sensitive(self.registry, key, value))
return value
@@ -309,7 +299,7 @@ class SettingsWrapper(UserSettingsHolder):
self.__dict__['_awx_conf_preload_expires'] = time.time() + SETTING_CACHE_TIMEOUT
# Check for any settings that have been defined in Python files and
# make those read-only to avoid overriding in the database.
if not self._awx_conf_init_readonly and 'migrate_to_database_settings' not in sys.argv:
if not self._awx_conf_init_readonly:
defaults_snapshot = self._get_default('DEFAULTS_SNAPSHOT')
for key in get_writeable_settings(self.registry):
init_default = defaults_snapshot.get(key, None)

View File

@@ -9,15 +9,11 @@ from django.core.cache import cache
from django.dispatch import receiver
# Tower
import awx.main.signals
from awx.conf import settings_registry
from awx.conf.models import Setting
from awx.conf.serializers import SettingSerializer
logger = logging.getLogger('awx.conf.signals')
awx.main.signals.model_serializer_mapping[Setting] = SettingSerializer
__all__ = []

View File

@@ -1,7 +1,8 @@
import urllib.parse
import pytest
from django.core.urlresolvers import resolve
from django.utils.six.moves.urllib.parse import urlparse
from django.contrib.auth.models import User
from rest_framework.test import (
@@ -33,7 +34,7 @@ def admin():
@pytest.fixture
def api_request(admin):
def rf(verb, url, data=None, user=admin):
view, view_args, view_kwargs = resolve(urlparse(url)[2])
view, view_args, view_kwargs = resolve(urllib.parse.urlparse(url)[2])
request = getattr(APIRequestFactory(), verb)(url, data=data, format='json')
if user:
force_authenticate(request, user=user)

View File

@@ -1,5 +1,5 @@
import pytest
import mock
from unittest import mock
from rest_framework import serializers

View File

@@ -1,38 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
import pytest
import mock
from django.apps import apps
from awx.conf.migrations._reencrypt import (
replace_aesecb_fernet,
encrypt_field,
decrypt_field,
)
from awx.conf.settings import Setting
from awx.main.utils import decrypt_field as new_decrypt_field
@pytest.mark.django_db
@pytest.mark.parametrize("old_enc, new_enc, value", [
('$encrypted$UTF8$AES', '$encrypted$UTF8$AESCBC$', u'Iñtërnâtiônàlizætiøn'),
('$encrypted$AES$', '$encrypted$AESCBC$', 'test'),
])
def test_settings(old_enc, new_enc, value):
with mock.patch('awx.conf.models.encrypt_field', encrypt_field):
with mock.patch('awx.conf.settings.decrypt_field', decrypt_field):
setting = Setting.objects.create(key='SOCIAL_AUTH_GITHUB_SECRET', value=value)
assert setting.value.startswith(old_enc)
replace_aesecb_fernet(apps, None)
setting.refresh_from_db()
assert setting.value.startswith(new_enc)
assert new_decrypt_field(setting, 'value') == value
# This is here for a side-effect.
# Exception if the encryption type of AESCBC is not properly skipped, ensures
# our `startswith` calls don't have typos
replace_aesecb_fernet(apps, None)

View File

@@ -1,7 +1,7 @@
import pytest
from rest_framework.fields import ValidationError
from awx.conf.fields import StringListBooleanField, ListTuplesField
from awx.conf.fields import StringListBooleanField, StringListPathField, ListTuplesField
class TestStringListBooleanField():
@@ -84,3 +84,49 @@ class TestListTuplesField():
assert e.value.detail[0] == "Expected a list of tuples of max length 2 " \
"but got {} instead.".format(t)
class TestStringListPathField():
FIELD_VALUES = [
((".", "..", "/"), [".", "..", "/"]),
(("/home",), ["/home"]),
(("///home///",), ["/home"]),
(("/home/././././",), ["/home"]),
(("/home", "/home", "/home/"), ["/home"]),
(["/home/", "/home/", "/opt/", "/opt/", "/var/"], ["/home", "/opt", "/var"])
]
FIELD_VALUES_INVALID_TYPE = [
1.245,
{"a": "b"},
("/home"),
]
FIELD_VALUES_INVALID_PATH = [
"",
"~/",
"home",
"/invalid_path",
"/home/invalid_path",
]
@pytest.mark.parametrize("value_in, value_known", FIELD_VALUES)
def test_to_internal_value_valid(self, value_in, value_known):
field = StringListPathField()
v = field.to_internal_value(value_in)
assert v == value_known
@pytest.mark.parametrize("value", FIELD_VALUES_INVALID_TYPE)
def test_to_internal_value_invalid_type(self, value):
field = StringListPathField()
with pytest.raises(ValidationError) as e:
field.to_internal_value(value)
assert e.value.detail[0] == "Expected list of strings but got {} instead.".format(type(value))
@pytest.mark.parametrize("value", FIELD_VALUES_INVALID_PATH)
def test_to_internal_value_invalid_path(self, value):
field = StringListPathField()
with pytest.raises(ValidationError) as e:
field.to_internal_value([value])
assert e.value.detail[0] == "{} is not a valid path choice.".format(value)

View File

@@ -4,6 +4,7 @@
# All Rights Reserved.
from contextlib import contextmanager
import codecs
from uuid import uuid4
import time
@@ -12,7 +13,6 @@ from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
from django.utils.translation import ugettext_lazy as _
import pytest
import six
from awx.conf import models, fields
from awx.conf.settings import SettingsWrapper, EncryptedCacheProxy, SETTING_CACHE_NOTSET
@@ -67,9 +67,9 @@ def test_cached_settings_unicode_is_auto_decoded(settings):
# https://github.com/linsomniac/python-memcached/issues/79
# https://github.com/linsomniac/python-memcached/blob/288c159720eebcdf667727a859ef341f1e908308/memcache.py#L961
value = six.u('Iñtërnâtiônàlizætiøn').encode('utf-8') # this simulates what python-memcached does on cache.set()
value = 'Iñtërnâtiônàlizætiøn' # this simulates what python-memcached does on cache.set()
settings.cache.set('DEBUG', value)
assert settings.cache.get('DEBUG') == six.u('Iñtërnâtiônàlizætiøn')
assert settings.cache.get('DEBUG') == 'Iñtërnâtiônàlizætiøn'
def test_read_only_setting(settings):
@@ -262,7 +262,7 @@ def test_setting_from_db_with_unicode(settings, mocker, encrypted):
encrypted=encrypted
)
# this simulates a bug in python-memcached; see https://github.com/linsomniac/python-memcached/issues/79
value = six.u('Iñtërnâtiônàlizætiøn').encode('utf-8')
value = 'Iñtërnâtiônàlizætiøn'
setting_from_db = mocker.Mock(id=1, key='AWX_SOME_SETTING', value=value)
mocks = mocker.Mock(**{
@@ -272,8 +272,8 @@ def test_setting_from_db_with_unicode(settings, mocker, encrypted):
}),
})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
assert settings.AWX_SOME_SETTING == six.u('Iñtërnâtiônàlizætiøn')
assert settings.cache.get('AWX_SOME_SETTING') == six.u('Iñtërnâtiônàlizætiøn')
assert settings.AWX_SOME_SETTING == 'Iñtërnâtiônàlizætiøn'
assert settings.cache.get('AWX_SOME_SETTING') == 'Iñtërnâtiônàlizætiøn'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -434,7 +434,7 @@ def test_sensitive_cache_data_is_encrypted(settings, mocker):
def rot13(obj, attribute):
assert obj.pk == 123
return getattr(obj, attribute).encode('rot13')
return codecs.encode(getattr(obj, attribute), 'rot_13')
native_cache = LocMemCache(str(uuid4()), {})
cache = EncryptedCacheProxy(
@@ -471,7 +471,7 @@ def test_readonly_sensitive_cache_data_is_encrypted(settings):
def rot13(obj, attribute):
assert obj.pk is None
return getattr(obj, attribute).encode('rot13')
return codecs.encode(getattr(obj, attribute), 'rot_13')
native_cache = LocMemCache(str(uuid4()), {})
cache = EncryptedCacheProxy(

View File

@@ -1,110 +1,9 @@
#!/usr/bin/env python
# Python
import difflib
import glob
import os
import shutil
import six
# AWX
from awx.conf.registry import settings_registry
__all__ = ['comment_assignments', 'conf_to_dict']
def comment_assignments(patterns, assignment_names, dry_run=True, backup_suffix='.old'):
if isinstance(patterns, six.string_types):
patterns = [patterns]
diffs = []
for pattern in patterns:
for filename in sorted(glob.glob(pattern)):
filename = os.path.abspath(os.path.normpath(filename))
if backup_suffix:
backup_filename = '{}{}'.format(filename, backup_suffix)
else:
backup_filename = None
diff = comment_assignments_in_file(filename, assignment_names, dry_run, backup_filename)
if diff:
diffs.append(diff)
return diffs
def comment_assignments_in_file(filename, assignment_names, dry_run=True, backup_filename=None):
from redbaron import RedBaron, indent
if isinstance(assignment_names, six.string_types):
assignment_names = [assignment_names]
else:
assignment_names = assignment_names[:]
current_file_data = open(filename).read()
for assignment_name in assignment_names[:]:
if assignment_name in current_file_data:
continue
if assignment_name in assignment_names:
assignment_names.remove(assignment_name)
if not assignment_names:
return ''
replace_lines = {}
rb = RedBaron(current_file_data)
for assignment_node in rb.find_all('assignment'):
for assignment_name in assignment_names:
# Only target direct assignments to a variable.
name_node = assignment_node.find('name', value=assignment_name)
if not name_node:
continue
if assignment_node.target.type != 'name':
continue
# Build a new node that comments out the existing assignment node.
indentation = '{}# '.format(assignment_node.indentation or '')
new_node_content = indent(assignment_node.dumps(), indentation)
new_node_lines = new_node_content.splitlines()
# Add a pass statement in case the assignment block is the only
# child in a parent code block to prevent a syntax error.
if assignment_node.indentation:
new_node_lines[0] = new_node_lines[0].replace(indentation, '{}pass # '.format(assignment_node.indentation or ''), 1)
new_node_lines[0] = '{0}This setting is now configured via the Tower API.\n{1}'.format(indentation, new_node_lines[0])
# Store new node lines in dictionary to be replaced in file.
start_lineno = assignment_node.absolute_bounding_box.top_left.line
end_lineno = assignment_node.absolute_bounding_box.bottom_right.line
for n, new_node_line in enumerate(new_node_lines):
new_lineno = start_lineno + n
assert new_lineno <= end_lineno
replace_lines[new_lineno] = new_node_line
if not replace_lines:
return ''
# Iterate through all lines in current file and replace as needed.
current_file_lines = current_file_data.splitlines()
new_file_lines = []
for n, line in enumerate(current_file_lines):
new_file_lines.append(replace_lines.get(n + 1, line))
new_file_data = '\n'.join(new_file_lines)
new_file_lines = new_file_data.splitlines()
# If changed, syntax check and write the new file; return a diff of changes.
diff_lines = []
if new_file_data != current_file_data:
compile(new_file_data, filename, 'exec')
if backup_filename:
from_file = backup_filename
else:
from_file = '{}.old'.format(filename)
to_file = filename
diff_lines = list(difflib.unified_diff(current_file_lines, new_file_lines, fromfile=from_file, tofile=to_file, lineterm=''))
if not dry_run:
if backup_filename:
shutil.copy2(filename, backup_filename)
with open(filename, 'wb') as fileobj:
fileobj.write(new_file_data)
return '\n'.join(diff_lines)
__all__ = ['conf_to_dict']
def conf_to_dict(obj):
@@ -112,10 +11,3 @@ def conf_to_dict(obj):
'category': settings_registry.get_setting_category(obj.key),
'name': obj.key,
}
if __name__ == '__main__':
pattern = os.path.join(os.path.dirname(__file__), '..', 'settings', 'local_*.py')
diffs = comment_assignments(pattern, ['AUTH_LDAP_ORGANIZATION_MAP'])
for diff in diffs:
print(diff)

View File

@@ -17,10 +17,15 @@ from rest_framework import serializers
from rest_framework import status
# Tower
from awx.api.generics import * # noqa
from awx.api.generics import (
APIView,
GenericAPIView,
ListAPIView,
RetrieveUpdateDestroyAPIView,
)
from awx.api.permissions import IsSuperUser
from awx.api.versioning import reverse, get_request_version
from awx.main.utils import * # noqa
from awx.main.utils import camelcase_to_underscore
from awx.main.utils.handlers import AWXProxyHandler, LoggingConnectivityException
from awx.main.tasks import handle_setting_changes
from awx.conf.license import get_licensed_features
@@ -72,7 +77,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
def get_queryset(self):
self.category_slug = self.kwargs.get('category_slug', 'all')
all_category_slugs = settings_registry.get_registered_categories(features_enabled=get_licensed_features()).keys()
all_category_slugs = list(settings_registry.get_registered_categories(features_enabled=get_licensed_features()).keys())
for slug_to_delete in VERSION_SPECIFIC_CATEGORIES_TO_EXCLUDE[get_request_version(self.request)]:
all_category_slugs.remove(slug_to_delete)
if self.request.user.is_superuser or getattr(self.request.user, 'is_system_auditor', False):
@@ -123,7 +128,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
if key == 'LICENSE' or settings_registry.is_setting_read_only(key):
continue
if settings_registry.is_setting_encrypted(key) and \
isinstance(value, basestring) and \
isinstance(value, str) and \
value.startswith('$encrypted$'):
continue
setattr(serializer.instance, key, value)
@@ -135,7 +140,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.value = value
setting.save(update_fields=['value'])
settings_change_list.append(key)
if settings_change_list and 'migrate_to_database_settings' not in sys.argv:
if settings_change_list:
handle_setting_changes.delay(settings_change_list)
def destroy(self, request, *args, **kwargs):
@@ -150,7 +155,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
continue
setting.delete()
settings_change_list.append(setting.key)
if settings_change_list and 'migrate_to_database_settings' not in sys.argv:
if settings_change_list:
handle_setting_changes.delay(settings_change_list)
# When TOWER_URL_BASE is deleted from the API, reset it to the hostname
@@ -210,7 +215,7 @@ class SettingLoggingTest(GenericAPIView):
# in URL patterns and reverse URL lookups, converting CamelCase names to
# lowercase_with_underscore (e.g. MyView.as_view() becomes my_view).
this_module = sys.modules[__name__]
for attr, value in locals().items():
for attr, value in list(locals().items()):
if isinstance(value, type) and issubclass(value, APIView):
name = camelcase_to_underscore(attr)
view = value.as_view()
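The registration loop at the bottom of the module exposes each APIView subclass under a snake_case name, as the comment above describes. A hedged, illustrative reimplementation of the name conversion (the real camelcase_to_underscore lives in awx.main.utils; this regex version only sketches the idea):

import re

def camelcase_to_underscore(name):
    # Insert '_' before each interior upper-case letter, then lower-case.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

camelcase_to_underscore('SettingSingletonDetail')   # -> 'setting_singleton_detail'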

View File

@@ -1,25 +0,0 @@
# Copyright (c) 2016 Ansible by Red Hat, Inc.
#
# This file is part of Ansible Tower, but depends on code imported from Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
# AWX Display Callback
from . import cleanup # noqa (registers control persistent cleanup)
from . import display # noqa (wraps ansible.display.Display methods)
from .module import AWXDefaultCallbackModule, AWXMinimalCallbackModule
__all__ = ['AWXDefaultCallbackModule', 'AWXMinimalCallbackModule']

View File

@@ -1,85 +0,0 @@
# Copyright (c) 2016 Ansible by Red Hat, Inc.
#
# This file is part of Ansible Tower, but depends on code imported from Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
# Python
import atexit
import glob
import os
import pwd
# PSUtil
try:
import psutil
except ImportError:
raise ImportError('psutil is missing; {}bin/pip install psutil'.format(
os.environ['VIRTUAL_ENV']
))
__all__ = []
main_pid = os.getpid()
@atexit.register
def terminate_ssh_control_masters():
# Only run this cleanup from the main process.
if os.getpid() != main_pid:
return
# Determine if control persist is being used and if any open sockets
# exist after running the playbook.
cp_path = os.environ.get('ANSIBLE_SSH_CONTROL_PATH', '')
if not cp_path:
return
cp_dir = os.path.dirname(cp_path)
if not os.path.exists(cp_dir):
return
cp_pattern = os.path.join(cp_dir, 'ansible-ssh-*')
cp_files = glob.glob(cp_pattern)
if not cp_files:
return
# Attempt to find any running control master processes.
username = pwd.getpwuid(os.getuid())[0]
ssh_cm_procs = []
for proc in psutil.process_iter():
try:
pname = proc.name()
pcmdline = proc.cmdline()
pusername = proc.username()
except psutil.NoSuchProcess:
continue
if pusername != username:
continue
if pname != 'ssh':
continue
for cp_file in cp_files:
if pcmdline and cp_file in pcmdline[0]:
ssh_cm_procs.append(proc)
break
# Terminate then kill control master processes. Workaround older
# version of psutil that may not have wait_procs implemented.
for proc in ssh_cm_procs:
try:
proc.terminate()
except psutil.NoSuchProcess:
continue
procs_gone, procs_alive = psutil.wait_procs(ssh_cm_procs, timeout=5)
for proc in procs_alive:
proc.kill()

View File

@@ -1,98 +0,0 @@
# Copyright (c) 2016 Ansible by Red Hat, Inc.
#
# This file is part of Ansible Tower, but depends on code imported from Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
# Python
import functools
import sys
import uuid
# Ansible
from ansible.utils.display import Display
# Tower Display Callback
from .events import event_context
__all__ = []
def with_context(**context):
global event_context
def wrap(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
with event_context.set_local(**context):
return f(*args, **kwargs)
return wrapper
return wrap
for attr in dir(Display):
if attr.startswith('_') or 'cow' in attr or 'prompt' in attr:
continue
if attr in ('display', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv', 'verbose'):
continue
if not callable(getattr(Display, attr)):
continue
setattr(Display, attr, with_context(**{attr: True})(getattr(Display, attr)))
def with_verbosity(f):
global event_context
@functools.wraps(f)
def wrapper(*args, **kwargs):
host = args[2] if len(args) >= 3 else kwargs.get('host', None)
caplevel = args[3] if len(args) >= 4 else kwargs.get('caplevel', 2)
context = dict(verbose=True, verbosity=(caplevel + 1))
if host is not None:
context['remote_addr'] = host
with event_context.set_local(**context):
return f(*args, **kwargs)
return wrapper
Display.verbose = with_verbosity(Display.verbose)
def display_with_context(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
log_only = args[5] if len(args) >= 6 else kwargs.get('log_only', False)
stderr = args[3] if len(args) >= 4 else kwargs.get('stderr', False)
event_uuid = event_context.get().get('uuid', None)
with event_context.display_lock:
# If writing only to a log file or there is already an event UUID
# set (from a callback module method), skip dumping the event data.
if log_only or event_uuid:
return f(*args, **kwargs)
try:
fileobj = sys.stderr if stderr else sys.stdout
event_context.add_local(uuid=str(uuid.uuid4()))
event_context.dump_begin(fileobj)
return f(*args, **kwargs)
finally:
event_context.dump_end(fileobj)
event_context.remove_local(uuid=None)
return wrapper
Display.display = display_with_context(Display.display)

View File

@@ -1,189 +0,0 @@
# Copyright (c) 2016 Ansible by Red Hat, Inc.
#
# This file is part of Ansible Tower, but depends on code imported from Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
# Python
import base64
import contextlib
import datetime
import json
import multiprocessing
import os
import stat
import threading
import uuid
try:
import memcache
except ImportError:
raise ImportError('python-memcached is missing; {}bin/pip install python-memcached'.format(
os.environ['VIRTUAL_ENV']
))
from six.moves import xrange
__all__ = ['event_context']
class IsolatedFileWrite:
'''
Stand-in class that will write partial event data to a file as a
replacement for memcache when a job is running on an isolated host.
'''
def __init__(self):
self.private_data_dir = os.getenv('AWX_ISOLATED_DATA_DIR')
def set(self, key, value):
# Strip off the leading memcache key identifying characters :1:ev-
event_uuid = key[len(':1:ev-'):]
# Write data in a staging area and then atomic move to pickup directory
filename = '{}-partial.json'.format(event_uuid)
dropoff_location = os.path.join(self.private_data_dir, 'artifacts', 'job_events', filename)
write_location = '.'.join([dropoff_location, 'tmp'])
partial_data = json.dumps(value)
with os.fdopen(os.open(write_location, os.O_WRONLY | os.O_CREAT, stat.S_IRUSR | stat.S_IWUSR), 'w') as f:
f.write(partial_data)
os.rename(write_location, dropoff_location)
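The write-to-temp-then-rename dance above is the usual atomic-drop pattern: consumers watching the pickup directory never see a half-written file. A generic sketch of the same idea:

import json
import os
import stat

def atomic_json_drop(data, final_path):
    # Stage next to the destination so os.rename() stays on one filesystem,
    # where POSIX guarantees the rename is atomic.
    tmp_path = final_path + '.tmp'
    fd = os.open(tmp_path, os.O_WRONLY | os.O_CREAT, stat.S_IRUSR | stat.S_IWUSR)
    with os.fdopen(fd, 'w') as f:
        f.write(json.dumps(data))
    os.rename(tmp_path, final_path)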
class EventContext(object):
'''
Store global and local (per thread/process) data associated with callback
events and other display output methods.
'''
def __init__(self):
self.display_lock = multiprocessing.RLock()
cache_actual = os.getenv('CACHE', '127.0.0.1:11211')
if os.getenv('AWX_ISOLATED_DATA_DIR', False):
self.cache = IsolatedFileWrite()
else:
self.cache = memcache.Client([cache_actual], debug=0)
def add_local(self, **kwargs):
if not hasattr(self, '_local'):
self._local = threading.local()
self._local._ctx = {}
self._local._ctx.update(kwargs)
def remove_local(self, **kwargs):
if hasattr(self, '_local'):
for key in kwargs.keys():
self._local._ctx.pop(key, None)
@contextlib.contextmanager
def set_local(self, **kwargs):
try:
self.add_local(**kwargs)
yield
finally:
self.remove_local(**kwargs)
def get_local(self):
return getattr(getattr(self, '_local', None), '_ctx', {})
def add_global(self, **kwargs):
if not hasattr(self, '_global_ctx'):
self._global_ctx = {}
self._global_ctx.update(kwargs)
def remove_global(self, **kwargs):
if hasattr(self, '_global_ctx'):
for key in kwargs.keys():
self._global_ctx.pop(key, None)
@contextlib.contextmanager
def set_global(self, **kwargs):
try:
self.add_global(**kwargs)
yield
finally:
self.remove_global(**kwargs)
def get_global(self):
return getattr(self, '_global_ctx', {})
def get(self):
ctx = {}
ctx.update(self.get_global())
ctx.update(self.get_local())
return ctx
def get_begin_dict(self):
event_data = self.get()
if os.getenv('JOB_ID', ''):
event_data['job_id'] = int(os.getenv('JOB_ID', '0'))
if os.getenv('AD_HOC_COMMAND_ID', ''):
event_data['ad_hoc_command_id'] = int(os.getenv('AD_HOC_COMMAND_ID', '0'))
if os.getenv('PROJECT_UPDATE_ID', ''):
event_data['project_update_id'] = int(os.getenv('PROJECT_UPDATE_ID', '0'))
event_data.setdefault('pid', os.getpid())
event_data.setdefault('uuid', str(uuid.uuid4()))
event_data.setdefault('created', datetime.datetime.utcnow().isoformat())
if not event_data.get('parent_uuid', None) and event_data.get('job_id', None):
for key in ('task_uuid', 'play_uuid', 'playbook_uuid'):
parent_uuid = event_data.get(key, None)
if parent_uuid and parent_uuid != event_data.get('uuid', None):
event_data['parent_uuid'] = parent_uuid
break
event = event_data.pop('event', None)
if not event:
event = 'verbose'
for key in ('debug', 'verbose', 'deprecated', 'warning', 'system_warning', 'error'):
if event_data.get(key, False):
event = key
break
max_res = int(os.getenv("MAX_EVENT_RES", 700000))
if event not in ('playbook_on_stats',) and "res" in event_data and len(str(event_data['res'])) > max_res:
event_data['res'] = {}
event_dict = dict(event=event, event_data=event_data)
for key in event_data.keys():
if key in ('job_id', 'ad_hoc_command_id', 'project_update_id', 'uuid', 'parent_uuid', 'created',):
event_dict[key] = event_data.pop(key)
elif key in ('verbosity', 'pid'):
event_dict[key] = event_data[key]
return event_dict
def get_end_dict(self):
return {}
def dump(self, fileobj, data, max_width=78, flush=False):
b64data = base64.b64encode(json.dumps(data))
with self.display_lock:
# pattern corresponding to OutputEventFilter expectation
fileobj.write(u'\x1b[K')
for offset in xrange(0, len(b64data), max_width):
chunk = b64data[offset:offset + max_width]
escaped_chunk = u'{}\x1b[{}D'.format(chunk, len(chunk))
fileobj.write(escaped_chunk)
fileobj.write(u'\x1b[K')
if flush:
fileobj.flush()
def dump_begin(self, fileobj):
begin_dict = self.get_begin_dict()
self.cache.set(":1:ev-{}".format(begin_dict['uuid']), begin_dict)
self.dump(fileobj, {'uuid': begin_dict['uuid']})
def dump_end(self, fileobj):
self.dump(fileobj, self.get_end_dict(), flush=True)
event_context = EventContext()
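The module above frames each event as base64-encoded JSON wrapped in ANSI escape sequences, so the records ride along in ordinary playbook stdout without disturbing the visible output. A minimal, self-contained sketch of that round trip: the frame helper mirrors EventContext.dump, while the unframe helper is an assumption about how a consumer such as AWX's OutputEventFilter could recover the payload, not its actual implementation.

import base64
import json
import re

def frame(data, max_width=78):
    # mirror of EventContext.dump: a clear-line marker, then base64
    # chunks each followed by a cursor-back escape of the chunk length
    b64 = base64.b64encode(json.dumps(data).encode('utf-8')).decode('ascii')
    out = u'\x1b[K'
    for offset in range(0, len(b64), max_width):
        chunk = b64[offset:offset + max_width]
        out += u'{}\x1b[{}D'.format(chunk, len(chunk))
    return out + u'\x1b[K'

def unframe(text):
    # strip the escape sequences and decode the remaining base64 payload
    payload = re.sub(r'\x1b\[K|\x1b\[\d+D', '', text)
    return json.loads(base64.b64decode(payload))

assert unframe(frame({'uuid': 'abc'})) == {'uuid': 'abc'}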

View File

@@ -1,29 +0,0 @@
# Copyright (c) 2016 Ansible by Red Hat, Inc.
#
# This file is part of Ansible Tower, but depends on code imported from Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
# Python
import os
# Ansible
import ansible
# Because of the way Ansible loads plugins, it's not possible to import
# ansible.plugins.callback.minimal when being loaded as the minimal plugin. Ugh.
with open(os.path.join(os.path.dirname(ansible.__file__), 'plugins', 'callback', 'minimal.py')) as in_file:
exec(in_file.read())

View File

@@ -1,492 +0,0 @@
# Copyright (c) 2016 Ansible by Red Hat, Inc.
#
# This file is part of Ansible Tower, but depends on code imported from Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
# Python
import codecs
import contextlib
import json
import os
import stat
import sys
import uuid
from copy import copy
# Ansible
from ansible import constants as C
from ansible.plugins.callback import CallbackBase
from ansible.plugins.callback.default import CallbackModule as DefaultCallbackModule
# AWX Display Callback
from .events import event_context
from .minimal import CallbackModule as MinimalCallbackModule
CENSORED = "the output has been hidden due to the fact that 'no_log: true' was specified for this result" # noqa
class BaseCallbackModule(CallbackBase):
'''
Callback module for logging ansible/ansible-playbook events.
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
# These events should never have an associated play.
EVENTS_WITHOUT_PLAY = [
'playbook_on_start',
'playbook_on_stats',
]
# These events should never have an associated task.
EVENTS_WITHOUT_TASK = EVENTS_WITHOUT_PLAY + [
'playbook_on_setup',
'playbook_on_notify',
'playbook_on_import_for_host',
'playbook_on_not_import_for_host',
'playbook_on_no_hosts_matched',
'playbook_on_no_hosts_remaining',
]
def __init__(self):
super(BaseCallbackModule, self).__init__()
self.task_uuids = set()
@contextlib.contextmanager
def capture_event_data(self, event, **event_data):
event_data.setdefault('uuid', str(uuid.uuid4()))
if event not in self.EVENTS_WITHOUT_TASK:
task = event_data.pop('task', None)
else:
task = None
if event_data.get('res'):
if event_data['res'].get('_ansible_no_log', False):
event_data['res'] = {'censored': CENSORED}
if event_data['res'].get('results', []):
event_data['res']['results'] = copy(event_data['res']['results'])
for i, item in enumerate(event_data['res'].get('results', [])):
if isinstance(item, dict) and item.get('_ansible_no_log', False):
event_data['res']['results'][i] = {'censored': CENSORED}
with event_context.display_lock:
try:
event_context.add_local(event=event, **event_data)
if task:
self.set_task(task, local=True)
event_context.dump_begin(sys.stdout)
yield
finally:
event_context.dump_end(sys.stdout)
if task:
self.clear_task(local=True)
event_context.remove_local(event=None, **event_data)
def set_playbook(self, playbook):
# NOTE: Ansible doesn't generate a UUID for playbook_on_start so do it for them.
self.playbook_uuid = str(uuid.uuid4())
file_name = getattr(playbook, '_file_name', '???')
event_context.add_global(playbook=file_name, playbook_uuid=self.playbook_uuid)
self.clear_play()
def set_play(self, play):
if hasattr(play, 'hosts'):
if isinstance(play.hosts, list):
pattern = ','.join(play.hosts)
else:
pattern = play.hosts
else:
pattern = ''
name = play.get_name().strip() or pattern
event_context.add_global(play=name, play_uuid=str(play._uuid), play_pattern=pattern)
self.clear_task()
def clear_play(self):
event_context.remove_global(play=None, play_uuid=None, play_pattern=None)
self.clear_task()
def set_task(self, task, local=False):
# FIXME: Task is "global" unless using free strategy!
task_ctx = dict(
task=(task.name or task.action),
task_uuid=str(task._uuid),
task_action=task.action,
task_args='',
)
try:
task_ctx['task_path'] = task.get_path()
except AttributeError:
pass
if C.DISPLAY_ARGS_TO_STDOUT:
if task.no_log:
task_ctx['task_args'] = "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
else:
task_args = ', '.join(('%s=%s' % a for a in task.args.items()))
task_ctx['task_args'] = task_args
if getattr(task, '_role', None):
task_role = task._role._role_name
else:
task_role = getattr(task, 'role_name', '')
if task_role:
task_ctx['role'] = task_role
if local:
event_context.add_local(**task_ctx)
else:
event_context.add_global(**task_ctx)
def clear_task(self, local=False):
task_ctx = dict(task=None, task_path=None, task_uuid=None, task_action=None, task_args=None, role=None)
if local:
event_context.remove_local(**task_ctx)
else:
event_context.remove_global(**task_ctx)
def v2_playbook_on_start(self, playbook):
self.set_playbook(playbook)
event_data = dict(
uuid=self.playbook_uuid,
)
with self.capture_event_data('playbook_on_start', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_start(playbook)
def v2_playbook_on_vars_prompt(self, varname, private=True, prompt=None,
encrypt=None, confirm=False, salt_size=None,
salt=None, default=None):
event_data = dict(
varname=varname,
private=private,
prompt=prompt,
encrypt=encrypt,
confirm=confirm,
salt_size=salt_size,
salt=salt,
default=default,
)
with self.capture_event_data('playbook_on_vars_prompt', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_vars_prompt(
varname, private, prompt, encrypt, confirm, salt_size, salt,
default,
)
def v2_playbook_on_include(self, included_file):
event_data = dict(
included_file=included_file._filename if included_file is not None else None,
)
with self.capture_event_data('playbook_on_include', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_include(included_file)
def v2_playbook_on_play_start(self, play):
self.set_play(play)
if hasattr(play, 'hosts'):
if isinstance(play.hosts, list):
pattern = ','.join(play.hosts)
else:
pattern = play.hosts
else:
pattern = ''
name = play.get_name().strip() or pattern
event_data = dict(
name=name,
pattern=pattern,
uuid=str(play._uuid),
)
with self.capture_event_data('playbook_on_play_start', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_play_start(play)
def v2_playbook_on_import_for_host(self, result, imported_file):
# NOTE: Not used by Ansible 2.x.
with self.capture_event_data('playbook_on_import_for_host'):
super(BaseCallbackModule, self).v2_playbook_on_import_for_host(result, imported_file)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
# NOTE: Not used by Ansible 2.x.
with self.capture_event_data('playbook_on_not_import_for_host'):
super(BaseCallbackModule, self).v2_playbook_on_not_import_for_host(result, missing_file)
def v2_playbook_on_setup(self):
# NOTE: Not used by Ansible 2.x.
with self.capture_event_data('playbook_on_setup'):
super(BaseCallbackModule, self).v2_playbook_on_setup()
def v2_playbook_on_task_start(self, task, is_conditional):
# FIXME: Flag task path output as vv.
task_uuid = str(task._uuid)
if task_uuid in self.task_uuids:
# FIXME: When this task UUID repeats, it means the play is using the
# free strategy, so different hosts may be running different tasks
# within a play.
return
self.task_uuids.add(task_uuid)
self.set_task(task)
event_data = dict(
task=task,
name=task.get_name(),
is_conditional=is_conditional,
uuid=task_uuid,
)
with self.capture_event_data('playbook_on_task_start', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_task_start(task, is_conditional)
def v2_playbook_on_cleanup_task_start(self, task):
# NOTE: Not used by Ansible 2.x.
self.set_task(task)
event_data = dict(
task=task,
name=task.get_name(),
uuid=str(task._uuid),
is_conditional=True,
)
with self.capture_event_data('playbook_on_task_start', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_cleanup_task_start(task)
def v2_playbook_on_handler_task_start(self, task):
# NOTE: Re-using playbook_on_task_start event for this v2-specific
# event, but setting is_conditional=True, which is how v1 identified a
# task run as a handler.
self.set_task(task)
event_data = dict(
task=task,
name=task.get_name(),
uuid=str(task._uuid),
is_conditional=True,
)
with self.capture_event_data('playbook_on_task_start', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_handler_task_start(task)
def v2_playbook_on_no_hosts_matched(self):
with self.capture_event_data('playbook_on_no_hosts_matched'):
super(BaseCallbackModule, self).v2_playbook_on_no_hosts_matched()
def v2_playbook_on_no_hosts_remaining(self):
with self.capture_event_data('playbook_on_no_hosts_remaining'):
super(BaseCallbackModule, self).v2_playbook_on_no_hosts_remaining()
def v2_playbook_on_notify(self, handler, host):
# NOTE: Not used by Ansible < 2.5.
event_data = dict(
host=host.get_name(),
handler=handler.get_name(),
)
with self.capture_event_data('playbook_on_notify', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_notify(handler, host)
'''
ansible_stats was retroactively added in 2.2
'''
def v2_playbook_on_stats(self, stats):
self.clear_play()
# FIXME: Add count of plays/tasks.
event_data = dict(
changed=stats.changed,
dark=stats.dark,
failures=stats.failures,
ok=stats.ok,
processed=stats.processed,
skipped=stats.skipped
)
# write custom set_stat artifact data to the local disk so that it can
# be persisted by awx after the process exits
custom_artifact_data = stats.custom.get('_run', {}) if hasattr(stats, 'custom') else {}
if custom_artifact_data:
# create the directory for custom stats artifacts to live in (if it doesn't exist)
custom_artifacts_dir = os.path.join(os.getenv('AWX_PRIVATE_DATA_DIR'), 'artifacts')
if not os.path.isdir(custom_artifacts_dir):
os.makedirs(custom_artifacts_dir, mode=stat.S_IXUSR + stat.S_IWUSR + stat.S_IRUSR)
custom_artifacts_path = os.path.join(custom_artifacts_dir, 'custom')
with codecs.open(custom_artifacts_path, 'w', encoding='utf-8') as f:
os.chmod(custom_artifacts_path, stat.S_IRUSR | stat.S_IWUSR)
json.dump(custom_artifact_data, f)
with self.capture_event_data('playbook_on_stats', **event_data):
super(BaseCallbackModule, self).v2_playbook_on_stats(stats)
@staticmethod
def _get_event_loop(task):
if hasattr(task, 'loop_with'): # Ansible >=2.5
return task.loop_with
elif hasattr(task, 'loop'): # Ansible <2.4
return task.loop
return None
def v2_runner_on_ok(self, result):
# FIXME: Display detailed results or not based on verbosity.
# strip environment vars from the job event; it already exists on the
# job and sensitive values are filtered there
if result._task.action in ('setup', 'gather_facts'):
result._result.get('ansible_facts', {}).pop('ansible_env', None)
event_data = dict(
host=result._host.get_name(),
remote_addr=result._host.address,
task=result._task,
res=result._result,
event_loop=self._get_event_loop(result._task),
)
with self.capture_event_data('runner_on_ok', **event_data):
super(BaseCallbackModule, self).v2_runner_on_ok(result)
def v2_runner_on_failed(self, result, ignore_errors=False):
# FIXME: Add verbosity for exception/results output.
event_data = dict(
host=result._host.get_name(),
remote_addr=result._host.address,
res=result._result,
task=result._task,
ignore_errors=ignore_errors,
event_loop=self._get_event_loop(result._task),
)
with self.capture_event_data('runner_on_failed', **event_data):
super(BaseCallbackModule, self).v2_runner_on_failed(result, ignore_errors)
def v2_runner_on_skipped(self, result):
event_data = dict(
host=result._host.get_name(),
remote_addr=result._host.address,
task=result._task,
event_loop=self._get_event_loop(result._task),
)
with self.capture_event_data('runner_on_skipped', **event_data):
super(BaseCallbackModule, self).v2_runner_on_skipped(result)
def v2_runner_on_unreachable(self, result):
event_data = dict(
host=result._host.get_name(),
remote_addr=result._host.address,
task=result._task,
res=result._result,
)
with self.capture_event_data('runner_on_unreachable', **event_data):
super(BaseCallbackModule, self).v2_runner_on_unreachable(result)
def v2_runner_on_no_hosts(self, task):
# NOTE: Not used by Ansible 2.x.
event_data = dict(
task=task,
)
with self.capture_event_data('runner_on_no_hosts', **event_data):
super(BaseCallbackModule, self).v2_runner_on_no_hosts(task)
def v2_runner_on_async_poll(self, result):
# NOTE: Not used by Ansible 2.x.
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
jid=result._result.get('ansible_job_id'),
)
with self.capture_event_data('runner_on_async_poll', **event_data):
super(BaseCallbackModule, self).v2_runner_on_async_poll(result)
def v2_runner_on_async_ok(self, result):
# NOTE: Not used by Ansible 2.x.
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
jid=result._result.get('ansible_job_id'),
)
with self.capture_event_data('runner_on_async_ok', **event_data):
super(BaseCallbackModule, self).v2_runner_on_async_ok(result)
def v2_runner_on_async_failed(self, result):
# NOTE: Not used by Ansible 2.x.
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
jid=result._result.get('ansible_job_id'),
)
with self.capture_event_data('runner_on_async_failed', **event_data):
super(BaseCallbackModule, self).v2_runner_on_async_failed(result)
def v2_runner_on_file_diff(self, result, diff):
# NOTE: Not used by Ansible 2.x.
event_data = dict(
host=result._host.get_name(),
task=result._task,
diff=diff,
)
with self.capture_event_data('runner_on_file_diff', **event_data):
super(BaseCallbackModule, self).v2_runner_on_file_diff(result, diff)
def v2_on_file_diff(self, result):
# NOTE: Logged as runner_on_file_diff.
event_data = dict(
host=result._host.get_name(),
task=result._task,
diff=result._result.get('diff'),
)
with self.capture_event_data('runner_on_file_diff', **event_data):
super(BaseCallbackModule, self).v2_on_file_diff(result)
def v2_runner_item_on_ok(self, result):
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
)
with self.capture_event_data('runner_item_on_ok', **event_data):
super(BaseCallbackModule, self).v2_runner_item_on_ok(result)
def v2_runner_item_on_failed(self, result):
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
)
with self.capture_event_data('runner_item_on_failed', **event_data):
super(BaseCallbackModule, self).v2_runner_item_on_failed(result)
def v2_runner_item_on_skipped(self, result):
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
)
with self.capture_event_data('runner_item_on_skipped', **event_data):
super(BaseCallbackModule, self).v2_runner_item_on_skipped(result)
def v2_runner_retry(self, result):
event_data = dict(
host=result._host.get_name(),
task=result._task,
res=result._result,
)
with self.capture_event_data('runner_retry', **event_data):
super(BaseCallbackModule, self).v2_runner_retry(result)
class AWXDefaultCallbackModule(BaseCallbackModule, DefaultCallbackModule):
CALLBACK_NAME = 'awx_display'
class AWXMinimalCallbackModule(BaseCallbackModule, MinimalCallbackModule):
CALLBACK_NAME = 'minimal'
def v2_playbook_on_play_start(self, play):
pass
def v2_playbook_on_task_start(self, task, is_conditional):
self.set_task(task)
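For orientation: the two concrete classes register under the names awx_display and minimal, and Ansible discovers them through its standard callback environment variables, the same mechanism the test module below exercises with mock.patch.dict. A hedged sketch of that wiring; the plugin path here is a placeholder, not a path asserted by this diff.

import os
import subprocess

env = dict(
    os.environ,
    ANSIBLE_STDOUT_CALLBACK='awx_display',  # CALLBACK_NAME above
    ANSIBLE_CALLBACK_PLUGINS='/path/to/awx/lib/awx_display_callback',  # assumed location
)
subprocess.check_call(['ansible-playbook', 'site.yml'], env=env)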

View File

@@ -1,353 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
from __future__ import absolute_import
from collections import OrderedDict
import json
import mock
import os
import shutil
import sys
import tempfile
import pytest
# ansible uses `ANSIBLE_CALLBACK_PLUGINS` and `ANSIBLE_STDOUT_CALLBACK` to
# discover callback plugins; `ANSIBLE_CALLBACK_PLUGINS` is a list of paths to
# search for a plugin implementation (which should be named `CallbackModule`)
#
# this code modifies the Python path to make our
# `awx.lib.awx_display_callback` callback importable (because `awx.lib`
# itself is not a package)
#
# we use the `awx_display_callback` imports below within this file, but
# Ansible also uses them when it discovers this file in
# `ANSIBLE_CALLBACK_PLUGINS`
CALLBACK = os.path.splitext(os.path.basename(__file__))[0]
PLUGINS = os.path.dirname(__file__)
with mock.patch.dict(os.environ, {'ANSIBLE_STDOUT_CALLBACK': CALLBACK,
'ANSIBLE_CALLBACK_PLUGINS': PLUGINS}):
from ansible import __version__ as ANSIBLE_VERSION
from ansible.cli.playbook import PlaybookCLI
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.inventory.manager import InventoryManager
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
# Add awx/lib to sys.path so we can use the plugin
path = os.path.abspath(os.path.join(PLUGINS, '..', '..', 'lib'))
if path not in sys.path:
sys.path.insert(0, path)
from awx_display_callback import AWXDefaultCallbackModule as CallbackModule # noqa
from awx_display_callback.events import event_context # noqa
@pytest.fixture()
def cache(request):
class Cache(OrderedDict):
def set(self, key, value):
self[key] = value
local_cache = Cache()
patch = mock.patch.object(event_context, 'cache', local_cache)
patch.start()
request.addfinalizer(patch.stop)
return local_cache
@pytest.fixture()
def executor(tmpdir_factory, request):
playbooks = request.node.callspec.params.get('playbook')
playbook_files = []
for name, playbook in playbooks.items():
filename = str(tmpdir_factory.mktemp('data').join(name))
with open(filename, 'w') as f:
f.write(playbook)
playbook_files.append(filename)
cli = PlaybookCLI(['', 'playbook.yml'])
cli.parse()
options = cli.parser.parse_args(['-v'])[0]
loader = DataLoader()
variable_manager = VariableManager(loader=loader)
inventory = InventoryManager(loader=loader, sources='localhost,')
variable_manager.set_inventory(inventory)
return PlaybookExecutor(playbooks=playbook_files, inventory=inventory,
variable_manager=variable_manager, loader=loader,
options=options, passwords={})
@pytest.mark.parametrize('event', {'playbook_on_start',
'playbook_on_play_start',
'playbook_on_task_start', 'runner_on_ok',
'playbook_on_stats'})
@pytest.mark.parametrize('playbook', [
{'helloworld.yml': '''
- name: Hello World Sample
connection: local
hosts: all
gather_facts: no
tasks:
- name: Hello Message
debug:
msg: "Hello World!"
'''}, # noqa
{'results_included.yml': '''
- name: Run module which generates results list
connection: local
hosts: all
gather_facts: no
vars:
results: ['foo', 'bar']
tasks:
- name: Generate results list
debug:
var: results
'''}, # noqa
])
def test_callback_plugin_receives_events(executor, cache, event, playbook):
executor.run()
assert len(cache)
assert event in [task['event'] for task in cache.values()]
@pytest.mark.parametrize('playbook', [
{'no_log_on_ok.yml': '''
- name: args should not be logged when task-level no_log is set
connection: local
hosts: all
gather_facts: no
tasks:
- shell: echo "SENSITIVE"
no_log: true
'''}, # noqa
{'no_log_on_fail.yml': '''
- name: failed args should not be logged when task-level no_log is set
connection: local
hosts: all
gather_facts: no
tasks:
- shell: echo "SENSITIVE"
no_log: true
failed_when: true
ignore_errors: true
'''}, # noqa
{'no_log_on_skip.yml': '''
- name: skipped task args should be suppressed with no_log
connection: local
hosts: all
gather_facts: no
tasks:
- shell: echo "SENSITIVE"
no_log: true
when: false
'''}, # noqa
{'no_log_on_play.yml': '''
- name: args should not be logged when play-level no_log set
connection: local
hosts: all
gather_facts: no
no_log: true
tasks:
- shell: echo "SENSITIVE"
'''}, # noqa
{'async_no_log.yml': '''
- name: async task args should be suppressed with no_log
connection: local
hosts: all
gather_facts: no
no_log: true
tasks:
- async: 10
poll: 1
shell: echo "SENSITIVE"
no_log: true
'''}, # noqa
{'with_items.yml': '''
- name: with_items tasks should be suppressed with no_log
connection: local
hosts: all
gather_facts: no
tasks:
- shell: echo {{ item }}
no_log: true
with_items: [ "SENSITIVE", "SENSITIVE-SKIPPED", "SENSITIVE-FAILED" ]
when: item != "SENSITIVE-SKIPPED"
failed_when: item == "SENSITIVE-FAILED"
ignore_errors: yes
'''}, # noqa, NOTE: with_items will be deprecated in 2.9
{'loop.yml': '''
- name: loop tasks should be suppressed with no_log
connection: local
hosts: all
gather_facts: no
tasks:
- shell: echo {{ item }}
no_log: true
loop: [ "SENSITIVE", "SENSITIVE-SKIPPED", "SENSITIVE-FAILED" ]
when: item != "SENSITIVE-SKIPPED"
failed_when: item == "SENSITIVE-FAILED"
ignore_errors: yes
'''}, # noqa
])
def test_callback_plugin_no_log_filters(executor, cache, playbook):
executor.run()
assert len(cache)
assert 'SENSITIVE' not in json.dumps(cache.items())
@pytest.mark.parametrize('playbook', [
{'no_log_on_ok.yml': '''
- name: args should not be logged when no_log is set at the task or module level
connection: local
hosts: all
gather_facts: no
tasks:
- shell: echo "PUBLIC"
- shell: echo "PRIVATE"
no_log: true
- uri: url=https://example.org username="PUBLIC" password="PRIVATE"
- copy: content="PRIVATE" dest="/tmp/tmp_no_log"
'''}, # noqa
])
def test_callback_plugin_task_args_leak(executor, cache, playbook):
executor.run()
events = cache.values()
assert events[0]['event'] == 'playbook_on_start'
assert events[1]['event'] == 'playbook_on_play_start'
# task 1
assert events[2]['event'] == 'playbook_on_task_start'
assert events[3]['event'] == 'runner_on_ok'
# task 2 no_log=True
assert events[4]['event'] == 'playbook_on_task_start'
assert events[5]['event'] == 'runner_on_ok'
assert 'PUBLIC' in json.dumps(cache.items())
assert 'PRIVATE' not in json.dumps(cache.items())
# make sure playbook was successful, so all tasks were hit
assert not events[-1]['event_data']['failures'], 'Unexpected playbook execution failure'
@pytest.mark.parametrize('playbook', [
{'loop_with_no_log.yml': '''
- name: playbook variable should not be overwritten when using no log
connection: local
hosts: all
gather_facts: no
tasks:
- command: "{{ item }}"
register: command_register
no_log: True
with_items:
- "echo helloworld!"
- debug: msg="{{ command_register.results|map(attribute='stdout')|list }}"
'''}, # noqa
])
def test_callback_plugin_censoring_does_not_overwrite(executor, cache, playbook):
executor.run()
events = cache.values()
assert events[0]['event'] == 'playbook_on_start'
assert events[1]['event'] == 'playbook_on_play_start'
# task 1
assert events[2]['event'] == 'playbook_on_task_start'
# Ordering of task and item events may differ randomly
assert set(['runner_on_ok', 'runner_item_on_ok']) == set([data['event'] for data in events[3:5]])
# task 2 no_log=True
assert events[5]['event'] == 'playbook_on_task_start'
assert events[6]['event'] == 'runner_on_ok'
assert 'helloworld!' in events[6]['event_data']['res']['msg']
@pytest.mark.parametrize('playbook', [
{'strip_env_vars.yml': '''
- name: sensitive environment variables should be stripped from events
connection: local
hosts: all
tasks:
- shell: echo "Hello, World!"
'''}, # noqa
])
def test_callback_plugin_strips_task_environ_variables(executor, cache, playbook):
executor.run()
assert len(cache)
for event in cache.values():
assert os.environ['PATH'] not in json.dumps(event)
@pytest.mark.parametrize('playbook', [
{'custom_set_stat.yml': '''
- name: custom set_stat calls should persist to the local disk so awx can save them
connection: local
hosts: all
tasks:
- set_stats:
data:
foo: "bar"
'''}, # noqa
])
def test_callback_plugin_saves_custom_stats(executor, cache, playbook):
try:
private_data_dir = tempfile.mkdtemp()
with mock.patch.dict(os.environ, {'AWX_PRIVATE_DATA_DIR': private_data_dir}):
executor.run()
artifacts_path = os.path.join(private_data_dir, 'artifacts', 'custom')
with open(artifacts_path, 'r') as f:
assert json.load(f) == {'foo': 'bar'}
finally:
shutil.rmtree(os.path.join(private_data_dir))
@pytest.mark.parametrize('playbook', [
{'handle_playbook_on_notify.yml': '''
- name: handle playbook_on_notify events properly
connection: local
hosts: all
handlers:
- name: my_handler
debug: msg="My Handler"
tasks:
- debug: msg="My Task"
changed_when: true
notify:
- my_handler
'''}, # noqa
])
@pytest.mark.skipif(ANSIBLE_VERSION < '2.5', reason="v2_playbook_on_notify doesn't work before ansible 2.5")
def test_callback_plugin_records_notify_events(executor, cache, playbook):
executor.run()
assert len(cache)
notify_events = [x[1] for x in cache.items() if x[1]['event'] == 'playbook_on_notify']
assert len(notify_events) == 1
assert notify_events[0]['event_data']['handler'] == 'my_handler'
assert notify_events[0]['event_data']['host'] == 'localhost'
assert notify_events[0]['event_data']['task'] == 'debug'
@pytest.mark.parametrize('playbook', [
{'no_log_module_with_var.yml': '''
- name: ensure that module-level secrets are redacted
connection: local
hosts: all
vars:
- pw: SENSITIVE
tasks:
- uri:
url: https://example.org
user: john-jacob-jingleheimer-schmidt
password: "{{ pw }}"
'''}, # noqa
])
def test_module_level_no_log(executor, cache, playbook):
# https://github.com/ansible/tower/issues/1101
# It's possible for `no_log=True` to be defined at the _module_ level,
# e.g., for the URI module password parameter
# This test ensures that we properly redact those
executor.run()
assert len(cache)
assert 'john-jacob-jingleheimer-schmidt' in json.dumps(cache.items())
assert 'SENSITIVE' not in json.dumps(cache.items())

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -3086,7 +3086,7 @@ msgstr "URL CloudForms"
#: awx/main/models/credential/__init__.py:982
msgid ""
"Enter the URL for the virtual machine that corresponds to your CloudForm "
"Enter the URL for the virtual machine that corresponds to your CloudForms "
"instance. For example, https://cloudforms.example.org"
msgstr ""
"Introduzca la URL para la máquina virtual que corresponda a su instancia "

View File

@@ -3099,7 +3099,7 @@ msgstr "URL CloudForms"
#: awx/main/models/credential/__init__.py:982
msgid ""
"Enter the URL for the virtual machine that corresponds to your CloudForm "
"Enter the URL for the virtual machine that corresponds to your CloudForms "
"instance. For example, https://cloudforms.example.org"
msgstr ""
"Veuillez saisir lURL de la machine virtuelle qui correspond à votre "

View File

@@ -2858,7 +2858,7 @@ msgstr "CloudForms URL"
#: awx/main/models/credential/__init__.py:982
msgid ""
"Enter the URL for the virtual machine that corresponds to your CloudForm "
"Enter the URL for the virtual machine that corresponds to your CloudForms "
"instance. For example, https://cloudforms.example.org"
msgstr ""
"CloudForms インスタンスに対応する仮想マシンの URL を入力します (例: https://cloudforms.example.org)。"

View File

@@ -3072,7 +3072,7 @@ msgstr "CloudForms-URL"
#: awx/main/models/credential/__init__.py:982
msgid ""
"Enter the URL for the virtual machine that corresponds to your CloudForm "
"Enter the URL for the virtual machine that corresponds to your CloudForms "
"instance. For example, https://cloudforms.example.org"
msgstr ""
"Voer de URL in voor de virtuele machine die overeenkomt met uw CloudForm-"

View File

@@ -5,7 +5,6 @@
import os
import sys
import logging
import six
from functools import reduce
# Django
@@ -29,7 +28,17 @@ from awx.main.utils import (
to_python_boolean,
get_licenser,
)
from awx.main.models import * # noqa
from awx.main.models import (
ActivityStream, AdHocCommand, AdHocCommandEvent, Credential, CredentialType,
CustomInventoryScript, Group, Host, Instance, InstanceGroup, Inventory,
InventorySource, InventoryUpdate, InventoryUpdateEvent, Job, JobEvent,
JobHostSummary, JobLaunchConfig, JobTemplate, Label, Notification,
NotificationTemplate, Organization, Project, ProjectUpdate,
ProjectUpdateEvent, Role, Schedule, SystemJob, SystemJobEvent,
SystemJobTemplate, Team, UnifiedJob, UnifiedJobTemplate, WorkflowJob,
WorkflowJobNode, WorkflowJobTemplate, WorkflowJobTemplateNode,
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR
)
from awx.main.models.mixins import ResourceMixin
from awx.conf.license import LicenseForbids, feature_enabled
@@ -321,6 +330,36 @@ class BaseAccess(object):
elif "features" not in validation_info:
raise LicenseForbids(_("Features not found in active license."))
def check_org_host_limit(self, data, add_host_name=None):
validation_info = get_licenser().validate()
if validation_info.get('license_type', 'UNLICENSED') == 'open':
return
inventory = get_object_from_data('inventory', Inventory, data)
if inventory is None: # In this case a missing inventory error is launched
return # further down the line, so just ignore it.
org = inventory.organization
if org is None or org.max_hosts == 0:
return
active_count = Host.objects.org_active_count(org.id)
if active_count > org.max_hosts:
raise PermissionDenied(
_("You have already reached the maximum number of %s hosts"
" allowed for your organization. Contact your System Administrator"
" for assistance." % org.max_hosts)
)
if add_host_name:
host_exists = Host.objects.filter(inventory__organization=org.id, name=add_host_name).exists()
if not host_exists and active_count == org.max_hosts:
raise PermissionDenied(
_("You have already reached the maximum number of %s hosts"
" allowed for your organization. Contact your System Administrator"
" for assistance." % org.max_hosts)
)
def get_user_capabilities(self, obj, method_list=[], parent_obj=None, capabilities_cache={}):
if obj is None:
return {}
@@ -351,7 +390,7 @@ class BaseAccess(object):
user_capabilities[display_method] = self.user.is_superuser
continue
elif display_method == 'copy' and isinstance(obj, Project) and obj.scm_type == '':
# Connot copy manual project without errors
# Cannot copy manual project without errors
user_capabilities[display_method] = False
continue
elif display_method in ['start', 'schedule'] and isinstance(obj, Group): # TODO: remove in 3.3
@@ -435,12 +474,16 @@ class InstanceAccess(BaseAccess):
skip_sub_obj_read_check=False):
if relationship == 'rampart_groups' and isinstance(sub_obj, InstanceGroup):
return self.user.is_superuser
return super(InstanceAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
return super(InstanceAccess, self).can_attach(
obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check
)
def can_unattach(self, obj, sub_obj, relationship, data=None):
if relationship == 'rampart_groups' and isinstance(sub_obj, InstanceGroup):
return self.user.is_superuser
return super(InstanceAccess, self).can_unattach(obj, sub_obj, relationship, *args, **kwargs)
return super(InstanceAccess, self).can_unattach(
obj, sub_obj, relationship, relationship, data=data
)
def can_add(self, data):
return False
@@ -524,7 +567,7 @@ class UserAccess(BaseAccess):
# A user can be changed if they are themselves, or by org admins or
# superusers. Change permission implies changing only certain fields
# that a user should be able to edit for themselves.
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
return bool(self.user == obj or self.can_admin(obj, data))
@@ -577,7 +620,7 @@ class UserAccess(BaseAccess):
return False
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
# Reverse obj and sub_obj, defer to RoleAccess if this is a role assignment.
@@ -587,7 +630,7 @@ class UserAccess(BaseAccess):
return super(UserAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
if relationship == 'roles':
@@ -615,7 +658,7 @@ class OAuth2ApplicationAccess(BaseAccess):
return self.model.objects.filter(organization__in=org_access_qs)
def can_change(self, obj, data):
return self.user.is_superuser or self.check_related('organization', Organization, data, obj=obj,
return self.user.is_superuser or self.check_related('organization', Organization, data, obj=obj,
role_field='admin_role', mandatory=True)
def can_delete(self, obj):
@@ -623,7 +666,7 @@ class OAuth2ApplicationAccess(BaseAccess):
def can_add(self, data):
if self.user.is_superuser:
return True
return True
if not data:
return Organization.accessible_objects(self.user, 'admin_role').exists()
return self.check_related('organization', Organization, data, role_field='admin_role', mandatory=True)
@@ -637,29 +680,29 @@ class OAuth2TokenAccess(BaseAccess):
- I am the user of the token.
I can create an OAuth2 app token when:
- I have the read permission of the related application.
I can read, change or delete a personal token when:
I can read, change or delete a personal token when:
- I am the user of the token
- I am the superuser
I can create an OAuth2 Personal Access Token when:
- I am a user. But I can only create a PAT for myself.
- I am a user. But I can only create a PAT for myself.
'''
model = OAuth2AccessToken
select_related = ('user', 'application')
def filtered_queryset(self):
def filtered_queryset(self):
org_access_qs = Organization.objects.filter(
Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
return self.model.objects.filter(application__organization__in=org_access_qs) | self.model.objects.filter(user__id=self.user.pk)
def can_delete(self, obj):
if (self.user.is_superuser) | (obj.user == self.user):
return True
elif not obj.application:
return False
return self.user in obj.application.organization.admin_role
def can_change(self, obj, data):
return self.can_delete(obj)
@@ -827,6 +870,10 @@ class HostAccess(BaseAccess):
# Check to see if we have enough licenses
self.check_license(add_host_name=data.get('name', None))
# Check the per-org limit
self.check_org_host_limit(data, add_host_name=data.get('name', None))
return True
def can_change(self, obj, data):
@@ -839,6 +886,10 @@ class HostAccess(BaseAccess):
if data and 'name' in data:
self.check_license(add_host_name=data['name'])
# Check the per-org limit
self.check_org_host_limit({'inventory': obj.inventory},
add_host_name=data['name'])
# Checks for admin or change permission on inventory, controls whether
# the user can edit variable data.
return obj and self.user in obj.inventory.admin_role
@@ -1157,13 +1208,10 @@ class TeamAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
"""Reverse obj and sub_obj, defer to RoleAccess if this is an assignment
of a resource role to the team."""
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# MANAGE_ORGANIZATION_AUTH setting checked in RoleAccess
if isinstance(sub_obj, Role):
if sub_obj.content_object is None:
raise PermissionDenied(_("The {} role cannot be assigned to a team").format(sub_obj.name))
elif isinstance(sub_obj.content_object, User):
raise PermissionDenied(_("The admin_role for a User cannot be assigned to a team"))
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1175,9 +1223,7 @@ class TeamAccess(BaseAccess):
*args, **kwargs)
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if not settings.MANAGE_ORGANIZATION_AUTH:
return False
# MANAGE_ORGANIZATION_AUTH setting checked in RoleAccess
if isinstance(sub_obj, Role):
if isinstance(sub_obj.content_object, ResourceMixin):
role_access = RoleAccess(self.user)
@@ -1213,7 +1259,7 @@ class ProjectAccess(BaseAccess):
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'admin_role').exists()
return Organization.accessible_objects(self.user, 'project_admin_role').exists()
return (self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) and
self.check_related('credential', Credential, data, role_field='use_role'))
@@ -1281,6 +1327,7 @@ class JobTemplateAccess(BaseAccess):
'instance_groups',
'credentials__credential_type',
Prefetch('labels', queryset=Label.objects.all().order_by('name')),
Prefetch('last_job', queryset=UnifiedJob.objects.non_polymorphic()),
)
def filtered_queryset(self):
@@ -1337,7 +1384,7 @@ class JobTemplateAccess(BaseAccess):
return self.user in project.use_role
else:
return False
@check_superuser
def can_copy_related(self, obj):
'''
@@ -1346,13 +1393,17 @@ class JobTemplateAccess(BaseAccess):
'''
# obj.credentials.all() is accessible ONLY when object is saved (has valid id)
credential_manager = getattr(obj, 'credentials', None) if getattr(obj, 'id', False) else Credentials.objects.none()
credential_manager = getattr(obj, 'credentials', None) if getattr(obj, 'id', False) else Credential.objects.none()
return reduce(lambda prev, cred: prev and self.user in cred.use_role, credential_manager.all(), True)
def can_start(self, obj, validate_license=True):
# Check license.
if validate_license:
self.check_license()
# Check the per-org limit
self.check_org_host_limit({'inventory': obj.inventory})
if obj.survey_enabled:
self.check_license(feature='surveys')
if Instance.objects.active_count() > 1:
@@ -1394,13 +1445,15 @@ class JobTemplateAccess(BaseAccess):
'job_tags', 'force_handlers', 'skip_tags', 'ask_variables_on_launch',
'ask_tags_on_launch', 'ask_job_type_on_launch', 'ask_skip_tags_on_launch',
'ask_inventory_on_launch', 'ask_credential_on_launch', 'survey_enabled',
'custom_virtualenv', 'diff_mode',
'custom_virtualenv', 'diff_mode', 'timeout', 'job_slice_count',
# These fields are ignored, but it is convenient for QA to allow clients to post them
'last_job_run', 'created', 'modified',
]
for k, v in data.items():
if k not in [x.name for x in obj._meta.concrete_fields]:
continue
if hasattr(obj, k) and getattr(obj, k) != v:
if k not in field_whitelist and v != getattr(obj, '%s_id' % k, None) \
and not (hasattr(obj, '%s_id' % k) and getattr(obj, '%s_id' % k) is None and v == ''): # Equate '' to None in the case of foreign keys
@@ -1509,6 +1562,9 @@ class JobAccess(BaseAccess):
if validate_license:
self.check_license()
# Check the per-org limit
self.check_org_host_limit({'inventory': obj.inventory})
# A super user can relaunch a job
if self.user.is_superuser:
return True
@@ -1840,8 +1896,10 @@ class WorkflowJobTemplateAccess(BaseAccess):
if 'survey_enabled' in data and data['survey_enabled']:
self.check_license(feature='surveys')
return self.check_related('organization', Organization, data, role_field='workflow_admin_role',
mandatory=True)
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and
self.check_related('inventory', Inventory, data, role_field='use_role')
)
def can_copy(self, obj):
if self.save_messages:
@@ -1851,7 +1909,6 @@ class WorkflowJobTemplateAccess(BaseAccess):
qs = obj.workflow_job_template_nodes
qs = qs.prefetch_related('unified_job_template', 'inventory__use_role', 'credentials__use_role')
for node in qs.all():
node_errors = {}
if node.inventory and self.user not in node.inventory.use_role:
missing_inventories.append(node.inventory.name)
for cred in node.credentials.all():
@@ -1860,8 +1917,6 @@ class WorkflowJobTemplateAccess(BaseAccess):
ujt = node.unified_job_template
if ujt and not self.user.can_access(UnifiedJobTemplate, 'start', ujt, validate_license=False):
missing_ujt.append(ujt.name)
if node_errors:
wfjt_errors[node.id] = node_errors
if missing_ujt:
self.messages['templates_unable_to_copy'] = missing_ujt
if missing_credentials:
@@ -1876,6 +1931,10 @@ class WorkflowJobTemplateAccess(BaseAccess):
if validate_license:
# check basic license, node count
self.check_license()
# Check the per-org limit
self.check_org_host_limit({'inventory': obj.inventory})
# if surveys are added to WFJTs, check license here
if obj.survey_enabled:
self.check_license(feature='surveys')
@@ -1895,8 +1954,11 @@ class WorkflowJobTemplateAccess(BaseAccess):
if self.user.is_superuser:
return True
return (self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.user in obj.admin_role)
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.check_related('inventory', Inventory, data, role_field='use_role', obj=obj) and
self.user in obj.admin_role
)
def can_delete(self, obj):
return self.user.is_superuser or self.user in obj.admin_role
@@ -1944,6 +2006,9 @@ class WorkflowJobAccess(BaseAccess):
if validate_license:
self.check_license()
# Check the per-org limit
self.check_org_host_limit({'inventory': obj.inventory})
if self.user.is_superuser:
return True
@@ -1954,19 +2019,29 @@ class WorkflowJobAccess(BaseAccess):
if not template:
return False
# If job was launched by another user, it could have survey passwords
if obj.created_by_id != self.user.pk:
# Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
config = None
# Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
if self.save_messages:
self.messages['detail'] = _('Workflow Job was launched with unknown prompts.')
return False
if config is None or config.prompts_dict():
# Check if access to prompts to prevent relaunch
if config.prompts_dict():
if obj.created_by_id != self.user.pk:
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts provided by another user.')
return False
if not JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}):
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts you lack access to.')
return False
if config.has_unprompted(template):
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts no longer accepted.')
return False
# execute permission to WFJT is mandatory for any relaunch
return (self.user in template.execute_role)
@@ -2010,6 +2085,9 @@ class AdHocCommandAccess(BaseAccess):
if validate_license:
self.check_license()
# Check the per-org limit
self.check_org_host_limit(data)
# If a credential is provided, the user should have use access to it.
if not self.check_related('credential', Credential, data, role_field='use_role'):
return False
@@ -2419,7 +2497,7 @@ class ActivityStreamAccess(BaseAccess):
model = ActivityStream
prefetch_related = ('organization', 'user', 'inventory', 'host', 'group',
'inventory_update', 'credential', 'credential_type', 'team',
'ad_hoc_command', 'o_auth2_application', 'o_auth2_access_token',
'ad_hoc_command', 'o_auth2_application', 'o_auth2_access_token',
'notification_template', 'notification', 'label', 'role', 'actor',
'schedule', 'custom_inventory_script', 'unified_job_template',
'workflow_job_template_node',)
@@ -2552,14 +2630,13 @@ class RoleAccess(BaseAccess):
# Unsupported for now
return False
def can_attach(self, obj, sub_obj, relationship, data,
skip_sub_obj_read_check=False):
return self.can_unattach(obj, sub_obj, relationship, data, skip_sub_obj_read_check)
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
return self.can_unattach(obj, sub_obj, relationship, *args, **kwargs)
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, data=None, skip_sub_obj_read_check=False):
if isinstance(obj.content_object, Team):
if not settings.MANAGE_ORGANIZATION_AUTH:
if not settings.MANAGE_ORGANIZATION_AUTH and not self.user.is_superuser:
return False
if not skip_sub_obj_read_check and relationship in ['members', 'member_role.parents', 'parents']:
@@ -2578,7 +2655,7 @@ class RoleAccess(BaseAccess):
if (isinstance(obj.content_object, Organization) and
obj.role_field in (Organization.member_role.field.parent_role + ['member_role'])):
if not isinstance(sub_obj, User):
logger.error(six.text_type('Unexpected attempt to associate {} with organization role.').format(sub_obj))
logger.error('Unexpected attempt to associate {} with organization role.'.format(sub_obj))
return False
if not UserAccess(self.user).can_admin(sub_obj, None, allow_orphans=True):
return False

View File

@@ -126,6 +126,17 @@ register(
category_slug='system',
)
register(
'CUSTOM_VENV_PATHS',
field_class=fields.StringListPathField,
label=_('Custom virtual environment paths'),
help_text=_('Paths where Tower will look for custom virtual environments '
'(in addition to /var/lib/awx/venv/). Enter one path per line.'),
category=_('System'),
category_slug='system',
default=[],
)
register(
'AD_HOC_COMMANDS',
field_class=fields.StringListField,
@@ -197,6 +208,18 @@ register(
category_slug='jobs',
)
register(
'AWX_ISOLATED_VERBOSITY',
field_class=fields.IntegerField,
min_value=0,
max_value=5,
label=_('Verbosity level for isolated node management tasks'),
help_text=_('This can be raised to aid in debugging connection issues for isolated task execution'),
category=_('Jobs'),
category_slug='jobs',
default=0
)
register(
'AWX_ISOLATED_CHECK_INTERVAL',
field_class=fields.IntegerField,
@@ -283,7 +306,7 @@ register(
field_class=fields.BooleanField,
default=True,
label=_('Enable Role Download'),
help_text=_('Allows roles to be dynamically downlaoded from a requirements.yml file for SCM projects.'),
help_text=_('Allows roles to be dynamically downloaded from a requirements.yml file for SCM projects.'),
category=_('Jobs'),
category_slug='jobs',
)

View File

@@ -16,7 +16,8 @@ SCHEDULEABLE_PROVIDERS = CLOUD_PROVIDERS + ('custom', 'scm',)
PRIVILEGE_ESCALATION_METHODS = [
('sudo', _('Sudo')), ('su', _('Su')), ('pbrun', _('Pbrun')), ('pfexec', _('Pfexec')),
('dzdo', _('DZDO')), ('pmrun', _('Pmrun')), ('runas', _('Runas')),
('enable', _('Enable')), ('doas', _('Doas')),
('enable', _('Enable')), ('doas', _('Doas')), ('ksu', _('Ksu')),
('machinectl', _('Machinectl')), ('sesu', _('Sesu')),
]
CHOICES_PRIVILEGE_ESCALATION_METHODS = [('', _('None'))] + PRIVILEGE_ESCALATION_METHODS
ANSI_SGR_PATTERN = re.compile(r'\x1b\[[0-9;]*m')
@@ -24,7 +25,9 @@ STANDARD_INVENTORY_UPDATE_ENV = {
# Failure to parse inventory should always be fatal
'ANSIBLE_INVENTORY_UNPARSED_FAILED': 'True',
# Always use the --export option for ansible-inventory
'ANSIBLE_INVENTORY_EXPORT': 'True'
'ANSIBLE_INVENTORY_EXPORT': 'True',
# Redirecting output to stderr allows JSON parsing to still work with -vvv
'ANSIBLE_VERBOSE_TO_STDERR': 'True'
}
CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL
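A short sketch, assuming this module is importable as awx.main.constants, of how these flags would be applied to an ansible-inventory run; the inventory path is a placeholder.

import os
import subprocess

from awx.main.constants import STANDARD_INVENTORY_UPDATE_ENV  # assumed module path

env = dict(os.environ)
env.update(STANDARD_INVENTORY_UPDATE_ENV)
# ANSIBLE_VERBOSE_TO_STDERR keeps stdout as parseable JSON even at -vvv,
# and ANSIBLE_INVENTORY_UNPARSED_FAILED makes parse errors fatal
output = subprocess.check_output(
    ['ansible-inventory', '-i', 'hosts.ini', '--list'],
    env=env,
)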

View File

@@ -4,6 +4,7 @@ import logging
from channels import Group
from channels.auth import channel_session_user_from_http, channel_session_user
from django.utils.encoding import smart_str
from django.http.cookie import parse_cookie
from django.core.serializers.json import DjangoJSONEncoder
@@ -30,7 +31,7 @@ def ws_connect(message):
# store the valid CSRF token from the cookie so we can compare it later
# on ws_receive
cookie_token = parse_cookie(
headers.get('cookie')
smart_str(headers.get(b'cookie'))
).get('csrftoken')
if cookie_token:
message.channel_session[XRF_KEY] = cookie_token
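A minimal illustration, with a fabricated headers dict, of why the smart_str() wrapper matters here: under Python 3 the header names and values arrive as bytes, and parse_cookie expects text.

from django.http.cookie import parse_cookie
from django.utils.encoding import smart_str

headers = {b'cookie': b'csrftoken=abc123; sessionid=xyz'}
token = parse_cookie(smart_str(headers.get(b'cookie'))).get('csrftoken')
assert token == 'abc123'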

awx/main/db/__init__.py Normal file
View File

View File

View File

@@ -0,0 +1,155 @@
import os
import pkg_resources
import sqlite3
import sys
import traceback
import uuid
from django.core.cache import cache
from django.core.cache.backends.locmem import LocMemCache
from django.db.backends.postgresql.base import DatabaseWrapper as BaseDatabaseWrapper
from awx.main.utils import memoize
__loc__ = LocMemCache(str(uuid.uuid4()), {})
__all__ = ['DatabaseWrapper']
class RecordedQueryLog(object):
def __init__(self, log, db, dest='/var/log/tower/profile'):
self.log = log
self.db = db
self.dest = dest
try:
self.threshold = cache.get('awx-profile-sql-threshold')
except Exception:
# if we can't reach memcached, just assume profiling's off
self.threshold = None
def append(self, query):
ret = self.log.append(query)
try:
self.write(query)
except Exception:
# not sure what else to do here - we can't really safely
# *use* our loggers because it'll just generate more DB queries
# and potentially recurse into this state again
_, _, tb = sys.exc_info()
traceback.print_tb(tb)
return ret
def write(self, query):
if self.threshold is None:
return
seconds = float(query['time'])
# if the query is slow enough...
if seconds >= self.threshold:
sql = query['sql']
if sql.startswith('EXPLAIN'):
return
# build a printable Python stack
bt = ' '.join(traceback.format_stack())
# and re-run the same query w/ EXPLAIN
explain = ''
cursor = self.db.cursor()
cursor.execute('EXPLAIN VERBOSE {}'.format(sql))
for line in cursor.fetchall():
explain += line[0] + '\n'
# write a row of data into a per-PID sqlite database
if not os.path.isdir(self.dest):
os.makedirs(self.dest)
progname = ' '.join(sys.argv)
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'runworker'):
if match in progname:
progname = match
break
else:
progname = os.path.basename(sys.argv[0])
filepath = os.path.join(
self.dest,
'{}.sqlite'.format(progname)
)
version = pkg_resources.get_distribution('awx').version
log = sqlite3.connect(filepath, timeout=3)
log.execute(
'CREATE TABLE IF NOT EXISTS queries ('
' id INTEGER PRIMARY KEY,'
' version TEXT,'
' pid INTEGER,'
' stamp DATETIME DEFAULT CURRENT_TIMESTAMP,'
' argv REAL,'
' time REAL,'
' sql TEXT,'
' explain TEXT,'
' bt TEXT'
');'
)
log.commit()
log.execute(
'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) '
'VALUES (?, ?, ?, ?, ?, ?, ?);',
(os.getpid(), version, ' ' .join(sys.argv), seconds, sql, explain, bt)
)
log.commit()
def __len__(self):
return len(self.log)
def __iter__(self):
return iter(self.log)
def __getattr__(self, attr):
return getattr(self.log, attr)
class DatabaseWrapper(BaseDatabaseWrapper):
"""
This is a special subclass of Django's postgres DB backend which - based on
the value of a special flag in memcached - captures slow queries and
writes profile and Python stack metadata to the disk.
"""
def __init__(self, *args, **kwargs):
super(DatabaseWrapper, self).__init__(*args, **kwargs)
# Django's default base wrapper implementation has `queries_log`
# which is a `collections.deque` that every query is appended to
#
# this line wraps the deque with a proxy that can capture each query
# and - if it's slow enough - record profiling metadata to the file
# system for debugging purposes
self.queries_log = RecordedQueryLog(self.queries_log, self)
@property
@memoize(ttl=1, cache=__loc__)
def force_debug_cursor(self):
# in Django's base DB implementation, `self.force_debug_cursor` is just
# a simple boolean, and this value is used to signal to Django that it
# should record queries into `self.queries_log` as they're executed (this
# is the same mechanism used by libraries like the django-debug-toolbar)
#
# in _this_ implementation, we represent it as a property which will
# check memcache for a special flag to be set (when the flag is set, it
# means we should start recording queries because somebody called
# `awx-manage profile_sql`)
#
# it's worth noting that this property is wrapped w/ @memoize because
# Django references this attribute _constantly_ (in particular, once
# per executed query); doing a memcached.get() _at most_ once per
# second is a good enough window to detect when profiling is turned
# on/off by a system administrator
try:
threshold = cache.get('awx-profile-sql-threshold')
except Exception:
# if we can't reach memcached, just assume profiling's off
threshold = None
self.queries_log.threshold = threshold
return threshold is not None
@force_debug_cursor.setter
def force_debug_cursor(self, v):
return
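The switch that the property above polls is just a cache key, so enabling capture is a one-liner. A hedged sketch of the underlying mechanism; the comments above name `awx-manage profile_sql` as the real entry point.

from django.core.cache import cache

# record any query slower than two seconds; the @memoize(ttl=1) wrapper
# above means the running processes notice within about a second
cache.set('awx-profile-sql-threshold', 2.0)

# ... run the workload to be profiled ...

# turn capture back off
cache.delete('awx-profile-sql-threshold')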

View File

@@ -2,4 +2,4 @@ from django.conf import settings
def get_local_queuename():
return settings.CLUSTER_HOST_ID.encode('utf-8')
return settings.CLUSTER_HOST_ID

View File

@@ -1,14 +1,14 @@
import logging
import os
import sys
import random
import sys
import traceback
from uuid import uuid4
import collections
from multiprocessing import Process
from multiprocessing import Queue as MPQueue
from Queue import Full as QueueFull, Empty as QueueEmpty
from queue import Full as QueueFull, Empty as QueueEmpty
from django.conf import settings
from django.db import connection as django_connection, connections
@@ -19,7 +19,10 @@ import psutil
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
logger = logging.getLogger('awx.main.dispatch')
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
else:
logger = logging.getLogger('awx.main.dispatch')
class PoolWorker(object):
@@ -129,7 +132,7 @@ class PoolWorker(object):
# the task at [0] is the one that's running right now (or is about to
# be running)
if len(self.managed_tasks):
return self.managed_tasks[self.managed_tasks.keys()[0]]
return self.managed_tasks[list(self.managed_tasks.keys())[0]]
return None
@@ -180,7 +183,7 @@ class WorkerPool(object):
class MessagePrinter(awx.main.dispatch.worker.BaseWorker):
def perform_work(self, body):
print body
print(body)
pool = WorkerPool(min_workers=4) # spawn four worker processes
pool.init_workers(MessagePrinter().work_loop)
@@ -253,7 +256,7 @@ class WorkerPool(object):
return tmpl.render(pool=self, workers=self.workers, meta=self.debug_meta)
def write(self, preferred_queue, body):
queue_order = sorted(range(len(self.workers)), cmp=lambda x, y: -1 if x==preferred_queue else 0)
queue_order = sorted(range(len(self.workers)), key=lambda x: -1 if x==preferred_queue else x)
write_attempt_order = []
for queue_actual in queue_order:
try:
@@ -296,6 +299,9 @@ class AutoscalePool(WorkerPool):
# 5 workers per GB of total memory
self.max_workers = (total_memory_gb * 5)
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
@property
def should_grow(self):
if len(self.workers) < self.min_workers:
@@ -322,6 +328,11 @@ class AutoscalePool(WorkerPool):
2. Clean up unnecessary, idle workers.
3. Check to see if the database says this node is running any tasks
that aren't actually running. If so, reap them.
IMPORTANT: this function is one of the few places in the dispatcher
(aside from setting lookups) where we talk to the database. As such,
if there's an outage, this method _can_ throw various
django.db.utils.Error exceptions. Act accordingly.
"""
orphaned = []
for w in self.workers[::]:
@@ -362,14 +373,8 @@ class AutoscalePool(WorkerPool):
running_uuids = []
for worker in self.workers:
worker.calculate_managed_tasks()
running_uuids.extend(worker.managed_tasks.keys())
try:
reaper.reap(excluded_uuids=running_uuids)
except Exception:
# we _probably_ failed here due to DB connectivity issues, so
# don't use our logger (it accesses the database for configuration)
_, _, tb = sys.exc_info()
traceback.print_tb(tb)
running_uuids.extend(list(worker.managed_tasks.keys()))
reaper.reap(excluded_uuids=running_uuids)
def up(self):
if self.full:

View File

@@ -45,7 +45,7 @@ class task:
@task(queue='tower_broadcast', exchange_type='fanout')
def announce():
print "Run this everywhere!"
print("Run this everywhere!")
"""
def __init__(self, queue=None, exchange_type=None):

View File

@@ -11,6 +11,9 @@ logger = logging.getLogger('awx.main.dispatch')
def reap_job(j, status):
if UnifiedJob.objects.get(id=j.id).status not in ('running', 'waiting'):
# just in case, don't reap jobs that aren't running
return
j.status = status
j.start_args = '' # blank field to remove encrypted passwords
j.job_explanation += ' '.join((

View File

@@ -4,15 +4,20 @@
import os
import logging
import signal
import sys
from uuid import UUID
from Queue import Empty as QueueEmpty
from queue import Empty as QueueEmpty
from django import db
from kombu import Producer
from kombu.mixins import ConsumerMixin
from awx.main.dispatch.pool import WorkerPool
logger = logging.getLogger('awx.main.dispatch')
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
else:
logger = logging.getLogger('awx.main.dispatch')
def signame(sig):
@@ -80,7 +85,11 @@ class AWXConsumer(ConsumerMixin):
def process_task(self, body, message):
if 'control' in body:
return self.control(body, message)
try:
return self.control(body, message)
except Exception:
logger.exception("Exception handling control message:")
return
if len(self.pool):
if "uuid" in body and body['uuid']:
try:
@@ -103,7 +112,7 @@ class AWXConsumer(ConsumerMixin):
def stop(self, signum, frame):
self.should_stop = True # this makes the kombu mixin stop consuming
logger.debug('received {}, stopping'.format(signame(signum)))
logger.warn('received {}, stopping'.format(signame(signum)))
self.worker.on_stop()
raise SystemExit()
@@ -128,6 +137,10 @@ class BaseWorker(object):
logger.error("Exception on worker {}, restarting: ".format(idx) + str(e))
continue
try:
for conn in db.connections.all():
# If the database connection has a hiccup during the prior message, close it
# so we can establish a new connection
conn.close_if_unusable_or_obsolete()
self.perform_work(body, *args)
finally:
if 'uuid' in body:
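The connection-recycling loop added above is a standard Django pattern for long-lived workers: after a potentially long wait between messages, stale or dropped connections are closed so the next query transparently reconnects instead of raising InterfaceError. In isolation:

    from django import db

    for conn in db.connections.all():
        # no-op for healthy connections; closes dead or past-max-age ones
        conn.close_if_unusable_or_obsolete()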

View File

@@ -1,7 +1,5 @@
import logging
import time
import os
import signal
import traceback
from django.conf import settings
@@ -110,8 +108,7 @@ class CallbackBrokerWorker(BaseWorker):
break
except (OperationalError, InterfaceError, InternalError):
if retries >= self.MAX_RETRIES:
logger.exception('Worker could not re-establish database connectivity, shutting down gracefully: Job {}'.format(job_identifier))
os.kill(os.getppid(), signal.SIGINT)
logger.exception('Worker could not re-establish database connectivity, giving up on event for Job {}'.format(job_identifier))
return
delay = 60 * retries
logger.exception('Database Error Saving Job Event, retry #{i} in {delay} seconds:'.format(
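The surrounding retry loop (only partially visible in this hunk) backs off linearly, waiting 60 seconds times the attempt number; the behavior change above is that after MAX_RETRIES the worker now drops the event instead of killing its parent. A sketch of that shape, with save_event() as a hypothetical stand-in for the real DB write:

    import time

    MAX_RETRIES = 3   # illustrative
    retries = 0
    while True:
        try:
            save_event()            # hypothetical stand-in for the event save
            break
        except Exception:
            if retries >= MAX_RETRIES:
                break               # give up on this event, keep the worker alive
            retries += 1
            time.sleep(60 * retries)   # 60s, 120s, 180s, ...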

View File

@@ -4,8 +4,6 @@ import importlib
import sys
import traceback
import six
from django import db
from awx.main.tasks import dispatch_startup, inform_cluster_of_shutdown
@@ -31,11 +29,18 @@ class TaskWorker(BaseWorker):
awx.main.tasks.delete_inventory
awx.main.tasks.RunProjectUpdate
'''
if not task.startswith('awx.'):
raise ValueError('{} is not a valid awx task'.format(task))
module, target = task.rsplit('.', 1)
module = importlib.import_module(module)
_call = None
if hasattr(module, target):
_call = getattr(module, target, None)
if not (
hasattr(_call, 'apply_async') and hasattr(_call, 'delay')
):
raise ValueError('{} is not decorated with @task()'.format(task))
return _call
def run_callable(self, body):
@@ -75,19 +80,16 @@ class TaskWorker(BaseWorker):
'task': u'awx.main.tasks.RunProjectUpdate'
}
'''
for conn in db.connections.all():
# If the database connection has a hiccup during a task, close it
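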
# so we can establish a new connection
conn.close_if_unusable_or_obsolete()
result = None
try:
result = self.run_callable(body)
except Exception as exc:
result = exc
try:
if getattr(exc, 'is_awx_task_error', False):
# Error caused by user / tracked in job output
logger.warning(six.text_type("{}").format(exc))
logger.warning("{}".format(exc))
else:
task = body['task']
args = body.get('args', [])
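The flag checked above is set by _AwxTaskError.build_exception() in the next file; together they form a simple typed-error convention without exception subclasses:

    e = Exception('Execution error running job 42')
    e.is_awx_task_error = True                      # set by build_exception()

    if getattr(e, 'is_awx_task_error', False):      # checked by the worker
        print('user-facing error: {}'.format(e))    # logged as a warning, not a traceback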

View File

@@ -1,13 +1,12 @@
# Copyright (c) 2018 Ansible by Red Hat
# All Rights Reserved.
import six
class _AwxTaskError():
def build_exception(self, task, message=None):
if message is None:
message = six.text_type("Execution error running {}").format(task.log_format)
message = "Execution error running {}".format(task.log_format)
e = Exception(message)
e.task = task
e.is_awx_task_error = True
@@ -15,7 +14,7 @@ class _AwxTaskError():
def TaskCancel(self, task, rc):
"""Canceled flag caused run_pexpect to kill the job run"""
message=six.text_type("{} was canceled (rc={})").format(task.log_format, rc)
message="{} was canceled (rc={})".format(task.log_format, rc)
e = self.build_exception(task, message)
e.rc = rc
e.awx_task_error_type = "TaskCancel"
@@ -23,7 +22,7 @@ class _AwxTaskError():
def TaskError(self, task, rc):
"""Userspace error (non-zero exit code) in run_pexpect subprocess"""
message = six.text_type("{} encountered an error (rc={}), please see task stdout for details.").format(task.log_format, rc)
message = "{} encountered an error (rc={}), please see task stdout for details.".format(task.log_format, rc)
e = self.build_exception(task, message)
e.rc = rc
e.awx_task_error_type = "TaskError"

awx/main/expect/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
authorized_keys

View File

@@ -1,6 +1,3 @@
import base64
import codecs
import StringIO
import json
import os
import shutil
@@ -8,13 +5,13 @@ import stat
import tempfile
import time
import logging
from distutils.version import LooseVersion as Version
from io import StringIO
from django.conf import settings
import awx
from awx.main.expect import run
from awx.main.utils import OutputEventFilter, get_system_task_capacity
from awx.main.utils import get_system_task_capacity
from awx.main.queue import CallbackQueueDispatcher
logger = logging.getLogger('awx.isolated.manager')
@@ -23,23 +20,11 @@ playbook_logger = logging.getLogger('awx.isolated.manager.playbooks')
class IsolatedManager(object):
def __init__(self, args, cwd, env, stdout_handle, ssh_key_path,
expect_passwords={}, cancelled_callback=None, job_timeout=0,
idle_timeout=None, extra_update_fields=None,
pexpect_timeout=5, proot_cmd='bwrap'):
def __init__(self, env, cancelled_callback=None, job_timeout=0,
idle_timeout=None):
"""
:param args: a list of `subprocess.call`-style arguments
representing a subprocess e.g.,
['ansible-playbook', '...']
:param cwd: the directory where the subprocess should run,
generally the directory where playbooks exist
:param env: a dict containing environment variables for the
subprocess, ala `os.environ`
:param stdout_handle: a file-like object for capturing stdout
:param ssh_key_path: a filepath where SSH key data can be read
:param expect_passwords: a dict of regular expression password prompts
to input values, i.e., {r'Password:*?$':
'some_password'}
:param cancelled_callback: a callable - which returns `True` or `False`
- signifying if the job has been prematurely
cancelled
@@ -48,26 +33,11 @@ class IsolatedManager(object):
:param idle_timeout a timeout (in seconds); if new output is not
sent to stdout in this interval, the process
will be terminated
:param extra_update_fields: a dict used to specify DB fields which should
be updated on the underlying model
object after execution completes
:param pexpect_timeout a timeout (in seconds) to wait on
`pexpect.spawn().expect()` calls
:param proot_cmd the command used to isolate processes, `bwrap`
"""
self.args = args
self.cwd = cwd
self.isolated_env = self._redact_isolated_env(env.copy())
self.management_env = self._base_management_env()
self.stdout_handle = stdout_handle
self.ssh_key_path = ssh_key_path
self.expect_passwords = {k.pattern: v for k, v in expect_passwords.items()}
self.cancelled_callback = cancelled_callback
self.job_timeout = job_timeout
self.idle_timeout = idle_timeout
self.extra_update_fields = extra_update_fields
self.pexpect_timeout = pexpect_timeout
self.proot_cmd = proot_cmd
self.started_at = None
@staticmethod
@@ -101,20 +71,10 @@ class IsolatedManager(object):
]
if extra_vars:
args.extend(['-e', json.dumps(extra_vars)])
if settings.AWX_ISOLATED_VERBOSITY:
args.append('-%s' % ('v' * min(5, settings.AWX_ISOLATED_VERBOSITY)))
return args
@staticmethod
def _redact_isolated_env(env):
'''
strips some environment variables that aren't applicable to
job execution within the isolated instance
'''
for var in (
'HOME', 'RABBITMQ_HOST', 'RABBITMQ_PASS', 'RABBITMQ_USER', 'CACHE',
'DJANGO_PROJECT_DIR', 'DJANGO_SETTINGS_MODULE', 'RABBITMQ_VHOST'):
env.pop(var, None)
return env
@classmethod
def awx_playbook_path(cls):
return os.path.abspath(os.path.join(
@@ -125,56 +85,35 @@ class IsolatedManager(object):
def path_to(self, *args):
return os.path.join(self.private_data_dir, *args)
def dispatch(self):
def dispatch(self, playbook=None, module=None, module_args=None):
'''
Compile the playbook, its environment, and metadata into a series
of files, and ship to a remote host for isolated execution.
Ship the runner payload to a remote host for isolated execution.
'''
self.handled_events = set()
self.started_at = time.time()
secrets = {
'env': self.isolated_env,
'passwords': self.expect_passwords,
'ssh_key_data': None,
'idle_timeout': self.idle_timeout,
'job_timeout': self.job_timeout,
'pexpect_timeout': self.pexpect_timeout
}
# if an ssh private key fifo exists, read its contents and delete it
if self.ssh_key_path:
buff = StringIO.StringIO()
with open(self.ssh_key_path, 'r') as fifo:
for line in fifo:
buff.write(line)
secrets['ssh_key_data'] = buff.getvalue()
os.remove(self.ssh_key_path)
# write the entire secret payload to a named pipe
# the run_isolated.yml playbook will use a lookup to read this data
# into a variable, and will replicate the data into a named pipe on the
# isolated instance
secrets_path = os.path.join(self.private_data_dir, 'env')
run.open_fifo_write(secrets_path, base64.b64encode(json.dumps(secrets)))
self.build_isolated_job_data()
extra_vars = {
'src': self.private_data_dir,
'dest': settings.AWX_PROOT_BASE_PATH,
'ident': self.ident
}
if self.proot_temp_dir:
extra_vars['proot_temp_dir'] = self.proot_temp_dir
if playbook:
extra_vars['playbook'] = playbook
if module and module_args:
extra_vars['module'] = module
extra_vars['module_args'] = module_args
# Run ansible-playbook to launch a job on the isolated host. This:
#
# - sets up a temporary directory for proot/bwrap (if necessary)
# - copies encrypted job data from the controlling host to the isolated host (with rsync)
# - writes the encryption secret to a named pipe on the isolated host
# - launches the isolated playbook runner via `awx-expect start <job-id>`
# - launches ansible-runner
args = self._build_args('run_isolated.yml', '%s,' % self.host, extra_vars)
if self.instance.verbosity:
args.append('-%s' % ('v' * min(5, self.instance.verbosity)))
buff = StringIO.StringIO()
buff = StringIO()
logger.debug('Starting job {} on isolated host with `run_isolated.yml` playbook.'.format(self.instance.id))
status, rc = IsolatedManager.run_pexpect(
args, self.awx_playbook_path(), self.management_env, buff,
@@ -182,10 +121,15 @@ class IsolatedManager(object):
job_timeout=settings.AWX_ISOLATED_LAUNCH_TIMEOUT,
pexpect_timeout=5
)
output = buff.getvalue().encode('utf-8')
output = buff.getvalue()
playbook_logger.info('Isolated job {} dispatch:\n{}'.format(self.instance.id, output))
if status != 'successful':
self.stdout_handle.write(output)
for event_data in [
{'event': 'verbose', 'stdout': output},
{'event': 'EOF', 'final_counter': 1},
]:
event_data.setdefault(self.event_data_key, self.instance.id)
CallbackQueueDispatcher().dispatch(event_data)
return status, rc
@classmethod
@@ -209,11 +153,8 @@ class IsolatedManager(object):
def build_isolated_job_data(self):
'''
Write the playbook and metadata into a collection of files on the local
file system.
This function is intended to be used to compile job data so that it
can be shipped to a remote, isolated host (via ssh).
Write metadata related to the playbook run into a collection of files
on the local file system.
'''
rsync_exclude = [
@@ -223,42 +164,18 @@ class IsolatedManager(object):
'- /project/.hg',
# don't rsync job events that are in the process of being written
'- /artifacts/job_events/*-partial.json.tmp',
# rsync can't copy named pipe data - we're replicating this manually ourselves in the playbook
'- /env'
# don't rsync the ssh_key FIFO
'- /env/ssh_key',
]
for filename, data in (
['.rsync-filter', '\n'.join(rsync_exclude)],
['args', json.dumps(self.args)]
):
path = self.path_to(filename)
with open(path, 'w') as f:
f.write(data)
os.chmod(path, stat.S_IRUSR)
# symlink the scm checkout (if there is one) so that it's rsync'ed over, too
if 'AD_HOC_COMMAND_ID' not in self.isolated_env:
os.symlink(self.cwd, self.path_to('project'))
# create directories for build artifacts to live in
os.makedirs(self.path_to('artifacts', 'job_events'), mode=stat.S_IXUSR + stat.S_IWUSR + stat.S_IRUSR)
def _missing_artifacts(self, path_list, output):
missing_artifacts = filter(lambda path: not os.path.exists(path), path_list)
for path in missing_artifacts:
self.stdout_handle.write('ansible did not exit cleanly, missing `{}`.\n'.format(path))
if missing_artifacts:
daemon_path = self.path_to('artifacts', 'daemon.log')
if os.path.exists(daemon_path):
# If available, show log files from the run.py call
with codecs.open(daemon_path, 'r', encoding='utf-8') as f:
self.stdout_handle.write(f.read())
else:
# Provide the management playbook standard out if not available
self.stdout_handle.write(output)
return True
return False
def check(self, interval=None):
"""
Repeatedly poll the isolated node to determine if the job has run.
@@ -282,20 +199,13 @@ class IsolatedManager(object):
status = 'failed'
output = ''
rc = None
buff = StringIO.StringIO()
buff = StringIO()
last_check = time.time()
seek = 0
job_timeout = remaining = self.job_timeout
dispatcher = CallbackQueueDispatcher()
while status == 'failed':
if job_timeout != 0:
remaining = max(0, job_timeout - (time.time() - self.started_at))
if remaining == 0:
# if it takes longer than $REMAINING_JOB_TIMEOUT to retrieve
# job artifacts from the host, consider the job failed
if isinstance(self.extra_update_fields, dict):
self.extra_update_fields['job_explanation'] = "Job terminated due to timeout"
status = 'failed'
break
canceled = self.cancelled_callback() if self.cancelled_callback else False
if not canceled and time.time() - last_check < interval:
@@ -303,7 +213,10 @@ class IsolatedManager(object):
time.sleep(1)
continue
buff = StringIO.StringIO()
if canceled:
logger.warning('Isolated job {} was manually cancelled.'.format(self.instance.id))
buff = StringIO()
logger.debug('Checking on isolated job {} with `check_isolated.yml`.'.format(self.instance.id))
status, rc = IsolatedManager.run_pexpect(
args, self.awx_playbook_path(), self.management_env, buff,
@@ -311,36 +224,50 @@ class IsolatedManager(object):
idle_timeout=remaining,
job_timeout=remaining,
pexpect_timeout=5,
proot_cmd=self.proot_cmd
proot_cmd='bwrap'
)
output = buff.getvalue().encode('utf-8')
playbook_logger.info('Isolated job {} check:\n{}'.format(self.instance.id, output))
path = self.path_to('artifacts', 'stdout')
if os.path.exists(path):
with open(path, 'r') as f:
f.seek(seek)
for line in f:
self.stdout_handle.write(line)
seek += len(line)
# discover new events and ingest them
events_path = self.path_to('artifacts', self.ident, 'job_events')
# it's possible that `events_path` doesn't exist *yet*, because runner
# hasn't actually written any events yet (if you ran e.g., a sleep 30)
# only attempt to consume events if any were rsynced back
if os.path.exists(events_path):
for event in set(os.listdir(events_path)) - self.handled_events:
path = os.path.join(events_path, event)
if os.path.exists(path):
event_data = json.load(
open(os.path.join(events_path, event), 'r')
)
event_data.setdefault(self.event_data_key, self.instance.id)
dispatcher.dispatch(event_data)
self.handled_events.add(event)
# handle artifacts
if event_data.get('event_data', {}).get('artifact_data', {}):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
last_check = time.time()
if status == 'successful':
status_path = self.path_to('artifacts', 'status')
rc_path = self.path_to('artifacts', 'rc')
if self._missing_artifacts([status_path, rc_path], output):
status = 'failed'
rc = 1
else:
with open(status_path, 'r') as f:
status = f.readline()
with open(rc_path, 'r') as f:
rc = int(f.readline())
elif status == 'failed':
# if we were unable to retrieve job results from the isolated host,
# print stdout of the `check_isolated.yml` playbook for clues
self.stdout_handle.write(output)
status_path = self.path_to('artifacts', self.ident, 'status')
rc_path = self.path_to('artifacts', self.ident, 'rc')
with open(status_path, 'r') as f:
status = f.readline()
with open(rc_path, 'r') as f:
rc = int(f.readline())
# emit an EOF event
event_data = {
'event': 'EOF',
'final_counter': len(self.handled_events)
}
event_data.setdefault(self.event_data_key, self.instance.id)
dispatcher.dispatch(event_data)
return status, rc
@@ -350,12 +277,11 @@ class IsolatedManager(object):
'private_data_dir': self.private_data_dir,
'cleanup_dirs': [
self.private_data_dir,
self.proot_temp_dir,
],
}
args = self._build_args('clean_isolated.yml', '%s,' % self.host, extra_vars)
logger.debug('Cleaning up job {} on isolated host with `clean_isolated.yml` playbook.'.format(self.instance.id))
buff = StringIO.StringIO()
buff = StringIO()
timeout = max(60, 2 * settings.AWX_ISOLATED_CONNECTION_TIMEOUT)
status, rc = IsolatedManager.run_pexpect(
args, self.awx_playbook_path(), self.management_env, buff,
@@ -371,23 +297,15 @@ class IsolatedManager(object):
@classmethod
def update_capacity(cls, instance, task_result, awx_application_version):
instance.version = task_result['version']
instance.version = 'ansible-runner-{}'.format(task_result['version'])
isolated_version = instance.version.split("-", 1)[0]
cluster_version = awx_application_version.split("-", 1)[0]
if Version(cluster_version) > Version(isolated_version):
err_template = "Isolated instance {} reports version {}, cluster node is at {}, setting capacity to zero."
logger.error(err_template.format(instance.hostname, instance.version, awx_application_version))
instance.capacity = 0
else:
if instance.capacity == 0 and task_result['capacity_cpu']:
logger.warning('Isolated instance {} has re-joined.'.format(instance.hostname))
instance.cpu_capacity = int(task_result['capacity_cpu'])
instance.mem_capacity = int(task_result['capacity_mem'])
instance.capacity = get_system_task_capacity(scale=instance.capacity_adjustment,
cpu_capacity=int(task_result['capacity_cpu']),
mem_capacity=int(task_result['capacity_mem']))
if instance.capacity == 0 and task_result['capacity_cpu']:
logger.warning('Isolated instance {} has re-joined.'.format(instance.hostname))
instance.cpu_capacity = int(task_result['capacity_cpu'])
instance.mem_capacity = int(task_result['capacity_mem'])
instance.capacity = get_system_task_capacity(scale=instance.capacity_adjustment,
cpu_capacity=int(task_result['capacity_cpu']),
mem_capacity=int(task_result['capacity_mem']))
instance.save(update_fields=['cpu_capacity', 'mem_capacity', 'capacity', 'version', 'modified'])
@classmethod
@@ -407,69 +325,55 @@ class IsolatedManager(object):
args = cls._build_args('heartbeat_isolated.yml', hostname_string)
args.extend(['--forks', str(len(instance_qs))])
env = cls._base_management_env()
env['ANSIBLE_STDOUT_CALLBACK'] = 'json'
buff = StringIO.StringIO()
timeout = max(60, 2 * settings.AWX_ISOLATED_CONNECTION_TIMEOUT)
status, rc = IsolatedManager.run_pexpect(
args, cls.awx_playbook_path(), env, buff,
idle_timeout=timeout, job_timeout=timeout,
pexpect_timeout=5
)
output = buff.getvalue().encode('utf-8')
buff.close()
try:
result = json.loads(output)
if not isinstance(result, dict):
raise TypeError('Expected a dict but received {}.'.format(str(type(result))))
except (ValueError, AssertionError, TypeError):
logger.exception('Failed to read status from isolated instances, output:\n {}'.format(output))
return
facts_path = tempfile.mkdtemp()
env['ANSIBLE_CACHE_PLUGIN'] = 'jsonfile'
env['ANSIBLE_CACHE_PLUGIN_CONNECTION'] = facts_path
for instance in instance_qs:
try:
task_result = result['plays'][0]['tasks'][0]['hosts'][instance.hostname]
except (KeyError, IndexError):
buff = StringIO()
timeout = max(60, 2 * settings.AWX_ISOLATED_CONNECTION_TIMEOUT)
status, rc = IsolatedManager.run_pexpect(
args, cls.awx_playbook_path(), env, buff,
idle_timeout=timeout, job_timeout=timeout,
pexpect_timeout=5
)
heartbeat_stdout = buff.getvalue().encode('utf-8')
buff.close()
for instance in instance_qs:
output = heartbeat_stdout
task_result = {}
if 'capacity_cpu' in task_result and 'capacity_mem' in task_result:
cls.update_capacity(instance, task_result, awx_application_version)
logger.debug('Isolated instance {} successful heartbeat'.format(instance.hostname))
elif instance.capacity == 0:
logger.debug('Isolated instance {} previously marked as lost, could not re-join.'.format(
instance.hostname))
else:
logger.warning('Could not update status of isolated instance {}, msg={}'.format(
instance.hostname, task_result.get('msg', 'unknown failure')
))
if instance.is_lost(isolated=True):
instance.capacity = 0
instance.save(update_fields=['capacity'])
logger.error('Isolated instance {} last checked in at {}, marked as lost.'.format(
instance.hostname, instance.modified))
@staticmethod
def get_stdout_handle(instance, private_data_dir, event_data_key='job_id'):
dispatcher = CallbackQueueDispatcher()
def job_event_callback(event_data):
event_data.setdefault(event_data_key, instance.id)
if 'uuid' in event_data:
filename = '{}-partial.json'.format(event_data['uuid'])
partial_filename = os.path.join(private_data_dir, 'artifacts', 'job_events', filename)
try:
with codecs.open(partial_filename, 'r', encoding='utf-8') as f:
partial_event_data = json.load(f)
event_data.update(partial_event_data)
except IOError:
if event_data.get('event', '') != 'verbose':
logger.error('Missing callback data for event type `{}`, uuid {}, job {}.\nevent_data: {}'.format(
event_data.get('event', ''), event_data['uuid'], instance.id, event_data))
dispatcher.dispatch(event_data)
with open(os.path.join(facts_path, instance.hostname), 'r') as facts_data:
output = facts_data.read()
task_result = json.loads(output)
except Exception:
logger.exception('Failed to read status from isolated instances, output:\n {}'.format(output))
if 'awx_capacity_cpu' in task_result and 'awx_capacity_mem' in task_result:
task_result = {
'capacity_cpu': task_result['awx_capacity_cpu'],
'capacity_mem': task_result['awx_capacity_mem'],
'version': task_result['awx_capacity_version']
}
cls.update_capacity(instance, task_result, awx_application_version)
logger.debug('Isolated instance {} successful heartbeat'.format(instance.hostname))
elif instance.capacity == 0:
logger.debug('Isolated instance {} previously marked as lost, could not re-join.'.format(
instance.hostname))
else:
logger.warning('Could not update status of isolated instance {}'.format(instance.hostname))
if instance.is_lost(isolated=True):
instance.capacity = 0
instance.save(update_fields=['capacity'])
logger.error('Isolated instance {} last checked in at {}, marked as lost.'.format(
instance.hostname, instance.modified))
finally:
if os.path.exists(facts_path):
shutil.rmtree(facts_path)
return OutputEventFilter(job_event_callback)
def run(self, instance, private_data_dir, proot_temp_dir):
def run(self, instance, private_data_dir, playbook, module, module_args,
event_data_key, ident=None):
"""
Run a job on an isolated host.
@@ -477,18 +381,21 @@ class IsolatedManager(object):
:param private_data_dir: an absolute path on the local file system
where job-specific data should be written
(i.e., `/tmp/ansible_awx_xyz/`)
:param proot_temp_dir: a temporary directory which bwrap maps
restricted paths to
:param playbook: the playbook to run
:param module: the module to run
:param module_args: the module args to use
:param event_data_key: e.g., job_id, inventory_id, ...
For a completed job run, this function returns (status, rc),
representing the status and return code of the isolated
`ansible-playbook` run.
"""
self.ident = ident
self.event_data_key = event_data_key
self.instance = instance
self.host = instance.execution_node
self.private_data_dir = private_data_dir
self.proot_temp_dir = proot_temp_dir
status, rc = self.dispatch()
status, rc = self.dispatch(playbook, module, module_args)
if status == 'successful':
status, rc = self.check()
self.cleanup()

View File

@@ -4,7 +4,6 @@ import argparse
import base64
import codecs
import collections
import StringIO
import logging
import json
import os
@@ -13,12 +12,12 @@ import pipes
import re
import signal
import sys
import thread
import threading
import time
from io import StringIO
import pexpect
import psutil
import six
logger = logging.getLogger('awx.main.utils.expect')
@@ -49,7 +48,10 @@ def open_fifo_write(path, data):
reads data from the pipe.
'''
os.mkfifo(path, 0o600)
thread.start_new_thread(lambda p, d: open(p, 'w').write(d), (path, data))
threading.Thread(
target=lambda p, d: open(p, 'w').write(d),
args=(path, data)
).start()
def run_pexpect(args, cwd, env, logfile,
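The thread in open_fifo_write() matters because opening a FIFO for writing blocks until a reader attaches; pushing the open/write into a background thread lets the caller continue, and threading replaces the removed Python 2-only thread module. Equivalent standalone sketch:

    import os
    import threading

    def open_fifo_write(path, data):
        os.mkfifo(path, 0o600)                            # create the named pipe
        threading.Thread(
            target=lambda: open(path, 'w').write(data)    # blocks until a reader opens it
        ).start()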
@@ -97,14 +99,8 @@ def run_pexpect(args, cwd, env, logfile,
# enforce usage of an OrderedDict so that the ordering of elements in
# `keys()` matches `values()`.
expect_passwords = collections.OrderedDict(expect_passwords)
password_patterns = expect_passwords.keys()
password_values = expect_passwords.values()
# pexpect needs all env vars to be utf-8 encoded strings
# https://github.com/pexpect/pexpect/issues/512
for k, v in env.items():
if isinstance(v, six.text_type):
env[k] = v.encode('utf-8')
password_patterns = list(expect_passwords.keys())
password_values = list(expect_passwords.values())
child = pexpect.spawn(
args[0], args[1:], cwd=cwd, env=env, ignore_sighup=True,
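The list() wrappers above preserve the old behavior: pexpect needs indexable sequences of patterns and values, and because both lists come from the same OrderedDict, password_patterns[i] stays aligned with password_values[i]. For example:

    import collections

    expect_passwords = collections.OrderedDict([(r'Password:', 'secret')])
    password_patterns = list(expect_passwords.keys())     # what the child output is matched against
    password_values = list(expect_passwords.values())     # indexed by match position
    # keys() and values() of one dict always align; list() freezes that
    # alignment into sequences that child.expect() can index into.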
@@ -195,18 +191,7 @@ def run_isolated_job(private_data_dir, secrets, logfile=sys.stdout):
job_timeout = secrets.get('job_timeout', 10)
pexpect_timeout = secrets.get('pexpect_timeout', 5)
# Use local callback directory
callback_dir = os.getenv('AWX_LIB_DIRECTORY')
if callback_dir is None:
raise RuntimeError('Location for callbacks must be specified '
'by environment variable AWX_LIB_DIRECTORY.')
env['ANSIBLE_CALLBACK_PLUGINS'] = os.path.join(callback_dir, 'isolated_callbacks')
if 'AD_HOC_COMMAND_ID' in env:
env['ANSIBLE_STDOUT_CALLBACK'] = 'minimal'
else:
env['ANSIBLE_STDOUT_CALLBACK'] = 'awx_display'
env['AWX_ISOLATED_DATA_DIR'] = private_data_dir
env['PYTHONPATH'] = env.get('PYTHONPATH', '') + callback_dir + ':'
venv_path = env.get('VIRTUAL_ENV')
if venv_path and not os.path.exists(venv_path):
@@ -232,7 +217,8 @@ def handle_termination(pid, args, proot_cmd, is_cancel=True):
instance's cancel_flag.
'''
try:
if proot_cmd in ' '.join(args):
used_proot = proot_cmd.encode('utf-8') in args
if used_proot:
if not psutil:
os.kill(pid, signal.SIGKILL)
else:
@@ -253,8 +239,8 @@ def handle_termination(pid, args, proot_cmd, is_cancel=True):
def __run__(private_data_dir):
buff = StringIO.StringIO()
with open(os.path.join(private_data_dir, 'env'), 'r') as f:
buff = StringIO()
with codecs.open(os.path.join(private_data_dir, 'env'), 'r', encoding='utf-8') as f:
for line in f:
buff.write(line)

View File

@@ -4,10 +4,8 @@
# Python
import copy
import json
import operator
import re
import six
import urllib
import urllib.parse
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError, TemplateSyntaxError
@@ -46,7 +44,7 @@ from awx.main.utils.filters import SmartFilter
from awx.main.utils.encryption import encrypt_value, decrypt_value, get_encryption_key
from awx.main.validators import validate_ssh_private_key
from awx.main.models.rbac import batch_role_ancestor_rebuilding, Role
from awx.main.constants import CHOICES_PRIVILEGE_ESCALATION_METHODS, ENV_BLACKLIST
from awx.main.constants import ENV_BLACKLIST
from awx.main import utils
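urllib was reorganized in Python 3; unquote() and friends now live in urllib.parse, hence the import change above and the urllib.parse.unquote() call later in this file:

    import urllib.parse

    urllib.parse.unquote('name%3Dsmart%20filter')   # 'name=smart filter'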
@@ -80,7 +78,7 @@ class JSONField(upstream_JSONField):
class JSONBField(upstream_JSONBField):
def get_prep_lookup(self, lookup_type, value):
if isinstance(value, six.string_types) and value == "null":
if isinstance(value, str) and value == "null":
return 'null'
return super(JSONBField, self).get_prep_lookup(lookup_type, value)
@@ -95,7 +93,7 @@ class JSONBField(upstream_JSONBField):
def from_db_value(self, value, expression, connection, context):
# Work around a bug in django-jsonfield
# https://bitbucket.org/schinckel/django-jsonfield/issues/57/cannot-use-in-the-same-project-as-djangos
if isinstance(value, six.string_types):
if isinstance(value, str):
return json.loads(value)
return value
@@ -251,6 +249,9 @@ class ImplicitRoleField(models.ForeignKey):
if type(field_name) == tuple:
continue
if type(field_name) == bytes:
field_name = field_name.decode('utf-8')
if field_name.startswith('singleton:'):
continue
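The decode step added above is needed because Python 3 bytes never equal (or startswith) str, so a field name carried as bytes would break the 'singleton:' check. Minimal illustration:

    field_name = b'singleton:system_administrator'
    # field_name.startswith('singleton:')   # TypeError: bytes vs str
    if isinstance(field_name, bytes):
        field_name = field_name.decode('utf-8')
    field_name.startswith('singleton:')     # True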
@@ -373,7 +374,7 @@ class SmartFilterField(models.TextField):
# https://docs.python.org/2/library/stdtypes.html#truth-value-testing
if not value:
return None
value = urllib.unquote(value)
value = urllib.parse.unquote(value)
try:
SmartFilter().query_from_string(value)
except RuntimeError as e:
@@ -407,11 +408,8 @@ class JSONSchemaField(JSONBField):
self.schema(model_instance),
format_checker=self.format_checker
).iter_errors(value):
# strip Python unicode markers from jsonschema validation errors
error.message = re.sub(r'\bu(\'|")', r'\1', error.message)
if error.validator == 'pattern' and 'error' in error.schema:
error.message = six.text_type(error.schema['error']).format(instance=error.instance)
error.message = error.schema['error'].format(instance=error.instance)
elif error.validator == 'type':
expected_type = error.validator_value
if expected_type == 'object':
@@ -450,7 +448,7 @@ class JSONSchemaField(JSONBField):
def from_db_value(self, value, expression, connection, context):
# Work around a bug in django-jsonfield
# https://bitbucket.org/schinckel/django-jsonfield/issues/57/cannot-use-in-the-same-project-as-djangos
if isinstance(value, six.string_types):
if isinstance(value, str):
return json.loads(value)
return value
@@ -512,12 +510,9 @@ class CredentialInputField(JSONSchemaField):
properties = {}
for field in model_instance.credential_type.inputs.get('fields', []):
field = field.copy()
if field['type'] == 'become_method':
field.pop('type')
field['choices'] = map(operator.itemgetter(0), CHOICES_PRIVILEGE_ESCALATION_METHODS)
properties[field['id']] = field
if field.get('choices', []):
field['enum'] = field['choices'][:]
field['enum'] = list(field['choices'])[:]
return {
'type': 'object',
'properties': properties,
@@ -547,7 +542,7 @@ class CredentialInputField(JSONSchemaField):
v != '$encrypted$',
model_instance.pk
]):
if not isinstance(getattr(model_instance, k), six.string_types):
if not isinstance(getattr(model_instance, k), str):
raise django_exceptions.ValidationError(
_('secret values must be of type string, not {}').format(type(v).__name__),
code='invalid',
@@ -564,7 +559,7 @@ class CredentialInputField(JSONSchemaField):
format_checker=self.format_checker
).iter_errors(decrypted_values):
if error.validator == 'pattern' and 'error' in error.schema:
error.message = six.text_type(error.schema['error']).format(instance=error.instance)
error.message = error.schema['error'].format(instance=error.instance)
if error.validator == 'dependencies':
# replace the default error messaging w/ a better i18n string
# I wish there was a better way to determine the parameters of
@@ -658,7 +653,7 @@ class CredentialTypeInputField(JSONSchemaField):
'items': {
'type': 'object',
'properties': {
'type': {'enum': ['string', 'boolean', 'become_method']},
'type': {'enum': ['string', 'boolean']},
'format': {'enum': ['ssh_private_key']},
'choices': {
'type': 'array',
@@ -676,6 +671,7 @@ class CredentialTypeInputField(JSONSchemaField):
'multiline': {'type': 'boolean'},
'secret': {'type': 'boolean'},
'ask_at_runtime': {'type': 'boolean'},
'default': {},
},
'additionalProperties': False,
'required': ['id', 'label'],
@@ -719,16 +715,13 @@ class CredentialTypeInputField(JSONSchemaField):
# If no type is specified, default to string
field['type'] = 'string'
if field['type'] == 'become_method':
if not model_instance.managed_by_tower:
if 'default' in field:
default = field['default']
_type = {'string': str, 'boolean': bool}[field['type']]
if type(default) != _type:
raise django_exceptions.ValidationError(
_('become_method is a reserved type name'),
code='invalid',
params={'value': value},
_('{} is not a {}').format(default, field['type'])
)
else:
field.pop('type')
field['choices'] = CHOICES_PRIVILEGE_ESCALATION_METHODS
for key in ('choices', 'multiline', 'format', 'secret',):
if key in field and field['type'] != 'string':
@@ -824,14 +817,14 @@ class CredentialTypeInjectorField(JSONSchemaField):
)
class ExplodingNamespace:
def __unicode__(self):
def __str__(self):
raise UndefinedError(_('Must define unnamed file injector in order to reference `tower.filename`.'))
class TowerNamespace:
def __init__(self):
self.filename = ExplodingNamespace()
def __unicode__(self):
def __str__(self):
raise UndefinedError(_('Cannot directly reference reserved `tower` namespace container.'))
valid_namespace['tower'] = TowerNamespace()
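The __unicode__ to __str__ renames above are load-bearing: Python 3 only ever calls __str__, so without the rename these guard exceptions would never fire when Jinja2 stringifies the namespace. The pattern, reduced (RuntimeError stands in for the UndefinedError used above):

    class Exploding:
        def __str__(self):
            raise RuntimeError('must not be rendered directly')

    str(Exploding())   # raises, as intended; under Python 2 only unicode() would have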

View File

@@ -5,7 +5,6 @@
import datetime
import logging
import six
# Django
from django.core.management.base import BaseCommand
@@ -43,7 +42,7 @@ class Command(BaseCommand):
n_deleted_items = 0
pks_to_delete = set()
for asobj in ActivityStream.objects.iterator():
asobj_disp = '"%s" id: %s' % (six.text_type(asobj), asobj.id)
asobj_disp = '"%s" id: %s' % (str(asobj), asobj.id)
if asobj.timestamp >= self.cutoff:
if self.dry_run:
self.logger.info("would skip %s" % asobj_disp)

View File

@@ -3,6 +3,7 @@
# Python
import re
import sys
from dateutil.relativedelta import relativedelta
# Django
@@ -129,6 +130,7 @@ class Command(BaseCommand):
@transaction.atomic
def handle(self, *args, **options):
sys.stderr.write("This command has been deprecated and will be removed in a future release.\n")
if not feature_enabled('system_tracking'):
raise CommandError("The System Tracking feature is not enabled for your instance")
cleanup_facts = CleanupFacts()

View File

@@ -5,7 +5,6 @@
import datetime
import logging
import six
# Django
from django.core.management.base import BaseCommand, CommandError
@@ -68,7 +67,7 @@ class Command(BaseCommand):
jobs = Job.objects.filter(created__lt=self.cutoff)
for job in jobs.iterator():
job_display = '"%s" (%d host summaries, %d events)' % \
(six.text_type(job),
(str(job),
job.job_host_summaries.count(), job.job_events.count())
if job.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -89,7 +88,7 @@ class Command(BaseCommand):
ad_hoc_commands = AdHocCommand.objects.filter(created__lt=self.cutoff)
for ad_hoc_command in ad_hoc_commands.iterator():
ad_hoc_command_display = '"%s" (%d events)' % \
(six.text_type(ad_hoc_command),
(str(ad_hoc_command),
ad_hoc_command.ad_hoc_command_events.count())
if ad_hoc_command.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -109,7 +108,7 @@ class Command(BaseCommand):
skipped, deleted = 0, 0
project_updates = ProjectUpdate.objects.filter(created__lt=self.cutoff)
for pu in project_updates.iterator():
pu_display = '"%s" (type %s)' % (six.text_type(pu), six.text_type(pu.launch_type))
pu_display = '"%s" (type %s)' % (str(pu), str(pu.launch_type))
if pu.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s project update %s', action_text, pu.status, pu_display)
@@ -132,7 +131,7 @@ class Command(BaseCommand):
skipped, deleted = 0, 0
inventory_updates = InventoryUpdate.objects.filter(created__lt=self.cutoff)
for iu in inventory_updates.iterator():
iu_display = '"%s" (source %s)' % (six.text_type(iu), six.text_type(iu.source))
iu_display = '"%s" (source %s)' % (str(iu), str(iu.source))
if iu.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s inventory update %s', action_text, iu.status, iu_display)
@@ -155,7 +154,7 @@ class Command(BaseCommand):
skipped, deleted = 0, 0
system_jobs = SystemJob.objects.filter(created__lt=self.cutoff)
for sj in system_jobs.iterator():
sj_display = '"%s" (type %s)' % (six.text_type(sj), six.text_type(sj.job_type))
sj_display = '"%s" (type %s)' % (str(sj), str(sj.job_type))
if sj.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
self.logger.debug('%s %s system_job %s', action_text, sj.status, sj_display)
@@ -185,7 +184,7 @@ class Command(BaseCommand):
workflow_jobs = WorkflowJob.objects.filter(created__lt=self.cutoff)
for workflow_job in workflow_jobs.iterator():
workflow_job_display = '"{}" ({} nodes)'.format(
six.text_type(workflow_job),
str(workflow_job),
workflow_job.workflow_nodes.count())
if workflow_job.status in ('pending', 'waiting', 'running'):
action_text = 'would skip' if self.dry_run else 'skipping'
@@ -206,7 +205,7 @@ class Command(BaseCommand):
notifications = Notification.objects.filter(created__lt=self.cutoff)
for notification in notifications.iterator():
notification_display = '"{}" (started {}, {} type, {} sent)'.format(
six.text_type(notification), six.text_type(notification.created),
str(notification), str(notification.created),
notification.notification_type, notification.notifications_sent)
if notification.status in ('pending',):
action_text = 'would skip' if self.dry_run else 'skipping'

View File

@@ -2,7 +2,6 @@
# All Rights Reserved
import subprocess
import warnings
from django.db import transaction
from django.core.management.base import BaseCommand, CommandError
@@ -24,17 +23,10 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str,
help='Hostname used during provisioning')
parser.add_argument('--name', dest='name', type=str,
help='(PENDING DEPRECATION) Hostname used during provisioning')
@transaction.atomic
def handle(self, *args, **options):
# TODO: remove in 3.3
if options.get('name'):
warnings.warn("`--name` is deprecated in favor of `--hostname`, and will be removed in release 3.3.")
if options.get('hostname'):
raise CommandError("Cannot accept both --name and --hostname.")
options['hostname'] = options['name']
hostname = options.get('hostname')
if not hostname:
raise CommandError("--hostname is a required argument")

View File

@@ -1,17 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
# Borrow from another AWX command
from awx.main.management.commands.deprovision_instance import Command as OtherCommand
# Python
import warnings
class Command(OtherCommand):
def handle(self, *args, **options):
# TODO: delete this entire file in 3.3
warnings.warn('This command is replaced with `deprovision_instance` and will '
'be removed in release 3.3.')
return super(Command, self).handle(*args, **options)

View File

@@ -1,6 +1,7 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
import datetime
from django.utils.encoding import smart_str
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
@@ -35,10 +36,10 @@ class Command(BaseCommand):
).save()
pemfile = Setting.objects.create(
key='AWX_ISOLATED_PUBLIC_KEY',
value=key.public_key().public_bytes(
value=smart_str(key.public_key().public_bytes(
encoding=serialization.Encoding.OpenSSH,
format=serialization.PublicFormat.OpenSSH
) + " generated-by-awx@%s" % datetime.datetime.utcnow().isoformat()
)) + " generated-by-awx@%s" % datetime.datetime.utcnow().isoformat()
)
pemfile.save()
print(pemfile.value)

View File

@@ -4,6 +4,7 @@
# Python
import json
import logging
import fnmatch
import os
import re
import subprocess
@@ -11,16 +12,25 @@ import sys
import time
import traceback
import shutil
from distutils.version import LooseVersion as Version
# Django
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
from django.core.exceptions import ImproperlyConfigured
from django.db import connection, transaction
from django.utils.encoding import smart_text
# AWX
from awx.main.models import * # noqa
# AWX inventory imports
from awx.main.models.inventory import (
Inventory,
InventorySource,
InventoryUpdate,
Host
)
from awx.main.utils.mem_inventory import MemInventory, dict_to_mem_data
# other AWX imports
from awx.main.models.rbac import batch_role_ancestor_rebuilding
from awx.main.utils import (
ignore_inventory_computed_fields,
check_proot_installed,
@@ -28,7 +38,7 @@ from awx.main.utils import (
build_proot_temp_dir,
get_licenser
)
from awx.main.utils.mem_inventory import MemInventory, dict_to_mem_data
from awx.main.utils.common import _get_ansible_version
from awx.main.signals import disable_activity_stream
from awx.main.constants import STANDARD_INVENTORY_UPDATE_ENV
@@ -63,60 +73,71 @@ class AnsibleInventoryLoader(object):
use the ansible-inventory CLI utility to convert it into in-memory
representational objects. Example:
/usr/bin/ansible/ansible-inventory -i hosts --list
If it fails to find this, it uses the backported script instead
'''
def __init__(self, source, group_filter_re=None, host_filter_re=None, is_custom=False):
def __init__(self, source, is_custom=False, venv_path=None):
self.source = source
self.source_dir = functioning_dir(self.source)
self.is_custom = is_custom
self.tmp_private_dir = None
self.method = 'ansible-inventory'
self.group_filter_re = group_filter_re
self.host_filter_re = host_filter_re
self.is_vendored_source = False
if self.source_dir == os.path.join(settings.BASE_DIR, 'plugins', 'inventory'):
self.is_vendored_source = True
if venv_path:
self.venv_path = venv_path
else:
self.venv_path = settings.ANSIBLE_VENV_PATH
def build_env(self):
env = dict(os.environ.items())
env['VIRTUAL_ENV'] = settings.ANSIBLE_VENV_PATH
env['PATH'] = os.path.join(settings.ANSIBLE_VENV_PATH, "bin") + ":" + env['PATH']
env['VIRTUAL_ENV'] = self.venv_path
env['PATH'] = os.path.join(self.venv_path, "bin") + ":" + env['PATH']
# Set configuration items that should always be used for updates
for key, value in STANDARD_INVENTORY_UPDATE_ENV.items():
if key not in env:
env[key] = value
venv_libdir = os.path.join(settings.ANSIBLE_VENV_PATH, "lib")
venv_libdir = os.path.join(self.venv_path, "lib")
env.pop('PYTHONPATH', None) # default to none if no python_ver matches
if os.path.isdir(os.path.join(venv_libdir, "python2.7")):
env['PYTHONPATH'] = os.path.join(venv_libdir, "python2.7", "site-packages") + ":"
for version in os.listdir(venv_libdir):
if fnmatch.fnmatch(version, 'python[23].*'):
if os.path.isdir(os.path.join(venv_libdir, version)):
env['PYTHONPATH'] = os.path.join(venv_libdir, version, "site-packages") + ":"
break
# For internal inventory updates, these are not reported in the job_env API
logger.info('Using VIRTUAL_ENV: {}'.format(env['VIRTUAL_ENV']))
logger.info('Using PATH: {}'.format(env['PATH']))
logger.info('Using PYTHONPATH: {}'.format(env.get('PYTHONPATH', None)))
return env
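The fnmatch loop above generalizes the old hard-coded python2.7 path: it scans the virtualenv's lib/ directory and takes the first python2.* or python3.* entry it finds. Standalone sketch (the venv path is illustrative):

    import fnmatch
    import os

    venv_libdir = '/var/lib/awx/venv/ansible/lib'   # illustrative path
    for version in os.listdir(venv_libdir):
        if fnmatch.fnmatch(version, 'python[23].*'):
            site_packages = os.path.join(venv_libdir, version, 'site-packages')
            break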
def get_path_to_ansible_inventory(self):
venv_exe = os.path.join(self.venv_path, 'bin', 'ansible-inventory')
if os.path.exists(venv_exe):
return venv_exe
elif os.path.exists(
os.path.join(self.venv_path, 'bin', 'ansible')
):
# if bin/ansible exists but bin/ansible-inventory doesn't, it's
# probably a really old version of ansible that doesn't support
# ansible-inventory
raise RuntimeError(
"{} does not exist (please upgrade to ansible >= 2.4)".format(
venv_exe
)
)
return shutil.which('ansible-inventory')
def get_base_args(self):
# get ansible-inventory absolute path for running in bubblewrap/proot, in Popen
for path in os.environ["PATH"].split(os.pathsep):
potential_path = os.path.join(path.strip('"'), 'ansible-inventory')
if os.path.isfile(potential_path) and os.access(potential_path, os.X_OK):
logger.debug('Using system install of ansible-inventory CLI: {}'.format(potential_path))
return [potential_path, '-i', self.source]
# Stopgap solution for group_vars, do not use backported module for official
# vendored cloud modules or custom scripts TODO: remove after Ansible 2.3 deprecation
if self.is_vendored_source or self.is_custom:
self.method = 'inventory script invocation'
return [self.source]
# ansible-inventory was not found, look for backported module TODO: remove after Ansible 2.3 deprecation
abs_module_path = os.path.abspath(os.path.join(
os.path.dirname(__file__), '..', '..', '..', 'plugins',
'ansible_inventory', 'backport.py'))
self.method = 'ansible-inventory backport'
if not os.path.exists(abs_module_path):
raise ImproperlyConfigured('Cannot find inventory module')
logger.debug('Using backported ansible-inventory module: {}'.format(abs_module_path))
return [abs_module_path, '-i', self.source]
ansible_inventory_path = self.get_path_to_ansible_inventory()
# NOTE: why do we add "python" to the start of these args?
# the script that runs ansible-inventory specifies a python interpreter
# that makes no sense in light of the fact that we put all the dependencies
# inside of /venv/ansible, so we override the specified interpreter
# https://github.com/ansible/ansible/issues/50714
bargs = ['python', ansible_inventory_path, '-i', self.source]
ansible_version = _get_ansible_version(ansible_inventory_path[:-len('-inventory')])
if ansible_version != 'unknown' and Version(ansible_version) >= Version('2.5'):
bargs.extend(['--playbook-dir', self.source_dir])
logger.debug('Using base command: {}'.format(' '.join(bargs)))
return bargs
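Putting the new get_base_args() together: it always prefixes the interpreter (see the NOTE above) and only adds --playbook-dir on Ansible >= 2.5; --list is appended later by load(). The assembled command is roughly (paths illustrative):

    # python /venv/ansible/bin/ansible-inventory -i /path/to/source --playbook-dir /path/to
    # ...which load() then invokes with --list appended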
def get_proot_args(self, cmd, env):
cwd = os.getcwd()
@@ -142,6 +163,9 @@ class AnsibleInventoryLoader(object):
kwargs['proot_show_paths'] = [functioning_dir(self.source)]
logger.debug("Running from `{}` working directory.".format(cwd))
if self.venv_path != settings.ANSIBLE_VENV_PATH:
kwargs['proot_custom_virtualenv'] = self.venv_path
return wrap_args_with_proot(cmd, cwd, **kwargs)
def command_to_json(self, cmd):
@@ -155,6 +179,8 @@ class AnsibleInventoryLoader(object):
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
stdout, stderr = proc.communicate()
stdout = smart_text(stdout)
stderr = smart_text(stderr)
if self.tmp_private_dir:
shutil.rmtree(self.tmp_private_dir, True)
@@ -177,80 +203,7 @@ class AnsibleInventoryLoader(object):
base_args = self.get_base_args()
logger.info('Reading Ansible inventory source: %s', self.source)
data = self.command_to_json(base_args + ['--list'])
# TODO: remove after we run custom scripts through ansible-inventory
if self.is_custom and '_meta' not in data or 'hostvars' not in data['_meta']:
# Invoke the executable once for each host name we've built up
# to set their variables
data.setdefault('_meta', {})
data['_meta'].setdefault('hostvars', {})
logger.warning('Re-calling script for hostvars individually.')
for group_name, group_data in data.iteritems():
if group_name == '_meta':
continue
if isinstance(group_data, dict):
group_host_list = group_data.get('hosts', [])
elif isinstance(group_data, list):
group_host_list = group_data
else:
logger.warning('Group data for "%s" is not a dict or list',
group_name)
group_host_list = []
for hostname in group_host_list:
logger.debug('Obtaining hostvars for %s' % hostname.encode('utf-8'))
hostdata = self.command_to_json(
base_args + ['--host', hostname.encode("utf-8")]
)
if isinstance(hostdata, dict):
data['_meta']['hostvars'][hostname] = hostdata
else:
logger.warning(
'Expected dict of vars for host "%s" when '
'calling with `--host`, got %s instead',
k, str(type(data))
)
logger.info('Processing JSON output...')
inventory = MemInventory(
group_filter_re=self.group_filter_re, host_filter_re=self.host_filter_re)
inventory = dict_to_mem_data(data, inventory=inventory)
return inventory
def load_inventory_source(source, group_filter_re=None,
host_filter_re=None, exclude_empty_groups=False,
is_custom=False):
'''
Load inventory from given source directory or file.
'''
# Sanity check: We sanitize these module names for our API but Ansible proper doesn't follow
# good naming conventions
source = source.replace('rhv.py', 'ovirt4.py')
source = source.replace('satellite6.py', 'foreman.py')
source = source.replace('vmware.py', 'vmware_inventory.py')
if not os.path.exists(source):
raise IOError('Source does not exist: %s' % source)
source = os.path.join(os.getcwd(), os.path.dirname(source),
os.path.basename(source))
source = os.path.normpath(os.path.abspath(source))
inventory = AnsibleInventoryLoader(
source=source,
group_filter_re=group_filter_re,
host_filter_re=host_filter_re,
is_custom=is_custom).load()
logger.debug('Finished loading from source: %s', source)
# Exclude groups that are completely empty.
if exclude_empty_groups:
inventory.delete_empty_groups()
logger.info('Loaded %d groups, %d hosts', len(inventory.all_group.all_groups),
len(inventory.all_group.all_hosts))
return inventory.all_group
return self.command_to_json(base_args + ['--list'])
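With the legacy script and backport handling removed, load() now returns the parsed output of ansible-inventory --list as-is. That JSON has a well-known shape, roughly:

    # {
    #   "_meta": {"hostvars": {"host1": {"ansible_host": "10.0.0.1"}}},
    #   "all":   {"children": ["ungrouped", "web"]},
    #   "web":   {"hosts": ["host1"]}
    # }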
class Command(BaseCommand):
@@ -268,6 +221,8 @@ class Command(BaseCommand):
parser.add_argument('--inventory-id', dest='inventory_id', type=int,
default=None, metavar='i',
help='id of inventory to sync')
parser.add_argument('--venv', dest='venv', type=str, default=None,
help='absolute path to the AWX custom virtualenv to use')
parser.add_argument('--overwrite', dest='overwrite', action='store_true', default=False,
help='overwrite the destination hosts and groups')
parser.add_argument('--overwrite-vars', dest='overwrite_vars',
@@ -347,7 +302,7 @@ class Command(BaseCommand):
if enabled is not default:
enabled_value = getattr(self, 'enabled_value', None)
if enabled_value is not None:
enabled = bool(unicode(enabled_value) == unicode(enabled))
enabled = bool(str(enabled_value) == str(enabled))
else:
enabled = bool(enabled)
if enabled is default:
@@ -357,6 +312,14 @@ class Command(BaseCommand):
else:
raise NotImplementedError('Value of enabled {} not understood.'.format(enabled))
def get_source_absolute_path(self, source):
if not os.path.exists(source):
raise IOError('Source does not exist: %s' % source)
source = os.path.join(os.getcwd(), os.path.dirname(source),
os.path.basename(source))
source = os.path.normpath(os.path.abspath(source))
return source
def load_inventory_from_database(self):
'''
Load inventory and related objects from the database.
@@ -369,9 +332,9 @@ class Command(BaseCommand):
try:
self.inventory = Inventory.objects.get(**q)
except Inventory.DoesNotExist:
raise CommandError('Inventory with %s = %s cannot be found' % q.items()[0])
raise CommandError('Inventory with %s = %s cannot be found' % list(q.items())[0])
except Inventory.MultipleObjectsReturned:
raise CommandError('Inventory with %s = %s returned multiple results' % q.items()[0])
raise CommandError('Inventory with %s = %s returned multiple results' % list(q.items())[0])
logger.info('Updating inventory %d: %s' % (self.inventory.pk,
self.inventory.name))
@@ -456,6 +419,16 @@ class Command(BaseCommand):
mem_host.instance_id = instance_id
self.mem_instance_id_map[instance_id] = mem_host.name
def _existing_host_pks(self):
'''Returns the cached set of existing / previous host primary key values.
This is the starting set, captured before any deletions or other
modifications made over the course of this import.
'''
if not hasattr(self, '_cached_host_pk_set'):
self._cached_host_pk_set = frozenset(
self.inventory_source.hosts.values_list('pk', flat=True))
return self._cached_host_pk_set
def _delete_hosts(self):
'''
For each host in the database that is NOT in the local list, delete
@@ -467,11 +440,11 @@ class Command(BaseCommand):
queries_before = len(connection.queries)
hosts_qs = self.inventory_source.hosts
# Build list of all host pks, remove all that should not be deleted.
del_host_pks = set(hosts_qs.values_list('pk', flat=True))
del_host_pks = set(self._existing_host_pks()) # makes mutable copy
if self.instance_id_var:
all_instance_ids = self.mem_instance_id_map.keys()
all_instance_ids = list(self.mem_instance_id_map.keys())
instance_ids = []
for offset in xrange(0, len(all_instance_ids), self._batch_size):
for offset in range(0, len(all_instance_ids), self._batch_size):
instance_ids = all_instance_ids[offset:(offset + self._batch_size)]
for host_pk in hosts_qs.filter(instance_id__in=instance_ids).values_list('pk', flat=True):
del_host_pks.discard(host_pk)
@@ -479,14 +452,14 @@ class Command(BaseCommand):
del_host_pks.discard(host_pk)
all_host_names = list(set(self.mem_instance_id_map.values()) - set(self.all_group.all_hosts.keys()))
else:
all_host_names = self.all_group.all_hosts.keys()
for offset in xrange(0, len(all_host_names), self._batch_size):
all_host_names = list(self.all_group.all_hosts.keys())
for offset in range(0, len(all_host_names), self._batch_size):
host_names = all_host_names[offset:(offset + self._batch_size)]
for host_pk in hosts_qs.filter(name__in=host_names).values_list('pk', flat=True):
del_host_pks.discard(host_pk)
# Now delete all remaining hosts in batches.
all_del_pks = sorted(list(del_host_pks))
for offset in xrange(0, len(all_del_pks), self._batch_size):
for offset in range(0, len(all_del_pks), self._batch_size):
del_pks = all_del_pks[offset:(offset + self._batch_size)]
for host in hosts_qs.filter(pk__in=del_pks):
host_name = host.name
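This batching idiom recurs throughout the import command (most of the xrange-to-range edits in this file land on it): a large pk list is sliced into _batch_size chunks so no single queryset filter grows unbounded. Reduced to its shape:

    all_del_pks = sorted(del_host_pks)                    # e.g. [3, 8, 11, ...]
    for offset in range(0, len(all_del_pks), batch_size):
        chunk = all_del_pks[offset:offset + batch_size]
        # filtering with pk__in=chunk keeps each SQL IN clause bounded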
@@ -509,8 +482,8 @@ class Command(BaseCommand):
groups_qs = self.inventory_source.groups.all()
# Build list of all group pks, remove those that should not be deleted.
del_group_pks = set(groups_qs.values_list('pk', flat=True))
all_group_names = self.all_group.all_groups.keys()
for offset in xrange(0, len(all_group_names), self._batch_size):
all_group_names = list(self.all_group.all_groups.keys())
for offset in range(0, len(all_group_names), self._batch_size):
group_names = all_group_names[offset:(offset + self._batch_size)]
for group_pk in groups_qs.filter(name__in=group_names).values_list('pk', flat=True):
del_group_pks.discard(group_pk)
@@ -522,7 +495,7 @@ class Command(BaseCommand):
del_group_pks.discard(self.inventory_source.deprecated_group_id)
# Now delete all remaining groups in batches.
all_del_pks = sorted(list(del_group_pks))
for offset in xrange(0, len(all_del_pks), self._batch_size):
for offset in range(0, len(all_del_pks), self._batch_size):
del_pks = all_del_pks[offset:(offset + self._batch_size)]
for group in groups_qs.filter(pk__in=del_pks):
group_name = group.name
@@ -547,6 +520,10 @@ class Command(BaseCommand):
group_group_count = 0
group_host_count = 0
db_groups = self.inventory_source.groups
# Set of all group names managed by this inventory source
all_source_group_names = frozenset(self.all_group.all_groups.keys())
# Set of all host pks managed by this inventory source
all_source_host_pks = self._existing_host_pks()
for db_group in db_groups.all():
if self.inventory_source.deprecated_group_id == db_group.id: # TODO: remove in 3.3
logger.debug(
@@ -557,11 +534,20 @@ class Command(BaseCommand):
# Delete child group relationships not present in imported data.
db_children = db_group.children
db_children_name_pk_map = dict(db_children.values_list('name', 'pk'))
# Exclude child groups from removal list if they were returned by
# the import, because this parent-child relationship has not changed
mem_children = self.all_group.all_groups[db_group.name].children
for mem_group in mem_children:
db_children_name_pk_map.pop(mem_group.name, None)
# Exclude child groups from removal list if they were not imported
# by this specific inventory source, because
# those relationships are outside of the dominion of this inventory source
other_source_group_names = set(db_children_name_pk_map.keys()) - all_source_group_names
for group_name in other_source_group_names:
db_children_name_pk_map.pop(group_name, None)
# Removal list is complete - now perform the removals
del_child_group_pks = list(set(db_children_name_pk_map.values()))
for offset in xrange(0, len(del_child_group_pks), self._batch_size):
for offset in range(0, len(del_child_group_pks), self._batch_size):
child_group_pks = del_child_group_pks[offset:(offset + self._batch_size)]
for db_child in db_children.filter(pk__in=child_group_pks):
group_group_count += 1
@@ -572,22 +558,29 @@ class Command(BaseCommand):
# Delete group/host relationships not present in imported data.
db_hosts = db_group.hosts
del_host_pks = set(db_hosts.values_list('pk', flat=True))
# Exclude child hosts from removal list if they were not imported
# by this specific inventory source, because
# those relationships are outside of the dominion of this inventory source
del_host_pks = del_host_pks & all_source_host_pks
# Exclude child hosts from removal list if they were returned by
# the import, because this group-host relationship has not changed
mem_hosts = self.all_group.all_groups[db_group.name].hosts
all_mem_host_names = [h.name for h in mem_hosts if not h.instance_id]
for offset in xrange(0, len(all_mem_host_names), self._batch_size):
for offset in range(0, len(all_mem_host_names), self._batch_size):
mem_host_names = all_mem_host_names[offset:(offset + self._batch_size)]
for db_host_pk in db_hosts.filter(name__in=mem_host_names).values_list('pk', flat=True):
del_host_pks.discard(db_host_pk)
all_mem_instance_ids = [h.instance_id for h in mem_hosts if h.instance_id]
for offset in xrange(0, len(all_mem_instance_ids), self._batch_size):
for offset in range(0, len(all_mem_instance_ids), self._batch_size):
mem_instance_ids = all_mem_instance_ids[offset:(offset + self._batch_size)]
for db_host_pk in db_hosts.filter(instance_id__in=mem_instance_ids).values_list('pk', flat=True):
del_host_pks.discard(db_host_pk)
all_db_host_pks = [v for k,v in self.db_instance_id_map.items() if k in all_mem_instance_ids]
for db_host_pk in all_db_host_pks:
del_host_pks.discard(db_host_pk)
# Removal list is complete - now perform the removals
del_host_pks = list(del_host_pks)
for offset in xrange(0, len(del_host_pks), self._batch_size):
for offset in range(0, len(del_host_pks), self._batch_size):
del_pks = del_host_pks[offset:(offset + self._batch_size)]
for db_host in db_hosts.filter(pk__in=del_pks):
group_host_count += 1
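The host side follows the same shape: intersect with the hosts this source manages, then spare anything the import matched by name or instance_id. A toy version (all pks assumed):

db_host_pks = {10, 11, 12, 13}      # group's hosts in the DB
source_host_pks = {10, 11, 12}      # hosts ever imported by this source
imported_name_pks = {10}            # matched back by name
imported_iid_pks = {11}             # matched back by instance_id
del_pks = db_host_pks & source_host_pks
del_pks -= imported_name_pks
del_pks -= imported_iid_pks
print(del_pks)   # {12}: pk 13 is unmanaged, 10 and 11 were re-imported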
@@ -635,7 +628,7 @@ class Command(BaseCommand):
if len(v.parents) == 1 and v.parents[0].name == 'all':
root_group_names.add(k)
existing_group_names = set()
for offset in xrange(0, len(all_group_names), self._batch_size):
for offset in range(0, len(all_group_names), self._batch_size):
group_names = all_group_names[offset:(offset + self._batch_size)]
for group in self.inventory.groups.filter(name__in=group_names):
mem_group = self.all_group.all_groups[group.name]
@@ -739,7 +732,7 @@ class Command(BaseCommand):
mem_host_instance_id_map = {}
mem_host_name_map = {}
mem_host_names_to_update = set(self.all_group.all_hosts.keys())
for k,v in self.all_group.all_hosts.iteritems():
for k,v in self.all_group.all_hosts.items():
mem_host_name_map[k] = v
instance_id = self._get_instance_id(v.variables)
if instance_id in self.db_instance_id_map:
@@ -749,7 +742,7 @@ class Command(BaseCommand):
# Update all existing hosts where we know the PK based on instance_id.
all_host_pks = sorted(mem_host_pk_map.keys())
for offset in xrange(0, len(all_host_pks), self._batch_size):
for offset in range(0, len(all_host_pks), self._batch_size):
host_pks = all_host_pks[offset:(offset + self._batch_size)]
for db_host in self.inventory.hosts.filter( pk__in=host_pks):
if db_host.pk in host_pks_updated:
@@ -761,7 +754,7 @@ class Command(BaseCommand):
# Update all existing hosts where we know the instance_id.
all_instance_ids = sorted(mem_host_instance_id_map.keys())
for offset in xrange(0, len(all_instance_ids), self._batch_size):
for offset in range(0, len(all_instance_ids), self._batch_size):
instance_ids = all_instance_ids[offset:(offset + self._batch_size)]
for db_host in self.inventory.hosts.filter( instance_id__in=instance_ids):
if db_host.pk in host_pks_updated:
@@ -773,7 +766,7 @@ class Command(BaseCommand):
# Update all existing hosts by name.
all_host_names = sorted(mem_host_name_map.keys())
for offset in xrange(0, len(all_host_names), self._batch_size):
for offset in range(0, len(all_host_names), self._batch_size):
host_names = all_host_names[offset:(offset + self._batch_size)]
for db_host in self.inventory.hosts.filter( name__in=host_names):
if db_host.pk in host_pks_updated:
@@ -815,15 +808,15 @@ class Command(BaseCommand):
'''
if settings.SQL_DEBUG:
queries_before = len(connection.queries)
all_group_names = sorted([k for k,v in self.all_group.all_groups.iteritems() if v.children])
all_group_names = sorted([k for k,v in self.all_group.all_groups.items() if v.children])
group_group_count = 0
for offset in xrange(0, len(all_group_names), self._batch_size):
for offset in range(0, len(all_group_names), self._batch_size):
group_names = all_group_names[offset:(offset + self._batch_size)]
for db_group in self.inventory.groups.filter(name__in=group_names):
mem_group = self.all_group.all_groups[db_group.name]
group_group_count += len(mem_group.children)
all_child_names = sorted([g.name for g in mem_group.children])
for offset2 in xrange(0, len(all_child_names), self._batch_size):
for offset2 in range(0, len(all_child_names), self._batch_size):
child_names = all_child_names[offset2:(offset2 + self._batch_size)]
db_children_qs = self.inventory.groups.filter(name__in=child_names)
for db_child in db_children_qs.filter(children__id=db_group.id):
@@ -842,15 +835,15 @@ class Command(BaseCommand):
# belongs.
if settings.SQL_DEBUG:
queries_before = len(connection.queries)
all_group_names = sorted([k for k,v in self.all_group.all_groups.iteritems() if v.hosts])
all_group_names = sorted([k for k,v in self.all_group.all_groups.items() if v.hosts])
group_host_count = 0
for offset in xrange(0, len(all_group_names), self._batch_size):
for offset in range(0, len(all_group_names), self._batch_size):
group_names = all_group_names[offset:(offset + self._batch_size)]
for db_group in self.inventory.groups.filter(name__in=group_names):
mem_group = self.all_group.all_groups[db_group.name]
group_host_count += len(mem_group.hosts)
all_host_names = sorted([h.name for h in mem_group.hosts if not h.instance_id])
for offset2 in xrange(0, len(all_host_names), self._batch_size):
for offset2 in range(0, len(all_host_names), self._batch_size):
host_names = all_host_names[offset2:(offset2 + self._batch_size)]
db_hosts_qs = self.inventory.hosts.filter(name__in=host_names)
for db_host in db_hosts_qs.filter(groups__id=db_group.id):
@@ -859,7 +852,7 @@ class Command(BaseCommand):
self._batch_add_m2m(db_group.hosts, db_host)
logger.debug('Host "%s" added to group "%s"', db_host.name, db_group.name)
all_instance_ids = sorted([h.instance_id for h in mem_group.hosts if h.instance_id])
for offset2 in xrange(0, len(all_instance_ids), self._batch_size):
for offset2 in range(0, len(all_instance_ids), self._batch_size):
instance_ids = all_instance_ids[offset2:(offset2 + self._batch_size)]
db_hosts_qs = self.inventory.hosts.filter(instance_id__in=instance_ids)
for db_host in db_hosts_qs.filter(groups__id=db_group.id):
@@ -892,12 +885,24 @@ class Command(BaseCommand):
self._create_update_group_children()
self._create_update_group_hosts()
def remote_tower_license_compare(self, local_license_type):
# this requires https://github.com/ansible/ansible/pull/52747
source_vars = self.all_group.variables
remote_license_type = source_vars.get('tower_metadata', {}).get('license_type', None)
if remote_license_type is None:
raise CommandError('Unexpected Error: Tower inventory plugin missing needed metadata!')
if local_license_type != remote_license_type:
raise CommandError('Tower server licenses must match: source: {} local: {}'.format(
remote_license_type, local_license_type
))
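A minimal illustration of the new comparison with made-up metadata; a missing key or a mismatch is what raises CommandError above:

source_vars = {'tower_metadata': {'license_type': 'enterprise'}}
remote_license_type = source_vars.get('tower_metadata', {}).get('license_type', None)
print(remote_license_type is None)           # True would be the "missing metadata" error
print(remote_license_type == 'enterprise')   # False would be the mismatch error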
def check_license(self):
license_info = get_licenser().validate()
local_license_type = license_info.get('license_type', 'UNLICENSED')
if license_info.get('license_key', 'UNLICENSED') == 'UNLICENSED':
logger.error(LICENSE_NON_EXISTANT_MESSAGE)
raise CommandError('No license found!')
elif license_info.get('license_type', 'UNLICENSED') == 'open':
elif local_license_type == 'open':
return
available_instances = license_info.get('available_instances', 0)
free_instances = license_info.get('free_instances', 0)
@@ -906,6 +911,13 @@ class Command(BaseCommand):
if time_remaining <= 0 and not license_info.get('demo', False):
logger.error(LICENSE_EXPIRED_MESSAGE)
raise CommandError("License has expired!")
# special check for tower-type inventory sources
# but only if running the plugin
TOWER_SOURCE_FILES = ['tower.yml', 'tower.yaml']
if self.inventory_source.source == 'tower' and any(f in self.source for f in TOWER_SOURCE_FILES):
# only if this is the 2nd call to license check, we cannot compare before running plugin
if hasattr(self, 'all_group'):
self.remote_tower_license_compare(local_license_type)
if free_instances < 0:
d = {
'new_count': new_count,
@@ -917,15 +929,33 @@ class Command(BaseCommand):
logger.error(LICENSE_MESSAGE % d)
raise CommandError('License count exceeded!')
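The new tower-source branch only fires when the source file looks like a Tower plugin config; a quick runnable check of that guard (the path is made up):

TOWER_SOURCE_FILES = ['tower.yml', 'tower.yaml']
source = '/projects/demo/tower.yml'   # assumed path for the sketch
print(any(f in source for f in TOWER_SOURCE_FILES))   # True -> take the plugin path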
def check_org_host_limit(self):
license_info = get_licenser().validate()
if license_info.get('license_type', 'UNLICENSED') == 'open':
return
org = self.inventory.organization
if org is None or org.max_hosts == 0:
return
active_count = Host.objects.org_active_count(org.id)
if active_count > org.max_hosts:
raise CommandError('Host limit for organization exceeded!')
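A toy version of the limit predicate (counts assumed); org.max_hosts == 0 means unlimited, so only a positive cap can trip the error:

def over_limit(active_count, max_hosts):
    return max_hosts != 0 and active_count > max_hosts

print(over_limit(25, 0))    # False: no cap configured
print(over_limit(25, 10))   # True: would raise CommandError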
def mark_license_failure(self, save=True):
self.inventory_update.license_error = True
self.inventory_update.save(update_fields=['license_error'])
def mark_org_limits_failure(self, save=True):
self.inventory_update.org_host_limit_error = True
self.inventory_update.save(update_fields=['org_host_limit_error'])
def handle(self, *args, **options):
self.verbosity = int(options.get('verbosity', 1))
self.set_logging_level()
self.inventory_name = options.get('inventory_name', None)
self.inventory_id = options.get('inventory_id', None)
venv_path = options.get('venv', None)
self.overwrite = bool(options.get('overwrite', False))
self.overwrite_vars = bool(options.get('overwrite_vars', False))
self.keep_vars = bool(options.get('keep_vars', False))
@@ -973,6 +1003,13 @@ class Command(BaseCommand):
self.mark_license_failure(save=True)
raise e
try:
# Check the per-org host limits
self.check_org_host_limit()
except CommandError as e:
self.mark_org_limits_failure(save=True)
raise e
status, tb, exc = 'error', '', None
try:
if settings.SQL_DEBUG:
@@ -986,12 +1023,26 @@ class Command(BaseCommand):
self.inventory_update.status = 'running'
self.inventory_update.save()
# Load inventory from source.
self.all_group = load_inventory_source(self.source,
self.group_filter_re,
self.host_filter_re,
self.exclude_empty_groups,
self.is_custom)
source = self.get_source_absolute_path(self.source)
data = AnsibleInventoryLoader(source=source, is_custom=self.is_custom, venv_path=venv_path).load()
logger.debug('Finished loading from source: %s', source)
logger.info('Processing JSON output...')
inventory = MemInventory(
group_filter_re=self.group_filter_re, host_filter_re=self.host_filter_re)
inventory = dict_to_mem_data(data, inventory=inventory)
del data # forget dict from import, could be large
logger.info('Loaded %d groups, %d hosts', len(inventory.all_group.all_groups),
len(inventory.all_group.all_hosts))
if self.exclude_empty_groups:
inventory.delete_empty_groups()
self.all_group = inventory.all_group
if settings.DEBUG:
# depending on inventory source, this output can be
# *exceedingly* verbose - crawling a deeply nested
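The new load path replaces load_inventory_source with an explicit loader -> dict -> MemInventory pipeline. A self-contained sketch of the dict-to-counts step, assuming the data follows the ansible-inventory JSON shape:

data = {'all': {'children': ['web']},
        'web': {'hosts': ['h1', 'h2']},
        '_meta': {'hostvars': {'h1': {}, 'h2': {}}}}
groups = {name: g for name, g in data.items() if name != '_meta'}
hosts = {h for g in groups.values() for h in g.get('hosts', [])}
print('Loaded %d groups, %d hosts' % (len(groups), len(hosts)))
del data   # forget the import dict, it could be large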
@@ -1030,9 +1081,17 @@ class Command(BaseCommand):
# If the license is not valid, a CommandError will be thrown,
# and inventory update will be marked as invalid.
# with transaction.atomic() will roll back the changes.
license_fail = True
self.check_license()
# Check the per-org host limits
license_fail = False
self.check_org_host_limit()
except CommandError as e:
self.mark_license_failure()
if license_fail:
self.mark_license_failure()
else:
self.mark_org_limits_failure()
raise e
if settings.SQL_DEBUG:
@@ -1060,9 +1119,8 @@ class Command(BaseCommand):
else:
tb = traceback.format_exc()
exc = e
transaction.rollback()
if self.invoked_from_dispatcher is False:
if not self.invoked_from_dispatcher:
with ignore_inventory_computed_fields():
self.inventory_update = InventoryUpdate.objects.get(pk=self.inventory_update.pk)
self.inventory_update.result_traceback = tb
@@ -1071,7 +1129,10 @@ class Command(BaseCommand):
self.inventory_source.status = status
self.inventory_source.save(update_fields=['status'])
if exc and isinstance(exc, CommandError):
sys.exit(1)
elif exc:
raise
if exc:
logger.error(str(exc))
if exc:
if isinstance(exc, CommandError):
sys.exit(1)
raise exc


@@ -3,7 +3,6 @@
from awx.main.models import Instance, InstanceGroup
from django.core.management.base import BaseCommand
import six
class Ungrouped(object):
@@ -42,7 +41,7 @@ class Command(BaseCommand):
fmt += ' policy>={0.policy_instance_minimum}'
if instance_group.controller:
fmt += ' controller={0.controller.name}'
print(six.text_type(fmt + ']').format(instance_group))
print((fmt + ']').format(instance_group))
for x in instance_group.instances.all():
color = '\033[92m'
if x.capacity == 0 or x.enabled is False:
@@ -52,5 +51,5 @@ class Command(BaseCommand):
fmt += ' last_isolated_check="{0.last_isolated_check:%Y-%m-%d %H:%M:%S}"'
if x.capacity:
fmt += ' heartbeat="{0.modified:%Y-%m-%d %H:%M:%S}"'
print(six.text_type(fmt + '\033[0m').format(x, x.version or '?'))
print((fmt + '\033[0m').format(x, x.version or '?'))
print('')


@@ -0,0 +1,21 @@
from django.core.management.base import BaseCommand
from awx.main.tasks import profile_sql
class Command(BaseCommand):
"""
Enable or disable SQL Profiling across all Python processes.
SQL profile data will be recorded at /var/log/tower/profile
"""
def add_arguments(self, parser):
parser.add_argument('--threshold', dest='threshold', type=float, default=2.0,
help='The minimum query duration in seconds (default=2). Use 0 to disable.')
parser.add_argument('--minutes', dest='minutes', type=float, default=5,
help='How long to record for in minutes (default=5)')
def handle(self, **options):
profile_sql.delay(
threshold=options['threshold'], minutes=options['minutes']
)
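Assuming this file is installed as the profile_sql management command, a typical invocation would be `awx-manage profile_sql --threshold 1.5 --minutes 10`, and `--threshold 0` turns profiling back off.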


@@ -1,17 +0,0 @@
# Copyright (c) 2017 Ansible by Red Hat
# All Rights Reserved
# Borrow from another AWX command
from awx.main.management.commands.provision_instance import Command as OtherCommand
# Python
import warnings
class Command(OtherCommand):
def handle(self, *args, **options):
# TODO: delete this entire file in 3.3
warnings.warn('This command is replaced with `provision_instance` and will '
'be removed in release 3.3.')
return super(Command, self).handle(*args, **options)


@@ -1,7 +1,6 @@
# Copyright (c) 2017 Ansible Tower by Red Hat
# All Rights Reserved.
import sys
import six
from awx.main.utils.pglock import advisory_lock
from awx.main.models import Instance, InstanceGroup
@@ -19,11 +18,11 @@ class InstanceNotFound(Exception):
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--queuename', dest='queuename', type=lambda s: six.text_type(s, 'utf8'),
parser.add_argument('--queuename', dest='queuename', type=str,
help='Queue to create/update')
parser.add_argument('--hostnames', dest='hostnames', type=lambda s: six.text_type(s, 'utf8'),
parser.add_argument('--hostnames', dest='hostnames', type=str,
help='Comma-Delimited Hosts to add to the Queue (will not remove already assigned instances)')
parser.add_argument('--controller', dest='controller', type=lambda s: six.text_type(s, 'utf8'),
parser.add_argument('--controller', dest='controller', type=str,
default='', help='The controlling group (makes this an isolated group)')
parser.add_argument('--instance_percent', dest='instance_percent', type=int, default=0,
help='The percentage of active instances that will be assigned to this group'),
@@ -73,7 +72,7 @@ class Command(BaseCommand):
if instance.exists():
instances.append(instance[0])
else:
raise InstanceNotFound(six.text_type("Instance does not exist: {}").format(inst_name), changed)
raise InstanceNotFound("Instance does not exist: {}".format(inst_name), changed)
ig.instances.add(*instances)
@@ -99,24 +98,24 @@ class Command(BaseCommand):
if options.get('hostnames'):
hostname_list = options.get('hostnames').split(",")
with advisory_lock(six.text_type('instance_group_registration_{}').format(queuename)):
with advisory_lock('instance_group_registration_{}'.format(queuename)):
changed2 = False
changed3 = False
(ig, created, changed1) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
if created:
print(six.text_type("Creating instance group {}".format(ig.name)))
print("Creating instance group {}".format(ig.name))
elif not created:
print(six.text_type("Instance Group already registered {}").format(ig.name))
print("Instance Group already registered {}".format(ig.name))
if ctrl:
(ig_ctrl, changed2) = self.update_instance_group_controller(ig, ctrl)
if changed2:
print(six.text_type("Set controller group {} on {}.").format(ctrl, queuename))
print("Set controller group {} on {}.".format(ctrl, queuename))
try:
(instances, changed3) = self.add_instances_to_group(ig, hostname_list)
for i in instances:
print(six.text_type("Added instance {} to {}").format(i.hostname, ig.name))
print("Added instance {} to {}".format(i.hostname, ig.name))
except InstanceNotFound as e:
instance_not_found_err = e
@@ -126,4 +125,3 @@ class Command(BaseCommand):
if instance_not_found_err:
print(instance_not_found_err.message)
sys.exit(1)
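Assuming this file is registered as a management command named register_queue, an invocation could look like `awx-manage register_queue --queuename=thequeue --hostnames=host1,host2 --controller=tower` (the controller flag is what makes the group isolated).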


@@ -4,6 +4,7 @@
import sys
import time
import json
import random
from django.utils import timezone
from django.core.management.base import BaseCommand
@@ -26,7 +27,21 @@ from awx.api.serializers import (
)
class ReplayJobEvents():
class JobStatusLifeCycle():
def emit_job_status(self, job, status):
# {"status": "successful", "project_id": 13, "unified_job_id": 659, "group_name": "jobs"}
job.websocket_emit_status(status)
def determine_job_event_finish_status_index(self, job_event_count, random_seed):
if random_seed == 0:
return job_event_count - 1
random.seed(random_seed)
job_event_index = random.randint(0, job_event_count - 1)
return job_event_index
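The seeded choice above is deterministic for a given seed; a standalone sketch:

import random

def finish_index(job_event_count, random_seed):
    if random_seed == 0:
        return job_event_count - 1      # default: emit final status on the last event
    random.seed(random_seed)
    return random.randint(0, job_event_count - 1)

print(finish_index(10, 0))                          # 9
print(finish_index(10, 7) == finish_index(10, 7))   # True: repeatable per seed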
class ReplayJobEvents(JobStatusLifeCycle):
recording_start = None
replay_start = None
@@ -76,9 +91,10 @@ class ReplayJobEvents():
job_events = job.inventory_update_events.order_by('created')
elif type(job) is SystemJob:
job_events = job.system_job_events.order_by('created')
if job_events.count() == 0:
count = job_events.count()
if count == 0:
raise RuntimeError("No events for job id {}".format(job.id))
return job_events
return job_events, count
def get_serializer(self, job):
if type(job) is Job:
@@ -95,7 +111,7 @@ class ReplayJobEvents():
raise RuntimeError("Job is of type {} and replay is not yet supported.".format(type(job)))
sys.exit(1)
def run(self, job_id, speed=1.0, verbosity=0, skip_range=[]):
def run(self, job_id, speed=1.0, verbosity=0, skip_range=[], random_seed=0, final_status_delay=0, debug=False):
stats = {
'events_ontime': {
'total': 0,
@@ -119,17 +135,27 @@ class ReplayJobEvents():
}
try:
job = self.get_job(job_id)
job_events = self.get_job_events(job)
job_events, job_event_count = self.get_job_events(job)
serializer = self.get_serializer(job)
except RuntimeError as e:
print("{}".format(e.message))
sys.exit(1)
je_previous = None
self.emit_job_status(job, 'pending')
self.emit_job_status(job, 'waiting')
self.emit_job_status(job, 'running')
finish_status_index = self.determine_job_event_finish_status_index(job_event_count, random_seed)
for n, je_current in enumerate(job_events):
if je_current.counter in skip_range:
continue
if debug:
input("{} of {}:".format(n, job_event_count))
if not je_previous:
stats['recording_start'] = je_current.created
self.start(je_current.created)
@@ -146,7 +172,7 @@ class ReplayJobEvents():
print("recording: next job in {} seconds".format(recording_diff))
if replay_offset >= 0:
replay_diff = recording_diff - replay_offset
if replay_diff > 0:
stats['events_ontime']['total'] += 1
if verbosity >= 3:
@@ -167,6 +193,11 @@ class ReplayJobEvents():
stats['events_total'] += 1
je_previous = je_current
if n == finish_status_index:
if final_status_delay != 0:
self.sleep(final_status_delay)
self.emit_job_status(job, job.status)
if stats['events_total'] > 2:
stats['replay_end'] = self.now()
stats['replay_duration'] = (stats['replay_end'] - stats['replay_start']).total_seconds()
@@ -206,16 +237,26 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--job_id', dest='job_id', type=int, metavar='j',
help='Id of the job to replay (job or adhoc)')
parser.add_argument('--speed', dest='speed', type=int, metavar='s',
parser.add_argument('--speed', dest='speed', type=float, metavar='s',
help='Speedup factor.')
parser.add_argument('--skip-range', dest='skip_range', type=str, metavar='k',
default='0:-1:1', help='Range of events to skip')
parser.add_argument('--random-seed', dest='random_seed', type=int, metavar='r',
default=0, help='Random number generator seed to use when determining job_event index to emit final job status')
parser.add_argument('--final-status-delay', dest='final_status_delay', type=float, metavar='f',
default=0, help='Delay between event and final status emit')
parser.add_argument('--debug', dest='debug', type=bool, metavar='d',
default=False, help='Enable step mode to control emission of job events one at a time.')
def handle(self, *args, **options):
job_id = options.get('job_id')
speed = options.get('speed') or 1
verbosity = options.get('verbosity') or 0
random_seed = options.get('random_seed')
final_status_delay = options.get('final_status_delay')
debug = options.get('debug')
skip = self._parse_slice_range(options.get('skip_range'))
replayer = ReplayJobEvents()
replayer.run(job_id, speed, verbosity, skip)
replayer.run(job_id, speed=speed, verbosity=verbosity, skip_range=skip, random_seed=random_seed,
final_status_delay=final_status_delay, debug=debug)
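Assuming the command name is replay_job_events, a run that emits the final status at a seeded random event might look like `awx-manage replay_job_events --job_id 42 --speed 2 --random-seed 7 --final-status-delay 1.5` (job id made up).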


@@ -0,0 +1,37 @@
# Django
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
# AWX
from awx.main.models.oauth import OAuth2AccessToken
from oauth2_provider.models import RefreshToken
def revoke_tokens(token_list):
for token in token_list:
token.revoke()
print('revoked {} {}'.format(token.__class__.__name__, token.token))
class Command(BaseCommand):
"""Command that revokes OAuth2 access tokens."""
help='Revokes OAuth2 access tokens. Use --all to revoke access and refresh tokens.'
def add_arguments(self, parser):
parser.add_argument('--user', dest='user', type=str, help='revoke OAuth2 tokens for a specific username')
parser.add_argument('--all', dest='all', action='store_true', help='revoke OAuth2 access tokens and refresh tokens')
def handle(self, *args, **options):
if not options['user']:
if options['all']:
revoke_tokens(RefreshToken.objects.filter(revoked=None))
revoke_tokens(OAuth2AccessToken.objects.all())
else:
try:
user = User.objects.get(username=options['user'])
except ObjectDoesNotExist:
raise CommandError('A user with that username does not exist.')
if options['all']:
revoke_tokens(RefreshToken.objects.filter(revoked=None).filter(user=user))
revoke_tokens(user.main_oauth2accesstoken.filter(user=user))
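Assuming this file is installed as revoke_oauth2_tokens, `awx-manage revoke_oauth2_tokens --user alice` would revoke that user's access tokens, and adding `--all` also revokes refresh tokens (username made up).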


@@ -19,7 +19,7 @@ logger = logging.getLogger('awx.main.dispatch')
def construct_bcast_queue_name(common_name):
return common_name.encode('utf8') + '_' + settings.CLUSTER_HOST_ID
return common_name + '_' + settings.CLUSTER_HOST_ID
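The encode('utf8') call was dropped because concatenating bytes with str raises TypeError on py3; a toy equivalent (the host id is assumed):

def construct_bcast_queue_name(common_name, cluster_host_id='awx-1'):
    return common_name + '_' + cluster_host_id

print(construct_bcast_queue_name('tower_broadcast_all'))   # tower_broadcast_all_awx-1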
class Command(BaseCommand):
@@ -69,21 +69,42 @@ class Command(BaseCommand):
return TaskResult()
sched_file = '/var/lib/awx/beat.db'
app = Celery()
app.conf.BROKER_URL = settings.BROKER_URL
app.conf.CELERY_TASK_RESULT_EXPIRES = False
# celery in py3 seems to have a bug where the celerybeat schedule
# shelve can become corrupted; we've _only_ seen this in Ubuntu and py36
# it can be avoided by detecting and removing the corrupted file
# at some point, we'll just stop using celerybeat, because it's clearly
# buggy, too -_-
#
# https://github.com/celery/celery/issues/4777
sched = AWXScheduler(schedule_filename=sched_file, app=app)
try:
sched.setup_schedule()
except Exception:
logger.exception('{} is corrupted, removing.'.format(sched_file))
sched._remove_db()
finally:
try:
sched.close()
except Exception:
logger.exception('{} failed to sync/close'.format(sched_file))
beat.Beat(
30,
app,
schedule='/var/lib/awx/beat.db', scheduler_cls=AWXScheduler
schedule=sched_file, scheduler_cls=AWXScheduler
).run()
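A hedged sketch of the detect-and-remove recovery above, using the stdlib shelve module directly (the path and cleanup suffixes are assumed; AWXScheduler is AWX-internal):

import os
import shelve

sched_file = '/tmp/beat.db'   # assumed path for the sketch
try:
    with shelve.open(sched_file) as db:
        db.setdefault('entries', {})
except Exception:
    # a corrupted dbm backing file can raise on open; remove it and start fresh
    for suffix in ('', '.db', '.dat', '.dir', '.bak'):
        try:
            os.remove(sched_file + suffix)
        except OSError:
            pass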
def handle(self, *arg, **options):
if options.get('status'):
print Control('dispatcher').status()
print(Control('dispatcher').status())
return
if options.get('running'):
print Control('dispatcher').running()
print(Control('dispatcher').running())
return
if options.get('reload'):
return Control('dispatcher').control({'control': 'reload'})


@@ -28,7 +28,7 @@ class Command(BaseCommand):
args = [
'ansible', 'all', '-i', '{},'.format(hostname), '-u',
settings.AWX_ISOLATED_USERNAME, '-T5', '-m', 'shell',
'-a', 'awx-expect -h', '-vvv'
'-a', 'ansible-runner --version', '-vvv'
]
if all([
getattr(settings, 'AWX_ISOLATED_KEY_GENERATION', False) is True,


@@ -1,39 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
from django.contrib.auth.models import User
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
"""A command that reports whether a username exists within the
system or not.
"""
def handle(self, *args, **options):
"""Print out information about the user to the console."""
# Sanity check: There should be one and exactly one positional
# argument.
if len(args) != 1:
raise CommandError('This command requires one positional argument '
'(a username).')
# Get the user.
try:
username = args[0]
user = User.objects.get(username=username)
# Print a cute header.
header = 'Information for user: %s' % username
print('%s\n%s' % (header, '=' * len(header)))
# Print the email and real name of the user.
print('Email: %s' % user.email)
if user.first_name or user.last_name:
print('Name: %s %s' % (user.first_name, user.last_name))
else:
print('No name provided.')
except User.DoesNotExist:
raise CommandError('User %s does not exist.' % username)


@@ -29,6 +29,34 @@ class HostManager(models.Manager):
"""
return self.order_by().exclude(inventory_sources__source='tower').values('name').distinct().count()
def org_active_count(self, org_id):
"""Return count of active, unique hosts used by an organization.
Construction of query involves:
- remove any ordering specified in model's Meta
- Exclude hosts sourced from another Tower
- Consider only hosts where the canonical inventory is owned by the organization
- Restrict the query to only return the name column
- Only consider results that are unique
- Return the count of this query
"""
return self.order_by().exclude(
inventory_sources__source='tower'
).filter(inventory__organization=org_id).values('name').distinct().count()
def active_counts_by_org(self):
"""Return the counts of active, unique hosts for each organization.
Construction of query involves:
- remove any ordering specified in model's Meta
- Exclude hosts sourced from another Tower
- Consider only hosts where the canonical inventory is owned by each organization
- Restrict the query to only count distinct names
- Return the counts
"""
return self.order_by().exclude(
inventory_sources__source='tower'
).values('inventory__organization').annotate(
inventory__organization__count=models.Count('name', distinct=True))
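Illustrative return shapes for the two managers above (values made up):

# Host.objects.org_active_count(1)
#   -> 42
# Host.objects.active_counts_by_org()
#   -> <QuerySet [{'inventory__organization': 1,
#                  'inventory__organization__count': 42}, ...]>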
def get_queryset(self):
"""When the parent instance of the host query set has a `kind=smart` and a `host_filter`
set. Use the `host_filter` to generate the queryset for the hosts.
@@ -38,20 +66,20 @@ class HostManager(models.Manager):
hasattr(self.instance, 'host_filter') and
hasattr(self.instance, 'kind')):
if self.instance.kind == 'smart' and self.instance.host_filter is not None:
q = SmartFilter.query_from_string(self.instance.host_filter)
if self.instance.organization_id:
q = q.filter(inventory__organization=self.instance.organization_id)
# If we are using host_filters, disable the core_filters, this allows
# us to access all of the available Host entries, not just the ones associated
# with a specific FK/relation.
#
# If we don't disable this, a filter of {'inventory': self.instance} gets automatically
# injected by the related object mapper.
self.core_filters = {}
q = SmartFilter.query_from_string(self.instance.host_filter)
if self.instance.organization_id:
q = q.filter(inventory__organization=self.instance.organization_id)
# If we are using host_filters, disable the core_filters, this allows
# us to access all of the available Host entries, not just the ones associated
# with a specific FK/relation.
#
# If we don't disable this, a filter of {'inventory': self.instance} gets automatically
# injected by the related object mapper.
self.core_filters = {}
qs = qs & q
unique_by_name = qs.order_by('name', 'pk').distinct('name')
return qs.filter(pk__in=unique_by_name)
qs = qs & q
unique_by_name = qs.order_by('name', 'pk').distinct('name')
return qs.filter(pk__in=unique_by_name)
return qs


@@ -4,11 +4,11 @@
import uuid
import logging
import threading
import six
import time
import cProfile
import pstats
import os
import urllib.parse
from django.conf import settings
from django.contrib.auth.models import User
@@ -34,7 +34,7 @@ perf_logger = logging.getLogger('awx.analytics.performance')
class TimingMiddleware(threading.local):
dest = '/var/lib/awx/profile'
dest = '/var/log/tower/profile'
def process_request(self, request):
self.start_time = time.time()
@@ -57,7 +57,7 @@ class TimingMiddleware(threading.local):
def save_profile_file(self, request):
if not os.path.isdir(self.dest):
os.makedirs(self.dest)
filename = '%.3fs-%s' % (pstats.Stats(self.prof).total_tt, uuid.uuid4())
filename = '%.3fs-%s.pstats' % (pstats.Stats(self.prof).total_tt, uuid.uuid4())
filepath = os.path.join(self.dest, filename)
with open(filepath, 'w') as f:
f.write('%s %s\n' % (request.method, request.get_full_path()))
@@ -126,8 +126,9 @@ class SessionTimeoutMiddleware(object):
"""
def process_response(self, request, response):
should_skip = 'HTTP_X_WS_SESSION_QUIET' in request.META
req_session = getattr(request, 'session', None)
if req_session and not req_session.is_empty():
if req_session and not req_session.is_empty() and should_skip is False:
expiry = int(settings.SESSION_COOKIE_AGE)
request.session.set_expiry(expiry)
response['Session-Timeout'] = expiry
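A toy version of the new header check (the WSGI meta dict is assumed); the X-WS-Session-Quiet header presumably lets periodic requests touch the API without resetting the session clock:

def should_refresh(meta, session_nonempty):
    should_skip = 'HTTP_X_WS_SESSION_QUIET' in meta
    return session_nonempty and should_skip is False

print(should_refresh({'HTTP_X_WS_SESSION_QUIET': '1'}, True))   # False: leave expiry alone
print(should_refresh({}, True))                                 # True: reset the timeout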
@@ -194,7 +195,7 @@ class URLModificationMiddleware(object):
def process_request(self, request):
if hasattr(request, 'environ') and 'REQUEST_URI' in request.environ:
old_path = six.moves.urllib.parse.urlsplit(request.environ['REQUEST_URI']).path
old_path = urllib.parse.urlsplit(request.environ['REQUEST_URI']).path
old_path = old_path[request.path.find(request.path_info):]
else:
old_path = request.path_info
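The six.moves indirection is gone; the py3 stdlib call behaves the same. A quick check:

import urllib.parse
print(urllib.parse.urlsplit('/api/v2/jobs/?page=2').path)   # /api/v2/jobs/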


@@ -27,7 +27,7 @@ class Migration(migrations.Migration):
name='ActivityStream',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('operation', models.CharField(max_length=13, choices=[(b'create', 'Entity Created'), (b'update', 'Entity Updated'), (b'delete', 'Entity Deleted'), (b'associate', 'Entity Associated with another Entity'), (b'disassociate', 'Entity was Disassociated with another Entity')])),
('operation', models.CharField(max_length=13, choices=[('create', 'Entity Created'), ('update', 'Entity Updated'), ('delete', 'Entity Deleted'), ('associate', 'Entity Associated with another Entity'), ('disassociate', 'Entity was Disassociated with another Entity')])),
('timestamp', models.DateTimeField(auto_now_add=True)),
('changes', models.TextField(blank=True)),
('object_relationship_type', models.TextField(blank=True)),
@@ -42,8 +42,8 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('host_name', models.CharField(default=b'', max_length=1024, editable=False)),
('event', models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_skipped', 'Host Skipped')])),
('host_name', models.CharField(default='', max_length=1024, editable=False)),
('event', models.CharField(max_length=100, choices=[('runner_on_failed', 'Host Failed'), ('runner_on_ok', 'Host OK'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_skipped', 'Host Skipped')])),
('event_data', jsonfield.fields.JSONField(default={}, blank=True)),
('failed', models.BooleanField(default=False, editable=False)),
('changed', models.BooleanField(default=False, editable=False)),
@@ -60,8 +60,8 @@ class Migration(migrations.Migration):
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('expires', models.DateTimeField(default=django.utils.timezone.now)),
('request_hash', models.CharField(default=b'', max_length=40, blank=True)),
('reason', models.CharField(default=b'', help_text='Reason the auth token was invalidated.', max_length=1024, blank=True)),
('request_hash', models.CharField(default='', max_length=40, blank=True)),
('reason', models.CharField(default='', help_text='Reason the auth token was invalidated.', max_length=1024, blank=True)),
('user', models.ForeignKey(related_name='auth_tokens', to=settings.AUTH_USER_MODEL)),
],
),
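The long run of b'' -> '' replacements throughout these migrations is the py2 bytes-literal cleanup; on py3, bytes and str defaults are not interchangeable:

print(b'' == '')              # False: bytes and str never compare equal on py3
print(type(b''), type(''))    # <class 'bytes'> <class 'str'>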
@@ -71,22 +71,22 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('kind', models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'openstack', 'OpenStack')])),
('kind', models.CharField(default='ssh', max_length=32, choices=[('ssh', 'Machine'), ('scm', 'Source Control'), ('aws', 'Amazon Web Services'), ('rax', 'Rackspace'), ('vmware', 'VMware vCenter'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure'), ('openstack', 'OpenStack')])),
('cloud', models.BooleanField(default=False, editable=False)),
('host', models.CharField(default=b'', help_text='The hostname or IP address to use.', max_length=1024, verbose_name='Host', blank=True)),
('username', models.CharField(default=b'', help_text='Username for this credential.', max_length=1024, verbose_name='Username', blank=True)),
('password', models.CharField(default=b'', help_text='Password for this credential (or "ASK" to prompt the user for machine credentials).', max_length=1024, verbose_name='Password', blank=True)),
('security_token', models.CharField(default=b'', help_text='Security Token for this credential', max_length=1024, verbose_name='Security Token', blank=True)),
('project', models.CharField(default=b'', help_text='The identifier for the project.', max_length=100, verbose_name='Project', blank=True)),
('ssh_key_data', models.TextField(default=b'', help_text='RSA or DSA private key to be used instead of password.', verbose_name='SSH private key', blank=True)),
('ssh_key_unlock', models.CharField(default=b'', help_text='Passphrase to unlock SSH private key if encrypted (or "ASK" to prompt the user for machine credentials).', max_length=1024, verbose_name='SSH key unlock', blank=True)),
('become_method', models.CharField(default=b'', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[(b'', 'None'), (b'sudo', 'Sudo'), (b'su', 'Su'), (b'pbrun', 'Pbrun'), (b'pfexec', 'Pfexec')])),
('become_username', models.CharField(default=b'', help_text='Privilege escalation username.', max_length=1024, blank=True)),
('become_password', models.CharField(default=b'', help_text='Password for privilege escalation method.', max_length=1024, blank=True)),
('vault_password', models.CharField(default=b'', help_text='Vault password (or "ASK" to prompt the user).', max_length=1024, blank=True)),
('host', models.CharField(default='', help_text='The hostname or IP address to use.', max_length=1024, verbose_name='Host', blank=True)),
('username', models.CharField(default='', help_text='Username for this credential.', max_length=1024, verbose_name='Username', blank=True)),
('password', models.CharField(default='', help_text='Password for this credential (or "ASK" to prompt the user for machine credentials).', max_length=1024, verbose_name='Password', blank=True)),
('security_token', models.CharField(default='', help_text='Security Token for this credential', max_length=1024, verbose_name='Security Token', blank=True)),
('project', models.CharField(default='', help_text='The identifier for the project.', max_length=100, verbose_name='Project', blank=True)),
('ssh_key_data', models.TextField(default='', help_text='RSA or DSA private key to be used instead of password.', verbose_name='SSH private key', blank=True)),
('ssh_key_unlock', models.CharField(default='', help_text='Passphrase to unlock SSH private key if encrypted (or "ASK" to prompt the user for machine credentials).', max_length=1024, verbose_name='SSH key unlock', blank=True)),
('become_method', models.CharField(default='', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[('', 'None'), ('sudo', 'Sudo'), ('su', 'Su'), ('pbrun', 'Pbrun'), ('pfexec', 'Pfexec')])),
('become_username', models.CharField(default='', help_text='Privilege escalation username.', max_length=1024, blank=True)),
('become_password', models.CharField(default='', help_text='Password for privilege escalation method.', max_length=1024, blank=True)),
('vault_password', models.CharField(default='', help_text='Vault password (or "ASK" to prompt the user).', max_length=1024, blank=True)),
('created_by', models.ForeignKey(related_name="{u'class': 'credential', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name="{u'class': 'credential', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
('tags', taggit.managers.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags')),
@@ -101,10 +101,10 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('script', models.TextField(default=b'', help_text='Inventory script contents', blank=True)),
('script', models.TextField(default='', help_text='Inventory script contents', blank=True)),
('created_by', models.ForeignKey(related_name="{u'class': 'custominventoryscript', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name="{u'class': 'custominventoryscript', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
],
@@ -118,10 +118,10 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('variables', models.TextField(default=b'', help_text='Group variables in JSON or YAML format.', blank=True)),
('variables', models.TextField(default='', help_text='Group variables in JSON or YAML format.', blank=True)),
('total_hosts', models.PositiveIntegerField(default=0, help_text='Total number of hosts directly or indirectly in this group.', editable=False)),
('has_active_failures', models.BooleanField(default=False, help_text='Flag indicating whether this group has any hosts with active failures.', editable=False)),
('hosts_with_active_failures', models.PositiveIntegerField(default=0, help_text='Number of hosts in this group with active failures.', editable=False)),
@@ -140,12 +140,12 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('enabled', models.BooleanField(default=True, help_text='Is this host online and available for running jobs?')),
('instance_id', models.CharField(default=b'', max_length=100, blank=True)),
('variables', models.TextField(default=b'', help_text='Host variables in JSON or YAML format.', blank=True)),
('instance_id', models.CharField(default='', max_length=100, blank=True)),
('variables', models.TextField(default='', help_text='Host variables in JSON or YAML format.', blank=True)),
('has_active_failures', models.BooleanField(default=False, help_text='Flag indicating whether the last job failed for this host.', editable=False)),
('has_inventory_sources', models.BooleanField(default=False, help_text='Flag indicating whether this host was created/updated from any external inventory sources.', editable=False)),
('created_by', models.ForeignKey(related_name="{u'class': 'host', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
@@ -171,10 +171,10 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(unique=True, max_length=512)),
('variables', models.TextField(default=b'', help_text='Inventory variables in JSON or YAML format.', blank=True)),
('variables', models.TextField(default='', help_text='Inventory variables in JSON or YAML format.', blank=True)),
('has_active_failures', models.BooleanField(default=False, help_text='Flag indicating whether any hosts in this inventory have failed.', editable=False)),
('total_hosts', models.PositiveIntegerField(default=0, help_text='Total number of hosts in this inventory.', editable=False)),
('hosts_with_active_failures', models.PositiveIntegerField(default=0, help_text='Number of hosts in this inventory with active failures.', editable=False)),
@@ -197,14 +197,14 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('event', models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete')])),
('event', models.CharField(max_length=100, choices=[('runner_on_failed', 'Host Failed'), ('runner_on_ok', 'Host OK'), ('runner_on_error', 'Host Failure'), ('runner_on_skipped', 'Host Skipped'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_no_hosts', 'No Hosts Remaining'), ('runner_on_async_poll', 'Host Polling'), ('runner_on_async_ok', 'Host Async OK'), ('runner_on_async_failed', 'Host Async Failure'), ('runner_on_file_diff', 'File Difference'), ('playbook_on_start', 'Playbook Started'), ('playbook_on_notify', 'Running Handlers'), ('playbook_on_no_hosts_matched', 'No Hosts Matched'), ('playbook_on_no_hosts_remaining', 'No Hosts Remaining'), ('playbook_on_task_start', 'Task Started'), ('playbook_on_vars_prompt', 'Variables Prompted'), ('playbook_on_setup', 'Gathering Facts'), ('playbook_on_import_for_host', 'internal: on Import for Host'), ('playbook_on_not_import_for_host', 'internal: on Not Import for Host'), ('playbook_on_play_start', 'Play Started'), ('playbook_on_stats', 'Playbook Complete')])),
('event_data', jsonfield.fields.JSONField(default={}, blank=True)),
('failed', models.BooleanField(default=False, editable=False)),
('changed', models.BooleanField(default=False, editable=False)),
('host_name', models.CharField(default=b'', max_length=1024, editable=False)),
('play', models.CharField(default=b'', max_length=1024, editable=False)),
('role', models.CharField(default=b'', max_length=1024, editable=False)),
('task', models.CharField(default=b'', max_length=1024, editable=False)),
('host_name', models.CharField(default='', max_length=1024, editable=False)),
('play', models.CharField(default='', max_length=1024, editable=False)),
('role', models.CharField(default='', max_length=1024, editable=False)),
('task', models.CharField(default='', max_length=1024, editable=False)),
('counter', models.PositiveIntegerField(default=0)),
('host', models.ForeignKey(related_name='job_events_as_primary_host', on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to='main.Host', null=True)),
('hosts', models.ManyToManyField(related_name='job_events', editable=False, to='main.Host')),
@@ -220,7 +220,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('host_name', models.CharField(default=b'', max_length=1024, editable=False)),
('host_name', models.CharField(default='', max_length=1024, editable=False)),
('changed', models.PositiveIntegerField(default=0, editable=False)),
('dark', models.PositiveIntegerField(default=0, editable=False)),
('failures', models.PositiveIntegerField(default=0, editable=False)),
@@ -250,7 +250,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(unique=True, max_length=512)),
('admins', models.ManyToManyField(related_name='admin_of_organizations', to=settings.AUTH_USER_MODEL, blank=True)),
@@ -269,10 +269,10 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('permission_type', models.CharField(max_length=64, choices=[(b'read', 'Read Inventory'), (b'write', 'Edit Inventory'), (b'admin', 'Administrate Inventory'), (b'run', 'Deploy To Inventory'), (b'check', 'Deploy To Inventory (Dry Run)'), (b'scan', 'Scan an Inventory'), (b'create', 'Create a Job Template')])),
('permission_type', models.CharField(max_length=64, choices=[('read', 'Read Inventory'), ('write', 'Edit Inventory'), ('admin', 'Administrate Inventory'), ('run', 'Deploy To Inventory'), ('check', 'Deploy To Inventory (Dry Run)'), ('scan', 'Scan an Inventory'), ('create', 'Create a Job Template')])),
('run_ad_hoc_commands', models.BooleanField(default=False, help_text='Execute Commands on the Inventory')),
('created_by', models.ForeignKey(related_name="{u'class': 'permission', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
('inventory', models.ForeignKey(related_name='permissions', on_delete=django.db.models.deletion.SET_NULL, to='main.Inventory', null=True)),
@@ -286,7 +286,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('ldap_dn', models.CharField(default=b'', max_length=1024)),
('ldap_dn', models.CharField(default='', max_length=1024)),
('user', awx.main.fields.AutoOneToOneField(related_name='profile', editable=False, to=settings.AUTH_USER_MODEL)),
],
),
@@ -296,7 +296,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(unique=True, max_length=512)),
('enabled', models.BooleanField(default=True)),
@@ -319,7 +319,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('created_by', models.ForeignKey(related_name="{u'class': 'team', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
@@ -338,26 +338,26 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('old_pk', models.PositiveIntegerField(default=None, null=True, editable=False)),
('launch_type', models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency')])),
('launch_type', models.CharField(default='manual', max_length=20, editable=False, choices=[('manual', 'Manual'), ('relaunch', 'Relaunch'), ('callback', 'Callback'), ('scheduled', 'Scheduled'), ('dependency', 'Dependency')])),
('cancel_flag', models.BooleanField(default=False, editable=False)),
('status', models.CharField(default=b'new', max_length=20, editable=False, choices=[(b'new', 'New'), (b'pending', 'Pending'), (b'waiting', 'Waiting'), (b'running', 'Running'), (b'successful', 'Successful'), (b'failed', 'Failed'), (b'error', 'Error'), (b'canceled', 'Canceled')])),
('status', models.CharField(default='new', max_length=20, editable=False, choices=[('new', 'New'), ('pending', 'Pending'), ('waiting', 'Waiting'), ('running', 'Running'), ('successful', 'Successful'), ('failed', 'Failed'), ('error', 'Error'), ('canceled', 'Canceled')])),
('failed', models.BooleanField(default=False, editable=False)),
('started', models.DateTimeField(default=None, null=True, editable=False)),
('finished', models.DateTimeField(default=None, null=True, editable=False)),
('elapsed', models.DecimalField(editable=False, max_digits=12, decimal_places=3)),
('job_args', models.TextField(default=b'', editable=False, blank=True)),
('job_cwd', models.CharField(default=b'', max_length=1024, editable=False, blank=True)),
('job_args', models.TextField(default='', editable=False, blank=True)),
('job_cwd', models.CharField(default='', max_length=1024, editable=False, blank=True)),
('job_env', jsonfield.fields.JSONField(default={}, editable=False, blank=True)),
('job_explanation', models.TextField(default=b'', editable=False, blank=True)),
('start_args', models.TextField(default=b'', editable=False, blank=True)),
('result_stdout_text', models.TextField(default=b'', editable=False, blank=True)),
('result_stdout_file', models.TextField(default=b'', editable=False, blank=True)),
('result_traceback', models.TextField(default=b'', editable=False, blank=True)),
('celery_task_id', models.CharField(default=b'', max_length=100, editable=False, blank=True)),
('job_explanation', models.TextField(default='', editable=False, blank=True)),
('start_args', models.TextField(default='', editable=False, blank=True)),
('result_stdout_text', models.TextField(default='', editable=False, blank=True)),
('result_stdout_file', models.TextField(default='', editable=False, blank=True)),
('result_traceback', models.TextField(default='', editable=False, blank=True)),
('celery_task_id', models.CharField(default='', max_length=100, editable=False, blank=True)),
],
),
migrations.CreateModel(
@@ -366,7 +366,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('active', models.BooleanField(default=True, editable=False)),
('name', models.CharField(max_length=512)),
('old_pk', models.PositiveIntegerField(default=None, null=True, editable=False)),
@@ -374,19 +374,19 @@ class Migration(migrations.Migration):
('last_job_run', models.DateTimeField(default=None, null=True, editable=False)),
('has_schedules', models.BooleanField(default=False, editable=False)),
('next_job_run', models.DateTimeField(default=None, null=True, editable=False)),
('status', models.CharField(default=b'ok', max_length=32, editable=False, choices=[(b'new', 'New'), (b'pending', 'Pending'), (b'waiting', 'Waiting'), (b'running', 'Running'), (b'successful', 'Successful'), (b'failed', 'Failed'), (b'error', 'Error'), (b'canceled', 'Canceled'), (b'never updated', b'Never Updated'), (b'ok', b'OK'), (b'missing', b'Missing'), (b'none', 'No External Source'), (b'updating', 'Updating')])),
('status', models.CharField(default='ok', max_length=32, editable=False, choices=[('new', 'New'), ('pending', 'Pending'), ('waiting', 'Waiting'), ('running', 'Running'), ('successful', 'Successful'), ('failed', 'Failed'), ('error', 'Error'), ('canceled', 'Canceled'), ('never updated', 'Never Updated'), ('ok', 'OK'), ('missing', 'Missing'), ('none', 'No External Source'), ('updating', 'Updating')])),
],
),
migrations.CreateModel(
name='AdHocCommand',
fields=[
('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
('job_type', models.CharField(default=b'run', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check')])),
('limit', models.CharField(default=b'', max_length=1024, blank=True)),
('module_name', models.CharField(default=b'', max_length=1024, blank=True)),
('module_args', models.TextField(default=b'', blank=True)),
('job_type', models.CharField(default='run', max_length=64, choices=[('run', 'Run'), ('check', 'Check')])),
('limit', models.CharField(default='', max_length=1024, blank=True)),
('module_name', models.CharField(default='', max_length=1024, blank=True)),
('module_args', models.TextField(default='', blank=True)),
('forks', models.PositiveIntegerField(default=0, blank=True)),
('verbosity', models.PositiveIntegerField(default=0, blank=True, choices=[(0, b'0 (Normal)'), (1, b'1 (Verbose)'), (2, b'2 (More Verbose)'), (3, b'3 (Debug)'), (4, b'4 (Connection Debug)'), (5, b'5 (WinRM Debug)')])),
('verbosity', models.PositiveIntegerField(default=0, blank=True, choices=[(0, '0 (Normal)'), (1, '1 (Verbose)'), (2, '2 (More Verbose)'), (3, '3 (Debug)'), (4, '4 (Connection Debug)'), (5, '5 (WinRM Debug)')])),
('become_enabled', models.BooleanField(default=False)),
],
bases=('main.unifiedjob',),
@@ -395,12 +395,12 @@ class Migration(migrations.Migration):
name='InventorySource',
fields=[
('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
('source', models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')])),
('source_path', models.CharField(default=b'', max_length=1024, editable=False, blank=True)),
('source_vars', models.TextField(default=b'', help_text='Inventory source variables in YAML or JSON format.', blank=True)),
('source_regions', models.CharField(default=b'', max_length=1024, blank=True)),
('instance_filters', models.CharField(default=b'', help_text='Comma-separated list of filter expressions (EC2 only). Hosts are imported when ANY of the filters match.', max_length=1024, blank=True)),
('group_by', models.CharField(default=b'', help_text='Limit groups automatically created from inventory source (EC2 only).', max_length=1024, blank=True)),
('source', models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure'), ('vmware', 'VMware vCenter'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')])),
('source_path', models.CharField(default='', max_length=1024, editable=False, blank=True)),
('source_vars', models.TextField(default='', help_text='Inventory source variables in YAML or JSON format.', blank=True)),
('source_regions', models.CharField(default='', max_length=1024, blank=True)),
('instance_filters', models.CharField(default='', help_text='Comma-separated list of filter expressions (EC2 only). Hosts are imported when ANY of the filters match.', max_length=1024, blank=True)),
('group_by', models.CharField(default='', help_text='Limit groups automatically created from inventory source (EC2 only).', max_length=1024, blank=True)),
('overwrite', models.BooleanField(default=False, help_text='Overwrite local groups and hosts from remote inventory source.')),
('overwrite_vars', models.BooleanField(default=False, help_text='Overwrite local variables from remote inventory source.')),
('update_on_launch', models.BooleanField(default=False)),
@@ -412,12 +412,12 @@ class Migration(migrations.Migration):
name='InventoryUpdate',
fields=[
('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
('source', models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')])),
('source_path', models.CharField(default=b'', max_length=1024, editable=False, blank=True)),
('source_vars', models.TextField(default=b'', help_text='Inventory source variables in YAML or JSON format.', blank=True)),
('source_regions', models.CharField(default=b'', max_length=1024, blank=True)),
('instance_filters', models.CharField(default=b'', help_text='Comma-separated list of filter expressions (EC2 only). Hosts are imported when ANY of the filters match.', max_length=1024, blank=True)),
('group_by', models.CharField(default=b'', help_text='Limit groups automatically created from inventory source (EC2 only).', max_length=1024, blank=True)),
('source', models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure'), ('vmware', 'VMware vCenter'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')])),
('source_path', models.CharField(default='', max_length=1024, editable=False, blank=True)),
('source_vars', models.TextField(default='', help_text='Inventory source variables in YAML or JSON format.', blank=True)),
('source_regions', models.CharField(default='', max_length=1024, blank=True)),
('instance_filters', models.CharField(default='', help_text='Comma-separated list of filter expressions (EC2 only). Hosts are imported when ANY of the filters match.', max_length=1024, blank=True)),
('group_by', models.CharField(default='', help_text='Limit groups automatically created from inventory source (EC2 only).', max_length=1024, blank=True)),
('overwrite', models.BooleanField(default=False, help_text='Overwrite local groups and hosts from remote inventory source.')),
('overwrite_vars', models.BooleanField(default=False, help_text='Overwrite local variables from remote inventory source.')),
('license_error', models.BooleanField(default=False, editable=False)),
@@ -428,16 +428,16 @@ class Migration(migrations.Migration):
name='Job',
fields=[
('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
('job_type', models.CharField(default=b'run', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check'), (b'scan', 'Scan')])),
('playbook', models.CharField(default=b'', max_length=1024, blank=True)),
('job_type', models.CharField(default='run', max_length=64, choices=[('run', 'Run'), ('check', 'Check'), ('scan', 'Scan')])),
('playbook', models.CharField(default='', max_length=1024, blank=True)),
('forks', models.PositiveIntegerField(default=0, blank=True)),
('limit', models.CharField(default=b'', max_length=1024, blank=True)),
('verbosity', models.PositiveIntegerField(default=0, blank=True, choices=[(0, b'0 (Normal)'), (1, b'1 (Verbose)'), (2, b'2 (More Verbose)'), (3, b'3 (Debug)'), (4, b'4 (Connection Debug)'), (5, b'5 (WinRM Debug)')])),
('extra_vars', models.TextField(default=b'', blank=True)),
('job_tags', models.CharField(default=b'', max_length=1024, blank=True)),
('limit', models.CharField(default='', max_length=1024, blank=True)),
('verbosity', models.PositiveIntegerField(default=0, blank=True, choices=[(0, '0 (Normal)'), (1, '1 (Verbose)'), (2, '2 (More Verbose)'), (3, '3 (Debug)'), (4, '4 (Connection Debug)'), (5, '5 (WinRM Debug)')])),
('extra_vars', models.TextField(default='', blank=True)),
('job_tags', models.CharField(default='', max_length=1024, blank=True)),
('force_handlers', models.BooleanField(default=False)),
('skip_tags', models.CharField(default=b'', max_length=1024, blank=True)),
('start_at_task', models.CharField(default=b'', max_length=1024, blank=True)),
('skip_tags', models.CharField(default='', max_length=1024, blank=True)),
('start_at_task', models.CharField(default='', max_length=1024, blank=True)),
('become_enabled', models.BooleanField(default=False)),
],
options={
@@ -449,18 +449,18 @@ class Migration(migrations.Migration):
name='JobTemplate',
fields=[
('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
('job_type', models.CharField(default=b'run', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check'), (b'scan', 'Scan')])),
('playbook', models.CharField(default=b'', max_length=1024, blank=True)),
('job_type', models.CharField(default='run', max_length=64, choices=[('run', 'Run'), ('check', 'Check'), ('scan', 'Scan')])),
('playbook', models.CharField(default='', max_length=1024, blank=True)),
('forks', models.PositiveIntegerField(default=0, blank=True)),
('limit', models.CharField(default=b'', max_length=1024, blank=True)),
('verbosity', models.PositiveIntegerField(default=0, blank=True, choices=[(0, b'0 (Normal)'), (1, b'1 (Verbose)'), (2, b'2 (More Verbose)'), (3, b'3 (Debug)'), (4, b'4 (Connection Debug)'), (5, b'5 (WinRM Debug)')])),
('extra_vars', models.TextField(default=b'', blank=True)),
('job_tags', models.CharField(default=b'', max_length=1024, blank=True)),
('limit', models.CharField(default='', max_length=1024, blank=True)),
('verbosity', models.PositiveIntegerField(default=0, blank=True, choices=[(0, '0 (Normal)'), (1, '1 (Verbose)'), (2, '2 (More Verbose)'), (3, '3 (Debug)'), (4, '4 (Connection Debug)'), (5, '5 (WinRM Debug)')])),
('extra_vars', models.TextField(default='', blank=True)),
('job_tags', models.CharField(default='', max_length=1024, blank=True)),
('force_handlers', models.BooleanField(default=False)),
('skip_tags', models.CharField(default=b'', max_length=1024, blank=True)),
('start_at_task', models.CharField(default=b'', max_length=1024, blank=True)),
('skip_tags', models.CharField(default='', max_length=1024, blank=True)),
('start_at_task', models.CharField(default='', max_length=1024, blank=True)),
('become_enabled', models.BooleanField(default=False)),
('host_config_key', models.CharField(default=b'', max_length=1024, blank=True)),
('host_config_key', models.CharField(default='', max_length=1024, blank=True)),
('ask_variables_on_launch', models.BooleanField(default=False)),
('survey_enabled', models.BooleanField(default=False)),
('survey_spec', jsonfield.fields.JSONField(default={}, blank=True)),
@@ -475,9 +475,9 @@ class Migration(migrations.Migration):
fields=[
('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
('local_path', models.CharField(help_text='Local path (relative to PROJECTS_ROOT) containing playbooks and related files for this project.', max_length=1024, blank=True)),
('scm_type', models.CharField(default=b'', max_length=8, verbose_name='SCM Type', blank=True, choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')])),
('scm_url', models.CharField(default=b'', max_length=1024, verbose_name='SCM URL', blank=True)),
('scm_branch', models.CharField(default=b'', help_text='Specific branch, tag or commit to checkout.', max_length=256, verbose_name='SCM Branch', blank=True)),
('scm_type', models.CharField(default='', max_length=8, verbose_name='SCM Type', blank=True, choices=[('', 'Manual'), ('git', 'Git'), ('hg', 'Mercurial'), ('svn', 'Subversion')])),
('scm_url', models.CharField(default='', max_length=1024, verbose_name='SCM URL', blank=True)),
('scm_branch', models.CharField(default='', help_text='Specific branch, tag or commit to checkout.', max_length=256, verbose_name='SCM Branch', blank=True)),
('scm_clean', models.BooleanField(default=False)),
('scm_delete_on_update', models.BooleanField(default=False)),
('scm_delete_on_next_update', models.BooleanField(default=False, editable=False)),
@@ -494,9 +494,9 @@ class Migration(migrations.Migration):
fields=[
('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
('local_path', models.CharField(help_text='Local path (relative to PROJECTS_ROOT) containing playbooks and related files for this project.', max_length=1024, blank=True)),
('scm_type', models.CharField(default=b'', max_length=8, verbose_name='SCM Type', blank=True, choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')])),
('scm_url', models.CharField(default=b'', max_length=1024, verbose_name='SCM URL', blank=True)),
('scm_branch', models.CharField(default=b'', help_text='Specific branch, tag or commit to checkout.', max_length=256, verbose_name='SCM Branch', blank=True)),
('scm_type', models.CharField(default='', max_length=8, verbose_name='SCM Type', blank=True, choices=[('', 'Manual'), ('git', 'Git'), ('hg', 'Mercurial'), ('svn', 'Subversion')])),
('scm_url', models.CharField(default='', max_length=1024, verbose_name='SCM URL', blank=True)),
('scm_branch', models.CharField(default='', help_text='Specific branch, tag or commit to checkout.', max_length=256, verbose_name='SCM Branch', blank=True)),
('scm_clean', models.BooleanField(default=False)),
('scm_delete_on_update', models.BooleanField(default=False)),
],
@@ -506,8 +506,8 @@ class Migration(migrations.Migration):
name='SystemJob',
fields=[
('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
('job_type', models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_deleted', 'Purge previously deleted items from the database'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')])),
('extra_vars', models.TextField(default=b'', blank=True)),
('job_type', models.CharField(default='', max_length=32, blank=True, choices=[('cleanup_jobs', 'Remove jobs older than a certain number of days'), ('cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), ('cleanup_deleted', 'Purge previously deleted items from the database'), ('cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')])),
('extra_vars', models.TextField(default='', blank=True)),
],
options={
'ordering': ('id',),
@@ -518,7 +518,7 @@ class Migration(migrations.Migration):
name='SystemJobTemplate',
fields=[
('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
('job_type', models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_deleted', 'Purge previously deleted items from the database'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')])),
('job_type', models.CharField(default='', max_length=32, blank=True, choices=[('cleanup_jobs', 'Remove jobs older than a certain number of days'), ('cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), ('cleanup_deleted', 'Purge previously deleted items from the database'), ('cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')])),
],
bases=('main.unifiedjobtemplate', models.Model),
),
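
The dominant change in this squashed migration is dropping the Python 2 b'' byte-string prefixes from field defaults and choices keys (note the old status choices above, which even mixed byte and text display labels). Below is a minimal sketch, assuming a plain Python 3 interpreter and not taken from this repository, of why the prefixes had to go: bytes and str never compare equal on Python 3, so byte-string defaults and choice keys stop matching the text values the database layer returns.

# Assumed example, not code from this diff: the bytes/str mismatch these
# hunks remove. On Python 2 the two spellings were interchangeable; on
# Python 3 they are distinct, unequal types.
value = 'ssh'                        # text value as returned by the DB layer
print(value == b'ssh')               # False on Python 3 (True on Python 2)
print(value in {b'ssh': 'Machine'})  # False: the old choice key never matches
print(value in {'ssh': 'Machine'})   # True once the b prefix is dropped
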


@@ -105,24 +105,24 @@ def create_system_job_templates(apps, schema_editor):
class Migration(migrations.Migration):
replaces = [(b'main', '0002_v300_tower_settings_changes'),
(b'main', '0003_v300_notification_changes'),
(b'main', '0004_v300_fact_changes'),
(b'main', '0005_v300_migrate_facts'),
(b'main', '0006_v300_active_flag_cleanup'),
(b'main', '0007_v300_active_flag_removal'),
(b'main', '0008_v300_rbac_changes'),
(b'main', '0009_v300_rbac_migrations'),
(b'main', '0010_v300_create_system_job_templates'),
(b'main', '0011_v300_credential_domain_field'),
(b'main', '0012_v300_create_labels'),
(b'main', '0013_v300_label_changes'),
(b'main', '0014_v300_invsource_cred'),
(b'main', '0015_v300_label_changes'),
(b'main', '0016_v300_prompting_changes'),
(b'main', '0017_v300_prompting_migrations'),
(b'main', '0018_v300_host_ordering'),
(b'main', '0019_v300_new_azure_credential'),]
replaces = [('main', '0002_v300_tower_settings_changes'),
('main', '0003_v300_notification_changes'),
('main', '0004_v300_fact_changes'),
('main', '0005_v300_migrate_facts'),
('main', '0006_v300_active_flag_cleanup'),
('main', '0007_v300_active_flag_removal'),
('main', '0008_v300_rbac_changes'),
('main', '0009_v300_rbac_migrations'),
('main', '0010_v300_create_system_job_templates'),
('main', '0011_v300_credential_domain_field'),
('main', '0012_v300_create_labels'),
('main', '0013_v300_label_changes'),
('main', '0014_v300_invsource_cred'),
('main', '0015_v300_label_changes'),
('main', '0016_v300_prompting_changes'),
('main', '0017_v300_prompting_migrations'),
('main', '0018_v300_host_ordering'),
('main', '0019_v300_new_azure_credential'),]
dependencies = [
('taggit', '0002_auto_20150616_2121'),
@@ -143,7 +143,7 @@ class Migration(migrations.Migration):
('description', models.TextField()),
('category', models.CharField(max_length=128)),
('value', models.TextField(blank=True)),
('value_type', models.CharField(max_length=12, choices=[(b'string', 'String'), (b'int', 'Integer'), (b'float', 'Decimal'), (b'json', 'JSON'), (b'bool', 'Boolean'), (b'password', 'Password'), (b'list', 'List')])),
('value_type', models.CharField(max_length=12, choices=[('string', 'String'), ('int', 'Integer'), ('float', 'Decimal'), ('json', 'JSON'), ('bool', 'Boolean'), ('password', 'Password'), ('list', 'List')])),
('user', models.ForeignKey(related_name='settings', default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
],
),
@@ -154,12 +154,12 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('status', models.CharField(default=b'pending', max_length=20, editable=False, choices=[(b'pending', 'Pending'), (b'successful', 'Successful'), (b'failed', 'Failed')])),
('error', models.TextField(default=b'', editable=False, blank=True)),
('status', models.CharField(default='pending', max_length=20, editable=False, choices=[('pending', 'Pending'), ('successful', 'Successful'), ('failed', 'Failed')])),
('error', models.TextField(default='', editable=False, blank=True)),
('notifications_sent', models.IntegerField(default=0, editable=False)),
('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'mattermost', 'Mattermost'), (b'rocketchat', 'Rocket.Chat'), (b'irc', 'IRC')])),
('recipients', models.TextField(default=b'', editable=False, blank=True)),
('subject', models.TextField(default=b'', editable=False, blank=True)),
('notification_type', models.CharField(max_length=32, choices=[('email', 'Email'), ('slack', 'Slack'), ('twilio', 'Twilio'), ('pagerduty', 'Pagerduty'), ('hipchat', 'HipChat'), ('webhook', 'Webhook'), ('mattermost', 'Mattermost'), ('rocketchat', 'Rocket.Chat'), ('irc', 'IRC')])),
('recipients', models.TextField(default='', editable=False, blank=True)),
('subject', models.TextField(default='', editable=False, blank=True)),
('body', jsonfield.fields.JSONField(default=dict, blank=True)),
],
options={
@@ -172,9 +172,9 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('name', models.CharField(unique=True, max_length=512)),
('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'mattermost', 'Mattermost'), (b'rocketchat', 'Rocket.Chat'), (b'irc', 'IRC')])),
('notification_type', models.CharField(max_length=32, choices=[('email', 'Email'), ('slack', 'Slack'), ('twilio', 'Twilio'), ('pagerduty', 'Pagerduty'), ('hipchat', 'HipChat'), ('webhook', 'Webhook'), ('mattermost', 'Mattermost'), ('rocketchat', 'Rocket.Chat'), ('irc', 'IRC')])),
('notification_configuration', jsonfield.fields.JSONField(default=dict)),
('created_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
@@ -381,7 +381,7 @@ class Migration(migrations.Migration):
('singleton_name', models.TextField(default=None, unique=True, null=True, db_index=True)),
('members', models.ManyToManyField(related_name='roles', to=settings.AUTH_USER_MODEL)),
('parents', models.ManyToManyField(related_name='children', to='main.Role')),
('implicit_parents', models.TextField(default=b'[]')),
('implicit_parents', models.TextField(default='[]')),
('content_type', models.ForeignKey(default=None, to='contenttypes.ContentType', null=True)),
('object_id', models.PositiveIntegerField(default=None, null=True)),
@@ -422,122 +422,122 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='credential',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['singleton:system_administrator'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='credential',
name='use_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='credential',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['singleton:system_auditor', 'organization.auditor_role', 'use_role', 'admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='custominventoryscript',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='organization.admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='custominventoryscript',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'organization.member_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['organization.auditor_role', 'organization.member_role', 'admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='inventory',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='organization.admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='inventory',
name='adhoc_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='inventory',
name='update_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='inventory',
name='use_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'adhoc_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='adhoc_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='inventory',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'update_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['organization.auditor_role', 'update_role', 'use_role', 'admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='jobtemplate',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'project.organization.admin_role', b'inventory.organization.admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['project.organization.admin_role', 'inventory.organization.admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='jobtemplate',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='jobtemplate',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'project.organization.auditor_role', b'inventory.organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['project.organization.auditor_role', 'inventory.organization.auditor_role', 'execute_role', 'admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='organization',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='singleton:system_administrator', to='main.Role', null='True'),
),
migrations.AddField(
model_name='organization',
name='auditor_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_auditor', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='singleton:system_auditor', to='main.Role', null='True'),
),
migrations.AddField(
model_name='organization',
name='member_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='organization',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'member_role', b'auditor_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['member_role', 'auditor_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='project',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.admin_role', b'singleton:system_administrator'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['organization.admin_role', 'singleton:system_administrator'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='project',
name='use_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='project',
name='update_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='project',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'singleton:system_auditor', b'use_role', b'update_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['organization.auditor_role', 'singleton:system_auditor', 'use_role', 'update_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='team',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='organization.admin_role', to='main.Role', null='True'),
),
migrations.AddField(
model_name='team',
name='member_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=None, to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=None, to='main.Role', null='True'),
),
migrations.AddField(
model_name='team',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role', b'organization.auditor_role', b'member_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['admin_role', 'organization.auditor_role', 'member_role'], to='main.Role', null='True'),
),
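
A hedged aside on the ImplicitRoleField hunks above: the diff rewrites null=b'True' to null='True', which is still the string 'True' rather than the boolean True. Both spellings are truthy, and Django appears to only truth-test Field.null, so the behavior presumably matches null=True; the sketch below merely shows the distinction.

# Assumed illustration: 'True' and b'True' are truthy strings, not the
# boolean True, so null='True' acts like null=True anywhere the attribute
# is only truth-tested.
print(bool('True'), bool(b'True'))   # True True
print('True' == True)                # False: still not the actual boolean
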
# System Job Templates
@@ -545,18 +545,18 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='systemjob',
name='job_type',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('cleanup_jobs', 'Remove jobs older than a certain number of days'), ('cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), ('cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]),
),
migrations.AlterField(
model_name='systemjobtemplate',
name='job_type',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('cleanup_jobs', 'Remove jobs older than a certain number of days'), ('cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), ('cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]),
),
# Credential domain field
migrations.AddField(
model_name='credential',
name='domain',
field=models.CharField(default=b'', help_text='The identifier for the domain.', max_length=100, verbose_name='Domain', blank=True),
field=models.CharField(default='', help_text='The identifier for the domain.', max_length=100, verbose_name='Domain', blank=True),
),
# Create Labels
migrations.CreateModel(
@@ -565,7 +565,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('name', models.CharField(max_length=512)),
('created_by', models.ForeignKey(related_name="{u'class': 'label', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
('modified_by', models.ForeignKey(related_name="{u'class': 'label', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
@@ -625,7 +625,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='credential',
name='authorize_password',
field=models.CharField(default=b'', help_text='Password used by the authorize mechanism.', max_length=1024, blank=True),
field=models.CharField(default='', help_text='Password used by the authorize mechanism.', max_length=1024, blank=True),
),
migrations.AlterField(
model_name='credential',
@@ -640,17 +640,17 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='credential',
name='kind',
field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'openstack', 'OpenStack')]),
field=models.CharField(default='ssh', max_length=32, choices=[('ssh', 'Machine'), ('net', 'Network'), ('scm', 'Source Control'), ('aws', 'Amazon Web Services'), ('rax', 'Rackspace'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure'), ('openstack', 'OpenStack')]),
),
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='team',
@@ -702,41 +702,41 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='credential',
name='client',
field=models.CharField(default=b'', help_text='Client Id or Application Id for the credential', max_length=128, blank=True),
field=models.CharField(default='', help_text='Client Id or Application Id for the credential', max_length=128, blank=True),
),
migrations.AddField(
model_name='credential',
name='secret',
field=models.CharField(default=b'', help_text='Secret Token for this credential', max_length=1024, blank=True),
field=models.CharField(default='', help_text='Secret Token for this credential', max_length=1024, blank=True),
),
migrations.AddField(
model_name='credential',
name='subscription',
field=models.CharField(default=b'', help_text='Subscription identifier for this credential', max_length=1024, blank=True),
field=models.CharField(default='', help_text='Subscription identifier for this credential', max_length=1024, blank=True),
),
migrations.AddField(
model_name='credential',
name='tenant',
field=models.CharField(default=b'', help_text='Tenant identifier for this credential', max_length=1024, blank=True),
field=models.CharField(default='', help_text='Tenant identifier for this credential', max_length=1024, blank=True),
),
migrations.AlterField(
model_name='credential',
name='kind',
field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'openstack', 'OpenStack')]),
field=models.CharField(default='ssh', max_length=32, choices=[('ssh', 'Machine'), ('net', 'Network'), ('scm', 'Source Control'), ('aws', 'Amazon Web Services'), ('rax', 'Rackspace'), ('vmware', 'VMware vCenter'), ('satellite6', 'Satellite 6'), ('cloudforms', 'CloudForms'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure Classic (deprecated)'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('openstack', 'OpenStack')]),
),
migrations.AlterField(
model_name='host',
name='instance_id',
field=models.CharField(default=b'', max_length=1024, blank=True),
field=models.CharField(default='', max_length=1024, blank=True),
),
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure Classic (deprecated)'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Satellite 6'), ('cloudforms', 'CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure Classic (deprecated)'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Satellite 6'), ('cloudforms', 'CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
]
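
Beyond the field defaults, this file's replaces list switches its app labels from b'main' to 'main'. Django's migration recorder stores applied migrations as (app_label, name) pairs of plain str, so under Python 3 a byte-string label would never match and the squashed migration could not be detected as already applied. A simplified sketch of that matching, with the recorder bookkeeping reduced to a set:

# Hedged sketch: the recorder state is simplified to a set here, but the
# tuple comparison is the point.
applied = {('main', '0002_v300_tower_settings_changes')}          # stored as str
print(('main', '0002_v300_tower_settings_changes') in applied)    # True
print((b'main', '0002_v300_tower_settings_changes') in applied)   # False on Python 3
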


@@ -9,20 +9,20 @@ from django.db import migrations, models
from django.conf import settings
import awx.main.fields
import _squashed
from _squashed_30 import SQUASHED_30
from . import _squashed
from ._squashed_30 import SQUASHED_30
class Migration(migrations.Migration):
replaces = [(b'main', '0020_v300_labels_changes'),
(b'main', '0021_v300_activity_stream'),
(b'main', '0022_v300_adhoc_extravars'),
(b'main', '0023_v300_activity_stream_ordering'),
(b'main', '0024_v300_jobtemplate_allow_simul'),
(b'main', '0025_v300_update_rbac_parents'),
(b'main', '0026_v300_credential_unique'),
(b'main', '0027_v300_team_migrations'),
(b'main', '0028_v300_org_team_cascade')] + _squashed.replaces(SQUASHED_30, applied=True)
replaces = [('main', '0020_v300_labels_changes'),
('main', '0021_v300_activity_stream'),
('main', '0022_v300_adhoc_extravars'),
('main', '0023_v300_activity_stream_ordering'),
('main', '0024_v300_jobtemplate_allow_simul'),
('main', '0025_v300_update_rbac_parents'),
('main', '0026_v300_credential_unique'),
('main', '0027_v300_team_migrations'),
('main', '0028_v300_org_team_cascade')] + _squashed.replaces(SQUASHED_30, applied=True)
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
@@ -63,22 +63,22 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='adhoccommand',
name='extra_vars',
field=models.TextField(default=b'', blank=True),
field=models.TextField(default='', blank=True),
),
migrations.AlterField(
model_name='credential',
name='kind',
field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'openstack', 'OpenStack')]),
field=models.CharField(default='ssh', max_length=32, choices=[('ssh', 'Machine'), ('net', 'Network'), ('scm', 'Source Control'), ('aws', 'Amazon Web Services'), ('rax', 'Rackspace'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure Classic (deprecated)'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('openstack', 'OpenStack')]),
),
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure Classic (deprecated)'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'Local File, Directory or Script'), ('rax', 'Rackspace Cloud Servers'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure', 'Microsoft Azure Classic (deprecated)'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
# jobtemplate allow simul
migrations.AddField(
@@ -90,17 +90,17 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='credential',
name='use_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.admin_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['organization.admin_role', 'admin_role'], to='main.Role', null='True'),
),
migrations.AlterField(
model_name='team',
name='member_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role='admin_role', to='main.Role', null='True'),
),
migrations.AlterField(
model_name='team',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'member_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['organization.auditor_role', 'member_role'], to='main.Role', null='True'),
),
# Unique credential
migrations.AlterUniqueTogether(
@@ -110,7 +110,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='credential',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['singleton:system_auditor', 'organization.auditor_role', 'use_role', 'admin_role'], to='main.Role', null='True'),
),
# Team cascade
migrations.AlterField(
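
This file also fixes how the squash helpers are imported: Python 3 removed implicit relative imports (PEP 328), so a bare import _squashed searches sys.path instead of the surrounding migrations package. The explicit package-relative form in the new lines resolves the sibling modules on both interpreters; it is shown here as it appears in the migration module and only works inside a package.

# The pattern applied in these hunks; the leading dot makes the import
# package-relative, which Python 3 requires for sibling modules.
from . import _squashed                  # was: import _squashed
from ._squashed_30 import SQUASHED_30    # was: from _squashed_30 import SQUASHED_30
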


@@ -8,8 +8,8 @@ import django.db.models.deletion
import awx.main.models.workflow
import awx.main.fields
import _squashed
from _squashed_30 import SQUASHED_30
from . import _squashed
from ._squashed_30 import SQUASHED_30
class Migration(migrations.Migration):
@@ -19,7 +19,7 @@ class Migration(migrations.Migration):
]
replaces = _squashed.replaces(SQUASHED_30) + [
(b'main', '0034_v310_release'),
('main', '0034_v310_release'),
]
operations = _squashed.operations(SQUASHED_30) + [
@@ -42,13 +42,13 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='jobevent',
name='uuid',
field=models.CharField(default=b'', max_length=1024, editable=False),
field=models.CharField(default='', max_length=1024, editable=False),
),
# Job Parent Event UUID
migrations.AddField(
model_name='jobevent',
name='parent_uuid',
field=models.CharField(default=b'', max_length=1024, editable=False),
field=models.CharField(default='', max_length=1024, editable=False),
),
# Modify the HA Instance
migrations.RemoveField(
@@ -63,19 +63,19 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='credential',
name='become_method',
field=models.CharField(default=b'', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[(b'', 'None'), (b'sudo', 'Sudo'), (b'su', 'Su'), (b'pbrun', 'Pbrun'), (b'pfexec', 'Pfexec'), (b'dzdo', 'DZDO'), (b'pmrun', 'Pmrun')]),
field=models.CharField(default='', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[('', 'None'), ('sudo', 'Sudo'), ('su', 'Su'), ('pbrun', 'Pbrun'), ('pfexec', 'Pfexec'), ('dzdo', 'DZDO'), ('pmrun', 'Pmrun')]),
),
# Add Workflows
migrations.AlterField(
model_name='unifiedjob',
name='launch_type',
field=models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency'), (b'workflow', 'Workflow'), (b'sync', 'Sync')]),
field=models.CharField(default='manual', max_length=20, editable=False, choices=[('manual', 'Manual'), ('relaunch', 'Relaunch'), ('callback', 'Callback'), ('scheduled', 'Scheduled'), ('dependency', 'Dependency'), ('workflow', 'Workflow'), ('sync', 'Sync')]),
),
migrations.CreateModel(
name='WorkflowJob',
fields=[
('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
('extra_vars', models.TextField(default=b'', blank=True)),
('extra_vars', models.TextField(default='', blank=True)),
],
options={
'ordering': ('id',),
@@ -101,8 +101,8 @@ class Migration(migrations.Migration):
name='WorkflowJobTemplate',
fields=[
('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
('extra_vars', models.TextField(default=b'', blank=True)),
('admin_role', awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True')),
('extra_vars', models.TextField(default='', blank=True)),
('admin_role', awx.main.fields.ImplicitRoleField(related_name='+', parent_role='singleton:system_administrator', to='main.Role', null='True')),
],
bases=('main.unifiedjobtemplate', models.Model),
),
@@ -176,7 +176,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='workflowjobtemplate',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='workflowjobtemplate',
@@ -186,7 +186,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='workflowjobtemplate',
name='read_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['singleton:system_auditor', 'organization.auditor_role', 'execute_role', 'admin_role'], to='main.Role', null='True'),
),
migrations.AddField(
model_name='workflowjobtemplatenode',
@@ -216,7 +216,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='workflowjobtemplate',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'),
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=['singleton:system_administrator', 'organization.admin_role'], to='main.Role', null='True'),
),
migrations.AlterField(
model_name='workflowjobtemplatenode',
@@ -269,23 +269,23 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='unifiedjob',
name='execution_node',
field=models.TextField(default=b'', editable=False, blank=True),
field=models.TextField(default='', editable=False, blank=True),
),
# SCM Revision
migrations.AddField(
model_name='project',
name='scm_revision',
field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The last revision fetched by a project update', verbose_name='SCM Revision'),
field=models.CharField(default='', editable=False, max_length=1024, blank=True, help_text='The last revision fetched by a project update', verbose_name='SCM Revision'),
),
migrations.AddField(
model_name='projectupdate',
name='job_type',
field=models.CharField(default=b'check', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check')]),
field=models.CharField(default='check', max_length=64, choices=[('run', 'Run'), ('check', 'Check')]),
),
migrations.AddField(
model_name='job',
name='scm_revision',
field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The SCM Revision from the Project used for this job, if available', verbose_name='SCM Revision'),
field=models.CharField(default='', editable=False, max_length=1024, blank=True, help_text='The SCM Revision from the Project used for this job, if available', verbose_name='SCM Revision'),
),
# Project Playbook Files
migrations.AddField(
@@ -307,12 +307,12 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='adhoccommandevent',
name='stdout',
field=models.TextField(default=b'', editable=False),
field=models.TextField(default='', editable=False),
),
migrations.AddField(
model_name='adhoccommandevent',
name='uuid',
field=models.CharField(default=b'', max_length=1024, editable=False),
field=models.CharField(default='', max_length=1024, editable=False),
),
migrations.AddField(
model_name='adhoccommandevent',
@@ -327,7 +327,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='jobevent',
name='playbook',
field=models.CharField(default=b'', max_length=1024, editable=False),
field=models.CharField(default='', max_length=1024, editable=False),
),
migrations.AddField(
model_name='jobevent',
@@ -337,7 +337,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='jobevent',
name='stdout',
field=models.TextField(default=b'', editable=False),
field=models.TextField(default='', editable=False),
),
migrations.AddField(
model_name='jobevent',
@@ -352,7 +352,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='adhoccommandevent',
name='event',
field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_skipped', 'Host Skipped'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]),
field=models.CharField(max_length=100, choices=[('runner_on_failed', 'Host Failed'), ('runner_on_ok', 'Host OK'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_skipped', 'Host Skipped'), ('debug', 'Debug'), ('verbose', 'Verbose'), ('deprecated', 'Deprecated'), ('warning', 'Warning'), ('system_warning', 'System Warning'), ('error', 'Error')]),
),
migrations.AlterField(
model_name='jobevent',
@@ -362,7 +362,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='jobevent',
name='event',
field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_item_on_ok', 'Item OK'), (b'runner_item_on_failed', 'Item Failed'), (b'runner_item_on_skipped', 'Item Skipped'), (b'runner_retry', 'Host Retry'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_include', 'Including File'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]),
field=models.CharField(max_length=100, choices=[('runner_on_failed', 'Host Failed'), ('runner_on_ok', 'Host OK'), ('runner_on_error', 'Host Failure'), ('runner_on_skipped', 'Host Skipped'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_no_hosts', 'No Hosts Remaining'), ('runner_on_async_poll', 'Host Polling'), ('runner_on_async_ok', 'Host Async OK'), ('runner_on_async_failed', 'Host Async Failure'), ('runner_item_on_ok', 'Item OK'), ('runner_item_on_failed', 'Item Failed'), ('runner_item_on_skipped', 'Item Skipped'), ('runner_retry', 'Host Retry'), ('runner_on_file_diff', 'File Difference'), ('playbook_on_start', 'Playbook Started'), ('playbook_on_notify', 'Running Handlers'), ('playbook_on_include', 'Including File'), ('playbook_on_no_hosts_matched', 'No Hosts Matched'), ('playbook_on_no_hosts_remaining', 'No Hosts Remaining'), ('playbook_on_task_start', 'Task Started'), ('playbook_on_vars_prompt', 'Variables Prompted'), ('playbook_on_setup', 'Gathering Facts'), ('playbook_on_import_for_host', 'internal: on Import for Host'), ('playbook_on_not_import_for_host', 'internal: on Not Import for Host'), ('playbook_on_play_start', 'Play Started'), ('playbook_on_stats', 'Playbook Complete'), ('debug', 'Debug'), ('verbose', 'Verbose'), ('deprecated', 'Deprecated'), ('warning', 'Warning'), ('system_warning', 'System Warning'), ('error', 'Error')]),
),
migrations.AlterUniqueTogether(
name='adhoccommandevent',
@@ -505,7 +505,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='host',
name='instance_id',
field=models.CharField(default=b'', help_text='The value used by the remote inventory source to uniquely identify the host', max_length=1024, blank=True),
field=models.CharField(default='', help_text='The value used by the remote inventory source to uniquely identify the host', max_length=1024, blank=True),
),
migrations.AlterField(
model_name='project',
@@ -520,7 +520,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='project',
name='scm_type',
field=models.CharField(default=b'', choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'),
field=models.CharField(default='', choices=[('', 'Manual'), ('git', 'Git'), ('hg', 'Mercurial'), ('svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'),
),
migrations.AlterField(
model_name='project',
@@ -535,7 +535,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='project',
name='scm_url',
field=models.CharField(default=b'', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True),
field=models.CharField(default='', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True),
),
migrations.AlterField(
model_name='project',
@@ -555,12 +555,12 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='projectupdate',
name='scm_type',
field=models.CharField(default=b'', choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'),
field=models.CharField(default='', choices=[('', 'Manual'), ('git', 'Git'), ('hg', 'Mercurial'), ('svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'),
),
migrations.AlterField(
model_name='projectupdate',
name='scm_url',
field=models.CharField(default=b'', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True),
field=models.CharField(default='', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True),
),
migrations.AlterField(
model_name='projectupdate',
@@ -600,7 +600,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='unifiedjob',
name='execution_node',
field=models.TextField(default=b'', help_text='The Tower node the job executed on.', editable=False, blank=True),
field=models.TextField(default='', help_text='The Tower node the job executed on.', editable=False, blank=True),
),
migrations.AlterField(
model_name='unifiedjob',
@@ -610,7 +610,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='unifiedjob',
name='job_explanation',
field=models.TextField(default=b'', help_text="A status field to indicate the state of the job if it wasn't able to run and capture stdout", editable=False, blank=True),
field=models.TextField(default='', help_text="A status field to indicate the state of the job if it wasn't able to run and capture stdout", editable=False, blank=True),
),
migrations.AlterField(
model_name='unifiedjob',
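
The hunks above are the mechanical core of this sweep: every b'' default and every bytestring choice key becomes a native str. The distinction is load-bearing on Python 3, where bytes and str are disjoint types that never compare equal, so the old literals would stop matching the str values Django reads and writes. A minimal illustration of the pitfall, plain Python with no Django involved:

# On Python 2, b'' == '' was True, so the b prefix was harmless.
# On Python 3, bytes and str never compare equal:
assert b'' != ''
assert b'runner_on_ok' != 'runner_on_ok'
# With choices keyed as b'runner_on_ok', a row holding the str
# 'runner_on_ok' would no longer match its own choice entry.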

@@ -2,8 +2,8 @@
from __future__ import unicode_literals
from django.db import migrations
import _squashed
from _squashed_31 import SQUASHED_31
from . import _squashed
from ._squashed_31 import SQUASHED_31
class Migration(migrations.Migration):
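
This two-line import fix is a separate Python 3 issue: PEP 328 removed implicit relative imports, so inside a package (presumably awx/main/migrations here) a bare "from _squashed_31 import SQUASHED_31" raises ModuleNotFoundError unless the directory happens to be on sys.path. The explicit relative form is valid on both interpreters:

# Implicit (Python 2 only): from _squashed_31 import SQUASHED_31
# Explicit (PEP 328, works on Python 2.6+ and 3):
from . import _squashed
from ._squashed_31 import SQUASHED_31
# An absolute import (from awx.main.migrations import _squashed) would
# work too, at the cost of hard-coding the package path.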

@@ -72,7 +72,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='inventory',
name='kind',
field=models.CharField(default=b'', help_text='Kind of inventory being represented.', max_length=32, blank=True, choices=[(b'', 'Hosts have a direct link to this inventory.'), (b'smart', 'Hosts for inventory generated using the host_filter property.')]),
field=models.CharField(default='', help_text='Kind of inventory being represented.', max_length=32, blank=True, choices=[('', 'Hosts have a direct link to this inventory.'), ('smart', 'Hosts for inventory generated using the host_filter property.')]),
),
migrations.CreateModel(
name='SmartInventoryMembership',
@@ -143,7 +143,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='inventorysource',
name='scm_last_revision',
field=models.CharField(default=b'', max_length=1024, editable=False, blank=True),
field=models.CharField(default='', max_length=1024, editable=False, blank=True),
),
migrations.AddField(
model_name='inventorysource',
@@ -163,27 +163,27 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a Project'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a Project'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='inventorysource',
name='source_path',
field=models.CharField(default=b'', max_length=1024, blank=True),
field=models.CharField(default='', max_length=1024, blank=True),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source_path',
field=models.CharField(default=b'', max_length=1024, blank=True),
field=models.CharField(default='', max_length=1024, blank=True),
),
migrations.AlterField(
model_name='unifiedjob',
name='launch_type',
field=models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency'), (b'workflow', 'Workflow'), (b'sync', 'Sync'), (b'scm', 'SCM Update')]),
field=models.CharField(default='manual', max_length=20, editable=False, choices=[('manual', 'Manual'), ('relaunch', 'Relaunch'), ('callback', 'Callback'), ('scheduled', 'Scheduled'), ('dependency', 'Dependency'), ('workflow', 'Workflow'), ('sync', 'Sync'), ('scm', 'SCM Update')]),
),
migrations.AddField(
model_name='inventorysource',
@@ -211,12 +211,12 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='inventorysource',
name='verbosity',
field=models.PositiveIntegerField(default=1, blank=True, choices=[(0, b'0 (WARNING)'), (1, b'1 (INFO)'), (2, b'2 (DEBUG)')]),
field=models.PositiveIntegerField(default=1, blank=True, choices=[(0, '0 (WARNING)'), (1, '1 (INFO)'), (2, '2 (DEBUG)')]),
),
migrations.AddField(
model_name='inventoryupdate',
name='verbosity',
field=models.PositiveIntegerField(default=1, blank=True, choices=[(0, b'0 (WARNING)'), (1, b'1 (INFO)'), (2, b'2 (DEBUG)')]),
field=models.PositiveIntegerField(default=1, blank=True, choices=[(0, '0 (WARNING)'), (1, '1 (INFO)'), (2, '2 (DEBUG)')]),
),
# Job Templates
@@ -317,7 +317,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='inventory',
name='kind',
field=models.CharField(default=b'', help_text='Kind of inventory being represented.', max_length=32, blank=True, choices=[(b'', 'Hosts have a direct link to this inventory.'), (b'smart', 'Hosts for inventory generated using the host_filter property.')]),
field=models.CharField(default='', help_text='Kind of inventory being represented.', max_length=32, blank=True, choices=[('', 'Hosts have a direct link to this inventory.'), ('smart', 'Hosts for inventory generated using the host_filter property.')]),
),
# Timeout help text update
@@ -378,9 +378,9 @@ class Migration(migrations.Migration):
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(default=b'', blank=True)),
('description', models.TextField(default='', blank=True)),
('name', models.CharField(max_length=512)),
('kind', models.CharField(max_length=32, choices=[(b'ssh', 'Machine'), (b'vault', 'Vault'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'cloud', 'Cloud'), (b'insights', 'Insights')])),
('kind', models.CharField(max_length=32, choices=[('ssh', 'Machine'), ('vault', 'Vault'), ('net', 'Network'), ('scm', 'Source Control'), ('cloud', 'Cloud'), ('insights', 'Insights')])),
('managed_by_tower', models.BooleanField(default=False, editable=False)),
('inputs', awx.main.fields.CredentialTypeInputField(default={}, blank=True, help_text='Enter inputs using either JSON or YAML syntax. Use the radio button to toggle between the two. Refer to the Ansible Tower documentation for example syntax.')),
('injectors', awx.main.fields.CredentialTypeInjectorField(default={}, blank=True, help_text='Enter injectors using either JSON or YAML syntax. Use the radio button to toggle between the two. Refer to the Ansible Tower documentation for example syntax.')),
@@ -435,7 +435,7 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='credential',
name='become_method',
field=models.CharField(default=b'', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[(b'', 'None'), (b'sudo', 'Sudo'), (b'su', 'Su'), (b'pbrun', 'Pbrun'), (b'pfexec', 'Pfexec'), (b'dzdo', 'DZDO'), (b'pmrun', 'Pmrun'), (b'runas', 'Runas')]),
field=models.CharField(default='', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[('', 'None'), ('sudo', 'Sudo'), ('su', 'Su'), ('pbrun', 'Pbrun'), ('pfexec', 'Pfexec'), ('dzdo', 'DZDO'), ('pmrun', 'Pmrun'), ('runas', 'Runas')]),
),
# Connecting activity stream
@@ -496,6 +496,6 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='unifiedjob',
name='execution_node',
field=models.TextField(default=b'', help_text='The node the job executed on.', editable=False, blank=True),
field=models.TextField(default='', help_text='The node the job executed on.', editable=False, blank=True),
),
]
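
A sweep like this touches the same literal pattern hundreds of times across the migration tree, which makes it easy to miss a straggler by eye. A throwaway scan for leftover bytestring literals (a hypothetical helper, not part of this changeset; the migrations path is assumed) is one way to confirm completeness:

import pathlib
import re

# Matches b'...' literals, including ones with escaped quotes inside.
BYTES_LITERAL = re.compile(r"\bb'(?:[^'\\]|\\.)*'")

def find_bytes_literals(root='awx/main/migrations'):
    for path in sorted(pathlib.Path(root).glob('*.py')):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if BYTES_LITERAL.search(line):
                print(f'{path}:{lineno}: {line.strip()}')

find_bytes_literals()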

@@ -20,11 +20,11 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a Project'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'rhv', 'Red Hat Virtualization'), (b'tower', 'Ansible Tower'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('rhv', 'Red Hat Virtualization'), ('tower', 'Ansible Tower'), ('custom', 'Custom Script')]),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a Project'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'rhv', 'Red Hat Virtualization'), (b'tower', 'Ansible Tower'), (b'custom', 'Custom Script')]),
field=models.CharField(default='', max_length=32, blank=True, choices=[('', 'Manual'), ('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('cloudforms', 'Red Hat CloudForms'), ('openstack', 'OpenStack'), ('rhv', 'Red Hat Virtualization'), ('tower', 'Ansible Tower'), ('custom', 'Custom Script')]),
),
]
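
The same source list shows up verbatim in hunk after hunk because Django migrations freeze a copy of a field's choices every time the field is altered; only the model holds the single authoritative list. A sketch of the model-side shape (names and base class assumed, not copied from AWX):

from django.db import models

SOURCE_CHOICES = [
    ('', 'Manual'),
    ('file', 'File, Directory or Script'),
    ('scm', 'Sourced from a Project'),
    # ... cloud sources elided ...
    ('rhv', 'Red Hat Virtualization'),
    ('tower', 'Ansible Tower'),
    ('custom', 'Custom Script'),
]

class InventorySource(models.Model):  # the real AWX base class differs
    source = models.CharField(default='', max_length=32, blank=True,
                              choices=SOURCE_CHOICES)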

@@ -21,9 +21,9 @@ class Migration(migrations.Migration):
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('event_data', awx.main.fields.JSONField(blank=True, default={})),
('uuid', models.CharField(default=b'', editable=False, max_length=1024)),
('uuid', models.CharField(default='', editable=False, max_length=1024)),
('counter', models.PositiveIntegerField(default=0, editable=False)),
('stdout', models.TextField(default=b'', editable=False)),
('stdout', models.TextField(default='', editable=False)),
('verbosity', models.PositiveIntegerField(default=0, editable=False)),
('start_line', models.PositiveIntegerField(default=0, editable=False)),
('end_line', models.PositiveIntegerField(default=0, editable=False)),
@@ -39,17 +39,17 @@ class Migration(migrations.Migration):
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('event', models.CharField(choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_item_on_ok', 'Item OK'), (b'runner_item_on_failed', 'Item Failed'), (b'runner_item_on_skipped', 'Item Skipped'), (b'runner_retry', 'Host Retry'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_include', 'Including File'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')], max_length=100)),
('event', models.CharField(choices=[('runner_on_failed', 'Host Failed'), ('runner_on_ok', 'Host OK'), ('runner_on_error', 'Host Failure'), ('runner_on_skipped', 'Host Skipped'), ('runner_on_unreachable', 'Host Unreachable'), ('runner_on_no_hosts', 'No Hosts Remaining'), ('runner_on_async_poll', 'Host Polling'), ('runner_on_async_ok', 'Host Async OK'), ('runner_on_async_failed', 'Host Async Failure'), ('runner_item_on_ok', 'Item OK'), ('runner_item_on_failed', 'Item Failed'), ('runner_item_on_skipped', 'Item Skipped'), ('runner_retry', 'Host Retry'), ('runner_on_file_diff', 'File Difference'), ('playbook_on_start', 'Playbook Started'), ('playbook_on_notify', 'Running Handlers'), ('playbook_on_include', 'Including File'), ('playbook_on_no_hosts_matched', 'No Hosts Matched'), ('playbook_on_no_hosts_remaining', 'No Hosts Remaining'), ('playbook_on_task_start', 'Task Started'), ('playbook_on_vars_prompt', 'Variables Prompted'), ('playbook_on_setup', 'Gathering Facts'), ('playbook_on_import_for_host', 'internal: on Import for Host'), ('playbook_on_not_import_for_host', 'internal: on Not Import for Host'), ('playbook_on_play_start', 'Play Started'), ('playbook_on_stats', 'Playbook Complete'), ('debug', 'Debug'), ('verbose', 'Verbose'), ('deprecated', 'Deprecated'), ('warning', 'Warning'), ('system_warning', 'System Warning'), ('error', 'Error')], max_length=100)),
('event_data', awx.main.fields.JSONField(blank=True, default={})),
('failed', models.BooleanField(default=False, editable=False)),
('changed', models.BooleanField(default=False, editable=False)),
('uuid', models.CharField(default=b'', editable=False, max_length=1024)),
('playbook', models.CharField(default=b'', editable=False, max_length=1024)),
('play', models.CharField(default=b'', editable=False, max_length=1024)),
('role', models.CharField(default=b'', editable=False, max_length=1024)),
('task', models.CharField(default=b'', editable=False, max_length=1024)),
('uuid', models.CharField(default='', editable=False, max_length=1024)),
('playbook', models.CharField(default='', editable=False, max_length=1024)),
('play', models.CharField(default='', editable=False, max_length=1024)),
('role', models.CharField(default='', editable=False, max_length=1024)),
('task', models.CharField(default='', editable=False, max_length=1024)),
('counter', models.PositiveIntegerField(default=0, editable=False)),
('stdout', models.TextField(default=b'', editable=False)),
('stdout', models.TextField(default='', editable=False)),
('verbosity', models.PositiveIntegerField(default=0, editable=False)),
('start_line', models.PositiveIntegerField(default=0, editable=False)),
('end_line', models.PositiveIntegerField(default=0, editable=False)),
@@ -66,9 +66,9 @@ class Migration(migrations.Migration):
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('event_data', awx.main.fields.JSONField(blank=True, default={})),
('uuid', models.CharField(default=b'', editable=False, max_length=1024)),
('uuid', models.CharField(default='', editable=False, max_length=1024)),
('counter', models.PositiveIntegerField(default=0, editable=False)),
('stdout', models.TextField(default=b'', editable=False)),
('stdout', models.TextField(default='', editable=False)),
('verbosity', models.PositiveIntegerField(default=0, editable=False)),
('start_line', models.PositiveIntegerField(default=0, editable=False)),
('end_line', models.PositiveIntegerField(default=0, editable=False)),
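
One thing the sweep deliberately leaves alone: event_data keeps default={}, a mutable default that every instance shares. Modern Django flags this and suggests a callable so each row gets a fresh dict; if it were ever cleaned up, the model-side fix would look like the following (an illustrative stand-in using the stock models.JSONField from Django 3.1+, not awx.main.fields.JSONField):

from django.db import models

class AdHocCommandEvent(models.Model):  # stand-in model for illustration
    # default=dict gives each new row its own empty dict instead of one
    # {} object shared by every instance:
    event_data = models.JSONField(blank=True, default=dict)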

@@ -18,77 +18,77 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='organization',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AddField(
model_name='organization',
name='job_template_admin_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AddField(
model_name='organization',
name='credential_admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AddField(
model_name='organization',
name='inventory_admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AddField(
model_name='organization',
name='project_admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AddField(
model_name='organization',
name='workflow_admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AddField(
model_name='organization',
name='notification_admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='credential',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'singleton:system_administrator', b'organization.credential_admin_role'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['singleton:system_administrator', 'organization.credential_admin_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='inventory',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'organization.inventory_admin_role', related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='organization.inventory_admin_role', related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='project',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'organization.project_admin_role', b'singleton:system_administrator'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['organization.project_admin_role', 'singleton:system_administrator'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='workflowjobtemplate',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'singleton:system_administrator', b'organization.workflow_admin_role'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['singleton:system_administrator', 'organization.workflow_admin_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='workflowjobtemplate',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'admin_role', b'organization.execute_role'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role', 'organization.execute_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='jobtemplate',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'project.organization.job_template_admin_role', b'inventory.organization.job_template_admin_role'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='jobtemplate',
name='execute_role',
field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'admin_role', b'project.organization.execute_role', b'inventory.organization.execute_role'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role', 'project.organization.execute_role', 'inventory.organization.execute_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='organization',
name='member_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'admin_role', b'execute_role', b'project_admin_role', b'inventory_admin_role', b'workflow_admin_role', b'notification_admin_role', b'credential_admin_role', b'job_template_admin_role'], related_name='+', to='main.Role'),
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role', 'execute_role', 'project_admin_role', 'inventory_admin_role', 'workflow_admin_role', 'notification_admin_role', 'credential_admin_role', 'job_template_admin_role'], related_name='+', to='main.Role'),
),
]
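
Worth pausing on: null=b'True' becomes null='True', not the boolean True. Both spellings are simply truthy values wherever Django tests the flag, so the column stays nullable either way; keeping the string form keeps the conversion purely mechanical (b prefix dropped, nothing else changed in the frozen migrations). A quick sanity check of that claim:

# Old and new spellings are equally truthy, i.e. both act like null=True:
assert bool(b'True') and bool('True')
# Only an empty value would flip the meaning:
assert not bool(b'') and not bool('')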

@@ -35,8 +35,8 @@ class Migration(migrations.Migration):
('skip_authorization', models.BooleanField(default=False)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('description', models.TextField(blank=True, default=b'')),
('logo_data', models.TextField(default=b'', editable=False, validators=[django.core.validators.RegexValidator(re.compile(b'.*'))])),
('description', models.TextField(blank=True, default='')),
('logo_data', models.TextField(default='', editable=False, validators=[django.core.validators.RegexValidator(re.compile('.*'))])),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='main_oauth2application', to=settings.AUTH_USER_MODEL)),
],
options={
@@ -52,7 +52,7 @@ class Migration(migrations.Migration):
('scope', models.TextField(blank=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('description', models.CharField(blank=True, default=b'', max_length=200)),
('description', models.CharField(blank=True, default='', max_length=200)),
('last_used', models.DateTimeField(default=None, editable=False, null=True)),
('application', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.OAUTH2_PROVIDER_APPLICATION_MODEL)),
('user', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='main_oauth2accesstoken', to=settings.AUTH_USER_MODEL)),
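
The re.compile change in the logo_data validator above is the one spot in this sweep where the old literal would not just fail to match but raise outright: Python 3 refuses to apply a bytes pattern to str data. A quick demonstration:

import re

re.compile('.*').match('logo data')       # fine: str pattern on str input

try:
    re.compile(b'.*').match('logo data')  # bytes pattern on str input
except TypeError as exc:
    print(exc)  # cannot use a bytes pattern on a string-like object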
