Compare commits

...

697 Commits

Author SHA1 Message Date
Shane McDonald
14f42af700 Update CHANGELOG.md 2021-03-23 10:30:59 -04:00
softwarefactory-project-zuul[bot]
9b702e46fe Merge pull request #9655 from ansible/jakemcdermott-patch-changelog
Update some links and notes in the changelog

SUMMARY
Fix typo, remove duplicate change note, fix a wrong link, add link to the ui virtualenv removal

Reviewed-by: Ryan Petrello <None>
2021-03-22 19:47:22 +00:00
softwarefactory-project-zuul[bot]
9608539710 Merge pull request #9596 from nixocio/ui_issue_9592
Allow one to select non-global execution environments for organizations

Allow one to select non-global EE when editing an Organization.
See: #9592
All of those EEs should be present as choices when editing the Default organization.

Editing Default organization.

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-22 19:38:44 +00:00
Jake McDermott
23ac8d346f Update some links and notes in the changelog 2021-03-22 14:59:04 -04:00
Shane McDonald
ff00239c09 Update CHANGELOG.md 2021-03-22 13:05:52 -04:00
Shane McDonald
fc7b2212e6 Merge pull request #9652 from shanemcd/fix-changelog-dates
Fix changelog dates
2021-03-22 12:53:36 -04:00
Shane McDonald
ade0734af7 Fix changelog dates 2021-03-22 12:42:40 -04:00
Shane McDonald
1efd189d52 Revert "Remove search filtering from changelog"
This reverts commit f6fb3e0b41.
2021-03-22 12:40:11 -04:00
softwarefactory-project-zuul[bot]
52799c169e Merge pull request #9605 from fosterseth/fix_dev_cluster_port_conflict
fix port conflict in dev cluster

SUMMARY

problem: the loop adds 100 to ports 7899 and 7999 for each node, which yields 7999 and 8099 on the next iteration, so the 7999 conflicts with the previous node's port
fix: add 1000 instead so the ranges never overlap (a toy sketch of the arithmetic follows this entry)
Also, haproxy was being defined twice; now it renders once.

ISSUE TYPE


Bugfix Pull Request

COMPONENT NAME


API

AWX VERSION

awx: 17.0.1

Reviewed-by: Seth Foster <None>
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Shane McDonald <me@shanemcd.com>
2021-03-22 15:30:18 +00:00
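The port arithmetic described in the summary above is easy to see in a toy sketch. This is an illustration only; the actual dev-cluster configuration lives in a template in the AWX repo, and the names below (node_index, step) are made up.

```python
# Toy illustration of the port-offset problem described above (not the
# actual template logic; node_index and step are made-up names).
def node_ports(node_index, step):
    return (7899 + node_index * step, 7999 + node_index * step)

# With a step of 100, node 0 ends at 7999 and node 1 starts at 7999: conflict.
print([node_ports(i, 100) for i in range(2)])   # [(7899, 7999), (7999, 8099)]

# With a step of 1000 the ranges no longer overlap.
print([node_ports(i, 1000) for i in range(2)])  # [(7899, 7999), (8899, 8999)]
```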
softwarefactory-project-zuul[bot]
73f819b632 Merge pull request #9647 from jbradberry/ignore-vscode
Instruct git to ignore the .vscode/ directory

SUMMARY
Instruct git to ignore the .vscode/ directory
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

API

AWX VERSION
awx: 18.0.0

Reviewed-by: Ryan Petrello <None>
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
2021-03-22 15:18:38 +00:00
softwarefactory-project-zuul[bot]
f3116a3250 Merge pull request #9648 from tiagodread/ouiaid-01
add ouiaID to select and cancel buttons on modals

SUMMARY
Add ouiaId prop to select and cancel button within modals

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
2021-03-22 15:14:55 +00:00
softwarefactory-project-zuul[bot]
43522d30e6 Merge pull request #9576 from AlexSCorey/9373-InventorySourceSilentFailure
Fixes silent error on SCM subform

SUMMARY
This addresses #9373. It disallows the user from selecting both Update on launch and Update on project update at the same time. It also adds a bit of info to the tooltip, including a link to the project in question, so the user can edit the project to allow updating on launch and on project update.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

UI

AWX VERSION
ADDITIONAL INFORMATION

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-22 15:10:50 +00:00
Seth Foster
f8641bfa5e fix port conflict in dev cluster. Output only one haproxy def 2021-03-22 10:45:50 -04:00
Tiago Goes
8b9a92f306 add ouiaID to select and cancel buttons on modals 2021-03-22 11:41:51 -03:00
Jeff Bradberry
c508695ed0 Instruct git to ignore the .vscode/ directory 2021-03-22 10:24:11 -04:00
softwarefactory-project-zuul[bot]
305f717e88 Merge pull request #9646 from shanemcd/pdd-wrapper
Create a wrapper directory for the private data dir

Reviewed-by: None <None>
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Elijah DeLee <kdelee@redhat.com>
2021-03-22 14:15:40 +00:00
softwarefactory-project-zuul[bot]
7d190da1c4 Merge pull request #9544 from AlexSCorey/9485-9319-7516-fix
Fixes Several Bugs

SUMMARY
This addresses #9485 (job template project field validation), #9319 (the Job Details view would only show job type run, even if it was a job type check), and #7516 (changes the Completed Jobs tab for a JT or WFJT to Jobs, since it shows completed as well as pending/running jobs).
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

UI

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: John Mitchell <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-22 14:00:08 +00:00
Shane McDonald
fd63937fa2 Create a wrapper directory for the private data dir 2021-03-22 09:24:48 -04:00
softwarefactory-project-zuul[bot]
5e64e8771e Merge pull request #9613 from mabashian/8921-node-cred-password
Prevent users from selecting credentials that prompt for passwords on workflow nodes and schedules

SUMMARY
link #8921
If a user selects a job template with default credentials that prompt for passwords (but does not prompt for credentials) then the user should not be allowed to create the node and a different JT must be selected:

If a user selects a credential that prompts for passwords when creating/editing a workflow node or schedule then we show this error:

If a user removes a credential that exists in the default collection of credentials on the JT then it must be replaced.  This is the error we show:

If a user attempts to create a schedule for a job template with default credentials that prompt (but does not prompt for credentials) then the API responds with this error:

I believe this UX is consistent with the old UI but I am double checking that now.
ISSUE TYPE

Feature Pull Request

COMPONENT NAME

UI

Reviewed-by: Kersom <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-22 13:17:26 +00:00
softwarefactory-project-zuul[bot]
597435141d Merge pull request #9563 from AlexSCorey/8769-WizardFailure
Fixes crashing wizard, and adds error handle on adding role

SUMMARY
This addresses #8769.  It also adds error handling if there is some sort of request error during the submit request.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

UI

AWX VERSION
ADDITIONAL INFORMATION

Reviewed-by: John Mitchell <None>
2021-03-22 12:16:48 +00:00
softwarefactory-project-zuul[bot]
f11bc1154a Merge pull request #9637 from mabashian/ouiaids
Adds ouiaId's to various buttons

SUMMARY
@tiagodread @unlikelyzero @one-t @akus062381 this will likely break something because I changed some existing ouia-id's so that they follow a consistent structure.
^^ Let's let one of them merge this
I also removed an unused component

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: John Hill <johill@redhat.com>
2021-03-21 19:55:56 +00:00
softwarefactory-project-zuul[bot]
cfff30f024 Merge pull request #9519 from mabashian/7603-custom-login
Adds support for html in custom login text

SUMMARY
link #7603
I couldn't come up with a way to do this without breaking up the component and discontinuing use of the LoginPage PF component.  This is because LoginPage expects the textContent component (what we use to display the custom login text) to be a string.  By using the underlying LoginPage components I reconstructed the login page and got more control over that prop.
The custom message in the old UI supported both strings and HTML:

So we need to support rendering HTML but we need to do it in a safe way.  Our solution to that was https://docs.angularjs.org/api/ngSanitize.  React doesn't seem to have anything like this built in so I went looking for outside help.  html-entities is already included in our project but as best as I can tell that lib is mainly focused on swapping special characters out for html entities.  I wanted something that was going to strip the HTML of bits that could be exploited by a malicious actor.
I settled on https://www.npmjs.com/package/sanitize-html because it was a) small and b) actively maintained.  The API was simple and let me sanitize the HTML before setting it using dangerouslySetInnerHTML.  If we need to tweak the configuration away from the default values then we can certainly do that.


ISSUE TYPE

Feature Pull Request

COMPONENT NAME

UI

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
2021-03-19 22:45:42 +00:00
mabashian
3cbb516bac Fixed changelog 2021-03-19 18:07:51 -04:00
mabashian
467536f93f Prevent users from selecting credentials that prompt for passwords on workflow nodes and schedules 2021-03-19 18:04:23 -04:00
softwarefactory-project-zuul[bot]
3878d8f7d8 Merge pull request #9642 from mabashian/9640-workflow-output
Only attempt to fetch event options on non workflow jobs

SUMMARY
link #9640
This was fallout from output search filtering.  We need this request for non workflow jobs so that we can build the search options.
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME

UI

Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-19 21:42:21 +00:00
mabashian
112bab274d Fixes linting errors 2021-03-19 17:21:16 -04:00
mabashian
9b3aa60468 Only attempt to fetch event options on non workflow jobs 2021-03-19 16:52:26 -04:00
mabashian
1d01efa024 Adds data-cy attributes to some login components 2021-03-19 15:55:22 -04:00
softwarefactory-project-zuul[bot]
f98c6b5c5b Merge pull request #9638 from mabashian/cred-type-ouia-ids
Adds identifiers to various credential elements

SUMMARY
See commits

Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-19 19:37:06 +00:00
mabashian
854da96976 Add line to changelog 2021-03-19 15:12:18 -04:00
mabashian
3211323a4e Adds support for html in custom login text 2021-03-19 15:06:51 -04:00
mabashian
2d396927dd Adds identifiers to credential details 2021-03-19 14:59:17 -04:00
mabashian
54e2608cf4 Adds identifier to credential type options 2021-03-19 14:50:07 -04:00
mabashian
44cd8819f4 Remove CollapsibleSection as it's no longer used in the app. Adds ouiaId's to various buttons. 2021-03-19 14:33:06 -04:00
softwarefactory-project-zuul[bot]
5c0850b279 Merge pull request #9368 from mabashian/7256-cred-password-replace
Add support for replace/revert on secret credential fields

SUMMARY
link #7256
Note that this only applies to editing an existing credential.  You should not see this button on fields when adding a new credential.
When editing an existing credential the replace button should show up on fields where secret is true and the field has an existing value that is not an external credential.  Examples:



Fields with external credentials should look the same:

Initially the button tooltip should say Replace.  Clicking Replace will clear out the previously saved value and enable the form field:

The tooltip will change to Revert. Clicking Revert will take the field back to its original state.
I also noticed a race condition which would result in the input fields (subform) not being populated due to the form rendering before the request(s) were completed.  I fixed this.
ISSUE TYPE

Feature Pull Request

COMPONENT NAME

UI

Reviewed-by: Kersom <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-19 17:40:16 +00:00
softwarefactory-project-zuul[bot]
7a512c1de7 Merge pull request #9208 from mabashian/job-output-search-2
Add support for filtering and pagination on job output

SUMMARY
link #6612
link #5906
This PR adds the ability to filter job events and also includes logic to handle fetching filtered job events across different pages.
Note that the verbosity dropdown included in #5906 is not included in this work.  I don't think that's possible without api changes.
As part of this work, I converted JobOutput.jsx from a class based component to a functional component.  I've tried my best to make sure that all existing functionality has remained the same by comparing the experience of this branch to devel.
Like the old UI, the output filter is disabled while the job is running.
ISSUE TYPE

Feature Pull Request

COMPONENT NAME

UI

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
2021-03-19 16:38:23 +00:00
softwarefactory-project-zuul[bot]
5bdc8b0015 Merge pull request #9497 from ryanpetrello/bump-18
Bump version to 18.0.0

Reviewed-by: Shane McDonald <me@shanemcd.com>
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
Reviewed-by: Jim Ladd <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
2021-03-19 16:00:08 +00:00
mabashian
385ef0b0a4 Adds ouiaId's to output page control buttons 2021-03-19 11:27:43 -04:00
mabashian
6495779e40 Fix bug where output was not loading after job finished and cancel button was shown instead of delete 2021-03-19 10:48:05 -04:00
softwarefactory-project-zuul[bot]
012902f4fe Merge pull request #9598 from beeankha/ansible_version_analytics
Enable Ansible version to be collected from EEs

SUMMARY

Connecting issue #9473
This PR, along with this Ansible-Runner PR, enables us to obtain the Ansible (core) version for each execution environment that is utilized. This info can be gathered from the new ansible_version column on the main_unifiedjobs table.

ISSUE TYPE


Feature Pull Request

COMPONENT NAME


API

AWX VERSION

awx: 17.0.1

ADDITIONAL INFORMATION
Screenshot/example of the DB output:

Reviewed-by: Ryan Petrello <None>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Ladislav Smola <lsmola@redhat.com>
Reviewed-by: Shane McDonald <me@shanemcd.com>
2021-03-19 14:06:03 +00:00
mabashian
65bdf0baf7 Add support for replace/revert on secret credential fields 2021-03-19 09:59:13 -04:00
Shane McDonald
f6fb3e0b41 Remove search filtering from changelog
This has not landed yet
2021-03-19 09:01:33 -04:00
softwarefactory-project-zuul[bot]
6fae6168d9 Merge pull request #9619 from mabashian/9568-session-modal-ouia
Adds ouiaId's to session modal and buttons

SUMMARY
link #9568

Reviewed-by: Kersom <None>
2021-03-19 12:57:25 +00:00
Shane McDonald
9f91190f4b Add changelog entries for installer directory and custom venv removals 2021-03-18 18:56:47 -04:00
Ryan Petrello
23f2ac4cbc Bump version to 18.0.0
Co-Authored-By: Shane McDonald <me@shanemcd.com>
Co-Authored-By: AlanCoding <arominge@redhat.com>
Co-Authored-By: Rebeccah Hunter <rhunter@redhat.com>
Co-Authored-By: Graham Mainwaring <graham@mhn.org>
Co-Authored-By: Jeff Bradberry <jeff.bradberry@gmail.com>
Co-Authored-By: beeankha <beeankha@gmail.com>
Co-Authored-By: Elyézer Rezende <elyezermr@gmail.com>
Co-Authored-By: Yanis Guenane <yguenane@redhat.com>
Co-Authored-By: Jim Ladd <jladd@redhat.com>
Co-Authored-By: Seth Foster <fosterbseth@gmail.com>
Co-Authored-By: Elijah DeLee <kdelee@redhat.com>
Co-Authored-By: Tiago Góes <tiago.goes2009@gmail.com>
Co-Authored-By: Yago Marques <yagomarquesja@gmail.com>
Co-Authored-By: shebangbash <ndasilva@redhat.com>
Co-Authored-By: Jake McDermott <jmcdermott@ansible.com>
Co-Authored-By: Christian Adams <rooftopcellist@gmail.com>
Co-Authored-By: nixocio <nixocio@gmail.com>
Co-Authored-By: Caleb Boylan <calebboylan@gmail.com>
2021-03-18 18:56:47 -04:00
softwarefactory-project-zuul[bot]
f06141eb00 Merge pull request #9616 from shanemcd/awx-logos
Fix paths used for detecting and copying awx-logos

Reviewed-by: Elijah DeLee <kdelee@redhat.com>
Reviewed-by: Ryan Petrello <None>
2021-03-18 22:23:11 +00:00
softwarefactory-project-zuul[bot]
a971e20e05 Merge pull request #9618 from beeankha/fix_cred_input_source_test_lint_errors
Fix Linting Errors in Credential Input Source Test File

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
2021-03-18 22:10:57 +00:00
softwarefactory-project-zuul[bot]
edc97a24b2 Merge pull request #9617 from shanemcd/derp
Use correct image for awx-manage inventory_import

Reviewed-by: Elijah DeLee <kdelee@redhat.com>
2021-03-18 22:09:11 +00:00
mabashian
8dec13a25d Adds ouiaId's to session modal and buttons 2021-03-18 16:58:07 -04:00
beeankha
586019fe8f Fix linting errors in credential input source test file 2021-03-18 15:18:57 -04:00
Shane McDonald
d7b8a20a75 Use correct image for awx-manage inventory_import 2021-03-18 15:18:47 -04:00
Shane McDonald
d98e4c304f Fix paths used for detecting and copying awx-logos 2021-03-18 15:09:45 -04:00
mabashian
eb4dca9dc1 Fix padding on output search 2021-03-18 14:15:14 -04:00
mabashian
74460fa2d7 Adds ouiaId's to output page buttons 2021-03-18 14:08:55 -04:00
mabashian
ac06f9e432 Fix output search styling 2021-03-18 14:08:34 -04:00
mabashian
2c8d524b1a Revert "Fix output search styling"
This reverts commit 46728a0931.
2021-03-18 14:07:44 -04:00
mabashian
46728a0931 Fix output search styling 2021-03-18 14:07:13 -04:00
beeankha
39d785070c Check for the existence of ansible.txt file explicitly 2021-03-18 13:51:54 -04:00
beeankha
dca29e756d Update migration file order and name 2021-03-18 13:50:19 -04:00
beeankha
2a9d728b70 Set max string length to a wayyyy bigger number just in case 2021-03-18 12:55:48 -04:00
beeankha
ef6297377b Enable Ansible version to be collected from EEs 2021-03-18 12:55:48 -04:00
mabashian
73fb332af3 Adds line to changelog for output pagination 2021-03-18 09:57:11 -04:00
mabashian
16ad68a6b0 Fix typo after merge conflict 2021-03-18 09:53:50 -04:00
mabashian
88c4feb3ae Fix test after updating page index 2021-03-18 09:53:50 -04:00
mabashian
f65839ec8f Move loading spinner inside output panel 2021-03-18 09:53:50 -04:00
mabashian
8e0a22c766 Fix page number when only fetching one row 2021-03-18 09:53:50 -04:00
mabashian
c0fb2ddbdc Adds ability to cancel jobs from output page back in 2021-03-18 09:53:50 -04:00
mabashian
d60bec8155 Use delete item hook for job output delete 2021-03-18 09:53:50 -04:00
mabashian
98da019d12 Add support for filtering and pagination on job output 2021-03-18 09:53:50 -04:00
Alex Corey
40f0f5ddf7 fixes erroneous validation warning, template Jobs tab, job detail job type 2021-03-17 17:39:21 -04:00
softwarefactory-project-zuul[bot]
31124e07c6 Merge pull request #9589 from nixocio/ui_issue_9250
Do not allow user to modify EE managed by tower

Do not allow user to attempt to modify EE managed by tower.
See: #9250

Reviewed-by: Ryan Petrello <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-17 21:22:10 +00:00
softwarefactory-project-zuul[bot]
cb1ab742e8 Merge pull request #9595 from nixocio/ui_issue_9307
Add copy functionality to EE

Add copy functionality to EE.
See: #9307

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-17 16:44:08 -04:00
softwarefactory-project-zuul[bot]
93093b9bc6 Merge pull request #9595 from nixocio/ui_issue_9307
Add copy functionality to EE

Add copy functionality to EE.
See: #9307

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-03-17 20:33:56 +00:00
nixocio
5fa44c61d2 Add copy functionality to EE
Add copy functionality to EE.

See: https://github.com/ansible/awx/issues/9307
2021-03-17 15:49:39 -04:00
Alex Corey
7290de22e2 adds a better tooltip selector 2021-03-17 15:38:58 -04:00
Alex Corey
10d95c9aef Fixes silent error on SCM subform 2021-03-17 15:32:14 -04:00
softwarefactory-project-zuul[bot]
da74c61a09 Merge pull request #9606 from nixocio/ui_add_selectors
Improve selectors to ease testing

Improve selectors to ease testing

Reviewed-by: Keith Grant <None>
2021-03-17 18:12:22 +00:00
Tiago Goes
d93ef9f6de add ouiaID 2021-03-17 14:35:00 -03:00
softwarefactory-project-zuul[bot]
209c5c9378 Merge pull request #9599 from kdelee/refactor_awxkit_fk_fields
refactor payload construction for awxkit

This fixes container_group creation to allow passing
"is_container_group" and "credential" to the "create" method
on instance groups, and refactors other page objects
to use a common utility function to eliminate copy-pasted code.
This will help us update to set is_container_group correctly, as is now needed since de52ade.

Reviewed-by: Ryan Petrello <None>
2021-03-17 17:10:59 +00:00
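The refactor described above centralizes optional-field handling when awxkit builds creation payloads. The following is a hypothetical sketch of that pattern, not the real awxkit helper; the function and field names are illustrative.

```python
# Hypothetical sketch of a shared payload builder (names are illustrative,
# not actual awxkit APIs): optional related fields are attached only when
# the caller actually provides a value, instead of copy-pasting the same
# conditional logic in every page object.
def build_payload(name, **optional_fields):
    payload = {'name': name}
    for field, value in optional_fields.items():
        if value is not None:
            payload[field] = value
    return payload

# e.g. creating an instance group that is a container group with a credential
print(build_payload('k8s', is_container_group=True, credential=42))
# -> {'name': 'k8s', 'is_container_group': True, 'credential': 42}
```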
softwarefactory-project-zuul[bot]
6b1110f1c3 Merge pull request #9570 from nixocio/ui_issue_9190
Remove custom virtual env

Remove custom virtualenv support from the UI.
Also, surface missing-resource warnings on list items and related details pages for UJTs that were using
custom virtualenvs.
See: #9190
Also: #9207

Reviewed-by: Ryan Petrello <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Mat Wilson <mawilson@redhat.com>
2021-03-17 16:57:41 +00:00
nixocio
ec08997b63 Improve selectors to ease testing
Improve selectors to ease testing
2021-03-17 12:19:31 -04:00
softwarefactory-project-zuul[bot]
09150fe21d Merge pull request #9542 from ryanpetrello/centrify
Add support for Centrify Vault as a credential plugin

replaces #8952
cc @surbhijain1502 @Asharma-bhavna @badrogh

Reviewed-by: Ryan Petrello <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Chris Meyers <None>
2021-03-17 14:52:25 +00:00
Elijah DeLee
3562be8317 refactor payload construction for awxkit
This fixes container_group creation to allow passing
"is_container_group" and "credential" to the "create" method
on instance groups, and refactors other page objects
to use a common utility function to eliminate copy-pasted code
2021-03-17 10:40:00 -04:00
Ryan Petrello
dc8115681a bump the migration version number for Centrify 2021-03-17 10:20:11 -04:00
Ryan Petrello
6f0f56f4f6 verify all Centrify HTTPS requests 2021-03-17 10:19:03 -04:00
Ryan Petrello
1b2d457090 fix a bug in the Centrify Vault plugin 2021-03-17 10:19:03 -04:00
Ryan Petrello
764322b87b more centrify fixes 2021-03-17 10:19:03 -04:00
Asharma-bhavna
51005c0342 Bugs identified during flake8 testing 2021-03-17 10:19:03 -04:00
Asharma-bhavna
cccd021d8b Removed explicitly calling of python json module 2021-03-17 10:19:03 -04:00
Asharma-bhavna
18752a637f Code changes suggested by AWX repo reviewer team 2021-03-17 10:19:03 -04:00
surbhijain1502
bbf283d1fd Change namespace placing in the array 2021-03-17 10:19:03 -04:00
surbhijain1502
f83126643a Removed account name as secret, query changed 2021-03-17 10:19:03 -04:00
surbhijain1502
d913d622d3 Centrify Vault Plugin
To read Inputs and fetch the data from PAS Portal
2021-03-17 10:19:03 -04:00
surbhijain1502
f062554e82 To test Centrify Vault Credential Source 2021-03-17 10:19:03 -04:00
surbhijain1502
2d0eae26bc Adding Centrify plugin namespace to test 2021-03-17 10:19:03 -04:00
surbhijain1502
45937f0be3 Registering Centrify Plugin as entrypoint
Register Plugin
2021-03-17 10:19:03 -04:00
softwarefactory-project-zuul[bot]
e0d9100dc4 Merge pull request #9597 from shanemcd/oops
Dont require is_container_group in payload when creating InstanceGroups

Reviewed-by: Elijah DeLee <kdelee@redhat.com>
Reviewed-by: Ryan Petrello <None>
2021-03-16 21:19:58 +00:00
Shane McDonald
19d0524461 Dont require is_container_group in payload when creating InstanceGroups 2021-03-16 16:46:24 -04:00
nixocio
4db5c496d0 Allow one to select non-global execution environments for organizations
Allow one to select non-global EE when editing an Organization.

See: https://github.com/ansible/awx/issues/9592
2021-03-16 16:32:21 -04:00
nixocio
babea5d599 Remove custom virtual env
Remove custom virtualenv support from the UI.

Also, surface missing-resource warnings on list items for UJTs that were using
custom virtualenvs.

Fix some unit-test warnings.

See: https://github.com/ansible/awx/issues/9190
Also: https://github.com/ansible/awx/issues/9207
2021-03-16 14:51:27 -04:00
softwarefactory-project-zuul[bot]
a2e3bf1030 Merge pull request #9590 from jerem991/devel
Hashicorp Vault Credential Plugin : Support for namespace

SUMMARY

Added the support for Vault Namespace (Enterprise feature)
ISSUE TYPE

Feature Pull Request

COMPONENT NAME

credential_plugins/hashivault.py
AWX VERSION
1.7.0
ADDITIONAL INFORMATION
Adds a specific X-Vault-Namespace header when the Namespace option is set.

Reviewed-by: Ryan Petrello <None>
2021-03-16 14:40:17 +00:00
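The namespace support described above boils down to attaching an X-Vault-Namespace header when the option is set. A minimal sketch of that idea follows, assuming a plain requests call; the real code in credential_plugins/hashivault.py is organized differently, and the parameter names here are illustrative.

```python
# Minimal sketch (assumptions: plain requests usage, illustrative names).
# Vault Enterprise scopes a request to a namespace via the
# X-Vault-Namespace header; without a namespace the header is omitted.
import requests

def read_vault_secret(url, token, secret_path, namespace=None):
    headers = {'X-Vault-Token': token}
    if namespace:
        headers['X-Vault-Namespace'] = namespace
    resp = requests.get(f'{url}/v1/{secret_path}', headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()
```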
Jérémie Ben Arros
1550989482 add vault namespace support 2021-03-16 09:27:22 -04:00
Jérémie
d94a49ac74 Update hashivault.py 2021-03-16 09:16:55 -04:00
softwarefactory-project-zuul[bot]
de52adedef Merge pull request #9584 from shanemcd/explicit-is_container_group
Explicit db field for is_container_group

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Ryan Petrello <None>
2021-03-15 18:57:01 +00:00
Shane McDonald
876d4316e1 Fix collection tests 2021-03-15 14:14:03 -04:00
Shane McDonald
4a4d25329b Update instance_group module with is_container_group 2021-03-15 13:34:45 -04:00
Shane McDonald
e6f06a95da Remove unnecessary code from launch script
- Ansible is no longer installed on the control plane
- We register the instance / instance group at dispatcher startup
2021-03-15 13:30:31 -04:00
Shane McDonald
b15a75676d Fix container group tests 2021-03-15 13:28:40 -04:00
Jake McDermott
098ec63944 Add container group flag to add/edit data 2021-03-15 13:28:40 -04:00
Shane McDonald
1c4a376758 Explicit db field for is_container_group
We now have Container Groups that don't require a credential.
2021-03-15 13:28:39 -04:00
softwarefactory-project-zuul[bot]
a52f050f44 Merge pull request #9505 from shanemcd/inventory_import-podman
Update inventory_import to run inside of an EE

Option 2 identified in #9504

Reviewed-by: Ryan Petrello <None>
2021-03-15 16:47:35 +00:00
Shane McDonald
836335b4c5 Update inventory_import to run inside of an EE 2021-03-15 12:17:19 -04:00
softwarefactory-project-zuul[bot]
f10bf4c067 Merge pull request #9344 from mabashian/5990-translate-start-node
Mark start node for translation

Reviewed-by: John Hill <johill@redhat.com> (https://github.com/unlikelyzero)
2021-03-15 14:17:17 +00:00
softwarefactory-project-zuul[bot]
138211da64 Merge pull request #9468 from mabashian/8789-workflow-inv-prompt
Adds warning message to inventory step when launching wfjt or creating node with wfjt

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-15 13:53:35 +00:00
softwarefactory-project-zuul[bot]
44befa7847 Merge pull request #9581 from saito-hideki/pr/proper_format_with_flake8_v3.9
Removed trailing whitespace to pass awx-api-lint check

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-15 13:50:22 +00:00
Hideki Saito
b9950deaf9 Modify to address flake8 v3.9 format check
* Removed trailing whitespace to pass awx-api-lint check

Signed-off-by: Hideki Saito <saito@fgrep.org>
2021-03-15 17:30:08 +09:00
softwarefactory-project-zuul[bot]
8ba9eef97b Merge pull request #9281 from keithjgrant/8905-codemirror-replacement
AceEditor - codemirror replacement

Reviewed-by: John Hill <johill@redhat.com> (https://github.com/unlikelyzero)
2021-03-12 23:22:26 +00:00
softwarefactory-project-zuul[bot]
9342cb012a Merge pull request #9374 from sean-m-sullivan/copy_awx_collection
Add Copy to option to awx collection modules.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-12 22:00:48 +00:00
Sean Sullivan
0f04851ab7 Merge branch 'devel' into copy_awx_collection 2021-03-12 15:26:59 -06:00
Keith J. Grant
c90dfbb7e0 debounce CodeEditor onChange for performance improvement 2021-03-12 12:26:07 -08:00
Keith J. Grant
05f93032f5 fix CodeEditor tests 2021-03-12 08:54:19 -08:00
Keith J. Grant
726b5ddc26 fix lint error 2021-03-12 08:28:26 -08:00
softwarefactory-project-zuul[bot]
9c8dbdc7a5 Merge pull request #9558 from jainnikhil30/fix_tower_user
fix the tower_user module to update the fields properly

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-12 15:21:06 +00:00
softwarefactory-project-zuul[bot]
c170d4c4f6 Merge pull request #9575 from jainnikhil30/tower_collection_integration_test_fix
fix tower collection integration test race condition

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-12 14:01:36 +00:00
Nikhil Jain
0ee49dae76 fix tower collection integration test race condition 2021-03-12 14:57:52 +05:30
Keith J. Grant
4ea7c8a534 CodeEditor bugfixes
* fix typing space character
* hide cursor when editor doesn't have user focus
* show help text any time editor is in focus
* fix content shifting when help text appears/disappears
* remove 80 character "print limit" line
2021-03-11 16:20:05 -08:00
softwarefactory-project-zuul[bot]
8298b76dff Merge pull request #9569 from jbradberry/further-fix-for-ee-deletion
Undo the polymorphic.SET_NULL for Organization

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 21:47:59 +00:00
softwarefactory-project-zuul[bot]
52a46dd765 Merge pull request #9566 from wenottingham/version-note
Note that we need to match python versions.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 21:27:56 +00:00
Jeff Bradberry
5bec4a51c6 Undo the polymorphic.SET_NULL for Organization
It isn't polymorphic.
2021-03-11 15:50:57 -05:00
Bill Nottingham
fd658d44c9 Note that we need to match python versions.
(Some libraries don't have the same deps across python versions.)
2021-03-11 15:48:41 -05:00
softwarefactory-project-zuul[bot]
6a296419d2 Merge pull request #9360 from kdelee/log_request_time
create performance logger to log api response time

Reviewed-by: Ryan Petrello <https://github.com/ryanpetrello>
2021-03-11 20:02:48 +00:00
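The PR above introduces a performance logger for API response times. Below is a rough sketch of the general approach, assuming Django-style middleware; the actual logger name and wiring in AWX may differ.

```python
# Rough sketch, assuming Django-style middleware; the logger name is assumed.
import logging
import time

perf_logger = logging.getLogger('awx.analytics.performance')  # assumed name

class ResponseTimeMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.perf_counter()
        response = self.get_response(request)
        elapsed = time.perf_counter() - start
        # Log method, path, and wall-clock duration of the request.
        perf_logger.debug('%s %s took %.3fs', request.method, request.path, elapsed)
        return response
```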
softwarefactory-project-zuul[bot]
5b5f07d639 Merge pull request #9481 from kdelee/pin_websocket
pin websocket-client lib

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 19:39:11 +00:00
softwarefactory-project-zuul[bot]
27c56d4148 Merge pull request #9523 from jbradberry/ee-association-rbac
Fix the RBAC for attaching an EE to various objects

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 19:34:19 +00:00
softwarefactory-project-zuul[bot]
ae7396b0cf Merge pull request #9562 from jbradberry/ee-polymorphic-set-null
Make sure that EE foreign keys are polymorphic.SET_NULL

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 19:34:08 +00:00
softwarefactory-project-zuul[bot]
c677e4b18e Merge pull request #9557 from ryanpetrello/no-more-ansible-requirements
remove requirements_ansible logic from the update script

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 18:40:08 +00:00
softwarefactory-project-zuul[bot]
32f02fc91c Merge pull request #9561 from ryanpetrello/fork-you
clean stale dispatcher connections closer to post-fork

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-11 17:20:56 +00:00
Ryan Petrello
588cb1e403 fix some requirements updater breakage
- remove requirements_ansible logic from the update script
- removed the need for py2-specific system dependencies
- update to the latest pip-tools and move to the new long format
  (https://github.com/jazzband/pip-tools/pull/1237)
- fixed a few busted references to receptorctl @ devel
2021-03-11 11:54:01 -05:00
Alex Corey
5971a84e74 fixes crashing wizard, and adds error handle on adding role 2021-03-11 11:50:17 -05:00
Jeff Bradberry
e31fc37215 Make sure that EE foreign keys are polymorphic.SET_NULL
Deleting EEs that had been attached to something was failing.
2021-03-11 11:25:59 -05:00
Ryan Petrello
572c0fbb74 clean stale dispatcher connections closer to post-fork
see: https://github.com/ansible/awx/issues/9559
2021-03-11 11:14:13 -05:00
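The commit above moves stale-connection cleanup closer to the point where the dispatcher forks its worker processes. The sketch below illustrates the ordering only; it is not the dispatcher's actual code, and close_stale_connections is a stand-in for whatever cleanup the real worker performs (e.g. closing inherited database connections).

```python
# Illustrative ordering only (not the real dispatcher code): the child
# process drops connections it inherited from the parent *after* the fork,
# right before it starts its own work, so it never reuses a shared socket.
import os

def close_stale_connections():
    # Stand-in for real cleanup, e.g. Django's connections.close_all().
    pass

def spawn_worker(work):
    pid = os.fork()
    if pid == 0:                      # child
        close_stale_connections()     # post-fork cleanup
        work()
        os._exit(0)
    return pid                        # parent keeps the child's pid
```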
Nikhil Jain
2aa30226f4 removing some invalid chars 2021-03-11 20:07:31 +05:30
Nikhil Jain
53da8e0775 removing some invalid chars 2021-03-11 19:50:11 +05:30
Nikhil Jain
8e53453737 remove the extra result in test 2021-03-11 19:35:53 +05:30
Nikhil Jain
80023017a2 fix the tower_user module to update the fields properly 2021-03-11 19:14:50 +05:30
Sean Sullivan
10c357d0f1 Merge pull request #52 from ansible/devel
Rebase
2021-03-10 11:25:03 -06:00
softwarefactory-project-zuul[bot]
e8b2072ea5 Merge pull request #9536 from rooftopcellist/dev-db-test
consolidate conditional pytest sqlite3 db settings

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-10 15:57:14 +00:00
softwarefactory-project-zuul[bot]
9e89c16b38 Merge pull request #9533 from jbradberry/less-chatty-logs
Reduce the log level for some of the more spammy sources

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-10 15:26:43 +00:00
Christian M. Adams
ca19b9e9d4 consolidate conditional pytest sqlite3 db settings 2021-03-10 10:16:37 -05:00
Ryan Petrello
269e71c069 clarify this migration is only for 17.x -> 18.x 2021-03-10 09:41:50 -05:00
sean-m-ssullivan
857a5718e5 update logic 2021-03-09 18:40:15 -06:00
Keith J. Grant
6f7a717664 hide CodeEditor touch controls menu 2021-03-09 15:01:00 -08:00
Keith J. Grant
0b57522dce CodeEditor: don't type newline when pressing enter to start edit mode 2021-03-09 15:01:00 -08:00
Keith J. Grant
c2a2bf39d5 don't show CodeEditor control help text in readonly mode 2021-03-09 15:01:00 -08:00
Keith J. Grant
6b2cee2f69 add tests ensuring forms pass correct value to CodeEditor fields 2021-03-09 15:01:00 -08:00
Keith J. Grant
4e55c98bc6 add more code editor tests 2021-03-09 15:01:00 -08:00
Keith J. Grant
143d41fb2a add value assertions to code editor tests 2021-03-09 15:01:00 -08:00
Keith J. Grant
c975f65bbc CodeEditor: fix hidden error message 2021-03-09 15:01:00 -08:00
Keith J. Grant
2995cde7cb add AceEditor to changelog 2021-03-09 15:01:00 -08:00
Keith J. Grant
221021a798 disable interactive elements of CodeEditor in readOnly mode 2021-03-09 15:00:26 -08:00
Keith J. Grant
411d69204b remove codemirror dependencies 2021-03-09 15:00:26 -08:00
Keith J. Grant
19f4de0d05 add keyboard navigation help text to CodeEditor 2021-03-09 15:00:26 -08:00
Keith J. Grant
070c67ffe8 rename CodeMirror to CodeEditor 2021-03-09 15:00:26 -08:00
Keith Grant
5c38011ad5 styling Ace CodeEditor 2021-03-09 15:00:25 -08:00
Keith Grant
4e9c6a956d add code editor focus/blur keyboard controls 2021-03-09 15:00:25 -08:00
Keith Grant
1afdd7ac1d Ace editor POC 2021-03-09 15:00:25 -08:00
softwarefactory-project-zuul[bot]
3312db61c3 Merge pull request #9460 from mabashian/6528-unit-tests
Adds tests for workflow save error handling

Reviewed-by: John Hill <johill@redhat.com> (https://github.com/unlikelyzero)
2021-03-09 19:55:43 +00:00
Jeff Bradberry
0ca8fd7752 Update the debugging docs 2021-03-09 14:42:10 -05:00
softwarefactory-project-zuul[bot]
5bdf9a108c Merge pull request #9439 from mabashian/9410-workflow-node-disable-jt
Disable job templates in node modal that are missing inv or project

Reviewed-by: Mat Wilson <mawilson@redhat.com> (https://github.com/one-t)
2021-03-09 19:41:18 +00:00
Jeff Bradberry
0a6d13c1b9 Reduce the log level for some of the more spammy sources 2021-03-09 14:16:37 -05:00
softwarefactory-project-zuul[bot]
3673e7c3f4 Merge pull request #9530 from ryanpetrello/more-docker-install-notes
restore note about cloning the stable release branch for Docker installs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-09 17:44:04 +00:00
softwarefactory-project-zuul[bot]
b0324acd0e Merge pull request #9318 from mabashian/9223-notif-toast
Adds toast to notification template list whenever test notification finishes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-09 17:25:26 +00:00
Ryan Petrello
b6eeb2e77f restore note about cloning the stable release branch for Docker installs 2021-03-09 12:09:11 -05:00
softwarefactory-project-zuul[bot]
6bbbe23a96 Merge pull request #9528 from ryanpetrello/17.1.0-notes
update the changelog w/ notes for 17.1.0

Reviewed-by: Bianca Henderson <beeankha@gmail.com> (https://github.com/beeankha)
2021-03-09 16:32:35 +00:00
Jeff Bradberry
097f465f39 Fix the RBAC for attaching an EE to various objects
- Organization.default_environment
- Project.default_environment
- JobTemplate.execution_environment
- WorkflowJobTemplate.execution_environment

System jobs are not editable by anyone other than a system admin, so
we don't need to check.  It appears that unified job templates can't
be created or edited outside of the endpoints for the specific types.
2021-03-09 11:00:03 -05:00
Ryan Petrello
58337b9e2e update the changelog w/ notes for 17.1.0 2021-03-09 10:58:50 -05:00
softwarefactory-project-zuul[bot]
2601631f28 Merge pull request #9489 from nixocio/ui_issue_9487
Fix disassociate EE from JT and WFJT

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-09 15:25:54 +00:00
softwarefactory-project-zuul[bot]
48537e8202 Merge pull request #9398 from nixocio/ui_fix_variable_name
Update variables as returned by useRequest

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-09 14:37:56 +00:00
mabashian
0345a0c8e2 Remove Start from linter ignore array 2021-03-09 08:30:40 -05:00
mabashian
65c71b812e Mount with contexts now that i18n is used on workflow start node 2021-03-09 08:30:40 -05:00
mabashian
510b4197ac Mark start node for translation 2021-03-09 08:30:40 -05:00
sean-m-ssullivan
55855e9e63 fix lint and update test 2021-03-08 21:56:16 -06:00
mabashian
6a32164438 Adds support for ouiaId's on copy buttons 2021-03-08 16:11:22 -05:00
mabashian
811186308c Adds note to changelog about notification toasts 2021-03-08 15:23:39 -05:00
mabashian
d96383b317 Fixes bug where some toasts would reappear after being closed 2021-03-08 15:22:08 -05:00
mabashian
83a9c3470e Adds toast to notification template list whenever test notification finishes 2021-03-08 15:22:08 -05:00
sean-m-ssullivan
701a69b5e5 Merge branch 'ansible-devel' into copy_awx_collection 2021-03-08 10:18:47 -06:00
sean-m-ssullivan
295e40002e update 2021-03-08 10:18:34 -06:00
nixocio
7c2f6c95a6 Update variables as returned by useRequest
Update variables to be consistent with variables returned by useRequest.
2021-03-07 17:37:52 -05:00
nixocio
fbd46f7799 Fix disassociate EE from JT and WFJT
Allow EE to be removed from JT and WFJT.

Also, add unit tests related to those changes.

See: https://github.com/ansible/awx/issues/9487
2021-03-07 13:34:25 -05:00
softwarefactory-project-zuul[bot]
c9ec0d31f1 Merge pull request #9498 from ryanpetrello/bye-bye-virtualenv
remove custom_virtualenv support from the AWX collection and docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-06 14:12:32 +00:00
softwarefactory-project-zuul[bot]
85f7dc4222 Merge pull request #9500 from shanemcd/fix-sdb
Fix sdb in dev env

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 23:14:31 +00:00
softwarefactory-project-zuul[bot]
1c8851b716 Merge pull request #9499 from elyezer/update-collection
Linter fixes for Execution Environments module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 22:56:31 +00:00
Shane McDonald
afe8dc6ad9 Fix sdb in dev env 2021-03-05 17:41:25 -05:00
Elyézer Rezende
f294aabcc9 Linter fixes for Execution Environments module
Fix linter for the recently added Execution Environments module

Signed-off-by: Elyézer Rezende <elyezermr@gmail.com>
2021-03-05 16:13:21 -05:00
Ryan Petrello
4c60999161 remove custom_virtualenv support from the AWX collection and docs 2021-03-05 15:38:46 -05:00
softwarefactory-project-zuul[bot]
18ba40506f Merge pull request #9480 from wenottingham/burn-it
Remove ansible venvs & collection infrastructure.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 19:29:05 +00:00
softwarefactory-project-zuul[bot]
2ae82b9302 Merge pull request #9452 from ryanpetrello/rsyslogd-last-ditch
make rsyslogd socket emit failures a bit less verbose (but still write to stderr)

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 19:14:33 +00:00
softwarefactory-project-zuul[bot]
ec3b225c76 Merge pull request #9494 from ryanpetrello/older-install-instructions-warning
point people at install instructions for older stable releases

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 17:13:13 +00:00
Ryan Petrello
53d8e8b332 point people at install instructions for older stable releases 2021-03-05 11:38:19 -05:00
softwarefactory-project-zuul[bot]
3fd0c29a95 Merge pull request #9490 from shanemcd/delete-old-installer
Delete old installer / update INSTALL.md

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 15:31:37 +00:00
softwarefactory-project-zuul[bot]
8fb17cfbdb Merge pull request #9488 from nixocio/ui_fix_rerererender
Fix extra re-render for Job Template

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 15:12:05 +00:00
softwarefactory-project-zuul[bot]
e4d227a791 Merge pull request #9483 from rooftopcellist/rm_messages
Remove messages.js files that do not need to be committed to the repo

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 14:02:50 +00:00
softwarefactory-project-zuul[bot]
0123469e7e Merge pull request #9492 from chrismeyersfsu/fix-docker-compose-cluster-for-real
default cluster node count env var to 1

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-05 13:40:11 +00:00
Jake McDermott
d494645914 Add i18n config file to frontend container image 2021-03-05 08:24:31 -05:00
Chris Meyers
80c2249bdb default cluster node count env var to 1 2021-03-05 08:00:53 -05:00
Shane McDonald
6df65c95a7 Update INSTALL.md 2021-03-04 18:34:39 -05:00
Shane McDonald
119e80c717 Delete the old installer directory 2021-03-04 18:28:49 -05:00
nixocio
b7e614beee Fix extra re-render for Job Template
Fix extra re-render for Job Template.

Also, update a few unit-tests.

See: https://github.com/ansible/awx/issues/9479
2021-03-04 16:47:07 -05:00
softwarefactory-project-zuul[bot]
f231216584 Merge pull request #9471 from chrismeyersfsu/fix-docker_cluster
fix up awx docker cluster

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-04 21:24:54 +00:00
Chris Meyers
16a6fb5adc add docs for cluster dev mode 2021-03-04 15:23:04 -05:00
Chris Meyers
7b1edda368 support receptor in multi cluster nodes 2021-03-04 15:04:36 -05:00
Shane McDonald
69edef430c Get clustered dev env working 2021-03-04 14:56:22 -05:00
Chris Meyers
6f1f64118b wip 2021-03-04 14:54:41 -05:00
softwarefactory-project-zuul[bot]
e1e0bb30a9 Merge pull request #9450 from jakemcdermott/fix-9230
Use credential_type for prompted multicred select categories

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-04 19:51:59 +00:00
Jake McDermott
95ec009758 Add language catalog compile step before test commands 2021-03-04 13:25:31 -05:00
Jake McDermott
55b948bf39 Remove checkout from Makefile 2021-03-04 13:13:32 -05:00
Christian M. Adams
34df47ceba Remove messages.js files that do not need to be committed to the repo
* also clean up old .PHONY entries
2021-03-04 13:13:27 -05:00
Bill Nottingham
0505e38071 Remove ansible venvs & collection infrastructure. 2021-03-04 13:06:06 -05:00
softwarefactory-project-zuul[bot]
eb131f64cc Merge pull request #9482 from shanemcd/create_preload_data-on-dev-boot
Create admin user / run create_preload_data when dev env boots

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-04 17:05:44 +00:00
Shane McDonald
a3a47834fd Create admin user / run create_preload_data when dev env boots 2021-03-04 11:26:43 -05:00
Elijah DeLee
88a91cfeba pin websocket-client lib
They've made breaking changes that are going to take
some deeper investigation before awxkit can be updated to use them.

This is only used for development purposes, and should
have no impact on the "awx" CLI entry point.
2021-03-04 11:09:32 -05:00
softwarefactory-project-zuul[bot]
6aab88259a Merge pull request #8030 from ansible/execution-environments
Execution Environments

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-04 14:15:13 +00:00
nixocio
0a4a1bed0a Add EE to inventory sources
Add EE to inventory sources

See: https://github.com/ansible/awx/issues/9189
2021-03-03 18:56:07 -05:00
nixocio
62215ca432 Add organization to EE details page
Add organization to EE details page.

See: https://github.com/ansible/awx/issues/9432
2021-03-03 18:56:07 -05:00
nixocio
fd21603c0e Update EE breadcrumb to use name instead of image
Update EE breadcrumb to use name instead of image to be consistent with
the other screens.

See: https://github.com/ansible/awx/issues/9087
2021-03-03 18:56:07 -05:00
Shane McDonald
aab58f5ae7 Dont require credential for Container Groups
Might want to follow up and make this only apply to Tower on Kubernetes
2021-03-03 18:56:07 -05:00
Shane McDonald
29ff69a774 For container group pods, use namespace Tower is deployed into by default 2021-03-03 18:56:07 -05:00
nixocio
33d7342ffe Linkify reference to EE on details page
Linkify reference to EE on a few details page.

See: https://github.com/ansible/awx/issues/9189
2021-03-03 18:56:07 -05:00
Shane McDonald
609b17aa20 Use receptor 0.9.6 2021-03-03 18:56:07 -05:00
Shane McDonald
e23a2b4506 Use var instead of set_fact 2021-03-03 18:56:07 -05:00
Shane McDonald
eba12a6207 Add ui_next to .dockerignore 2021-03-03 18:56:07 -05:00
Shane McDonald
03c5cc779b Only install podman in local dev env 2021-03-03 18:56:07 -05:00
Shane McDonald
065b943870 Fix mode for k8s launch scripts 2021-03-03 18:56:07 -05:00
nixocio
6e67ae68fd Add Execution Environments into a few screens
Add EE to the following screens:

* Job Template
* Organization
* Project
* Workflow Job Template

Also, add a new lookup component - ExecutionEnvironmentLookup.

See: https://github.com/ansible/awx/issues/9189
2021-03-03 18:56:05 -05:00
Shane McDonald
adf708366a Add "copy" to EE related links 2021-03-03 18:52:55 -05:00
Rebeccah
4d2fcfd8c1 add a functional test for creating an EE; remove the redundant copy function because it's not needed (copy works from the base class)
moved the AWXKit pull additions to a separate PR, addressed some changes that were causing linting errors in tests, and added copy to show_capabilities for the EE serializer
2021-03-03 18:52:55 -05:00
Rebeccah
0921de5d2b adding needed url endpoint for copy functionality and the beginning of some testing that can be fleshed out more fully in later work 2021-03-03 18:52:55 -05:00
nixocio
f2801e0c03 Minor update EE tables
* Add table header `actions`
* Add `name` as default search

See: https://github.com/ansible/awx/issues/7884
Also: https://github.com/ansible/awx/issues/9087
2021-03-03 18:52:55 -05:00
Shane McDonald
befc658042 Wire up --pull option for EEs 2021-03-03 18:52:55 -05:00
Shane McDonald
883fa4906a Fix receptor.conf path in dev env 2021-03-03 18:52:55 -05:00
beeankha
60827143bb Bump up unified_jobs_table version for new column addition 2021-03-03 18:52:55 -05:00
beeankha
5b17ab6873 Enable EE collections info to be loaded as valid JSON vs stringified JSON 2021-03-03 18:52:55 -05:00
beeankha
0e80f663ab Add installed_collections column to unified job query 2021-03-03 18:52:55 -05:00
beeankha
cb95de0862 Assign entire file path to variable 2021-03-03 18:52:55 -05:00
beeankha
86a3a79be4 Enable utilized EE Collections name and version info to be detected 2021-03-03 18:52:55 -05:00
Jeff Bradberry
b417fc3803 Turn off permissions check bypassing for admins when hitting the execution environment list and detail views. 2021-03-03 18:52:55 -05:00
Jeff Bradberry
5b2adc89cf Make the managed_by_tower field read-only for EEs (similar to how we deal with it not being settable for Credentials) and add permissions checking for Org EE Admins.
can_add: gets an explicit role to check against, `'execution_environment_admin_role'`
can_change: leverages `self.check_related()` for the case where the Org is not changing, but also adds an explicit check for the EE Admin Role when the Org is changing to a different Org.
2021-03-03 18:52:55 -05:00
Rebeccah
41fb21911e add execution_environment_admin_role to an organization's read role, which access.py uses for determining access to reading an EE within an organization,
add a migration file for the execution_env_admin role addition to read_roles within an organization,

and set check_related to mandatory
2021-03-03 18:52:55 -05:00
Rebeccah
eaa74b40c1 allow org admins to control EEs even if they don't have the ee_admin role for the specific EE, and prevent managed_by_tower EEs from being edited/deleted 2021-03-03 18:52:55 -05:00
Jake McDermott
cf513b33ee Add name field 2021-03-03 18:52:55 -05:00
nixocio
a39e1a528b Add execution environment list to Organizations
Add execution environment list to Organizations

See: https://github.com/ansible/awx/issues/8210
2021-03-03 18:52:55 -05:00
Elijah DeLee
05dded397d make sure we use the built-in credential type
this way we can pass kind="registry" to the awxkit create method and
we get the correct built-in type
2021-03-03 18:52:55 -05:00
Shane McDonald
ae5a1117d4 Use official Receptor 0.9.5 release 2021-03-03 18:52:55 -05:00
Rebeccah
b1361c8fe2 edit original migration file, add blank string as acceptable to model 2021-03-03 18:52:55 -05:00
Rebeccah
20ee73ce73 default pull options for container images to None, also adding pull options to awxkit 2021-03-03 18:52:55 -05:00
nixocio
0bd8012fd9 Update selectors on EE details page to ease testing
Update selectors on EE details page to ease testing.
2021-03-03 18:52:55 -05:00
Shane McDonald
05ef51f710 Add migration to reset custom pod specs 2021-03-03 18:52:55 -05:00
nixocio
d6b5990cd2 Migrate EE list to tables
Migrate EE list to tables.

See:https://github.com/ansible/awx/issues/7884
2021-03-03 18:52:55 -05:00
nixocio
26f7a2ba81 Update Job Detail container group variable
`is_containerized` was updated on the API side to be
`is_container_group`.
2021-03-03 18:52:55 -05:00
Shane McDonald
0381a3ac8c Container Pull Option -> Pull
Co-authored-by: Jake McDermott <yo@jakemcdermott.me>
2021-03-03 18:52:55 -05:00
Rebeccah
4b40cb3abb changed the field name from 'container_options' to simply 'pull' 2021-03-03 18:52:55 -05:00
Jake McDermott
b0265b060b Remove client-side url validator 2021-03-03 18:52:55 -05:00
Jake McDermott
4ca33579a5 Add an interface for new ee options 2021-03-03 18:52:55 -05:00
Rebeccah
31e7e10f30 migration for container options for EE model
Co-authored-by: Shane McDonald <me@shanemcd.com>
2021-03-03 18:52:55 -05:00
Rebeccah
92f0af684c execution model pull container options added 2021-03-03 18:52:55 -05:00
Shane McDonald
6a7520d10f Handle quota exceeded in Container Groups v2 2021-03-03 18:52:55 -05:00
Shane McDonald
f2dfa132a7 Install Ansible only for collection tests 2021-03-03 18:52:55 -05:00
Shane McDonald
ddcbc408b9 Remove Ansible from control plane
Execution Environments or bust!
2021-03-03 18:52:55 -05:00
Shane McDonald
ea39cbce73 Update receptor.conf in dev env 2021-03-03 18:52:55 -05:00
Shane McDonald
44d7d68322 Update default ee image 2021-03-03 18:52:55 -05:00
Shane McDonald
9f39bab2b8 Quick fix for jobs failing in dev environment / VMs
The other alternative here is to go all the way with
https://github.com/ansible/ansible-runner/pull/617, which is proving to be
difficult if not impossible.
2021-03-03 18:52:55 -05:00
Shane McDonald
8eb4dafb17 Fix missing postgresql module 2021-03-03 18:52:55 -05:00
Shane McDonald
428f8addf8 Create default EE as a part of create_preload_data 2021-03-03 18:52:55 -05:00
Shane McDonald
57b317d440 Get system jobs working under new deployment model (#9221) 2021-03-03 18:52:55 -05:00
nixocio
68f9c5137d Mark string to translation
Mark string to translation
2021-03-03 18:52:55 -05:00
Shane McDonald
9f97efece8 Stop and kill dispatcher and callback receiver as group 2021-03-03 18:52:55 -05:00
Shane McDonald
1a68df275c Set correct SDB_NOTIFY_HOST in minikube env 2021-03-03 18:52:55 -05:00
Shane McDonald
86363e260e Provide new default pod defintion in CG metadata (#9181) 2021-03-03 18:52:55 -05:00
Alan Rominger
8f66793276 Assure that unit_id is always defined (#9180) 2021-03-03 18:52:55 -05:00
Alan Rominger
c7e0e30f93 Make sure project updates run in default EE (#9172)
* Make sure project updates run in default EE

* Remove project execution_environment field from collection
2021-03-03 18:52:55 -05:00
Yago Marques
8ab7745e3a WIP Inclusion of the EE option in the payloads within the Organization and Projects. (#9145)
* add ee option on factories for organizations

* add new lines

* remove inventory from the options

* remove line

* remove line from projects

* fix the tuple

* fix lint problems
2021-03-03 18:52:55 -05:00
Shane McDonald
fd0c4ec869 Pin to latest version of PyYAML
Fixes https://github.com/yaml/pyyaml/issues/478
2021-03-03 18:52:55 -05:00
Alan Rominger
a435843f23 Exception handling to always release work units 2021-03-03 18:52:55 -05:00
Ryan Petrello
f850f8d3e0 introduce a new global flag for denoting K8S-based deployments
- In K8S-based installs, only container groups are intended to be used
  for playbook execution (JTs, adhoc, inventory updates), so in this
  scenario, other job types have a task impact of zero.
- In K8S-based installs, traditional instances have *zero* capacity
  (because they're only members of the control plane where services
  - http/s, local control plane execution - run)
- This commit also includes some changes that allow for the task manager
  to launch tasks with task_impact=0 on instances that have capacity=0
  (previously, an instance with zero capacity would never be selected
  as the "execution node").

This means that when IS_K8S=True, any Job Template associated with an
Instance Group will never actually go from pending -> running (because
there's no capacity - all playbooks must run through Container Groups).
For an improved ux, our intention is to introduce logic into the
operator install process such that the *default* group that's created at
install time is a *Container Group* that's configured to point at the
K8S cluster where awx itself is deployed.
2021-03-03 18:52:55 -05:00
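The commit message above describes two scheduling rules for K8S-based installs: traditional instances carry zero capacity, and zero-impact control-plane tasks may still be placed on them. Below is a toy sketch of that placement rule with made-up names, not the task manager's actual code.

```python
# Toy sketch of the placement rule described above (names are made up).
def fits_on_instance(task_impact, capacity, consumed_capacity):
    if task_impact == 0:
        # Control-plane work (zero impact) can run on zero-capacity instances.
        return True
    return consumed_capacity + task_impact <= capacity

print(fits_on_instance(0, 0, 0))    # True: zero-impact task on a control node
print(fits_on_instance(10, 0, 0))   # False: playbooks must go to a container group
```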
Alan Rominger
c29d476919 Fix obvious code error with foreman inventory 2021-03-03 18:52:55 -05:00
Alan Rominger
b05b6b2e03 Fix minor syntax error failing AdHocCommands 2021-03-03 18:52:55 -05:00
Alan Rominger
3f76499c56 Use the fully qualified inventory plugin name only for foreman 2021-03-03 18:52:55 -05:00
Shane McDonald
e63383bde6 Add PATH to blocked inventory source vars
This used to be skipped because PATH was already present in the env we
constructed for runner.
2021-03-03 18:52:55 -05:00
Shane McDonald
c6be92cdf6 Create awx group in container 2021-03-03 18:52:55 -05:00
Shane McDonald
341e1e34e3 Dont zip/unzip private data dir for local jobs 2021-03-03 18:52:55 -05:00
Shane McDonald
70755a395b Make receptorctl easier to use in dev env 2021-03-03 18:52:55 -05:00
Shane McDonald
fe9b24cde2 flake8 2021-03-03 18:52:55 -05:00
Shane McDonald
70f7a082bb Minimally functional container group v2 w/ receptor 2021-03-03 18:52:55 -05:00
Shane McDonald
9df29e8fc4 Use official awx-ee by default 2021-03-03 18:52:55 -05:00
Shane McDonald
d37cb64aaf Delete some old container group v1 code 2021-03-03 18:52:55 -05:00
Shane McDonald
1d9f01a201 Deleted unused build_params_process_isolation method 2021-03-03 18:52:55 -05:00
Shane McDonald
373bb443aa UnifiedJob#is_containerized -> UnifiedJob#is_container_group_task 2021-03-03 18:52:55 -05:00
Shane McDonald
286b1d4e25 InstanceGroup#is_containerized -> InstanceGroup#is_container_group 2021-03-03 18:52:55 -05:00
Shane McDonald
7b7465f168 Update receptor config to allow for runtime options 2021-03-03 18:52:55 -05:00
Shane McDonald
e453afa064 FOLLOW UP ON THIS: Fix fact_cache directory location
The part where we pass in the runner params to the processor phase is
legit. Need to investigate why the fact_cache directory is no longer nested
under job.id.
2021-03-03 18:52:55 -05:00
Shane McDonald
cf96275f1b Pull awx -> receptor job code into its own class 2021-03-03 18:52:54 -05:00
Shane McDonald
be8168b555 Surface errors when launching jobs through Receptor
This will raise errors such as:

exec: "ansible-runner": executable file not found in $PATH
2021-03-03 18:52:54 -05:00
Shane McDonald
fd92ba0c0b Actually cancel things 2021-03-03 18:52:54 -05:00
Shane McDonald
0184a7c267 Create receptor mesh in cluster development environment 2021-03-03 18:52:54 -05:00
Shane McDonald
81f6d36a3a Set SDB_NOTIFY_HOST for all processes 2021-03-03 18:52:54 -05:00
Shane McDonald
f1df4c54f8 Begin integrating receptor 2021-03-03 18:52:54 -05:00
Shane McDonald
521d3d5edb Initial EE integration 2021-03-03 18:52:54 -05:00
Shane McDonald
acee22435b Update ExecutionEnvironments.jsx with breadcrumb replacement 2021-03-03 18:52:54 -05:00
Shane McDonald
0c497fa682 Get podman-in-docker working under cgroups v2 2021-03-03 18:52:54 -05:00
Alan Rominger
90b9c7861c Allow jobs to run in the base ansible-runner image (#8949) 2021-03-03 18:52:54 -05:00
Alan Rominger
eb5bf599e3 Fix raw archive project updates
Several squashed commits

Fix git bug introduced by setting remote tmp in project path
change shebang back to py3 again
Revert shebang change
2021-03-03 18:52:54 -05:00
Alan Rominger
10e68c6fb3 Fix unit test fallout 2021-03-03 18:52:54 -05:00
Alan Rominger
49bdadcdbf Fix yet another host vs container path bug 2021-03-03 18:52:54 -05:00
Alan Rominger
015fc29c1c Fix another svn issue due to pre-existing folder 2021-03-03 18:52:54 -05:00
Alan Rominger
0dfb183cb6 Fix another credential path-in-container bug 2021-03-03 18:52:54 -05:00
Alan Rominger
ba14634318 Fix collection pep8 failure 2021-03-03 18:52:54 -05:00
Alan Rominger
b953478225 Change the default EE location 2021-03-03 18:52:54 -05:00
Jeff Bradberry
9964ba7c9a Improve the behavior of EE resolution for ad hoc commands
- call resolve_execution_environment during AdHocCommand.save()
- wrap the fallback call of the resolver in tasks.py in disable_activity_stream()
2021-03-03 18:52:54 -05:00
Shane McDonald
12b8349e88 Show EE images that are managed by tower in UI 2021-03-03 18:52:54 -05:00
Jeff Bradberry
c74d60f3f3 Make sure that the new credential type is in the choices list 2021-03-03 18:52:54 -05:00
Jeff Bradberry
44ad6bfdce Insert a default EE into the development environment 2021-03-03 18:52:54 -05:00
Jeff Bradberry
fde7a1e3e5 Ensure that the updated job instance is used
when attaching an EE.
2021-03-03 18:52:54 -05:00
Jeff Bradberry
4a0fc3e1af Ensure that a fallback EE is available to be found
for the failing tests.
2021-03-03 18:52:54 -05:00
Jeff Bradberry
5f1da2b923 Adjust ExecutionEnvironmentAccess to account for the new EE admin role 2021-03-03 18:52:54 -05:00
Jeff Bradberry
e7bf81883b Populate the EE name field in awxkit 2021-03-03 18:52:54 -05:00
Jeff Bradberry
4993a9e6ec Move the resolve_execution_environment method to the mixin class
so that it can be used with AdHocCommands as well.
2021-03-03 18:52:54 -05:00
Jeff Bradberry
8562c378c0 Make use of the EE resolver code when launching jobs 2021-03-03 18:52:54 -05:00
Jeff Bradberry
6d935f740c Fill in the new execution environment collection module
as well as changes to other ones that need to be able to attach EEs.
2021-03-03 18:52:54 -05:00
Jeff Bradberry
c1133b3f6d Add in more model changes around execution environments
- a new unique name field to EE
- a new configure-Tower-in-Tower setting DEFAULT_EXECUTION_ENVIRONMENT
- an Org-level execution_environment_admin_role
- a default_environment field on Project
- a new Container Registry credential type
- order EEs by reverse of the created timestamp
- a method to resolve which EE to use on jobs
2021-03-03 18:52:54 -05:00
Alan Rominger
c0faa39b53 Remove files moved to the ansible/awx-ee repo
These have been moved to:

https://github.com/ansible/awx-ee

that will be the home for the processes needed to
build this execution environment.
2021-03-03 18:52:54 -05:00
Alan Rominger
7a433f4e8f Change the shebang back to just python 2021-03-03 18:52:54 -05:00
Alan Rominger
2302496724 Add back in the subversion requirement 2021-03-03 18:52:54 -05:00
Alan Rominger
54681eb055 Add utility method to get controller private_data_dir 2021-03-03 18:52:54 -05:00
Alan Rominger
b716e2b099 Make insights integration tests pass again 2021-03-03 18:52:54 -05:00
Alan Rominger
69dcbe0865 More inventory update containerization fixes 2021-03-03 18:52:54 -05:00
Shane McDonald
14a8e3da5e WIP: containerized inventory updates. Thanks ALAN!! 2021-03-03 18:52:54 -05:00
Shane McDonald
6ff1424e8c Fix tests after rebasing in inventory update refactor 2021-03-03 18:52:54 -05:00
nixocio
9786dc08d3 Add organization as part of creating/editing an execution environment
Add organization as part of creating/editing an execution environment

If one is a `system admin` the Organization is an optional field. Not
providing an Organization makes the execution environment globally
available.

If one is an `org admin` the Organization is a required field.

See: https://github.com/ansible/awx/issues/7887
2021-03-03 18:52:54 -05:00
Shane McDonald
ecaa66c13b Fix linter 2021-03-03 18:52:54 -05:00
Shane McDonald
ee1d322336 WIP: Module for EEs 2021-03-03 18:52:54 -05:00
Shane McDonald
1f4a45a698 Remove "pull" field from EE mixin
I think this should go on the EE definition itself
2021-03-03 18:52:54 -05:00
Shane McDonald
966bb6fc74 Back to green 2021-03-03 18:52:54 -05:00
Shane McDonald
f554f45288 Add license for receptor 2021-03-03 18:52:54 -05:00
Shane McDonald
5c2b2dea0c REVERT ME: Install community.general in image
This is needed for the wait_fors in the launch scripts to work
2021-03-03 18:52:54 -05:00
Shane McDonald
fd9373a9ec Use official receptor image 2021-03-03 18:52:54 -05:00
Shane McDonald
82a641e173 Add AWX EE definition 2021-03-03 18:52:54 -05:00
Shane McDonald
490f719fd9 Add new ee container 2021-03-03 18:52:54 -05:00
Shane McDonald
46f5cb6b7a Install receptorctl in awx venv 2021-03-03 18:52:54 -05:00
Shane McDonald
efb25b7b9e Use WIP version of collections_requirements.yml
Pulled from ansible-builder/test/data/awx
2021-03-03 18:52:54 -05:00
Shane McDonald
87b13ead12 REVERT ME: Install ansible/devel for now 2021-03-03 18:52:54 -05:00
Alan Rominger
7c6975baec Collections volume permission fix, and container group fix
Use same image for both types of container isolation

Inventory move fix related to container groups
2021-03-03 18:52:54 -05:00
Kersom
6e6cd51b4d Update usage of summary_fields for execution environments (#8217)
Update usage of summary_fields for execution environments. Also, update
unit-tests to cover this change.

See: https://github.com/ansible/awx/issues/8216
2021-03-03 18:52:54 -05:00
Jeff Bradberry
3d233faed8 Expose the user capabilities dict for EEs (#8208) 2021-03-03 18:52:54 -05:00
Kersom
54d0f173dc Add details page for Execution Environments (#8172)
* Add feature to Add/Edit Execution Environments

Add feature to Add/Edit Execution Environments.

Also, add key for `ExecutionEnvironmentsList`.

See: https://github.com/ansible/awx/issues/7887

* Add details page for execution environments

Add details page for execution environments

See: https://github.com/ansible/awx/issues/8171
2021-03-03 18:52:54 -05:00
Kersom
684b9bd47a Add feature to Add/Edit Execution Environments (#8165)
* Add feature to Add/Edit Execution Environments

Add feature to Add/Edit Execution Environments.

Also, add key for `ExecutionEnvironmentsList`.

See: https://github.com/ansible/awx/issues/7887

* Update registry credential label
2021-03-03 18:52:54 -05:00
Alan Rominger
9530c6ca50 Changes to get execution environments factories working (#8126) 2021-03-03 18:52:54 -05:00
Kersom
b7209d1694 Add list Execution Environments (#8148)
See: https://github.com/ansible/awx/issues/7886
2021-03-03 18:52:54 -05:00
Alan Rominger
9d806ddb82 Initial minimal hooking up of JT EEs to jobs 2021-03-03 18:52:54 -05:00
Alan Rominger
332c802317 Deal with missing HOME env var 2021-03-03 18:52:54 -05:00
Alan Rominger
9660e27246 Fix project folder deletion
Fix another absolute path reference in containers
2021-03-03 18:52:54 -05:00
Alan Rominger
64f45da4d2 Fix pathing issue for credential file references 2021-03-03 18:52:54 -05:00
Alan Rominger
73418e41f3 Fix pathing issue with custom credentials
also fix some minor flake8 issues
2021-03-03 18:52:54 -05:00
Alan Rominger
6e2010ca40 Respect user proot show paths when using containers 2021-03-03 18:52:54 -05:00
Alan Rominger
50433789ae Purge environment variables to work with ansible-runner changes
Remove inventory scripts show because they no longer exist

Remove reference to non-existent callback directory

Remove more references to removed paths
2021-03-03 18:52:54 -05:00
Alan Rominger
a3f0158a94 Add Z to volume mount
Update to AWX execution environment
  use the special 2.9 container image
  revert setting back for merge

Fix another permission error by mapping 2 folders
  also create folders before running
2021-03-03 18:52:54 -05:00
Alan Rominger
130bf076f4 Add Z to volume mount
Set ansible-runner back to main fork due to merge
2021-03-03 18:52:54 -05:00
Shane McDonald
06d7a61ca1 Initial EE integration 2021-03-03 18:52:54 -05:00
Kersom
297fecba3a Add execution environments files (#7909)
Update navigation bar and routing system to add execution environments.
Also, add stub files for the remaining related work.

See: https://github.com/ansible/awx/issues/7885
Also: https://github.com/ansible/awx/issues/7884
2021-03-03 18:52:54 -05:00
Jeff Bradberry
3cbf384ad1 Run a receptor node in the dev environment 2021-03-03 18:52:54 -05:00
Jeff Bradberry
45a0084f78 Add a sublist api view for the UJTs that use a given execution environment 2021-03-03 18:52:54 -05:00
Jeff Bradberry
f9741b619c Make changes to support capture by the activity stream
Including exposing a new API view for a particular EE's activity
stream objects.
2021-03-03 18:52:54 -05:00
Jeff Bradberry
5ec7378135 Add a new Swagger topic 2021-03-03 18:52:54 -05:00
Jeff Bradberry
c05e4e07ee Expose execution environments in awxkit and awx-cli 2021-03-03 18:52:54 -05:00
Jeff Bradberry
cc429f9741 Expose an API view for all of the execution environments under an org 2021-03-03 18:52:54 -05:00
Jeff Bradberry
cb766c6a95 Add execution_environment and pull to the fields for UJs and UJTs 2021-03-03 18:52:54 -05:00
Jeff Bradberry
3c637cd54c Change OrganizationSerializer to show and set default_environment 2021-03-03 18:52:53 -05:00
Jeff Bradberry
61cbd34586 Add in the basic list and detail api views 2021-03-03 18:52:53 -05:00
Jeff Bradberry
9697999ddd Create the RBAC access class for execution environments 2021-03-03 18:52:53 -05:00
Jeff Bradberry
41613ff544 Add a new ExecutionEnvironment model 2021-03-03 18:52:53 -05:00
softwarefactory-project-zuul[bot]
0af7f046f0 Merge pull request #9474 from ryanpetrello/badges
add some new fancy badges

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-03 21:55:51 +00:00
Ryan Petrello
efe03c396b add some new fancy badges 2021-03-03 16:27:04 -05:00
mabashian
9b1d6ab441 Adds ouiaId to inventory warning 2021-03-03 13:38:12 -05:00
mabashian
e9d524ebc0 Space out the inventory warning 2021-03-03 13:36:06 -05:00
sean-m-ssullivan
c4d8d5ee9e Merge branch 'copy_awx_collection' of github.com:sean-m-sullivan/awx into copy_awx_collection 2021-03-03 12:20:53 -06:00
mabashian
75746f2c69 Adds warning message to inventory step when launching wfjt or creating node with wfjt 2021-03-03 13:20:30 -05:00
sean-m-ssullivan
6d88a81cbd update logic 2021-03-03 12:20:13 -06:00
Sean Sullivan
56d6479cd8 Merge pull request #48 from ansible/devel
Rebase
2021-03-03 09:56:54 -06:00
softwarefactory-project-zuul[bot]
7be129f9fa Merge pull request #9464 from beeankha/test_playbook_cleanup
Integration Test Playbook Cleanup for Collections

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-03 15:55:24 +00:00
softwarefactory-project-zuul[bot]
4d21d699bb Merge pull request #9465 from nixocio/ui_unfix_async
Revert change on WorkflowJobTemplateAdd

Reviewed-by: Kersom
             https://github.com/nixocio
2021-03-03 15:40:39 +00:00
nixocio
899a5fca0b Revert change on WorkflowJobTemplateAdd
Revert removal of `await` on WorkflowJobTemplateAdd

See: https://github.com/ansible/awx/pull/9459
2021-03-03 10:05:42 -05:00
Ryan Petrello
4456ae2d71 if rsyslogd cannot be reached, note the failure in sys.stderr
see: https://github.com/ansible/awx/issues/8505
2021-03-03 09:46:33 -05:00
beeankha
3153587c90 Clean up Collections test playbooks 2021-03-03 09:44:35 -05:00
Sean
a0090c7c52 update logic 2021-03-02 23:55:03 -06:00
softwarefactory-project-zuul[bot]
3903e88a47 Merge pull request #8992 from saito-hideki/awx/issue/8839
Add inventory source and project links to details view of Jobs list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-03 02:09:06 +00:00
softwarefactory-project-zuul[bot]
11b8ca1ef9 Merge pull request #9443 from rooftopcellist/update_translations_targets
Update translations targets

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-02 23:52:56 +00:00
softwarefactory-project-zuul[bot]
0b54ee65f5 Merge pull request #9459 from nixocio/ui_fix_minor_issues
Fix minor UI issues

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-02 19:51:07 +00:00
Christian M. Adams
6b8267d2b8 add extract-template to pot target 2021-03-02 14:45:34 -05:00
mabashian
1f93d3ad69 Adds tests for workflow save error handling. Removes unnecessary code that was attempting to remove credentials from a new node. 2021-03-02 14:27:48 -05:00
nixocio
5969d9e3e2 Fix minor UI issues
Fix minor UI issues

* fix typo
* add string to be translated
* remove not necessary `await`
2021-03-02 14:13:56 -05:00
Sean Sullivan
e05db68bde Reverted tower credential test
Reverted tower credential test
2021-03-02 13:00:08 -06:00
Jake McDermott
e92f1187d2 Use credential_type for prompted multicred select categories 2021-03-02 10:27:49 -05:00
Christian M. Adams
53d0611cf8 Update translation make targets and add init PO files 2021-03-02 09:51:04 -05:00
softwarefactory-project-zuul[bot]
d1c49d45bf Merge pull request #9047 from AlexSCorey/8923-JTPOLPagination
Fixes pagination issue on modal

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-03-01 20:04:12 +00:00
mabashian
a293a60d5c Disable job templates in node modal that are missing inv or project 2021-03-01 11:47:34 -05:00
softwarefactory-project-zuul[bot]
4cdec9c297 Merge pull request #9417 from mabashian/typeahead-select-ouia
Adds ouia id's to typeahead select components

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-27 19:52:07 +00:00
softwarefactory-project-zuul[bot]
4baada37d8 Merge pull request #9037 from dejongm/devel
Update defaults.py RH Satellite settings to use new Foreman plugin variables.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-25 16:34:31 +00:00
mabashian
cb3f7b9ef5 Remove duplicate ouia id 2021-02-25 11:15:56 -05:00
mabashian
1c7fbc2806 Adds ouia id's to typeahead select components 2021-02-25 11:15:10 -05:00
softwarefactory-project-zuul[bot]
060578b30b Merge pull request #9196 from keithjgrant/6189-admin-list-tables
Convert admin pages lists to tables

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-25 13:40:50 +00:00
softwarefactory-project-zuul[bot]
cb05b54404 Merge pull request #9224 from jakemcdermott/add-mgmt-jobs
Add system jobs interface w/ configurable data retention

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2021-02-24 19:08:55 +00:00
Jake McDermott
218b97883d Fix schedule loading flicker 2021-02-24 12:20:37 -05:00
Jake McDermott
d337948bd6 Use system job type as identifier 2021-02-24 12:20:30 -05:00
Jake McDermott
5d51a4e781 Fix system job list item key 2021-02-24 12:20:27 -05:00
Jake McDermott
a0beb9e445 Remove instructions to add on empty mgmt jobs 2021-02-24 12:20:24 -05:00
Jake McDermott
a9aa91d9f2 Remove duplicate notification admin request 2021-02-24 12:20:20 -05:00
Jake McDermott
2f56a20484 Add data retention field for schedule creation 2021-02-24 12:20:17 -05:00
Jake McDermott
4985fb6ffa Add case insensitive search for Name 2021-02-24 12:20:14 -05:00
Jake McDermott
b545a6510f Fix Data retention label 2021-02-24 12:20:12 -05:00
Jake McDermott
df7b168911 Add actions column
Co-authored-by: Tiago Góes <tiago.goes2009@gmail.com>
2021-02-24 12:20:09 -05:00
Jake McDermott
83b449fd30 Add sysjob data retention to schedules
* Migrate management jobs list to tables
 * Use cancel link variant for consistency with other prompts
 * Add basic test coverage for sysjobs
 * Remove select-all from mgmt jobs
 * Remove unneeded component variables
 * Fix missing schedule breadcrumb
 * Optimize data fetching with useCallback
2021-02-24 12:20:05 -05:00
Jake McDermott
a00c8920ce Remove default sysjob days 2021-02-24 12:20:03 -05:00
Jake McDermott
a07b1a19f3 Add system prompt and config 2021-02-24 12:20:00 -05:00
Jake McDermott
a95e554a16 Only render edit control if editable 2021-02-24 12:19:57 -05:00
Jake McDermott
4c92d02540 Add mgmt job details 2021-02-24 12:19:54 -05:00
Jake McDermott
a0bdf8cdae Add default sysjob days 2021-02-24 12:19:52 -05:00
Jake McDermott
daaabd935c Add mgmt job launch 2021-02-24 12:19:49 -05:00
Jake McDermott
eaf55728d8 Add mgmt job list 2021-02-24 12:19:46 -05:00
Jake McDermott
45acd15c82 Add mgmt job notifications 2021-02-24 12:19:43 -05:00
Jake McDermott
3f936cd5e7 Add mgmt job schedules 2021-02-24 12:19:40 -05:00
Jake McDermott
5d9d486f9c Resolve notification admin status with config 2021-02-24 12:19:37 -05:00
Jake McDermott
d3f2dedbd5 Add routing system for mgmt jobs 2021-02-24 12:19:31 -05:00
softwarefactory-project-zuul[bot]
fe605596b5 Merge pull request #9406 from nixocio/ui_issue_9323
Do not show tooltip with empty content

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-24 15:52:56 +00:00
nixocio
bf601fc988 Do not show tooltip with empty content
Do not show tooltip with empty content.

See: https://github.com/ansible/awx/issues/9323
2021-02-24 09:29:53 -05:00
softwarefactory-project-zuul[bot]
7f36efe8dd Merge pull request #9400 from ansible/local-docker-update-notes
some more data migration notes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-24 14:19:32 +00:00
softwarefactory-project-zuul[bot]
615cc11d0d Merge pull request #9120 from AlexSCorey/9043-InventoryFileField
Inventory File field and playbook field are both now type ahead.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-24 14:15:28 +00:00
softwarefactory-project-zuul[bot]
3b520cdeee Merge pull request #9391 from sean-m-sullivan/collections_md_update
Collections md update

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2021-02-23 23:01:20 +00:00
Ryan Petrello
c245abefcb some more data migration notes
some things I noticed while going through an upgrade
2021-02-23 17:23:37 -05:00
softwarefactory-project-zuul[bot]
6a093c8e8b Merge pull request #9395 from rooftopcellist/migrate-path
fix dev env migrate.yml path in docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-23 21:38:27 +00:00
Christian M. Adams
2097b014e8 fix dev env migrate.yml path in docs 2021-02-23 16:04:22 -05:00
softwarefactory-project-zuul[bot]
fb6ce4bed3 Merge pull request #9387 from rooftopcellist/secret_key_dev
Mount SECRET_KEY into dev env & document it

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-23 20:22:58 +00:00
Shane McDonald
7ae19a8ba3 Fix tests 2021-02-23 14:50:27 -05:00
softwarefactory-project-zuul[bot]
240556b5d6 Merge pull request #9392 from rooftopcellist/update-docs-ansible-install
Add ansible requirement for dev env docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-23 18:37:36 +00:00
Christian M. Adams
1b6976205c Add ansible requirement for dev env docs 2021-02-23 13:03:47 -05:00
sean-m-sullivan
ce21dec0d0 update 2021-02-23 11:50:29 -06:00
softwarefactory-project-zuul[bot]
22df3047e6 Merge pull request #9381 from wenottingham/no-more-email
Make email not required for a user in the UI.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-23 16:27:56 +00:00
softwarefactory-project-zuul[bot]
79cb11d905 Merge pull request #9385 from ryanpetrello/buildkit
build images using Buildkit

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-23 15:20:06 +00:00
Ryan Petrello
be79b6e907 build images using Buildkit 2021-02-23 09:32:00 -05:00
Christian M. Adams
2239c8d6e5 Add full path to data migration example in docs 2021-02-23 09:07:37 -05:00
Shane McDonald
7394535022 Remove default SECRET_KEY
We should never be using default values for sensitive information
2021-02-23 08:46:42 -05:00
Christian M. Adams
ec40f62c4d Mount SECRET_KEY into dev env & document it 2021-02-22 18:46:47 -05:00
softwarefactory-project-zuul[bot]
5e6c978a47 Merge pull request #9289 from rooftopcellist/docker-community
Replace Local Docker Install with Community Docker-Compose

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-22 23:25:24 +00:00
Alex Corey
a54352898e makes the inventory file field creatable 2021-02-22 18:05:09 -05:00
Christian M. Adams
b583aeb646 Pass docker project_name whenever docker-compose is used
- Also, do not explicitly name containers
2021-02-22 17:05:51 -05:00
Christian M. Adams
af6af052d0 Remove unneeded roles from the installer directory 2021-02-22 15:37:24 -05:00
Shane McDonald
70325fd249 Remove nodejs from list of required dependencies to install 2021-02-22 13:44:22 -05:00
Shane McDonald
e935b06c39 Fix docker-compose-clean 2021-02-22 13:44:22 -05:00
Shane McDonald
8f9758803e Set compose project dir for backwards compatibility 2021-02-22 13:44:22 -05:00
Christian M. Adams
9672e72834 Consolidate the Local Docker installer and the dev env
- removes local_docker installer and points community users to our development environment (make docker-compose)
  - provides a migration path from Local Docker Compose installations --> the dev environment
  - the dev env can now be configured to use an external database
  - consolidated the Local Docker and dev env docker-compose.yml files into one template file, used by the dockerfile role
  - added a 'sources' role to template out config files
  - the postgres data dir is no longer a bind-mount, it is a docker volume
  - the redis socket is no longer a bind-mount, it is a docker volume
  - the local_settings.py.docker-compose file no longer needs to be copied over in the dev env
  - Create tmp rsyslog.conf in rsyslog volume to avoid cross-linking. Previously, the tmp code-generated rsyslog.conf was being written to /tmp (by default).  As a result, we were attempting to shutil.move() across volumes.
  - move k8s image build and push roles under tools/ansible
  - See tools/docker-compose/README.md for usage of these changes
2021-02-22 13:44:19 -05:00
Bill Nottingham
fb07be36b5 Make email not required for a user in the UI.
It's already not required in the API.
2021-02-22 13:28:03 -05:00
softwarefactory-project-zuul[bot]
0f6d2c36a0 Merge pull request #9131 from sezanzeb/patch-1
Document admin_password in INSTALL.md

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-22 17:36:55 +00:00
softwarefactory-project-zuul[bot]
805ba2568c Merge pull request #9322 from jainnikhil30/workflow_launch_type_metavar
add workflow_job_launch_type in metavars

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-22 17:27:32 +00:00
softwarefactory-project-zuul[bot]
e0bbd36b99 Merge pull request #9378 from tchellomello/guid-configmap-k8s
Added GUID filters for k8s/OCP environment

Reviewed-by: awxbot
             https://github.com/awxbot
2021-02-22 15:00:28 +00:00
softwarefactory-project-zuul[bot]
3b1b55946e Merge pull request #9334 from mabashian/8372-pending-approvals
Add pending approvals badge to application header

Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
             https://github.com/tiagodread
2021-02-22 14:47:30 +00:00
Alex Corey
189804932e Inventory File field and playbook field are both now type ahead. 2021-02-22 09:15:27 -05:00
Marcelo Moreira de Mello
61da965d54 Added GUID filters for k8s/OCP environment 2021-02-22 09:13:22 -05:00
softwarefactory-project-zuul[bot]
1f1657d880 Merge pull request #9210 from AlexSCorey/7692-PromptsOnSchedules
Prompts on schedules

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-22 13:25:12 +00:00
sean-m-sullivan
57f4db25d2 linting 2021-02-21 23:51:21 -06:00
sean-m-sullivan
115726ed83 fix completeness 2021-02-21 23:29:30 -06:00
sean-m-sullivan
9a7dd38cbb add copy to modules 2021-02-21 19:49:14 -06:00
Sean Sullivan
74a5247d9d Merge pull request #42 from ansible/devel
Rebase
2021-02-20 05:58:30 -06:00
softwarefactory-project-zuul[bot]
cc18cf650e Merge pull request #9369 from jbradberry/fix-broken-logos
Fix the broken paths to the favicon and logo in the API browser

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2021-02-19 20:30:01 +00:00
Jeff Bradberry
78ccf3c674 Fix the broken paths to the favicon and logo in the API browser 2021-02-19 14:56:26 -05:00
Alex Corey
96531eff7a fixes survey row alignment issue 2021-02-19 14:27:50 -05:00
Alex Corey
561390d405 Refactors to add warning icon and disable save if schedule has missing values 2021-02-19 14:27:50 -05:00
Alex Corey
c608d761a2 Adds Prompting for schedule 2021-02-19 14:27:50 -05:00
Alex Corey
61c0beccff Updates props being passed to Schedules to more accurately reflect what they are 2021-02-19 14:27:49 -05:00
softwarefactory-project-zuul[bot]
182a7e8e5c Merge pull request #9251 from Saurabh-Thakre/Saurabh-Thakre-patch-1
Fixed the Customized Notification returning incorrect values for host_status_counts

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-19 16:07:30 +00:00
Elijah DeLee
0e6c14e707 create performance logger to log api response time
this middleware already existed, and we were trying to log this
data but it was not working.

The hope is that these logs can be shipped via external logging so that
we can use Kibana to track the response time of different endpoints
2021-02-18 18:39:12 -05:00
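To illustrate the idea behind the commit above, here is a minimal sketch of a response-time logging middleware in Django. The class name, logger name, and log fields are assumptions for illustration only, not the actual AWX middleware:

```python
import logging
import time

logger = logging.getLogger('awx.analytics.performance')  # assumed logger name


class TimingMiddleware:
    """Log how long each API request takes to serve."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.perf_counter()
        response = self.get_response(request)
        elapsed = time.perf_counter() - start
        # One record per request; an external log shipper can forward these
        # so per-endpoint response times can be graphed in Kibana.
        logger.info(
            'request finished',
            extra={
                'path': request.path,
                'method': request.method,
                'status_code': response.status_code,
                'response_time_ms': round(elapsed * 1000, 2),
            },
        )
        return response
```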
softwarefactory-project-zuul[bot]
080c430b82 Merge pull request #9356 from mabashian/fix-test-warnings
Fix a few unit test console warnings

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-18 15:14:44 +00:00
mabashian
40c6b346c1 Adds note to changelog about pending workflow approval count 2021-02-18 09:43:44 -05:00
mabashian
1c61fafbc7 Add pending approvals badge to application header 2021-02-18 09:42:51 -05:00
mabashian
afe36974de Fix a few unit test console warnings 2021-02-18 09:14:15 -05:00
softwarefactory-project-zuul[bot]
99460a76d8 Merge pull request #9347 from AlexSCorey/TestCoverageScript
Don't instrument for coverage by default unless CI

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-18 13:51:47 +00:00
softwarefactory-project-zuul[bot]
efbbbc0bdc Merge pull request #9188 from nixocio/ui_issue_8707
Allow user to remove organization from credentials

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-18 13:51:39 +00:00
Alex Corey
9090cb830d Don't instrument for coverage by default unless CI 2021-02-18 08:09:19 -05:00
Saurabh Thakre
6aca9d80bb Updated notifications.py
Fixed the customized notification returning incorrect values for host_status_counts

Update notifications.py

Removed if condition

Added exception handling

A nitpick
2021-02-18 12:24:14 +00:00
nixocio
33deaa7d0b Allow user to remove organization from credentials
Allow user to remove organization from credentials.

See: https://github.com/ansible/awx/issues/8707
2021-02-17 16:11:09 -05:00
softwarefactory-project-zuul[bot]
0bd4a8c16d Merge pull request #9340 from ansible/jakemcdermott-changelog
Add some items to the upcoming changelog

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-17 21:08:43 +00:00
softwarefactory-project-zuul[bot]
c98c6db664 Merge pull request #8640 from AlexSCorey/8130-DiplayOrgAdmins
Adds filter by role on Org access lists

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-17 21:00:09 +00:00
Jake McDermott
31e6d7cc0a Add some items to the upcoming changelog 2021-02-17 15:36:12 -05:00
softwarefactory-project-zuul[bot]
c758e6f8cf Merge pull request #9343 from beeankha/remove_unused_files
Remove Unused JSON Test Files

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2021-02-17 20:27:51 +00:00
softwarefactory-project-zuul[bot]
242c4e2533 Merge pull request #9338 from amolgautam25/fix-for-tower-issue-4696
fix for issue 4696(tower) , Upstream commit for tower PR 4840

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-17 20:15:59 +00:00
softwarefactory-project-zuul[bot]
d7d496553e Merge pull request #9332 from ryanpetrello/guid-trace
add a per-request GUID and log as it travels through background services

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-17 20:07:44 +00:00
Alex Corey
1f59d6182b Adds filtering for system level roles 2021-02-17 15:05:31 -05:00
beeankha
64b49feeee Remove functional test files that are no longer in use 2021-02-17 14:42:50 -05:00
Keith Grant
2419a592b4 convert NotificationList to tables 2021-02-17 11:10:35 -08:00
Keith Grant
eb66a03a30 fix duplicate tooltip on notification template edit button 2021-02-17 11:09:25 -08:00
Keith Grant
87cf797153 convert application list to tables 2021-02-17 11:08:33 -08:00
Keith Grant
a481fc3cc9 convert instance group list to tables 2021-02-17 11:08:33 -08:00
Keith Grant
c3bab52a61 convert notification template list to tables 2021-02-17 11:08:33 -08:00
Keith Grant
b3cdefec23 convert CredentialTypeList to tables 2021-02-17 11:07:47 -08:00
Ryan Petrello
3cc3cf1f80 add a per-request GUID and log as it travels through background services
see: https://github.com/ansible/awx/issues/9329
2021-02-17 12:54:13 -05:00
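A minimal sketch of how a per-request GUID can be generated and attached to log records as work moves into background services. The names below (`request_guid`, `GuidFilter`, `assign_guid`) are hypothetical and only illustrate the technique, not the actual AWX implementation:

```python
import contextvars
import logging
import uuid

# Holds the GUID for the request currently being handled (assumed approach;
# the real implementation may store and propagate it differently).
request_guid = contextvars.ContextVar('request_guid', default='-')


class GuidFilter(logging.Filter):
    """Attach the current request GUID to every log record."""

    def filter(self, record):
        record.guid = request_guid.get()
        return True


def assign_guid():
    """Call at the start of a request; pass the value along to background tasks."""
    guid = uuid.uuid4().hex
    request_guid.set(guid)
    return guid
```

A web request would call `assign_guid()` on entry and include the returned value with any task it dispatches; the background worker then calls `request_guid.set(...)` with that value so its own log lines carry the same identifier end to end.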
Alex Corey
2a9a471181 adds sorting by role on org access lists 2021-02-17 11:17:20 -05:00
softwarefactory-project-zuul[bot]
2a37430eab Merge pull request #9336 from mabashian/9310-cred-edit
Fix bug where credential inputs were not filled on edit

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-17 16:02:37 +00:00
fedora
29520f1ca6 fix for issue 4696(tower) , Upstream commit for tower PR 4840 2021-02-17 10:14:06 -05:00
mabashian
051efa4a19 Adds unit test coverage for input rendering on credential edit 2021-02-17 09:56:48 -05:00
mabashian
e26015a084 Fix bug where credential inputs were not filled on edit 2021-02-16 18:50:24 -05:00
softwarefactory-project-zuul[bot]
4cb0366fcf Merge pull request #9320 from nixocio/ui_fix_minor_typo
Fix typo

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-16 16:58:57 +00:00
softwarefactory-project-zuul[bot]
365b3a2366 Merge pull request #9313 from sean-m-sullivan/job_template_limit
Fix defaults that were set to empty string in collections.

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2021-02-16 16:40:13 +00:00
softwarefactory-project-zuul[bot]
5c35bb335a Merge pull request #9324 from ansible/dependabot/pip/requirements/atomicwrites-1.4.0
Bump atomicwrites from 1.1.5 to 1.4.0 in /requirements

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-16 16:33:16 +00:00
softwarefactory-project-zuul[bot]
0df85b0a27 Merge pull request #9277 from nixocio/ui_7777_update_pf4
Update Patternfly to allow access to slider component

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-16 16:28:10 +00:00
nixocio
02c8f4cc59 Update Patternfly to allow access to slider component
Update Patternfly to allow access to slider component

See: https://github.com/ansible/awx/issues/7777
2021-02-16 10:44:22 -05:00
dependabot[bot]
fbe5832d5a Bump atomicwrites from 1.1.5 to 1.4.0 in /requirements
Bumps [atomicwrites](https://github.com/untitaker/python-atomicwrites) from 1.1.5 to 1.4.0.
- [Release notes](https://github.com/untitaker/python-atomicwrites/releases)
- [Commits](https://github.com/untitaker/python-atomicwrites/compare/1.1.5...1.4.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-02-16 14:43:53 +00:00
softwarefactory-project-zuul[bot]
b4956de6e4 Merge pull request #9306 from jlosito/dependabot-config
Allow dependabot to check python dependencies

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-16 14:43:24 +00:00
John Losito
5e91ae7b03 Change dependabot schedule to monthly 2021-02-16 09:07:31 -05:00
Nikhil Jain
1468e5908a add workflow_job_launch_type in metavars 2021-02-16 18:19:37 +05:30
softwarefactory-project-zuul[bot]
6e8c71a231 Merge pull request #9304 from wenottingham/stacks-n-such
Fix Openstack credential region implementation.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-16 03:40:12 +00:00
softwarefactory-project-zuul[bot]
56868dbedd Merge pull request #9225 from nixocio/ui_issue_8670
Add relaunch against failed hosts

Reviewed-by: Mat Wilson <mawilson@redhat.com>
             https://github.com/one-t
2021-02-16 00:18:07 +00:00
nixocio
3c8f5f666f Fix typo
Fix typo on error message
2021-02-15 17:41:32 -05:00
Sean Sullivan
c7bfc60be3 Merge pull request #41 from ansible/devel
Rebase
2021-02-15 14:23:29 -06:00
sean-m-sullivan
62d91365a7 add test for ad hoc no module arg 2021-02-15 12:23:18 -06:00
softwarefactory-project-zuul[bot]
4e48118704 Merge pull request #9287 from mabashian/7679-copy-inv-w-source
Disable inventory copy button when inventory has sources

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-15 17:19:49 +00:00
softwarefactory-project-zuul[bot]
8bcc91518e Merge pull request #9295 from jainnikhil30/add_diff_support
add support for diff mode in tower_settings module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-15 16:52:32 +00:00
mabashian
f6bddfd336 Fix linting error in CopyButton test 2021-02-15 11:33:38 -05:00
mabashian
a691caf346 Disable inventory copy button when inventory has sources. Refactor copy button props since tooltip is now handled by the ActionItem component. 2021-02-15 11:19:01 -05:00
Nikhil Jain
3ad2bb1bb9 remove whitespaces 2021-02-15 09:31:16 +05:30
sean-m-sullivan
5548f7e91d fix default value bug 2021-02-14 17:17:04 -06:00
Sean Sullivan
6d08c11506 Merge pull request #39 from ansible/devel
Rebase
2021-02-14 01:29:54 -06:00
John Losito
055222bb82 Allow dependabot to check python dependencies 2021-02-13 20:37:33 -05:00
Hideki Saito
82a226d1fe Add links to inventory source and project to inventory update details view of Jobs list
* Addresses #8839

Signed-off-by: Hideki Saito <saito@fgrep.org>
2021-02-13 22:08:35 +09:00
Bill Nottingham
e93518a030 Fix Openstack credential region implementation.
The injector wasn't using the same variable name as the model.
2021-02-12 17:44:44 -05:00
nixocio
33b1028882 Add relaunch against failed hosts
Add relaunch against failed hosts

See: https://github.com/ansible/awx/issues/8670
2021-02-12 17:29:00 -05:00
softwarefactory-project-zuul[bot]
67f1f9ac69 Merge pull request #9278 from AlexSCorey/PreLingUI-Upgrade-3
Fixes final files in preparation for lingui upgrade

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-12 22:00:46 +00:00
softwarefactory-project-zuul[bot]
f99f183d53 Merge pull request #9301 from jbradberry/bump-ansible-runner
Bump ansible-runner to get the pexpect fix

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-12 19:59:59 +00:00
softwarefactory-project-zuul[bot]
2e179609d9 Merge pull request #9300 from darrenjones24/devel
Fix #9298 by upgrading pip allowing use of custom_venvs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-12 17:21:35 +00:00
Jeff Bradberry
8fb0b401ce Bump ansible-runner to get the pexpect fix 2021-02-12 12:08:21 -05:00
darren
ed124c693a remove unnecessary --update 2021-02-12 16:47:14 +00:00
softwarefactory-project-zuul[bot]
949c777546 Merge pull request #9284 from sean-m-sullivan/fix-awx_collection-docs
Fix awx collection docs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-12 16:31:13 +00:00
darren
3f5cb645f2 fix 9298 by upgrading pip allowing use of custom_venvs 2021-02-12 16:24:18 +00:00
sean-m-sullivan
d83771f082 revert changes 2021-02-12 08:35:27 -06:00
Nikhil Jain
b177c7b558 add support for diff mode in tower_settings module 2021-02-12 08:19:32 +05:30
sean-m-sullivan
2b664d6958 add default 2021-02-11 12:28:55 -06:00
softwarefactory-project-zuul[bot]
909c2513ac Merge pull request #9116 from AlexSCorey/8944-CredTypeLookup
Fixes missing credential types and makes credential type field a type ahead

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-11 04:02:44 +00:00
softwarefactory-project-zuul[bot]
5f7c056f13 Merge pull request #9290 from jakemcdermott/relax-yall
Relax client-side email validator

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-11 02:27:58 +00:00
softwarefactory-project-zuul[bot]
56881c25c9 Merge pull request #9288 from mabashian/9060-job-tags
Show job and skip tags on job detail view

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-10 23:38:34 +00:00
softwarefactory-project-zuul[bot]
aa855c1d57 Merge pull request #9170 from nixocio/ui_issue_7676_tables
Add tooltips to inventory sync status

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-10 22:54:37 +00:00
Jake McDermott
09b097d6b9 Relax client-side email validator 2021-02-10 17:35:30 -05:00
sean-m-sullivan
ed8b4a3c70 Merge branch 'fix-awx_collection-docs' of github.com:sean-m-sullivan/awx into fix-awx_collection-docs 2021-02-10 15:45:38 -06:00
sean-m-sullivan
d4f86e8999 update test 2021-02-10 15:45:17 -06:00
Alex Corey
17627ed6ce Closes select on selection and truncates list items 2021-02-10 14:56:53 -05:00
Alex Corey
32ddfdf590 adds static ID for testing 2021-02-10 14:56:53 -05:00
Alex Corey
4a2a6949a8 Fixes missing credential types and makes credential type drop down a typeahead component 2021-02-10 14:56:53 -05:00
mabashian
5ba6d14e4d Show job and skip tags on job detail view 2021-02-10 13:42:27 -05:00
Sean Sullivan
c7793e0b9c Merge pull request #38 from ansible/devel
Rebase
2021-02-10 10:22:40 -06:00
Sean Sullivan
44d7915993 Merge pull request #37 from ansible/devel
Rebase
2021-02-10 10:21:50 -06:00
softwarefactory-project-zuul[bot]
2ef08b1d13 Merge pull request #9052 from jgroom33/patch-1
Update custom_virtualenvs.md

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-10 16:20:18 +00:00
Sean Sullivan
16aa73fc2c Update inventory source req fields
As per the GUI and testing, source is a required field.
2021-02-10 08:47:51 -06:00
Sean Sullivan
5013d74f46 Remove required field in doc
new name is not a required field, removing that designation from the docs.
2021-02-10 08:42:06 -06:00
softwarefactory-project-zuul[bot]
3d54ab9a0f Merge pull request #8897 from wenottingham/right-on-schedule
Add schedule info from summary fields to allowed notification content.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-09 22:41:13 +00:00
Alex Corey
9420837dae Fixes final files in preparation for lingui upgrade 2021-02-09 17:24:15 -05:00
softwarefactory-project-zuul[bot]
a613cb2f78 Merge pull request #9227 from sean-m-sullivan/project_update
Add project update to project module

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-09 22:00:19 +00:00
softwarefactory-project-zuul[bot]
92c9f6b0f7 Merge pull request #9268 from fosterseth/fix_requirements_json_log_formatter
Add version to json-log-formatter in requirements.txt file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-09 16:32:19 +00:00
softwarefactory-project-zuul[bot]
c7a558b027 Merge pull request #9274 from constreference/9273
[9273]: Add missing symlinks for awx-manage

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-09 16:21:28 +00:00
Siva Renganathan
4f40381b46 [9273]: Add missing symlinks for awx-manage
This fixes the issue addressed in 9273
The symlinks were created in the build container as opposed to the final
container, causing the awx-manage command to break.

Signed-off-by: Siva Renganathan <siva.rg@protonmail.com>
2021-02-09 20:37:11 +05:30
softwarefactory-project-zuul[bot]
2c9ef3bae6 Merge pull request #9182 from keithjgrant/6189-hosts-users-teams-tables
Convert Hosts/Users/Teams list to tables

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-09 13:16:59 +00:00
sean-m-sullivan
f82163499c update python 2021-02-08 23:10:34 -06:00
Seth Foster
6f7bdc04c0 fix requirements file 2021-02-08 23:06:07 -05:00
softwarefactory-project-zuul[bot]
9fc8144f8e Merge pull request #9265 from shanemcd/fix-notification-tests
Fix test notifications

Reviewed-by: Ryan Petrello
             https://github.com/ryanpetrello
2021-02-08 23:24:24 +00:00
softwarefactory-project-zuul[bot]
f670a01bf7 Merge pull request #9264 from jakemcdermott/fix-output-error-attr-no-stdout
Handle error state with traceback and no stdout

Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
             https://github.com/jakemcdermott
2021-02-08 23:13:12 +00:00
Shane McDonald
0e2fb02185 Fix test notifications 2021-02-08 17:49:20 -05:00
Jake McDermott
b0520ae0e4 Handle error state with traceback and no stdout 2021-02-08 17:20:24 -05:00
softwarefactory-project-zuul[bot]
471ee46171 Merge pull request #9240 from marshmalien/7040-cancel-btn-style
Update cancel button style to "link"

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-08 19:21:09 +00:00
softwarefactory-project-zuul[bot]
fc6acbd9d1 Merge pull request #9148 from keithjgrant/6189-credentials-table
Convert Credentials & Projects lists to PaginatedTable

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-08 13:21:45 +00:00
Jim Ladd
5b3bb1e81d set schedule.next_run to datetime.datetime
* Update JobNotificationMixin.context_stub
* Update TestJobNotificationMixin.CONTEXT_STRUCTURE accordingly
2021-02-05 13:56:56 -08:00
Bill Nottingham
7b757f17a9 Add schedule info from summary fields to allowed notification content. 2021-02-05 13:56:56 -08:00
softwarefactory-project-zuul[bot]
753749d2b4 Merge pull request #9123 from nixocio/ui_issue_8355
Update migration page

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-05 18:38:09 +00:00
softwarefactory-project-zuul[bot]
73d4d14a62 Merge pull request #9199 from jakemcdermott/fix-8848
Add ability to cancel running jobs from output view

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-05 03:21:02 +00:00
Keith Grant
13131e96b6 delete unused import 2021-02-04 16:21:57 -08:00
Keith Grant
aaaeb37812 hide table scrollbar when not needed 2021-02-04 14:04:21 -08:00
Keith Grant
dfcfdc7a4e add actions column header to host, user, team lists 2021-02-04 13:52:37 -08:00
Keith Grant
4ca8862b51 convert TeamsList to tables 2021-02-04 13:52:37 -08:00
Keith Grant
e886ce57aa convert UsersList to tables 2021-02-04 13:52:27 -08:00
Keith Grant
f747edca0c convert HostList to PaginatedTable 2021-02-04 13:52:14 -08:00
softwarefactory-project-zuul[bot]
ba318f8670 Merge pull request #9130 from fosterseth/feat_wip_job_lifecycle
log job lifecycle

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-04 19:40:12 +00:00
sean-m-sullivan
9d359a080b updating completeness parameters 2021-02-04 11:33:49 -06:00
Seth Foster
41d0a2f7b9 Add job lifecycle logging
Various points (e.g. created, running, processing events) are
structured into JSON format and output to /var/log/tower/job_lifecycle.log

As part of this work, the DependencyGraph is reworked to return
which job object is doing the blocking, rather than a boolean.
2021-02-04 12:25:51 -05:00
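A minimal sketch of the kind of structured lifecycle logging described above, assuming a dedicated logger whose handler writes to /var/log/tower/job_lifecycle.log; the logger name and helper function are illustrative, not the actual AWX code:

```python
import json
import logging

lifecycle_logger = logging.getLogger('awx.analytics.job_lifecycle')  # assumed name


def log_job_lifecycle_event(job, state, **extra):
    """Emit one structured record per lifecycle transition (created, waiting, running, ...)."""
    payload = {
        'job_id': job.id,
        'state': state,                 # e.g. 'created', 'waiting', 'running'
        'type': type(job).__name__,
        **extra,                        # e.g. blocked_by=<id of the job doing the blocking>
    }
    lifecycle_logger.info(json.dumps(payload))
```

In practice a JSON formatter such as json-log-formatter (pinned in requirements by a nearby commit) could handle the serialization instead of calling json.dumps() directly.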
Marliana Lara
379824bbf7 Update cancel buttons to link style 2021-02-04 12:22:52 -05:00
softwarefactory-project-zuul[bot]
47de2ddcb5 Merge pull request #9231 from jakemcdermott/fix-9229
Prevent url smashing from prompted inputs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-04 16:44:24 +00:00
Jake McDermott
14686e8b16 Prevent enter-submit on prompt steps 2021-02-03 23:22:42 -05:00
sean-m-sullivan
f35c16e4f0 update 2021-02-03 18:47:32 -06:00
Sean Sullivan
226a689f7b Merge pull request #36 from ansible/devel
Rebase
2021-02-03 18:40:12 -06:00
softwarefactory-project-zuul[bot]
54d63ee437 Merge pull request #8726 from jakemcdermott/fix-8709
Show job traceback stdout and error details

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-03 19:42:09 +00:00
Alex Corey
fe9bd37c74 fixes pagination issue on modal 2021-02-03 12:33:33 -05:00
Keith Grant
0b633c2b6c convert project list to PaginatedTable 2021-02-03 09:01:43 -08:00
Keith Grant
8081d66872 convert credentials list to table 2021-02-03 09:01:22 -08:00
softwarefactory-project-zuul[bot]
40cf4ff209 Merge pull request #9045 from keithjgrant/6189-schedules-list
Convert SchedulesList, WorkflowApprovalList to tables

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-03 14:08:53 +00:00
softwarefactory-project-zuul[bot]
554f96ccef Merge pull request #8869 from mabashian/6510-schedule-detail-prompts
Fix prompted values section of schedule details

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-03 00:56:23 +00:00
mabashian
1a5cdd05c5 Adds more test coverage around Prompted Values and when those fields get shown 2021-02-02 18:58:08 -05:00
mabashian
bb67d62d81 Adds variables to track showing limit/job type/scm branch/verbosity fields in Prompted Values 2021-02-02 18:58:08 -05:00
mabashian
a9288c82fc Adds missing verbosity detail. Adds flag for showing the diff mode detail. 2021-02-02 18:58:08 -05:00
mabashian
5a70989ff5 Hide Prompted Fields section when tags/skip tags are prompted but have no values 2021-02-02 18:58:08 -05:00
mabashian
56139423d9 Use PF variables where possible in styles 2021-02-02 18:58:08 -05:00
mabashian
ffd9a239ba Fix prompted values section of schedule details and updated PromptDetail to match ScheduleDetail prompt UX 2021-02-02 18:58:08 -05:00
Keith Grant
00b47f1dbf add table header to ScheduleList 2021-02-02 15:12:41 -08:00
Keith Grant
eb2a9baadd Convert WorkflowApprovalList to table 2021-02-02 15:12:41 -08:00
Keith Grant
953fa3fe0d convert Schedules list to PaginatedTable 2021-02-02 15:12:41 -08:00
softwarefactory-project-zuul[bot]
b20adac33d Merge pull request #9114 from keithjgrant/6189-template-list-tables
Convert TemplateList to PaginatedTable

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-02 20:28:58 +00:00
Keith Grant
186aa6cc99 update snapshot 2021-02-02 11:59:17 -08:00
Keith Grant
27b0f874cc remove unused var 2021-02-02 11:45:34 -08:00
Keith Grant
669e963e38 combine TemplateList/DashboardTemplateList into shared component 2021-02-02 11:45:34 -08:00
Keith Grant
03eb9bafb7 add missing template list data; add ids to relevant page elements 2021-02-02 11:44:20 -08:00
Keith Grant
78ef11d558 update Template list tests 2021-02-02 11:44:20 -08:00
Keith Grant
ad71dc3e98 add expandable row details to template list 2021-02-02 11:44:20 -08:00
Keith Grant
f3410f6517 convert TemplateList to tables 2021-02-02 11:44:20 -08:00
softwarefactory-project-zuul[bot]
7d432e484c Merge pull request #9205 from beeankha/fix_pylint_error
Fix Pylint Error

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-02 19:19:46 +00:00
softwarefactory-project-zuul[bot]
271eb4043a Merge pull request #9094 from AlexSCorey/TransLintPOC
Adds translation linting and addresses issues the linter found

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-02 17:28:31 +00:00
softwarefactory-project-zuul[bot]
cc9ee550d9 Merge pull request #9185 from sean-m-sullivan/add_orgs_job_launch
Add orgs job launch

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2021-02-02 16:05:42 +00:00
softwarefactory-project-zuul[bot]
7a8d9394be Merge pull request #6106 from egmar/ghe-auth-support
Support for GitHub Enterprise authentication

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-02 15:59:22 +00:00
Jake McDermott
339d430de1 Add ability to cancel jobs 2021-02-02 10:39:19 -05:00
softwarefactory-project-zuul[bot]
6f24fda168 Merge pull request #9104 from sean-m-sullivan/survey_logic
Survey logic for awx modules

Reviewed-by: Bianca Henderson <beeankha@gmail.com>
             https://github.com/beeankha
2021-02-02 15:33:20 +00:00
beeankha
3ad9aa902d Fix pylint throwaway variable error 2021-02-02 08:56:42 -05:00
softwarefactory-project-zuul[bot]
3408f5270e Merge pull request #9203 from ryanpetrello/new-logo
Update with the new logo

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-01 21:40:27 +00:00
Ryan Petrello
d365705e05 Revert "Remove old logo"
This reverts commit b24a1746ae.
2021-02-01 16:00:07 -05:00
Alex Corey
1f81592b5c adds translation linting and addresses issues the linter found 2021-02-01 13:14:54 -05:00
softwarefactory-project-zuul[bot]
0f0857747d Merge pull request #9198 from Zokormazo/ui-next-typo
Docs: Fix npm install command line

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-02-01 14:19:18 +00:00
Julen Landa Alustiza
ff27c486bf Fix npm install command line
Signed-off-by: Julen Landa Alustiza <jlandaal@redhat.com>
2021-02-01 12:05:41 +01:00
Jake McDermott
e7719e3fdc Splice result_traceback into first row stdout 2021-01-31 22:15:51 -05:00
Jake McDermott
7a6cbe2685 Display status explanation message when available 2021-01-31 22:15:49 -05:00
Jake McDermott
b2328d17a4 Render result tracebacks as stdout 2021-01-31 22:15:43 -05:00
Egor Margineanu
b8790db84a [PATCH] Rebase and update the Settings options file. Source: 65b4811
Fixed Github Enterprise labels
2021-01-29 23:41:12 +01:00
softwarefactory-project-zuul[bot]
89646e7799 Merge pull request #8661 from marshmalien/setting-jobs-ui
Add job settings form and unit tests

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-29 21:01:40 +00:00
softwarefactory-project-zuul[bot]
1b32dad745 Merge pull request #9192 from jakemcdermott/fix-9191
Fix container group detail link outs

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-29 19:38:40 +00:00
softwarefactory-project-zuul[bot]
5ab0b91a0d Merge pull request #9057 from keithjgrant/9049-inventory-table-responsiveness
Fix inventory list responsive behavior

Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
             https://github.com/tiagodread
2021-01-29 19:14:37 +00:00
Keith Grant
da6f793468 delete unused import 2021-01-29 10:41:48 -08:00
Sean Sullivan
ce44a056ad Merge pull request #35 from ansible/devel
Rebase
2021-01-29 09:27:42 -06:00
Sean Sullivan
54aababd0f Merge pull request #34 from ansible/devel
Rebase
2021-01-29 09:26:40 -06:00
Marliana Lara
8a58a73cb0 Handle reverting falsy values that are not null or undefined 2021-01-29 09:27:21 -05:00
Marliana Lara
db4d84b94c Add job settings form and unit tests 2021-01-29 09:24:51 -05:00
Egor Margineanu
604b42929d Improved labels and help text for Github Enterprise SSO configuration fields.
Signed-off-by: Egor Margineanu <egor.margineanu@gmail.com>
2021-01-29 13:09:40 +01:00
Egor Margineanu
a90ff45e98 Added changes from 645701a and 4692e2f 2021-01-29 13:09:40 +01:00
mabashian
d461090415 Adds support for github enterprise auth methods in ui_next 2021-01-29 13:09:27 +01:00
Egor Margineanu
9ccee200f3 Removed GHE forms from ui folder
Fixed org/team field names based on @constreference feedback
Added support for GitHub Enterprise authentication
2021-01-29 13:01:08 +01:00
Keith Grant
a74fe57691 remove extra inventory columns; add horizontal scroll to wide tables 2021-01-28 16:34:46 -08:00
nixocio
3f5e1518ee Update migration page
Update the migration page to add styling. Also, use a Django template and context
to inject variables into the migration page.

See: https://github.com/ansible/awx/issues/8355
2021-01-28 17:46:36 -05:00
sean-m-sullivan
e433e3ebc2 update logic 2021-01-28 15:19:28 -06:00
softwarefactory-project-zuul[bot]
88cd154c97 Merge pull request #9038 from AlexSCorey/8998-BrokenLink
fixes broken link and adds test

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-28 20:58:59 +00:00
Jake McDermott
6fcaba6540 Fix container group detail link outs 2021-01-28 15:45:25 -05:00
sean-m-sullivan
cc0d658b1c fix typo 2021-01-28 10:16:27 -06:00
softwarefactory-project-zuul[bot]
792928aae8 Merge pull request #9069 from chrismeyersfsu/logrotate
remove python log rotation in favor of system

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-28 14:48:33 +00:00
sean-m-sullivan
aa5219cf30 fix verbiage 2021-01-28 08:36:13 -06:00
Sean Sullivan
9e7882fce2 Merge pull request #33 from ansible/devel
Rebase
2021-01-28 08:32:50 -06:00
sean-m-sullivan
c40ca718d0 add org to job launch 2021-01-28 08:31:52 -06:00
Chris Meyers
67daca04e0 remove python log rotation in favor of system
* The cron-run logrotate will now rotate our log files instead of Python
* If no error log file is specified in the config, then do not include
it as a parameter to the rsyslog omhttp module. This is useful for
containers.
2021-01-28 09:19:08 -05:00
Keith Grant
c3ca43d9bc hide inventory groups/hosts/sources columns when screen width doesn't allow 2021-01-27 15:45:57 -08:00
softwarefactory-project-zuul[bot]
a534a80360 Merge pull request #8970 from keithjgrant/6189-job-list-tables
Convert JobList to PaginatedTable

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-27 21:44:16 +00:00
Sean Sullivan
012189cb10 Merge pull request #32 from ansible/devel
Rebase
2021-01-27 12:47:08 -06:00
Keith Grant
0feeaf0453 fix select boxes in JobList 2021-01-27 09:26:19 -08:00
Keith Grant
7bf7cebd72 Add id to table sort headers 2021-01-27 08:36:30 -08:00
softwarefactory-project-zuul[bot]
b7475442fb Merge pull request #9135 from jainnikhil30/fix_wrong_survey_spec_handling_collection
Fix the handling of wrong survey spec in awx collection

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-27 15:48:15 +00:00
softwarefactory-project-zuul[bot]
11874d38e2 Merge pull request #8783 from marshmalien/setting-saml-ui-edit-forms
Add SAML and UI setting edit forms

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-27 01:30:37 +00:00
softwarefactory-project-zuul[bot]
ce59f1817a Merge pull request #9035 from mabashian/8915-source-workflow-link
Display source workflow job when available on job details view

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-26 22:31:59 +00:00
Keith Grant
2e7b53cf90 remove job data duplicated in expanded view 2021-01-26 14:22:50 -08:00
Keith Grant
afdb4f9709 add id to expanded job rows 2021-01-26 14:18:10 -08:00
Keith Grant
2c17d56acd create LaunchedByDetail to consolidate shared code between detail & list 2021-01-26 14:09:49 -08:00
Keith Grant
8bde6060c4 add expandable rows to JobList 2021-01-26 14:09:49 -08:00
Keith Grant
da16785201 convert JobList to PaginatedTable 2021-01-26 14:09:49 -08:00
softwarefactory-project-zuul[bot]
b7eb1a4c59 Merge pull request #9156 from shanemcd/minikube-dev-env
Minikube-based development environment

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-26 21:10:08 +00:00
Shane McDonald
47740f62f4 Add ingress variable to minikube doc 2021-01-26 15:18:37 -05:00
Shane McDonald
fb7b36cfbd Add note about needing openshift python lib 2021-01-26 15:14:06 -05:00
Shane McDonald
ba8c44f8a6 Add note on how to access AWX in minikube 2021-01-26 15:07:04 -05:00
nixocio
287d181af7 Add tooltips to inventory sync status
Add tooltips to inventory sync status on Inventory List.

See: https://github.com/ansible/awx/issues/7676
2021-01-26 13:50:01 -05:00
softwarefactory-project-zuul[bot]
dfa65225d9 Merge pull request #9048 from AlexSCorey/8961-AuditorCanAddUserRoles
fixes erroneous render of add button

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2021-01-26 18:48:43 +00:00
Marliana Lara
12f6c05a4a Format certificates/keys in setting detail and hide pendo when off 2021-01-26 12:47:28 -05:00
Marliana Lara
bbde149ab1 Add UI settings form 2021-01-26 12:47:27 -05:00
Marliana Lara
9cf3066591 Add SAML settings edit form 2021-01-26 12:47:27 -05:00
Shane McDonald
bda4db462f Enable inline caching for image builds 2021-01-26 10:24:35 -05:00
Sean Sullivan
a825cf3583 Merge pull request #31 from ansible/devel
Rebase
2021-01-25 21:38:47 -06:00
Shane McDonald
7c8bd47198 Minikube-based development environment
Works in conjunction with https://github.com/ansible/awx-operator/pull/71

See docs/development/minikube.md
2021-01-25 18:03:17 -05:00
Sean Sullivan
916f919fcb Merge pull request #30 from ansible/devel
rebase
2021-01-23 19:39:56 -06:00
Sean Sullivan
ac280e1446 Merge pull request #29 from ansible/devel
Rebase
2021-01-22 12:01:25 -06:00
Nikhil Jain
81875f0971 fix the handling of wrong survey spec 2021-01-22 11:03:44 +05:30
sezanzeb
7d58ae3c8c dot 2021-01-21 20:27:33 +01:00
sezanzeb
5445de7d01 Document admin_password in INSTALL.md
Because the first user that ever logs in shouldn't be an automated bot looking for vulnerable webservers.
2021-01-21 20:25:06 +01:00
sean-m-sullivan
3ae133d39d syntax and lint fix 2021-01-18 13:05:28 -06:00
sean-m-sullivan
b8bd6d2472 Merge branch 'survey_logic' of github.com:sean-m-sullivan/awx into survey_logic 2021-01-18 12:46:53 -06:00
sean-m-sullivan
d32461388f syntax and lint fix 2021-01-18 12:46:43 -06:00
Sean Sullivan
0e9f7f37e0 Delete test file
Delete test file
2021-01-18 12:21:25 -06:00
Sean Sullivan
e8ea6bc946 Delete test file
Delete test file
2021-01-18 12:20:27 -06:00
Sean Sullivan
91045534d0 Merge pull request #28 from sean-m-sullivan/survey_update
Add logic Rebase
2021-01-18 11:26:09 -06:00
sean-m-sullivan
1042c1cc28 add logic for survey 2021-01-17 10:32:07 -06:00
Sean Sullivan
1ce9c00d77 Merge pull request #26 from ansible/devel
Rebase
2021-01-16 22:58:41 -06:00
Jeff Groom
22aaf765a5 Update custom_virtualenvs.md 2021-01-09 10:54:33 -07:00
Alex Corey
186a1b04b4 fixes erroneous render of add button 2021-01-08 15:34:56 -05:00
Alex Corey
1cd7f42a27 fixes broken link and adds test 2021-01-07 13:13:45 -05:00
dejongm
8cae728ea0 Update defaults.py 2021-01-07 10:27:03 -05:00
mabashian
c0690cddc8 Display source workflow job when available on job details view 2021-01-07 09:32:59 -05:00
sean-m-sullivan
bbed8ec704 add update to tower project 2020-11-25 09:36:03 -06:00
861 changed files with 84781 additions and 31322 deletions


@@ -1 +1,3 @@
awx/ui/node_modules
awx/ui_next/node_modules
Dockerfile

.env

@@ -1,3 +1,3 @@
PYTHONUNBUFFERED=true
SELENIUM_DOCKER_TAG=latest
COMPOSE_PROJECT_NAME=tools

.github/dependabot.yml

@@ -0,0 +1,7 @@
---
version: 2
updates:
- package-ecosystem: "pip"
directory: "/requirements"
schedule:
interval: "monthly"

.gitignore

@@ -29,14 +29,18 @@ awx/ui/client/languages
awx/ui/templates/ui/index.html
awx/ui/templates/ui/installing.html
awx/ui_next/node_modules/
awx/ui_next/src/locales/
awx/ui_next/src/locales/*/messages.js
awx/ui_next/coverage/
awx/ui_next/build
awx/ui_next/.env.local
awx/ui_next/instrumented
rsyslog.pid
tools/prometheus/data
tools/docker-compose/ansible/awx_dump.sql
tools/docker-compose/Dockerfile
tools/docker-compose/_build
tools/docker-compose/_sources
tools/docker-compose/overrides/
# Tower setup playbook testing
setup/test/roles/postgresql
@@ -87,6 +91,9 @@ awx/awx_test.sqlite3-journal
# Mac OS X
*.DS_Store
# VSCode
.vscode/
# Editors
*.sw[poj]
*~
@@ -146,6 +153,8 @@ use_dev_supervisor.txt
.idea/*
*.unison.tmp
*.#
/tools/docker-compose/overrides/
/awx/ui_next/.ui-built
/Dockerfile
/_build/
/_build_kube_dev/
/Dockerfile.kube-dev

CHANGELOG.md

@@ -2,6 +2,58 @@
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
# 18.0.0 (March 23, 2021)
**IMPORTANT INSTALL AND UPGRADE NOTES**
Starting in version 18.0, the [AWX Operator](https://github.com/ansible/awx-operator) is the preferred way to install AWX: https://github.com/ansible/awx/blob/devel/INSTALL.md#installing-awx
If you have a pre-existing installation of AWX that utilizes the Docker-based installation method, this install method has **notably changed** from 17.x to 18.x. For details, please see:
- https://groups.google.com/g/awx-project/c/47MjWSUQaOc/m/bCjSDn0eBQAJ
- https://github.com/ansible/awx/blob/devel/tools/docker-compose
- https://github.com/ansible/awx/blob/devel/tools/docker-compose/docs/data_migration.md
### Introducing Execution Environments
After a herculean effort from a number of contributors, we're excited to announce that AWX 18.0.0 introduces a new concept called Execution Environments.
Execution Environments are container images which consist of everything necessary to run a playbook within AWX, and which drive the entire management and lifecycle of playbook execution runtime in AWX: https://github.com/ansible/awx/issues/5157. This means that going forward, AWX no longer utilizes the [bubblewrap](https://github.com/containers/bubblewrap) project for playbook isolation, but instead utilizes a container per playbook run.
Much like custom virtualenvs, custom Execution Environments can be crafted to specify additional Python or system-level dependencies. Ansible Builder outputs images you can upload to your registry which can *then* be defined in AWX and utilized for playbook runs.
To learn more about Ansible Builder and Execution Environments, see: https://www.ansible.com/blog/introduction-to-ansible-builder
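As a rough illustration (not part of the release notes above), a custom Execution Environment is typically described in an Ansible Builder definition file, built into an image, and pushed to a registry before being registered in AWX. Everything below is a minimal sketch: the file names, image tag, and registry are hypothetical, and `ansible-builder` and `podman` are assumed to be installed.

```bash
# Sketch only: describe the image in an execution-environment.yml definition file.
# The referenced requirements.yml / requirements.txt / bindep.txt files must exist alongside it.
cat > execution-environment.yml <<'EOF'
version: 1
dependencies:
  galaxy: requirements.yml   # Ansible collections to bake into the image
  python: requirements.txt   # extra Python packages
  system: bindep.txt         # system-level packages
EOF

# Build the image and push it to a registry AWX can reach; the tag is illustrative.
ansible-builder build --tag registry.example.com/custom-ee:latest
podman push registry.example.com/custom-ee:latest
```

Once pushed, the image can be added as an Execution Environment in AWX and selected for playbook runs.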
### Other Notable Changes
- Removed `installer` directory.
- The Kubernetes installer has been removed in favor of [AWX Operator](https://github.com/ansible/awx-operator).
- The "Local Docker" install method has been removed in favor of the development environment. Details can be found at: https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md
- Removal of custom virtual environments https://github.com/ansible/awx/pull/9498
- Custom virtual environments have been replaced by Execution Environments https://github.com/ansible/awx/pull/9570
- The default Container Group Pod definition has changed. All custom Pod specs have been reset. https://github.com/ansible/awx/commit/05ef51f710dad8f8036bc5acee4097db4adc0d71
- Added user interface for the activity stream: https://github.com/ansible/awx/pull/9083
- Converted many of the top-level list views (Jobs, Teams, Hosts, Inventories, Projects, and more) to a new, permanent table component for substantially increased responsiveness, usability, maintainability, and other 'ility's: https://github.com/ansible/awx/pull/8970, https://github.com/ansible/awx/pull/9182 and many others!
- Added click-to-expand details for job tables
- Added search filtering to job output https://github.com/ansible/awx/pull/9208
- Added the new migration, update, and "installation in progress" page https://github.com/ansible/awx/pull/9123
- Added the user interface for job settings https://github.com/ansible/awx/pull/8661
- Runtime errors from jobs are now displayed, along with an explanation for what went wrong, on the output page https://github.com/ansible/awx/pull/8726
- You can now cancel a running job from its output and details panel https://github.com/ansible/awx/pull/9199
- Fixed a bug where launch prompt inputs were unexpectedly deposited in the url: https://github.com/ansible/awx/pull/9231
- Playbook, credential type, and inventory file inputs now support type-ahead and manual type-in! https://github.com/ansible/awx/pull/9120
- Added ability to relaunch against failed hosts: https://github.com/ansible/awx/pull/9225
- Added pending workflow approval count to the application header https://github.com/ansible/awx/pull/9334
- Added user interface for management jobs: https://github.com/ansible/awx/pull/9224
- Added toast message to show notification template test result to notification templates list https://github.com/ansible/awx/pull/9318
- Replaced CodeMirror with AceEditor for editing template variables and notification templates https://github.com/ansible/awx/pull/9281
- Added support for filtering and pagination on job output https://github.com/ansible/awx/pull/9208
- Added support for html in custom login text https://github.com/ansible/awx/pull/9519
# 17.1.0 (March 9, 2021)
- Addressed a security issue in AWX (CVE-2021-20253)
- Fixed a permissions error related to redis in K8S-based deployments: https://github.com/ansible/awx/issues/9401
# 17.0.1 (January 26, 2021)
- Fixed pgdocker directory permissions issue with Local Docker installer: https://github.com/ansible/awx/pull/9152
- Fixed a bug in the UI which caused toggle settings to not be changed when clicked: https://github.com/ansible/awx/pull/9093

CONTRIBUTING.md

@@ -11,24 +11,15 @@ Have questions about this document or anything not covered here? Come chat with
* [Prerequisites](#prerequisites)
* [Docker](#docker)
* [Docker compose](#docker-compose)
* [Node and npm](#node-and-npm)
* [Build the environment](#build-the-environment)
* [Frontend Development](#frontend-development)
* [Build and Run the Development Environment](#build-and-run-the-development-environment)
* [Fork and clone the AWX repo](#fork-and-clone-the-awx-repo)
* [Create local settings](#create-local-settings)
* [Build the base image](#build-the-base-image)
* [Build the user interface](#build-the-user-interface)
* [Running the environment](#running-the-environment)
* [Start the containers](#start-the-containers)
* [Start from the container shell](#start-from-the-container-shell)
* [Post Build Steps](#post-build-steps)
* [Start a shell](#start-a-shell)
* [Create a superuser](#create-a-superuser)
* [Load the data](#load-the-data)
* [Building API Documentation](#build-api-documentation)
* [Building API Documentation](#building-api-documentation)
* [Accessing the AWX web interface](#accessing-the-awx-web-interface)
* [Purging containers and images](#purging-containers-and-images)
* [What should I work on?](#what-should-i-work-on)
* [Submitting Pull Requests](#submitting-pull-requests)
* [PR Checks run by Zuul](#pr-checks-run-by-zuul)
* [Reporting Issues](#reporting-issues)
## Things to know prior to submitting code
@@ -42,7 +33,7 @@ Have questions about this document or anything not covered here? Come chat with
## Setting up your development environment
The AWX development environment workflow and toolchain is based on Docker, and the docker-compose tool, to provide dependencies, services, and databases necessary to run all of the components. It also binds the local source tree into the development container, making it possible to observe and test changes in real time.
The AWX development environment workflow and toolchain uses Docker and the docker-compose tool, to provide dependencies, services, and databases necessary to run all of the components. It also bind-mounts the local source tree into the development container, making it possible to observe and test changes in real time.
### Prerequisites
@@ -55,29 +46,19 @@ respectively.
For Linux platforms, refer to the following from Docker:
**Fedora**
* **Fedora** - https://docs.docker.com/engine/installation/linux/docker-ce/fedora/
> https://docs.docker.com/engine/installation/linux/docker-ce/fedora/
* **CentOS** - https://docs.docker.com/engine/installation/linux/docker-ce/centos/
**CentOS**
* **Ubuntu** - https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
> https://docs.docker.com/engine/installation/linux/docker-ce/centos/
* **Debian** - https://docs.docker.com/engine/installation/linux/docker-ce/debian/
**Ubuntu**
* **Arch** - https://wiki.archlinux.org/index.php/Docker
> https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
#### Docker Compose
**Debian**
> https://docs.docker.com/engine/installation/linux/docker-ce/debian/
**Arch**
> https://wiki.archlinux.org/index.php/Docker
#### Docker compose
If you're not using Docker for Mac, or Docker for Windows, you may need, or choose to, install the Docker compose Python module separately, in which case you'll need to run the following:
If you're not using Docker for Mac, or Docker for Windows, you may need, or choose to, install the `docker-compose` Python module separately.
```bash
(host)$ pip3 install docker-compose
@@ -87,186 +68,15 @@ If you're not using Docker for Mac, or Docker for Windows, you may need, or choo
See [the ui development documentation](awx/ui_next/CONTRIBUTING.md).
### Build the environment
#### Fork and clone the AWX repo
If you have not done so already, you'll need to fork the AWX repo on GitHub. For more on how to do this, see [Fork a Repo](https://help.github.com/articles/fork-a-repo/).
#### Create local settings
### Build and Run the Development Environment
AWX will import the file `awx/settings/local_settings.py` and combine it with defaults in `awx/settings/defaults.py`. This file is required for starting the development environment and startup will fail if it's not provided.
See the [README.md](./tools/docker-compose/README.md) for docs on how to build the awx_devel image and run the development environment.
An example is provided. Make a copy of it, and edit as needed (the defaults are usually fine):
```bash
(host)$ cp awx/settings/local_settings.py.docker_compose awx/settings/local_settings.py
```
#### Build the base image
The AWX base container image (defined in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and symbolic links into the development environment that make running the services easy.
Run the following to build the image:
```bash
(host)$ make docker-compose-build
```
**NOTE**
> The image will need to be rebuilt, if the Python requirements or OS dependencies change.
Once the build completes, you will have a `ansible/awx_devel` image in your local image cache. Use the `docker images` command to view it, as follows:
```bash
(host)$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ansible/awx_devel latest ba9ec3e8df74 26 minutes ago 1.42GB
```
#### Build the user interface
Run the following to build the AWX UI:
```bash
(host) $ make ui-devel
```
See [the ui development documentation](awx/ui/README.md) for more information on using the frontend development, build, and test tooling.
### Running the environment
#### Start the containers
Start the development containers by running the following:
```bash
(host)$ make docker-compose
```
The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django and the front end build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:
```bash
# List running containers
(host)$ docker ps
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44251b476f98 gcr.io/ansible-tower-engineering/awx_devel:devel "/entrypoint.sh /bin…" 27 seconds ago Up 23 seconds 0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp tools_awx_run_9e820694d57e
40de380e3c2e redis:latest "docker-entrypoint.s…" 28 seconds ago Up 26 seconds
b66a506d3007 postgres:12 "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:5432->5432/tcp tools_postgres_1
```
**NOTE**
> The Makefile assumes that the image you built is tagged with your current branch. This allows you to build images for different contexts or branches. When starting the containers, you can choose a specific branch by setting `COMPOSE_TAG=<branch name>` in your environment.
> For example, you might be working in a feature branch, but you want to run the containers using the `devel` image you built previously. To do that, start the containers using the following command: `$ COMPOSE_TAG=devel make docker-compose`
##### Wait for migrations to complete
The first time you start the environment, database migrations need to run in order to build the PostgreSQL database. It will take a few moments, but eventually you will see output in your terminal session that looks like the following:
```bash
awx_1 | Operations to perform:
awx_1 | Synchronize unmigrated apps: solo, api, staticfiles, debug_toolbar, messages, channels, django_extensions, ui, rest_framework, polymorphic
awx_1 | Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
awx_1 | Synchronizing apps without migrations:
awx_1 | Creating tables...
awx_1 | Running deferred SQL...
awx_1 | Installing custom SQL...
awx_1 | Running migrations:
awx_1 | Rendering model states... DONE
awx_1 | Applying contenttypes.0001_initial... OK
awx_1 | Applying contenttypes.0002_remove_content_type_name... OK
awx_1 | Applying auth.0001_initial... OK
awx_1 | Applying auth.0002_alter_permission_name_max_length... OK
awx_1 | Applying auth.0003_alter_user_email_max_length... OK
awx_1 | Applying auth.0004_alter_user_username_opts... OK
awx_1 | Applying auth.0005_alter_user_last_login_null... OK
awx_1 | Applying auth.0006_require_contenttypes_0002... OK
awx_1 | Applying taggit.0001_initial... OK
awx_1 | Applying taggit.0002_auto_20150616_2121... OK
awx_1 | Applying main.0001_initial... OK
awx_1 | Applying main.0002_squashed_v300_release... OK
awx_1 | Applying main.0003_squashed_v300_v303_updates... OK
awx_1 | Applying main.0004_squashed_v310_release... OK
awx_1 | Applying conf.0001_initial... OK
awx_1 | Applying conf.0002_v310_copy_tower_settings... OK
...
```
Once migrations are completed, you can begin using AWX.
#### Start from the container shell
Oftentimes you'll want to start the development environment without immediately starting all of the services in the *awx* container, and instead be taken directly to a shell. You can do this with the following:
```bash
(host)$ make docker-compose-test
```
Using `docker exec`, this will create a session in the running *awx* container, and place you at a command prompt, where you can run shell commands inside the container.
If you want to start and use the development environment, you'll first need to bootstrap it by running the following command:
```bash
(container)# /usr/bin/bootstrap_development.sh
```
The above will do all the setup tasks, including running database migrations, so it may take a couple of minutes. Once it's done, it will drop you back to the shell.
In order to launch all developer services:
```bash
(container)# /usr/bin/launch_awx.sh
```
`launch_awx.sh` also calls `bootstrap_development.sh`, so if all you are doing is launching the supervisor to start all services, you don't need to call `bootstrap_development.sh` first.
### Post Build Steps
Before you can log in and use the system, you will need to create an admin user. Optionally, you may also want to load some demo data.
##### Start a shell
To create the admin user, and load demo data, you first need to start a shell session on the *awx* container. In a new terminal session, use the `docker exec` command as follows to start the shell session:
```bash
(host)$ docker exec -it tools_awx_1 bash
```
This creates a session in the *awx* container, just as if you were using `ssh`, and allows you to execute commands within the running container.
##### Create an admin user
Before you can log into AWX, you need to create an admin user. With this user you will be able to create more users, and begin configuring the server. From within the container shell, run the following command:
```bash
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
(container)# awx-manage create_preload_data
```
**NOTE**
> This information will persist in the database running in the `tools_postgres_1` container, until the container is removed. You may periodically need to recreate
this container, and thus the database, if the database schema changes in an upstream commit.
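If you do need to reset the development database after such a schema change, one possible approach, assuming the default container names shown above, is to remove the PostgreSQL container and let the environment recreate it:

```bash
# Sketch only: this discards all data in the development database.
docker stop tools_postgres_1
docker rm tools_postgres_1
# Bringing the environment back up recreates the container, and the awx container
# re-runs migrations against the fresh database when it bootstraps.
make docker-compose
```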
##### Building API Documentation
### Building API Documentation
AWX includes support for building [Swagger/OpenAPI
documentation](https://swagger.io). To build the documentation locally, run:
@@ -284,7 +94,7 @@ is an example of one such service.
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).
To log in use the admin user and password you created above in [Create an admin user](#create-an-admin-user).
[Create an admin user](./tools/docker-compose/README.md#create-an-admin-user) if needed.
### Purging containers and images
@@ -335,7 +145,7 @@ Sometimes it might take us a while to fully review your PR. We try to keep the `
All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.
## PR Checks ran by Zuul
## PR Checks run by Zuul
Zuul jobs for awx are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.
Zuul runs the following checks that must pass:

INSTALL.md

@@ -1,643 +1,122 @@
Table of Contents
=================
* [Installing AWX](#installing-awx)
* [The AWX Operator](#the-awx-operator)
* [Quickstart with minikube](#quickstart-with-minikube)
* [Starting minikube](#starting-minikube)
* [Deploying the AWX Operator](#deploying-the-awx-operator)
* [Verifying the Operator Deployment](#verifying-the-operator-deployment)
* [Deploy AWX](#deploy-awx)
* [Accessing AWX](#accessing-awx)
* [Installing the AWX CLI](#installing-the-awx-cli)
* [Building the CLI Documentation](#building-the-cli-documentation)
# Installing AWX
This document provides a guide for installing AWX.
:warning: NOTE |
--- |
If you're installing an older release of AWX (prior to 18.0), these instructions have changed. Take a look at your version specific instructions, e.g., for AWX 17.0.1, see: [https://github.com/ansible/awx/blob/17.0.1/INSTALL.md](https://github.com/ansible/awx/blob/17.0.1/INSTALL.md)
If you're attempting to migrate an older Docker-based AWX installation, see: [Migrating Data from Local Docker](https://github.com/ansible/awx/blob/devel/tools/docker-compose/docs/data_migration.md) |
## Table of contents
## The AWX Operator
- [Installing AWX](#installing-awx)
* [Getting started](#getting-started)
+ [Clone the repo](#clone-the-repo)
+ [AWX branding](#awx-branding)
+ [Prerequisites](#prerequisites)
+ [System Requirements](#system-requirements)
+ [Choose a deployment platform](#choose-a-deployment-platform)
+ [Official vs Building Images](#official-vs-building-images)
* [Upgrading from previous versions](#upgrading-from-previous-versions)
* [OpenShift](#openshift)
+ [Prerequisites](#prerequisites-1)
+ [Pre-install steps](#pre-install-steps)
- [Deploying to Minishift](#deploying-to-minishift)
- [PostgreSQL](#postgresql)
+ [Run the installer](#run-the-installer)
+ [Post-install](#post-install)
+ [Accessing AWX](#accessing-awx)
* [Kubernetes](#kubernetes)
+ [Prerequisites](#prerequisites-2)
+ [Pre-install steps](#pre-install-steps-1)
+ [Configuring Helm](#configuring-helm)
+ [Run the installer](#run-the-installer-1)
+ [Post-install](#post-install-1)
+ [Accessing AWX](#accessing-awx-1)
+ [SSL Termination](#ssl-termination)
* [Docker-Compose](#docker-compose)
+ [Prerequisites](#prerequisites-3)
+ [Pre-install steps](#pre-install-steps-2)
- [Deploying to a remote host](#deploying-to-a-remote-host)
- [Inventory variables](#inventory-variables)
- [Docker registry](#docker-registry)
- [Proxy settings](#proxy-settings)
- [PostgreSQL](#postgresql-1)
+ [Run the installer](#run-the-installer-2)
+ [Post-install](#post-install-2)
+ [Accessing AWX](#accessing-awx-2)
- [Installing the AWX CLI](#installing-the-awx-cli)
* [Building the CLI Documentation](#building-the-cli-documentation)
Starting in version 18.0, the [AWX Operator](https://github.com/ansible/awx-operator) is the preferred way to install AWX.
AWX can also alternatively be installed and [run in Docker](./tools/docker-compose/README.md), but this install path is only recommended for development/test-oriented deployments, and has no official published release.
## Getting started
### Quickstart with minikube
### Clone the repo
If you don't have an existing OpenShift or Kubernetes cluster, minikube is a fast and easy way to get up and running.
If you have not already done so, you will need to clone, or create a local copy, of the [AWX repo](https://github.com/ansible/awx). We generally recommend that you view the releases page:
To install minikube, follow the steps in their [documentation](https://minikube.sigs.k8s.io/docs/start/).
https://github.com/ansible/awx/releases
#### Starting minikube
...and clone the latest stable release, e.g.,
`git clone -b x.y.z https://github.com/ansible/awx.git`
Please note that deploying from `HEAD` (or the latest commit) is **not** stable, and that if you want to do this, you should proceed at your own risk (also, see the section #official-vs-building-images for building your own image).
For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).
Once you have a local copy, run the commands in the following sections from the root of the project tree.
### AWX branding
You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).
To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.
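For example, assuming your `awx` checkout lives in your current working directory:

```bash
# Clone awx-logos next to the awx checkout; opt in to the official branding
# later by setting awx_official=true in the installer inventory.
cd ..            # step out of the awx/ checkout
git clone https://github.com/ansible/awx-logos.git
ls
# awx  awx-logos
```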
### Prerequisites
Before you can run a deployment, you'll need the following installed in your local environment:
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html) Requires Version 2.8+
- [Docker](https://docs.docker.com/engine/installation/)
+ A recent version
- [docker](https://pypi.org/project/docker/) Python module
+ This is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
+ We use this module instead of `docker-py` because it is what the `docker-compose` Python module requires.
- [community.general.docker_image collection](https://docs.ansible.com/ansible/latest/collections/community/general/docker_image_module.html)
+ This is only required if you are using Ansible >= 2.10
- [GNU Make](https://www.gnu.org/software/make/)
- [Git](https://git-scm.com/) Requires Version 1.8.4+
- Python 3.6+
- [Node 14.x LTS version](https://nodejs.org/en/download/)
+ This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
- [NPM 6.x LTS](https://docs.npmjs.com/)
+ This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
### System Requirements
The system that runs the AWX service will need to satisfy the following requirements
- At least 4GB of memory
- At least 2 cpu cores
- At least 20GB of space
- Running Docker, Openshift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10+.
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster, a Kubernetes cluster, or docker-compose. The remainder of this document will walk you through the process of building the images, and deploying them to either platform.
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
In the sections below, you'll find deployment details and instructions for each platform:
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
- [Docker Compose](#docker-compose).
### Official vs Building Images
When installing AWX you have the option of building your own image or using the image provided on DockerHub (see [awx](https://hub.docker.com/r/ansible/awx/))
This is controlled by the following variables in the `inventory` file
Once you have installed minikube, run the following command to start it. You may wish to customize these options.
```
dockerhub_base=ansible
dockerhub_version=latest
$ minikube start --cpus=4 --memory=8g --addons=ingress
```
If these variables are present then all deployments will use these hosted images. If the variables are not present then the images will be built during the install.
#### Deploying the AWX Operator
*dockerhub_base*
> The base location on DockerHub where the images are hosted (by default this pulls a container image named `ansible/awx:tag`)
*dockerhub_version*
> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123
*use_container_for_build*
> Use a local distribution build container image for building the AWX package. This is helpful if you don't want to bother installing the build-time dependencies as it is taken care of already.
## Upgrading from previous versions
Upgrading AWX involves rerunning the install playbook. Download a newer release from [https://github.com/ansible/awx/releases](https://github.com/ansible/awx/releases) and re-populate the inventory file with your customized variables.
For convenience, you can create a file called `vars.yml`:
For a comprehensive overview of features, see [README.md](https://github.com/ansible/awx-operator/blob/devel/README.md) in the awx-operator repo. The following steps are the bare minimum to get AWX up and running.
```
admin_password: 'adminpass'
pg_password: 'pgpass'
secret_key: 'mysupersecret'
$ minikube kubectl -- apply -f https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml
```
And pass it to the installer:
##### Verifying the Operator Deployment
After a few seconds, the operator should be up and running. Verify it by running the following command:
```
$ ansible-playbook -i inventory install.yml -e @vars.yml
$ minikube kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-operator-7c78bfbfd-xb6th 1/1 Running 0 11s
```
## OpenShift
#### Deploy AWX
### Prerequisites
To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
When using OpenShift for deploying AWX make sure you have correct privileges to add the security context 'privileged', otherwise the installation will fail. The privileged context is needed because of the use of [the bubblewrap tool](https://github.com/containers/bubblewrap) to add an additional layer of security when using containers.
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on the cluster.
The default resource requests per deployment require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)
### Pre-install steps
Before starting the install, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*openshift_host*
> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.
*openshift_skip_tls_verify*
> Boolean. Set to True if using self-signed certs.
*openshift_project*
> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.
*openshift_user*
> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.
*openshift_pg_emptydir*
> Boolean. Set to True to use an emptyDir volume when deploying the PostgreSQL pod. Note: This should only be used for demo and testing purposes.
*docker_registry*
> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.
*docker_registry_username*
> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.
#### Deploying to Minishift
Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).
The recommended minimum resources for your Minishift VM:
```bash
$ minishift start --cpus=4 --memory=8GB
```
The Minishift VM contains a Docker daemon, which you can use to build the AWX images. This is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to setup your environment:
```bash
# Set DOCKER environment variable to point to the Minishift VM
$ eval $(minishift docker-env)
```
**Note**
> If you choose to not use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry, and granting the cluster access to it, or exposing the internal registry, and pushing the images into it.
#### PostgreSQL
By default, AWX will deploy a PostgreSQL pod inside of your cluster. You will need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html) which is named `postgresql` by default, and can be overridden by setting the `openshift_pg_pvc_name` variable. For testing and demo purposes, you may set `openshift_pg_emptydir=yes`.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information. When setting `pg_hostname` the installer will assume you have configured the database in that location and will not launch the postgresql pod.
### Run the installer
To start the install, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.
If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc cluster login`.
Run the following command (docker_registry_password is optional if using official images):
```bash
# Start the install
$ ansible-playbook -i inventory install.yml -e openshift_password=developer -e docker_registry_password=$(oc whoami -t)
```
### Post-install
After the playbook run completes, check the status of the deployment by running `oc get pods`:
```bash
# View the running pods
$ oc get pods
NAME READY STATUS RESTARTS AGE
awx-3886581826-5mv0l 4/4 Running 0 8s
postgresql-1-l85fh 1/1 Running 0 20m
Once the Operator is running, you can now deploy AWX by creating a simple YAML file:
```
In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:
```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
$ cat myawx.yml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
name: awx
spec:
tower_ingress_type: Ingress
```
You will see the following indicating that database migrations are running:
And then creating the AWX object in the Kubernetes API:
```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
"changed": false,
"db": "awx"
}
Operations to perform:
Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying taggit.0001_initial... OK
Applying taggit.0002_auto_20150616_2121... OK
...
```
$ minikube kubectl apply -- -f myawx.yml
awx.awx.ansible.com/awx created
```
When you see output similar to the following, you'll know that database migrations have completed, and you can access the web interface:
After creating the AWX object in the Kubernetes API, the operator will begin running its reconciliation loop.
```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
To see what's going on, you can tail the logs of the operator pod (note that your pod name will be different):
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx-3886581826-5mv0l
(changed: True)
Creating instance group tower
Added instance awx-3886581826-5mv0l to tower
```
$ minikube kubectl logs -- -f awx-operator-7c78bfbfd-xb6th
```
Once database migrations complete, the web interface will be accessible.
After a few seconds, you will see the database and application pods show up. On a fresh system, it may take a few minutes for the container images to download.
### Accessing AWX
The AWX web interface is running in the AWX pod, behind the `awx-web-svc` service. To view the service, and its port value, run the following command:
```bash
# View available services
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-web-svc 172.30.111.74 <nodes> 8052:30083/TCP 37m
postgresql 172.30.102.9 <none> 5432/TCP 38m
```
$ minikube kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-5ffbfd489c-bvtvf 3/3 Running 0 2m54s
awx-operator-7c78bfbfd-xb6th 1/1 Running 0 6m42s
awx-postgres-0 1/1 Running 0 2m58s
```
The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment, and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:
##### Accessing AWX
```bash
# View available routes
$ oc get routes
To access the AWX UI, you'll need to grab the service url from minikube:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
awx-web-svc awx-web-svc-awx.192.168.64.2.nip.io awx-web-svc http edge/Allow None
```
$ minikube service awx-service --url
http://192.168.59.2:31868
```
The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).
On fresh installs, you will see the "AWX is currently upgrading." page until database migrations finish.
Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
Once you are redirected to the login screen, you can now log in by obtaining the generated admin password (note: do not copy the trailing `%`):
## Kubernetes
### Prerequisites
A Kubernetes deployment will require you to have access to a Kubernetes cluster as well as the following tools:
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [helm](https://helm.sh/docs/intro/quickstart/)
The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.
The default resource requests per pod require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
### Pre-install steps
Before starting the install process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section. Make sure the openshift and standalone docker sections are commented out:
*kubernetes_context*
> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use (see the brief sketch after this list).
*kubernetes_namespace*
> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist
*docker_registry_*
> These settings should be used if building your own base images. You'll need access to an external registry and are responsible for making sure your kube cluster can talk to it and use it. If these are undefined and the dockerhub_ configuration settings are uncommented then the images will be pulled from dockerhub instead
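A brief sketch of checking and selecting the context referenced by *kubernetes_context*; the context name below is a placeholder:

```bash
# List the contexts kubectl knows about and pick the one for your target cluster.
kubectl config get-contexts
# Make it the active context ("my-awx-cluster" is a hypothetical name).
kubectl config use-context my-awx-cluster
# The kubernetes_context value in the inventory should match this name.
kubectl config current-context
```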
### Configuring Helm
If you want the AWX installer to manage creating the database pod (rather than installing and configuring postgres on your own), then you will need a working `helm` installation; you can find details here: [https://helm.sh/docs/intro/quickstart/](https://helm.sh/docs/intro/quickstart/).
You do not need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html) as Helm does it for you. However, an existing one may be used by setting the `pg_persistence_existingclaim` variable.
Newer Kubernetes clusters with RBAC enabled will need to have a service account created; make sure to follow the instructions here: [https://helm.sh/docs/topics/rbac/](https://helm.sh/docs/topics/rbac/)
### Run the installer
After making changes to the `inventory` file use `ansible-playbook` to begin the install
```bash
$ ansible-playbook -i inventory install.yml
```
$ minikube kubectl -- get secret awx-admin-password -o jsonpath='{.data.password}' | base64 --decode
b6ChwVmqEiAsil2KSpH4xGaZPeZvWnWj%
```
### Post-install
After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace awx with the namespace you used):
```bash
# View the running pods, it may take a few minutes for everything to be marked in the Running state
$ kubectl get pods --namespace awx
NAME READY STATUS RESTARTS AGE
awx-2558692395-2r8ss 4/4 Running 0 29s
awx-postgresql-355348841-kltkn 1/1 Running 0 1m
```
### Accessing AWX
The AWX web interface is running in the AWX pod behind the `awx-web-svc` service:
```bash
# View available services
$ kubectl get svc --namespace awx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-postgresql ClusterIP 10.7.250.208 <none> 5432/TCP 2m
awx-web-svc NodePort 10.7.241.35 <none> 80:30177/TCP 1m
```
The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it more explicitly. You can see what Kubernetes knows about it with:
```bash
kubectl get ing --namespace awx
NAME HOSTS ADDRESS PORTS AGE
awx-web-svc * 35.227.x.y 80 3m
```
If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to the address and access the AWX interface. For some providers it can take a few minutes to allocate and make this accessible. For other providers, manual intervention may be required.
### SSL Termination
Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX over HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).
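One common workaround, sketched below, is to terminate TLS at an Ingress (or external load balancer) that fronts the `awx-web-svc` service. The namespace, host name, and the pre-created TLS secret `awx-tls` are all assumptions, and whether `spec.tls` is honored depends on your Ingress controller.

```bash
# Minimal sketch only: an Ingress that terminates TLS using an existing secret.
kubectl apply --namespace awx -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-web-tls
spec:
  tls:
    - hosts:
        - awx.example.com
      secretName: awx-tls
  rules:
    - host: awx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: awx-web-svc
                port:
                  number: 80
EOF
```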
## Docker-Compose
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group, refer to the documentation for details)
- [docker-compose](https://pypi.org/project/docker-compose/) Python module.
+ This also installs the `docker` Python module, which is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
- [Docker Compose](https://docs.docker.com/compose/install/).
### Pre-install steps
#### Deploying to a remote host
By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file, by commenting out `localhost`, and adding the remote host.
For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:
```yaml
# localhost ansible_connection=local
awx-server
[all:vars]
...
```
In the above example, image build tasks will be delegated to `localhost`, which is typically where the clone of the AWX project exists. Built images will be archived, copied to remote host, and imported into the remote Docker image cache. Tasks to start the AWX containers will then execute on the remote host.
If you choose to use the official images then the remote host will be the one to pull those images.
**Note**
> You may also want to set additional variables to control how Ansible connects to the host. For more information about this, view [Behavioral Inventory Parameters](http://docs.ansible.com/ansible/latest/intro_inventory.html#id12).
> As mentioned above, in [Prerequisites](#prerequisites-1), the prerequisites are required on the remote host.
> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
#### Inventory variables
Before starting the install process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*postgres_data_dir*
> If you're using the default PostgreSQL container (see [PostgreSQL](#postgresql-1) below), provide a path that can be mounted to the container, and where the database can be persisted.
*host_port*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. If undefined no port will be exposed. Defaults to *80*.
*host_port_ssl*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container for SSL support. If undefined no port will be exposed. Defaults to *443*, only works if you also set `ssl_certificate` (see below).
*ssl_certificate*
> Optionally, provide the path to a file that contains a certificate and its private key. This needs to be a .pem-file
*docker_compose_dir*
> When using docker-compose, the `docker-compose.yml` file will be created there (default `~/.awx/awxcompose`).
*custom_venv_dir*
> Adds the custom venv environments from the local host to be passed into the containers at install.
*ca_trust_dir*
> If you're using a non trusted CA, provide a path where the untrusted Certs are stored on your Host.
#### Docker registry
If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:
*docker_registry*
> IP address and port, or URL, for accessing a registry.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Defaults to *awx*.
*docker_registry_username*
> Username of the user that will push images to the registry. Defaults to *developer*.
**Note**
> These settings are ignored if using official images
#### Proxy settings
*http_proxy*
> IP address and port, or URL, for using an http_proxy.
*https_proxy*
> IP address and port, or URL, for using an https_proxy.
*no_proxy*
> Exclude IP address or URL from the proxy.
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information.
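A hedged sketch of what those inventory settings might look like for an external database; every value below is a placeholder:

```
# In the [all:vars] section of the installer inventory
pg_hostname=database.example.com
pg_port=5432
pg_database=awx
pg_username=awx
pg_password=changeme
pg_admin_password=changeme-admin
# If you keep the default containerized PostgreSQL instead, leave pg_hostname unset and
# point postgres_data_dir at any writable host path where the data should persist, e.g.:
# postgres_data_dir=/srv/awx/pgdata
```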
### Run the installer
If you are not pushing images to a Docker registry, start the install by running the following:
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory install.yml
```
If you're pushing built images to a repository, then use the `-e` option to pass the registry password as follows, replacing *password* with the password of the username assigned to `docker_registry_username` (note that you will also need to remove `dockerhub_base` and `dockerhub_version` from the inventory file):
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory -e docker_registry_password=password install.yml
```
### Post-install
After the playbook run completes, Docker starts a series of containers that provide the services that make up AWX. You can view the running containers using the `docker ps` command.
If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the docker-compose.yml file is created (by default, `awx`).
Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT by running the following:
```bash
# Tail the awx_task log
$ docker logs -f awx_task
```
You will see output similar to the following:
```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
"changed": false,
"db": "awx"
}
Operations to perform:
Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying taggit.0001_initial... OK
Applying taggit.0002_auto_20150616_2121... OK
Applying main.0001_initial... OK
...
```
Once migrations complete, you will see the following log output, indicating that migrations have completed:
```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx
(changed: True)
Creating instance group tower
Added instance awx to tower
(changed: True)
...
```
### Accessing AWX
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
Now you can log in at the URL above with the username "admin" and the password above. Happy Automating!
# Installing the AWX CLI

Makefile

@@ -20,11 +20,12 @@ COMPOSE_TAG ?= $(GIT_BRANCH)
COMPOSE_HOST ?= $(shell hostname)
VENV_BASE ?= /var/lib/awx/venv/
COLLECTION_BASE ?= /var/lib/awx/vendor/awx_ansible_collections
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db
DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering
DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
# Python packages to install only from source (not from binary wheels)
# Comma separated list
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio,pycurl
@@ -60,11 +61,11 @@ WHEEL_FILE ?= $(WHEEL_NAME)-py2-none-any.whl
I18N_FLAG_FILE = .i18n_built
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange runserver \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
dev_build release_build release_clean sdist \
ui-docker-machine ui-docker ui-release ui-devel \
ui-test ui-deps ui-test-ci VERSION
dev_build release_build sdist \
ui-release ui-devel \
VERSION docker-compose-sources
clean-tmp:
rm -rf tmp/
@@ -113,31 +114,7 @@ guard-%:
exit 1; \
fi
virtualenv: virtualenv_ansible virtualenv_awx
# virtualenv_* targets do not use --system-site-packages to prevent bugs installing packages
# but Ansible venvs are expected to have this, so that must be done after venv creation
virtualenv_ansible:
if [ "$(VENV_BASE)" ]; then \
if [ ! -d "$(VENV_BASE)" ]; then \
mkdir $(VENV_BASE); \
fi; \
if [ ! -d "$(VENV_BASE)/ansible" ]; then \
virtualenv -p python $(VENV_BASE)/ansible && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
fi; \
fi
virtualenv_ansible_py3:
if [ "$(VENV_BASE)" ]; then \
if [ ! -d "$(VENV_BASE)" ]; then \
mkdir $(VENV_BASE); \
fi; \
if [ ! -d "$(VENV_BASE)/ansible" ]; then \
virtualenv -p $(PYTHON) $(VENV_BASE)/ansible; \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
fi; \
fi
virtualenv: virtualenv_awx
# flit is needed for offline install of certain packages, specifically ptyprocess
# it is needed for setup, but not always recognized as a setup dependency
@@ -153,32 +130,6 @@ virtualenv_awx:
fi; \
fi
# --ignore-install flag is not used because *.txt files should specify exact versions
requirements_ansible: virtualenv_ansible
if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) -r /dev/stdin ; \
else \
cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
fi
$(VENV_BASE)/ansible/bin/pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt
# Same effect as using --system-site-packages flag on venv creation
rm $(shell ls -d $(VENV_BASE)/ansible/lib/python* | head -n 1)/no-global-site-packages.txt
requirements_ansible_py3: virtualenv_ansible_py3
if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) -r /dev/stdin ; \
else \
cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
fi
$(VENV_BASE)/ansible/bin/pip3 uninstall --yes -r requirements/requirements_ansible_uninstall.txt
# Same effect as using --system-site-packages flag on venv creation
rm $(shell ls -d $(VENV_BASE)/ansible/lib/python* | head -n 1)/no-global-site-packages.txt
requirements_ansible_dev:
if [ "$(VENV_BASE)" ]; then \
$(VENV_BASE)/ansible/bin/pip install pytest mock; \
fi
# Install third-party requirements needed for AWX's environment.
# this does not use system site packages intentionally
requirements_awx: virtualenv_awx
@@ -192,17 +143,9 @@ requirements_awx: virtualenv_awx
requirements_awx_dev:
$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt
requirements_collections:
mkdir -p $(COLLECTION_BASE)
n=0; \
until [ "$$n" -ge 5 ]; do \
ansible-galaxy collection install -r requirements/collections_requirements.yml -p $(COLLECTION_BASE) && break; \
n=$$((n+1)); \
done
requirements: requirements_awx
requirements: requirements_ansible requirements_awx requirements_collections
requirements_dev: requirements_awx requirements_ansible_py3 requirements_awx_dev requirements_ansible_dev
requirements_dev: requirements_awx requirements_awx_dev
requirements_test: requirements
@@ -267,11 +210,27 @@ collectstatic:
fi; \
mkdir -p awx/public/static && $(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1
UWSGI_DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:awx-dispatcher tower-processes:awx-receiver
uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/var/lib/awx/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1="exec:supervisorctl restart tower-processes:awx-dispatcher tower-processes:awx-receiver"
uwsgi -b 32768 \
--socket 127.0.0.1:8050 \
--module=awx.wsgi:application \
--home=/var/lib/awx/venv/awx \
--chdir=/awx_devel/ \
--vacuum \
--processes=5 \
--harakiri=120 --master \
--no-orphans \
--py-autoreload 1 \
--max-requests=1000 \
--stats /tmp/stats.socket \
--lazy-apps \
--logformat "%(addr) %(method) %(uri) - %(proto) %(status)" \
--hook-accepting1="exec: $(UWSGI_DEV_RELOAD_COMMAND)"
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -365,7 +324,8 @@ test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
fi && \
pip install ansible && \
py.test $(COLLECTION_TEST_DIRS) -v
# The python path needs to be modified so that the tests can find Ansible within the container
# First we will use anything explicitly set as PYTHONPATH
@@ -427,39 +387,6 @@ bulk_data:
fi; \
$(PYTHON) tools/data_generators/rbac_dummy_data_generator.py --preset=$(DATA_GEN_PRESET)
# l10n TASKS
# --------------------------------------
# check for UI po files
HAVE_PO := $(shell ls awx/ui/po/*.po 2>/dev/null)
check-po:
ifdef HAVE_PO
# Should be 'Language: zh-CN' but not 'Language: zh_CN' in zh_CN.po
for po in awx/ui/po/*.po ; do \
echo $$po; \
mo="awx/ui/po/`basename $$po .po`.mo"; \
msgfmt --check --verbose $$po -o $$mo; \
if test "$$?" -ne 0 ; then \
exit -1; \
fi; \
rm $$mo; \
name=`echo "$$po" | grep '-'`; \
if test "x$$name" != x ; then \
right_name=`echo $$language | sed -e 's/-/_/'`; \
echo "ERROR: WRONG $$name CORRECTION: $$right_name"; \
exit -1; \
fi; \
language=`grep '^"Language:' "$$po" | grep '_'`; \
if test "x$$language" != x ; then \
right_language=`echo $$language | sed -e 's/_/-/'`; \
echo "ERROR: WRONG $$language CORRECTION: $$right_language in $$po"; \
exit -1; \
fi; \
done;
else
@echo No PO files
endif
# UI TASKS
# --------------------------------------
@@ -472,16 +399,13 @@ clean-ui:
rm -rf awx/ui_next/build
rm -rf awx/ui_next/src/locales/_build
rm -rf $(UI_BUILD_FLAG_FILE)
git checkout awx/ui_next/src/locales
awx/ui_next/node_modules:
$(NPM_BIN) --prefix awx/ui_next --loglevel warn --ignore-scripts install
$(UI_BUILD_FLAG_FILE):
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run compile-strings
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run build
git checkout awx/ui_next/src/locales
mkdir -p awx/public/static/css
mkdir -p awx/public/static/js
mkdir -p awx/public/static/media
@@ -495,11 +419,17 @@ ui-release: awx/ui_next/node_modules $(UI_BUILD_FLAG_FILE)
ui-devel: awx/ui_next/node_modules
@$(MAKE) -B $(UI_BUILD_FLAG_FILE)
ui-devel-instrumented: awx/ui_next/node_modules
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run start-instrumented
ui-devel-test: awx/ui_next/node_modules
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run start
ui-zuul-lint-and-test:
$(NPM_BIN) --prefix awx/ui_next install
$(NPM_BIN) run --prefix awx/ui_next lint
$(NPM_BIN) run --prefix awx/ui_next prettier-check
$(NPM_BIN) run --prefix awx/ui_next test
$(NPM_BIN) run --prefix awx/ui_next test -- --coverage --watchAll=false
# Build a pip-installable package into dist/ with a timestamped version number.
@@ -543,31 +473,30 @@ docker-auth:
awx/projects:
@mkdir -p $@
# Docker isolated rampart
docker-compose-isolated: awx/projects
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
COMPOSE_UP_OPTS ?=
CLUSTER_NODE_COUNT ?= 1
# Docker Compose Development environment
docker-compose: docker-auth awx/projects
CURRENT_UID=$(shell id -u) OS="$(shell docker info | grep 'Operating System')" TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml $(COMPOSE_UP_OPTS) up --no-recreate awx
docker-compose-sources:
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
-e awx_image=$(DEV_DOCKER_TAG_BASE)/awx_devel \
-e awx_image_tag=$(COMPOSE_TAG) \
-e cluster_node_count=$(CLUSTER_NODE_COUNT)
docker-compose-cluster: docker-auth awx/projects
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml up
docker-compose: docker-auth awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_UP_OPTS) up
docker-compose-credential-plugins: docker-auth awx/projects
docker-compose-credential-plugins: docker-auth awx/projects docker-compose-sources
echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx
docker-compose-test: docker-auth awx/projects
cd tools && CURRENT_UID=$(shell id -u) OS="$(shell docker info | grep 'Operating System')" TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /bin/bash
docker-compose-test: docker-auth awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
docker-compose-runtest: awx/projects
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /start_tests.sh
docker-compose-runtest: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
docker-compose-build-swagger: awx/projects
cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports --no-deps awx /start_tests.sh swagger
docker-compose-build-swagger: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
detect-schema-change: genschema
curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
@@ -575,21 +504,14 @@ detect-schema-change: genschema
diff -u -b reference-schema.json schema.json
docker-compose-clean: awx/projects
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose rm -sf
docker-compose -f tools/docker-compose/_sources/docker-compose.yml rm -sf
# Base development image build
docker-compose-build:
ansible localhost -m template -a "src=installer/roles/image_build/templates/Dockerfile.j2 dest=tools/docker-compose/Dockerfile" -e build_dev=True
docker build -t ansible/awx_devel -f tools/docker-compose/Dockerfile \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
# For use when developing on "isolated" AWX deployments
docker-compose-isolated-build: docker-compose-build
docker build -t ansible/awx_isolated -f tools/docker-isolated/Dockerfile .
docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True
DOCKER_BUILDKIT=1 docker build -t $(DEVEL_IMAGE_NAME) \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
docker-clean:
$(foreach container_id,$(shell docker ps -f name=tools_awx -aq),docker stop $(container_id); docker rm -f $(container_id);)
@@ -601,11 +523,11 @@ docker-clean-volumes: docker-compose-clean
docker-refresh: docker-clean docker-compose
# Docker Development Environment with Elastic Stack Connected
docker-compose-elk: docker-auth awx/projects
CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-elk: docker-auth awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: docker-auth awx/projects
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: docker-auth awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
prometheus:
docker run -u0 --net=tools_default --link=`docker ps | egrep -o "tools_awx(_run)?_([^ ]+)?"`:awxweb --volume `pwd`/tools/prometheus:/prometheus --name prometheus -d -p 0.0.0.0:9090:9090 prom/prometheus --web.enable-lifecycle --config.file=/prometheus/prometheus.yml
@@ -624,5 +546,33 @@ psql-container:
VERSION:
@echo "awx: $(VERSION)"
Dockerfile: installer/roles/image_build/templates/Dockerfile.j2
ansible localhost -m template -a "src=installer/roles/image_build/templates/Dockerfile.j2 dest=Dockerfile"
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml
Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.kube-dev \
-e kube_dev=True \
-e template_dest=_build_kube_dev
awx-kube-dev-build: Dockerfile.kube-dev
docker build -f Dockerfile.kube-dev \
--build-arg BUILDKIT_INLINE_CACHE=1 \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
# Translation TASKS
# --------------------------------------
# generate UI .pot
pot: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings
$(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-template
# generate API django .pot .po
LANG = "en-us"
messages:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py makemessages -l $(LANG) --keep-pot

View File

@@ -1,4 +1,7 @@
[![Gated by Zuul](https://zuul-ci.org/gated.svg)](https://ansible.softwarefactory-project.io/zuul/status)
[![Gated by Zuul](https://zuul-ci.org/gated.svg)](https://ansible.softwarefactory-project.io/zuul/status) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
[![IRC Chat](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://webchat.freenode.net/#ansible-awx)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is the upstream project for [Tower](https://www.ansible.com/tower), a commercial derivative of AWX.
@@ -36,8 +39,3 @@ We welcome your feedback and ideas. Here's how to reach us with feedback and que
- Join the `#ansible-awx` channel on irc.freenode.net
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
License
-------
[Apache v2](./LICENSE.md)

View File

@@ -1 +1 @@
17.0.1
18.0.0

View File

@@ -24,7 +24,7 @@ from rest_framework.request import clone_request
from awx.api.fields import ChoiceNullField
from awx.main.fields import JSONField, ImplicitRoleField
from awx.main.models import NotificationTemplate
from awx.main.scheduler.kubernetes import PodManager
from awx.main.tasks import AWXReceptorJob
class Metadata(metadata.SimpleMetadata):
@@ -209,7 +209,7 @@ class Metadata(metadata.SimpleMetadata):
continue
if field == "pod_spec_override":
meta['default'] = PodManager().pod_definition
meta['default'] = AWXReceptorJob().pod_definition
# Add type choices if available from the serializer.
if field == 'type' and hasattr(serializer, 'get_type_choices'):

View File

@@ -50,7 +50,7 @@ from awx.main.constants import (
)
from awx.main.models import (
ActivityStream, AdHocCommand, AdHocCommandEvent, Credential, CredentialInputSource,
CredentialType, CustomInventoryScript, Group, Host, Instance,
CredentialType, CustomInventoryScript, ExecutionEnvironment, Group, Host, Instance,
InstanceGroup, Inventory, InventorySource, InventoryUpdate,
InventoryUpdateEvent, Job, JobEvent, JobHostSummary, JobLaunchConfig,
JobNotificationMixin, JobTemplate, Label, Notification, NotificationTemplate,
@@ -107,6 +107,8 @@ SUMMARIZABLE_FK_FIELDS = {
'insights_credential_id',),
'host': DEFAULT_SUMMARY_FIELDS,
'group': DEFAULT_SUMMARY_FIELDS,
'default_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
'execution_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
@@ -124,12 +126,12 @@ SUMMARIZABLE_FK_FIELDS = {
'last_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'license_error'),
'current_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'license_error'),
'current_job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'license_error'),
'inventory_source': ('source', 'last_updated', 'status'),
'inventory_source': ('id', 'name', 'source', 'last_updated', 'status'),
'custom_inventory_script': DEFAULT_SUMMARY_FIELDS,
'source_script': DEFAULT_SUMMARY_FIELDS,
'role': ('id', 'role_field'),
'notification_template': DEFAULT_SUMMARY_FIELDS,
'instance_group': ('id', 'name', 'controller_id', 'is_containerized'),
'instance_group': ('id', 'name', 'controller_id', 'is_container_group'),
'insights_credential': DEFAULT_SUMMARY_FIELDS,
'source_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
'target_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
@@ -647,7 +649,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
class Meta:
model = UnifiedJobTemplate
fields = ('*', 'last_job_run', 'last_job_failed',
'next_job_run', 'status')
'next_job_run', 'status', 'execution_environment')
def get_related(self, obj):
res = super(UnifiedJobTemplateSerializer, self).get_related(obj)
@@ -657,6 +659,9 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
res['last_job'] = obj.last_job.get_absolute_url(request=self.context.get('request'))
if obj.next_schedule:
res['next_schedule'] = obj.next_schedule.get_absolute_url(request=self.context.get('request'))
if obj.execution_environment_id:
res['execution_environment'] = self.reverse('api:execution_environment_detail',
kwargs={'pk': obj.execution_environment_id})
return res
def get_types(self):
@@ -711,6 +716,7 @@ class UnifiedJobSerializer(BaseSerializer):
class Meta:
model = UnifiedJob
fields = ('*', 'unified_job_template', 'launch_type', 'status',
'execution_environment',
'failed', 'started', 'finished', 'canceled_on', 'elapsed', 'job_args',
'job_cwd', 'job_env', 'job_explanation',
'execution_node', 'controller_node',
@@ -748,6 +754,9 @@ class UnifiedJobSerializer(BaseSerializer):
res['stdout'] = self.reverse('api:ad_hoc_command_stdout', kwargs={'pk': obj.pk})
if obj.workflow_job_id:
res['source_workflow_job'] = self.reverse('api:workflow_job_detail', kwargs={'pk': obj.workflow_job_id})
if obj.execution_environment_id:
res['execution_environment'] = self.reverse('api:execution_environment_detail',
kwargs={'pk': obj.execution_environment_id})
return res
def get_summary_fields(self, obj):
@@ -1243,11 +1252,13 @@ class OrganizationSerializer(BaseSerializer):
class Meta:
model = Organization
fields = ('*', 'max_hosts', 'custom_virtualenv',)
fields = ('*', 'max_hosts', 'custom_virtualenv', 'default_environment',)
read_only_fields = ('*', 'custom_virtualenv',)
def get_related(self, obj):
res = super(OrganizationSerializer, self).get_related(obj)
res.update(dict(
res.update(
execution_environments = self.reverse('api:organization_execution_environments_list', kwargs={'pk': obj.pk}),
projects = self.reverse('api:organization_projects_list', kwargs={'pk': obj.pk}),
inventories = self.reverse('api:organization_inventories_list', kwargs={'pk': obj.pk}),
job_templates = self.reverse('api:organization_job_templates_list', kwargs={'pk': obj.pk}),
@@ -1267,7 +1278,10 @@ class OrganizationSerializer(BaseSerializer):
access_list = self.reverse('api:organization_access_list', kwargs={'pk': obj.pk}),
instance_groups = self.reverse('api:organization_instance_groups_list', kwargs={'pk': obj.pk}),
galaxy_credentials = self.reverse('api:organization_galaxy_credentials_list', kwargs={'pk': obj.pk}),
))
)
if obj.default_environment:
res['default_environment'] = self.reverse('api:execution_environment_detail',
kwargs={'pk': obj.default_environment_id})
return res
def get_summary_fields(self, obj):
@@ -1347,6 +1361,29 @@ class ProjectOptionsSerializer(BaseSerializer):
return super(ProjectOptionsSerializer, self).validate(attrs)
class ExecutionEnvironmentSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete', 'copy']
managed_by_tower = serializers.ReadOnlyField()
class Meta:
model = ExecutionEnvironment
fields = ('*', 'organization', 'image', 'managed_by_tower', 'credential', 'pull')
def get_related(self, obj):
res = super(ExecutionEnvironmentSerializer, self).get_related(obj)
res.update(
activity_stream=self.reverse('api:execution_environment_activity_stream_list', kwargs={'pk': obj.pk}),
unified_job_templates=self.reverse('api:execution_environment_job_template_list', kwargs={'pk': obj.pk}),
copy=self.reverse('api:execution_environment_copy', kwargs={'pk': obj.pk}),
)
if obj.organization:
res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization.pk})
if obj.credential:
res['credential'] = self.reverse('api:credential_detail',
kwargs={'pk': obj.credential.pk})
return res
class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
status = serializers.ChoiceField(choices=Project.PROJECT_STATUS_CHOICES, read_only=True)
@@ -1360,9 +1397,10 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
class Meta:
model = Project
fields = ('*', 'organization', 'scm_update_on_launch',
'scm_update_cache_timeout', 'allow_override', 'custom_virtualenv',) + \
fields = ('*', '-execution_environment', 'organization', 'scm_update_on_launch',
'scm_update_cache_timeout', 'allow_override', 'custom_virtualenv', 'default_environment') + \
('last_update_failed', 'last_updated') # Backwards compatibility
read_only_fields = ('*', 'custom_virtualenv',)
def get_related(self, obj):
res = super(ProjectSerializer, self).get_related(obj)
@@ -1386,6 +1424,9 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
if obj.organization:
res['organization'] = self.reverse('api:organization_detail',
kwargs={'pk': obj.organization.pk})
if obj.default_environment:
res['default_environment'] = self.reverse('api:execution_environment_detail',
kwargs={'pk': obj.default_environment_id})
# Backwards compatibility.
if obj.current_update:
res['current_update'] = self.reverse('api:project_update_detail',
@@ -1939,6 +1980,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
fields = ('*', 'source', 'source_path', 'source_script', 'source_vars', 'credential',
'enabled_var', 'enabled_value', 'host_filter', 'overwrite', 'overwrite_vars',
'custom_virtualenv', 'timeout', 'verbosity')
read_only_fields = ('*', 'custom_virtualenv',)
def get_related(self, obj):
res = super(InventorySourceOptionsSerializer, self).get_related(obj)
@@ -2924,6 +2966,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
'become_enabled', 'diff_mode', 'allow_simultaneous', 'custom_virtualenv',
'job_slice_count', 'webhook_service', 'webhook_credential',
)
read_only_fields = ('*', 'custom_virtualenv',)
def get_related(self, obj):
res = super(JobTemplateSerializer, self).get_related(obj)
@@ -4731,10 +4774,10 @@ class InstanceGroupSerializer(BaseSerializer):
'Isolated groups have a designated controller group.'),
read_only=True
)
is_containerized = serializers.BooleanField(
is_container_group = serializers.BooleanField(
required=False,
help_text=_('Indicates whether instances in this group are containerized.'
'Containerized groups have a designated Openshift or Kubernetes cluster.'),
read_only=True
'Containerized groups have a designated Openshift or Kubernetes cluster.')
)
# NOTE: help_text is duplicated from field definitions, no obvious way of
# both defining field details here and also getting the field's help_text
@@ -4761,7 +4804,7 @@ class InstanceGroupSerializer(BaseSerializer):
fields = ("id", "type", "url", "related", "name", "created", "modified",
"capacity", "committed_capacity", "consumed_capacity",
"percent_capacity_remaining", "jobs_running", "jobs_total",
"instances", "controller", "is_controller", "is_isolated", "is_containerized", "credential",
"instances", "controller", "is_controller", "is_isolated", "is_container_group", "credential",
"policy_instance_percentage", "policy_instance_minimum", "policy_instance_list",
"pod_spec_override", "summary_fields")
@@ -4786,17 +4829,17 @@ class InstanceGroupSerializer(BaseSerializer):
raise serializers.ValidationError(_('Isolated instances may not be added or removed from instances groups via the API.'))
if self.instance and self.instance.controller_id is not None:
raise serializers.ValidationError(_('Isolated instance group membership may not be managed via the API.'))
if value and self.instance and self.instance.is_containerized:
if value and self.instance and self.instance.is_container_group:
raise serializers.ValidationError(_('Containerized instances may not be managed via the API'))
return value
def validate_policy_instance_percentage(self, value):
if value and self.instance and self.instance.is_containerized:
if value and self.instance and self.instance.is_container_group:
raise serializers.ValidationError(_('Containerized instances may not be managed via the API'))
return value
def validate_policy_instance_minimum(self, value):
if value and self.instance and self.instance.is_containerized:
if value and self.instance and self.instance.is_container_group:
raise serializers.ValidationError(_('Containerized instances may not be managed via the API'))
return value
@@ -4810,6 +4853,15 @@ class InstanceGroupSerializer(BaseSerializer):
raise serializers.ValidationError(_('Only Kubernetes credentials can be associated with an Instance Group'))
return value
def validate(self, attrs):
attrs = super(InstanceGroupSerializer, self).validate(attrs)
if attrs.get('credential') and not attrs.get('is_container_group'):
raise serializers.ValidationError({'is_container_group': _(
'is_container_group must be True when associating a credential to an Instance Group')})
return attrs
def get_capacity_dict(self):
# Store capacity values (globally computed) in the context
if 'capacity_map' not in self.context:

View File

@@ -0,0 +1,20 @@
from django.conf.urls import url
from awx.api.views import (
ExecutionEnvironmentList,
ExecutionEnvironmentDetail,
ExecutionEnvironmentJobTemplateList,
ExecutionEnvironmentCopy,
ExecutionEnvironmentActivityStreamList,
)
urls = [
url(r'^$', ExecutionEnvironmentList.as_view(), name='execution_environment_list'),
url(r'^(?P<pk>[0-9]+)/$', ExecutionEnvironmentDetail.as_view(), name='execution_environment_detail'),
url(r'^(?P<pk>[0-9]+)/unified_job_templates/$', ExecutionEnvironmentJobTemplateList.as_view(), name='execution_environment_job_template_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', ExecutionEnvironmentCopy.as_view(), name='execution_environment_copy'),
url(r'^(?P<pk>[0-9]+)/activity_stream/$', ExecutionEnvironmentActivityStreamList.as_view(), name='execution_environment_activity_stream_list'),
]
__all__ = ['urls']
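Taken together with the routes above, the `ExecutionEnvironmentSerializer` fields shown earlier suggest a straightforward way to register an execution environment over the API. A minimal sketch with Python's `requests`; URL, credentials, and payload values are placeholders, and `organization` may be omitted to create a global environment:

```python
import requests

AWX_URL = "http://localhost"   # placeholder base URL
AUTH = ("admin", "password")   # placeholder admin credentials

# POST against the list route registered above; field names follow
# ExecutionEnvironmentSerializer ('organization', 'image', 'credential', 'pull').
payload = {
    "name": "My EE",                                  # hypothetical example values
    "image": "quay.io/example/awx-ee:latest",
    "organization": 1,                                # omit for a global execution environment
}
resp = requests.post(f"{AWX_URL}/api/v2/execution_environments/", json=payload, auth=AUTH)
resp.raise_for_status()
ee = resp.json()

# The detail view exposes related links such as activity_stream and copy.
print(ee["url"], ee["related"])
```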

View File

@@ -9,6 +9,7 @@ from awx.api.views import (
OrganizationUsersList,
OrganizationAdminsList,
OrganizationInventoriesList,
OrganizationExecutionEnvironmentsList,
OrganizationProjectsList,
OrganizationJobTemplatesList,
OrganizationWorkflowJobTemplatesList,
@@ -34,6 +35,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/users/$', OrganizationUsersList.as_view(), name='organization_users_list'),
url(r'^(?P<pk>[0-9]+)/admins/$', OrganizationAdminsList.as_view(), name='organization_admins_list'),
url(r'^(?P<pk>[0-9]+)/inventories/$', OrganizationInventoriesList.as_view(), name='organization_inventories_list'),
url(r'^(?P<pk>[0-9]+)/execution_environments/$', OrganizationExecutionEnvironmentsList.as_view(), name='organization_execution_environments_list'),
url(r'^(?P<pk>[0-9]+)/projects/$', OrganizationProjectsList.as_view(), name='organization_projects_list'),
url(r'^(?P<pk>[0-9]+)/job_templates/$', OrganizationJobTemplatesList.as_view(), name='organization_job_templates_list'),
url(r'^(?P<pk>[0-9]+)/workflow_job_templates/$', OrganizationWorkflowJobTemplatesList.as_view(), name='organization_workflow_job_templates_list'),

View File

@@ -42,6 +42,7 @@ from .user import urls as user_urls
from .project import urls as project_urls
from .project_update import urls as project_update_urls
from .inventory import urls as inventory_urls
from .execution_environments import urls as execution_environment_urls
from .team import urls as team_urls
from .host import urls as host_urls
from .group import urls as group_urls
@@ -106,6 +107,7 @@ v2_urls = [
url(r'^schedules/', include(schedule_urls)),
url(r'^organizations/', include(organization_urls)),
url(r'^users/', include(user_urls)),
url(r'^execution_environments/', include(execution_environment_urls)),
url(r'^projects/', include(project_urls)),
url(r'^project_updates/', include(project_update_urls)),
url(r'^teams/', include(team_urls)),

View File

@@ -112,6 +112,7 @@ from awx.api.views.organization import ( # noqa
OrganizationInventoriesList,
OrganizationUsersList,
OrganizationAdminsList,
OrganizationExecutionEnvironmentsList,
OrganizationProjectsList,
OrganizationJobTemplatesList,
OrganizationWorkflowJobTemplatesList,
@@ -396,7 +397,7 @@ class InstanceGroupDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAP
permission_classes = (InstanceGroupTowerPermission,)
def update_raw_data(self, data):
if self.get_object().is_containerized:
if self.get_object().is_container_group:
data.pop('policy_instance_percentage', None)
data.pop('policy_instance_minimum', None)
data.pop('policy_instance_list', None)
@@ -685,6 +686,52 @@ class TeamAccessList(ResourceAccessList):
parent_model = models.Team
class ExecutionEnvironmentList(ListCreateAPIView):
always_allow_superuser = False
model = models.ExecutionEnvironment
serializer_class = serializers.ExecutionEnvironmentSerializer
swagger_topic = "Execution Environments"
class ExecutionEnvironmentDetail(RetrieveUpdateDestroyAPIView):
always_allow_superuser = False
model = models.ExecutionEnvironment
serializer_class = serializers.ExecutionEnvironmentSerializer
swagger_topic = "Execution Environments"
class ExecutionEnvironmentJobTemplateList(SubListAPIView):
model = models.UnifiedJobTemplate
serializer_class = serializers.UnifiedJobTemplateSerializer
parent_model = models.ExecutionEnvironment
relationship = 'unifiedjobtemplates'
class ExecutionEnvironmentCopy(CopyAPIView):
model = models.ExecutionEnvironment
copy_return_serializer_class = serializers.ExecutionEnvironmentSerializer
class ExecutionEnvironmentActivityStreamList(SubListAPIView):
model = models.ActivityStream
serializer_class = serializers.ActivityStreamSerializer
parent_model = models.ExecutionEnvironment
relationship = 'activitystream_set'
search_fields = ('changes',)
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(execution_environment=parent)
class ProjectList(ListCreateAPIView):
model = models.Project

View File

@@ -15,6 +15,7 @@ from awx.main.models import (
Inventory,
Host,
Project,
ExecutionEnvironment,
JobTemplate,
WorkflowJobTemplate,
Organization,
@@ -45,6 +46,7 @@ from awx.api.serializers import (
RoleSerializer,
NotificationTemplateSerializer,
InstanceGroupSerializer,
ExecutionEnvironmentSerializer,
ProjectSerializer, JobTemplateSerializer, WorkflowJobTemplateSerializer,
CredentialSerializer
)
@@ -141,6 +143,16 @@ class OrganizationProjectsList(SubListCreateAPIView):
parent_key = 'organization'
class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
model = ExecutionEnvironment
serializer_class = ExecutionEnvironmentSerializer
parent_model = Organization
relationship = 'executionenvironments'
parent_key = 'organization'
swagger_topic = "Execution Environments"
class OrganizationJobTemplatesList(SubListCreateAPIView):
model = JobTemplate

View File

@@ -100,6 +100,7 @@ class ApiVersionRootView(APIView):
data['dashboard'] = reverse('api:dashboard_view', request=request)
data['organizations'] = reverse('api:organization_list', request=request)
data['users'] = reverse('api:user_list', request=request)
data['execution_environments'] = reverse('api:execution_environment_list', request=request)
data['projects'] = reverse('api:project_list', request=request)
data['project_updates'] = reverse('api:project_update_list', request=request)
data['teams'] = reverse('api:team_list', request=request)

View File

@@ -14,6 +14,7 @@ from rest_framework.fields import ( # noqa
BooleanField, CharField, ChoiceField, DictField, DateTimeField, EmailField,
IntegerField, ListField, NullBooleanField
)
from rest_framework.serializers import PrimaryKeyRelatedField # noqa
logger = logging.getLogger('awx.conf.fields')

View File

@@ -23,4 +23,4 @@ class Migration(migrations.Migration):
operations = [
migrations.RunPython(copy_session_settings, reverse_copy_session_settings),
]

View File

@@ -352,7 +352,6 @@ class SettingsWrapper(UserSettingsHolder):
self.cache.set_many(settings_to_cache, timeout=SETTING_CACHE_TIMEOUT)
def _get_local(self, name, validate=True):
self.__clean_on_fork__()
self._preload_cache()
cache_key = Setting.get_cache_key(name)
try:

View File

@@ -29,9 +29,9 @@ from awx.main.utils import (
)
from awx.main.models import (
ActivityStream, AdHocCommand, AdHocCommandEvent, Credential, CredentialType,
CredentialInputSource, CustomInventoryScript, Group, Host, Instance, InstanceGroup,
Inventory, InventorySource, InventoryUpdate, InventoryUpdateEvent, Job, JobEvent,
JobHostSummary, JobLaunchConfig, JobTemplate, Label, Notification,
CredentialInputSource, CustomInventoryScript, ExecutionEnvironment, Group, Host, Instance,
InstanceGroup, Inventory, InventorySource, InventoryUpdate, InventoryUpdateEvent, Job,
JobEvent, JobHostSummary, JobLaunchConfig, JobTemplate, Label, Notification,
NotificationTemplate, Organization, Project, ProjectUpdate,
ProjectUpdateEvent, Role, Schedule, SystemJob, SystemJobEvent,
SystemJobTemplate, Team, UnifiedJob, UnifiedJobTemplate, WorkflowJob,
@@ -777,6 +777,11 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
@check_superuser
def can_change(self, obj, data):
if data and data.get('default_environment'):
ee = get_object_from_data('default_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return self.user in obj.admin_role
def can_delete(self, obj):
@@ -1308,6 +1313,54 @@ class TeamAccess(BaseAccess):
*args, **kwargs)
class ExecutionEnvironmentAccess(BaseAccess):
"""
I can see an execution environment when:
- I'm a superuser
- I'm a member of the same organization
- it is a global ExecutionEnvironment
I can create/change an execution environment when:
- I'm a superuser
- I'm an admin for the organization(s)
"""
model = ExecutionEnvironment
select_related = ('organization',)
prefetch_related = ('organization__admin_role', 'organization__execution_environment_admin_role')
def filtered_queryset(self):
return ExecutionEnvironment.objects.filter(
Q(organization__in=Organization.accessible_pk_qs(self.user, 'read_role')) |
Q(organization__isnull=True)
).distinct()
@check_superuser
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'execution_environment_admin_role').exists()
return self.check_related('organization', Organization, data, mandatory=True,
role_field='execution_environment_admin_role')
def can_change(self, obj, data):
if obj.managed_by_tower:
raise PermissionDenied
if self.user.is_superuser:
return True
if obj and obj.organization_id is None:
raise PermissionDenied
if self.user not in obj.organization.execution_environment_admin_role:
raise PermissionDenied
if data and 'organization' in data:
new_org = get_object_from_data('organization', Organization, data, obj=obj)
if not new_org or self.user not in new_org.execution_environment_admin_role:
return False
return self.check_related('organization', Organization, data, obj=obj, mandatory=True,
role_field='execution_environment_admin_role')
def can_delete(self, obj):
return self.can_change(obj, None)
class ProjectAccess(NotificationAttachMixin, BaseAccess):
'''
I can see projects when:
@@ -1337,14 +1390,29 @@ class ProjectAccess(NotificationAttachMixin, BaseAccess):
def can_add(self, data):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'project_admin_role').exists()
return (self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) and
self.check_related('credential', Credential, data, role_field='use_role'))
if data.get('default_environment'):
ee = get_object_from_data('default_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return (
self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) and
self.check_related('credential', Credential, data, role_field='use_role')
)
@check_superuser
def can_change(self, obj, data):
return (self.check_related('organization', Organization, data, obj=obj, role_field='project_admin_role') and
self.user in obj.admin_role and
self.check_related('credential', Credential, data, obj=obj, role_field='use_role'))
if data and data.get('default_environment'):
ee = get_object_from_data('default_environment', ExecutionEnvironment, data, obj=obj)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return (
self.check_related('organization', Organization, data, obj=obj, role_field='project_admin_role') and
self.user in obj.admin_role and
self.check_related('credential', Credential, data, obj=obj, role_field='use_role')
)
@check_superuser
def can_start(self, obj, validate_license=True):
@@ -1449,6 +1517,10 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
if self.user not in inventory.use_role:
return False
ee = get_value(ExecutionEnvironment, 'execution_environment')
if ee and not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
project = get_value(Project, 'project')
# If the user has admin access to the project (as an org admin), should
# be able to proceed without additional checks.
@@ -1496,6 +1568,11 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
if self.changes_are_non_sensitive(obj, data):
return True
if data.get('execution_environment'):
ee = get_object_from_data('execution_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
for required_field, cls in (('inventory', Inventory), ('project', Project)):
is_mandatory = True
if not getattr(obj, '{}_id'.format(required_field)):
@@ -1926,6 +2003,11 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
if data.get('execution_environment'):
ee = get_object_from_data('execution_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and
self.check_related('inventory', Inventory, data, role_field='use_role')
@@ -1975,6 +2057,11 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if self.user.is_superuser:
return True
if data and data.get('execution_environment'):
ee = get_object_from_data('execution_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj) and
self.check_related('inventory', Inventory, data, role_field='use_role', obj=obj) and
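A short sketch of how the new `ExecutionEnvironmentAccess` class above might be exercised from a Django shell (for example `awx-manage shell`); the user and execution environment picked here are hypothetical placeholders:

```python
from awx.main.access import ExecutionEnvironmentAccess
from awx.main.models import ExecutionEnvironment, User

user = User.objects.first()                 # hypothetical: any existing user
access = ExecutionEnvironmentAccess(user)

# Org-scoped EEs the user can read, plus global (organization=None) EEs.
visible = access.filtered_queryset()

# Change requires execution_environment_admin_role on the EE's organization;
# managed_by_tower EEs always raise PermissionDenied, and global EEs do so
# for non-superusers.
ee = ExecutionEnvironment.objects.first()   # hypothetical: any existing EE
if ee is not None:
    print(access.can_change(ee, {"image": "quay.io/example/ee:latest"}))
```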

View File

@@ -311,7 +311,7 @@ def events_table(since, full_path, until, **kwargs):
return _copy_table(table='events', query=events_query, path=full_path)
@register('unified_jobs_table', '1.1', format='csv', description=_('Data on jobs run'), expensive=True)
@register('unified_jobs_table', '1.2', format='csv', description=_('Data on jobs run'), expensive=True)
def unified_jobs_table(since, full_path, until, **kwargs):
unified_job_query = '''COPY (SELECT main_unifiedjob.id,
main_unifiedjob.polymorphic_ctype_id,
@@ -334,7 +334,9 @@ def unified_jobs_table(since, full_path, until, **kwargs):
main_unifiedjob.finished,
main_unifiedjob.elapsed,
main_unifiedjob.job_explanation,
main_unifiedjob.instance_group_id
main_unifiedjob.instance_group_id,
main_unifiedjob.installed_collections,
main_unifiedjob.ansible_version
FROM main_unifiedjob
JOIN django_content_type ON main_unifiedjob.polymorphic_ctype_id = django_content_type.id
LEFT JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id

View File

@@ -10,6 +10,7 @@ from rest_framework.fields import FloatField
# Tower
from awx.conf import fields, register, register_validate
from awx.main.models import ExecutionEnvironment
logger = logging.getLogger('awx.main.conf')
@@ -176,6 +177,18 @@ register(
read_only=True,
)
register(
'DEFAULT_EXECUTION_ENVIRONMENT',
field_class=fields.PrimaryKeyRelatedField,
allow_null=True,
default=None,
queryset=ExecutionEnvironment.objects.all(),
label=_('Global default execution environment'),
help_text=_('.'),
category=_('System'),
category_slug='system',
)
register(
'CUSTOM_VENV_PATHS',
field_class=fields.StringListPathField,

View File

@@ -0,0 +1,142 @@
from .plugin import CredentialPlugin, raise_for_status
from django.utils.translation import ugettext_lazy as _
from urllib.parse import urljoin
import requests
pas_inputs = {
'fields': [{
'id': 'url',
'label': _('Centrify Tenant URL'),
'type': 'string',
'help_text': _('Centrify Tenant URL'),
'format': 'url',
}, {
'id':'client_id',
'label':_('Centrify API User'),
'type':'string',
'help_text': _('Centrify API User, having necessary permissions as mentioned in support doc'),
}, {
'id':'client_password',
'label':_('Centrify API Password'),
'type':'string',
'help_text': _('Password of Centrify API User with necessary permissions'),
'secret':True,
},{
'id':'oauth_application_id',
'label':_('OAuth2 Application ID'),
'type':'string',
'help_text': _('Application ID of the configured OAuth2 Client (defaults to \'awx\')'),
'default': 'awx',
},{
'id':'oauth_scope',
'label':_('OAuth2 Scope'),
'type':'string',
'help_text': _('Scope of the configured OAuth2 Client (defaults to \'awx\')'),
'default': 'awx',
}],
'metadata': [{
'id': 'account-name',
'label': _('Account Name'),
'type': 'string',
'help_text': _('Local system account or Domain account name enrolled in Centrify Vault. eg. (root or DOMAIN/Administrator)'),
},{
'id': 'system-name',
'label': _('System Name'),
'type': 'string',
'help_text': _('Machine Name enrolled with in Centrify Portal'),
}],
'required': ['url', 'account-name', 'system-name','client_id','client_password'],
}
# generate bearer token to authenticate with PAS portal, Input : Client ID, Client Secret
def handle_auth(**kwargs):
post_data = {
"grant_type": "client_credentials",
"scope": kwargs['oauth_scope']
}
response = requests.post(
kwargs['endpoint'],
data = post_data,
auth = (kwargs['client_id'],kwargs['client_password']),
verify = True,
timeout = (5, 30)
)
raise_for_status(response)
try:
return response.json()['access_token']
except KeyError:
raise RuntimeError('OAuth request to tenant was unsuccessful')
# fetch the ID of system with RedRock query, Input : System Name, Account Name
def get_ID(**kwargs):
endpoint = urljoin(kwargs['url'],'/Redrock/query')
name=" Name='{0}' and User='{1}'".format(kwargs['system_name'],kwargs['acc_name'])
query = 'Select ID from VaultAccount where {0}'.format(name)
post_headers = {
"Authorization": "Bearer " + kwargs['access_token'],
"X-CENTRIFY-NATIVE-CLIENT":"true"
}
response = requests.post(
endpoint,
json = {'Script': query},
headers = post_headers,
verify = True,
timeout = (5, 30)
)
raise_for_status(response)
try:
result_str = response.json()["Result"]["Results"]
return result_str[0]["Row"]["ID"]
except (IndexError, KeyError):
raise RuntimeError("Error Detected!! Check the Inputs")
# CheckOut Password from Centrify Vault, Input : ID
def get_passwd(**kwargs):
endpoint = urljoin(kwargs['url'],'/ServerManage/CheckoutPassword')
post_headers = {
"Authorization": "Bearer " + kwargs['access_token'],
"X-CENTRIFY-NATIVE-CLIENT":"true"
}
response = requests.post(
endpoint,
json = {'ID': kwargs['acc_id']},
headers = post_headers,
verify = True,
timeout = (5, 30)
)
raise_for_status(response)
try:
return response.json()["Result"]["Password"]
except KeyError:
raise RuntimeError("Password Not Found")
def centrify_backend(**kwargs):
url = kwargs.get('url')
acc_name = kwargs.get('account-name')
system_name = kwargs.get('system-name')
client_id = kwargs.get('client_id')
client_password = kwargs.get('client_password')
app_id = kwargs.get('oauth_application_id', 'awx')
endpoint = urljoin(url, f'/oauth2/token/{app_id}')
endpoint = {
'endpoint': endpoint,
'client_id': client_id,
'client_password': client_password,
'oauth_scope': kwargs.get('oauth_scope', 'awx')
}
token = handle_auth(**endpoint)
get_id_args = {'system_name':system_name,'acc_name':acc_name,'url':url,'access_token':token}
acc_id = get_ID(**get_id_args)
get_pwd_args = {'url':url,'acc_id':acc_id,'access_token':token}
return get_passwd(**get_pwd_args)
centrify_plugin = CredentialPlugin(
'Centrify Vault Credential Provider Lookup',
inputs=pas_inputs,
backend=centrify_backend
)
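As a quick illustration of how the backend above fits together, a hedged sketch of calling it directly (assuming it runs in the same module, so `centrify_backend` is in scope); every value is a hypothetical placeholder, and the dashed keys mirror the metadata ids in `pas_inputs`:

```python
# All values are hypothetical; the dict form is needed because two of the
# plugin's input ids ('account-name', 'system-name') are not valid identifiers.
lookup_kwargs = {
    "url": "https://tenant.my.centrify.net",   # hypothetical tenant URL
    "client_id": "awx-api-user@tenant",
    "client_password": "s3cret",
    "oauth_application_id": "awx",             # same values as the documented defaults
    "oauth_scope": "awx",
    "account-name": "root",
    "system-name": "web01",
}

password = centrify_backend(**lookup_kwargs)
print("checked-out password length:", len(password))
```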

View File

@@ -40,6 +40,12 @@ base_inputs = {
'multiline': False,
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication')
}, {
'id': 'namespace',
'label': _('Namespace name (Vault Enterprise only)'),
'type': 'string',
'multiline': False,
'help_text': _('Name of the namespace to use when authenticate and retrieve secrets')
}, {
'id': 'default_auth_path',
'label': _('Path to Approle Auth'),
@@ -137,6 +143,9 @@ def approle_auth(**kwargs):
# AppRole Login
request_kwargs['json'] = {'role_id': role_id, 'secret_id': secret_id}
sess = requests.Session()
# Namespace support
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
@@ -164,6 +173,8 @@ def kv_backend(**kwargs):
sess.headers['Authorization'] = 'Bearer {}'.format(token)
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
if api_version == 'v2':
if kwargs.get('secret_version'):
@@ -222,6 +233,8 @@ def ssh_backend(**kwargs):
sess = requests.Session()
sess.headers['Authorization'] = 'Bearer {}'.format(token)
if kwargs.get('namespace'):
sess.headers['X-Vault-Namespace'] = kwargs['namespace']
# Compatibility header for older installs of Hashicorp Vault
sess.headers['X-Vault-Token'] = token
# https://www.vaultproject.io/api/secret/ssh/index.html#sign-ssh-key
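In all three code paths the namespace support reduces to one extra header sent alongside the token; a small sketch of the resulting session, with placeholder token, namespace, and URL:

```python
import requests

# Sketch of the header layout the plugin ends up with when a namespace is configured.
sess = requests.Session()
sess.headers["Authorization"] = "Bearer {}".format("s.xxxxxxxx")   # placeholder token
sess.headers["X-Vault-Token"] = "s.xxxxxxxx"                       # compatibility header for older Vault
sess.headers["X-Vault-Namespace"] = "team-a/prod"                  # only set when the input is provided

resp = sess.get("https://vault.example.com/v1/secret/data/app")    # hypothetical KV v2 path
```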

View File

@@ -6,6 +6,7 @@ from multiprocessing import Process
from django.conf import settings
from django.db import connections
from schedule import Scheduler
from django_guid.middleware import GuidMiddleware
from awx.main.dispatch.worker import TaskWorker
@@ -35,6 +36,7 @@ class Scheduler(Scheduler):
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
GuidMiddleware.set_guid(GuidMiddleware._generate_guid())
self.run_pending()
except Exception:
logger.exception(

View File

@@ -16,6 +16,7 @@ from queue import Full as QueueFull, Empty as QueueEmpty
from django.conf import settings
from django.db import connection as django_connection, connections
from django.core.cache import cache as django_cache
from django_guid.middleware import GuidMiddleware
from jinja2 import Template
import psutil
@@ -445,6 +446,8 @@ class AutoscalePool(WorkerPool):
return super(AutoscalePool, self).up()
def write(self, preferred_queue, body):
if 'guid' in body:
GuidMiddleware.set_guid(body['guid'])
try:
# when the cluster heartbeat occurs, clean up internally
if isinstance(body, dict) and 'cluster_node_heartbeat' in body['task']:

View File

@@ -5,6 +5,7 @@ import json
from uuid import uuid4
from django.conf import settings
from django_guid.middleware import GuidMiddleware
from . import pg_bus_conn
@@ -83,6 +84,9 @@ class task:
'kwargs': kwargs,
'task': cls.name
}
guid = GuidMiddleware.get_guid()
if guid:
obj['guid'] = guid
obj.update(**kw)
if callable(queue):
queue = queue()
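Taken together, these dispatcher changes thread a request GUID through the message body so log correlation survives the process boundary; a compact sketch of the round trip, using only the `GuidMiddleware.get_guid`/`set_guid` calls seen above (the task name and uuid are placeholders):

```python
from django_guid.middleware import GuidMiddleware

# Publisher side (publishers.py above): attach the current GUID, if any.
body = {"uuid": "example-uuid", "args": [], "kwargs": {}, "task": "awx.main.tasks.example_task"}
guid = GuidMiddleware.get_guid()
if guid:
    body["guid"] = guid

# Worker side (pool.py / task.py / callback.py above): restore it before running
# the callable so log records carry the same correlation id.
if "guid" in body:
    GuidMiddleware.set_guid(body["guid"])
```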

View File

@@ -9,6 +9,7 @@ from django.conf import settings
from django.utils.timezone import now as tz_now
from django.db import DatabaseError, OperationalError, connection as django_connection
from django.db.utils import InterfaceError, InternalError
from django_guid.middleware import GuidMiddleware
import psutil
@@ -152,6 +153,8 @@ class CallbackBrokerWorker(BaseWorker):
if body.get('event') == 'EOF':
try:
if 'guid' in body:
GuidMiddleware.set_guid(body['guid'])
final_counter = body.get('final_counter', 0)
logger.info('Event processing is finished for Job {}, sending notifications'.format(job_identifier))
# EOF events are sent when stdout for the running task is
@@ -176,6 +179,8 @@ class CallbackBrokerWorker(BaseWorker):
handle_success_and_failure_notifications.apply_async([uj.id])
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
finally:
GuidMiddleware.set_guid('')
return
event = cls.create_from_data(**body)

View File

@@ -6,6 +6,9 @@ import traceback
from kubernetes.config import kube_config
from django.conf import settings
from django_guid.middleware import GuidMiddleware
from awx.main.tasks import dispatch_startup, inform_cluster_of_shutdown
from .base import BaseWorker
@@ -52,6 +55,8 @@ class TaskWorker(BaseWorker):
uuid = body.get('uuid', '<unknown>')
args = body.get('args', [])
kwargs = body.get('kwargs', {})
if 'guid' in body:
GuidMiddleware.set_guid(body.pop('guid'))
_call = TaskWorker.resolve_callable(task)
if inspect.isclass(_call):
# the callable is a class, e.g., RunJob; instantiate and
@@ -81,6 +86,7 @@ class TaskWorker(BaseWorker):
'task': u'awx.main.tasks.RunProjectUpdate'
}
'''
settings.__clean_on_fork__()
result = None
try:
result = self.run_callable(body)

View File

@@ -6,7 +6,6 @@ import stat
import tempfile
import time
import logging
import yaml
import datetime
from django.conf import settings
@@ -32,7 +31,7 @@ def set_pythonpath(venv_libdir, env):
class IsolatedManager(object):
def __init__(self, event_handler, canceled_callback=None, check_callback=None, pod_manager=None):
def __init__(self, event_handler, canceled_callback=None, check_callback=None):
"""
:param event_handler: a callable used to persist event data from isolated nodes
:param canceled_callback: a callable - which returns `True` or `False`
@@ -45,28 +44,12 @@ class IsolatedManager(object):
self.started_at = None
self.captured_command_artifact = False
self.instance = None
self.pod_manager = pod_manager
def build_inventory(self, hosts):
if self.instance and self.instance.is_containerized:
inventory = {'all': {'hosts': {}}}
fd, path = tempfile.mkstemp(
prefix='.kubeconfig', dir=self.private_data_dir
)
with open(path, 'wb') as temp:
temp.write(yaml.dump(self.pod_manager.kube_config).encode())
temp.flush()
os.chmod(temp.name, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
for host in hosts:
inventory['all']['hosts'][host] = {
"ansible_connection": "kubectl",
"ansible_kubectl_config": path,
}
else:
inventory = '\n'.join([
'{} ansible_ssh_user={}'.format(host, settings.AWX_ISOLATED_USERNAME)
for host in hosts
])
inventory = '\n'.join([
'{} ansible_ssh_user={}'.format(host, settings.AWX_ISOLATED_USERNAME)
for host in hosts
])
return inventory

View File

@@ -2,22 +2,22 @@
# All Rights Reserved
from django.core.management.base import BaseCommand
from django.conf import settings
from crum import impersonate
from awx.main.models import User, Organization, Project, Inventory, CredentialType, Credential, Host, JobTemplate
from awx.main.models import (
User, Organization, Project, Inventory, CredentialType,
Credential, Host, JobTemplate, ExecutionEnvironment
)
from awx.main.signals import disable_computed_fields
class Command(BaseCommand):
"""Create preloaded data, intended for new installs
"""
help = 'Creates a preload tower data iff there is none.'
help = 'Creates a preload tower data if there is none.'
def handle(self, *args, **kwargs):
# Sanity check: Is there already an organization in the system?
if Organization.objects.count():
print('An organization is already in the system, exiting.')
print('(changed: False)')
return
changed = False
# Create a default organization as the first superuser found.
try:
@@ -26,44 +26,62 @@ class Command(BaseCommand):
superuser = None
with impersonate(superuser):
with disable_computed_fields():
o = Organization.objects.create(name='Default')
p = Project(name='Demo Project',
scm_type='git',
scm_url='https://github.com/ansible/ansible-tower-samples',
scm_update_on_launch=True,
scm_update_cache_timeout=0,
organization=o)
p.save(skip_update=True)
ssh_type = CredentialType.objects.filter(namespace='ssh').first()
c = Credential.objects.create(credential_type=ssh_type,
name='Demo Credential',
inputs={
'username': superuser.username
},
created_by=superuser)
c.admin_role.members.add(superuser)
public_galaxy_credential = Credential(
name='Ansible Galaxy',
managed_by_tower=True,
credential_type=CredentialType.objects.get(kind='galaxy'),
inputs = {
'url': 'https://galaxy.ansible.com/'
}
)
public_galaxy_credential.save()
o.galaxy_credentials.add(public_galaxy_credential)
i = Inventory.objects.create(name='Demo Inventory',
organization=o,
created_by=superuser)
Host.objects.create(name='localhost',
inventory=i,
variables="ansible_connection: local\nansible_python_interpreter: '{{ ansible_playbook_python }}'",
created_by=superuser)
jt = JobTemplate.objects.create(name='Demo Job Template',
playbook='hello_world.yml',
project=p,
inventory=i)
jt.credentials.add(c)
print('Default organization added.')
print('Demo Credential, Inventory, and Job Template added.')
print('(changed: True)')
if not Organization.objects.exists():
o = Organization.objects.create(name='Default')
p = Project(name='Demo Project',
scm_type='git',
scm_url='https://github.com/ansible/ansible-tower-samples',
scm_update_on_launch=True,
scm_update_cache_timeout=0,
organization=o)
p.save(skip_update=True)
ssh_type = CredentialType.objects.filter(namespace='ssh').first()
c = Credential.objects.create(credential_type=ssh_type,
name='Demo Credential',
inputs={
'username': superuser.username
},
created_by=superuser)
c.admin_role.members.add(superuser)
public_galaxy_credential = Credential(name='Ansible Galaxy',
managed_by_tower=True,
credential_type=CredentialType.objects.get(kind='galaxy'),
inputs={'url': 'https://galaxy.ansible.com/'})
public_galaxy_credential.save()
o.galaxy_credentials.add(public_galaxy_credential)
i = Inventory.objects.create(name='Demo Inventory',
organization=o,
created_by=superuser)
Host.objects.create(name='localhost',
inventory=i,
variables="ansible_connection: local\nansible_python_interpreter: '{{ ansible_playbook_python }}'",
created_by=superuser)
jt = JobTemplate.objects.create(name='Demo Job Template',
playbook='hello_world.yml',
project=p,
inventory=i)
jt.credentials.add(c)
print('Default organization added.')
print('Demo Credential, Inventory, and Job Template added.')
changed = True
default_ee = settings.AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE
ee, created = ExecutionEnvironment.objects.get_or_create(name='Default EE', defaults={'image': default_ee,
'managed_by_tower': True})
if created:
changed = True
print('Default Execution Environment registered.')
if changed:
print('(changed: True)')
else:
print('(changed: False)')
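Since the rewritten command above only reports `(changed: True)` when something was actually created, it can be re-run safely; a sketch of invoking it from Django, assuming the management command file is named `create_preload_data` (the filename is not shown in this hunk):

```python
from django.core.management import call_command

# Idempotent: creates the Default organization/demo objects and registers the
# default execution environment only when they are missing.
call_command("create_preload_data")
```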

View File

@@ -4,7 +4,6 @@
# Python
import json
import logging
import fnmatch
import os
import re
import subprocess
@@ -34,11 +33,7 @@ from awx.main.utils.safe_yaml import sanitize_jinja
# other AWX imports
from awx.main.models.rbac import batch_role_ancestor_rebuilding
# TODO: remove proot utils once we move to running inv. updates in containers
from awx.main.utils import (
check_proot_installed,
wrap_args_with_proot,
build_proot_temp_dir,
ignore_inventory_computed_fields,
get_licenser
)
@@ -89,27 +84,6 @@ class AnsibleInventoryLoader(object):
else:
self.venv_path = settings.ANSIBLE_VENV_PATH
def build_env(self):
env = dict(os.environ.items())
env['VIRTUAL_ENV'] = self.venv_path
env['PATH'] = os.path.join(self.venv_path, "bin") + ":" + env['PATH']
# Set configuration items that should always be used for updates
for key, value in STANDARD_INVENTORY_UPDATE_ENV.items():
if key not in env:
env[key] = value
venv_libdir = os.path.join(self.venv_path, "lib")
env.pop('PYTHONPATH', None) # default to none if no python_ver matches
for version in os.listdir(venv_libdir):
if fnmatch.fnmatch(version, 'python[23].*'):
if os.path.isdir(os.path.join(venv_libdir, version)):
env['PYTHONPATH'] = os.path.join(venv_libdir, version, "site-packages") + ":"
break
# For internal inventory updates, these are not reported in the job_env API
logger.info('Using VIRTUAL_ENV: {}'.format(env['VIRTUAL_ENV']))
logger.info('Using PATH: {}'.format(env['PATH']))
logger.info('Using PYTHONPATH: {}'.format(env.get('PYTHONPATH', None)))
return env
def get_path_to_ansible_inventory(self):
venv_exe = os.path.join(self.venv_path, 'bin', 'ansible-inventory')
if os.path.exists(venv_exe):
@@ -128,66 +102,29 @@ class AnsibleInventoryLoader(object):
return shutil.which('ansible-inventory')
def get_base_args(self):
# get ansible-inventory absolute path for running in bubblewrap/proot, in Popen
ansible_inventory_path = self.get_path_to_ansible_inventory()
# NOTE: why do we add "python" to the start of these args?
# the script that runs ansible-inventory specifies a python interpreter
# that makes no sense in light of the fact that we put all the dependencies
# inside of /var/lib/awx/venv/ansible, so we override the specified interpreter
# https://github.com/ansible/ansible/issues/50714
bargs = ['python', ansible_inventory_path, '-i', self.source]
bargs = ['podman', 'run', '--user=root', '--quiet']
bargs.extend(['-v', '{0}:{0}:Z'.format(self.source)])
for key, value in STANDARD_INVENTORY_UPDATE_ENV.items():
bargs.extend(['-e', '{0}={1}'.format(key, value)])
bargs.extend([settings.AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE])
bargs.extend(['ansible-inventory', '-i', self.source])
bargs.extend(['--playbook-dir', functioning_dir(self.source)])
if self.verbosity:
# INFO: -vvv, DEBUG: -vvvvv, for inventory, any more than 3 makes little difference
bargs.append('-{}'.format('v' * min(5, self.verbosity * 2 + 1)))
bargs.append('--list')
logger.debug('Using base command: {}'.format(' '.join(bargs)))
return bargs
# TODO: Remove this once we move to running ansible-inventory in containers
# and don't need proot for process isolation anymore
def get_proot_args(self, cmd, env):
cwd = os.getcwd()
if not check_proot_installed():
raise RuntimeError("proot is not installed but is configured for use")
kwargs = {}
# we cannot safely store tmp data in source dir or trust script contents
if env['AWX_PRIVATE_DATA_DIR']:
# If this is non-blank, file credentials are being used and we need access
private_data_dir = functioning_dir(env['AWX_PRIVATE_DATA_DIR'])
logger.debug("Using private credential data in '{}'.".format(private_data_dir))
kwargs['private_data_dir'] = private_data_dir
self.tmp_private_dir = build_proot_temp_dir()
logger.debug("Using fresh temporary directory '{}' for isolation.".format(self.tmp_private_dir))
kwargs['proot_temp_dir'] = self.tmp_private_dir
kwargs['proot_show_paths'] = [functioning_dir(self.source), settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
logger.debug("Running from `{}` working directory.".format(cwd))
if self.venv_path != settings.ANSIBLE_VENV_PATH:
kwargs['proot_custom_virtualenv'] = self.venv_path
return wrap_args_with_proot(cmd, cwd, **kwargs)
def command_to_json(self, cmd):
data = {}
stdout, stderr = '', ''
env = self.build_env()
# TODO: remove proot args once inv. updates run in containers
if (('AWX_PRIVATE_DATA_DIR' in env) and
getattr(settings, 'AWX_PROOT_ENABLED', False)):
cmd = self.get_proot_args(cmd, env)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
stdout = smart_text(stdout)
stderr = smart_text(stderr)
# TODO: can be removed when proot is removed
if self.tmp_private_dir:
shutil.rmtree(self.tmp_private_dir, True)
if proc.returncode != 0:
raise RuntimeError('%s failed (rc=%d) with stdout:\n%s\nstderr:\n%s' % (
'ansible-inventory', proc.returncode, stdout, stderr))
@@ -205,9 +142,10 @@ class AnsibleInventoryLoader(object):
def load(self):
base_args = self.get_base_args()
logger.info('Reading Ansible inventory source: %s', self.source)
return self.command_to_json(base_args + ['--list'])
return self.command_to_json(base_args)
class Command(BaseCommand):
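With the move away from proot/bubblewrap, get_base_args above now builds a podman invocation instead of calling ansible-inventory directly, and load() no longer appends --list because the base args already include it. Roughly, for a source file at /tmp/inventories/inv.yml, the assembled argument list looks like this (a sketch; the actual environment pairs and image come from STANDARD_INVENTORY_UPDATE_ENV and settings.AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE):

bargs = [
    'podman', 'run', '--user=root', '--quiet',
    '-v', '/tmp/inventories/inv.yml:/tmp/inventories/inv.yml:Z',
    # one '-e KEY=VALUE' pair per STANDARD_INVENTORY_UPDATE_ENV entry (elided here)
    'quay.io/ansible/awx-ee:latest',   # illustrative default image value
    'ansible-inventory', '-i', '/tmp/inventories/inv.yml',
    '--playbook-dir', '/tmp/inventories',
    '--list',
]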

View File

@@ -17,13 +17,14 @@ class InstanceNotFound(Exception):
class RegisterQueue:
def __init__(self, queuename, controller, instance_percent, inst_min, hostname_list):
def __init__(self, queuename, controller, instance_percent, inst_min, hostname_list, is_container_group=None):
self.instance_not_found_err = None
self.queuename = queuename
self.controller = controller
self.instance_percent = instance_percent
self.instance_min = inst_min
self.hostname_list = hostname_list
self.is_container_group = is_container_group
def get_create_update_instance_group(self):
created = False
@@ -36,6 +37,10 @@ class RegisterQueue:
ig.policy_instance_minimum = self.instance_min
changed = True
if self.is_container_group:
ig.is_container_group = self.is_container_group
changed = True
if changed:
ig.save()

View File

@@ -144,7 +144,8 @@ class InstanceManager(models.Manager):
from awx.main.management.commands.register_queue import RegisterQueue
pod_ip = os.environ.get('MY_POD_IP')
registered = self.register(ip_address=pod_ip)
RegisterQueue('tower', None, 100, 0, []).register()
is_container_group = settings.IS_K8S
RegisterQueue('tower', None, 100, 0, [], is_container_group).register()
return registered
else:
return (False, self.me())
@@ -237,7 +238,7 @@ class InstanceGroupManager(models.Manager):
elif t.status == 'running':
# Subtract capacity from all groups that contain the instance
if t.execution_node not in instance_ig_mapping:
if not t.is_containerized:
if not t.is_container_group_task:
logger.warning('Detected %s running inside lost instance, '
'may still be waiting for reaper.', t.log_format)
if t.instance_group:

View File

@@ -45,7 +45,10 @@ class TimingMiddleware(threading.local, MiddlewareMixin):
response['X-API-Total-Time'] = '%0.3fs' % total_time
if settings.AWX_REQUEST_PROFILE:
response['X-API-Profile-File'] = self.prof.stop()
perf_logger.info('api response times', extra=dict(python_objects=dict(request=request, response=response)))
perf_logger.info(
f'request: {request}, response_time: {response["X-API-Total-Time"]}',
extra=dict(python_objects=dict(request=request, response=response, X_API_TOTAL_TIME=response["X-API-Total-Time"]))
)
return response
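The middleware change above replaces the fixed 'api response times' message with one that embeds the request and the measured total time, and mirrors the value into the structured extra payload. An illustrative log line (values made up):

# perf_logger output (illustrative): request: <WSGIRequest: GET '/api/v2/jobs/'>, response_time: 0.123s
# the extra dict carries the same data: request, response, and X_API_TOTAL_TIME='0.123s'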

View File

@@ -0,0 +1,59 @@
# Generated by Django 2.2.11 on 2020-07-08 18:42
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.expressions
import taggit.managers
class Migration(migrations.Migration):
dependencies = [
('taggit', '0003_taggeditem_add_unique_index'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('main', '0123_drop_hg_support'),
]
operations = [
migrations.CreateModel(
name='ExecutionEnvironment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(blank=True, default='')),
('image', models.CharField(help_text='The registry location where the container is stored.', max_length=1024, verbose_name='image location')),
('managed_by_tower', models.BooleanField(default=False, editable=False)),
('created_by', models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{'class': 'executionenvironment', 'model_name': 'executionenvironment', 'app_label': 'main'}(class)s_created+", to=settings.AUTH_USER_MODEL)),
('credential', models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='executionenvironments', to='main.Credential')),
('modified_by', models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{'class': 'executionenvironment', 'model_name': 'executionenvironment', 'app_label': 'main'}(class)s_modified+", to=settings.AUTH_USER_MODEL)),
('organization', models.ForeignKey(blank=True, default=None, help_text='The organization used to determine access to this execution environment.', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='executionenvironments', to='main.Organization')),
('tags', taggit.managers.TaggableManager(blank=True, help_text='A comma-separated list of tags.', through='taggit.TaggedItem', to='taggit.Tag', verbose_name='Tags')),
],
options={
'ordering': (django.db.models.expressions.OrderBy(django.db.models.expressions.F('organization_id'), nulls_first=True), 'image'),
'unique_together': {('organization', 'image')},
},
),
migrations.AddField(
model_name='activitystream',
name='execution_environment',
field=models.ManyToManyField(blank=True, to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='organization',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run by this organization.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='unifiedjob',
name='execution_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The container image to be used for execution.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='unifiedjobs', to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='execution_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The container image to be used for execution.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='unifiedjobtemplates', to='main.ExecutionEnvironment'),
),
]

View File

@@ -0,0 +1,46 @@
# Generated by Django 2.2.16 on 2020-11-19 16:20
import uuid
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0124_execution_environments'),
]
operations = [
migrations.AlterModelOptions(
name='executionenvironment',
options={'ordering': ('-created',)},
),
migrations.AddField(
model_name='executionenvironment',
name='name',
field=models.CharField(default=uuid.uuid4, max_length=512, unique=True),
preserve_default=False,
),
migrations.AddField(
model_name='organization',
name='execution_environment_admin_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
preserve_default='True',
),
migrations.AddField(
model_name='project',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run using this project.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='credentialtype',
name='kind',
field=models.CharField(choices=[('ssh', 'Machine'), ('vault', 'Vault'), ('net', 'Network'), ('scm', 'Source Control'), ('cloud', 'Cloud'), ('registry', 'Container Registry'), ('token', 'Personal Access Token'), ('insights', 'Insights'), ('external', 'External'), ('kubernetes', 'Kubernetes'), ('galaxy', 'Galaxy/Automation Hub')], max_length=32),
),
migrations.AlterUniqueTogether(
name='executionenvironment',
unique_together=set(),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.16 on 2021-01-27 22:31
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0125_more_ee_modeling_changes'),
]
operations = [
migrations.AddField(
model_name='executionenvironment',
name='pull',
field=models.CharField(choices=[('always', 'Always pull container before running.'), ('missing', 'No pull option has been selected.'), ('never', 'Never pull container before running.')], blank=True, default='', help_text='Pull image before running?', max_length=16),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.16 on 2021-02-15 22:02
from django.db import migrations
def reset_pod_specs(apps, schema_editor):
InstanceGroup = apps.get_model('main', 'InstanceGroup')
InstanceGroup.objects.update(pod_spec_override="")
class Migration(migrations.Migration):
dependencies = [
('main', '0126_executionenvironment_container_options'),
]
operations = [
migrations.RunPython(reset_pod_specs)
]

View File

@@ -0,0 +1,20 @@
# Generated by Django 2.2.16 on 2021-02-18 22:57
import awx.main.fields
from django.db import migrations
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0127_reset_pod_spec_override'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='read_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['member_role', 'auditor_role', 'execute_role', 'project_admin_role', 'inventory_admin_role', 'workflow_admin_role', 'notification_admin_role', 'credential_admin_role', 'job_template_admin_role', 'approval_role', 'execution_environment_admin_role'], related_name='+', to='main.Role'),
),
]

View File

@@ -0,0 +1,19 @@
# Generated by Django 2.2.16 on 2021-02-16 20:27
import awx.main.fields
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0128_organiaztion_read_roles_ee_admin'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='installed_collections',
field=awx.main.fields.JSONBField(blank=True, default=dict, editable=False, help_text='The Collections names and versions installed in the execution environment.'),
),
]

View File

@@ -0,0 +1,34 @@
# Generated by Django 2.2.16 on 2021-03-11 16:25
import awx.main.utils.polymorphic
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0129_unifiedjob_installed_collections'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run by this organization.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='project',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run using this project.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='unifiedjob',
name='execution_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The container image to be used for execution.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjobs', to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='unifiedjobtemplate',
name='execution_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The container image to be used for execution.', null=True, on_delete=awx.main.utils.polymorphic.SET_NULL, related_name='unifiedjobtemplates', to='main.ExecutionEnvironment'),
),
]

View File

@@ -0,0 +1,19 @@
# Generated by Django 2.2.16 on 2021-03-11 20:50
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0130_ee_polymorphic_set_null'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run by this organization.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
]

View File

@@ -0,0 +1,27 @@
# Generated by Django 2.2.16 on 2021-03-13 14:53
from django.db import migrations, models
def migrate_existing_container_groups(apps, schema_editor):
InstanceGroup = apps.get_model('main', 'InstanceGroup')
for group in InstanceGroup.objects.filter(credential__isnull=False).iterator():
group.is_container_group = True
group.save(update_fields=['is_container_group'])
class Migration(migrations.Migration):
dependencies = [
('main', '0131_undo_org_polymorphic_ee'),
]
operations = [
migrations.AddField(
model_name='instancegroup',
name='is_container_group',
field=models.BooleanField(default=False),
),
migrations.RunPython(migrate_existing_container_groups, migrations.RunPython.noop),
]

View File

@@ -0,0 +1,20 @@
from django.db import migrations
from awx.main.models import CredentialType
from awx.main.utils.common import set_current_apps
def setup_tower_managed_defaults(apps, schema_editor):
set_current_apps(apps)
CredentialType.setup_tower_managed_defaults()
class Migration(migrations.Migration):
dependencies = [
('main', '0132_instancegroup_is_container_group'),
]
operations = [
migrations.RunPython(setup_tower_managed_defaults),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.16 on 2021-03-16 20:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0133_centrify_vault_credtype'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='ansible_version',
field=models.CharField(blank=True, default='', editable=False, help_text='The version of Ansible Core installed in the execution environment.', max_length=255),
),
]

View File

@@ -35,6 +35,7 @@ from awx.main.models.events import ( # noqa
)
from awx.main.models.ad_hoc_commands import AdHocCommand # noqa
from awx.main.models.schedules import Schedule # noqa
from awx.main.models.execution_environments import ExecutionEnvironment # noqa
from awx.main.models.activity_stream import ActivityStream # noqa
from awx.main.models.ha import ( # noqa
Instance, InstanceGroup, TowerScheduleState,
@@ -45,7 +46,7 @@ from awx.main.models.rbac import ( # noqa
ROLE_SINGLETON_SYSTEM_AUDITOR,
)
from awx.main.models.mixins import ( # noqa
CustomVirtualEnvMixin, ResourceMixin, SurveyJobMixin,
CustomVirtualEnvMixin, ExecutionEnvironmentMixin, ResourceMixin, SurveyJobMixin,
SurveyJobTemplateMixin, TaskManagerInventoryUpdateMixin,
TaskManagerJobMixin, TaskManagerProjectUpdateMixin,
TaskManagerUnifiedJobMixin,
@@ -221,6 +222,7 @@ activity_stream_registrar.connect(CredentialType)
activity_stream_registrar.connect(Team)
activity_stream_registrar.connect(Project)
#activity_stream_registrar.connect(ProjectUpdate)
activity_stream_registrar.connect(ExecutionEnvironment)
activity_stream_registrar.connect(JobTemplate)
activity_stream_registrar.connect(Job)
activity_stream_registrar.connect(AdHocCommand)

View File

@@ -61,6 +61,7 @@ class ActivityStream(models.Model):
team = models.ManyToManyField("Team", blank=True)
project = models.ManyToManyField("Project", blank=True)
project_update = models.ManyToManyField("ProjectUpdate", blank=True)
execution_environment = models.ManyToManyField("ExecutionEnvironment", blank=True)
job_template = models.ManyToManyField("JobTemplate", blank=True)
job = models.ManyToManyField("Job", blank=True)
workflow_job_template_node = models.ManyToManyField("WorkflowJobTemplateNode", blank=True)
@@ -74,6 +75,7 @@ class ActivityStream(models.Model):
ad_hoc_command = models.ManyToManyField("AdHocCommand", blank=True)
schedule = models.ManyToManyField("Schedule", blank=True)
custom_inventory_script = models.ManyToManyField("CustomInventoryScript", blank=True)
execution_environment = models.ManyToManyField("ExecutionEnvironment", blank=True)
notification_template = models.ManyToManyField("NotificationTemplate", blank=True)
notification = models.ManyToManyField("Notification", blank=True)
label = models.ManyToManyField("Label", blank=True)

View File

@@ -151,8 +151,8 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
return True
@property
def is_containerized(self):
return bool(self.instance_group and self.instance_group.is_containerized)
def is_container_group_task(self):
return bool(self.instance_group and self.instance_group.is_container_group)
@property
def can_run_containerized(self):
@@ -198,8 +198,8 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
def copy(self):
data = {}
for field in ('job_type', 'inventory_id', 'limit', 'credential_id',
'module_name', 'module_args', 'forks', 'verbosity',
'extra_vars', 'become_enabled', 'diff_mode'):
'execution_environment_id', 'module_name', 'module_args',
'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode'):
data[field] = getattr(self, field)
return AdHocCommand.objects.create(**data)
@@ -209,6 +209,9 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
self.name = Truncator(u': '.join(filter(None, (self.module_name, self.module_args)))).chars(512)
if 'name' not in update_fields:
update_fields.append('name')
if not self.execution_environment_id:
self.execution_environment = self.resolve_execution_environment()
update_fields.append('execution_environment')
super(AdHocCommand, self).save(*args, **kwargs)
@property

View File

@@ -331,6 +331,7 @@ class CredentialType(CommonModelNameNotUnique):
('net', _('Network')),
('scm', _('Source Control')),
('cloud', _('Cloud')),
('registry', _('Container Registry')),
('token', _('Personal Access Token')),
('insights', _('Insights')),
('external', _('External')),
@@ -528,15 +529,20 @@ class CredentialType(CommonModelNameNotUnique):
with open(path, 'w') as f:
f.write(data)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
# FIXME: develop some better means of referencing paths inside containers
container_path = os.path.join(
'/runner',
os.path.basename(path)
)
# determine if filename indicates single file or many
if file_label.find('.') == -1:
tower_namespace.filename = path
tower_namespace.filename = container_path
else:
if not hasattr(tower_namespace, 'filename'):
tower_namespace.filename = TowerNamespace()
file_label = file_label.split('.')[1]
setattr(tower_namespace.filename, file_label, path)
setattr(tower_namespace.filename, file_label, container_path)
injector_field = self._meta.get_field('injectors')
for env_var, tmpl in self.injectors.get('env', {}).items():
@@ -564,7 +570,12 @@ class CredentialType(CommonModelNameNotUnique):
if extra_vars:
path = build_extra_vars_file(extra_vars, private_data_dir)
args.extend(['-e', '@%s' % path])
# FIXME: develop some better means of referencing paths inside containers
container_path = os.path.join(
'/runner',
os.path.basename(path)
)
args.extend(['-e', '@%s' % container_path])
class ManagedCredentialType(SimpleNamespace):
@@ -1123,7 +1134,6 @@ ManagedCredentialType(
},
)
ManagedCredentialType(
namespace='kubernetes_bearer_token',
kind='kubernetes',
@@ -1155,6 +1165,37 @@ ManagedCredentialType(
}
)
ManagedCredentialType(
namespace='registry',
kind='registry',
name=ugettext_noop('Container Registry'),
inputs={
'fields': [{
'id': 'host',
'label': ugettext_noop('Authentication URL'),
'type': 'string',
'help_text': ugettext_noop('Authentication endpoint for the container registry.'),
}, {
'id': 'username',
'label': ugettext_noop('Username'),
'type': 'string',
}, {
'id': 'password',
'label': ugettext_noop('Password'),
'type': 'string',
'secret': True,
}, {
'id': 'token',
'label': ugettext_noop('Access Token'),
'type': 'string',
'secret': True,
'help_text': ugettext_noop('A token to use to authenticate with. '
'This should not be set if username/password are being used.'),
}],
'required': ['host'],
}
)
ManagedCredentialType(
namespace='galaxy_api_token',
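The container_path changes above all follow the same pattern: files are still written into the job's private data directory on the host, but anything templated into injector env vars or extra_vars must refer to the path the file will have once that directory is mounted into the execution environment at /runner. A minimal sketch of the translation, assuming the private data directory is bind-mounted at /runner (the FIXME notes that a sturdier mapping is still planned):

import os

def to_container_path(host_path):
    # e.g. /tmp/awx_123_abc/tmpXYZ -> /runner/tmpXYZ
    return os.path.join('/runner', os.path.basename(host_path))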

View File

@@ -35,8 +35,8 @@ def gce(cred, env, private_data_dir):
json.dump(json_cred, f, indent=2)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
env['GCE_CREDENTIALS_FILE_PATH'] = path
env['GCP_SERVICE_ACCOUNT_FILE'] = path
env['GCE_CREDENTIALS_FILE_PATH'] = os.path.join('/runner', os.path.basename(path))
env['GCP_SERVICE_ACCOUNT_FILE'] = os.path.join('/runner', os.path.basename(path))
# Handle env variables for new module types.
# This includes gcp_compute inventory plugin and
@@ -92,8 +92,8 @@ def _openstack_data(cred):
},
}
if cred.has_input('project_region_name'):
openstack_data['clouds']['devstack']['region_name'] = cred.get_input('project_region_name', default='')
if cred.has_input('region'):
openstack_data['clouds']['devstack']['region_name'] = cred.get_input('region', default='')
return openstack_data
@@ -105,7 +105,8 @@ def openstack(cred, env, private_data_dir):
yaml.safe_dump(openstack_data, f, default_flow_style=False, allow_unicode=True)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
env['OS_CLIENT_CONFIG_FILE'] = path
# TODO: constant for container base path
env['OS_CLIENT_CONFIG_FILE'] = os.path.join('/runner', os.path.basename(path))
def kubernetes_bearer_token(cred, env, private_data_dir):

View File

@@ -0,0 +1,53 @@
from django.db import models
from django.utils.translation import ugettext_lazy as _
from awx.api.versioning import reverse
from awx.main.models.base import CommonModel
__all__ = ['ExecutionEnvironment']
class ExecutionEnvironment(CommonModel):
class Meta:
ordering = ('-created',)
PULL_CHOICES = [
('always', _("Always pull container before running.")),
('missing', _("No pull option has been selected.")),
('never', _("Never pull container before running."))
]
organization = models.ForeignKey(
'Organization',
null=True,
default=None,
blank=True,
on_delete=models.CASCADE,
related_name='%(class)ss',
help_text=_('The organization used to determine access to this execution environment.'),
)
image = models.CharField(
max_length=1024,
verbose_name=_('image location'),
help_text=_("The registry location where the container is stored."),
)
managed_by_tower = models.BooleanField(default=False, editable=False)
credential = models.ForeignKey(
'Credential',
related_name='%(class)ss',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
)
pull = models.CharField(
max_length=16,
choices=PULL_CHOICES,
blank=True,
default='',
help_text=_('Pull image before running?'),
)
def get_absolute_url(self, request=None):
return reverse('api:execution_environment_detail', kwargs={'pk': self.pk}, request=request)
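ExecutionEnvironment is a plain CommonModel, so registering an extra image and making it an organization's default is an ordinary ORM operation. A minimal sketch, with a hypothetical image and organization name:

from awx.main.models import ExecutionEnvironment, Organization

org = Organization.objects.get(name='Default')
ee = ExecutionEnvironment.objects.create(
    name='Custom EE',                                        # hypothetical
    image='registry.example.com/ansible/custom-ee:latest',   # hypothetical
    organization=org,
    pull='always',                                           # one of PULL_CHOICES
)
org.default_environment = ee
org.save(update_fields=['default_environment'])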

View File

@@ -147,6 +147,13 @@ class Instance(HasPolicyEditsMixin, BaseModel):
return self.rampart_groups.filter(controller__isnull=False).exists()
def refresh_capacity(self):
if settings.IS_K8S:
self.capacity = self.cpu = self.memory = self.cpu_capacity = self.mem_capacity = 0 # noqa
self.version = awx_application_version
self.save(update_fields=['capacity', 'version', 'modified', 'cpu',
'memory', 'cpu_capacity', 'mem_capacity'])
return
cpu = get_cpu_capacity()
mem = get_mem_capacity()
if self.enabled:
@@ -192,6 +199,9 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
null=True,
on_delete=models.CASCADE
)
is_container_group = models.BooleanField(
default=False
)
credential = models.ForeignKey(
'Credential',
related_name='%(class)ss',
@@ -246,10 +256,6 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
def is_isolated(self):
return bool(self.controller)
@property
def is_containerized(self):
return bool(self.credential and self.credential.kubernetes)
'''
RelatedJobsMixin
'''
@@ -306,9 +312,9 @@ def schedule_policy_task():
@receiver(post_save, sender=InstanceGroup)
def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs):
if created or instance.has_policy_changes():
if not instance.is_containerized:
if not instance.is_container_group:
schedule_policy_task()
elif created or instance.is_containerized:
elif created or instance.is_container_group:
instance.set_default_policy_fields()
@@ -320,7 +326,7 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
@receiver(post_delete, sender=InstanceGroup)
def on_instance_group_deleted(sender, instance, using, **kwargs):
if not instance.is_containerized:
if not instance.is_container_group:
schedule_policy_task()

View File

@@ -1373,6 +1373,7 @@ class PluginFileInjector(object):
collection = None
collection_migration = '2.9' # Starting with this version, we use collections
# TODO: delete this method and update unit tests
@classmethod
def get_proper_name(cls):
if cls.plugin_name is None:
@@ -1397,13 +1398,12 @@ class PluginFileInjector(object):
def inventory_as_dict(self, inventory_update, private_data_dir):
source_vars = dict(inventory_update.source_vars_dict) # make a copy
proper_name = self.get_proper_name()
'''
None conveys that we should use the user-provided plugin.
Note that a plugin value of '' should still be overridden.
'''
if proper_name is not None:
source_vars['plugin'] = proper_name
if self.plugin_name is not None:
source_vars['plugin'] = self.plugin_name
return source_vars
def build_env(self, inventory_update, env, private_data_dir, private_data_files):
@@ -1441,7 +1441,6 @@ class PluginFileInjector(object):
def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)
env['ANSIBLE_COLLECTIONS_PATHS'] = settings.AWX_ANSIBLE_COLLECTIONS_PATHS
return env
def build_private_data(self, inventory_update, private_data_dir):
@@ -1544,7 +1543,7 @@ class openstack(PluginFileInjector):
env = super(openstack, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
credential = inventory_update.get_cloud_credential()
cred_data = private_data_files['credentials']
env['OS_CLIENT_CONFIG_FILE'] = cred_data[credential]
env['OS_CLIENT_CONFIG_FILE'] = os.path.join('/runner', os.path.basename(cred_data[credential]))
return env
@@ -1574,6 +1573,12 @@ class satellite6(PluginFileInjector):
ret['FOREMAN_PASSWORD'] = credential.get_input('password', default='')
return ret
def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(satellite6, self).inventory_as_dict(inventory_update, private_data_dir)
# this inventory plugin requires the fully qualified inventory plugin name
ret['plugin'] = f'{self.namespace}.{self.collection}.{self.plugin_name}'
return ret
class tower(PluginFileInjector):
plugin_name = 'tower'
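inventory_as_dict now keys the generated plugin setting off plugin_name directly, and satellite6 overrides it with the fully qualified collection name, since that plugin rejects the short form. Illustratively, if the injector class defines namespace='theforeman', collection='foreman', and plugin_name='foreman' (assumed values, not shown in this diff), the rendered configuration becomes:

ret['plugin'] = f'{self.namespace}.{self.collection}.{self.plugin_name}'
# -> 'theforeman.foreman.foreman'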

View File

@@ -284,7 +284,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
def _get_unified_job_field_names(cls):
return set(f.name for f in JobOptions._meta.fields) | set(
['name', 'description', 'organization', 'survey_passwords', 'labels', 'credentials',
'job_slice_number', 'job_slice_count']
'job_slice_number', 'job_slice_count', 'execution_environment']
)
@property
@@ -768,11 +768,11 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
@property
def can_run_containerized(self):
return any([ig for ig in self.preferred_instance_groups if ig.is_containerized])
return any([ig for ig in self.preferred_instance_groups if ig.is_container_group])
@property
def is_containerized(self):
return bool(self.instance_group and self.instance_group.is_containerized)
def is_container_group_task(self):
return bool(self.instance_group and self.instance_group.is_container_group)
@property
def preferred_instance_groups(self):
@@ -828,6 +828,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
return self.inventory.hosts.only(*only)
def start_job_fact_cache(self, destination, modification_times, timeout=None):
self.log_lifecycle("start_job_fact_cache")
os.makedirs(destination, mode=0o700)
hosts = self._get_inventory_hosts()
if timeout is None:
@@ -852,6 +853,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
modification_times[filepath] = os.path.getmtime(filepath)
def finish_job_fact_cache(self, destination, modification_times):
self.log_lifecycle("finish_job_fact_cache")
for host in self._get_inventory_hosts():
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
@@ -1284,6 +1286,8 @@ class SystemJob(UnifiedJob, SystemJobOptions, JobNotificationMixin):
@property
def task_impact(self):
if settings.IS_K8S:
return 0
return 5
@property

View File

@@ -22,7 +22,7 @@ from awx.main.models.base import prevent_search
from awx.main.models.rbac import (
Role, RoleAncestorEntry, get_roles_on_resource
)
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser
from awx.main.utils import parse_yaml_or_json, get_custom_venv_choices, get_licenser, polymorphic
from awx.main.utils.encryption import decrypt_value, get_encryption_key, is_encrypted
from awx.main.utils.polymorphic import build_polymorphic_ctypes_map
from awx.main.fields import JSONField, AskForField
@@ -34,7 +34,7 @@ logger = logging.getLogger('awx.main.models.mixins')
__all__ = ['ResourceMixin', 'SurveyJobTemplateMixin', 'SurveyJobMixin',
'TaskManagerUnifiedJobMixin', 'TaskManagerJobMixin', 'TaskManagerProjectUpdateMixin',
'TaskManagerInventoryUpdateMixin', 'CustomVirtualEnvMixin']
'TaskManagerInventoryUpdateMixin', 'ExecutionEnvironmentMixin', 'CustomVirtualEnvMixin']
class ResourceMixin(models.Model):
@@ -441,6 +441,44 @@ class TaskManagerInventoryUpdateMixin(TaskManagerUpdateOnLaunchMixin):
abstract = True
class ExecutionEnvironmentMixin(models.Model):
class Meta:
abstract = True
execution_environment = models.ForeignKey(
'ExecutionEnvironment',
null=True,
blank=True,
default=None,
on_delete=polymorphic.SET_NULL,
related_name='%(class)ss',
help_text=_('The container image to be used for execution.'),
)
def get_execution_environment_default(self):
from awx.main.models.execution_environments import ExecutionEnvironment
if settings.DEFAULT_EXECUTION_ENVIRONMENT is not None:
return settings.DEFAULT_EXECUTION_ENVIRONMENT
return ExecutionEnvironment.objects.filter(organization=None, managed_by_tower=True).first()
def resolve_execution_environment(self):
"""
Return the execution environment that should be used when creating a new job.
"""
if self.execution_environment is not None:
return self.execution_environment
if getattr(self, 'project_id', None) and self.project.default_environment is not None:
return self.project.default_environment
if getattr(self, 'organization', None) and self.organization.default_environment is not None:
return self.organization.default_environment
if getattr(self, 'inventory', None) and self.inventory.organization is not None:
if self.inventory.organization.default_environment is not None:
return self.inventory.organization.default_environment
return self.get_execution_environment_default()
class CustomVirtualEnvMixin(models.Model):
class Meta:
abstract = True
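The resolve_execution_environment chain in ExecutionEnvironmentMixin above is the single place that decides which image a new job gets. The precedence, restated compactly (a sketch of the same logic, not a drop-in replacement):

def resolve_execution_environment(self):
    # 1. an EE set explicitly on the template / job options
    # 2. the project's default_environment
    # 3. the organization's default_environment (directly, or via the inventory's organization)
    # 4. settings.DEFAULT_EXECUTION_ENVIRONMENT, else the managed global default
    for candidate in (
        self.execution_environment,
        getattr(getattr(self, 'project', None), 'default_environment', None),
        getattr(getattr(self, 'organization', None), 'default_environment', None),
        getattr(getattr(getattr(self, 'inventory', None), 'organization', None), 'default_environment', None),
    ):
        if candidate is not None:
            return candidate
    return self.get_execution_environment_default()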

View File

@@ -280,6 +280,7 @@ class JobNotificationMixin(object):
{'unified_job_template': ['id', 'name', 'description', 'unified_job_type']},
{'instance_group': ['name', 'id']},
{'created_by': ['id', 'username', 'first_name', 'last_name']},
{'schedule': ['id', 'name', 'description', 'next_run']},
{'labels': ['count', 'results']}]}]
@classmethod
@@ -344,6 +345,10 @@ class JobNotificationMixin(object):
'name': 'Stub project',
'scm_type': 'git',
'status': 'successful'},
'schedule': {'description': 'Sample schedule',
'id': 42,
'name': 'Stub schedule',
'next_run': datetime.datetime(2038, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc)},
'unified_job_template': {'description': 'Sample unified job template description',
'id': 39,
'name': 'Stub Job Template',
@@ -383,12 +388,17 @@ class JobNotificationMixin(object):
and a url to the job run."""
job_context = {'host_status_counts': {}}
summary = None
if hasattr(self, 'job_host_summaries'):
summary = self.job_host_summaries.first()
if summary:
from awx.api.serializers import JobHostSummarySerializer
summary_data = JobHostSummarySerializer(summary).to_representation(summary)
job_context['host_status_counts'] = summary_data
try:
has_event_property = any([f for f in self.event_class._meta.fields if f.name == 'event'])
except NotImplementedError:
has_event_property = False
if has_event_property:
qs = self.get_event_queryset()
if qs:
event = qs.only('event_data').filter(event='playbook_on_stats').first()
if event:
summary = event.get_host_status_counts()
job_context['host_status_counts'] = summary
context = {
'job': job_context,
'job_friendly_name': self.get_notification_friendly_name(),

View File

@@ -61,6 +61,15 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
blank=True,
related_name='%(class)s_notification_templates_for_approvals'
)
default_environment = models.ForeignKey(
'ExecutionEnvironment',
null=True,
blank=True,
default=None,
on_delete=models.SET_NULL,
related_name='+',
help_text=_('The default execution environment for jobs run by this organization.'),
)
admin_role = ImplicitRoleField(
parent_role='singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
@@ -86,6 +95,9 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
job_template_admin_role = ImplicitRoleField(
parent_role='admin_role',
)
execution_environment_admin_role = ImplicitRoleField(
parent_role='admin_role',
)
auditor_role = ImplicitRoleField(
parent_role='singleton:' + ROLE_SINGLETON_SYSTEM_AUDITOR,
)
@@ -97,7 +109,8 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
'execute_role', 'project_admin_role',
'inventory_admin_role', 'workflow_admin_role',
'notification_admin_role', 'credential_admin_role',
'job_template_admin_role', 'approval_role',],
'job_template_admin_role', 'approval_role',
'execution_environment_admin_role',],
)
approval_role = ImplicitRoleField(
parent_role='admin_role',

View File

@@ -35,7 +35,7 @@ from awx.main.models.mixins import (
CustomVirtualEnvMixin,
RelatedJobsMixin
)
from awx.main.utils import update_scm_url
from awx.main.utils import update_scm_url, polymorphic
from awx.main.utils.ansible import skip_directory, could_be_inventory, could_be_playbook
from awx.main.fields import ImplicitRoleField
from awx.main.models.rbac import (
@@ -187,6 +187,14 @@ class ProjectOptions(models.Model):
pass
return cred
def resolve_execution_environment(self):
"""
Project updates, themselves, will use the default execution environment.
Jobs using the project can use the default_environment, but the project updates
are not flexible enough to allow customizing the image they use.
"""
return self.get_execution_environment_default()
def get_project_path(self, check_if_exists=True):
local_path = os.path.basename(self.local_path)
if local_path and not local_path.startswith('.'):
@@ -259,6 +267,15 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
app_label = 'main'
ordering = ('id',)
default_environment = models.ForeignKey(
'ExecutionEnvironment',
null=True,
blank=True,
default=None,
on_delete=polymorphic.SET_NULL,
related_name='+',
help_text=_('The default execution environment for jobs run using this project.'),
)
scm_update_on_launch = models.BooleanField(
default=False,
help_text=_('Update the project when a job is launched that uses the project.'),
@@ -554,6 +571,8 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
@property
def task_impact(self):
if settings.IS_K8S:
return 0
return 0 if self.job_type == 'run' else 1
@property

View File

@@ -40,6 +40,7 @@ role_names = {
'inventory_admin_role': _('Inventory Admin'),
'credential_admin_role': _('Credential Admin'),
'job_template_admin_role': _('Job Template Admin'),
'execution_environment_admin_role': _('Execution Environment Admin'),
'workflow_admin_role': _('Workflow Admin'),
'notification_admin_role': _('Notification Admin'),
'auditor_role': _('Auditor'),
@@ -60,6 +61,7 @@ role_descriptions = {
'inventory_admin_role': _('Can manage all inventories of the %s'),
'credential_admin_role': _('Can manage all credentials of the %s'),
'job_template_admin_role': _('Can manage all job templates of the %s'),
'execution_environment_admin_role': _('Can manage all execution environments of the %s'),
'workflow_admin_role': _('Can manage all workflows of the %s'),
'notification_admin_role': _('Can manage all notifications of the %s'),
'auditor_role': _('Can view all aspects of the %s'),

View File

@@ -39,7 +39,7 @@ from awx.main.models.base import (
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin
from awx.main.utils import (
camelcase_to_underscore, get_model_for_type,
encrypt_dict, decrypt_field, _inventory_updates,
@@ -50,16 +50,16 @@ from awx.main.utils import (
from awx.main.constants import ACTIVE_STATES, CAN_CANCEL
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.consumers import emit_channel_notification
from awx.main.fields import JSONField, AskForField, OrderedManyToManyField
from awx.main.fields import JSONField, JSONBField, AskForField, OrderedManyToManyField
__all__ = ['UnifiedJobTemplate', 'UnifiedJob', 'StdoutMaxBytesExceeded']
logger = logging.getLogger('awx.main.models.unified_jobs')
logger_job_lifecycle = logging.getLogger('awx.analytics.job_lifecycle')
# NOTE: ACTIVE_STATES moved to constants because it is used by parent modules
class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, NotificationFieldsModel):
class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEnvironmentMixin, NotificationFieldsModel):
'''
Concrete base class for unified job templates.
'''
@@ -376,6 +376,8 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
for fd, val in eager_fields.items():
setattr(unified_job, fd, val)
unified_job.execution_environment = self.resolve_execution_environment()
# NOTE: slice workflow jobs _get_parent_field_name method
# is not correct until this is set
if not parent_field_name:
@@ -420,7 +422,7 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
# have been associated to the UJ
if unified_job.__class__ in activity_stream_registrar.models:
activity_stream_create(None, unified_job, True)
unified_job.log_lifecycle("created")
return unified_job
@classmethod
@@ -527,7 +529,7 @@ class StdoutMaxBytesExceeded(Exception):
class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique,
UnifiedJobTypeStringMixin, TaskManagerUnifiedJobMixin):
UnifiedJobTypeStringMixin, TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin):
'''
Concrete base class for unified job run by the task engine.
'''
@@ -720,6 +722,19 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
'Credential',
related_name='%(class)ss',
)
installed_collections = JSONBField(
blank=True,
default=dict,
editable=False,
help_text=_("The Collections names and versions installed in the execution environment."),
)
ansible_version = models.CharField(
max_length=255,
blank=True,
default='',
editable=False,
help_text=_("The version of Ansible Core installed in the execution environment."),
)
def get_absolute_url(self, request=None):
RealClass = self.get_real_instance_class()
@@ -862,7 +877,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
self.unified_job_template = self._get_parent_instance()
if 'unified_job_template' not in update_fields:
update_fields.append('unified_job_template')
if self.cancel_flag and not self.canceled_on:
# Record the 'canceled' time.
self.canceled_on = now()
@@ -1010,6 +1025,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
event_qs = self.get_event_queryset()
except NotImplementedError:
return True # Model without events, such as WFJT
self.log_lifecycle("event_processing_finished")
return self.emitted_events == event_qs.count()
def result_stdout_raw_handle(self, enforce_max_bytes=True):
@@ -1318,6 +1334,10 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
if 'extra_vars' in kwargs:
self.handle_extra_data(kwargs['extra_vars'])
# remove any job_explanations that may have been set while job was in pending
if self.job_explanation != "":
self.job_explanation = ""
return (True, opts)
def signal_start(self, **kwargs):
@@ -1448,6 +1468,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
for name in ('awx', 'tower'):
r['{}_workflow_job_id'.format(name)] = wj.pk
r['{}_workflow_job_name'.format(name)] = wj.name
r['{}_workflow_job_launch_type'.format(name)] = wj.launch_type
if schedule:
r['{}_parent_job_schedule_id'.format(name)] = schedule.pk
r['{}_parent_job_schedule_name'.format(name)] = schedule.name
@@ -1482,5 +1503,19 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
return bool(self.controller_node)
@property
def is_containerized(self):
def is_container_group_task(self):
return False
def log_lifecycle(self, state, blocked_by=None):
extra={'type': self._meta.model_name,
'task_id': self.id,
'state': state}
if self.unified_job_template:
extra["template_name"] = self.unified_job_template.name
if state == "blocked" and blocked_by:
blocked_by_msg = f"{blocked_by._meta.model_name}-{blocked_by.id}"
msg = f"{self._meta.model_name}-{self.id} blocked by {blocked_by_msg}"
extra["blocked_by"] = blocked_by_msg
else:
msg = f"{self._meta.model_name}-{self.id} {state.replace('_', ' ')}"
logger_job_lifecycle.debug(msg, extra=extra)
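log_lifecycle feeds the awx.analytics.job_lifecycle logger; the task manager calls it at 'acknowledged', 'blocked', 'needs_capacity', and 'waiting', and the job model itself adds states such as 'created' and 'event_processing_finished'. Illustrative output for a job held back by an earlier run of the same template (values made up):

# msg                                extra
# job-42 blocked by job-41           {'type': 'job', 'task_id': 42, 'state': 'blocked',
#                                     'template_name': 'Demo Job Template', 'blocked_by': 'job-41'}
# job-42 waiting                     {'type': 'job', 'task_id': 42, 'state': 'waiting', ...}
# job-42 event processing finished   {'type': 'job', 'task_id': 42, 'state': 'event_processing_finished', ...}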

View File

@@ -1,5 +1,3 @@
from django.utils.timezone import now as tz_now
from awx.main.models import (
Job,
ProjectUpdate,
@@ -20,119 +18,110 @@ class DependencyGraph(object):
INVENTORY_SOURCE_UPDATES = 'inventory_source_updates'
WORKFLOW_JOB_TEMPLATES_JOBS = 'workflow_job_template_jobs'
LATEST_PROJECT_UPDATES = 'latest_project_updates'
LATEST_INVENTORY_UPDATES = 'latest_inventory_updates'
INVENTORY_SOURCES = 'inventory_source_ids'
def __init__(self, queue):
self.queue = queue
def __init__(self):
self.data = {}
# project_id -> True / False
self.data[self.PROJECT_UPDATES] = {}
# inventory_id -> True / False
# The reason for tracking both inventory and inventory sources:
# Consider InvA, which has two sources, InvSource1, InvSource2.
# JobB might depend on InvA, which launches two updates, one for each source.
# To determine if JobB can run, we can just check InvA, which is marked in
# INVENTORY_UPDATES, instead of having to check for both entries in
# INVENTORY_SOURCE_UPDATES.
self.data[self.INVENTORY_UPDATES] = {}
# job_template_id -> True / False
self.data[self.JOB_TEMPLATE_JOBS] = {}
'''
Track runnable job related project and inventory to ensure updates
don't run while a job needing those resources is running.
'''
# inventory_source_id -> True / False
self.data[self.INVENTORY_SOURCE_UPDATES] = {}
# True / False
self.data[self.SYSTEM_JOB] = True
# workflow_job_template_id -> True / False
self.data[self.JOB_TEMPLATE_JOBS] = {}
self.data[self.SYSTEM_JOB] = {}
self.data[self.WORKFLOW_JOB_TEMPLATES_JOBS] = {}
# project_id -> latest ProjectUpdateLatestDict'
self.data[self.LATEST_PROJECT_UPDATES] = {}
# inventory_source_id -> latest InventoryUpdateLatestDict
self.data[self.LATEST_INVENTORY_UPDATES] = {}
def mark_if_no_key(self, job_type, id, job):
# only mark first occurrence of a task. If 10 of JobA are launched
# (concurrent disabled), the dependency graph should return that jobs
# 2 through 10 are blocked by job1
if id not in self.data[job_type]:
self.data[job_type][id] = job
# inventory_id -> [inventory_source_ids]
self.data[self.INVENTORY_SOURCES] = {}
def get_item(self, job_type, id):
return self.data[job_type].get(id, None)
def add_latest_project_update(self, job):
self.data[self.LATEST_PROJECT_UPDATES][job.project_id] = job
def get_now(self):
return tz_now()
def mark_system_job(self):
self.data[self.SYSTEM_JOB] = False
def mark_system_job(self, job):
# Don't track different types of system jobs, so that only one can run
# at a time. Therefore id in this case is just 'system_job'.
self.mark_if_no_key(self.SYSTEM_JOB, 'system_job', job)
def mark_project_update(self, job):
self.data[self.PROJECT_UPDATES][job.project_id] = False
self.mark_if_no_key(self.PROJECT_UPDATES, job.project_id, job)
def mark_inventory_update(self, inventory_id):
self.data[self.INVENTORY_UPDATES][inventory_id] = False
def mark_inventory_update(self, job):
if type(job) is AdHocCommand:
self.mark_if_no_key(self.INVENTORY_UPDATES, job.inventory_id, job)
else:
self.mark_if_no_key(self.INVENTORY_UPDATES, job.inventory_source.inventory_id, job)
def mark_inventory_source_update(self, inventory_source_id):
self.data[self.INVENTORY_SOURCE_UPDATES][inventory_source_id] = False
def mark_inventory_source_update(self, job):
self.mark_if_no_key(self.INVENTORY_SOURCE_UPDATES, job.inventory_source_id, job)
def mark_job_template_job(self, job):
self.data[self.JOB_TEMPLATE_JOBS][job.job_template_id] = False
self.mark_if_no_key(self.JOB_TEMPLATE_JOBS, job.job_template_id, job)
def mark_workflow_job(self, job):
self.data[self.WORKFLOW_JOB_TEMPLATES_JOBS][job.workflow_job_template_id] = False
self.mark_if_no_key(self.WORKFLOW_JOB_TEMPLATES_JOBS, job.workflow_job_template_id, job)
def can_project_update_run(self, job):
return self.data[self.PROJECT_UPDATES].get(job.project_id, True)
def project_update_blocked_by(self, job):
return self.get_item(self.PROJECT_UPDATES, job.project_id)
def can_inventory_update_run(self, job):
return self.data[self.INVENTORY_SOURCE_UPDATES].get(job.inventory_source_id, True)
def inventory_update_blocked_by(self, job):
return self.get_item(self.INVENTORY_SOURCE_UPDATES, job.inventory_source_id)
def can_job_run(self, job):
if self.data[self.PROJECT_UPDATES].get(job.project_id, True) is True and \
self.data[self.INVENTORY_UPDATES].get(job.inventory_id, True) is True:
if job.allow_simultaneous is False:
return self.data[self.JOB_TEMPLATE_JOBS].get(job.job_template_id, True)
else:
return True
return False
def job_blocked_by(self, job):
project_block = self.get_item(self.PROJECT_UPDATES, job.project_id)
inventory_block = self.get_item(self.INVENTORY_UPDATES, job.inventory_id)
if job.allow_simultaneous is False:
job_block = self.get_item(self.JOB_TEMPLATE_JOBS, job.job_template_id)
else:
job_block = None
return project_block or inventory_block or job_block
def can_workflow_job_run(self, job):
if job.allow_simultaneous:
return True
return self.data[self.WORKFLOW_JOB_TEMPLATES_JOBS].get(job.workflow_job_template_id, True)
def workflow_job_blocked_by(self, job):
if job.allow_simultaneous is False:
return self.get_item(self.WORKFLOW_JOB_TEMPLATES_JOBS, job.workflow_job_template_id)
return None
def can_system_job_run(self):
return self.data[self.SYSTEM_JOB]
def system_job_blocked_by(self, job):
return self.get_item(self.SYSTEM_JOB, 'system_job')
def can_ad_hoc_command_run(self, job):
return self.data[self.INVENTORY_UPDATES].get(job.inventory_id, True)
def ad_hoc_command_blocked_by(self, job):
return self.get_item(self.INVENTORY_UPDATES, job.inventory_id)
def is_job_blocked(self, job):
def task_blocked_by(self, job):
if type(job) is ProjectUpdate:
return not self.can_project_update_run(job)
return self.project_update_blocked_by(job)
elif type(job) is InventoryUpdate:
return not self.can_inventory_update_run(job)
return self.inventory_update_blocked_by(job)
elif type(job) is Job:
return not self.can_job_run(job)
return self.job_blocked_by(job)
elif type(job) is SystemJob:
return not self.can_system_job_run()
return self.system_job_blocked_by(job)
elif type(job) is AdHocCommand:
return not self.can_ad_hoc_command_run(job)
return self.ad_hoc_command_blocked_by(job)
elif type(job) is WorkflowJob:
return not self.can_workflow_job_run(job)
return self.workflow_job_blocked_by(job)
def add_job(self, job):
if type(job) is ProjectUpdate:
self.mark_project_update(job)
elif type(job) is InventoryUpdate:
self.mark_inventory_update(job.inventory_source.inventory_id)
self.mark_inventory_source_update(job.inventory_source_id)
self.mark_inventory_update(job)
self.mark_inventory_source_update(job)
elif type(job) is Job:
self.mark_job_template_job(job)
elif type(job) is WorkflowJob:
self.mark_workflow_job(job)
elif type(job) is SystemJob:
self.mark_system_job()
self.mark_system_job(job)
elif type(job) is AdHocCommand:
self.mark_inventory_update(job.inventory_id)
self.mark_inventory_update(job)
def add_jobs(self, jobs):
for j in jobs:
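The graph now answers "which task blocks this one?" instead of a plain yes/no: mark_if_no_key records only the first task seen per resource, and the *_blocked_by helpers return that task (or None). A minimal usage sketch, assuming two pending jobs from the same template with allow_simultaneous disabled:

graph = DependencyGraph()
graph.add_job(job_a)                      # first job for the template is recorded
blocker = graph.task_blocked_by(job_b)    # -> job_a
if blocker is not None:
    print(f'waiting for {blocker._meta.model_name}-{blocker.id} to finish')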

View File

@@ -64,11 +64,13 @@ class TaskManager():
# will no longer be started and will be started on the next task manager cycle.
self.start_task_limit = settings.START_TASK_LIMIT
self.time_delta_job_explanation = timedelta(seconds=30)
def after_lock_init(self):
'''
Init AFTER we know this instance of the task manager will run because the lock is acquired.
'''
instances = Instance.objects.filter(~Q(hostname=None), capacity__gt=0, enabled=True)
instances = Instance.objects.filter(~Q(hostname=None), enabled=True)
self.real_instances = {i.hostname: i for i in instances}
instances_partial = [SimpleNamespace(obj=instance,
@@ -80,26 +82,29 @@ class TaskManager():
instances_by_hostname = {i.hostname: i for i in instances_partial}
for rampart_group in InstanceGroup.objects.prefetch_related('instances'):
self.graph[rampart_group.name] = dict(graph=DependencyGraph(rampart_group.name),
self.graph[rampart_group.name] = dict(graph=DependencyGraph(),
capacity_total=rampart_group.capacity,
consumed_capacity=0,
instances=[])
for instance in rampart_group.instances.filter(capacity__gt=0, enabled=True).order_by('hostname'):
for instance in rampart_group.instances.filter(enabled=True).order_by('hostname'):
if instance.hostname in instances_by_hostname:
self.graph[rampart_group.name]['instances'].append(instances_by_hostname[instance.hostname])
def is_job_blocked(self, task):
def job_blocked_by(self, task):
# TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
# in the old task manager this was handled as a method on each task object outside of the graph and
# probably has the side effect of cutting down *a lot* of the logic from this task manager class
for g in self.graph:
if self.graph[g]['graph'].is_job_blocked(task):
return True
blocked_by = self.graph[g]['graph'].task_blocked_by(task)
if blocked_by:
return blocked_by
if not task.dependent_jobs_finished():
return True
blocked_by = task.dependent_jobs.first()
if blocked_by:
return blocked_by
return False
return None
def get_tasks(self, status_list=('pending', 'waiting', 'running')):
jobs = [j for j in Job.objects.filter(status__in=status_list).prefetch_related('instance_group')]
@@ -278,12 +283,12 @@ class TaskManager():
task.controller_node = controller_node
logger.debug('Submitting isolated {} to queue {} controlled by {}.'.format(
task.log_format, task.execution_node, controller_node))
elif rampart_group.is_containerized:
elif rampart_group.is_container_group:
# find one real, non-containerized instance with capacity to
# act as the controller for k8s API interaction
match = None
for group in InstanceGroup.objects.all():
if group.is_containerized or group.controller_id:
if group.is_container_group or group.controller_id:
continue
match = group.fit_task_to_most_remaining_capacity_instance(task, group.instances.all())
if match:
@@ -312,6 +317,7 @@ class TaskManager():
with disable_activity_stream():
task.celery_task_id = str(uuid.uuid4())
task.save()
task.log_lifecycle("waiting")
if rampart_group is not None:
self.consume_capacity(task, rampart_group.name)
@@ -450,6 +456,7 @@ class TaskManager():
def generate_dependencies(self, undeped_tasks):
created_dependencies = []
for task in undeped_tasks:
task.log_lifecycle("acknowledged")
dependencies = []
if not type(task) is Job:
continue
@@ -489,11 +496,18 @@ class TaskManager():
def process_pending_tasks(self, pending_tasks):
running_workflow_templates = set([wf.unified_job_template_id for wf in self.get_running_workflow_jobs()])
tasks_to_update_job_explanation = []
for task in pending_tasks:
if self.start_task_limit <= 0:
break
if self.is_job_blocked(task):
logger.debug("{} is blocked from running".format(task.log_format))
blocked_by = self.job_blocked_by(task)
if blocked_by:
task.log_lifecycle("blocked", blocked_by=blocked_by)
job_explanation = gettext_noop(f"waiting for {blocked_by._meta.model_name}-{blocked_by.id} to finish")
if task.job_explanation != job_explanation:
if task.created < (tz_now() - self.time_delta_job_explanation):
task.job_explanation = job_explanation
tasks_to_update_job_explanation.append(task)
continue
preferred_instance_groups = task.preferred_instance_groups
found_acceptable_queue = False
@@ -507,14 +521,17 @@ class TaskManager():
self.start_task(task, None, task.get_jobs_fail_chain(), None)
continue
for rampart_group in preferred_instance_groups:
if task.can_run_containerized and rampart_group.is_containerized:
if task.can_run_containerized and rampart_group.is_container_group:
self.graph[rampart_group.name]['graph'].add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), None)
found_acceptable_queue = True
break
remaining_capacity = self.get_remaining_capacity(rampart_group.name)
if not rampart_group.is_containerized and self.get_remaining_capacity(rampart_group.name) <= 0:
if (
task.task_impact > 0 and # project updates have a cost of zero
not rampart_group.is_container_group and
self.get_remaining_capacity(rampart_group.name) <= 0):
logger.debug("Skipping group {}, remaining_capacity {} <= 0".format(
rampart_group.name, remaining_capacity))
continue
@@ -522,8 +539,8 @@ class TaskManager():
execution_instance = InstanceGroup.fit_task_to_most_remaining_capacity_instance(task, self.graph[rampart_group.name]['instances']) or \
InstanceGroup.find_largest_idle_instance(self.graph[rampart_group.name]['instances'])
if execution_instance or rampart_group.is_containerized:
if not rampart_group.is_containerized:
if execution_instance or rampart_group.is_container_group:
if not rampart_group.is_container_group:
execution_instance.remaining_capacity = max(0, execution_instance.remaining_capacity - task.task_impact)
execution_instance.jobs_running += 1
logger.debug("Starting {} in group {} instance {} (remaining_capacity={})".format(
@@ -539,7 +556,17 @@ class TaskManager():
logger.debug("No instance available in group {} to run job {} w/ capacity requirement {}".format(
rampart_group.name, task.log_format, task.task_impact))
if not found_acceptable_queue:
task.log_lifecycle("needs_capacity")
job_explanation = gettext_noop("This job is not ready to start because there is not enough available capacity.")
if task.job_explanation != job_explanation:
if task.created < (tz_now() - self.time_delta_job_explanation):
# Many launched jobs are immediately blocked, but most blocks will resolve in a few seconds.
# Therefore we should only update the job_explanation after some time has elapsed to
# prevent excessive task saves.
task.job_explanation = job_explanation
tasks_to_update_job_explanation.append(task)
logger.debug("{} couldn't be scheduled on graph, waiting for next cycle".format(task.log_format))
UnifiedJob.objects.bulk_update(tasks_to_update_job_explanation, ['job_explanation'])
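A minimal sketch, with stand-in task objects, of the pattern used above: job_explanation is only rewritten once a task has been waiting longer than a threshold, and the changed rows are then persisted with a single bulk-update call rather than one save per task.

from datetime import datetime, timedelta, timezone

def tz_now():
    return datetime.now(timezone.utc)

class PendingTask:
    def __init__(self, created, job_explanation=''):
        self.created = created
        self.job_explanation = job_explanation

def collect_explanation_updates(tasks, explanation, grace=timedelta(seconds=30)):
    to_update = []
    for task in tasks:
        # skip tasks that already carry the message or have not waited long enough
        if task.job_explanation != explanation and task.created < (tz_now() - grace):
            task.job_explanation = explanation
            to_update.append(task)
    return to_update

old = PendingTask(created=tz_now() - timedelta(minutes=5))
new = PendingTask(created=tz_now())
changed = collect_explanation_updates([old, new], 'waiting for capacity')
assert changed == [old]  # only the long-waiting task would be sent to bulk_update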
def timeout_approval_node(self):
workflow_approvals = WorkflowApproval.objects.filter(status='pending')
@@ -570,7 +597,7 @@ class TaskManager():
).exclude(
execution_node__in=Instance.objects.values_list('hostname', flat=True)
):
if j.execution_node and not j.is_containerized:
if j.execution_node and not j.is_container_group_task:
logger.error(f'{j.execution_node} is not a registered instance; reaping {j.log_format}')
reap_job(j, 'failed')

View File

@@ -368,6 +368,7 @@ def model_serializer_mapping():
models.Credential: serializers.CredentialSerializer,
models.Team: serializers.TeamSerializer,
models.Project: serializers.ProjectSerializer,
models.ExecutionEnvironment: serializers.ExecutionEnvironmentSerializer,
models.JobTemplate: serializers.JobTemplateWithSpecSerializer,
models.Job: serializers.JobSerializer,
models.AdHocCommand: serializers.AdHocCommandSerializer,

File diff suppressed because it is too large

View File

@@ -8,4 +8,5 @@ clouds:
project_name: fooo
username: fooo
private: true
region_name: fooo
verify: false

View File

@@ -1,283 +0,0 @@
{
"ansible_all_ipv4_addresses": [
"172.17.0.7"
],
"ansible_all_ipv6_addresses": [
"fe80::42:acff:fe11:7"
],
"ansible_architecture": "x86_64",
"ansible_bios_date": "12/01/2006",
"ansible_bios_version": "VirtualBox",
"ansible_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz64",
"base": true,
"console": "tty0",
"initrd": "/boot/initrd.img",
"loglevel": "3",
"noembed": true,
"nomodeset": true,
"norestore": true,
"user": "docker",
"waitusb": "10:LABEL=boot2docker-data"
},
"ansible_date_time": {
"date": "2016-02-02",
"day": "02",
"epoch": "1454424257",
"hour": "14",
"iso8601": "2016-02-02T14:44:17Z",
"iso8601_basic": "20160202T144417348424",
"iso8601_basic_short": "20160202T144417",
"iso8601_micro": "2016-02-02T14:44:17.348496Z",
"minute": "44",
"month": "02",
"second": "17",
"time": "14:44:17",
"tz": "UTC",
"tz_offset": "+0000",
"weekday": "Tuesday",
"weekday_number": "2",
"weeknumber": "05",
"year": "2016"
},
"ansible_default_ipv4": {
"address": "172.17.0.7",
"alias": "eth0",
"broadcast": "global",
"gateway": "172.17.0.1",
"interface": "eth0",
"macaddress": "02:42:ac:11:00:07",
"mtu": 1500,
"netmask": "255.255.0.0",
"network": "172.17.0.0",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_devices": {
"sda": {
"holders": [],
"host": "",
"model": "VBOX HARDDISK",
"partitions": {
"sda1": {
"sectors": "510015555",
"sectorsize": 512,
"size": "243.19 GB",
"start": "1975995"
},
"sda2": {
"sectors": "1975932",
"sectorsize": 512,
"size": "964.81 MB",
"start": "63"
}
},
"removable": "0",
"rotational": "0",
"scheduler_mode": "deadline",
"sectors": "512000000",
"sectorsize": "512",
"size": "244.14 GB",
"support_discard": "0",
"vendor": "ATA"
},
"sr0": {
"holders": [],
"host": "",
"model": "CD-ROM",
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "61440",
"sectorsize": "2048",
"size": "120.00 MB",
"support_discard": "0",
"vendor": "VBOX"
}
},
"ansible_distribution": "Ubuntu",
"ansible_distribution_major_version": "14",
"ansible_distribution_release": "trusty",
"ansible_distribution_version": "14.04",
"ansible_dns": {
"nameservers": [
"8.8.8.8"
]
},
"ansible_domain": "",
"ansible_env": {
"HOME": "/root",
"HOSTNAME": "ede894599989",
"LANG": "en_US.UTF-8",
"LC_ALL": "en_US.UTF-8",
"LC_MESSAGES": "en_US.UTF-8",
"LESSCLOSE": "/usr/bin/lesspipe %s %s",
"LESSOPEN": "| /usr/bin/lesspipe %s",
"LS_COLORS": "",
"OLDPWD": "/ansible",
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PWD": "/ansible/examples",
"SHLVL": "1",
"_": "/usr/local/bin/ansible",
"container": "docker"
},
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "172.17.0.7",
"broadcast": "global",
"netmask": "255.255.0.0",
"network": "172.17.0.0"
},
"ipv6": [
{
"address": "fe80::42:acff:fe11:7",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "02:42:ac:11:00:07",
"mtu": 1500,
"promisc": false,
"type": "ether"
},
"ansible_fips": false,
"ansible_form_factor": "Other",
"ansible_fqdn": "ede894599989",
"ansible_hostname": "ede894599989",
"ansible_interfaces": [
"lo",
"eth0"
],
"ansible_kernel": "4.1.12-boot2docker",
"ansible_lo": {
"active": true,
"device": "lo",
"ipv4": {
"address": "127.0.0.1",
"broadcast": "host",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"ipv6": [
{
"address": "::1",
"prefix": "128",
"scope": "host"
}
],
"mtu": 65536,
"promisc": false,
"type": "loopback"
},
"ansible_lsb": {
"codename": "trusty",
"description": "Ubuntu 14.04.3 LTS",
"id": "Ubuntu",
"major_release": "14",
"release": "14.04"
},
"ansible_machine": "x86_64",
"ansible_memfree_mb": 3746,
"ansible_memory_mb": {
"nocache": {
"free": 8896,
"used": 3638
},
"real": {
"free": 3746,
"total": 12534,
"used": 8788
},
"swap": {
"cached": 0,
"free": 4048,
"total": 4048,
"used": 0
}
},
"ansible_memtotal_mb": 12534,
"ansible_mounts": [
{
"device": "/dev/sda1",
"fstype": "ext4",
"mount": "/etc/resolv.conf",
"options": "rw,relatime,data=ordered",
"size_available": 201281392640,
"size_total": 256895700992,
"uuid": "NA"
},
{
"device": "/dev/sda1",
"fstype": "ext4",
"mount": "/etc/hostname",
"options": "rw,relatime,data=ordered",
"size_available": 201281392640,
"size_total": 256895700992,
"uuid": "NA"
},
{
"device": "/dev/sda1",
"fstype": "ext4",
"mount": "/etc/hosts",
"options": "rw,relatime,data=ordered",
"size_available": 201281392640,
"size_total": 256895700992,
"uuid": "NA"
}
],
"ansible_nodename": "ede894599989",
"ansible_os_family": "Debian",
"ansible_pkg_mgr": "apt",
"ansible_processor": [
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz",
"GenuineIntel",
"Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz"
],
"ansible_processor_cores": 8,
"ansible_processor_count": 1,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 8,
"ansible_product_name": "VirtualBox",
"ansible_product_serial": "0",
"ansible_product_uuid": "25C5EA5A-1DF1-48D9-A2C6-81227DA153C0",
"ansible_product_version": "1.2",
"ansible_python_version": "2.7.6",
"ansible_selinux": false,
"ansible_service_mgr": "upstart",
"ansible_ssh_host_key_dsa_public": "AAAAB3NzaC1kc3MAAACBALF0xsM8UMXgSKiWNw4t19wxbxLnxQX742t/dIM0O8YLx+/lIP+Q69Dv5uoVt0zKV39eFziRlCh96qj2KYkGEJ6XfVZFnhpculL2Pv2CPpSwKuQ1vTbDO/xxUrvY+bHpfNJf9Rh69bFEE2pTsjomFPCgp8M0qGaFtwg6czSaeBONAAAAFQCGEfVtj97JiexTVRqgQITYlFp/eQAAAIEAg+S9qWn+AIb3amwVoLL/usQYOPCmZY9RVPzpkjJ6OG+HI4B7cXeauPtNTJwT0f9vGEqzf4mPpmS+aCShj6iwdmJ+cOwR5+SJlNalab3CMBoXKVLbT1J2XWFlK0szKKnoReP96IDbkAkGQ3fkm4jz0z6Wy0u6wOQVNcd4G5cwLZ4AAACAFvBm+H1LwNrwWBjWio+ayhglZ4Y25mLMEn2+dqBz0gLK5szEbft1HMPOWIVHvl6vi3v34pAJHKpxXpkLlNliTn8iw9BzCOrgP4V8sp2/85mxEuCdI1w/QERj9cHu5iS2pZ0cUwDE3pfuuGBB3IEliaJyaapowdrM8lN12jQl11E=",
"ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHiYp4e9RfXpxDcEWpK4EuXPHW9++xcFI9hiB0TYAZgxEF9RIgwfucpPawFk7HIFoNc7EXQMlryilLSbg155KWM=",
"ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAILclD2JaC654azEsAfcHRIOA2Ig9/Qk6MX80i/VCEdSH",
"ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDeSUGxZaZsgBsezld0mj3HcbAwx6aykGnejceBjcs6lVwSGMHevofzSXIQDPYBhZoyWNl0PYAHv6AsQ8+3khd2SitUMJAuHSz1ZjgHCCGQP9ijXTKHn+lWCKA8rhLG/dwYwiouoOPZfn1G+erbKO6XiVbELrrf2RadnMGuMinESIOKVj3IunXsaGRMsDOQferOnUf7MvH7xpQnoySyQ1+p4rGruaohWG+Y2cDo7+B2FylPVbrpRDDJkfbt4J96WHx0KOdD0qzOicQP8JqDflqQPJJCWcgrvjQOSe4gXdPB6GZDtBl2qgQRwt1IgizPMm+b7Bwbd2VDe1TeWV2gT/7H",
"ansible_swapfree_mb": 4048,
"ansible_swaptotal_mb": 4048,
"ansible_system": "Linux",
"ansible_system_vendor": "innotek GmbH",
"ansible_uptime_seconds": 178398,
"ansible_user_dir": "/root",
"ansible_user_gecos": "root",
"ansible_user_gid": 0,
"ansible_user_id": "root",
"ansible_user_shell": "/bin/bash",
"ansible_user_uid": 0,
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "docker",
"module_setup": true
}

View File

@@ -49,6 +49,7 @@ def isolated_instance_group(instance_group, instance):
def containerized_instance_group(instance_group, kube_credential):
ig = InstanceGroup(name="container")
ig.credential = kube_credential
ig.is_container_group = True
ig.save()
return ig
@@ -255,7 +256,7 @@ def test_instance_group_update_fields(patch, instance, instance_group, admin, co
# policy_instance_ variables can only be updated in instance groups that are NOT containerized
# instance group (not containerized)
ig_url = reverse("api:instance_group_detail", kwargs={'pk': instance_group.pk})
assert not instance_group.is_containerized
assert not instance_group.is_container_group
assert not containerized_instance_group.is_isolated
resp = patch(ig_url, {'policy_instance_percentage':15}, admin, expect=200)
assert 15 == resp.data['policy_instance_percentage']
@@ -266,7 +267,7 @@ def test_instance_group_update_fields(patch, instance, instance_group, admin, co
# containerized instance group
cg_url = reverse("api:instance_group_detail", kwargs={'pk': containerized_instance_group.pk})
assert containerized_instance_group.is_containerized
assert containerized_instance_group.is_container_group
assert not containerized_instance_group.is_isolated
resp = patch(cg_url, {'policy_instance_percentage':15}, admin, expect=400)
assert ["Containerized instances may not be managed via the API"] == resp.data['policy_instance_percentage']
@@ -287,8 +288,8 @@ def test_containerized_group_default_fields(instance_group, kube_credential):
assert ig.policy_instance_minimum == 5
assert ig.policy_instance_percentage == 5
ig.credential = kube_credential
ig.is_container_group = True
ig.save()
assert ig.policy_instance_list == []
assert ig.policy_instance_minimum == 0
assert ig.policy_instance_percentage == 0
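The tests above assert that policy_instance_ fields are writable for ordinary instance groups but rejected with a 400 for container groups. As a rough illustration only (this is not the AWX serializer code), such a guard could look like a validation step that refuses policy edits whenever is_container_group is set:

def validate_policy_fields(instance_group, incoming_fields):
    # hypothetical helper: reject policy_instance_* edits on container groups
    policy_fields = {'policy_instance_list', 'policy_instance_minimum', 'policy_instance_percentage'}
    if instance_group.is_container_group and policy_fields & set(incoming_fields):
        raise ValueError('Containerized instances may not be managed via the API')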

View File

@@ -1,6 +1,3 @@
import os
from backports.tempfile import TemporaryDirectory
import pytest
# AWX
@@ -10,7 +7,6 @@ from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplat
from awx.main.migrations import _save_password_keys as save_password_keys
# Django
from django.conf import settings
from django.apps import apps
# DRF
@@ -302,61 +298,6 @@ def test_save_survey_passwords_on_migration(job_template_with_survey_passwords):
assert job.survey_passwords == {'SSN': '$encrypted$', 'secret_key': '$encrypted$'}
@pytest.mark.django_db
@pytest.mark.parametrize('access', ["superuser", "admin", "peon"])
def test_job_template_custom_virtualenv(get, patch, organization_factory, job_template_factory, alice, access):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
user = alice
if access == "superuser":
user = objs.superusers.admin
elif access == "admin":
jt.admin_role.members.add(alice)
else:
jt.read_role.members.add(alice)
with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as temp_dir:
os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
url = reverse('api:job_template_detail', kwargs={'pk': jt.id})
if access == "peon":
patch(url, {'custom_virtualenv': temp_dir}, user=user, expect=403)
assert 'custom_virtualenv' not in get(url, user=user)
assert JobTemplate.objects.get(pk=jt.id).custom_virtualenv is None
else:
patch(url, {'custom_virtualenv': temp_dir}, user=user, expect=200)
assert get(url, user=user).data['custom_virtualenv'] == os.path.join(temp_dir, '')
@pytest.mark.django_db
def test_job_template_invalid_custom_virtualenv(get, patch, organization_factory,
job_template_factory):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
url = reverse('api:job_template_detail', kwargs={'pk': jt.id})
resp = patch(url, {'custom_virtualenv': '/foo/bar'}, user=objs.superusers.admin, expect=400)
assert resp.data['custom_virtualenv'] == [
'/foo/bar is not a valid virtualenv in {}'.format(settings.BASE_VENV_PATH)
]
@pytest.mark.django_db
@pytest.mark.parametrize('value', ["", None])
def test_job_template_unset_custom_virtualenv(get, patch, organization_factory,
job_template_factory, value):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization,
inventory='test_inv', project='test_proj').job_template
url = reverse('api:job_template_detail', kwargs={'pk': jt.id})
resp = patch(url, {'custom_virtualenv': value}, user=objs.superusers.admin, expect=200)
assert resp.data['custom_virtualenv'] is None
@pytest.mark.django_db
def test_jt_organization_follows_project(post, patch, admin_user):
org1 = Organization.objects.create(name='foo1')

View File

@@ -1,11 +1,6 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
# Python
import os
from backports.tempfile import TemporaryDirectory
from django.conf import settings
import pytest
# AWX
@@ -242,32 +237,6 @@ def test_delete_organization_xfail2(delete, organization):
delete(reverse('api:organization_detail', kwargs={'pk': organization.id}), user=None, expect=401)
@pytest.mark.django_db
def test_organization_custom_virtualenv(get, patch, organization, admin):
with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as temp_dir:
os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
url = reverse('api:organization_detail', kwargs={'pk': organization.id})
patch(url, {'custom_virtualenv': temp_dir}, user=admin, expect=200)
assert get(url, user=admin).data['custom_virtualenv'] == os.path.join(temp_dir, '')
@pytest.mark.django_db
def test_organization_invalid_custom_virtualenv(get, patch, organization, admin):
url = reverse('api:organization_detail', kwargs={'pk': organization.id})
resp = patch(url, {'custom_virtualenv': '/foo/bar'}, user=admin, expect=400)
assert resp.data['custom_virtualenv'] == [
'/foo/bar is not a valid virtualenv in {}'.format(settings.BASE_VENV_PATH)
]
@pytest.mark.django_db
@pytest.mark.parametrize('value', ["", None])
def test_organization_unset_custom_virtualenv(get, patch, organization, admin, value):
url = reverse('api:organization_detail', kwargs={'pk': organization.id})
resp = patch(url, {'custom_virtualenv': value}, user=admin, expect=200)
assert resp.data['custom_virtualenv'] is None
@pytest.mark.django_db
def test_organization_delete(delete, admin, organization, organization_jobs_successful):
url = reverse('api:organization_detail', kwargs={'pk': organization.id})

View File

@@ -1,7 +1,3 @@
import os
from backports.tempfile import TemporaryDirectory
from django.conf import settings
import pytest
from awx.api.versioning import reverse
@@ -21,32 +17,6 @@ class TestInsightsCredential:
expect=400)
@pytest.mark.django_db
def test_project_custom_virtualenv(get, patch, project, admin):
with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as temp_dir:
os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
url = reverse('api:project_detail', kwargs={'pk': project.id})
patch(url, {'custom_virtualenv': temp_dir}, user=admin, expect=200)
assert get(url, user=admin).data['custom_virtualenv'] == os.path.join(temp_dir, '')
@pytest.mark.django_db
def test_project_invalid_custom_virtualenv(get, patch, project, admin):
url = reverse('api:project_detail', kwargs={'pk': project.id})
resp = patch(url, {'custom_virtualenv': '/foo/bar'}, user=admin, expect=400)
assert resp.data['custom_virtualenv'] == [
'/foo/bar is not a valid virtualenv in {}'.format(settings.BASE_VENV_PATH)
]
@pytest.mark.django_db
@pytest.mark.parametrize('value', ["", None])
def test_project_unset_custom_virtualenv(get, patch, project, admin, value):
url = reverse('api:project_detail', kwargs={'pk': project.id})
resp = patch(url, {'custom_virtualenv': value}, user=admin, expect=200)
assert resp.data['custom_virtualenv'] is None
@pytest.mark.django_db
def test_no_changing_overwrite_behavior_if_used(post, patch, organization, admin_user):
r1 = post(

View File

@@ -393,3 +393,43 @@ def test_saml_x509cert_validation(patch, get, admin, headers):
}
})
assert resp.status_code == 200
@pytest.mark.django_db
def test_github_settings(get, put, patch, delete, admin):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'github'})
get(url, user=admin, expect=200)
delete(url, user=admin, expect=204)
response = get(url, user=admin, expect=200)
data = dict(response.data.items())
put(url, user=admin, data=data, expect=200)
patch(url, user=admin, data={'SOCIAL_AUTH_GITHUB_KEY': '???'}, expect=200)
response = get(url, user=admin, expect=200)
assert response.data['SOCIAL_AUTH_GITHUB_KEY'] == '???'
data.pop('SOCIAL_AUTH_GITHUB_KEY')
put(url, user=admin, data=data, expect=200)
response = get(url, user=admin, expect=200)
assert response.data['SOCIAL_AUTH_GITHUB_KEY'] == ''
@pytest.mark.django_db
def test_github_enterprise_settings(get, put, patch, delete, admin):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'github-enterprise'})
get(url, user=admin, expect=200)
delete(url, user=admin, expect=204)
response = get(url, user=admin, expect=200)
data = dict(response.data.items())
put(url, user=admin, data=data, expect=200)
patch(url, user=admin, data={
'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL': 'example.com',
'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL': 'example.com',
}, expect=200)
response = get(url, user=admin, expect=200)
assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_URL'] == 'example.com'
assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL'] == 'example.com'
data.pop('SOCIAL_AUTH_GITHUB_ENTERPRISE_URL')
data.pop('SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL')
put(url, user=admin, data=data, expect=200)
response = get(url, user=admin, expect=200)
assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_URL'] == ''
assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL'] == ''

View File

@@ -52,6 +52,7 @@ from awx.main.models.events import (
from awx.main.models.workflow import WorkflowJobTemplate
from awx.main.models.ad_hoc_commands import AdHocCommand
from awx.main.models.oauth import OAuth2Application as Application
from awx.main.models.execution_environments import ExecutionEnvironment
__SWAGGER_REQUESTS__ = {}
@@ -850,3 +851,8 @@ def slice_job_factory(slice_jt_factory):
node.save()
return slice_job
return r
@pytest.fixture
def execution_environment(organization):
return ExecutionEnvironment.objects.create(name="test-ee", description="test-ee", organization=organization)

View File

@@ -6,7 +6,7 @@ import pytest
#from awx.main.models import NotificationTemplates, Notifications, JobNotificationMixin
from awx.main.models import (AdHocCommand, InventoryUpdate, Job, JobNotificationMixin, ProjectUpdate,
SystemJob, WorkflowJob)
Schedule, SystemJob, WorkflowJob)
from awx.api.serializers import UnifiedJobSerializer
@@ -72,6 +72,10 @@ class TestJobNotificationMixin(object):
'name': str,
'scm_type': str,
'status': str},
'schedule': {'description': str,
'id': int,
'name': str,
'next_run': datetime.datetime},
'unified_job_template': {'description': str,
'id': int,
'name': str,
@@ -89,27 +93,27 @@ class TestJobNotificationMixin(object):
'workflow_url': str,
'url': str}
def check_structure(self, expected_structure, obj):
if isinstance(expected_structure, dict):
assert isinstance(obj, dict)
for key in obj:
assert key in expected_structure
if obj[key] is None:
continue
if isinstance(expected_structure[key], dict):
assert isinstance(obj[key], dict)
self.check_structure(expected_structure[key], obj[key])
else:
if key == 'job_explanation':
assert isinstance(str(obj[key]), expected_structure[key])
else:
assert isinstance(obj[key], expected_structure[key])
@pytest.mark.django_db
@pytest.mark.parametrize('JobClass', [AdHocCommand, InventoryUpdate, Job, ProjectUpdate, SystemJob, WorkflowJob])
def test_context(self, JobClass, sqlite_copy_expert, project, inventory_source):
"""The Jinja context defines all of the fields that can be used by a template. Ensure that the context generated
for each job type has the expected structure."""
def check_structure(expected_structure, obj):
if isinstance(expected_structure, dict):
assert isinstance(obj, dict)
for key in obj:
assert key in expected_structure
if obj[key] is None:
continue
if isinstance(expected_structure[key], dict):
assert isinstance(obj[key], dict)
check_structure(expected_structure[key], obj[key])
else:
if key == 'job_explanation':
assert isinstance(str(obj[key]), expected_structure[key])
else:
assert isinstance(obj[key], expected_structure[key])
kwargs = {}
if JobClass is InventoryUpdate:
kwargs['inventory_source'] = inventory_source
@@ -121,8 +125,26 @@ class TestJobNotificationMixin(object):
job_serialization = UnifiedJobSerializer(job).to_representation(job)
context = job.context(job_serialization)
check_structure(TestJobNotificationMixin.CONTEXT_STRUCTURE, context)
self.check_structure(TestJobNotificationMixin.CONTEXT_STRUCTURE, context)
@pytest.mark.django_db
def test_schedule_context(self, job_template, admin_user):
schedule = Schedule.objects.create(
name='job-schedule',
rrule='DTSTART:20171129T155939z\nFREQ=MONTHLY',
unified_job_template=job_template
)
job = Job.objects.create(
name='fake-job',
launch_type='workflow',
schedule=schedule,
job_template=job_template
)
job_serialization = UnifiedJobSerializer(job).to_representation(job)
context = job.context(job_serialization)
self.check_structure(TestJobNotificationMixin.CONTEXT_STRUCTURE, context)
@pytest.mark.django_db
def test_context_job_metadata_with_unicode(self):

View File

@@ -154,6 +154,7 @@ class TestMetaVars:
assert data['awx_user_id'] == admin_user.id
assert data['awx_user_name'] == admin_user.username
assert data['awx_workflow_job_id'] == workflow_job.pk
assert data['awx_workflow_job_launch_type'] == workflow_job.launch_type
def test_scheduled_job_metavars(self, job_template, admin_user):
schedule = Schedule.objects.create(
@@ -197,6 +198,8 @@ class TestMetaVars:
'tower_workflow_job_name': 'workflow-job',
'awx_workflow_job_id': workflow_job.id,
'tower_workflow_job_id': workflow_job.id,
'awx_workflow_job_launch_type': workflow_job.launch_type,
'tower_workflow_job_launch_type': workflow_job.launch_type,
'awx_parent_job_schedule_id': schedule.id,
'tower_parent_job_schedule_id': schedule.id,
'awx_parent_job_schedule_name': 'job-schedule',

File diff suppressed because it is too large

View File

@@ -1,697 +0,0 @@
[
{
"source": "sysv",
"state": "running",
"name": "iprdump"
},
{
"source": "sysv",
"state": "running",
"name": "iprinit"
},
{
"source": "sysv",
"state": "running",
"name": "iprupdate"
},
{
"source": "sysv",
"state": "stopped",
"name": "netconsole"
},
{
"source": "sysv",
"state": "running",
"name": "network"
},
{
"source": "systemd",
"state": "stopped",
"name": "arp-ethers.service"
},
{
"source": "systemd",
"state": "running",
"name": "auditd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "autovt@.service"
},
{
"source": "systemd",
"state": "running",
"name": "avahi-daemon.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "blk-availability.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "brandbot.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "console-getty.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "console-shell.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "cpupower.service"
},
{
"source": "systemd",
"state": "running",
"name": "crond.service"
},
{
"source": "systemd",
"state": "running",
"name": "dbus-org.fedoraproject.FirewallD1.service"
},
{
"source": "systemd",
"state": "running",
"name": "dbus-org.freedesktop.Avahi.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dbus-org.freedesktop.hostname1.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dbus-org.freedesktop.locale1.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dbus-org.freedesktop.login1.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dbus-org.freedesktop.machine1.service"
},
{
"source": "systemd",
"state": "running",
"name": "dbus-org.freedesktop.NetworkManager.service"
},
{
"source": "systemd",
"state": "running",
"name": "dbus-org.freedesktop.nm-dispatcher.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dbus-org.freedesktop.timedate1.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dbus.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "debug-shell.service"
},
{
"source": "systemd",
"state": "running",
"name": "dhcpd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dhcpd6.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dhcrelay.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dm-event.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dnsmasq.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-cmdline.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-initqueue.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-mount.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-pre-mount.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-pre-pivot.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-pre-trigger.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-pre-udev.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "dracut-shutdown.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "ebtables.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "emergency.service"
},
{
"source": "systemd",
"state": "running",
"name": "firewalld.service"
},
{
"source": "systemd",
"state": "running",
"name": "getty@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "halt-local.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "initrd-cleanup.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "initrd-parse-etc.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "initrd-switch-root.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "initrd-udevadm-cleanup-db.service"
},
{
"source": "systemd",
"state": "running",
"name": "irqbalance.service"
},
{
"source": "systemd",
"state": "running",
"name": "kdump.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "kmod-static-nodes.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "lvm2-lvmetad.service"
},
{
"source": "systemd",
"state": "running",
"name": "lvm2-monitor.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "lvm2-pvscan@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "messagebus.service"
},
{
"source": "systemd",
"state": "running",
"name": "microcode.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "named-setup-rndc.service"
},
{
"source": "systemd",
"state": "running",
"name": "named.service"
},
{
"source": "systemd",
"state": "running",
"name": "NetworkManager-dispatcher.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "NetworkManager-wait-online.service"
},
{
"source": "systemd",
"state": "running",
"name": "NetworkManager.service"
},
{
"source": "systemd",
"state": "running",
"name": "ntpd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "ntpdate.service"
},
{
"source": "systemd",
"state": "running",
"name": "openvpn@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-halt.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-kexec.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-poweroff.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-quit-wait.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-quit.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-read-write.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-reboot.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-start.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "plymouth-switch-root.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "polkit.service"
},
{
"source": "systemd",
"state": "running",
"name": "postfix.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "quotaon.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rc-local.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rdisc.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rescue.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-autorelabel-mark.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-autorelabel.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-configure.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-dmesg.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-domainname.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-import-state.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-loadmodules.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "rhel-readonly.service"
},
{
"source": "systemd",
"state": "running",
"name": "rsyslog.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "serial-getty@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "sshd-keygen.service"
},
{
"source": "systemd",
"state": "running",
"name": "sshd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "sshd@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-ask-password-console.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-ask-password-plymouth.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-ask-password-wall.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-backlight@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-binfmt.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-fsck-root.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-fsck@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-halt.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-hibernate.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-hostnamed.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-hybrid-sleep.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-initctl.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-journal-flush.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-journald.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-kexec.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-localed.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-logind.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-machined.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-modules-load.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-nspawn@.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-poweroff.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-quotacheck.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-random-seed.service"
},
{
"source": "systemd",
"state": "running",
"name": "systemd-readahead-collect.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-readahead-done.service"
},
{
"source": "systemd",
"state": "running",
"name": "systemd-readahead-drop.service"
},
{
"source": "systemd",
"state": "running",
"name": "systemd-readahead-replay.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-reboot.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-remount-fs.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-shutdownd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-suspend.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-sysctl.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-timedated.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-tmpfiles-clean.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-tmpfiles-setup-dev.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-tmpfiles-setup.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-udev-settle.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-udev-trigger.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-udevd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-update-utmp-runlevel.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-update-utmp.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-user-sessions.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "systemd-vconsole-setup.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "teamd@.service"
},
{
"source": "systemd",
"state": "running",
"name": "tuned.service"
},
{
"source": "systemd",
"state": "running",
"name": "vmtoolsd.service"
},
{
"source": "systemd",
"state": "stopped",
"name": "wpa_supplicant.service"
}
]

View File

@@ -13,6 +13,7 @@ from awx.main.utils import (
@pytest.fixture
def containerized_job(default_instance_group, kube_credential, job_template_factory):
default_instance_group.credential = kube_credential
default_instance_group.is_container_group = True
default_instance_group.save()
objects = job_template_factory('jt', organization='org1', project='proj',
inventory='inv', credential='cred',
@@ -29,8 +30,8 @@ def containerized_job(default_instance_group, kube_credential, job_template_fact
@pytest.mark.django_db
def test_containerized_job(containerized_job):
assert containerized_job.is_containerized
assert containerized_job.instance_group.is_containerized
assert containerized_job.is_container_group_task
assert containerized_job.instance_group.is_container_group
assert containerized_job.instance_group.credential.kubernetes

View File

@@ -348,11 +348,11 @@ def test_job_not_blocking_project_update(default_instance_group, job_template_fa
project_update.instance_group = default_instance_group
project_update.status = "pending"
project_update.save()
assert not task_manager.is_job_blocked(project_update)
assert not task_manager.job_blocked_by(project_update)
dependency_graph = DependencyGraph(None)
dependency_graph = DependencyGraph()
dependency_graph.add_job(job)
assert not dependency_graph.is_job_blocked(project_update)
assert not dependency_graph.task_blocked_by(project_update)
@pytest.mark.django_db
@@ -378,11 +378,11 @@ def test_job_not_blocking_inventory_update(default_instance_group, job_template_
inventory_update.status = "pending"
inventory_update.save()
assert not task_manager.is_job_blocked(inventory_update)
assert not task_manager.job_blocked_by(inventory_update)
dependency_graph = DependencyGraph(None)
dependency_graph = DependencyGraph()
dependency_graph.add_job(job)
assert not dependency_graph.is_job_blocked(inventory_update)
assert not dependency_graph.task_blocked_by(inventory_update)
@pytest.mark.django_db

View File

@@ -79,6 +79,7 @@ def test_default_cred_types():
'aws',
'azure_kv',
'azure_rm',
'centrify_vault_kv',
'conjur',
'galaxy_api_token',
'gce',
@@ -90,6 +91,7 @@ def test_default_cred_types():
'kubernetes_bearer_token',
'net',
'openstack',
'registry',
'rhv',
'satellite6',
'scm',

View File

@@ -0,0 +1,19 @@
import pytest
from awx.main.models import (ExecutionEnvironment)
@pytest.mark.django_db
def test_execution_environment_creation(execution_environment, organization):
execution_env = ExecutionEnvironment.objects.create(
name='Hello Environment',
image='',
organization=organization,
managed_by_tower=False,
credential=None,
pull='missing'
)
assert type(execution_env) is type(execution_environment)
assert execution_env.organization == organization
assert execution_env.name == 'Hello Environment'
assert execution_env.pull == 'missing'

View File

@@ -6,7 +6,7 @@ import re
from collections import namedtuple
from awx.main.tasks import RunInventoryUpdate
from awx.main.models import InventorySource, Credential, CredentialType, UnifiedJob
from awx.main.models import InventorySource, Credential, CredentialType, UnifiedJob, ExecutionEnvironment
from awx.main.constants import CLOUD_PROVIDERS, STANDARD_INVENTORY_UPDATE_ENV
from awx.main.tests import data
@@ -110,7 +110,8 @@ def read_content(private_data_dir, raw_env, inventory_update):
continue # Ansible runner
abs_file_path = os.path.join(private_data_dir, filename)
file_aliases[abs_file_path] = filename
if abs_file_path in inverse_env:
runner_path = os.path.join('/runner', os.path.basename(abs_file_path))
if runner_path in inverse_env:
referenced_paths.add(abs_file_path)
alias = 'file_reference'
for i in range(10):
@@ -121,7 +122,7 @@ def read_content(private_data_dir, raw_env, inventory_update):
raise RuntimeError('Test not able to cope with >10 references by env vars. '
'Something probably went very wrong.')
file_aliases[abs_file_path] = alias
for env_key in inverse_env[abs_file_path]:
for env_key in inverse_env[runner_path]:
env[env_key] = '{{{{ {} }}}}'.format(alias)
try:
with open(abs_file_path, 'r') as f:
@@ -182,6 +183,8 @@ def create_reference_data(source_dir, env, content):
@pytest.mark.django_db
@pytest.mark.parametrize('this_kind', CLOUD_PROVIDERS)
def test_inventory_update_injected_content(this_kind, inventory, fake_credential_factory):
ExecutionEnvironment.objects.create(name='test EE', managed_by_tower=True)
injector = InventorySource.injectors[this_kind]
if injector.plugin_name is None:
pytest.skip('Use of inventory plugin is not enabled for this source')
@@ -197,12 +200,14 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
inventory_update = inventory_source.create_unified_job()
task = RunInventoryUpdate()
def substitute_run(envvars=None, **_kw):
def substitute_run(awx_receptor_job):
"""This method will replace run_pexpect
instead of running, it will read the private data directory contents
It will make assertions that the contents are correct
If MAKE_INVENTORY_REFERENCE_FILES is set, it will produce reference files
"""
envvars = awx_receptor_job.runner_params['envvars']
private_data_dir = envvars.pop('AWX_PRIVATE_DATA_DIR')
assert envvars.pop('ANSIBLE_INVENTORY_ENABLED') == 'auto'
set_files = bool(os.getenv("MAKE_INVENTORY_REFERENCE_FILES", 'false').lower()[0] not in ['f', '0'])
@@ -214,9 +219,6 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
f"'{inventory_filename}' file not found in inventory update runtime files {content.keys()}"
env.pop('ANSIBLE_COLLECTIONS_PATHS', None) # collection paths not relevant to this test
env.pop('PYTHONPATH')
env.pop('VIRTUAL_ENV')
env.pop('PROOT_TMP_DIR')
base_dir = os.path.join(DATA, 'plugins')
if not os.path.exists(base_dir):
os.mkdir(base_dir)
@@ -256,6 +258,6 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
# Also do not send websocket status updates
with mock.patch.object(UnifiedJob, 'websocket_emit_status', mock.Mock()):
# The point of this test is that we replace run with assertions
with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
with mock.patch('awx.main.tasks.AWXReceptorJob.run', substitute_run):
# so this sets up everything for a run and then yields control over to substitute_run
task.run(inventory_update.pk)
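A minimal, self-contained sketch of the patching technique used above (the names here are stand-ins, not the real AWXReceptorJob): the object that would execute the job is replaced with a function that only inspects the parameters it receives, so the test exercises everything up to the run call without actually running anything.

from unittest import mock

class FakeReceptorJob:
    def __init__(self, envvars):
        self.runner_params = {'envvars': envvars}

    def run(self):
        raise RuntimeError('the real run should never execute in this test')

def substitute_run(receptor_job):
    # assert on the inputs instead of executing the job
    envvars = receptor_job.runner_params['envvars']
    assert 'AWX_PRIVATE_DATA_DIR' in envvars

with mock.patch.object(FakeReceptorJob, 'run', substitute_run):
    FakeReceptorJob({'AWX_PRIVATE_DATA_DIR': '/tmp/example'}).run()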

View File

@@ -49,7 +49,7 @@ def test_python_and_js_licenses():
def read_api_requirements(path):
ret = {}
for req_file in ['requirements.txt', 'requirements_ansible.txt', 'requirements_git.txt', 'requirements_ansible_git.txt']:
for req_file in ['requirements.txt', 'requirements_git.txt']:
fname = '%s/%s' % (path, req_file)
for reqt in parse_requirements(fname, session=''):

View File

@@ -40,7 +40,7 @@ def project_update(mocker):
@pytest.fixture
def job(mocker, job_template, project_update):
return mocker.MagicMock(pk=5, job_template=job_template, project_update=project_update,
workflow_job_id=None)
workflow_job_id=None, execution_environment_id=None)
@pytest.fixture

View File

@@ -35,16 +35,17 @@ data_loggly = {
# Test reconfigure logging settings function
# name this whatever you want
@pytest.mark.parametrize(
'enabled, log_type, host, port, protocol, expected_config', [
'enabled, log_type, host, port, protocol, errorfile, expected_config', [
(
True,
'loggly',
'http://logs-01.loggly.com/inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/',
None,
'https',
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
])
),
(
@@ -53,6 +54,7 @@ data_loggly = {
'localhost',
9000,
'udp',
'', # empty errorfile
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
@@ -64,6 +66,7 @@ data_loggly = {
'localhost',
9000,
'tcp',
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")', # noqa
@@ -75,9 +78,10 @@ data_loggly = {
'https://yoursplunk/services/collector/event',
None,
None,
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
])
),
(
@@ -86,9 +90,10 @@ data_loggly = {
'http://yoursplunk/services/collector/event',
None,
None,
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
])
),
(
@@ -97,9 +102,10 @@ data_loggly = {
'https://yoursplunk:8088/services/collector/event',
None,
None,
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
])
),
(
@@ -108,9 +114,10 @@ data_loggly = {
'https://yoursplunk/services/collector/event',
8088,
None,
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
])
),
(
@@ -119,9 +126,10 @@ data_loggly = {
'yoursplunk.org/services/collector/event',
8088,
'https',
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
])
),
(
@@ -130,9 +138,10 @@ data_loggly = {
'http://yoursplunk.org/services/collector/event',
8088,
None,
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")', # noqa
])
),
(
@@ -141,14 +150,15 @@ data_loggly = {
'https://endpoint5.collection.us2.sumologic.com/receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==', # noqa
None,
'https',
'/var/log/tower/rsyslog.err',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")', # noqa
])
),
]
)
def test_rsyslog_conf_template(enabled, log_type, host, port, protocol, expected_config):
def test_rsyslog_conf_template(enabled, log_type, host, port, protocol, errorfile, expected_config):
mock_settings, _ = _mock_logging_defaults()
@@ -159,6 +169,7 @@ def test_rsyslog_conf_template(enabled, log_type, host, port, protocol, expected
setattr(mock_settings, 'LOG_AGGREGATOR_ENABLED', enabled)
setattr(mock_settings, 'LOG_AGGREGATOR_TYPE', log_type)
setattr(mock_settings, 'LOG_AGGREGATOR_HOST', host)
setattr(mock_settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', errorfile)
if port:
setattr(mock_settings, 'LOG_AGGREGATOR_PORT', port)
if protocol:

View File

@@ -11,7 +11,7 @@ class FakeObject(object):
class Job(FakeObject):
task_impact = 43
is_containerized = False
is_container_group_task = False
def log_format(self):
return 'job 382 (fake)'

View File

@@ -6,7 +6,6 @@ import os
import shutil
import tempfile
from backports.tempfile import TemporaryDirectory
import fcntl
from unittest import mock
import pytest
@@ -19,6 +18,7 @@ from awx.main.models import (
AdHocCommand,
Credential,
CredentialType,
ExecutionEnvironment,
Inventory,
InventorySource,
InventoryUpdate,
@@ -249,7 +249,7 @@ def test_openstack_client_config_generation_with_project_domain_name(mocker, sou
@pytest.mark.parametrize("source,expected", [
(None, True), (False, False), (True, True)
])
def test_openstack_client_config_generation_with_project_region_name(mocker, source, expected, private_data_dir):
def test_openstack_client_config_generation_with_region(mocker, source, expected, private_data_dir):
update = tasks.RunInventoryUpdate()
credential_type = CredentialType.defaults['openstack']()
inputs = {
@@ -259,7 +259,7 @@ def test_openstack_client_config_generation_with_project_region_name(mocker, sou
'project': 'demo-project',
'domain': 'my-demo-domain',
'project_domain_name': 'project-domain',
'project_region_name': 'region-name',
'region': 'region-name',
}
if source is not None:
inputs['verify_ssl'] = source
@@ -347,11 +347,12 @@ def pytest_generate_tests(metafunc):
)
def parse_extra_vars(args):
def parse_extra_vars(args, private_data_dir):
extra_vars = {}
for chunk in args:
if chunk.startswith('@/tmp/'):
with open(chunk.strip('@'), 'r') as f:
if chunk.startswith('@/runner/'):
local_path = os.path.join(private_data_dir, os.path.basename(chunk.strip('@')))
with open(local_path, 'r') as f:
extra_vars.update(yaml.load(f, Loader=SafeLoader))
return extra_vars
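The recurring pattern in these test updates is translating a path that is valid inside the execution container (under /runner) back to the host-side private data directory before opening it. A minimal sketch of that translation, assuming only that files keep their basename across the two locations:

import os

def to_host_path(container_path, private_data_dir):
    # '/runner/extra_vars.yml' -> '<private_data_dir>/extra_vars.yml'
    return os.path.join(private_data_dir, os.path.basename(container_path))

print(to_host_path('/runner/extra_vars.yml', '/tmp/awx_private_abc'))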
@@ -527,7 +528,7 @@ class TestGenericRun():
task.instance = Job(pk=1, id=1)
task.event_ct = 17
task.finished_callback(None)
task.dispatcher.dispatch.assert_called_with({'event': 'EOF', 'final_counter': 17, 'job_id': 1})
task.dispatcher.dispatch.assert_called_with({'event': 'EOF', 'final_counter': 17, 'job_id': 1, 'guid': None})
def test_save_job_metadata(self, job, update_model_wrapper):
class MockMe():
@@ -546,44 +547,6 @@ class TestGenericRun():
job_cwd='/foobar', job_env={'switch': 'blade', 'foot': 'ball', 'secret_key': 'redacted_value'})
def test_uses_process_isolation(self, settings):
job = Job(project=Project(), inventory=Inventory())
task = tasks.RunJob()
task.should_use_proot = lambda instance: True
task.instance = job
private_data_dir = '/foo'
cwd = '/bar'
settings.AWX_PROOT_HIDE_PATHS = ['/AWX_PROOT_HIDE_PATHS1', '/AWX_PROOT_HIDE_PATHS2']
settings.ANSIBLE_VENV_PATH = '/ANSIBLE_VENV_PATH'
settings.AWX_VENV_PATH = '/AWX_VENV_PATH'
process_isolation_params = task.build_params_process_isolation(job, private_data_dir, cwd)
assert True is process_isolation_params['process_isolation']
assert process_isolation_params['process_isolation_path'].startswith(settings.AWX_PROOT_BASE_PATH), \
"Directory where a temp directory will be created for the remapping to take place"
assert private_data_dir in process_isolation_params['process_isolation_show_paths'], \
"The per-job private data dir should be in the list of directories the user can see."
assert cwd in process_isolation_params['process_isolation_show_paths'], \
"The current working directory should be in the list of directories the user can see."
for p in [settings.AWX_PROOT_BASE_PATH,
'/etc/tower',
'/etc/ssh',
'/var/lib/awx',
'/var/log',
settings.PROJECTS_ROOT,
settings.JOBOUTPUT_ROOT,
'/AWX_PROOT_HIDE_PATHS1',
'/AWX_PROOT_HIDE_PATHS2']:
assert p in process_isolation_params['process_isolation_hide_paths']
assert 9 == len(process_isolation_params['process_isolation_hide_paths'])
assert '/ANSIBLE_VENV_PATH' in process_isolation_params['process_isolation_ro_paths']
assert '/AWX_VENV_PATH' in process_isolation_params['process_isolation_ro_paths']
assert 2 == len(process_isolation_params['process_isolation_ro_paths'])
@mock.patch('os.makedirs')
def test_build_params_resource_profiling(self, os_makedirs):
job = Job(project=Project(), inventory=Inventory())
@@ -597,7 +560,7 @@ class TestGenericRun():
assert resource_profiling_params['resource_profiling_cpu_poll_interval'] == '0.25'
assert resource_profiling_params['resource_profiling_memory_poll_interval'] == '0.25'
assert resource_profiling_params['resource_profiling_pid_poll_interval'] == '0.25'
assert resource_profiling_params['resource_profiling_results_dir'] == '/fake_private_data_dir/artifacts/playbook_profiling'
assert resource_profiling_params['resource_profiling_results_dir'] == '/runner/artifacts/playbook_profiling'
@pytest.mark.parametrize("scenario, profiling_enabled", [
@@ -656,34 +619,13 @@ class TestGenericRun():
env = task.build_env(job, private_data_dir)
assert env['FOO'] == 'BAR'
def test_valid_custom_virtualenv(self, patch_Job, private_data_dir):
job = Job(project=Project(), inventory=Inventory())
with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as tempdir:
job.project.custom_virtualenv = tempdir
os.makedirs(os.path.join(tempdir, 'lib'))
os.makedirs(os.path.join(tempdir, 'bin', 'activate'))
task = tasks.RunJob()
env = task.build_env(job, private_data_dir)
assert env['PATH'].startswith(os.path.join(tempdir, 'bin'))
assert env['VIRTUAL_ENV'] == tempdir
def test_invalid_custom_virtualenv(self, patch_Job, private_data_dir):
job = Job(project=Project(), inventory=Inventory())
job.project.custom_virtualenv = '/var/lib/awx/venv/missing'
task = tasks.RunJob()
with pytest.raises(tasks.InvalidVirtualenvError) as e:
task.build_env(job, private_data_dir)
assert 'Invalid virtual environment selected: /var/lib/awx/venv/missing' == str(e.value)
@pytest.mark.django_db
class TestAdhocRun(TestJobExecution):
def test_options_jinja_usage(self, adhoc_job, adhoc_update_model_wrapper):
ExecutionEnvironment.objects.create(name='test EE', managed_by_tower=True)
adhoc_job.module_args = '{{ ansible_ssh_pass }}'
adhoc_job.websocket_emit_status = mock.Mock()
adhoc_job.send_notification_templates = mock.Mock()
@@ -1203,7 +1145,9 @@ class TestJobCredentials(TestJobExecution):
credential.credential_type.inject_credential(
credential, env, safe_env, [], private_data_dir
)
json_data = json.load(open(env['GCE_CREDENTIALS_FILE_PATH'], 'rb'))
runner_path = env['GCE_CREDENTIALS_FILE_PATH']
local_path = os.path.join(private_data_dir, os.path.basename(runner_path))
json_data = json.load(open(local_path, 'rb'))
assert json_data['type'] == 'service_account'
assert json_data['private_key'] == self.EXAMPLE_PRIVATE_KEY
assert json_data['client_email'] == 'bob'
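The remapping above follows a pattern used throughout these updated tests: the injected env values now point at paths inside the runner container, so the test joins the file's basename onto the host-side private_data_dir before opening it. A minimal sketch of that remapping (the helper name and example paths are ours, not part of the test suite):

import os

def to_host_path(container_path, private_data_dir):
    # map a path written for the runner container back onto the
    # host-side private data directory (illustrative helper only)
    return os.path.join(private_data_dir, os.path.basename(container_path))

# e.g. to_host_path('/runner/env/gce_credentials', '/tmp/awx_123')
# returns '/tmp/awx_123/gce_credentials'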
@@ -1306,7 +1250,11 @@ class TestJobCredentials(TestJobExecution):
credential, env, {}, [], private_data_dir
)
shade_config = open(env['OS_CLIENT_CONFIG_FILE'], 'r').read()
# convert container path to host machine path
config_loc = os.path.join(
private_data_dir, os.path.basename(env['OS_CLIENT_CONFIG_FILE'])
)
shade_config = open(config_loc, 'r').read()
assert shade_config == '\n'.join([
'clouds:',
' devstack:',
@@ -1344,7 +1292,7 @@ class TestJobCredentials(TestJobExecution):
)
config = configparser.ConfigParser()
config.read(env['OVIRT_INI_PATH'])
config.read(os.path.join(private_data_dir, os.path.basename(env['OVIRT_INI_PATH'])))
assert config.get('ovirt', 'ovirt_url') == 'some-ovirt-host.example.org'
assert config.get('ovirt', 'ovirt_username') == 'bob'
assert config.get('ovirt', 'ovirt_password') == 'some-pass'
@@ -1577,7 +1525,7 @@ class TestJobCredentials(TestJobExecution):
credential.credential_type.inject_credential(
credential, {}, {}, args, private_data_dir
)
extra_vars = parse_extra_vars(args)
extra_vars = parse_extra_vars(args, private_data_dir)
assert extra_vars["api_token"] == "ABC123"
assert hasattr(extra_vars["api_token"], '__UNSAFE__')
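parse_extra_vars now also receives private_data_dir at every call site in this file. Its body is not shown in this diff, so the sketch below is only a plausible reading: file-backed -e @<file> arguments get resolved against the host-side private data dir, in the same spirit as the env-file remapping above. It deliberately ignores the unsafe-value marking that the surrounding assertions check.

import os
import yaml

def parse_extra_vars_sketch(args, private_data_dir):
    # hypothetical stand-in for the test helper, for illustration only
    extra_vars = {}
    for i, arg in enumerate(args):
        if arg != '-e':
            continue
        value = args[i + 1]
        if value.startswith('@'):
            # file reference written by the injector; remap onto the host dir
            path = os.path.join(private_data_dir, os.path.basename(value[1:]))
            with open(path) as f:
                extra_vars.update(yaml.safe_load(f))
        else:
            key, _, val = value.partition('=')
            extra_vars[key] = val
    return extra_vars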
@@ -1612,7 +1560,7 @@ class TestJobCredentials(TestJobExecution):
credential.credential_type.inject_credential(
credential, {}, {}, args, private_data_dir
)
extra_vars = parse_extra_vars(args)
extra_vars = parse_extra_vars(args, private_data_dir)
assert extra_vars["turbo_button"] == "True"
return ['successful', 0]
@@ -1647,7 +1595,7 @@ class TestJobCredentials(TestJobExecution):
credential.credential_type.inject_credential(
credential, {}, {}, args, private_data_dir
)
extra_vars = parse_extra_vars(args)
extra_vars = parse_extra_vars(args, private_data_dir)
assert extra_vars["turbo_button"] == "FAST!"
@@ -1687,7 +1635,7 @@ class TestJobCredentials(TestJobExecution):
credential, {}, {}, args, private_data_dir
)
extra_vars = parse_extra_vars(args)
extra_vars = parse_extra_vars(args, private_data_dir)
assert extra_vars["password"] == "SUPER-SECRET-123"
def test_custom_environment_injectors_with_file(self, private_data_dir):
@@ -1722,7 +1670,8 @@ class TestJobCredentials(TestJobExecution):
credential, env, {}, [], private_data_dir
)
assert open(env['MY_CLOUD_INI_FILE'], 'r').read() == '[mycloud]\nABC123'
path = os.path.join(private_data_dir, os.path.basename(env['MY_CLOUD_INI_FILE']))
assert open(path, 'r').read() == '[mycloud]\nABC123'
def test_custom_environment_injectors_with_unicode_content(self, private_data_dir):
value = 'Iñtërnâtiônàlizætiøn'
@@ -1746,7 +1695,8 @@ class TestJobCredentials(TestJobExecution):
credential, env, {}, [], private_data_dir
)
assert open(env['MY_CLOUD_INI_FILE'], 'r').read() == value
path = os.path.join(private_data_dir, os.path.basename(env['MY_CLOUD_INI_FILE']))
assert open(path, 'r').read() == value
def test_custom_environment_injectors_with_files(self, private_data_dir):
some_cloud = CredentialType(
@@ -1786,8 +1736,10 @@ class TestJobCredentials(TestJobExecution):
credential, env, {}, [], private_data_dir
)
assert open(env['MY_CERT_INI_FILE'], 'r').read() == '[mycert]\nCERT123'
assert open(env['MY_KEY_INI_FILE'], 'r').read() == '[mykey]\nKEY123'
cert_path = os.path.join(private_data_dir, os.path.basename(env['MY_CERT_INI_FILE']))
key_path = os.path.join(private_data_dir, os.path.basename(env['MY_KEY_INI_FILE']))
assert open(cert_path, 'r').read() == '[mycert]\nCERT123'
assert open(key_path, 'r').read() == '[mykey]\nKEY123'
def test_multi_cloud(self, private_data_dir):
gce = CredentialType.defaults['gce']()
@@ -1826,7 +1778,8 @@ class TestJobCredentials(TestJobExecution):
assert env['AZURE_AD_USER'] == 'bob'
assert env['AZURE_PASSWORD'] == 'secret'
json_data = json.load(open(env['GCE_CREDENTIALS_FILE_PATH'], 'rb'))
path = os.path.join(private_data_dir, os.path.basename(env['GCE_CREDENTIALS_FILE_PATH']))
json_data = json.load(open(path, 'rb'))
assert json_data['type'] == 'service_account'
assert json_data['private_key'] == self.EXAMPLE_PRIVATE_KEY
assert json_data['client_email'] == 'bob'
@@ -1971,29 +1924,6 @@ class TestProjectUpdateCredentials(TestJobExecution):
]
}
def test_process_isolation_exposes_projects_root(self, private_data_dir, project_update):
task = tasks.RunProjectUpdate()
task.revision_path = 'foobar'
task.instance = project_update
ssh = CredentialType.defaults['ssh']()
project_update.scm_type = 'git'
project_update.credential = Credential(
pk=1,
credential_type=ssh,
)
process_isolation = task.build_params_process_isolation(job, private_data_dir, 'cwd')
assert process_isolation['process_isolation'] is True
assert settings.PROJECTS_ROOT in process_isolation['process_isolation_show_paths']
task._write_extra_vars_file = mock.Mock()
with mock.patch.object(Licenser, 'validate', lambda *args, **kw: {}):
task.build_extra_vars_file(project_update, private_data_dir)
call_args, _ = task._write_extra_vars_file.call_args_list[0]
_, extra_vars = call_args
def test_username_and_password_auth(self, project_update, scm_type):
task = tasks.RunProjectUpdate()
ssh = CredentialType.defaults['ssh']()
@@ -2107,7 +2037,8 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert '-i' in ' '.join(args)
script = args[args.index('-i') + 1]
with open(script, 'r') as f:
host_script = script.replace('/runner', private_data_dir)
with open(host_script, 'r') as f:
assert f.read() == inventory_update.source_script.script
assert env['FOO'] == 'BAR'
if with_credential:
@@ -2307,7 +2238,8 @@ class TestInventoryUpdateCredentials(TestJobExecution):
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
shade_config = open(env['OS_CLIENT_CONFIG_FILE'], 'r').read()
path = os.path.join(private_data_dir, os.path.basename(env['OS_CLIENT_CONFIG_FILE']))
shade_config = open(path, 'r').read()
assert '\n'.join([
'clouds:',
' devstack:',


@@ -9,9 +9,6 @@ import json
import yaml
from unittest import mock
from backports.tempfile import TemporaryDirectory
from django.conf import settings
from rest_framework.exceptions import ParseError
from awx.main.utils import common
@@ -194,24 +191,3 @@ def test_extract_ansible_vars():
redacted, var_list = common.extract_ansible_vars(json.dumps(my_dict))
assert var_list == set(['ansible_connetion_setting'])
assert redacted == {"foobar": "baz"}
def test_get_custom_venv_choices():
bundled_venv = os.path.join(settings.BASE_VENV_PATH, 'ansible', '')
assert sorted(common.get_custom_venv_choices()) == [bundled_venv]
with TemporaryDirectory(dir=settings.BASE_VENV_PATH, prefix='tmp') as temp_dir:
os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
custom_venv_dir = os.path.join(temp_dir, 'custom')
custom_venv_1 = os.path.join(custom_venv_dir, 'venv-1')
custom_venv_awx = os.path.join(custom_venv_dir, 'custom', 'awx')
os.makedirs(os.path.join(custom_venv_1, 'bin', 'activate'))
os.makedirs(os.path.join(custom_venv_awx, 'bin', 'activate'))
assert sorted(common.get_custom_venv_choices([custom_venv_dir])) == [
bundled_venv,
os.path.join(temp_dir, ''),
os.path.join(custom_venv_1, '')
]


@@ -55,7 +55,8 @@ __all__ = [
'model_instance_diff', 'parse_yaml_or_json', 'RequireDebugTrueOrTest',
'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError',
'get_custom_venv_choices', 'get_external_account', 'task_manager_bulk_reschedule',
'schedule_task_manager', 'classproperty', 'create_temporary_fifo', 'truncate_stdout'
'schedule_task_manager', 'classproperty', 'create_temporary_fifo', 'truncate_stdout',
'deepmerge'
]
@@ -1079,3 +1080,21 @@ def truncate_stdout(stdout, size):
set_count += 1
return stdout + u'\u001b[0m' * (set_count - reset_count)
def deepmerge(a, b):
"""
Merge dict structures and return the result.
>>> a = {'first': {'all_rows': {'pass': 'dog', 'number': '1'}}}
>>> b = {'first': {'all_rows': {'fail': 'cat', 'number': '5'}}}
>>> import pprint; pprint.pprint(deepmerge(a, b))
{'first': {'all_rows': {'fail': 'cat', 'number': '5', 'pass': 'dog'}}}
"""
if isinstance(a, dict) and isinstance(b, dict):
return dict([(k, deepmerge(a.get(k), b.get(k)))
for k in set(a.keys()).union(b.keys())])
elif b is None:
return a
else:
return b


@@ -18,6 +18,7 @@ def construct_rsyslog_conf_template(settings=settings):
timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
max_disk_space = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1)
spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')
if not os.access(spool_directory, os.W_OK):
spool_directory = '/var/lib/awx'
@@ -31,7 +32,7 @@ def construct_rsyslog_conf_template(settings=settings):
'$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
'module(load="imuxsock" SysSock.Use="off")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
'template(name="awx" type="string" string="%rawmsg-after-pri%")',
])
@@ -74,9 +75,10 @@ def construct_rsyslog_conf_template(settings=settings):
f'skipverifyhost="{skip_verify}"',
'action.resumeRetryCount="-1"',
'template="awx"',
'errorfile="/var/log/tower/rsyslog.err"',
f'action.resumeInterval="{timeout}"'
]
if error_log_file:
params.append(f'errorfile="{error_log_file}"')
if parsed.path:
path = urlparse.quote(parsed.path[1:], safe='/=')
if parsed.query:
@@ -114,7 +116,7 @@ def construct_rsyslog_conf_template(settings=settings):
def reconfigure_rsyslog():
tmpl = construct_rsyslog_conf_template()
# Write config to a temp file then move it to preserve atomicity
with tempfile.TemporaryDirectory(prefix='rsyslog-conf-') as temp_dir:
with tempfile.TemporaryDirectory(dir='/var/lib/awx/rsyslog/', prefix='rsyslog-conf-') as temp_dir:
path = temp_dir + '/rsyslog.conf.temp'
with open(path, 'w') as f:
os.chmod(path, 0o640)
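The dir= change above keeps the temporary file on the same filesystem as the rsyslog drop-in directory, which is what makes the follow-up move atomic; os.rename() cannot cross filesystems. A minimal sketch of the write-then-rename pattern (the destination path here is illustrative):

import os
import tempfile

def write_config_atomically(content, dest='/var/lib/awx/rsyslog/rsyslog.conf'):
    # create the temp file next to the destination so the final rename
    # stays on one filesystem and is therefore atomic
    with tempfile.TemporaryDirectory(dir=os.path.dirname(dest), prefix='rsyslog-conf-') as temp_dir:
        tmp_path = os.path.join(temp_dir, 'rsyslog.conf.temp')
        with open(tmp_path, 'w') as f:
            os.chmod(tmp_path, 0o640)
            f.write(content)
        os.rename(tmp_path, dest)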


@@ -15,6 +15,10 @@ from django.apps import apps
from django.db import models
from django.conf import settings
from django_guid.log_filters import CorrelationId
from django_guid.middleware import GuidMiddleware
from awx import MODE
from awx.main.constants import LOGGER_BLOCKLIST
from awx.main.utils.common import get_search_fields
@@ -364,3 +368,14 @@ class SmartFilter(object):
return res[0].result
raise RuntimeError("Parsing the filter_string %s went terribly wrong" % filter_string)
class DefaultCorrelationId(CorrelationId):
def filter(self, record):
guid = GuidMiddleware.get_guid() or '-'
if MODE == 'development':
guid = guid[:8]
record.guid = guid
return True
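DefaultCorrelationId pairs with the 'guid' filter and the [%(guid)s] placeholders added to the logging settings later in this diff: the filter stamps every record with a guid attribute so the formatter always has a value to print. A self-contained sketch of the same mechanism (logger and handler names here are made up for the example):

import logging

class CorrelationIdFilter(logging.Filter):
    # illustrative stand-in: always attach a .guid to the record
    def __init__(self, get_guid=lambda: None):
        super().__init__()
        self.get_guid = get_guid

    def filter(self, record):
        record.guid = self.get_guid() or '-'
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s [%(guid)s] %(name)s %(message)s'))
handler.addFilter(CorrelationIdFilter())
demo_logger = logging.getLogger('demo')
demo_logger.addHandler(handler)
demo_logger.warning('request handled')  # prints "... [-] demo request handled"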


@@ -3,6 +3,7 @@
from copy import copy
import json
import json_log_formatter
import logging
import traceback
import socket
@@ -14,6 +15,15 @@ from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
class JobLifeCycleFormatter(json_log_formatter.JSONFormatter):
def json_record(self, message: str, extra: dict, record: logging.LogRecord):
if 'time' not in extra:
extra['time'] = now()
if record.exc_info:
extra['exc_info'] = self.formatException(record.exc_info)
return extra
class TimeFormatter(logging.Formatter):
'''
Custom log formatter used for inventory imports
@@ -144,6 +154,9 @@ class LogstashFormatter(LogstashFormatterBase):
if kind == 'job_events' and raw_data.get('python_objects', {}).get('job_event'):
job_event = raw_data['python_objects']['job_event']
guid = job_event.event_data.pop('guid', None)
if guid:
data_for_log['guid'] = guid
for field_object in job_event._meta.fields:
if not field_object.__class__ or not field_object.__class__.__name__:


@@ -3,7 +3,8 @@
# Python
import logging
import os.path
import sys
import traceback
# Django
from django.conf import settings
@@ -21,27 +22,31 @@ class RSysLogHandler(logging.handlers.SysLogHandler):
super(RSysLogHandler, self)._connect_unixsocket(address)
self.socket.setblocking(False)
def handleError(self, record):
# for any number of reasons, rsyslogd has gone to lunch; this usually
# means that it's just been restarted (due to a configuration change);
# unfortunately, we can't log that because...rsyslogd is down (and
# would just put us back down this code path)
# as a fallback, it makes the most sense to just write the messages
# to sys.stderr (which will end up in supervisord logs and, in
# containerized installs, cascade down to pod logs), because the
# alternative is blocking the socket.send() in the Python process,
# which we definitely don't want to do
msg = f'{record.asctime} ERROR rsyslogd was unresponsive: '
exc = traceback.format_exc()
try:
msg += exc.splitlines()[-1]
except Exception:
msg += exc
msg = '\n'.join([msg, record.msg, ''])
sys.stderr.write(msg)
def emit(self, msg):
if not settings.LOG_AGGREGATOR_ENABLED:
return
if not os.path.exists(settings.LOGGING['handlers']['external_logger']['address']):
return
try:
return super(RSysLogHandler, self).emit(msg)
except ConnectionRefusedError:
# rsyslogd has gone to lunch; this generally means that it's just
# been restarted (due to a configuration change)
# unfortunately, we can't log that because...rsyslogd is down (and
# would just put us back down this code path)
pass
except BlockingIOError:
# for <some reason>, rsyslogd is no longer reading from the domain socket, and
# we're unable to write any more to it without blocking (we've seen this behavior
# from time to time when logging is totally misconfigured;
# in this scenario, it also makes more sense to just drop the messages,
# because the alternative is blocking the socket.send() in the
# Python process, which we definitely don't want to do)
pass
return super(RSysLogHandler, self).emit(msg)
class SpecialInventoryHandler(logging.Handler):
@@ -103,6 +108,15 @@ if settings.COLOR_LOGS is True:
from logutils.colorize import ColorizingStreamHandler
class ColorHandler(ColorizingStreamHandler):
def colorize(self, line, record):
# comment out this method if you don't like the job_lifecycle
# logs rendered with cyan text
previous_level_map = self.level_map.copy()
if record.name == "awx.analytics.job_lifecycle":
self.level_map[logging.DEBUG] = (None, 'cyan', True)
msg = super(ColorHandler, self).colorize(line, record)
self.level_map = previous_level_map
return msg
def format(self, record):
message = logging.StreamHandler.format(self, record)


@@ -24,9 +24,7 @@
tasks:
- name: delete project directory before update
file:
path: "{{project_path|quote}}"
state: absent
command: "rm -rf {{project_path}}/*" # volume mounted, cannot delete folder itself
tags:
- delete
@@ -57,6 +55,8 @@
force: "{{scm_clean}}"
username: "{{scm_username|default(omit)}}"
password: "{{scm_password|default(omit)}}"
# must be in_place because the folder pre-exists (it is a mounted volume)
in_place: true
environment:
LC_ALL: 'en_US.UTF-8'
register: svn_result
@@ -206,6 +206,9 @@
ANSIBLE_FORCE_COLOR: false
ANSIBLE_COLLECTIONS_PATHS: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections"
GIT_SSH_COMMAND: "ssh -o StrictHostKeyChecking=no"
# Put the local tmp directory in the same volume as the collection destination;
# otherwise, files cannot be moved across volumes and the task will error
ANSIBLE_LOCAL_TEMP: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/tmp"
when:
- "ansible_version.full is version_compare('2.9', '>=')"


@@ -1,6 +1,7 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import base64
import os
import re # noqa
import sys
@@ -58,11 +59,23 @@ DATABASES = {
}
}
# Whether or not the deployment is a K8S-based deployment
# In K8S-based deployments, instances have zero capacity - all playbook
# automation is intended to flow through defined Container Groups that
# interface with some (or some set of) K8S api (which may or may not include
# the K8S cluster where awx itself is running)
IS_K8S = False
# TODO: remove this setting in favor of a default execution environment
AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE = 'quay.io/ansible/awx-ee'
AWX_CONTAINER_GROUP_K8S_API_TIMEOUT = 10
AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES = 100
AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY = 5
AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE = 'default'
AWX_CONTAINER_GROUP_DEFAULT_IMAGE = 'ansible/ansible-runner'
AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE = os.getenv('MY_POD_NAMESPACE', 'default')
# TODO: remove this setting in favor of a default execution environment
AWX_CONTAINER_GROUP_DEFAULT_IMAGE = AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE
# Internationalization
# https://docs.djangoproject.com/en/dev/topics/i18n/
@@ -148,7 +161,10 @@ SCHEDULE_MAX_JOBS = 10
SITE_ID = 1
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'p7z7g1ql4%6+(6nlebb6hdk7sd^&fnjpal308%n%+p^_e6vo1y'
if os.path.exists('/etc/tower/SECRET_KEY'):
SECRET_KEY = open('/etc/tower/SECRET_KEY', 'rb').read().strip()
else:
SECRET_KEY = base64.encodebytes(os.urandom(32)).decode().rstrip()
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
@@ -169,6 +185,7 @@ REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST']
PROXY_IP_ALLOWED_LIST = []
CUSTOM_VENV_PATHS = []
DEFAULT_EXECUTION_ENVIRONMENT = None
# Note: This setting may be overridden by database settings.
STDOUT_MAX_BYTES_DISPLAY = 1048576
@@ -262,6 +279,7 @@ TEMPLATES = [
'DIRS': [
os.path.join(BASE_DIR, 'templates'),
os.path.join(BASE_DIR, 'ui_next', 'build'),
os.path.join(BASE_DIR, 'ui_next', 'public')
],
},
]
@@ -284,11 +302,13 @@ INSTALLED_APPS = [
'polymorphic',
'taggit',
'social_django',
'django_guid',
'corsheaders',
'awx.conf',
'awx.main',
'awx.api',
'awx.ui',
'awx.ui_next',
'awx.sso',
'solo'
]
@@ -344,6 +364,9 @@ AUTHENTICATION_BACKENDS = (
'social_core.backends.github.GithubOAuth2',
'social_core.backends.github.GithubOrganizationOAuth2',
'social_core.backends.github.GithubTeamOAuth2',
'social_core.backends.github_enterprise.GithubEnterpriseOAuth2',
'social_core.backends.github_enterprise.GithubEnterpriseOrganizationOAuth2',
'social_core.backends.github_enterprise.GithubEnterpriseTeamOAuth2',
'social_core.backends.azuread.AzureADOAuth2',
'awx.sso.backends.SAMLAuth',
'django.contrib.auth.backends.ModelBackend',
@@ -520,6 +543,20 @@ SOCIAL_AUTH_GITHUB_TEAM_SECRET = ''
SOCIAL_AUTH_GITHUB_TEAM_ID = ''
SOCIAL_AUTH_GITHUB_TEAM_SCOPE = ['user:email', 'read:org']
SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_SCOPE = ['user:email', 'read:org']
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SCOPE = ['user:email', 'read:org']
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SCOPE = ['user:email', 'read:org']
SOCIAL_AUTH_AZUREAD_OAUTH2_KEY = ''
SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET = ''
@@ -655,7 +692,7 @@ AD_HOC_COMMANDS = [
'win_user',
]
INV_ENV_VARIABLE_BLOCKED = ("HOME", "USER", "_", "TERM")
INV_ENV_VARIABLE_BLOCKED = ("HOME", "USER", "_", "TERM", "PATH")
# ----------------
# -- Amazon EC2 --
@@ -718,10 +755,10 @@ TOWER_INSTANCE_ID_VAR = 'remote_tower_id'
# ---------------------
# ----- Foreman -----
# ---------------------
SATELLITE6_ENABLED_VAR = 'foreman.enabled'
SATELLITE6_ENABLED_VAR = 'foreman_enabled'
SATELLITE6_ENABLED_VALUE = 'True'
SATELLITE6_EXCLUDE_EMPTY_GROUPS = True
SATELLITE6_INSTANCE_ID_VAR = 'foreman.id'
SATELLITE6_INSTANCE_ID_VAR = 'foreman_id'
# SATELLITE6_GROUP_PREFIX and SATELLITE6_GROUP_PATTERNS defined in source vars
# ---------------------
@@ -759,6 +796,8 @@ TOWER_URL_BASE = "https://towerhost"
INSIGHTS_URL_BASE = "https://example.org"
INSIGHTS_AGENT_MIME = 'application/example'
# See https://github.com/ansible/awx-facts-playbooks
INSIGHTS_SYSTEM_ID_FILE='/etc/redhat-access-insights/machine-id'
TOWER_SETTINGS_MANIFEST = {}
@@ -770,6 +809,7 @@ LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False
LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE = '/var/log/tower/rsyslog.err'
# The number of retry attempts for websocket session establishment
# If you're encountering issues establishing websockets in clustered Tower,
@@ -808,11 +848,14 @@ LOGGING = {
},
'dynamic_level_filter': {
'()': 'awx.main.utils.filters.DynamicLevelFilter'
}
},
'guid': {
'()': 'awx.main.utils.filters.DefaultCorrelationId'
},
},
'formatters': {
'simple': {
'format': '%(asctime)s %(levelname)-8s %(name)s %(message)s',
'format': '%(asctime)s %(levelname)-8s [%(guid)s] %(name)s %(message)s',
},
'json': {
'()': 'awx.main.utils.formatters.LogstashFormatter'
@@ -822,14 +865,17 @@ LOGGING = {
'format': '%(relativeSeconds)9.3f %(levelname)-8s %(message)s'
},
'dispatcher': {
'format': '%(asctime)s %(levelname)-8s %(name)s PID:%(process)d %(message)s',
'format': '%(asctime)s %(levelname)-8s [%(guid)s] %(name)s PID:%(process)d %(message)s',
},
'job_lifecycle': {
'()': 'awx.main.utils.formatters.JobLifeCycleFormatter',
},
},
'handlers': {
'console': {
'()': 'logging.StreamHandler',
'level': 'DEBUG',
'filters': ['require_debug_true_or_test'],
'filters': ['require_debug_true_or_test', 'guid'],
'formatter': 'simple',
},
'null': {
@@ -849,42 +895,34 @@ LOGGING = {
'class': 'awx.main.utils.handlers.RSysLogHandler',
'formatter': 'json',
'address': '/var/run/awx-rsyslog/rsyslog.sock',
'filters': ['external_log_enabled', 'dynamic_level_filter'],
'filters': ['external_log_enabled', 'dynamic_level_filter', 'guid'],
},
'tower_warnings': {
# don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
'class': 'logging.handlers.RotatingFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter'],
'class': 'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter', 'guid'],
'filename': os.path.join(LOG_ROOT, 'tower.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'callback_receiver': {
# don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
'class': 'logging.handlers.RotatingFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter'],
'class': 'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter', 'guid'],
'filename': os.path.join(LOG_ROOT, 'callback_receiver.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'dispatcher': {
# don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
'class': 'logging.handlers.RotatingFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter'],
'class': 'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter', 'guid'],
'filename': os.path.join(LOG_ROOT, 'dispatcher.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'dispatcher',
},
'wsbroadcast': {
# don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
'class': 'logging.handlers.RotatingFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter'],
'class': 'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter', 'guid'],
'filename': os.path.join(LOG_ROOT, 'wsbroadcast.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'celery.beat': {
@@ -898,48 +936,44 @@ LOGGING = {
},
'task_system': {
# don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
'class': 'logging.handlers.RotatingFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter'],
'class': 'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false', 'dynamic_level_filter', 'guid'],
'filename': os.path.join(LOG_ROOT, 'task_system.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'management_playbooks': {
'level': 'DEBUG',
'class':'logging.handlers.RotatingFileHandler',
'class':'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false'],
'filename': os.path.join(LOG_ROOT, 'management_playbooks.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'system_tracking_migrations': {
'level': 'WARNING',
'class':'logging.handlers.RotatingFileHandler',
'class':'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false'],
'filename': os.path.join(LOG_ROOT, 'tower_system_tracking_migrations.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'rbac_migrations': {
'level': 'WARNING',
'class':'logging.handlers.RotatingFileHandler',
'class':'logging.handlers.WatchedFileHandler',
'filters': ['require_debug_false'],
'filename': os.path.join(LOG_ROOT, 'tower_rbac_migrations.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'isolated_manager': {
'level': 'WARNING',
'class':'logging.handlers.RotatingFileHandler',
'class':'logging.handlers.WatchedFileHandler',
'filename': os.path.join(LOG_ROOT, 'isolated_manager.log'),
'maxBytes': 1024 * 1024 * 5, # 5 MB
'backupCount': 5,
'formatter':'simple',
},
'job_lifecycle': {
'level': 'DEBUG',
'class':'logging.handlers.WatchedFileHandler',
'filename': os.path.join(LOG_ROOT, 'job_lifecycle.log'),
'formatter': 'job_lifecycle',
},
},
'loggers': {
'django': {
@@ -1029,6 +1063,16 @@ LOGGING = {
'level': 'INFO',
'propagate': False
},
'awx.analytics.performance': {
'handlers': ['console', 'file', 'tower_warnings', 'external_logger'],
'level': 'DEBUG',
'propagate': False
},
'awx.analytics.job_lifecycle': {
'handlers': ['console', 'job_lifecycle'],
'level': 'DEBUG',
'propagate': False
},
'django_auth_ldap': {
'handlers': ['console', 'file', 'tower_warnings'],
'level': 'DEBUG',
@@ -1078,6 +1122,7 @@ AWX_CALLBACK_PROFILE = False
AWX_CLEANUP_PATHS = True
MIDDLEWARE = [
'django_guid.middleware.GuidMiddleware',
'awx.main.middleware.TimingMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'awx.main.middleware.MigrationRanCheckMiddleware',
@@ -1118,3 +1163,7 @@ BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS = 10
# How often websocket process will generate stats
BROADCAST_WEBSOCKET_STATS_POLL_RATE_SECONDS = 5
DJANGO_GUID = {
'GUID_HEADER_NAME': 'X-API-Request-Id',
}
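Taken together, these settings changes register the django_guid middleware, attach a 'guid' filter to the console, file, and external handlers, and give awx.analytics.job_lifecycle its own JSON-formatted WatchedFileHandler. A short illustration of emitting into that lifecycle logger (the extra field names are made up for the example):

import logging

lifecycle_logger = logging.getLogger('awx.analytics.job_lifecycle')

def log_job_transition(job_id, state):
    # extra fields become keys in the JSON record built by
    # JobLifeCycleFormatter; 'time' and 'exc_info' are filled in by the
    # formatter itself when missing
    lifecycle_logger.debug('job state change', extra={'job_id': job_id, 'state': state})

log_job_transition(42, 'running')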


@@ -21,13 +21,6 @@ from split_settings.tools import optional, include
# Load default settings.
from .defaults import * # NOQA
if "pytest" in sys.modules:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'unique-{}'.format(str(uuid.uuid4())),
},
}
# awx-manage shell_plus --notebook
NOTEBOOK_ARGUMENTS = [
@@ -53,6 +46,10 @@ LOGGING['loggers']['awx.isolated.manager.playbooks']['propagate'] = True # noqa
# celery is annoyingly loud when docker containers start
LOGGING['loggers'].pop('celery', None) # noqa
# avoid awx.main.dispatch WARNING-level scaling worker up/down messages
LOGGING['loggers']['awx.main.dispatch']['level'] = 'ERROR' # noqa
# suppress the spamminess of the awx.main.scheduler and .tasks loggers
LOGGING['loggers']['awx']['level'] = 'INFO' # noqa
ALLOWED_HOSTS = ['*']
@@ -158,11 +155,35 @@ AWX_VENV_PATH = os.path.join(BASE_VENV_PATH, "awx")
# default settings for development. If not present, we can still run using
# only the defaults.
try:
include(optional('local_*.py'), scope=locals())
if os.getenv('AWX_KUBE_DEVEL', False):
include(optional('minikube.py'), scope=locals())
else:
include(optional('local_*.py'), scope=locals())
except ImportError:
traceback.print_exc()
sys.exit(1)
# Use SQLite for unit tests instead of PostgreSQL. If the lines below are
# commented out, Django will create the test_awx-dev database in PostgreSQL to
# run unit tests.
if "pytest" in sys.modules:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'unique-{}'.format(str(uuid.uuid4())),
},
}
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'awx.sqlite3'), # noqa
'TEST': {
# Test database cannot be :memory: for inventory tests.
'NAME': os.path.join(BASE_DIR, 'awx_test.sqlite3'), # noqa
},
}
}
CELERYBEAT_SCHEDULE.update({ # noqa
'isolated_heartbeat': {
@@ -174,15 +195,6 @@ CELERYBEAT_SCHEDULE.update({ # noqa
CLUSTER_HOST_ID = socket.gethostname()
if 'Docker Desktop' in os.getenv('OS', ''):
os.environ['SDB_NOTIFY_HOST'] = 'docker.for.mac.host.internal'
else:
try:
os.environ['SDB_NOTIFY_HOST'] = os.popen('ip route').read().split(' ')[2]
except Exception:
pass
AWX_CALLBACK_PROFILE = True
if 'sqlite3' not in DATABASES['default']['ENGINE']: # noqa

awx/settings/minikube.py (new file)

@@ -0,0 +1,4 @@
BROADCAST_WEBSOCKET_SECRET = '🤖starscream🤖'
BROADCAST_WEBSOCKET_PORT = 8013
BROADCAST_WEBSOCKET_VERIFY_CERT = False
BROADCAST_WEBSOCKET_PROTOCOL = 'http'


@@ -842,6 +842,298 @@ register(
placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
)
###############################################################################
# GITHUB ENTERPRISE OAUTH2 AUTHENTICATION SETTINGS
###############################################################################
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_CALLBACK_URL',
field_class=fields.CharField,
read_only=True,
default=SocialAuthCallbackURL('github-enterprise'),
label=_('GitHub Enterprise OAuth2 Callback URL'),
help_text=_('Provide this URL as the callback URL for your application as part '
'of your registration process. Refer to the Ansible Tower '
'documentation for more detail.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise',
depends_on=['TOWER_URL_BASE'],
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise URL'),
help_text=_('The URL for your Github Enterprise instance, e.g.: http(s)://hostname/. Refer to Github Enterprise '
'documentation for more details.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise API URL'),
help_text=_('The API URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/api/v3/. Refer to Github '
'Enterprise documentation for more details.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise OAuth2 Key'),
help_text=_('The OAuth2 key (Client ID) from your GitHub Enterprise developer application.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise OAuth2 Secret'),
help_text=_('The OAuth2 secret (Client Secret) from your GitHub Enterprise developer application.'),
category=_('GitHub OAuth2'),
category_slug='github-enterprise',
encrypted=True,
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORGANIZATION_MAP',
field_class=SocialOrganizationMapField,
allow_null=True,
default=None,
label=_('GitHub Enterprise OAuth2 Organization Map'),
help_text=SOCIAL_AUTH_ORGANIZATION_MAP_HELP_TEXT,
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise',
placeholder=SOCIAL_AUTH_ORGANIZATION_MAP_PLACEHOLDER,
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_MAP',
field_class=SocialTeamMapField,
allow_null=True,
default=None,
label=_('GitHub Enterprise OAuth2 Team Map'),
help_text=SOCIAL_AUTH_TEAM_MAP_HELP_TEXT,
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise',
placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
)
###############################################################################
# GITHUB ENTERPRISE ORG OAUTH2 AUTHENTICATION SETTINGS
###############################################################################
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_CALLBACK_URL',
field_class=fields.CharField,
read_only=True,
default=SocialAuthCallbackURL('github-enterprise-org'),
label=_('GitHub Enterprise Organization OAuth2 Callback URL'),
help_text=_('Provide this URL as the callback URL for your application as part '
'of your registration process. Refer to the Ansible Tower '
'documentation for more detail.'),
category=_('GitHub Enterprise Organization OAuth2'),
category_slug='github-enterprise-org',
depends_on=['TOWER_URL_BASE'],
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_URL',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Organization URL'),
help_text=_('The URL for your Github Enterprise instance, e.g.: http(s)://hostname/. Refer to Github Enterprise '
'documentation for more details.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise-org',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_API_URL',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Organization API URL'),
help_text=_('The API URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/api/v3/. Refer to Github '
'Enterprise documentation for more details.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise-org',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Organization OAuth2 Key'),
help_text=_('The OAuth2 key (Client ID) from your GitHub Enterprise organization application.'),
category=_('GitHub Enterprise Organization OAuth2'),
category_slug='github-enterprise-org',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Organization OAuth2 Secret'),
help_text=_('The OAuth2 secret (Client Secret) from your GitHub Enterprise organization application.'),
category=_('GitHub Enterprise Organization OAuth2'),
category_slug='github-enterprise-org',
encrypted=True,
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Organization Name'),
help_text=_('The name of your GitHub Enterprise organization, as used in your '
'organization\'s URL: https://github.com/<yourorg>/.'),
category=_('GitHub Enterprise Organization OAuth2'),
category_slug='github-enterprise-org',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_ORGANIZATION_MAP',
field_class=SocialOrganizationMapField,
allow_null=True,
default=None,
label=_('GitHub Enterprise Organization OAuth2 Organization Map'),
help_text=SOCIAL_AUTH_ORGANIZATION_MAP_HELP_TEXT,
category=_('GitHub Enterprise Organization OAuth2'),
category_slug='github-enterprise-org',
placeholder=SOCIAL_AUTH_ORGANIZATION_MAP_PLACEHOLDER,
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_TEAM_MAP',
field_class=SocialTeamMapField,
allow_null=True,
default=None,
label=_('GitHub Enterprise Organization OAuth2 Team Map'),
help_text=SOCIAL_AUTH_TEAM_MAP_HELP_TEXT,
category=_('GitHub Enterprise Organization OAuth2'),
category_slug='github-enterprise-org',
placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
)
###############################################################################
# GITHUB ENTERPRISE TEAM OAUTH2 AUTHENTICATION SETTINGS
###############################################################################
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_CALLBACK_URL',
field_class=fields.CharField,
read_only=True,
default=SocialAuthCallbackURL('github-enterprise-team'),
label=_('GitHub Enterprise Team OAuth2 Callback URL'),
help_text=_('Create an organization-owned application at '
'https://github.com/organizations/<yourorg>/settings/applications '
'and obtain an OAuth2 key (Client ID) and secret (Client Secret). '
'Provide this URL as the callback URL for your application.'),
category=_('GitHub Enterprise Team OAuth2'),
category_slug='github-enterprise-team',
depends_on=['TOWER_URL_BASE'],
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_URL',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Team URL'),
help_text=_('The URL for your Github Enterprise instance, e.g.: http(s)://hostname/. Refer to Github Enterprise '
'documentation for more details.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise-team',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_API_URL',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Team API URL'),
help_text=_('The API URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/api/v3/. Refer to Github '
'Enterprise documentation for more details.'),
category=_('GitHub Enterprise OAuth2'),
category_slug='github-enterprise-team',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Team OAuth2 Key'),
help_text=_('The OAuth2 key (Client ID) from your GitHub Enterprise organization application.'),
category=_('GitHub Enterprise Team OAuth2'),
category_slug='github-enterprise-team',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Team OAuth2 Secret'),
help_text=_('The OAuth2 secret (Client Secret) from your GitHub Enterprise organization application.'),
category=_('GitHub Enterprise Team OAuth2'),
category_slug='github-enterprise-team',
encrypted=True,
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID',
field_class=fields.CharField,
allow_blank=True,
default='',
label=_('GitHub Enterprise Team ID'),
help_text=_('Find the numeric team ID using the Github Enterprise API: '
'http://fabian-kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/.'),
category=_('GitHub Enterprise Team OAuth2'),
category_slug='github-enterprise-team',
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ORGANIZATION_MAP',
field_class=SocialOrganizationMapField,
allow_null=True,
default=None,
label=_('GitHub Enterprise Team OAuth2 Organization Map'),
help_text=SOCIAL_AUTH_ORGANIZATION_MAP_HELP_TEXT,
category=_('GitHub Enterprise Team OAuth2'),
category_slug='github-enterprise-team',
placeholder=SOCIAL_AUTH_ORGANIZATION_MAP_PLACEHOLDER,
)
register(
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_TEAM_MAP',
field_class=SocialTeamMapField,
allow_null=True,
default=None,
label=_('GitHub Enterprise Team OAuth2 Team Map'),
help_text=SOCIAL_AUTH_TEAM_MAP_HELP_TEXT,
category=_('GitHub Enterprise Team OAuth2'),
category_slug='github-enterprise-team',
placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
)
###############################################################################
# MICROSOFT AZURE ACTIVE DIRECTORY SETTINGS
###############################################################################

Some files were not shown because too many files have changed in this diff.