Compare commits

...

108 Commits

Author SHA1 Message Date
softwarefactory-project-zuul[bot]
72de660ea1 Merge pull request #6880 from ryanpetrello/fix-awxkit-new-pip
Keep awxkit's requirements on the setup.py

Reviewed-by: Jacob Callahan (https://github.com/JacobCallahan)
2020-04-29 19:24:57 +00:00
softwarefactory-project-zuul[bot]
042c854ad3 Merge pull request #6877 from ryanpetrello/install-doc-cleanup
cleanup and clarify some installation instructions

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-29 18:53:31 +00:00
Ryan Petrello
9aed49e146 update the 11.2.0 changelog 2020-04-29 14:49:09 -04:00
Elyézer Rezende
5c91f66316 Keep awxkit's requirements on the setup.py
awxkit's setup.py was using pip's internal structures to parse the
requirements.txt file. This is risky because those internals can change,
and that is exactly what just happened.

To avoid this in the future, move the list of requirements into setup.py
and make requirements.txt list `.` as its only item. This keeps a single
place to update requirements and avoids reaching into pip's internals.
2020-04-29 14:47:21 -04:00
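A minimal sketch of the pattern described above, assuming illustrative package metadata and requirement entries (not awxkit's actual dependency list):

    # setup.py: declare the requirements directly, instead of parsing
    # requirements.txt through pip internals.
    from setuptools import setup, find_packages

    setup(
        name='awxkit',              # package name from the commit above
        version='0.0.0',            # illustrative metadata
        packages=find_packages(),
        install_requires=[
            'PyYAML',               # placeholder entries, not the real list
            'requests',
        ],
    )

requirements.txt then contains only `.`, so installing from it simply installs the package itself (and with it the requirements declared in setup.py), leaving one place to maintain the list.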
softwarefactory-project-zuul[bot]
5e223db945 Merge pull request #6808 from AlexSCorey/6694-WebhookDataOnJTDEETs
Adds webhook fields to job template details

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-29 18:43:53 +00:00
Ryan Petrello
67e2f19aaf cleanup and clarify some installation instructions 2020-04-29 14:09:04 -04:00
softwarefactory-project-zuul[bot]
e940f1f7df Merge pull request #6849 from ggiinnoo/ssl-seperate-key
Added variable to inventory to add a separate SSL key file

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-29 14:38:32 +00:00
softwarefactory-project-zuul[bot]
ebee75e91e Merge pull request #6868 from john-westcott-iv/sdb_notigy_host_ox_x
Set SDB_NOTIFY_HOST on non-docker OS X

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-29 14:01:04 +00:00
softwarefactory-project-zuul[bot]
e641433c68 Merge pull request #6860 from ryanpetrello/version-11-2-0
bump version 11.2.0

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-29 00:31:30 +00:00
softwarefactory-project-zuul[bot]
c5dc03d4a9 Merge pull request #6867 from AlexSCorey/TestDebug
Fixes failing test

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-28 23:18:33 +00:00
John Westcott IV
3a6a7c100a Going for a more generic 'just pass' option 2020-04-28 16:47:08 -04:00
John Westcott IV
cd07305c08 Set SDB_NOTIFY_HOST on non-docker OS X 2020-04-28 16:37:35 -04:00
Alex Corey
c39423d5ca Fixes failing test 2020-04-28 16:26:46 -04:00
Ryan Petrello
3f00329abd bump version 11.2.0 2020-04-28 13:55:59 -04:00
softwarefactory-project-zuul[bot]
def3114b5b Merge pull request #6856 from ryanpetrello/devel
merge in a few downstream fixes

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-28 17:21:28 +00:00
softwarefactory-project-zuul[bot]
fcc99baaa8 Merge pull request #6848 from vandreykiv/devel
Updated ingress template for kubernetes to support HTTP and HTTPS when annotations are not set

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-28 16:52:46 +00:00
Ryan Petrello
aad371d224 Merge branch 'downstream' into devel 2020-04-28 12:18:13 -04:00
Vyacheslav Andreykiv
d5cb6ad58a Update main.yml
Reverted formatting changes
2020-04-28 08:56:29 -07:00
ggiinnoo
0bad717db7 Wrong syntax 2020-04-28 17:51:41 +02:00
softwarefactory-project-zuul[bot]
b31abbba82 Merge pull request #6220 from kawsark/devel
Modified hashivault.py to support approle

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-28 15:50:21 +00:00
Ryan Petrello
0d30a67756 Merge pull request #4273 from wenottingham/certscertscerts
Allow unsigned certs in logging if cert verification is disabled.
2020-04-28 11:17:19 -04:00
Bill Nottingham
5d24acf613 Allow unsigned certs in logging if cert verification is disabled. 2020-04-28 11:16:54 -04:00
Ryan Petrello
94692c2364 Merge pull request #4272 from chrismeyersfsu/sos_report_redis_logs
sos report redis logs
2020-04-28 11:00:41 -04:00
Ryan Petrello
29abe35799 Merge pull request #4268 from fosterseth/fix-ip_conflict_cluster
Unset IP address of old instance if conflicting with new instance IP
2020-04-28 10:59:27 -04:00
Seth Foster
6652464e25 Unset old instance IP when conflicting new instance IP
With AWX_AUTO_DEPROVISION_INSTANCES on, instances
are registered with an IP address. However, new
instances might try to register before old instances
are deprovisioned, so old IPs can conflict with
the new ones. This change checks for an IP conflict and unsets
the IP of the conflicting instance (sets it to None).

ansible/awx issue 6750
2020-04-28 10:52:15 -04:00
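A minimal sketch of that check, mirroring the instance-registration diff further below (the standalone function and its arguments are simplified for illustration):

    # Unset the IP of any other instance already registered with this address.
    def unset_conflicting_ip(instances, hostname, ip_address, logger):
        conflicting = instances.filter(ip_address=ip_address).exclude(hostname=hostname)
        for other_inst in conflicting:
            other_hostname = other_inst.hostname
            other_inst.ip_address = None
            other_inst.save(update_fields=['ip_address'])
            logger.warning("IP address {0} conflict detected, ip address unset "
                           "for host {1}.".format(ip_address, other_hostname))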
Pierre-Louis Bonicoli
f9d5860d63 Fix pylint errors 2020-04-28 10:33:41 -04:00
Pierre-Louis Bonicoli
d8ac2f5263 Avoid to repeat default value 2020-04-28 10:33:40 -04:00
Pierre-Louis Bonicoli
fdae3cd092 Explicitly references AppRole 2020-04-28 10:33:40 -04:00
Pierre-Louis Bonicoli
139384acc4 Simplify test branches 2020-04-28 10:33:40 -04:00
Pierre-Louis Bonicoli
9d85e8655d Both methods return a token: rename variable 2020-04-28 10:33:40 -04:00
Pierre-Louis Bonicoli
9fdd9061d3 Remove extraneous call to bool built-in function 2020-04-28 10:33:39 -04:00
kawsark
d3d4ce3804 Modified hashivault.py to support approle 2020-04-28 10:33:37 -04:00
chris meyers
960aa9df16 add redis logs to sosreport
* rhel8 /var/log/redis/redis.log
* rhel7 scl location
2020-04-28 09:12:42 -04:00
ggiinnoo
41a8d32dcc Added variable to inventory to add a separate key file 2020-04-28 11:13:47 +02:00
Bianca Henderson
12b47405f0 Merge pull request #4265 from john-westcott-iv/release_3.7.0
Restructuring modules so that lookups don't happen when deleting
2020-04-27 17:39:11 -04:00
Vyacheslav Andreykiv (Intel)
7659ffca1e fixed ingress template 2020-04-27 14:33:42 -07:00
Alan Rominger
41014e62b7 Avoid applying galaxy settings if no user settings exist (#4262) 2020-04-27 16:15:11 -04:00
softwarefactory-project-zuul[bot]
c2e9df74e4 Merge pull request #6790 from fherbert/feature/awx_cli_job_monitor
Add ability to monitor jobs and workflow_jobs via awx cli, fixes #6165

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-27 19:48:33 +00:00
Ryan Petrello
e1cedcfb04 Merge pull request #4266 from rooftopcellist/rsyslog_followup
Increase stopwait time for rsyslogd service
2020-04-27 15:26:49 -04:00
Francois Herbert
d0384799fd Add ability to monitor jobs and workflow_jobs via awx cli, fixes #6165
Signed-off-by: Francois Herbert <francois@herbert.org.nz>

Fixup linting

Remove timeout option, not really required and it's making the api test fail
2020-04-27 15:09:16 -04:00
Jim Ladd
45f8f0f412 Merge pull request #4264 from jladdjr/satellite6_want_ansible_ssh_host
Add support for satellite6_want_ansible_ssh_host
2020-04-27 11:39:03 -07:00
Christian Adams
c07b6285da Increase stopwait time for rsyslogd service 2020-04-27 14:33:46 -04:00
John Westcott IV
f17ced8f9c Fixing integration tests for deprecation warnings 2020-04-27 14:25:23 -04:00
softwarefactory-project-zuul[bot]
17702c94f7 Merge pull request #6833 from lewisface/devel
Keystone v3 support for OpenStack Credential

Reviewed-by: Ryan Petrello (https://github.com/ryanpetrello)
2020-04-27 18:13:52 +00:00
John Westcott IV
7f66f084f1 Restructuring modules so that lookups don't happen when deleting 2020-04-27 14:11:23 -04:00
Jim Ladd
616e7082b3 Add support for satellite6_want_ansible_ssh_host
* defaults to IPv6, falls back to IP
2020-04-27 11:04:56 -07:00
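A sketch of what that option does in the generated plugin configuration, matching the satellite6 injector diff further below (`source_vars` stands for the inventory source variables and `ret` for the plugin config dict being built):

    # When satellite6_want_ansible_ssh_host is set, connect to the host's IPv6
    # address and fall back to its IPv4 address ('ip').
    want_ansible_ssh_host = source_vars.get('satellite6_want_ansible_ssh_host') is True
    if want_ansible_ssh_host:
        ret['compose'] = {'ansible_ssh_host': "foreman['ip6'] | default(foreman['ip'], true)"}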
softwarefactory-project-zuul[bot]
83051e9138 Merge pull request #6005 from donomur/ca-trust-fix
Add CA trust volume to k8s installer management pod

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-27 17:36:20 +00:00
softwarefactory-project-zuul[bot]
33ff4ad9be Merge pull request #6762 from shaynecm/devel
Update inventory file 

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-27 17:34:55 +00:00
softwarefactory-project-zuul[bot]
e5bdd44e13 Merge pull request #5398 from dvaerum/patch-1
If pg_hostname is defined don't try to upgrade PostgreSQL

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-27 17:22:02 +00:00
Chris Meyers
fa73fe76b2 Merge pull request #4263 from ansible/fix-sliding_window_3_7_0
fix sliding window per/minute calc
2020-04-27 13:20:39 -04:00
Donovan Murphy
fb567dad1e add CA trust volume to management pod
Signed-off-by: Donovan Murphy <dono@dono.email>
2020-04-27 13:01:32 -04:00
softwarefactory-project-zuul[bot]
88f44cb185 Merge pull request #6811 from wenottingham/ssh_aaaaaaarghs
Don't override ANSIBLE_SSH_ARGS for ad-hoc commands.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-27 16:44:23 +00:00
Dennis Vestergaard Værum
fd2d0966e2 If pg_hostname is defined don't try to upgrade PostgreSQL 2020-04-27 12:12:37 -04:00
Ryan Petrello
efe9d85685 Merge pull request #4260 from jlmitch5/fixNotificationPagination
make sure right endpoint is getting called for notification pagination
2020-04-27 09:18:34 -04:00
Ryan Petrello
1f1069e9f3 Merge pull request #4261 from jlmitch5/fixLicenseTranslation
Remove unnecessary nesting license translate directive
2020-04-24 16:03:10 -04:00
chris meyers
ac46013fcc fix sliding window per/minute calc
* Add tests to ensure correctness
2020-04-24 15:55:11 -04:00
Christian Adams
f6a2d41e4c Remove unnecessary or translation 2020-04-24 15:35:56 -04:00
John Mitchell
3b84ff7002 remove unnecessary nesting translate directive from license partial 2020-04-24 15:35:56 -04:00
John Mitchell
b1b6af4ad5 make sure right endpoint is getting called for notification pagination 2020-04-24 15:32:14 -04:00
Ryan Petrello
04e6482f36 Merge pull request #6832 from ghjm/service_command
Improve the ansible-tower-service script
2020-04-24 15:24:40 -04:00
softwarefactory-project-zuul[bot]
f7a8e30d3e Merge pull request #6818 from ryanpetrello/rsyslogd-debug
add a setting for enabling high rsyslogd verbosity

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-24 19:13:42 +00:00
Graham Mainwaring
d55b223aae Use sysconfig file to get the list of services 2020-04-24 14:08:24 -04:00
Ryan Petrello
e51d0b6fde add a setting for enabling high rsyslogd verbosity 2020-04-24 14:01:17 -04:00
Graham Mainwaring
e36ffa5764 Return more status information from ansible-tower-service status and check command errors better 2020-04-24 11:37:16 -04:00
Dave Lewis
c86692784e Updated data/inventory/scripts/openstack/files/file_reference
Update file_references due to the inclusion of project_domain_name in the openstack credentials.
2020-04-24 15:11:49 +01:00
Dave Lewis
952c91dea5 Updated injector.py and added new test into test_tasks.py 2020-04-24 14:06:13 +01:00
softwarefactory-project-zuul[bot]
e7a9604896 Merge pull request #6826 from AlanCoding/vendoring_collections_vmware
Enable VMWare inventory plugin

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-24 12:32:39 +00:00
AlanCoding
8bbd2c7aaa bump collection requirements, vmware patches
New VMWare and azure collections were released, pick those up

Enable use of the VMWare inventory plugin for Ansible 2.9

A few fixes from manual testing with VMWare
  Remove invalid properties which resulted in no hosts being imported
  Add some defaults to be more consistent with the script
2020-04-23 19:57:53 -04:00
softwarefactory-project-zuul[bot]
9e87ddde69 Merge pull request #6820 from marshmalien/6530-wf-node-detail-cleanup
Show missing detail fields in workflow node view modal 

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-23 20:34:36 +00:00
softwarefactory-project-zuul[bot]
5d40cf7635 Merge pull request #6825 from ryanpetrello/test-other-loggers
allow users to test log aggregation if `awx` isn't in the loggers list

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-23 20:19:08 +00:00
Marliana Lara
75b54d2c14 Add unit tests to check for deleted details 2020-04-23 15:34:23 -04:00
Alex Corey
1bcd71ec9f Adds webhook fields to job template details 2020-04-23 15:19:03 -04:00
Ryan Petrello
31c6b3337c allow users to test log aggregation if awx isn't in the loggers list 2020-04-23 15:15:37 -04:00
softwarefactory-project-zuul[bot]
c0e07198cf Merge pull request #6283 from AlanCoding/vendoring_collections
Use vendored collections for inventory imports

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-23 18:54:50 +00:00
Marliana Lara
a2c3027bd7 Display a "Deleted" label for prompt view node details 2020-04-23 13:08:16 -04:00
softwarefactory-project-zuul[bot]
acdf15640c Merge pull request #6814 from surfer190/patch-1
Change the custom venv vars install line

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-23 15:38:29 +00:00
Dave Lewis
92b74266ca Correct missing "default=''"
The previous commit missed "default=''", so changing:

project_domain_name=cred.get_input('project_domain_name'))
to
project_domain_name=cred.get_input('project_domain_name', default=''))
2020-04-23 11:33:24 +01:00
Dave Lewis
08323a11b6 Addition of project domain name to OpenStack Credential
Keystone v3 requires user_domain_id and project_domain_name to authenticate, but the AWX OpenStack credential only requests the user_domain_id.
Added project_domain_name to the credential templating. It is not a required field because it is only needed when using Keystone v3.
2020-04-23 11:24:51 +01:00
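For illustration only, the resulting auth data then looks roughly like this (placeholder values; the new field is optional and matches the _openstack_data change in the diff further below):

    # Sketch of the OpenStack auth block once project_domain_name is included.
    openstack_auth = {
        'auth_url': 'https://keystone.example.invalid/v3',  # placeholder endpoint
        'username': 'demo',
        'password': 'secret',
        'project_name': 'demo-project',
        'project_domain_name': 'Default',  # only needed for Keystone v3
    }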
surfer190
6390794828 Change the custom venv vars install line
Add the inventory
2020-04-23 11:43:01 +02:00
AlanCoding
ab703e2a32 Add feature docs and document development tooling 2020-04-22 21:41:52 -04:00
AlanCoding
8c657e210c VMWare collection ready but waiting for release
turn off use of VMWare inventory plugin for now

Attempted to roll back the general migration to Ansible 2.8;
  that failed, so it is kept at 2.9 because of compatibility
2020-04-22 21:40:23 -04:00
Bill Nottingham
3f5820adf8 Don't override ANSIBLE_SSH_ARGS for ad-hoc commands.
This reverts 8beed7f4, which says:
    Disable ControlPersist for ad hoc commands, should avoid any
    issues with proot or needing to clean up sockets afterwards.

Given we've switched to the much less finicky bwrap for process
isolation, along with runner-based process killing, this probably
isn't needed any more.
2020-04-22 20:46:18 -04:00
softwarefactory-project-zuul[bot]
a34a63ec7f Merge pull request #6809 from wenottingham/nope-no-scls-here
Don't use SCLs for openshift.

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 23:12:45 +00:00
Bill Nottingham
47fef1dcf3 Don't use SCLs for openshift.
This may be a mis-merge?
2020-04-22 17:16:44 -04:00
softwarefactory-project-zuul[bot]
b6be891dc1 Merge pull request #6786 from nixocio/ui_issue_6511
Fix searching for a blank string creating a blank search filter

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 21:02:25 +00:00
softwarefactory-project-zuul[bot]
546df65753 Merge pull request #6804 from chrismeyersfsu/fix-collector_tests
lie about last time analytics were collected

Reviewed-by: https://github.com/apps/softwarefactory-project-zuul
2020-04-22 20:38:45 +00:00
nixocio
2a86a3e05b Fix searching for a blank string creates a blank search filter
Fix searching for a blank string creating a blank search filter. Also,
add a unit test for the related changes.

closes: https://github.com/ansible/awx/issues/6511
2020-04-22 16:16:10 -04:00
chris meyers
01c56a4f2b lie about last time analytics were collected
* Ensure that everything that we put into the database is returned via
analytics.
2020-04-22 14:10:45 -04:00
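A sketch of the adjustment, matching the collector test diff further below: start the reported collection window in the past so rows the test just inserted fall inside it.

    import tempfile
    from datetime import timedelta

    from django.utils.timezone import now
    from awx.main.analytics import collectors

    # Pretend the last collection happened 9 hours ago.
    time_start = now() - timedelta(hours=9)
    with tempfile.TemporaryDirectory() as tmpdir:
        collectors.copy_tables(time_start, tmpdir, subset="workflow_job_node_query")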
AlanCoding
68f5482c42 Update vmware implementation to new agreements 2020-04-21 22:49:20 -04:00
shaynecm
e428aeb97e pre-req's updated
added Python 3.6+ to bring it in line with the inventory change
2020-04-21 12:43:19 +01:00
shaynecm
e42915d4cf updated to include py3
amended to be "/usr/bin/env python3" as suggested
2020-04-21 12:40:56 +01:00
shaynecm
1afd8b4309 Update inventory 2020-04-20 19:20:14 +01:00
AlanCoding
50197c6a12 Handle custom script options of hostnames and destination variable 2020-04-18 23:30:50 -04:00
AlanCoding
de0122d64e Respect the ec2 script nested groups option 2020-04-18 19:32:18 -04:00
AlanCoding
fff34f7227 Make inventory non-strict for production and remove comments 2020-04-17 15:43:03 -04:00
AlanCoding
e2f5aa987d Implement vmware plugin host filtering 2020-04-17 14:52:38 -04:00
AlanCoding
84d863ff9d Shorten list of VMWare inventory plugin properties
Do not include summaries

decision on vmware keyed groups

Handle first set of custom user vmware options
2020-04-16 20:56:02 -04:00
AlanCoding
47bdf86dfa Pick up collections paths in openstack inventory update prep
Update amazon.aws version

Update vmware to new release

Pin some more tests due to transitioning to inventory plugins

Remove more temporary hacks
2020-04-16 20:56:02 -04:00
Jim Ladd
6b015c9d81 Call parent get_plugin_env for foreman 2020-04-16 20:56:01 -04:00
AlanCoding
c3f2b3e44d bump versions of Galaxy collection requirements
Implement 2.9 version policy

Update ec2 collection name

Enable ovirt, refresh test files

Put in upstream forks to get it running for now

pick up openstack.cloud fix
2020-04-16 20:56:01 -04:00
Jim Ladd
6807878e2d I think this is the right place for iam_role_arn in the tests? 2020-04-16 20:56:01 -04:00
Jim Ladd
96c6cf9f05 pass iam_role_arn through to aws inv. plugin 2020-04-16 20:56:01 -04:00
Jim Ladd
8a20b5225b enable aws_ec2 plugin 2020-04-16 20:56:00 -04:00
Jim Ladd
0e2786d1f1 compat layer for foreman 2020-04-16 20:56:00 -04:00
Jim Ladd
f38437b6bc foreman plugin updates 2020-04-16 20:56:00 -04:00
AlanCoding
99ae614a63 Vmware was published
implement vmware inventory plugin

Enable the previously broken properties
2020-04-16 20:56:00 -04:00
AlanCoding
fcf75af6a7 Get current cloud sources working from collection
update test data files

Adopt official vendor location

openstack not published yet

Add collections to show paths

Add collections loc to installer settings

Add vendored collections to show path again
2020-04-16 20:55:59 -04:00
Yanis Guenane
541b9607f5 Collections: Adding a requirements.yml file 2020-04-16 20:55:59 -04:00
103 changed files with 1655 additions and 628 deletions

View File

@@ -2,6 +2,30 @@
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
## 11.2.0 (Apr 29, 2020)
- Inventory updates now use collection-based plugins by default (in Ansible 2.9+):
- amazon.aws.aws_ec2
- community.vmware.vmware_vm_inventory
- azure.azcollection.azure_rm
- google.cloud.gcp_compute
- theforeman.foreman.foreman
- openstack.cloud.openstack
- ovirt.ovirt_collection.ovirt
- awx.awx.tower
- Added support for Approle and LDAP/AD mechanisms to the Hashicorp Vault credential plugin (https://github.com/ansible/awx/issues/5076)
- Added Project (Domain Name) support for the OpenStack Keystone v3 API (https://github.com/ansible/awx/issues/6831)
- Added a new setting for raising log verbosity for rsyslogd (https://github.com/ansible/awx/pull/6818)
- Added the ability to monitor stdout in the CLI for running jobs and workflow jobs (https://github.com/ansible/awx/issues/6165)
- Fixed a bug which prevented the AWX CLI from properly installing with newer versions of pip (https://github.com/ansible/awx/issues/6870)
- Fixed a bug which broke AWX's external logging support when configured with HTTPS endpoints that utilize self-signed certificates (https://github.com/ansible/awx/issues/6851)
- Fixed a local docker installer bug that mistakenly attempted to upgrade PostgreSQL when an external pg_hostname is specified (https://github.com/ansible/awx/pull/5398)
- Fixed a race condition that caused task container crashes when pods are quickly brought down and back up (https://github.com/ansible/awx/issues/6750)
- Fixed a bug that caused 404 errors when attempting to view the second page of the workflow approvals view (https://github.com/ansible/awx/issues/6803)
- Fixed a bug that prevented the use of ANSIBLE_SSH_ARGS for ad-hoc-commands (https://github.com/ansible/awx/pull/6811)
- Fixed a bug that broke AWX installs/upgrades on Red Hat OpenShift (https://github.com/ansible/awx/issues/6791)
## 11.1.0 (Apr 22, 2020)
- Changed rsyslogd to persist queued events to disk (to prevent a risk of out-of-memory errors) (https://github.com/ansible/awx/issues/6746)
- Added the ability to configure the destination and maximum disk size of rsyslogd spool (in the event of a log aggregator outage) (https://github.com/ansible/awx/pull/6763)

View File

@@ -10,7 +10,6 @@ This document provides a guide for installing AWX.
+ [AWX branding](#awx-branding)
+ [Prerequisites](#prerequisites)
+ [System Requirements](#system-requirements)
+ [AWX Tunables](#awx-tunables)
+ [Choose a deployment platform](#choose-a-deployment-platform)
+ [Official vs Building Images](#official-vs-building-images)
* [Upgrading from previous versions](#upgrading-from-previous-versions)
@@ -49,7 +48,17 @@ This document provides a guide for installing AWX.
### Clone the repo
If you have not already done so, you will need to clone, or create a local copy, of the [AWX repo](https://github.com/ansible/awx). For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).
If you have not already done so, you will need to clone, or create a local copy, of the [AWX repo](https://github.com/ansible/awx). We generally recommend that you view the releases page:
https://github.com/ansible/awx/releases
...and clone the latest stable release, e.g.,
`git clone -b x.y.z https://github.com/ansible/awx.git`
Please note that deploying from `HEAD` (or the latest commit) is **not** stable, and that if you want to do this, you should proceed at your own risk (also, see the section #official-vs-building-images for building your own image).
For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).
Once you have a local copy, run commands within the root of the project tree.
@@ -73,6 +82,7 @@ Before you can run a deployment, you'll need the following installed in your loc
- [Git](https://git-scm.com/) Requires Version 1.8.4+
- [Node 10.x LTS version](https://nodejs.org/en/download/)
- [NPM 6.x LTS](https://docs.npmjs.com/)
- Python 3.6+
### System Requirements
@@ -84,10 +94,6 @@ The system that runs the AWX service will need to satisfy the following requirem
- Running Docker, Openshift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10+.
### AWX Tunables
**TODO** add tunable bits
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster, a Kubernetes cluster, or docker-compose. The remainder of this document will walk you through the process of building the images, and deploying them to either platform.
@@ -143,7 +149,7 @@ $ ansible-playbook -i inventory install.yml -e @vars.yml
### Prerequisites
To complete a deployment to OpenShift, you will obviously need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
When using OpenShift for deploying AWX make sure you have correct privileges to add the security context 'privileged', otherwise the installation will fail. The privileged context is needed because of the use of [the bubblewrap tool](https://github.com/containers/bubblewrap) to add an additional layer of security when using containers.

View File

@@ -10,6 +10,7 @@ recursive-include awx/playbooks *.yml
recursive-include awx/lib/site-packages *
recursive-include awx/plugins *.ps1
recursive-include requirements *.txt
recursive-include requirements *.yml
recursive-include config *
recursive-include docs/licenses *
recursive-exclude awx devonly.py*

View File

@@ -209,7 +209,11 @@ requirements_awx: virtualenv_awx
requirements_awx_dev:
$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt
requirements: requirements_ansible requirements_awx
requirements_collections:
mkdir -p $(COLLECTION_BASE)
ansible-galaxy collection install -r requirements/collections_requirements.yml -p $(COLLECTION_BASE)
requirements: requirements_ansible requirements_awx requirements_collections
requirements_dev: requirements_awx requirements_ansible_py3 requirements_awx_dev requirements_ansible_dev

View File

@@ -1 +1 @@
11.1.0
11.2.0

View File

@@ -170,7 +170,13 @@ class SettingLoggingTest(GenericAPIView):
return Response({'error': 'Logging not enabled'}, status=status.HTTP_409_CONFLICT)
# Send test message to configured logger based on db settings
logging.getLogger('awx').error('AWX Connection Test Message')
try:
default_logger = settings.LOG_AGGREGATOR_LOGGERS[0]
if default_logger != 'awx':
default_logger = f'awx.analytics.{default_logger}'
except IndexError:
default_logger = 'awx'
logging.getLogger(default_logger).error('AWX Connection Test Message')
hostname = getattr(settings, 'LOG_AGGREGATOR_HOST', None)
protocol = getattr(settings, 'LOG_AGGREGATOR_PROTOCOL', None)

View File

@@ -44,8 +44,8 @@ class FixedSlidingWindow():
def cleanup(self, now_bucket=None):
now_bucket = now_bucket or now_seconds()
if self.start_time + 60 <= now_bucket:
self.start_time = now_bucket + 60 + 1
if self.start_time + 60 < now_bucket:
self.start_time = now_bucket - 60
# Delete old entries
for k in list(self.buckets.keys()):
@@ -53,16 +53,15 @@ class FixedSlidingWindow():
del self.buckets[k]
def record(self, ts=None):
ts = ts or datetime.datetime.now()
now_bucket = int((ts - datetime.datetime(1970,1,1)).total_seconds())
now_bucket = ts or dt_to_seconds(datetime.datetime.now())
val = self.buckets.get(now_bucket, 0)
self.buckets[now_bucket] = val + 1
self.cleanup(now_bucket)
def render(self):
self.cleanup()
def render(self, ts=None):
self.cleanup(now_bucket=ts)
return sum(self.buckets.values()) or 0

View File

@@ -810,6 +810,16 @@ register(
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_RSYSLOGD_DEBUG',
field_class=fields.BooleanField,
default=False,
label=_('Enable rsyslogd debugging'),
help_text=_('Enabled high verbosity debugging for rsyslogd. '
'Useful for debugging connection issues for external log aggregation.'),
category=_('Logging'),
category_slug='logging',
)
register(

View File

@@ -32,14 +32,33 @@ base_inputs = {
'type': 'string',
'multiline': True,
'help_text': _('The CA certificate used to verify the SSL certificate of the Vault server')
}],
}, {
'id': 'role_id',
'label': _('AppRole role_id'),
'type': 'string',
'multiline': False,
'help_text': _('The Role ID for AppRole Authentication')
}, {
'id': 'secret_id',
'label': _('AppRole secret_id'),
'type': 'string',
'multiline': False,
'secret': True,
'help_text': _('The Secret ID for AppRole Authentication')
}
],
'metadata': [{
'id': 'secret_path',
'label': _('Path to Secret'),
'type': 'string',
'help_text': _('The path to the secret stored in the secret backend e.g, /some/secret/')
},{
'id': 'auth_path',
'label': _('Path to Auth'),
'type': 'string',
'help_text': _('The path where the Authentication method is mounted e.g, approle')
}],
'required': ['url', 'token', 'secret_path'],
'required': ['url', 'secret_path'],
}
hashi_kv_inputs = copy.deepcopy(base_inputs)
@@ -88,8 +107,43 @@ hashi_ssh_inputs['metadata'] = [{
hashi_ssh_inputs['required'].extend(['public_key', 'role'])
def handle_auth(**kwargs):
token = None
if kwargs.get('token'):
token = kwargs['token']
elif kwargs.get('role_id') and kwargs.get('secret_id'):
token = approle_auth(**kwargs)
else:
raise Exception('Either token or AppRole parameters must be set')
return token
def approle_auth(**kwargs):
role_id = kwargs['role_id']
secret_id = kwargs['secret_id']
auth_path = kwargs.get('auth_path') or 'approle'
url = urljoin(kwargs['url'], 'v1')
cacert = kwargs.get('cacert', None)
request_kwargs = {'timeout': 30}
if cacert:
request_kwargs['verify'] = create_temporary_fifo(cacert.encode())
# AppRole Login
request_kwargs['json'] = {'role_id': role_id, 'secret_id': secret_id}
sess = requests.Session()
request_url = '/'.join([url, 'auth', auth_path, 'login']).rstrip('/')
resp = sess.post(request_url, **request_kwargs)
resp.raise_for_status()
token = resp.json()['auth']['client_token']
return token
def kv_backend(**kwargs):
token = kwargs['token']
token = handle_auth(**kwargs)
url = kwargs['url']
secret_path = kwargs['secret_path']
secret_backend = kwargs.get('secret_backend', None)
@@ -144,7 +198,7 @@ def kv_backend(**kwargs):
def ssh_backend(**kwargs):
token = kwargs['token']
token = handle_auth(**kwargs)
url = urljoin(kwargs['url'], 'v1')
secret_path = kwargs['secret_path']
role = kwargs['role']

View File

@@ -169,7 +169,7 @@ class AnsibleInventoryLoader(object):
self.tmp_private_dir = build_proot_temp_dir()
logger.debug("Using fresh temporary directory '{}' for isolation.".format(self.tmp_private_dir))
kwargs['proot_temp_dir'] = self.tmp_private_dir
kwargs['proot_show_paths'] = [functioning_dir(self.source)]
kwargs['proot_show_paths'] = [functioning_dir(self.source), settings.INVENTORY_COLLECTIONS_ROOT]
logger.debug("Running from `{}` working directory.".format(cwd))
if self.venv_path != settings.ANSIBLE_VENV_PATH:

View File

@@ -121,6 +121,17 @@ class InstanceManager(models.Manager):
if not hostname:
hostname = settings.CLUSTER_HOST_ID
with advisory_lock('instance_registration_%s' % hostname):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
# detect any instances with the same IP address.
# if one exists, set it to None
inst_conflicting_ip = self.filter(ip_address=ip_address).exclude(hostname=hostname)
if inst_conflicting_ip.exists():
for other_inst in inst_conflicting_ip:
other_hostname = other_inst.hostname
other_inst.ip_address = None
other_inst.save(update_fields=['ip_address'])
logger.warning("IP address {0} conflict detected, ip address unset for host {1}.".format(ip_address, other_hostname))
instance = self.filter(hostname=hostname)
if instance.exists():
instance = instance.get()

View File

@@ -799,6 +799,10 @@ ManagedCredentialType(
'id': 'project',
'label': ugettext_noop('Project (Tenant Name)'),
'type': 'string',
}, {
'id': 'project_domain_name',
'label': ugettext_noop('Project (Domain Name)'),
'type': 'string',
}, {
'id': 'domain',
'label': ugettext_noop('Domain Name'),

View File

@@ -77,6 +77,8 @@ def _openstack_data(cred):
username=cred.get_input('username', default=''),
password=cred.get_input('password', default=''),
project_name=cred.get_input('project', default=''))
if cred.has_input('project_domain_name'):
openstack_auth['project_domain_name'] = cred.get_input('project_domain_name', default='')
if cred.has_input('domain'):
openstack_auth['domain_name'] = cred.get_input('domain', default='')
verify_state = cred.get_input('verify_ssl', default=True)

View File

@@ -1612,6 +1612,11 @@ class PluginFileInjector(object):
# base injector should be one of None, "managed", or "template"
# this dictates which logic to borrow from playbook injectors
base_injector = None
# every source should have collection, but these are set here
# so that a source without a collection will have null values
namespace = None
collection = None
collection_migration = '2.9' # Starting with this version, we use collections
def __init__(self, ansible_version):
# This is InventoryOptions instance, could be source or inventory update
@@ -1638,7 +1643,11 @@ class PluginFileInjector(object):
"""
if self.plugin_name is None:
raise NotImplementedError('At minimum the plugin name is needed for inventory plugin use.')
return {'plugin': self.plugin_name}
if self.initial_version is None or Version(self.ansible_version) >= Version(self.collection_migration):
proper_name = f'{self.namespace}.{self.collection}.{self.plugin_name}'
else:
proper_name = self.plugin_name
return {'plugin': proper_name}
def inventory_contents(self, inventory_update, private_data_dir):
"""Returns a string that is the content for the inventory file for the inventory plugin
@@ -1693,7 +1702,10 @@ class PluginFileInjector(object):
return injected_env
def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
return self._get_shared_env(inventory_update, private_data_dir, private_data_files)
env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)
if self.initial_version is None or Version(self.ansible_version) >= Version(self.collection_migration):
env['ANSIBLE_COLLECTIONS_PATHS'] = settings.INVENTORY_COLLECTIONS_ROOT
return env
def get_script_env(self, inventory_update, private_data_dir, private_data_files):
injected_env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)
@@ -1738,6 +1750,8 @@ class azure_rm(PluginFileInjector):
initial_version = '2.8' # Driven by unsafe group names issue, hostvars, host names
ini_env_reference = 'AZURE_INI_PATH'
base_injector = 'managed'
namespace = 'azure'
collection = 'azcollection'
def get_plugin_env(self, *args, **kwargs):
ret = super(azure_rm, self).get_plugin_env(*args, **kwargs)
@@ -1869,9 +1883,11 @@ class azure_rm(PluginFileInjector):
class ec2(PluginFileInjector):
plugin_name = 'aws_ec2'
# blocked by https://github.com/ansible/ansible/issues/54059
# initial_version = '2.8' # Driven by unsafe group names issue, parent_group templating, hostvars
initial_version = '2.9' # Driven by unsafe group names issue, parent_group templating, hostvars
ini_env_reference = 'EC2_INI_PATH'
base_injector = 'managed'
namespace = 'amazon'
collection = 'aws'
def get_plugin_env(self, *args, **kwargs):
ret = super(ec2, self).get_plugin_env(*args, **kwargs)
@@ -2011,6 +2027,9 @@ class ec2(PluginFileInjector):
grouping_data['key'] += ' | regex_replace("{rx}", "_")'.format(rx=legacy_regex)
# end compatibility content
if source_vars.get('iam_role_arn', None):
ret['iam_role_arn'] = source_vars['iam_role_arn']
# This was an allowed ec2.ini option, also plugin option, so pass through
if source_vars.get('boto_profile', None):
ret['boto_profile'] = source_vars['boto_profile']
@@ -2019,6 +2038,10 @@ class ec2(PluginFileInjector):
# Using the plugin, but still want dashes whitelisted
ret['use_contrib_script_compatible_sanitization'] = True
if source_vars.get('nested_groups') is False:
for this_keyed_group in keyed_groups:
this_keyed_group.pop('parent_group', None)
if keyed_groups:
ret['keyed_groups'] = keyed_groups
@@ -2030,18 +2053,35 @@ class ec2(PluginFileInjector):
compose_dict.update(self._compat_compose_vars())
# plugin provides "aws_ec2", but not this which the script gave
ret['groups'] = {'ec2': True}
# public_ip as hostname is non-default plugin behavior, script behavior
ret['hostnames'] = [
'network-interface.addresses.association.public-ip',
'dns-name',
'private-dns-name'
]
if source_vars.get('hostname_variable') is not None:
hnames = []
for expr in source_vars.get('hostname_variable').split(','):
if expr == 'public_dns_name':
hnames.append('dns-name')
elif not expr.startswith('tag:') and '_' in expr:
hnames.append(expr.replace('_', '-'))
else:
hnames.append(expr)
ret['hostnames'] = hnames
else:
# public_ip as hostname is non-default plugin behavior, script behavior
ret['hostnames'] = [
'network-interface.addresses.association.public-ip',
'dns-name',
'private-dns-name'
]
# The script returned only running state by default, the plugin does not
# https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html#options
# options: pending | running | shutting-down | terminated | stopping | stopped
inst_filters['instance-state-name'] = ['running']
# end compatibility content
if source_vars.get('destination_variable') or source_vars.get('vpc_destination_variable'):
for fd in ('destination_variable', 'vpc_destination_variable'):
if source_vars.get(fd):
compose_dict['ansible_host'] = source_vars.get(fd)
break
if compose_dict:
ret['compose'] = compose_dict
@@ -2108,6 +2148,8 @@ class gce(PluginFileInjector):
initial_version = '2.8' # Driven by unsafe group names issue, hostvars
ini_env_reference = 'GCE_INI_PATH'
base_injector = 'managed'
namespace = 'google'
collection = 'cloud'
def get_plugin_env(self, *args, **kwargs):
ret = super(gce, self).get_plugin_env(*args, **kwargs)
@@ -2208,14 +2250,119 @@ class gce(PluginFileInjector):
class vmware(PluginFileInjector):
# plugin_name = 'vmware_vm_inventory' # FIXME: implement me
plugin_name = 'vmware_vm_inventory'
initial_version = '2.9'
ini_env_reference = 'VMWARE_INI_PATH'
base_injector = 'managed'
namespace = 'community'
collection = 'vmware'
@property
def script_name(self):
return 'vmware_inventory.py' # exception
def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(vmware, self).inventory_as_dict(inventory_update, private_data_dir)
ret['strict'] = False
# Documentation of props, see
# https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.rst
UPPERCASE_PROPS = [
"availableField",
"configIssue",
"configStatus",
"customValue", # optional
"datastore",
"effectiveRole",
"guestHeartbeatStatus", # optonal
"layout", # optional
"layoutEx", # optional
"name",
"network",
"overallStatus",
"parentVApp", # optional
"permission",
"recentTask",
"resourcePool",
"rootSnapshot",
"snapshot", # optional
"tag",
"triggeredAlarmState",
"value"
]
NESTED_PROPS = [
"capability",
"config",
"guest",
"runtime",
"storage",
"summary", # repeat of other properties
]
ret['properties'] = UPPERCASE_PROPS + NESTED_PROPS
ret['compose'] = {'ansible_host': 'guest.ipAddress'} # default value
ret['compose']['ansible_ssh_host'] = ret['compose']['ansible_host']
# the ansible_uuid was unique every host, every import, from the script
ret['compose']['ansible_uuid'] = '99999999 | random | to_uuid'
for prop in UPPERCASE_PROPS:
if prop == prop.lower():
continue
ret['compose'][prop.lower()] = prop
ret['with_nested_properties'] = True
# ret['property_name_format'] = 'lower_case' # only dacrystal/topic/vmware-inventory-plugin-property-format
# process custom options
vmware_opts = dict(inventory_update.source_vars_dict.items())
if inventory_update.instance_filters:
vmware_opts.setdefault('host_filters', inventory_update.instance_filters)
if inventory_update.group_by:
vmware_opts.setdefault('groupby_patterns', inventory_update.group_by)
alias_pattern = vmware_opts.get('alias_pattern')
if alias_pattern:
ret.setdefault('hostnames', [])
for alias in alias_pattern.split(','): # make best effort
striped_alias = alias.replace('{', '').replace('}', '').strip() # make best effort
if not striped_alias:
continue
ret['hostnames'].append(striped_alias)
host_pattern = vmware_opts.get('host_pattern') # not working in script
if host_pattern:
stripped_hp = host_pattern.replace('{', '').replace('}', '').strip() # make best effort
ret['compose']['ansible_host'] = stripped_hp
ret['compose']['ansible_ssh_host'] = stripped_hp
host_filters = vmware_opts.get('host_filters')
if host_filters:
ret.setdefault('filters', [])
for hf in host_filters.split(','):
striped_hf = hf.replace('{', '').replace('}', '').strip() # make best effort
if not striped_hf:
continue
ret['filters'].append(striped_hf)
else:
# default behavior filters by power state
ret['filters'] = ['runtime.powerState == "poweredOn"']
groupby_patterns = vmware_opts.get('groupby_patterns')
ret.setdefault('keyed_groups', [])
if groupby_patterns:
for pattern in groupby_patterns.split(','):
stripped_pattern = pattern.replace('{', '').replace('}', '').strip() # make best effort
ret['keyed_groups'].append({
'prefix': '', 'separator': '',
'key': stripped_pattern
})
else:
# default groups from script
for entry in ('guest.guestId', '"templates" if config.template else "guests"'):
ret['keyed_groups'].append({
'prefix': '', 'separator': '',
'key': entry
})
return ret
def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
credential = inventory_update.get_cloud_credential()
@@ -2246,6 +2393,8 @@ class openstack(PluginFileInjector):
plugin_name = 'openstack'
# minimum version of 2.7.8 may be theoretically possible
initial_version = '2.8' # Driven by consistency with other sources
namespace = 'openstack'
collection = 'cloud'
@property
def script_name(self):
@@ -2297,7 +2446,10 @@ class openstack(PluginFileInjector):
return self.build_script_private_data(inventory_update, private_data_dir, mk_cache=False)
def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
return self.get_script_env(inventory_update, private_data_dir, private_data_files)
env = super(openstack, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
script_env = self.get_script_env(inventory_update, private_data_dir, private_data_files)
env.update(script_env)
return env
def inventory_as_dict(self, inventory_update, private_data_dir):
def use_host_name_for_name(a_bool_maybe):
@@ -2309,12 +2461,10 @@ class openstack(PluginFileInjector):
else:
return 'uuid'
ret = dict(
plugin=self.plugin_name,
fail_on_errors=True,
expand_hostvars=True,
inventory_hostname=use_host_name_for_name(False),
)
ret = super(openstack, self).inventory_as_dict(inventory_update, private_data_dir)
ret['fail_on_errors'] = True
ret['expand_hostvars'] = True
ret['inventory_hostname'] = use_host_name_for_name(False)
# Note: mucking with defaults will break import integrity
# For the plugin, we need to use the same defaults as the old script
# or else imports will conflict. To find script defaults you have
@@ -2339,8 +2489,10 @@ class openstack(PluginFileInjector):
class rhv(PluginFileInjector):
"""ovirt uses the custom credential templating, and that is all
"""
# plugin_name = 'FIXME' # contribute inventory plugin to Ansible
plugin_name = 'ovirt'
base_injector = 'template'
namespace = 'ovirt'
collection = 'ovirt_collection'
@property
def script_name(self):
@@ -2350,8 +2502,10 @@ class rhv(PluginFileInjector):
class satellite6(PluginFileInjector):
plugin_name = 'foreman'
ini_env_reference = 'FOREMAN_INI_PATH'
# initial_version = '2.8' # FIXME: turn on after plugin is validated
initial_version = '2.9'
# No base injector, because this does not work in playbooks. Bug??
namespace = 'theforeman'
collection = 'foreman'
@property
def script_name(self):
@@ -2413,18 +2567,60 @@ class satellite6(PluginFileInjector):
# this assumes that this is merged
# https://github.com/ansible/ansible/pull/52693
credential = inventory_update.get_cloud_credential()
ret = {}
ret = super(satellite6, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
if credential:
ret['FOREMAN_SERVER'] = credential.get_input('host', default='')
ret['FOREMAN_USER'] = credential.get_input('username', default='')
ret['FOREMAN_PASSWORD'] = credential.get_input('password', default='')
return ret
def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(satellite6, self).inventory_as_dict(inventory_update, private_data_dir)
want_ansible_ssh_host = False
foreman_opts = inventory_update.source_vars_dict.copy()
for k, v in foreman_opts.items():
if k == 'satellite6_want_ansible_ssh_host' and isinstance(v, bool):
want_ansible_ssh_host = v
# Compatibility content
group_by_hostvar = {
"environment": {"prefix": "foreman_environment_",
"separator": "",
"key": "foreman['environment_name'] | lower | regex_replace(' ', '') | "
"regex_replace('[^A-Za-z0-9\_]', '_') | regex_replace('none', '')"}, # NOQA: W605
"location": {"prefix": "foreman_location_",
"separator": "",
"key": "foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')"},
"organization": {"prefix": "foreman_organization_",
"separator": "",
"key": "foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')"},
"lifecycle_environment": {"prefix": "foreman_lifecycle_environment_",
"separator": "",
"key": "foreman['content_facet_attributes']['lifecycle_environment_name'] | "
"lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')"},
"content_view": {"prefix": "foreman_content_view_",
"separator": "",
"key": "foreman['content_facet_attributes']['content_view_name'] | "
"lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')"}
}
ret['keyed_groups'] = [group_by_hostvar[grouping_name] for grouping_name in group_by_hostvar]
ret['legacy_hostvars'] = True
ret['want_facts'] = True
ret['want_params'] = True
if want_ansible_ssh_host:
ret['compose'] = {'ansible_ssh_host': "foreman['ip6'] | default(foreman['ip'], true)"}
return ret
class cloudforms(PluginFileInjector):
# plugin_name = 'FIXME' # contribute inventory plugin to Ansible
ini_env_reference = 'CLOUDFORMS_INI_PATH'
# Also no base_injector because this does not work in playbooks
# namespace = '' # does not have a collection
# collection = ''
def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
@@ -2460,6 +2656,8 @@ class tower(PluginFileInjector):
plugin_name = 'tower'
base_injector = 'template'
initial_version = '2.8' # Driven by "include_metadata" hostvars
namespace = 'awx'
collection = 'awx'
def get_script_env(self, inventory_update, private_data_dir, private_data_files):
env = super(tower, self).get_script_env(inventory_update, private_data_dir, private_data_files)
@@ -2468,6 +2666,7 @@ class tower(PluginFileInjector):
return env
def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(tower, self).inventory_as_dict(inventory_update, private_data_dir)
# Credentials injected as env vars, same as script
try:
# plugin can take an actual int type
@@ -2475,11 +2674,9 @@ class tower(PluginFileInjector):
except ValueError:
# inventory_id could be a named URL
identifier = iri_to_uri(inventory_update.instance_filters)
return {
'plugin': self.plugin_name,
'inventory_id': identifier,
'include_metadata': True # used for license check
}
ret['inventory_id'] = identifier
ret['include_metadata'] = True # used for license check
return ret
for cls in PluginFileInjector.__subclasses__():

View File

@@ -2074,29 +2074,34 @@ class RunProjectUpdate(BaseTask):
if settings.GALAXY_IGNORE_CERTS:
env['ANSIBLE_GALAXY_IGNORE'] = True
# Set up the public Galaxy server, if enabled
galaxy_configured = False
if settings.PUBLIC_GALAXY_ENABLED:
galaxy_servers = [settings.PUBLIC_GALAXY_SERVER]
galaxy_servers = [settings.PUBLIC_GALAXY_SERVER] # static setting
else:
galaxy_configured = True
galaxy_servers = []
# Set up fallback Galaxy servers, if configured
if settings.FALLBACK_GALAXY_SERVERS:
galaxy_configured = True
galaxy_servers = settings.FALLBACK_GALAXY_SERVERS + galaxy_servers
# Set up the primary Galaxy server, if configured
if settings.PRIMARY_GALAXY_URL:
galaxy_configured = True
galaxy_servers = [{'id': 'primary_galaxy'}] + galaxy_servers
for key in GALAXY_SERVER_FIELDS:
value = getattr(settings, 'PRIMARY_GALAXY_{}'.format(key.upper()))
if value:
galaxy_servers[0][key] = value
for server in galaxy_servers:
for key in GALAXY_SERVER_FIELDS:
if not server.get(key):
continue
env_key = ('ANSIBLE_GALAXY_SERVER_{}_{}'.format(server.get('id', 'unnamed'), key)).upper()
env[env_key] = server[key]
if galaxy_servers:
# now set the precedence of galaxy servers
env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join([server.get('id', 'unnamed') for server in galaxy_servers])
if galaxy_configured:
for server in galaxy_servers:
for key in GALAXY_SERVER_FIELDS:
if not server.get(key):
continue
env_key = ('ANSIBLE_GALAXY_SERVER_{}_{}'.format(server.get('id', 'unnamed'), key)).upper()
env[env_key] = server[key]
if galaxy_servers:
# now set the precedence of galaxy servers
env['ANSIBLE_GALAXY_SERVER_LIST'] = ','.join([server.get('id', 'unnamed') for server in galaxy_servers])
return env
def _build_scm_url_extra_vars(self, project_update):
@@ -2407,7 +2412,7 @@ class RunInventoryUpdate(BaseTask):
@property
def proot_show_paths(self):
return [self.get_path_to('..', 'plugins', 'inventory')]
return [self.get_path_to('..', 'plugins', 'inventory'), settings.INVENTORY_COLLECTIONS_ROOT]
def build_private_data(self, inventory_update, private_data_dir):
"""
@@ -2737,9 +2742,12 @@ class RunAdHocCommand(BaseTask):
env['ANSIBLE_LOAD_CALLBACK_PLUGINS'] = '1'
env['ANSIBLE_SFTP_BATCH_MODE'] = 'False'
# Specify empty SSH args (should disable ControlPersist entirely for
# ad hoc commands).
env.setdefault('ANSIBLE_SSH_ARGS', '')
# Create a directory for ControlPath sockets that is unique to each
# ad hoc command and visible inside the proot environment (when enabled).
cp_dir = os.path.join(private_data_dir, 'cp')
if not os.path.exists(cp_dir):
os.mkdir(cp_dir, 0o700)
env['ANSIBLE_SSH_CONTROL_PATH'] = cp_dir
return env

View File

@@ -39,5 +39,5 @@ keyed_groups:
prefix: ''
separator: ''
plain_host_names: true
plugin: azure_rm
plugin: azure.azcollection.azure_rm
use_contrib_script_compatible_sanitization: true

View File

@@ -1,6 +1,6 @@
boto_profile: /tmp/my_boto_stuff
compose:
ansible_host: public_ip_address
ansible_host: public_dns_name
ec2_account_id: owner_id
ec2_ami_launch_index: ami_launch_index | string
ec2_architecture: architecture
@@ -50,9 +50,8 @@ filters:
groups:
ec2: true
hostnames:
- network-interface.addresses.association.public-ip
- dns-name
- private-dns-name
iam_role_arn: arn:aws:iam::123456789012:role/test-role
keyed_groups:
- key: placement.availability_zone
parent_group: zones
@@ -75,7 +74,7 @@ keyed_groups:
parent_group: '{{ placement.region }}'
prefix: ''
separator: ''
plugin: aws_ec2
plugin: amazon.aws.aws_ec2
regions:
- us-east-2
- ap-south-1

View File

@@ -40,7 +40,7 @@ keyed_groups:
- key: image
prefix: ''
separator: ''
plugin: gcp_compute
plugin: google.cloud.gcp_compute
projects:
- fooo
retrieve_image_info: true

View File

@@ -8,6 +8,7 @@ clouds:
auth_url: https://foo.invalid
domain_name: fooo
password: fooo
project_domain_name: fooo
project_name: fooo
username: fooo
private: false

View File

@@ -1,4 +1,4 @@
expand_hostvars: true
fail_on_errors: true
inventory_hostname: uuid
plugin: openstack
plugin: openstack.cloud.openstack

View File

@@ -0,0 +1,7 @@
{
"ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
"OVIRT_INI_PATH": "{{ file_reference }}",
"OVIRT_PASSWORD": "fooo",
"OVIRT_URL": "https://foo.invalid",
"OVIRT_USERNAME": "fooo"
}

View File

@@ -0,0 +1,5 @@
[ovirt]
ovirt_url=https://foo.invalid
ovirt_username=fooo
ovirt_password=fooo
ovirt_ca_file=fooo

View File

@@ -0,0 +1 @@
plugin: ovirt.ovirt_collection.ovirt

View File

@@ -1 +1,22 @@
plugin: foreman
compose:
ansible_ssh_host: foreman['ip6'] | default(foreman['ip'], true)
keyed_groups:
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_') | regex_replace('none', '')
prefix: foreman_environment_
separator: ''
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')
prefix: foreman_location_
separator: ''
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')
prefix: foreman_organization_
separator: ''
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')
prefix: foreman_lifecycle_environment_
separator: ''
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9\_]', '_')
prefix: foreman_content_view_
separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
want_facts: true
want_params: true

View File

@@ -1,3 +1,3 @@
include_metadata: true
inventory_id: 42
plugin: tower
plugin: awx.awx.tower

View File

@@ -0,0 +1,7 @@
{
"ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
"VMWARE_HOST": "https://foo.invalid",
"VMWARE_PASSWORD": "fooo",
"VMWARE_USER": "fooo",
"VMWARE_VALIDATE_CERTS": "False"
}

View File

@@ -0,0 +1,56 @@
compose:
ansible_host: guest.ipAddress
ansible_ssh_host: guest.ipAddress
ansible_uuid: 99999999 | random | to_uuid
availablefield: availableField
configissue: configIssue
configstatus: configStatus
customvalue: customValue
effectiverole: effectiveRole
guestheartbeatstatus: guestHeartbeatStatus
layoutex: layoutEx
overallstatus: overallStatus
parentvapp: parentVApp
recenttask: recentTask
resourcepool: resourcePool
rootsnapshot: rootSnapshot
triggeredalarmstate: triggeredAlarmState
filters:
- config.zoo == "DC0_H0_VM0"
hostnames:
- config.foo
keyed_groups:
- key: config.asdf
prefix: ''
separator: ''
plugin: community.vmware.vmware_vm_inventory
properties:
- availableField
- configIssue
- configStatus
- customValue
- datastore
- effectiveRole
- guestHeartbeatStatus
- layout
- layoutEx
- name
- network
- overallStatus
- parentVApp
- permission
- recentTask
- resourcePool
- rootSnapshot
- snapshot
- tag
- triggeredAlarmState
- value
- capability
- config
- guest
- runtime
- storage
- summary
strict: false
with_nested_properties: true

View File

@@ -1,9 +1,11 @@
[ec2]
base_source_var = value_of_var
boto_profile = /tmp/my_boto_stuff
iam_role_arn = arn:aws:iam::123456789012:role/test-role
hostname_variable = public_dns_name
destination_variable = public_dns_name
regions = us-east-2,ap-south-1
regions_exclude = us-gov-west-1,cn-north-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
route53 = False
all_instances = True

View File

@@ -10,6 +10,7 @@ clouds:
auth_url: https://foo.invalid
domain_name: fooo
password: fooo
project_domain_name: fooo
project_name: fooo
username: fooo
private: false

View File

@@ -5,6 +5,7 @@ username = fooo
password = fooo
server = https://foo.invalid
base_source_var = value_of_var
host_filters = foobaa
groupby_patterns = fouo
alias_pattern = {{ config.foo }}
host_filters = {{ config.zoo == "DC0_H0_VM0" }}
groupby_patterns = {{ config.asdf }}

View File

@@ -5,6 +5,7 @@ import shutil
import csv
from django.utils.timezone import now
from datetime import timedelta
from django.db.backends.sqlite3.base import SQLiteCursorWrapper
from awx.main.analytics import collectors
@@ -73,7 +74,7 @@ def test_copy_tables_unified_job_query(
Ensure that various unified job types are in the output of the query.
"""
time_start = now()
time_start = now() - timedelta(hours=9)
inv_src = InventorySource.objects.create(
name="inventory_update1", inventory=inventory, source="gce"
)
@@ -130,7 +131,7 @@ def workflow_job(states=["new", "new", "new", "new", "new"]):
@pytest.mark.django_db
def test_copy_tables_workflow_job_node_query(sqlite_copy_expert, workflow_job):
time_start = now()
time_start = now() - timedelta(hours=9)
with tempfile.TemporaryDirectory() as tmpdir:
collectors.copy_tables(time_start, tmpdir, subset="workflow_job_node_query")

View File

@@ -16,7 +16,7 @@ DATA = os.path.join(os.path.dirname(data.__file__), 'inventory')
TEST_SOURCE_FIELDS = {
'vmware': {
'instance_filters': 'foobaa',
'instance_filters': '{{ config.name == "only_my_server" }},{{ somevar == "bar"}}',
'group_by': 'fouo'
},
'ec2': {
@@ -38,7 +38,10 @@ TEST_SOURCE_FIELDS = {
INI_TEST_VARS = {
'ec2': {
'boto_profile': '/tmp/my_boto_stuff'
'boto_profile': '/tmp/my_boto_stuff',
'iam_role_arn': 'arn:aws:iam::123456789012:role/test-role',
'hostname_variable': 'public_dns_name',
'destination_variable': 'public_dns_name'
},
'gce': {},
'openstack': {
@@ -50,6 +53,9 @@ INI_TEST_VARS = {
'rhv': {}, # there are none
'tower': {}, # there are none
'vmware': {
'alias_pattern': "{{ config.foo }}",
'host_filters': '{{ config.zoo == "DC0_H0_VM0" }}',
'groupby_patterns': "{{ config.asdf }}",
# setting VMWARE_VALIDATE_CERTS is duplicated with env var
},
'azure_rm': {
@@ -315,9 +321,10 @@ def test_inventory_update_injected_content(this_kind, script_or_plugin, inventor
with mock.patch('awx.main.models.inventory.PluginFileInjector.should_use_plugin', return_value=use_plugin):
# Also do not send websocket status updates
with mock.patch.object(UnifiedJob, 'websocket_emit_status', mock.Mock()):
# The point of this test is that we replace run with assertions
with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
# mocking the licenser is necessary for the tower source
with mock.patch('awx.main.models.inventory.get_licenser', mock_licenser):
# so this sets up everything for a run and then yields control over to substitute_run
task.run(inventory_update.pk)
with mock.patch.object(task, 'get_ansible_version', return_value='2.13'):
# The point of this test is that we replace run with assertions
with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
# mocking the licenser is necessary for the tower source
with mock.patch('awx.main.models.inventory.get_licenser', mock_licenser):
# so this sets up everything for a run and then yields control over to substitute_run
task.run(inventory_update.pk)
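
For context on the mocking pattern spelled out in the comments above, here is a minimal sketch of a substitute_run stand-in. The exact arguments that awx.main.tasks passes to ansible_runner.interface.run, and the attributes the task reads back from the result, are assumptions in this sketch rather than facts taken from the diff.

from unittest import mock

def substitute_run(*args, **kwargs):
    # Replaces ansible_runner.interface.run so no real Ansible process starts;
    # by the time this is called the task has already written its env and
    # private data, so the real test's replacement can assert on that content.
    # Returning an object with status/rc is an assumption about what the task
    # reads back from the runner.
    return mock.Mock(status='successful', rc=0)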

View File

@@ -0,0 +1,69 @@
import datetime
from awx.main.analytics.broadcast_websocket import FixedSlidingWindow
from awx.main.analytics.broadcast_websocket import dt_to_seconds
class TestFixedSlidingWindow():
def ts(self, **kwargs):
e = {
'year': 1985,
'month': 1,
'day': 1,
'hour': 1,
}
return dt_to_seconds(datetime.datetime(**kwargs, **e))
def test_record_same_minute(self):
"""
Legend:
- = record()
^ = render()
|---| = 1 minute, 60 seconds
....................
|------------------------------------------------------------|
^^^^^^^^^^^^^^^^^^^^
"""
fsw = FixedSlidingWindow(self.ts(minute=0, second=0, microsecond=0))
for i in range(20):
fsw.record(self.ts(minute=0, second=i, microsecond=0))
assert (i + 1) == fsw.render(self.ts(minute=0, second=i, microsecond=0))
def test_record_same_minute_render_diff_minute(self):
"""
Legend:
- = record()
^ = render()
|---| = 1 minute, 60 seconds
....................
|------------------------------------------------------------|
^^ ^
AB C
|------------------------------------------------------------|
^^^^^^^^^^^^^^^^^^^^^
DEEEEEEEEEEEEEEEEEEEF
"""
fsw = FixedSlidingWindow(self.ts(minute=0, second=0, microsecond=0))
for i in range(20):
fsw.record(self.ts(minute=0, second=i, microsecond=0))
assert 20 == fsw.render(self.ts(minute=0, second=19, microsecond=0)), \
"A. The second of the last record() call"
assert 20 == fsw.render(self.ts(minute=0, second=20, microsecond=0)), \
"B. The second after the last record() call"
assert 20 == fsw.render(self.ts(minute=0, second=59, microsecond=0)), \
"C. Last second in the same minute that all record() called in"
assert 20 == fsw.render(self.ts(minute=1, second=0, microsecond=0)), \
"D. First second of the minute following the minute that all record() calls in"
for i in range(20):
assert 20 - i == fsw.render(self.ts(minute=1, second=i, microsecond=0)), \
"E. Sliding window where 1 record() should drop from the results each time"
assert 0 == fsw.render(self.ts(minute=1, second=20, microsecond=0)), \
"F. First second one minute after all record() calls"

View File

@@ -44,7 +44,7 @@ data_loggly = {
'https',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")', # noqa
])
),
(
@@ -77,7 +77,7 @@ data_loggly = {
None,
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
])
),
(
@@ -88,7 +88,7 @@ data_loggly = {
None,
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
])
),
(
@@ -99,7 +99,7 @@ data_loggly = {
None,
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
])
),
(
@@ -110,7 +110,7 @@ data_loggly = {
None,
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
])
),
(
@@ -121,7 +121,7 @@ data_loggly = {
'https',
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
])
),
(
@@ -132,7 +132,7 @@ data_loggly = {
None,
'\n'.join([
'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")', # noqa
])
),
]

View File

@@ -183,6 +183,51 @@ def test_openstack_client_config_generation(mocker, source, expected, private_da
}
@pytest.mark.parametrize("source,expected", [
(None, True), (False, False), (True, True)
])
def test_openstack_client_config_generation_with_project_domain_name(mocker, source, expected, private_data_dir):
update = tasks.RunInventoryUpdate()
credential_type = CredentialType.defaults['openstack']()
inputs = {
'host': 'https://keystone.openstack.example.org',
'username': 'demo',
'password': 'secrete',
'project': 'demo-project',
'domain': 'my-demo-domain',
'project_domain_name': 'project-domain',
}
if source is not None:
inputs['verify_ssl'] = source
credential = Credential(pk=1, credential_type=credential_type, inputs=inputs)
inventory_update = mocker.Mock(**{
'source': 'openstack',
'source_vars_dict': {},
'get_cloud_credential': mocker.Mock(return_value=credential),
'get_extra_credentials': lambda x: [],
'ansible_virtualenv_path': '/venv/foo'
})
cloud_config = update.build_private_data(inventory_update, private_data_dir)
cloud_credential = yaml.safe_load(
cloud_config.get('credentials')[credential]
)
assert cloud_credential['clouds'] == {
'devstack': {
'auth': {
'auth_url': 'https://keystone.openstack.example.org',
'password': 'secrete',
'project_name': 'demo-project',
'username': 'demo',
'domain_name': 'my-demo-domain',
'project_domain_name': 'project-domain',
},
'verify': expected,
'private': True,
}
}
@pytest.mark.parametrize("source,expected", [
(False, False), (True, True)
])
@@ -1807,8 +1852,9 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = mocker.Mock(return_value=None)
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
with mocker.patch('awx.main.tasks._get_ansible_version', mocker.MagicMock(return_value='2.7')):
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
assert 'AWS_ACCESS_KEY_ID' not in env
assert 'AWS_SECRET_ACCESS_KEY' not in env
@@ -1915,8 +1961,9 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
with mocker.patch('awx.main.tasks._get_ansible_version', mocker.MagicMock(return_value='2.7')):
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
safe_env = {}
credentials = task.build_credentials_list(inventory_update)
@@ -2153,8 +2200,9 @@ class TestInventoryUpdateCredentials(TestJobExecution):
'satellite6_want_facts': False
}
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
with mocker.patch('awx.main.tasks._get_ansible_version', mocker.MagicMock(return_value='2.7')):
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
config = configparser.ConfigParser()
config.read(env['FOREMAN_INI_PATH'])

View File

@@ -22,6 +22,8 @@ def construct_rsyslog_conf_template(settings=settings):
spool_directory = '/var/lib/awx'
max_bytes = settings.MAX_EVENT_RES_DATA
if settings.LOG_AGGREGATOR_RSYSLOGD_DEBUG:
parts.append('$DebugLevel 2')
parts.extend([
'$WorkDirectory /var/lib/awx/rsyslog',
f'$MaxMessageSize {max_bytes}',
@@ -58,6 +60,7 @@ def construct_rsyslog_conf_template(settings=settings):
# https://github.com/rsyslog/rsyslog-doc/blob/master/source/configuration/modules/omhttp.rst
ssl = 'on' if parsed.scheme == 'https' else 'off'
skip_verify = 'off' if settings.LOG_AGGREGATOR_VERIFY_CERT else 'on'
allow_unsigned = 'off' if settings.LOG_AGGREGATOR_VERIFY_CERT else 'on'
if not port:
port = 443 if parsed.scheme == 'https' else 80
@@ -66,6 +69,7 @@ def construct_rsyslog_conf_template(settings=settings):
f'server="{host}"',
f'serverport="{port}"',
f'usehttps="{ssl}"',
f'allowunsignedcerts="{allow_unsigned}"',
f'skipverifyhost="{skip_verify}"',
'action.resumeRetryCount="-1"',
'template="awx"',

View File

@@ -127,14 +127,14 @@ class LogstashFormatter(LogstashFormatterBase):
pass # best effort here, if it's not valid JSON, then meh
return raw_data
elif kind == 'system_tracking':
data = copy(raw_data['ansible_facts'])
data = copy(raw_data.get('ansible_facts', {}))
else:
data = copy(raw_data)
if isinstance(data, str):
data = json.loads(data)
data_for_log = {}
if kind == 'job_events':
if kind == 'job_events' and raw_data.get('python_objects', {}).get('job_event'):
job_event = raw_data['python_objects']['job_event']
for field_object in job_event._meta.fields:
@@ -165,10 +165,10 @@ class LogstashFormatter(LogstashFormatterBase):
data['ansible_python'].pop('version_info', None)
data_for_log['ansible_facts'] = data
data_for_log['ansible_facts_modified'] = raw_data['ansible_facts_modified']
data_for_log['inventory_id'] = raw_data['inventory_id']
data_for_log['host_name'] = raw_data['host_name']
data_for_log['job_id'] = raw_data['job_id']
data_for_log['ansible_facts_modified'] = raw_data.get('ansible_facts_modified')
data_for_log['inventory_id'] = raw_data.get('inventory_id')
data_for_log['host_name'] = raw_data.get('host_name')
data_for_log['job_id'] = raw_data.get('job_id')
elif kind == 'performance':
def convert_to_type(t, val):
if t is float:

View File

@@ -136,9 +136,9 @@
register: doesRequirementsExist
- name: fetch galaxy roles from requirements.yml
command: ansible-galaxy install -r requirements.yml -p {{roles_destination|quote}}{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
command: ansible-galaxy install -r roles/requirements.yml -p {{roles_destination|quote}}{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}/roles"
chdir: "{{project_path|quote}}"
register: galaxy_result
when: doesRequirementsExist.stat.exists
changed_when: "'was installed successfully' in galaxy_result.stdout"
@@ -157,9 +157,9 @@
register: doesCollectionRequirementsExist
- name: fetch galaxy collections from collections/requirements.yml
command: ansible-galaxy collection install -r requirements.yml -p {{collections_destination|quote}}{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
command: ansible-galaxy collection install -r collections/requirements.yml -p {{collections_destination|quote}}{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}/collections"
chdir: "{{project_path|quote}}"
register: galaxy_collection_result
when: doesCollectionRequirementsExist.stat.exists
changed_when: "'Installing ' in galaxy_collection_result.stdout"

View File

@@ -120,6 +120,10 @@ LOGIN_URL = '/api/login/'
# This directory should not be web-accessible.
PROJECTS_ROOT = os.path.join(BASE_DIR, 'projects')
# Absolute filesystem path to the directory to host collections for
# running inventory imports
INVENTORY_COLLECTIONS_ROOT = os.path.join(BASE_DIR, 'vendor', 'inventory_collections')
# Absolute filesystem path to the directory for job status stdout (default for
# development and tests, default for production defined in production.py). This
# directory should not be web-accessible
@@ -938,6 +942,7 @@ LOG_AGGREGATOR_VERIFY_CERT = True
LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False
# The number of retry attempts for websocket session establishment
# If you're encountering issues establishing websockets in clustered Tower,

View File

@@ -176,7 +176,10 @@ CLUSTER_HOST_ID = socket.gethostname()
if 'Docker Desktop' in os.getenv('OS', ''):
os.environ['SDB_NOTIFY_HOST'] = 'docker.for.mac.host.internal'
else:
os.environ['SDB_NOTIFY_HOST'] = os.popen('ip route').read().split(' ')[2]
try:
os.environ['SDB_NOTIFY_HOST'] = os.popen('ip route').read().split(' ')[2]
except Exception:
pass
WEBSOCKET_ORIGIN_WHITELIST = ['https://localhost:8043', 'https://localhost:3000']
AWX_CALLBACK_PROFILE = True

View File

@@ -52,6 +52,9 @@ if "pytest" in sys.modules:
# This directory should NOT be web-accessible.
PROJECTS_ROOT = '/var/lib/awx/projects/'
# Location for cross-development of inventory plugins
# INVENTORY_COLLECTIONS_ROOT = '/awx_devel/awx/plugins/collections'
# Absolute filesystem path to the directory for job status stdout
# This directory should not be web-accessible
JOBOUTPUT_ROOT = os.path.join(BASE_DIR, 'job_status')

View File

@@ -80,7 +80,7 @@
collection="vm.approvals"
dataset="vm.dataset"
iterator="template"
base-path="unified_job_templates"
base-path="workflow_approvals"
query-set="vm.queryset"
hide-view-per-page="true">
</paginate>

View File

@@ -116,7 +116,7 @@
</div>
</div>
</div>
<div class="License-file--middle License-helperText" translate>
<div class="License-file--middle License-helperText">
<div class="License-separator"></div>
<div translate>OR</div>
<div class="License-separator"></div>

View File

@@ -60,15 +60,6 @@ msgstr "<b>我同意最终用户许可证协议</b>"
msgid "<b>User analytics</b>: This data is used to enhance future releases of the Tower Software and help streamline customer experience and success."
msgstr "<b>用户分析</b>:这些数据用于增强未来的 Tower 软件发行版本,并帮助简化客户体验和成功。"
#: client/src/license/license.partial.html:119
msgid ""
"<div class=\"License-separator\"></div>\n"
"\t\t\t\t\t\t<div translate=\"\">OR</div>\n"
"\t\t\t\t\t\t<div class=\"License-separator\"></div>"
msgstr "<div class=\"License-separator\"></div>\n"
"\\t\\t\\t\\t\\t\\t<div translate=\"\">或</div>\n"
"\\t\\t\\t\\t\\t\\t<div class=\"License-separator\"></div>"
#: client/src/login/loginModal/loginModal.partial.html:26
msgid ""
"<i class=\"LoginModal-alertIcon fa fa-exclamation-triangle\"></i>\n"

View File

@@ -0,0 +1,22 @@
import React from 'react';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { node } from 'prop-types';
import styled from 'styled-components';
import _Detail from './Detail';
const Detail = styled(_Detail)`
dd& {
color: red;
}
`;
function DeletedDetail({ i18n, label }) {
return <Detail label={label} value={i18n._(t`Deleted`)} />;
}
DeletedDetail.propTypes = {
label: node.isRequired,
};
export default withI18n()(DeletedDetail);

View File

@@ -1,3 +1,4 @@
export { default as DetailList } from './DetailList';
export { default as Detail, DetailName, DetailValue } from './Detail';
export { default as DeletedDetail } from './DeletedDetail';
export { default as UserDateDetail } from './UserDateDetail';

View File

@@ -34,6 +34,21 @@ function formatTimeout(timeout) {
);
}
function buildResourceLink(resource) {
const link = {
job_template: `/templates/job_template/${resource.id}/details`,
project: `/projects/${resource.id}/details`,
inventory_source: `/inventories/inventory/${resource.inventory}/sources/${resource.id}/details`,
workflow_job_template: `/templates/workflow_job_template/${resource.id}/details`,
};
return link[(resource?.type)] ? (
<Link to={link[resource.type]}>{resource.name}</Link>
) : (
resource.name
);
}
function hasPromptData(launchData) {
return (
launchData.ask_credential_on_launch ||
@@ -150,7 +165,7 @@ function PromptDetail({ i18n, resource, launchConfig = {} }) {
return (
<>
<DetailList gutter="sm">
<Detail label={i18n._(t`Name`)} value={details.name} />
<Detail label={i18n._(t`Name`)} value={buildResourceLink(resource)} />
<Detail label={i18n._(t`Description`)} value={details.description} />
<Detail
label={i18n._(t`Type`)}
@@ -160,19 +175,6 @@ function PromptDetail({ i18n, resource, launchConfig = {} }) {
label={i18n._(t`Timeout`)}
value={formatTimeout(details?.timeout)}
/>
{details?.summary_fields?.organization && (
<Detail
label={i18n._(t`Organization`)}
value={
<Link
to={`/organizations/${details?.summary_fields.organization.id}/details`}
>
{details?.summary_fields?.organization.name}
</Link>
}
/>
)}
{details?.type === 'project' && (
<PromptProjectDetail resource={details} />
)}
@@ -185,17 +187,20 @@ function PromptDetail({ i18n, resource, launchConfig = {} }) {
{details?.type === 'workflow_job_template' && (
<PromptWFJobTemplateDetail resource={details} />
)}
<UserDateDetail
label={i18n._(t`Created`)}
date={details?.created}
user={details?.summary_fields?.created_by}
/>
<UserDateDetail
label={i18n._(t`Last Modified`)}
date={details?.modified}
user={details?.summary_fields?.modified_by}
/>
{details?.created && (
<UserDateDetail
label={i18n._(t`Created`)}
date={details.created}
user={details?.summary_fields?.created_by}
/>
)}
{details?.modified && (
<UserDateDetail
label={i18n._(t`Last Modified`)}
date={details?.modified}
user={details?.summary_fields?.modified_by}
/>
)}
</DetailList>
{hasPromptData(launchConfig) && hasOverrides && (

View File

@@ -4,7 +4,7 @@ import { t } from '@lingui/macro';
import { Link } from 'react-router-dom';
import { Chip, ChipGroup, List, ListItem } from '@patternfly/react-core';
import { Detail } from '@components/DetailList';
import { Detail, DeletedDetail } from '@components/DetailList';
import { VariablesDetail } from '@components/CodeMirrorInput';
import CredentialChip from '@components/CredentialChip';
@@ -57,6 +57,20 @@ function PromptInventorySourceDetail({ i18n, resource }) {
return (
<>
{summary_fields?.organization ? (
<Detail
label={i18n._(t`Organization`)}
value={
<Link
to={`/organizations/${summary_fields.organization.id}/details`}
>
{summary_fields?.organization.name}
</Link>
}
/>
) : (
<DeletedDetail label={i18n._(t`Organization`)} />
)}
{summary_fields?.inventory && (
<Detail
label={i18n._(t`Inventory`)}

View File

@@ -3,6 +3,11 @@ import { mountWithContexts } from '@testUtils/enzymeHelpers';
import PromptInventorySourceDetail from './PromptInventorySourceDetail';
import mockInvSource from './data.inventory_source.json';
function assertDetail(wrapper, label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
describe('PromptInventorySourceDetail', () => {
let wrapper;
@@ -21,18 +26,13 @@ describe('PromptInventorySourceDetail', () => {
});
test('should render expected details', () => {
function assertDetail(label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
assertDetail('Inventory', 'Demo Inventory');
assertDetail('Source', 'scm');
assertDetail('Project', 'Mock Project');
assertDetail('Inventory File', 'foo');
assertDetail('Custom Inventory Script', 'Mock Script');
assertDetail('Verbosity', '2 (More Verbose)');
assertDetail('Cache Timeout', '2 Seconds');
assertDetail(wrapper, 'Inventory', 'Demo Inventory');
assertDetail(wrapper, 'Source', 'scm');
assertDetail(wrapper, 'Project', 'Mock Project');
assertDetail(wrapper, 'Inventory File', 'foo');
assertDetail(wrapper, 'Custom Inventory Script', 'Mock Script');
assertDetail(wrapper, 'Verbosity', '2 (More Verbose)');
assertDetail(wrapper, 'Cache Timeout', '2 Seconds');
expect(
wrapper
.find('Detail[label="Regions"]')
@@ -74,4 +74,12 @@ describe('PromptInventorySourceDetail', () => {
])
).toEqual(true);
});
test('should render "Deleted" details', () => {
delete mockInvSource.summary_fields.organization;
wrapper = mountWithContexts(
<PromptInventorySourceDetail resource={mockInvSource} />
);
assertDetail(wrapper, 'Organization', 'Deleted');
});
});

View File

@@ -4,7 +4,7 @@ import { t } from '@lingui/macro';
import { Link } from 'react-router-dom';
import { Chip, ChipGroup, List, ListItem } from '@patternfly/react-core';
import { Detail } from '@components/DetailList';
import { Detail, DeletedDetail } from '@components/DetailList';
import { VariablesDetail } from '@components/CodeMirrorInput';
import CredentialChip from '@components/CredentialChip';
import Sparkline from '@components/Sparkline';
@@ -13,6 +13,7 @@ import { toTitleCase } from '@util/strings';
function PromptJobTemplateDetail({ i18n, resource }) {
const {
allow_simultaneous,
ask_inventory_on_launch,
become_enabled,
diff_mode,
extra_vars,
@@ -84,7 +85,21 @@ function PromptJobTemplateDetail({ i18n, resource }) {
/>
)}
<Detail label={i18n._(t`Job Type`)} value={toTitleCase(job_type)} />
{summary_fields?.inventory && (
{summary_fields?.organization ? (
<Detail
label={i18n._(t`Organization`)}
value={
<Link
to={`/organizations/${summary_fields.organization.id}/details`}
>
{summary_fields?.organization.name}
</Link>
}
/>
) : (
<DeletedDetail label={i18n._(t`Organization`)} />
)}
{summary_fields?.inventory ? (
<Detail
label={i18n._(t`Inventory`)}
value={
@@ -95,8 +110,12 @@ function PromptJobTemplateDetail({ i18n, resource }) {
</Link>
}
/>
) : (
!ask_inventory_on_launch && (
<DeletedDetail label={i18n._(t`Inventory`)} />
)
)}
{summary_fields?.project && (
{summary_fields?.project ? (
<Detail
label={i18n._(t`Project`)}
value={
@@ -105,6 +124,8 @@ function PromptJobTemplateDetail({ i18n, resource }) {
</Link>
}
/>
) : (
<DeletedDetail label={i18n._(t`Project`)} />
)}
<Detail label={i18n._(t`Source Control Branch`)} value={scm_branch} />
<Detail label={i18n._(t`Playbook`)} value={playbook} />

View File

@@ -18,6 +18,11 @@ const mockJT = {
],
};
function assertDetail(wrapper, label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
describe('PromptJobTemplateDetail', () => {
let wrapper;
@@ -34,24 +39,19 @@ describe('PromptJobTemplateDetail', () => {
});
test('should render expected details', () => {
function assertDetail(label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
assertDetail('Job Type', 'Run');
assertDetail('Inventory', 'Demo Inventory');
assertDetail('Project', 'Mock Project');
assertDetail('Source Control Branch', 'Foo branch');
assertDetail('Playbook', 'ping.yml');
assertDetail('Forks', '2');
assertDetail('Limit', 'alpha:beta');
assertDetail('Verbosity', '3 (Debug)');
assertDetail('Show Changes', 'Off');
assertDetail('Job Slicing', '1');
assertDetail('Host Config Key', 'a1b2c3');
assertDetail('Webhook Service', 'Github');
assertDetail('Webhook Key', 'PiM3n2');
assertDetail(wrapper, 'Job Type', 'Run');
assertDetail(wrapper, 'Inventory', 'Demo Inventory');
assertDetail(wrapper, 'Project', 'Mock Project');
assertDetail(wrapper, 'Source Control Branch', 'Foo branch');
assertDetail(wrapper, 'Playbook', 'ping.yml');
assertDetail(wrapper, 'Forks', '2');
assertDetail(wrapper, 'Limit', 'alpha:beta');
assertDetail(wrapper, 'Verbosity', '3 (Debug)');
assertDetail(wrapper, 'Show Changes', 'Off');
assertDetail(wrapper, 'Job Slicing', '1');
assertDetail(wrapper, 'Host Config Key', 'a1b2c3');
assertDetail(wrapper, 'Webhook Service', 'Github');
assertDetail(wrapper, 'Webhook Key', 'PiM3n2');
expect(wrapper.find('StatusIcon')).toHaveLength(2);
expect(wrapper.find('Detail[label="Webhook URL"] dd').text()).toEqual(
expect.stringContaining('/api/v2/job_templates/7/github/')
@@ -112,4 +112,16 @@ describe('PromptJobTemplateDetail', () => {
'---foo: bar'
);
});
test('should render "Deleted" details', () => {
delete mockJT.summary_fields.inventory;
delete mockJT.summary_fields.organization;
delete mockJT.summary_fields.project;
wrapper = mountWithContexts(<PromptJobTemplateDetail resource={mockJT} />);
assertDetail(wrapper, 'Inventory', 'Deleted');
assertDetail(wrapper, 'Organization', 'Deleted');
assertDetail(wrapper, 'Project', 'Deleted');
});
});

View File

@@ -3,8 +3,9 @@ import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Config } from '@contexts/Config';
import { List, ListItem } from '@patternfly/react-core';
import { Link } from 'react-router-dom';
import { Detail } from '@components/DetailList';
import { Detail, DeletedDetail } from '@components/DetailList';
import CredentialChip from '@components/CredentialChip';
import { toTitleCase } from '@util/strings';
@@ -49,6 +50,20 @@ function PromptProjectDetail({ i18n, resource }) {
return (
<>
{summary_fields?.organization ? (
<Detail
label={i18n._(t`Organization`)}
value={
<Link
to={`/organizations/${summary_fields.organization.id}/details`}
>
{summary_fields?.organization.name}
</Link>
}
/>
) : (
<DeletedDetail label={i18n._(t`Organization`)} />
)}
<Detail
label={i18n._(t`Source Control Type`)}
value={scm_type === '' ? i18n._(t`Manual`) : toTitleCase(scm_type)}

View File

@@ -3,13 +3,18 @@ import { mountWithContexts } from '@testUtils/enzymeHelpers';
import PromptProjectDetail from './PromptProjectDetail';
import mockProject from './data.project.json';
function assertDetail(wrapper, label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
describe('PromptProjectDetail', () => {
let wrapper;
const config = {
project_base_dir: 'dir/foo/bar',
};
beforeAll(() => {
const config = {
project_base_dir: 'dir/foo/bar',
};
wrapper = mountWithContexts(
<PromptProjectDetail resource={mockProject} />,
{
@@ -27,23 +32,19 @@ describe('PromptProjectDetail', () => {
});
test('should render expected details', () => {
function assertDetail(label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
assertDetail('Source Control Type', 'Git');
assertDetail(wrapper, 'Source Control Type', 'Git');
assertDetail(
wrapper,
'Source Control URL',
'https://github.com/ansible/ansible-tower-samples'
);
assertDetail('Source Control Branch', 'foo');
assertDetail('Source Control Refspec', 'refs/');
assertDetail('Cache Timeout', '3 Seconds');
assertDetail('Ansible Environment', 'mock virtual env');
assertDetail('Project Base Path', 'dir/foo/bar');
assertDetail('Playbook Directory', '_6__demo_project');
assertDetail('Source Control Credential', 'Scm: mock scm');
assertDetail(wrapper, 'Source Control Branch', 'foo');
assertDetail(wrapper, 'Source Control Refspec', 'refs/');
assertDetail(wrapper, 'Cache Timeout', '3 Seconds');
assertDetail(wrapper, 'Ansible Environment', 'mock virtual env');
assertDetail(wrapper, 'Project Base Path', 'dir/foo/bar');
assertDetail(wrapper, 'Playbook Directory', '_6__demo_project');
assertDetail(wrapper, 'Source Control Credential', 'Scm: mock scm');
expect(
wrapper
.find('Detail[label="Options"]')
@@ -55,4 +56,15 @@ describe('PromptProjectDetail', () => {
])
).toEqual(true);
});
test('should render "Deleted" details', () => {
delete mockProject.summary_fields.organization;
wrapper = mountWithContexts(
<PromptProjectDetail resource={mockProject} />,
{
context: { config },
}
);
assertDetail(wrapper, 'Organization', 'Deleted');
});
});

View File

@@ -52,6 +52,18 @@ function PromptWFJobTemplateDetail({ i18n, resource }) {
label={i18n._(t`Activity`)}
/>
)}
{summary_fields?.organization && (
<Detail
label={i18n._(t`Organization`)}
value={
<Link
to={`/organizations/${summary_fields.organization.id}/details`}
>
{summary_fields?.organization.name}
</Link>
}
/>
)}
{summary_fields?.inventory && (
<Detail
label={i18n._(t`Inventory`)}

View File

@@ -80,17 +80,19 @@ class Search extends React.Component {
const { searchKey, searchValue } = this.state;
const { onSearch, qsConfig } = this.props;
const isNonStringField =
qsConfig.integerFields.find(field => field === searchKey) ||
qsConfig.dateFields.find(field => field === searchKey);
if (searchValue) {
const isNonStringField =
qsConfig.integerFields.find(field => field === searchKey) ||
qsConfig.dateFields.find(field => field === searchKey);
const actualSearchKey = isNonStringField
? searchKey
: `${searchKey}__icontains`;
const actualSearchKey = isNonStringField
? searchKey
: `${searchKey}__icontains`;
onSearch(actualSearchKey, searchValue);
onSearch(actualSearchKey, searchValue);
this.setState({ searchValue: '' });
this.setState({ searchValue: '' });
}
}
handleSearchInputChange(searchValue) {
@@ -276,13 +278,16 @@ class Search extends React.Component {
onChange={this.handleSearchInputChange}
onKeyDown={this.handleTextKeyDown}
/>
<Button
variant={ButtonVariant.control}
aria-label={i18n._(t`Search submit button`)}
onClick={this.handleSearch}
>
<SearchIcon />
</Button>
<div css={!searchValue && `cursor:not-allowed`}>
<Button
variant={ButtonVariant.control}
isDisabled={!searchValue}
aria-label={i18n._(t`Search submit button`)}
onClick={this.handleSearch}
>
<SearchIcon />
</Button>
</div>
</InputGroup>
)}
</DataToolbarFilter>

View File

@@ -92,4 +92,53 @@ describe('<Search />', () => {
.handleDropdownSelect({ target: { innerText: 'Description' } });
expect(wrapper.state('searchKey')).toEqual('description');
});
test('attempt to search with empty string', () => {
const searchButton = 'button[aria-label="Search submit button"]';
const searchTextInput = 'input[aria-label="Search text input"]';
const columns = [{ name: 'Name', key: 'name', isDefault: true }];
const onSearch = jest.fn();
const wrapper = mountWithContexts(
<DataToolbar
id={`${QS_CONFIG.namespace}-list-toolbar`}
clearAllFilters={() => {}}
collapseListedFiltersBreakpoint="md"
>
<DataToolbarContent>
<Search qsConfig={QS_CONFIG} columns={columns} onSearch={onSearch} />
</DataToolbarContent>
</DataToolbar>
);
wrapper.find(searchTextInput).instance().value = '';
wrapper.find(searchTextInput).simulate('change');
wrapper.find(searchButton).simulate('click');
expect(onSearch).toHaveBeenCalledTimes(0);
});
test('search with a valid string', () => {
const searchButton = 'button[aria-label="Search submit button"]';
const searchTextInput = 'input[aria-label="Search text input"]';
const columns = [{ name: 'Name', key: 'name', isDefault: true }];
const onSearch = jest.fn();
const wrapper = mountWithContexts(
<DataToolbar
id={`${QS_CONFIG.namespace}-list-toolbar`}
clearAllFilters={() => {}}
collapseListedFiltersBreakpoint="md"
>
<DataToolbarContent>
<Search qsConfig={QS_CONFIG} columns={columns} onSearch={onSearch} />
</DataToolbarContent>
</DataToolbar>
);
wrapper.find(searchTextInput).instance().value = 'test-321';
wrapper.find(searchTextInput).simulate('change');
wrapper.find(searchButton).simulate('click');
expect(onSearch).toHaveBeenCalledTimes(1);
expect(onSearch).toBeCalledWith('name__icontains', 'test-321');
});
});

View File

@@ -2,9 +2,27 @@ import React from 'react';
import { mountWithContexts } from '@testUtils/enzymeHelpers';
import { act } from 'react-dom/test-utils';
import { createMemoryHistory } from 'history';
import { HostsAPI } from '@api';
import InventoryHostGroups from './InventoryHostGroups';
jest.mock('@api');
HostsAPI.readAllGroups.mockResolvedValue({
data: {
count: 1,
results: [
{
id: 1,
url: 'www.google.com',
summary_fields: {
inventory: { id: 1, name: 'foo' },
user_capabilities: { edit: true },
},
name: 'Bar',
},
],
},
});
HostsAPI.readGroupsOptions.mockResolvedValue({ data: { actions: {} } });
describe('<InventoryHostGroups />', () => {
test('initially renders successfully', async () => {

View File

@@ -9,8 +9,8 @@ import {
TextListItem,
TextListItemVariants,
TextListVariants,
Label,
} from '@patternfly/react-core';
import styled from 'styled-components';
import { t } from '@lingui/macro';
import AlertModal from '@components/AlertModal';
@@ -18,19 +18,18 @@ import { CardBody, CardActionsRow } from '@components/Card';
import ContentError from '@components/ContentError';
import ContentLoading from '@components/ContentLoading';
import CredentialChip from '@components/CredentialChip';
import { DetailList, Detail, UserDateDetail } from '@components/DetailList';
import {
Detail,
DetailList,
DeletedDetail,
UserDateDetail,
} from '@components/DetailList';
import DeleteButton from '@components/DeleteButton';
import ErrorDetail from '@components/ErrorDetail';
import LaunchButton from '@components/LaunchButton';
import { VariablesDetail } from '@components/CodeMirrorInput';
import { JobTemplatesAPI } from '@api';
const MissingDetail = styled(Detail)`
dd& {
color: red;
}
`;
function JobTemplateDetail({ i18n, template }) {
const {
ask_inventory_on_launch,
@@ -55,6 +54,9 @@ function JobTemplateDetail({ i18n, template }) {
use_fact_cache,
url,
verbosity,
webhook_service,
related: { webhook_receiver },
webhook_key,
} = template;
const [contentError, setContentError] = useState(null);
const [deletionError, setDeletionError] = useState(null);
@@ -133,10 +135,6 @@ function JobTemplateDetail({ i18n, template }) {
</TextList>
);
const renderMissingDataDetail = value => (
<MissingDetail label={value} value={i18n._(t`Deleted`)} />
);
const inventoryValue = (kind, id) => {
const inventorykind = kind === 'smart' ? 'smart_inventory' : 'inventory';
@@ -180,7 +178,7 @@ function JobTemplateDetail({ i18n, template }) {
}
/>
) : (
renderMissingDataDetail(i18n._(t`Project`))
<DeletedDetail label={i18n._(t`Organization`)} />
)}
{summary_fields.inventory ? (
<Detail
@@ -191,8 +189,9 @@ function JobTemplateDetail({ i18n, template }) {
)}
/>
) : (
!ask_inventory_on_launch &&
renderMissingDataDetail(i18n._(t`Inventory`))
!ask_inventory_on_launch && (
<DeletedDetail label={i18n._(t`Inventory`)} />
)
)}
{summary_fields.project ? (
<Detail
@@ -204,7 +203,7 @@ function JobTemplateDetail({ i18n, template }) {
}
/>
) : (
renderMissingDataDetail(i18n._(t`Project`))
<DeletedDetail label={i18n._(t`Project`)} />
)}
<Detail
label={i18n._(t`Source Control Branch`)}
@@ -245,6 +244,35 @@ function JobTemplateDetail({ i18n, template }) {
/>
</React.Fragment>
)}
{webhook_service && (
<Detail
label={i18n._(t`Webhook Service`)}
value={
webhook_service === 'github'
? i18n._(t`GitHub`)
: i18n._(t`GitLab`)
}
/>
)}
{webhook_receiver && (
<Detail
label={i18n._(t`Webhook URL`)}
value={`${document.location.origin}${webhook_receiver}`}
/>
)}
<Detail label={i18n._(t`Webhook Key`)} value={webhook_key} />
{summary_fields.webhook_credential && (
<Detail
label={i18n._(t`Webhook Credential`)}
value={
<Link
to={`/credentials/${summary_fields.webhook_credential.id}/details`}
>
<Label>{summary_fields.webhook_credential.name}</Label>
</Link>
}
/>
)}
{renderOptionsField && (
<Detail label={i18n._(t`Options`)} value={renderOptions} />
)}

View File

@@ -30,7 +30,6 @@ describe('<JobTemplateDetail />', () => {
afterEach(() => {
jest.clearAllMocks();
});
test('should render successfully with missing summary fields', async () => {
await act(async () => {
wrapper = mountWithContexts(
@@ -138,4 +137,16 @@ describe('<JobTemplateDetail />', () => {
el => el.length === 0
);
});
test('webhook fields should render properly', () => {
expect(wrapper.find('Detail[label="Webhook Service"]').length).toBe(1);
expect(wrapper.find('Detail[label="Webhook Service"]').prop('value')).toBe(
'GitHub'
);
expect(wrapper.find('Detail[label="Webhook URL"]').length).toBe(1);
expect(wrapper.find('Detail[label="Webhook URL"]').prop('value')).toContain(
'api/v2/job_templates/7/github/'
);
expect(wrapper.find('Detail[label="Webhook Key"]').length).toBe(1);
expect(wrapper.find('Detail[label="Webhook Credential"]').length).toBe(1);
});
});

View File

@@ -1,174 +1,182 @@
{
"id": 7,
"type": "job_template",
"url": "/api/v2/job_templates/7/",
"related": {
"named_url": "/api/v2/job_templates/Mike's JT/",
"created_by": "/api/v2/users/1/",
"modified_by": "/api/v2/users/1/",
"labels": "/api/v2/job_templates/7/labels/",
"inventory": "/api/v2/inventories/1/",
"project": "/api/v2/projects/6/",
"extra_credentials": "/api/v2/job_templates/7/extra_credentials/",
"credentials": "/api/v2/job_templates/7/credentials/",
"last_job": "/api/v2/jobs/12/",
"jobs": "/api/v2/job_templates/7/jobs/",
"schedules": "/api/v2/job_templates/7/schedules/",
"activity_stream": "/api/v2/job_templates/7/activity_stream/",
"launch": "/api/v2/job_templates/7/launch/",
"notification_templates_started": "/api/v2/job_templates/7/notification_templates_started/",
"notification_templates_success": "/api/v2/job_templates/7/notification_templates_success/",
"notification_templates_error": "/api/v2/job_templates/7/notification_templates_error/",
"access_list": "/api/v2/job_templates/7/access_list/",
"survey_spec": "/api/v2/job_templates/7/survey_spec/",
"object_roles": "/api/v2/job_templates/7/object_roles/",
"instance_groups": "/api/v2/job_templates/7/instance_groups/",
"slice_workflow_jobs": "/api/v2/job_templates/7/slice_workflow_jobs/",
"copy": "/api/v2/job_templates/7/copy/"
},
"summary_fields": {
"inventory": {
"id": 1,
"name": "Mike's Inventory",
"description": "",
"has_active_failures": false,
"total_hosts": 1,
"hosts_with_active_failures": 0,
"total_groups": 0,
"groups_with_active_failures": 0,
"has_inventory_sources": false,
"total_inventory_sources": 0,
"inventory_sources_with_failures": 0,
"organization_id": 1,
"kind": ""
},
"project": {
"id": 6,
"name": "Mike's Project",
"description": "",
"status": "successful",
"scm_type": "git"
},
"last_job": {
"id": 12,
"name": "Mike's JT",
"description": "",
"finished": "2019-10-01T14:34:35.142483Z",
"status": "successful",
"failed": false
},
"last_update": {
"id": 12,
"name": "Mike's JT",
"description": "",
"status": "successful",
"failed": false
},
"created_by": {
"id": 1,
"username": "admin",
"first_name": "",
"last_name": ""
},
"modified_by": {
"id": 1,
"username": "admin",
"first_name": "",
"last_name": ""
},
"object_roles": {
"admin_role": {
"description": "Can manage all aspects of the job template",
"name": "Admin",
"id": 24
},
"execute_role": {
"description": "May run the job template",
"name": "Execute",
"id": 25
},
"read_role": {
"description": "May view settings for the job template",
"name": "Read",
"id": 26
}
},
"user_capabilities": {
"edit": true,
"delete": true,
"start": true,
"schedule": true,
"copy": true
},
"labels": {
"count": 1,
"results": [
{
"id": 7,
"type": "job_template",
"url": "/api/v2/job_templates/7/",
"related": {
"named_url": "/api/v2/job_templates/Mike's JT/",
"created_by": "/api/v2/users/1/",
"modified_by": "/api/v2/users/1/",
"labels": "/api/v2/job_templates/7/labels/",
"inventory": "/api/v2/inventories/1/",
"project": "/api/v2/projects/6/",
"extra_credentials": "/api/v2/job_templates/7/extra_credentials/",
"credentials": "/api/v2/job_templates/7/credentials/",
"last_job": "/api/v2/jobs/12/",
"jobs": "/api/v2/job_templates/7/jobs/",
"schedules": "/api/v2/job_templates/7/schedules/",
"activity_stream": "/api/v2/job_templates/7/activity_stream/",
"launch": "/api/v2/job_templates/7/launch/",
"notification_templates_started": "/api/v2/job_templates/7/notification_templates_started/",
"notification_templates_success": "/api/v2/job_templates/7/notification_templates_success/",
"notification_templates_error": "/api/v2/job_templates/7/notification_templates_error/",
"access_list": "/api/v2/job_templates/7/access_list/",
"survey_spec": "/api/v2/job_templates/7/survey_spec/",
"object_roles": "/api/v2/job_templates/7/object_roles/",
"instance_groups": "/api/v2/job_templates/7/instance_groups/",
"slice_workflow_jobs": "/api/v2/job_templates/7/slice_workflow_jobs/",
"copy": "/api/v2/job_templates/7/copy/",
"webhook_receiver": "/api/v2/job_templates/7/github/"
},
"summary_fields": {
"inventory": {
"id": 1,
"name": "Mike's Inventory",
"description": "",
"has_active_failures": false,
"total_hosts": 1,
"hosts_with_active_failures": 0,
"total_groups": 0,
"groups_with_active_failures": 0,
"has_inventory_sources": false,
"total_inventory_sources": 0,
"inventory_sources_with_failures": 0,
"organization_id": 1,
"kind": ""
},
"project": {
"id": 6,
"name": "Mike's Project",
"description": "",
"status": "successful",
"scm_type": "git"
},
"last_job": {
"id": 12,
"name": "Mike's JT",
"description": "",
"finished": "2019-10-01T14:34:35.142483Z",
"status": "successful",
"failed": false
},
"last_update": {
"id": 12,
"name": "Mike's JT",
"description": "",
"status": "successful",
"failed": false
},
"created_by": {
"id": 1,
"username": "admin",
"first_name": "",
"last_name": ""
},
"modified_by": {
"id": 1,
"username": "admin",
"first_name": "",
"last_name": ""
},
"object_roles": {
"admin_role": {
"description": "Can manage all aspects of the job template",
"name": "Admin",
"id": 24
},
"execute_role": {
"description": "May run the job template",
"name": "Execute",
"id": 25
},
"read_role": {
"description": "May view settings for the job template",
"name": "Read",
"id": 26
}
},
"user_capabilities": {
"edit": true,
"delete": true,
"start": true,
"schedule": true,
"copy": true
},
"labels": {
"count": 1,
"results": [{
"id": 91,
"name": "L_91o2"
}
]
},
"survey": {
"title": "",
"description": ""
},
"recent_jobs": [
{
"id": 12,
"status": "successful",
"finished": "2019-10-01T14:34:35.142483Z",
"type": "job"
}
],
"extra_credentials": [],
"credentials": [
{
"id": 1, "kind": "ssh" , "name": "Credential 1"
}]
},
{
"id": 2, "kind": "awx" , "name": "Credential 2"
"survey": {
"title": "",
"description": ""
},
"recent_jobs": [{
"id": 12,
"status": "successful",
"finished": "2019-10-01T14:34:35.142483Z",
"type": "job"
}],
"extra_credentials": [],
"credentials": [{
"id": 1,
"kind": "ssh",
"name": "Credential 1"
},
{
"id": 2,
"kind": "awx",
"name": "Credential 2"
}
],
"webhook_credential": {
"id": "1",
"name": "Webhook Credential"
}
]
},
"created": "2019-09-30T16:18:34.564820Z",
"modified": "2019-10-01T14:47:31.818431Z",
"name": "Mike's JT",
"description": "",
"job_type": "run",
"inventory": 1,
"project": 6,
"playbook": "ping.yml",
"scm_branch": "Foo branch",
"forks": 0,
"limit": "",
"verbosity": 0,
"extra_vars": "",
"job_tags": "T_100,T_200",
"force_handlers": false,
"skip_tags": "S_100,S_200",
"start_at_task": "",
"timeout": 0,
"use_fact_cache": true,
"last_job_run": "2019-10-01T14:34:35.142483Z",
"last_job_failed": false,
"next_job_run": null,
"status": "successful",
"host_config_key": "",
"ask_scm_branch_on_launch": false,
"ask_diff_mode_on_launch": false,
"ask_variables_on_launch": false,
"ask_limit_on_launch": false,
"ask_tags_on_launch": false,
"ask_skip_tags_on_launch": false,
"ask_job_type_on_launch": false,
"ask_verbosity_on_launch": false,
"ask_inventory_on_launch": false,
"ask_credential_on_launch": false,
"survey_enabled": true,
"become_enabled": false,
"diff_mode": false,
"allow_simultaneous": false,
"custom_virtualenv": null,
"job_slice_count": 1
}
},
"created": "2019-09-30T16:18:34.564820Z",
"modified": "2019-10-01T14:47:31.818431Z",
"name": "Mike's JT",
"description": "",
"job_type": "run",
"inventory": 1,
"project": 6,
"playbook": "ping.yml",
"scm_branch": "Foo branch",
"forks": 0,
"limit": "",
"verbosity": 0,
"extra_vars": "",
"job_tags": "T_100,T_200",
"force_handlers": false,
"skip_tags": "S_100,S_200",
"start_at_task": "",
"timeout": 0,
"use_fact_cache": true,
"last_job_run": "2019-10-01T14:34:35.142483Z",
"last_job_failed": false,
"next_job_run": null,
"status": "successful",
"host_config_key": "",
"ask_scm_branch_on_launch": false,
"ask_diff_mode_on_launch": false,
"ask_variables_on_launch": false,
"ask_limit_on_launch": false,
"ask_tags_on_launch": false,
"ask_skip_tags_on_launch": false,
"ask_job_type_on_launch": false,
"ask_verbosity_on_launch": false,
"ask_inventory_on_launch": false,
"ask_credential_on_launch": false,
"survey_enabled": true,
"become_enabled": false,
"diff_mode": false,
"allow_simultaneous": false,
"custom_virtualenv": null,
"job_slice_count": 1,
"webhook_credential": 1,
"webhook_key": "asertdyuhjkhgfd234567kjgfds",
"webhook_service": "github"
}

View File

@@ -364,18 +364,16 @@ def main():
# End backwards compatibility
state = module.params.get('state')
# Attempt to look up the related items the user specified (these will fail the module if not found)
if organization:
org_id = module.resolve_name_to_id('organizations', organization)
if user:
user_id = module.resolve_name_to_id('users', user)
if team:
team_id = module.resolve_name_to_id('teams', team)
# Deprecation warnings
for legacy_input in OLD_INPUT_NAMES:
if module.params.get(legacy_input) is not None:
module.deprecate(msg='{0} parameter has been deprecated, please use inputs instead'.format(legacy_input), version="3.6")
if kind:
module.deprecate(msg='The kind parameter has been deprecated, please use credential_type instead', version="3.6")
cred_type_id = module.resolve_name_to_id('credential_types', credential_type if credential_type else KIND_CHOICES[kind])
if organization:
org_id = module.resolve_name_to_id('organizations', organization)
# Attempt to look up the object based on the provided name, credential type and optional organization
lookup_data = {
@@ -384,13 +382,23 @@ def main():
}
if organization:
lookup_data['organization'] = org_id
credential = module.get_one('credentials', **{'data': lookup_data})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(credential)
# Attempt to look up the related items the user specified (these will fail the module if not found)
if user:
user_id = module.resolve_name_to_id('users', user)
if team:
team_id = module.resolve_name_to_id('teams', team)
# Create credential input from legacy inputs
credential_inputs = {}
for legacy_input in OLD_INPUT_NAMES:
if module.params.get(legacy_input) is not None:
module.deprecate(msg='{0} parameter has been deprecated, please use inputs instead'.format(legacy_input), version="3.6")
credential_inputs[legacy_input] = module.params.get(legacy_input)
if inputs:
credential_inputs.update(inputs)
@@ -415,14 +423,10 @@ def main():
if team:
credential_fields['team'] = team_id
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(credential)
elif state == 'present':
# If the state was present we can let the module build or update the existing group, this will return on its own
module.create_or_update_if_needed(
credential, credential_fields, endpoint='credentials', item_type='credential'
)
# If the state was present we can let the module build or update the existing group, this will return on its own
module.create_or_update_if_needed(
credential, credential_fields, endpoint='credentials', item_type='credential'
)
if __name__ == '__main__':

View File

@@ -144,9 +144,9 @@ def main():
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(credential_type)
elif state == 'present':
# If the state was present and we can let the module build or update the existing credential type, this will return on its own
module.create_or_update_if_needed(credential_type, credential_type_params, endpoint='credential_types', item_type='credential type')
# If the state was present and we can let the module build or update the existing credential type, this will return on its own
module.create_or_update_if_needed(credential_type, credential_type_params, endpoint='credential_types', item_type='credential type')
if __name__ == '__main__':

View File

@@ -123,6 +123,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(group)
# Create the data that gets sent for create and update
group_fields = {
'name': new_name if new_name else name,
@@ -149,15 +153,11 @@ def main():
if id_list:
association_fields[relationship] = id_list
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(group)
elif state == 'present':
# If the state was present we can let the module build or update the existing group, this will return on its own
module.create_or_update_if_needed(
group, group_fields, endpoint='groups', item_type='group',
associations=association_fields
)
# If the state was present we can let the module build or update the existing group, this will return on its own
module.create_or_update_if_needed(
group, group_fields, endpoint='groups', item_type='group',
associations=association_fields
)
if __name__ == '__main__':

View File

@@ -119,6 +119,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(host)
# Create the data that gets sent for create and update
host_fields = {
'name': new_name if new_name else name,
@@ -130,12 +134,8 @@ def main():
if variables is not None:
host_fields['variables'] = json.dumps(variables)
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(host)
elif state == 'present':
# If the state was present and we can let the module build or update the existing host, this will return on its own
module.create_or_update_if_needed(host, host_fields, endpoint='hosts', item_type='host')
# If the state was present and we can let the module build or update the existing host, this will return on its own
module.create_or_update_if_needed(host, host_fields, endpoint='hosts', item_type='host')
if __name__ == '__main__':

View File

@@ -119,6 +119,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(inventory)
# Create the data that gets sent for create and update
inventory_fields = {
'name': name,
@@ -131,16 +135,12 @@ def main():
if variables is not None:
inventory_fields['variables'] = json.dumps(variables)
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(inventory)
elif state == 'present':
# We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.
if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart':
module.fail_json(msg='You cannot turn a regular inventory into a "smart" inventory.')
# We need to perform a check to make sure you are not trying to convert a regular inventory into a smart one.
if inventory and inventory['kind'] == '' and inventory_fields['kind'] == 'smart':
module.fail_json(msg='You cannot turn a regular inventory into a "smart" inventory.')
# If the state was present and we can let the module build or update the existing inventory, this will return on its own
module.create_or_update_if_needed(inventory, inventory_fields, endpoint='inventories', item_type='inventory')
# If the state was present and we can let the module build or update the existing inventory, this will return on its own
module.create_or_update_if_needed(inventory, inventory_fields, endpoint='inventories', item_type='inventory')
if __name__ == '__main__':

View File

@@ -198,6 +198,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(inventory_source)
# Create the data that gets sent for create and update
inventory_source_fields = {
'name': new_name if new_name else name,
@@ -234,12 +238,8 @@ def main():
if state == 'present' and not inventory_source and not inventory_source_fields['source']:
module.fail_json(msg="If creating a new inventory source, the source param must be present")
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(inventory_source)
elif state == 'present':
# If the state was present we can let the module build or update the existing inventory_source, this will return on its own
module.create_or_update_if_needed(inventory_source, inventory_source_fields, endpoint='inventory_sources', item_type='inventory source')
# If the state was present we can let the module build or update the existing inventory_source, this will return on its own
module.create_or_update_if_needed(inventory_source, inventory_source_fields, endpoint='inventory_sources', item_type='inventory source')
if __name__ == '__main__':

View File

@@ -414,6 +414,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
# Create the data that gets sent for create and update
new_fields = {}
new_fields['name'] = new_name if new_name else name
@@ -490,23 +494,19 @@ def main():
module._encrypted_changed_warning('survey_spec', existing_item, warning=True)
on_change = update_survey
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
elif state == 'present':
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item, new_fields,
endpoint='job_templates', item_type='job_template',
associations={
'credentials': credentials_ids,
'labels': labels_ids,
'notification_templates_success': notification_success_ids,
'notification_templates_started': notification_start_ids,
'notification_templates_error': notification_error_ids
},
on_create=on_change, on_update=on_change,
)
if __name__ == '__main__':

View File

@@ -403,6 +403,11 @@ def main():
messages = module.params.get('messages')
state = module.params.get('state')
# Deprecation warnings
for legacy_input in OLD_INPUT_NAMES:
if module.params.get(legacy_input) is not None:
module.deprecate(msg='{0} parameter has been deprecated, please use notification_configuration instead.'.format(legacy_input), version="3.6")
# Attempt to look up the related items the user specified (these will fail the module if not found)
organization_id = None
if organization:
@@ -416,11 +421,14 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
# Create notification_configuration from legacy inputs
final_notification_configuration = {}
for legacy_input in OLD_INPUT_NAMES:
if module.params.get(legacy_input) is not None:
module.deprecate(msg='{0} parameter has been deprecated, please use notification_configuration instead.'.format(legacy_input), version="3.6")
final_notification_configuration[legacy_input] = module.params.get(legacy_input)
# Give anything in notification_configuration precedence over the individual inputs
if notification_configuration is not None:
@@ -440,17 +448,13 @@ def main():
if messages is not None:
new_fields['messages'] = messages
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
elif state == 'present':
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item, new_fields,
endpoint='notification_templates', item_type='notification_template',
associations={
}
)
if __name__ == '__main__':

View File

@@ -108,6 +108,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(organization)
# Create the data that gets sent for create and update
org_fields = {'name': name}
if description is not None:
@@ -117,12 +121,8 @@ def main():
if max_hosts is not None:
org_fields['max_hosts'] = max_hosts
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(organization)
elif state == 'present':
# If the state was present and we can let the module build or update the existing organization, this will return on its own
module.create_or_update_if_needed(organization, org_fields, endpoint='organizations', item_type='organization')
if __name__ == '__main__':

View File

@@ -236,6 +236,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(project)
# Create the data that gets sent for create and update
project_fields = {
'name': name,
@@ -260,7 +264,7 @@ def main():
if scm_type == '':
project_fields['local_path'] = local_path
if state != 'absent' and (scm_update_cache_timeout != 0 and scm_update_on_launch is not True):
if scm_update_cache_timeout != 0 and scm_update_on_launch is not True:
module.warn('scm_update_cache_timeout will be ignored since scm_update_on_launch was not set to true')
# If the project is not a manual project, register our on_change method
@@ -269,12 +273,8 @@ def main():
if wait and scm_type != '':
on_change = wait_for_project_update
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(project)
elif state == 'present':
# If the state was present and we can let the module build or update the existing project, this will return on its own
module.create_or_update_if_needed(project, project_fields, endpoint='projects', item_type='project', on_create=on_change, on_update=on_change)
if __name__ == '__main__':

View File

@@ -102,6 +102,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(team)
# Create the data that gets sent for create and update
team_fields = {
'name': new_name if new_name else name,
@@ -110,12 +114,8 @@ def main():
if description is not None:
team_fields['description'] = description
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(team)
elif state == 'present':
# If the state was present and we can let the module build or update the existing team, this will return on its own
module.create_or_update_if_needed(team, team_fields, endpoint='teams', item_type='team')
if __name__ == '__main__':

View File

@@ -147,6 +147,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
# Create the data that gets sent for create and update
new_fields = {}
if username:
@@ -164,12 +168,8 @@ def main():
if password:
new_fields['password'] = password
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
elif state == 'present':
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(existing_item, new_fields, endpoint='users', item_type='user')
if __name__ == '__main__':

View File

@@ -175,6 +175,13 @@ def main():
organization_id = module.resolve_name_to_id('organizations', organization)
search_fields['organization'] = new_fields['organization'] = organization_id
# Attempt to look up an existing item based on the provided data
existing_item = module.get_one('workflow_job_templates', **{'data': search_fields})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
inventory = module.params.get('inventory')
if inventory:
new_fields['inventory'] = module.resolve_name_to_id('inventories', inventory)
@@ -183,9 +190,6 @@ def main():
if webhook_credential:
new_fields['webhook_credential'] = module.resolve_name_to_id('webhook_credential', webhook_credential)
# Attempt to look up an existing item based on the provided data
existing_item = module.get_one('workflow_job_templates', **{'data': search_fields})
# Create the data that gets sent for create and update
new_fields['name'] = new_name if new_name else name
for field_name in (
@@ -213,16 +217,12 @@ def main():
module._encrypted_changed_warning('survey_spec', existing_item, warning=True)
on_change = update_survey
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
elif state == 'present':
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item, new_fields,
endpoint='workflow_job_templates', item_type='workflow_job_template',
on_create=on_change, on_update=on_change
)
if __name__ == '__main__':

View File

@@ -218,6 +218,13 @@ def main():
workflow_job_template_id = wfjt_data['id']
search_fields['workflow_job_template'] = new_fields['workflow_job_template'] = workflow_job_template_id
# Attempt to look up an existing item based on the provided data
existing_item = module.get_one('workflow_job_template_nodes', **{'data': search_fields})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
unified_job_template = module.params.get('unified_job_template')
if unified_job_template:
new_fields['unified_job_template'] = module.resolve_name_to_id('unified_job_templates', unified_job_template)
@@ -226,9 +233,6 @@ def main():
if inventory:
new_fields['inventory'] = module.resolve_name_to_id('inventory', inventory)
# Attempt to look up an existing item based on the provided data
existing_item = module.get_one('workflow_job_template_nodes', **{'data': search_fields})
# Create the data that gets sent for create and update
for field_name in (
'identifier', 'extra_data', 'scm_branch', 'job_type', 'job_tags', 'skip_tags',
@@ -262,16 +266,12 @@ def main():
# In the case of a new object, the utils need to know it is a node
new_fields['type'] = 'workflow_job_template_node'
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
elif state == 'present':
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item, new_fields,
endpoint='workflow_job_template_nodes', item_type='workflow_job_template_node',
associations=association_fields
)
if __name__ == '__main__':

View File

@@ -40,17 +40,13 @@
that:
- "result is changed"
- name: Delete the source inventory
- name: Delete the inventory source with an invalid cred, source_project, and source_script specified
tower_inventory_source:
name: "{{ openstack_inv_source }}"
description: Source for Test inventory
inventory: "{{ openstack_inv }}"
credential: "{{ openstack_cred }}"
overwrite: true
update_on_launch: true
source_vars:
private: false
source: openstack
credential: "Does Not Exit"
source_project: "Does Not Exist"
source_script: "Does Not Exist"
state: absent
- assert:

View File

@@ -147,10 +147,11 @@
- name: Delete Job Template 1
tower_job_template:
name: "{{ jt1 }}"
project: "{{ proj1 }}"
inventory: Demo Inventory
playbook: hello_world.yml
job_type: run
project: "Does Not Exist"
inventory: "Does Not Exist"
webhook_credential: "Does Not Exist"
state: absent
register: result

View File

@@ -104,9 +104,11 @@
unified_job_template: "{{ jt1_name }}"
workflow: "{{ wfjt_name }}"
- name: Delete a workflow job template
- name: Delete a workflow job template with an invalid inventory and webhook_credential
tower_workflow_job_template:
name: "{{ wfjt_name }}"
inventory: "Does Not Exist"
webhook_credential: "Does Not Exist"
state: absent
register: result

View File

@@ -188,6 +188,10 @@ def main():
}
})
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
# Create the data that gets sent for create and update
new_fields = {}
{% for option in item['json']['actions']['POST'] %}
@@ -203,20 +207,16 @@ def main():
{% endif %}
{% endfor %}
if state == 'absent':
# If the state was absent we can let the module delete it if needed, the module will handle exiting from this
module.delete_if_needed(existing_item)
elif state == 'present':
# If the state was present and we can let the module build or update the existing item, this will return on its own
module.create_or_update_if_needed(
existing_item, new_fields,
endpoint='{{ item_type }}', item_type='{{ singular_item_type }}',
associations={
{% for association in associations[item_type] | default([]) %}
'{{ association['endpoint'] }}': {{ association['related_item'] }}_ids,
{% endfor %}
}
)
if __name__ == '__main__':

View File

@@ -552,3 +552,41 @@ class SettingsModify(CustomAction):
except json.decoder.JSONDecodeError:
return False
return True
class HasMonitor(object):
action = 'monitor'
def add_arguments(self, parser, resource_options_parser):
from .options import pk_or_name
parser.choices[self.action].add_argument(
'id',
type=functools.partial(
pk_or_name, None, self.resource, page=self.page
),
help=''
)
def perform(self, **kwargs):
response = self.page.get()
mon = monitor_workflow if response.type == 'workflow_job' else monitor
if not response.failed and response.status != 'successful':
status = mon(
response,
self.page.connection.session,
)
if status:
response.json['status'] = status
if status in ('failed', 'error'):
setattr(response, 'rc', 1)
else:
return 'Unable to monitor finished job'
class JobMonitor(HasMonitor, CustomAction):
resource = 'jobs'
class WorkflowJobMonitor(HasMonitor, CustomAction):
resource = 'workflow_jobs'

View File

@@ -1,2 +1 @@
PyYAML
requests
.

View File

@@ -2,12 +2,6 @@ import os
import glob
import shutil
from setuptools import setup, find_packages, Command
try: # for pip >= 10
from pip._internal.req import parse_requirements
except ImportError: # for pip <= 9.0.3
from pip.req import parse_requirements
requirements = [str(r.req) for r in parse_requirements('requirements.txt', session=False)]
def get_version():
@@ -66,7 +60,10 @@ setup(
'clean': CleanCommand,
},
include_package_data=True,
install_requires=requirements,
install_requires=[
'PyYAML',
'requests',
],
python_requires=">=3.6",
extras_require={
'formatting': ['jq'],

View File

@@ -130,7 +130,7 @@ override that location by setting the variable `custom_venvs_path`.
You can use the variables file like so:
$ ansible-playbook install.yml --extra-vars "@venv_vars.yaml"
$ ansible-playbook -i inventory install.yml --extra-vars "@venv_vars.yaml"
Once the AWX API is available, you will need to update the `CUSTOM_VENV_PATHS`
setting as described in `Preparing a New Custom Virtualenv`.
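For reference, a variables file along these lines could be passed in; `custom_venvs_path` is the variable described above, while the `custom_venvs` entry and its fields are only an illustrative assumption and should be checked against the custom virtualenv documentation:
```
---
# optional: override where the custom virtualenvs get created
custom_venvs_path: /opt/custom-venvs

# illustrative assumption: one extra virtualenv with additional Python modules
custom_venvs:
  - name: dns_team
    python_modules:
      - dnspython
      - netaddr
```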

View File

@@ -1,8 +1,12 @@
# Transition to Ansible Inventory Plugins
Inventory updates have changed from using scripts, which are vendored as executable Python scripts, to using dynamically-generated YAML files which conform to the specifications of the `auto` inventory plugin. These are then parsed by their respective inventory plugin.
Inventory updates have changed from using deprecated inventory scripts to using dynamically-generated YAML files which are parsed by their respective inventory plugin.
The major organizational change is that the inventory plugins are part of the Ansible core distribution, whereas the same logic used to be a part of AWX source.
In Ansible 2.8, the inventory plugins which are part of the Ansible core distribution were used.
This only applied to a few select sources.
In all other circumstances, inventory imports make use of the inventory plugin from vendored collections.
Those collections are downloaded from Ansible Galaxy at packaging time, when the container image is built.
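As a rough illustration (not taken verbatim from AWX), such a generated file is simply an inventory plugin configuration whose `plugin:` key tells Ansible's `auto` plugin which collection plugin to load:
```
# hypothetical file handed to ansible-inventory for an Amazon EC2 source
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
```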
## Prior Background for Transition
@@ -14,11 +18,10 @@ AWX used to maintain logic that parsed `.ini` inventory file contents, in additi
The CLI entry point `ansible-inventory` was introduced in Ansible 2.4. In Tower 3.2, inventory imports began running this command as an intermediary between the inventory and the import's logic to save content to the database. Using `ansible-inventory` eliminates the need to maintain source-specific logic, relying on Ansible's code instead. This also enables consistent data structure output from `ansible-inventory`. There are many valid structures that a script can provide, but the output from `ansible-inventory` will always be the same, which simplifies the AWX logic that parses the content. This is why even scripts must be run through the `ansible-inventory` CLI.
Along with this switchover, a backported version of `ansible-inventory` was provided, which supports Ansible versions 2.2 and 2.3.
### Removal of Backport
Along with the `ansible-inventory` switchover, a backported version of `ansible-inventory` was provided, which supported Ansible versions 2.2 and 2.3.
In AWX 3.0.0 (and Tower 3.5), the backport of `ansible-inventory` was removed, and support for using custom virtual environments was added. This set the minimum version of Ansible necessary to run _any_ inventory update to 2.4.
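Concretely, the command that every import is funneled through can also be run by hand; a minimal invocation (illustrative only, not the exact arguments AWX constructs) looks like:
```
ansible-inventory -i my_source.yml --list --export
```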
@@ -30,9 +33,21 @@ In AWX 4.0.0 (and Tower 3.5) inventory source types start to switch over to plug
To see in which version the plugin transition happens for a given source, look in `awx/main/models/inventory.py` for the source name as a subclass of `PluginFileInjector`; its `initial_version` is the first Ansible version that was deemed (via testing) to have sufficient content parity in what the inventory plugin returns. For example, `openstack` begins using the inventory plugin in Ansible 2.8, so an OpenStack inventory update run with Ansible 2.7.x or lower will still use the script.
The eventual goal is for all source types to have moved to plugins. For any given source, after the `initial_version` for plugin use is higher than the lowest supported Ansible version, the script can be removed and the logic for script credential injection will also be removed.
At some point, scripts will be removed and the script-related (for credentials and configuration) logic will also be removed.
For example, after AWX no longer supports Ansible 2.7, the script `awx/plugins/openstack_inventory.py` will be removed.
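Schematically, the pattern described above looks something like the following (a simplified sketch, not the actual AWX source):
```
class PluginFileInjector(object):
    # first Ansible version whose inventory plugin has sufficient content parity;
    # for older Ansible versions the legacy script is used instead
    initial_version = None


class openstack(PluginFileInjector):
    initial_version = '2.8'
```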
### Management of Collections
Collections are used for inventory imports starting in Ansible 2.9, and each collection has its own versioning independently from Ansible.
Versions for those collections are set in the requirements file `requirements/collections_requirements.yml`.
The location of vendored collections is set by the file-only setting `INVENTORY_COLLECTIONS_ROOT`.
For development purposes, this can be changed so that you can test against development versions of those collections.
Instructions for doing this are in `tools/collections`.
If, for some reason, you need to change the version of a particular collection used in inventory imports,
you can use the `ansible-galaxy` tool to update the collection inside of the `INVENTORY_COLLECTIONS_ROOT`.
Note that the logic for building the inventory file is written and tested only for the version pinned in the requirements file.
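For example, a hypothetical version bump of the foreman collection into the vendored location would look something like:
```
# version and path are illustrative; --force replaces the already-vendored copy
ansible-galaxy collection install theforeman.foreman:0.8.0 --force \
    -p /var/lib/awx/vendor/inventory_collections
```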
## Changes to Expect in Imports
@@ -54,11 +69,6 @@ More `hostvars` will appear if the inventory plugins are used. To maintain backw
A small number of `hostvars` will be lost because of general deprecation needs.
#### Host Names
In many cases, the host names will change. In all cases, accurate host tracking will still be maintained via the host `instance_id`.
## Writing Your Own Inventory File
If you do not want any of this compatibility-related functionality, then you can add an SCM inventory source that points to your own file. You can also apply a credential of a `managed_by_tower` type to that inventory source that matches the credential you are using, as long as it is not `gce` or `openstack`.
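A sketch of that setup with the `tower_inventory_source` module (all names and paths below are placeholders; check the module documentation for the exact parameters):
```
- tower_inventory_source:
    name: My own inventory file
    inventory: Demo Inventory
    source: scm
    source_project: My Project              # project containing the file
    source_path: inventories/hosts.ec2.yml  # path within that project
    credential: My Cloud Credential         # any managed_by_tower type except gce or openstack
    state: present
```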

View File

@@ -1,4 +1,4 @@
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python3"
[all:vars]
@@ -66,6 +66,8 @@ postgres_data_dir="~/.awx/pgdocker"
host_port=80
host_port_ssl=443
#ssl_certificate=
# Optional key file
#ssl_certificate_key=
docker_compose_dir="~/.awx/awxcompose"
# Required for Openshift when building the image on your own
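For reference, a deployment that terminates SSL with a separate key file would set both of the variables shown above (the paths here are placeholders):
```
ssl_certificate=/etc/ssl/certs/awx.pem
ssl_certificate_key=/etc/ssl/private/awx.key
```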

View File

@@ -14,6 +14,8 @@ STATIC_ROOT = '/var/lib/awx/public/static'
PROJECTS_ROOT = '/var/lib/awx/projects'
INVENTORY_COLLECTIONS_ROOT = '/var/lib/awx/vendor/inventory_collections'
JOBOUTPUT_ROOT = '/var/lib/awx/job_status'
SECRET_KEY = get_secret()

View File

@@ -50,7 +50,7 @@ stderr_logfile_maxbytes=0
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autostart = true
autorestart = true
stopwaitsecs = 1
stopwaitsecs = 5
stopsignal=KILL
stopasgroup=true
killasgroup=true

View File

@@ -74,8 +74,10 @@ ADD requirements/requirements_ansible.txt \
requirements/requirements.txt \
requirements/requirements_tower_uninstall.txt \
requirements/requirements_git.txt \
requirements/collections_requirements.yml \
/tmp/requirements/
RUN cd /tmp && VENV_BASE="/var/lib/awx/venv" make requirements_awx requirements_ansible_py3
RUN cd /tmp && COLLECTION_BASE="/var/lib/awx/vendor/inventory_collections" make requirements_collections
COPY {{ awx_sdist_file }} /tmp/{{ awx_sdist_file }}
RUN echo "{{ awx_version }}" > /var/lib/awx/.tower_version && \

View File

@@ -153,6 +153,7 @@ data:
STATIC_ROOT = '/var/lib/awx/public/static'
PROJECTS_ROOT = '/var/lib/awx/projects'
INVENTORY_COLLECTIONS_ROOT = '/var/lib/awx/vendor/inventory_collections'
JOBOUTPUT_ROOT = '/var/lib/awx/job_status'
SECRET_KEY = open('/etc/tower/SECRET_KEY', 'rb').read().strip()
ALLOWED_HOSTS = ['*']

View File

@@ -136,7 +136,7 @@ spec:
- name: {{ kubernetes_deployment_name }}-launch-awx-task
mountPath: "/usr/bin/launch_awx_task.sh"
subPath: "launch_awx_task.sh"
readOnly: true
- name: {{ kubernetes_deployment_name }}-supervisor-web-config
mountPath: "/supervisor.conf"
@@ -212,7 +212,7 @@ spec:
- name: {{ kubernetes_deployment_name }}-launch-awx-task
mountPath: "/usr/bin/launch_awx_task.sh"
subPath: "launch_awx_task.sh"
readOnly: true
- name: {{ kubernetes_deployment_name }}-supervisor-web-config
mountPath: "/supervisor.conf"
@@ -447,8 +447,10 @@ metadata:
{% for key, value in kubernetes_ingress_annotations.items() %}
{{ key }}: {{ value }}
{% endfor %}
{% endif %}
spec:
{% if kubernetes_ingress_hostname is defined %}
{% if kubernetes_ingress_tls_secret is defined %}
tls:
- hosts:

View File

@@ -15,6 +15,12 @@ spec:
imagePullPolicy: Always
command: ["sleep", "infinity"]
volumeMounts:
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
mountPath: "/etc/pki/ca-trust/source/anchors/"
readOnly: true
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
mountPath: "/etc/tower/settings.py"
subPath: settings.py
@@ -51,6 +57,13 @@ spec:
{{ affinity | to_nice_yaml(indent=2) | indent(width=4, indentfirst=True) }}
{% endif %}
volumes:
{% if ca_trust_dir is defined %}
- name: {{ kubernetes_deployment_name }}-ca-trust-dir
hostPath:
path: "{{ ca_trust_dir }}"
type: Directory
{% endif %}
- name: {{ kubernetes_deployment_name }}-application-config
configMap:
name: {{ kubernetes_deployment_name }}-config

View File

@@ -57,7 +57,7 @@ data:
command = rsyslogd -n -i /var/run/awx-rsyslog/rsyslog.pid -f /var/lib/awx/rsyslog/rsyslog.conf
autostart = true
autorestart = true
stopwaitsecs = 1
stopwaitsecs = 5
stopsignal=KILL
stopasgroup=true
killasgroup=true

View File

@@ -1,4 +1,4 @@
---
openshift_oc_config_file: "{{ kubernetes_base_path }}/.kube/config"
openshift_oc_bin: "oc --config={{ openshift_oc_config_file }}"
uwsgi_bash: "scl enable rh-postgresql10"
uwsgi_bash: "bash -c"

View File

@@ -5,55 +5,10 @@
run_once: true
no_log: true
- name: Check for existing Postgres data
stat:
path: "{{ postgres_data_dir }}/pgdata/PG_VERSION"
register: pg_version_file
- name: Record Postgres version
set_fact:
old_pg_version: "{{ lookup('file', postgres_data_dir + '/pgdata/PG_VERSION') }}"
when: pg_version_file.stat.exists
- name: Determine whether to upgrade postgres
set_fact:
upgrade_postgres: "{{ old_pg_version is defined and old_pg_version == '9.6' }}"
- name: Set up new postgres paths pre-upgrade
file:
state: directory
path: "{{ item }}"
recurse: true
when: upgrade_postgres | bool
with_items:
- "{{ postgres_data_dir }}/10/data"
- name: Stop AWX before upgrading postgres
docker_service:
project_src: "{{ docker_compose_dir }}"
stopped: true
when: upgrade_postgres | bool
- name: Upgrade Postgres
shell: |
docker run --rm \
-v {{ postgres_data_dir }}/pgdata:/var/lib/postgresql/9.6/data \
-v {{ postgres_data_dir }}/10/data:/var/lib/postgresql/10/data \
-e PGUSER={{ pg_username }} -e POSTGRES_INITDB_ARGS="-U {{ pg_username }}" \
tianon/postgres-upgrade:9.6-to-10 --username={{ pg_username }}
when: upgrade_postgres | bool
- name: Copy old pg_hba.conf
copy:
src: "{{ postgres_data_dir + '/pgdata/pg_hba.conf' }}"
dest: "{{ postgres_data_dir + '/10/data/' }}"
when: upgrade_postgres | bool
- name: Remove old data directory
file:
path: "{{ postgres_data_dir + '/pgdata' }}"
state: absent
when: compose_start_containers|bool
- import_tasks: upgrade_postgres.yml
when:
- postgres_data_dir is defined
- pg_hostname is not defined
- import_tasks: set_image.yml
- import_tasks: compose.yml

View File

@@ -0,0 +1,50 @@
---
- name: Check for existing Postgres data
stat:
path: "{{ postgres_data_dir }}/pgdata/PG_VERSION"
register: pg_version_file
- name: Record Postgres version
set_fact:
old_pg_version: "{{ lookup('file', postgres_data_dir + '/pgdata/PG_VERSION') }}"
when: pg_version_file.stat.exists
- name: Determine whether to upgrade postgres
set_fact:
upgrade_postgres: "{{ old_pg_version is defined and old_pg_version == '9.6' }}"
- name: Set up new postgres paths pre-upgrade
file:
state: directory
path: "{{ item }}"
recurse: true
when: upgrade_postgres | bool
with_items:
- "{{ postgres_data_dir }}/10/data"
- name: Stop AWX before upgrading postgres
docker_service:
project_src: "{{ docker_compose_dir }}"
stopped: true
when: upgrade_postgres | bool
- name: Upgrade Postgres
shell: |
docker run --rm \
-v {{ postgres_data_dir }}/pgdata:/var/lib/postgresql/9.6/data \
-v {{ postgres_data_dir }}/10/data:/var/lib/postgresql/10/data \
-e PGUSER={{ pg_username }} -e POSTGRES_INITDB_ARGS="-U {{ pg_username }}" \
tianon/postgres-upgrade:9.6-to-10 --username={{ pg_username }}
when: upgrade_postgres | bool
- name: Copy old pg_hba.conf
copy:
src: "{{ postgres_data_dir + '/pgdata/pg_hba.conf' }}"
dest: "{{ postgres_data_dir + '/10/data/' }}"
when: upgrade_postgres | bool
- name: Remove old data directory
file:
path: "{{ postgres_data_dir + '/pgdata' }}"
state: absent
when: compose_start_containers|bool

View File

@@ -38,7 +38,10 @@ services:
{% if ca_trust_dir is defined %}
- "{{ ca_trust_dir +':/etc/pki/ca-trust/source/anchors:ro' }}"
{% endif %}
{% if ssl_certificate is defined %}
{% if (ssl_certificate is defined) and (ssl_certificate_key is defined) %}
- "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
- "{{ ssl_certificate_key +':/etc/nginx/awxweb_key.pem:ro' }}"
{% elif (ssl_certificate is defined) and (ssl_certificate_key is not defined) %}
- "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
{% endif %}
{% if (awx_container_search_domains is defined) and (',' in awx_container_search_domains) %}

View File

@@ -47,7 +47,12 @@ http {
{%endif %}
server {
{% if ssl_certificate is defined %}
{% if (ssl_certificate is defined) and (ssl_certificate_key is defined) %}
listen 8053 ssl;
ssl_certificate /etc/nginx/awxweb.pem;
ssl_certificate_key /etc/nginx/awxweb_key.pem;
{% elif (ssl_certificate is defined) and (ssl_certificate_key is not defined) %}
listen 8053 ssl;
ssl_certificate /etc/nginx/awxweb.pem;

View File

@@ -0,0 +1,18 @@
---
collections:
- name: awx.awx
version: 9.3.0
- name: azure.azcollection
version: 0.1.2
- name: amazon.aws
version: 0.1.1 # version 0.1.0 seems to have gone missing
- name: theforeman.foreman
version: 0.7.0 # contains the inventory plugin, but more patches are needed
- name: google.cloud
version: 0.0.9 # contains PR 167, should be good to go
- name: openstack.cloud
version: 0.0.1-dev85 # earlier had checksum mismatch
- name: community.vmware
version: 0.4.0 # first to contain necessary grouping and filtering features
- name: ovirt.ovirt_collection
version: 1.0.1 # new fix published, should be good to go

View File

@@ -0,0 +1,34 @@
### Inventory Updates Cross-Development with Collections
Inventory updates in production use vendored collections baked into the image,
which are downloaded from Ansible Galaxy in the build steps.
This document gives instructions for short-circuiting that process to speed up development.
Running this script will do a `git clone` for all the relevant collections
into the folder `awx/plugins/collections`.
```
source tools/collections/clone_vendor.sh
```
After this is completed, you must change the path where the server looks
for the vendored inventory collections.
Add this line to your local settings:
```
INVENTORY_COLLECTIONS_ROOT = '/awx_devel/awx/plugins/collections'
```
Then when you run an inventory update of a particular type, it should
use the cloned collection.
This allows you to cd into a particular collection, add remotes,
change branches, etc.
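For instance, to try an unmerged change to the foreman collection (the repository URL and branch are placeholders):
```
cd awx/plugins/collections/ansible_collections/theforeman/foreman
git remote add myfork https://github.com/<your-user>/foreman-ansible-modules.git
git fetch myfork
git checkout myfork/my-feature-branch
```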
#### Extra Build Steps
This will not work correctly in all circumstances.
Some collections make changes at build-time.
In particular, the foreman inventory plugin needs the NAME attribute changed to
the fully-qualified collection name, and will fail if this is not done.

View File

@@ -0,0 +1,71 @@
#!/bin/bash
base_dir=awx/plugins/collections/ansible_collections
if [ ! -d "$base_dir/azure/azcollection" ]
then
mkdir -p $base_dir/azure
git clone https://github.com/ansible-collections/azure.git $base_dir/azure/azcollection
else
echo "Azure collection already exists"
fi
if [ ! -d "$base_dir/ansible/amazon" ]
then
mkdir -p $base_dir/ansible
git clone https://github.com/ansible-collections/ansible.amazon.git $base_dir/ansible/amazon
else
echo "Amazon collection already exists"
fi
if [ ! -d "$base_dir/theforeman/foreman" ]
then
mkdir -p $base_dir/theforeman
git clone https://github.com/theforeman/foreman-ansible-modules.git $base_dir/theforeman/foreman
else
echo "foreman collection already exists"
fi
if [ ! -d "$base_dir/google/cloud" ]
then
mkdir -p $base_dir/google
git clone https://github.com/ansible-collections/ansible_collections_google.git $base_dir/google/cloud
else
echo "google collection already exists"
fi
if [ ! -d "$base_dir/openstack/cloud" ]
then
mkdir -p $base_dir/openstack
git clone https://github.com/openstack/ansible-collections-openstack.git $base_dir/openstack/cloud
else
echo "openstack collection already exists"
fi
if [ ! -d "$base_dir/community/vmware" ]
then
mkdir -p $base_dir/community
git clone https://github.com/ansible-collections/vmware.git $base_dir/community/vmware
else
echo "VMWare collection already exists"
fi
if [ ! -d "$base_dir/ovirt/ovirt_collection" ]
then
mkdir -p $base_dir/ovirt
git clone $base_dir/ovirt/ovirt_collection
else
echo "Ovirt collection already exists"
fi
if [ ! -d "$base_dir/awx/awx" ]
then
mkdir -p $base_dir/awx
ln -s $(pwd)/awx_collection $base_dir/awx/awx
git clone $base_dir/awx/awx
else
echo "awx collection already exists"
fi
echo "-- confirmation of what is installed --"
ANSIBLE_COLLECTIONS_PATHS=awx/plugins/collections ansible-galaxy collection list

View File

@@ -92,9 +92,12 @@ ADD requirements/requirements.txt \
requirements/requirements_dev.txt \
requirements/requirements_ansible_uninstall.txt \
requirements/requirements_tower_uninstall.txt \
requirements/collections_requirements.yml \
/tmp/requirements/
RUN mkdir -p /venv && chmod g+w /venv
RUN cd /tmp && VENV_BASE="/venv" make requirements_dev
RUN mkdir -p /vendor/inventory_collections && chmod g+w /vendor/inventory_collections
RUN cd /tmp && COLLECTION_BASE="/vendor/inventory_collections" make requirements_collections
# Use the distro provided npm to bootstrap our required version of node
RUN npm install -g n && n 10.15.0 && dnf remove -y nodejs

Some files were not shown because too many files have changed in this diff.