the main goal of this change is to make `make docker-isolated` work out
of the box:
- specify the proper version for `awx-expect --version`
- update some deprecated playbook bits
- change the isolated container to privileged so bwrap will work
- fix awx-manage test_isolated_connection
- expedite the first isolated heartbeat so you don't have to wait 10
minutes; this is accomplished by _not_ setting
Instance.last_isolated_check to now() at insertion time, which would
otherwise delay the first check by a full interval (see the sketch
after this list)
- fix a bug that caused isolated node execution to fail when bwrap was
enabled
see: https://github.com/ansible/tower/issues/2150
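a minimal sketch of the scheduling idea behind the heartbeat fix, with
illustrative names (not AWX's actual code):

```python
from datetime import datetime, timedelta

CHECK_INTERVAL = timedelta(minutes=10)  # stands in for AWX_ISOLATED_CHECK_INTERVAL

def heartbeat_due(last_isolated_check, now=None):
    """Return True when an isolated instance is due for a health check."""
    now = now or datetime.utcnow()
    if last_isolated_check is None:
        # freshly registered instance: check it immediately instead of
        # waiting a full interval (the delay this change removes)
        return True
    return now - last_isolated_check >= CHECK_INTERVAL

assert heartbeat_due(None)                   # new instance: due now
assert not heartbeat_due(datetime.utcnow())  # just checked: not due
```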
This reverts commit 9863fe71dc.
it's not unusual for the secret data we pass into the `run_isolated.yml`
playbook to be quite long, mainly because it can contain RSA key
data; by passing this value into the ansible-playbook process using
`vars_prompt`, we're limited by pexpect's tty line limit (which looks
like it caps out around 4k). Because of this, large payloads are
being truncated, causing job run failures.
this changes the implementation to use a named pipe instead, which
doesn't have the same limitation (a sketch follows)
see: #7183
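a rough sketch of the named-pipe handoff; the `secret_path` extra var
and the surrounding plumbing are hypothetical, not the exact Tower
code:

```python
import os
import stat
import subprocess
import tempfile

def run_with_secret_fifo(secret):
    """Pass a large secret to ansible-playbook through a named pipe
    instead of answering a vars_prompt over pexpect's tty."""
    path = os.path.join(tempfile.mkdtemp(), 'secret')
    os.mkfifo(path, stat.S_IRUSR | stat.S_IWUSR)  # owner-only FIFO
    proc = subprocess.Popen([
        'ansible-playbook', 'run_isolated.yml',
        '-e', 'secret_path=%s' % path,  # the playbook reads the pipe
    ])
    # this open() blocks until the playbook opens the pipe to read;
    # no tty is involved, so there's no ~4k line-length ceiling
    with open(path, 'w') as fifo:
        fifo.write(secret)
    return proc.wait()
```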
* Gone are the days of playbook abuse.
* Our project update can now call a single Insights Ansible module to
loop over the set of maintenance plans, downloading and saving their
playbooks for use by job templates.
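the core loop of such a module might look like this; the URL layout
and names are hypothetical, and the `requests` library is assumed to
be available:

```python
import os
import requests

def save_maintenance_playbooks(base_url, plans, dest, auth):
    """Fetch and save the playbook for each maintenance plan."""
    saved = []
    for plan in plans:
        resp = requests.get(
            '%s/maintenance/%s/playbook' % (base_url, plan['id']),
            auth=auth)
        resp.raise_for_status()
        path = os.path.join(dest, 'plan-%s.yml' % plan['id'])
        with open(path, 'w') as f:
            f.write(resp.text)
        saved.append(path)
    return saved
```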
* use the tower-expect command to determine job status when running
the isolated heartbeat playbook
* grok the JSON output of the playbook run to obtain result information
(see the sketch after this list)
* run playbook against multiple isolated hosts at the same time
(addresses scalability concerns)
* set a more reasonable default `AWX_ISOLATED_CHECK_INTERVAL`
* make manual cancellation work for high values of
`AWX_ISOLATED_CHECK_INTERVAL`
* remove the `/tmp/ansible_tower/jobs/` symlink directory
see: #6616
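a simplified sketch of the JSON-grokking half, using Ansible's stock
`json` stdout callback; the playbook name and inventory handling are
illustrative:

```python
import json
import os
import subprocess

def isolated_heartbeat(hosts):
    """Run the heartbeat playbook against all isolated hosts at once
    and report per-host health from the JSON recap."""
    env = dict(os.environ, ANSIBLE_STDOUT_CALLBACK='json')
    proc = subprocess.run(
        ['ansible-playbook', 'heartbeat_isolated.yml',
         '-i', ','.join(hosts) + ','],  # ad hoc comma-separated inventory
        env=env, capture_output=True, text=True)
    # the callback's `stats` block has per-host failure counts
    stats = json.loads(proc.stdout)['stats']
    return {
        host: s['failures'] == 0 and s['unreachable'] == 0
        for host, s in stats.items()
    }
```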
instead of launching isolated tasks via `systemctl`, treat
`awx.main.isolated.run` as an executable that knows how to daemonize
itself
additionally, add a `setup.py isolated_build` command for building the
isolated Tower source distribution
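the daemonization half boils down to the classic double fork; a
minimal illustrative sketch, not the module's actual entry point:

```python
import os
import sys

def daemonize():
    """Detach from the controlling terminal (classic double fork)."""
    if os.fork() > 0:
        sys.exit(0)          # parent returns to the caller's shell
    os.setsid()              # become session leader, drop the tty
    if os.fork() > 0:
        sys.exit(0)          # first child exits; grandchild survives
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):     # point stdio at /dev/null
        os.dup2(devnull, fd)

if __name__ == '__main__':
    daemonize()
    # ... run the isolated task here ...
```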
* During Insights project updates, if the etag from Insights matches
the last recorded etag (stored as scm_revision), skip fetching the
maintenance playbooks. Otherwise, fetch them all and update
scm_revision with the new etag (see the sketch after this list).
* Add Insights fact scan module
* Update the fact scan playbook to call the new Insights fact scan
module
* A JT run with store_facts=True will save scanned facts. We "skim"
the machine_id fact from Insights fact scans and associate it with the
host that the fact scan came from.
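minimal sketches of the two ideas above; the fact layout and field
names are illustrative, not the exact Tower implementation:

```python
def needs_insights_sync(scm_revision, remote_etag):
    """True when the etag Insights reports differs from the etag we
    stored as scm_revision during the previous project update."""
    return not remote_etag or remote_etag != scm_revision

def skim_machine_id(ansible_facts):
    """Pull the Insights machine_id out of a fact scan result so it
    can be associated with the host the scan came from."""
    return ansible_facts.get('insights', {}).get('machine_id')

assert not needs_insights_sync('W/"abc"', 'W/"abc"')  # unchanged: skip
assert needs_insights_sync('W/"abc"', 'W/"def"')      # changed: re-fetch
```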
Newer Ansible versions don't allow with_items to reference undefined
variables, so we'll provide a default (e.g. via Jinja2's default
filter: `{{ some_list | default([]) }}`).
We'll also update some conditionals to deal with naming changes on the
Insights API endpoints.
This fixes an issue where a user was trying to use an exclamation mark
in their password. Historically we've used the quote filter, but more
and more we're seeing conflicts with special characters.
* I explored the |urlencode() filter instead (see the sketch after
this list for what percent-encoded userinfo looks like). Ansible and
git didn't like this.
* Quoting doesn't seem to be required. Ansible and the git module are
smart enough to do the right thing. I tested with a space in the repo
URL path and it works.
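for background, this is what the explored-and-rejected
percent-encoding of the userinfo looks like (values hypothetical):

```python
from urllib.parse import quote

username, password = 'deploy', 'p@ss!word'
# encode every reserved character in the userinfo portion of the URL
url = 'https://%s:%s@git.example.com/org/repo.git' % (
    quote(username, safe=''), quote(password, safe=''))
print(url)  # https://deploy:p%40ss%21word@git.example.com/org/repo.git
```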
* Support "name" if given, else use a more reasonable file name for
the playbook file
* Fix an issue where we wouldn't populate the playbooks correctly in
the database
* Force downloading the playbooks every time since we can't tell when
the file might change
* Extend the scm credential type to support username and password for
Red Hat portal accounts
* Update the project update playbook to work with a remote Insights
server
* Build a list of playbooks and store it in the database at sync time
(sketched below)
* Fix an issue running playbook sync for scan jobs
* Remove an unneeded TODO
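a rough sketch of building the playbook list at sync time; the
discovery heuristic here is illustrative, not the shipped one:

```python
import os

def find_playbooks(project_path):
    """Collect project-relative paths of YAML files that could be
    playbooks, skipping roles/ and .git/."""
    playbooks = []
    for root, dirs, files in os.walk(project_path):
        dirs[:] = [d for d in dirs if d not in ('roles', '.git')]
        for name in files:
            if name.endswith(('.yml', '.yaml')):
                rel = os.path.relpath(os.path.join(root, name),
                                      project_path)
                playbooks.append(rel)
    return sorted(playbooks)
```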
* Instead of using a fanout for project updates, the initial project
update will sync the latest commit hash
* Before a node runs a job, it will ensure the latest project revision
is synced
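the pre-run check amounts to comparing revisions and syncing only when
stale; a minimal sketch with illustrative names:

```python
def ensure_project_synced(local_revision, wanted_revision, sync):
    """Before running a job on this node, sync the project checkout
    if it doesn't already match the wanted commit hash."""
    if local_revision != wanted_revision:
        sync(wanted_revision)  # fetch/checkout the wanted commit
        return True            # a sync was performed
    return False               # already up to date
```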