Populate the cache the first time a job runs for a revision that
needs it, and for future runs of that revision just copy it into
the private directory.
Delete the cache on project deletion.
Invalidate the cache on a new project revision
Also download roles/collections during the sync job
Since we're writing into a per-revision cache, we can do this easily now.
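A minimal sketch of that flow, with hypothetical paths and a hypothetical helper name (not the actual AWX implementation); only the roles half is shown, and collections would follow the same pattern:

```python
import os
import shutil
import subprocess

def sync_content(cache_root, scm_revision, requirements_path, job_private_dir):
    """Install roles into a per-revision cache, then copy the cache into
    the job's private directory (illustrative helper, not AWX code)."""
    cache_dir = os.path.join(cache_root, scm_revision)

    if not os.path.isdir(cache_dir):
        # First run for this revision: populate the cache.
        os.makedirs(cache_dir)
        subprocess.run(
            ["ansible-galaxy", "role", "install",
             "-r", requirements_path, "-p", cache_dir],
            check=True,
        )

    # Later runs for the same revision: just copy the cached content
    # into the job's private directory.
    shutil.copytree(cache_dir, os.path.join(job_private_dir, "requirements_roles"))
```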
Don't try to install content if there aren't any requirements expecting it
Adjust paths to point to the proper location.
Force install if doing a manual sync.
Requirements may be unversioned.
Remove the cache when delete-on-update is set
Integrate content caching with existing task logic
Revert the --force flags
Use the update ID as the cache key for role caching
Move the copy of the cache into the job folder from the rsync task into Python
Only install roles and collections if needed
Handle roles and collections for jobs without a sync
Skip the local copy if roles or collections are turned off
Update docs for content caching
Design pivot: use an empty cache dir to indicate lack of content
Do not cache content if we did not install content
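A short sketch of that convention (the helper name is hypothetical): a missing cache dir means the revision has not been synced yet, an empty one means it was synced and installed nothing, and a populated one can be copied straight into the job.

```python
import os

def cache_state(cache_dir):
    if not os.path.isdir(cache_dir):
        return "needs_install"   # nothing cached for this revision yet
    if not os.listdir(cache_dir):
        return "no_content"      # synced, but no roles/collections were required
    return "cached"              # content is available to copy into the job dir
```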
Test changes to allay concerns about reliability of local_path
Do not blow away cache for SCM inventory updates
Remove project update vars no longer used
Remove job pre-creation of content folders
Code style edit: always use cache_id as a property in tasks
Fix log message
Situations have come up where the 5+ minute kill signal for
run_task_manager is emitted to the worker process running it, but
because the worker improperly inherited the AWXConsumerBase().stop()
handler, a deadlock was ultimately triggered on the database
connection.
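A hedged sketch of the general fix pattern, not AWX's actual dispatcher code: the forked worker resets the inherited SIGTERM handler so a kill signal aimed at it cannot run the parent's stop() logic.

```python
import signal

def run_worker(task):
    # Restore the default handler so a SIGTERM delivered to the worker
    # terminates it cleanly instead of invoking the consumer's stop()
    # handler inherited across fork(), which can deadlock on the shared
    # database connection.
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    task()
```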
This resolves an issue that causes an endless hang with CyberArk AIM
lookups when a certificate *and* key are specified.
The underlying issue is that we can't rely on the underlying Python
ssl implementation to read from the fifo that stores the pem data
*only once*; in reality, we need to just use *actual* tempfiles for
stability purposes.
see: https://github.com/ansible/awx/issues/6986
see: https://github.com/urllib3/urllib3/issues/1880
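A minimal sketch of the tempfile approach (hypothetical helper, not the plugin's actual code): the PEM data lands in a regular file that can be re-opened and re-read as many times as the ssl layer needs.

```python
import os
import tempfile

def write_pem_tempfile(pem_data, private_dir):
    # A regular file, unlike a FIFO, survives repeated reads by the
    # ssl/urllib3 machinery.
    fd, path = tempfile.mkstemp(dir=private_dir, suffix=".pem")
    with os.fdopen(fd, "w") as f:
        f.write(pem_data)
    os.chmod(path, 0o600)
    return path  # pass this path wherever the cert/key path is expected
```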
This change fixes a bug introduced in the optimization at https://github.com/ansible/awx/pull/7352
1. Create an inventory with multiple hosts
2. Run a playbook with a limit to match only one host
3. Run the job and verify that it only acts on the one host
4. Go to the inventory host list and see that all the hosts have last_job updated to point to the job that only acted on one host.
There is some history here.
https://github.com/ansible/awx/pull/7190 <- This PR was an attempt at fixing a
bug notting ran into where some jobs on k8s installs would get stuck in Waiting
forever.
The PR mentioned above introduced a bug where there are no instance groups on a
fresh k8s-based install. This is because queue / instance group registration
currently happens in the launch scripts, before the database is up.
With this patch, queue / instance group registration happens in the heartbeat,
right after auto-registering the instance.