This does a few things:
- Removes the need for the awx_sdist_builder image
- Reorders Dockerfile steps to optimize image cache reuse between prod and dev builds
- Unifies VENV_BASE and COLLECTION_BASE in prod and dev builds
* Tried to fill in application_name in awx/__init__.py, but that runs too
late (the first postgres connection may already be established by then)
* Fill in database application_name with enough information to easily
trace the connection from postgres back to the node and pid that
initiated the connection.
* Set application_name in django settings so that application_name is
set _before_ the first postgres connection is established.
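Roughly, that settings change amounts to something like the following sketch
(the naming scheme and database details here are illustrative, not the exact
AWX code):

```python
# settings fragment (sketch): build an application_name that identifies the
# node and process so a connection seen in postgres can be traced back to
# where it came from.
import os
import socket

APPLICATION_NAME = "awx-{node}-{pid}".format(
    node=socket.gethostname(),
    pid=os.getpid(),
)

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "awx",
        # Defining application_name here, in settings, means it is in place
        # before Django opens its first PostgreSQL connection.
        "OPTIONS": {"application_name": APPLICATION_NAME},
    }
}
```

On the postgres side the value shows up in pg_stat_activity.application_name,
which is what makes tracing a connection back to its node and pid easy.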
kubectl and synchronize are now part of the community.kubernetes
and ansible.posix collections, respectively. This change installs
these collections to a local directory to be used in the inventory and
isolated management playbooks (see the sketch below).
awx issue #6930
* Refresh inventory collection requirements
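As a rough sketch of what that looks like (the requirements contents, paths,
and target directory below are illustrative; the real requirements file may
pin versions):

```python
import subprocess

# Requirements covering the moved plugins/modules:
#   kubectl     -> community.kubernetes
#   synchronize -> ansible.posix
requirements = """\
collections:
  - community.kubernetes
  - ansible.posix
"""

with open("requirements.yml", "w") as f:
    f.write(requirements)

# Install into a local directory so the inventory and isolated management
# playbooks can use the collections without a system-wide install.
subprocess.run(
    [
        "ansible-galaxy", "collection", "install",
        "-r", "requirements.yml",
        "-p", "./collections",  # local install path (illustrative)
    ],
    check=True,
)
```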
Fix a bug specific to Docker development where the correct install
folder was not specified in the setting
Add initial rhv/ovirt version for consistency
* Update unit test for the ovirt name change
For some reason (unsure why), django-extensions has begun noticing that
IPython is importable and treating "shell_plus" as "start an IPython
notebook" by default.
It could be that this is a bug in django-extensions that will be fixed
soon, but for now, this fixes the issue.
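One way to pin the behavior (a sketch, not necessarily the exact change made
here) is django-extensions' SHELL_PLUS setting:

```python
# settings fragment: force shell_plus to use a regular IPython shell rather
# than whatever frontend django-extensions auto-detects.
SHELL_PLUS = "ipython"
```

Passing --plain (or --ipython) to shell_plus accomplishes the same thing per
invocation.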
There was quite a bit of drift between the main Dockerfile and this one since we last touched it. It’s heavier than it needs to be, but by using the main development image as the base here, we have less duplicated and outdated code overall.
This commit implements the bulk of `awx-manage run_dispatcher`, a new
command that binds to RabbitMQ via kombu and balances messages across
a pool of workers similar in spirit to celeryd workers.
Specifically, this includes:
- a new decorator, `awx.main.dispatch.task`, which can be used to
decorate functions or classes so that they can be designated as
"Tasks" (see the sketch after this list)
- support for fanout/broadcast tasks (at this point in time, only
`conf.Setting` memcached flushes use this functionality)
- support for job reaping
- support for success/failure hooks for job runs (i.e.,
`handle_work_success` and `handle_work_error`)
- support for an auto-scaling worker pool that scales processes up and
down on demand
- minimal support for RPC, such as status checks and pool recycle/reload
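As a minimal sketch of how the decorator might be used (the queue name and
publish method below are illustrative, not a confirmed API):

```python
from awx.main.dispatch import task  # the decorator described above

@task()
def add(a, b):
    # executed in a dispatcher pool worker when a matching message arrives
    return a + b

@task(queue='broadcast_all')  # hypothetical fanout queue name
def flush_setting_cache(key):
    # fanout/broadcast tasks are delivered to every node; per this commit,
    # only conf.Setting memcached flushes use that path for now
    ...

# Publish a message so a pool worker runs the task; the method name here is
# illustrative.
add.apply_async(args=[1, 2])
```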