This Dockerfile has drifted quite a bit from the main one since we last touched it. The result is heavier than it needs to be, but using the main development image as the base here leaves less duplicated and outdated code overall.
Currently, users can define virtual environments in
`settings.BASE_VENV_PATH` only, because that's the only place
Tower looks for virtual environments. This feature allows users
to define custom directory paths, via the API or UI, in which to
look for virtual environments. Tower aggregates virtual
environments from all of these paths, except for environments
with the special name `awx`.
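In rough terms, the aggregation behaves like the sketch below. This is a minimal illustration only: the `CUSTOM_VENV_PATHS` setting name, the `get_custom_venv_choices` helper, and the bin/activate check are assumptions made for the sketch, not the actual implementation.

    import os

    from django.conf import settings

    def get_custom_venv_choices():
        """Collect venvs from BASE_VENV_PATH plus any user-configured
        directories, skipping the reserved 'awx' environment."""
        # CUSTOM_VENV_PATHS is an assumed name for the user-defined paths
        paths = [settings.BASE_VENV_PATH] + list(getattr(settings, 'CUSTOM_VENV_PATHS', []))
        venvs = []
        for path in paths:
            if not os.path.isdir(path):
                continue
            for name in os.listdir(path):
                if name == 'awx':  # reserved for Tower's own environment
                    continue
                candidate = os.path.join(path, name)
                # treat a directory as a venv if it has bin/activate
                if os.path.exists(os.path.join(candidate, 'bin', 'activate')):
                    venvs.append(candidate)
        return venvs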
Signed-off-by: Vismay Golwala <vgolwala@redhat.com>
Run this command on _any_ node in an AWX cluster:
$ awx-manage profile_sql --threshold=2.0 --minutes=1
...and for 1 minute, the timing for _every_ SQL query in _every_ AWX
Python process that uses the Django ORM will be measured.
Queries that run longer than (in this example) 2 seconds will be
written to a per-process sqlite database in /var/lib/awx/profile, and
each file will contain an EXPLAIN VERBOSE for the query and the full
Python stack that led to that SQL query's execution (this includes not
just WSGI requests, but background processes like the runworker and
the dispatcher).
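The underlying idea can be sketched with Django's execute_wrapper hook. This is a minimal, hypothetical sketch: the sqlite schema and file naming are invented for illustration, and the real command additionally records the EXPLAIN VERBOSE output.

    import os
    import sqlite3
    import time
    import traceback

    from django.db import connection

    THRESHOLD = 2.0  # seconds; analogous to --threshold
    PROFILE_DIR = '/var/lib/awx/profile'
    DB_PATH = os.path.join(PROFILE_DIR, '{}.sqlite'.format(os.getpid()))  # per-process file

    def record_slow_queries(execute, sql, params, many, context):
        start = time.time()
        try:
            return execute(sql, params, many, context)
        finally:
            elapsed = time.time() - start
            if elapsed >= THRESHOLD:
                os.makedirs(PROFILE_DIR, exist_ok=True)
                db = sqlite3.connect(DB_PATH)
                # invented schema, for illustration only
                db.execute('CREATE TABLE IF NOT EXISTS queries (elapsed REAL, sql TEXT, stack TEXT)')
                db.execute('INSERT INTO queries VALUES (?, ?, ?)',
                           (elapsed, sql, ''.join(traceback.format_stack())))
                db.commit()
                db.close()

    # every ORM query executed inside this block is measured:
    # with connection.execute_wrapper(record_slow_queries):
    #     ...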
$ awx-manage profile_sql --threshold=0
...can be used to disable profiling again (if you don't want to wait for
the minute to expire)
* Removed the emailing of admins on request errors. When turned on, the
handler would include all Django settings in the email, which is not
desirable from a security standpoint.
This commit implements the bulk of `awx-manage run_dispatcher`, a new
command that binds to RabbitMQ via kombu and balances messages across
a pool of workers, similar in spirit to celeryd workers.
Specifically, this includes:
- a new decorator, `awx.main.dispatch.task`, which can be used to
decorate functions or classes so that they can be designated as
"Tasks" (see the sketch after this list)
- support for fanout/broadcast tasks (at this point in time, only
`conf.Setting` memcached flushes use this functionality)
- support for job reaping
- support for success/failure hooks for job runs (i.e.,
`handle_work_success` and `handle_work_error`)
- support for an auto-scaling worker pool that scales processes up and
down on demand
- minimal support for RPC, such as status checks and pool recycle/reload
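Below is a hedged sketch of designating work as a "Task" with the new decorator. The publish-side call shown in the comments (`.delay`) and the `run()` convention for decorated classes are assumptions made for illustration, not verified signatures.

    from awx.main.dispatch import task

    @task()
    def add(a, b):
        # executes inside one of the dispatcher's pool workers
        return a + b

    @task()
    class Cleanup:
        # assuming (illustratively) that a run() method serves as the
        # task body for decorated classes
        def run(self):
            pass

    # publishing a message for the worker pool to consume might look like:
    #   add.delay(1, 2)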