When the API request is for /inventories/id, use that path as the URL in the
API response. When the request is for /constructed_inventories/id, use that
path instead.
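A minimal sketch of the idea, assuming a plain helper and illustrative URL patterns (not the actual AWX serializer code):

```python
# Illustrative only: choose the detail URL based on which endpoint served the
# request, so constructed inventories echo back their own path.
def get_inventory_url(request, inventory_id: int) -> str:
    if request.path.startswith("/api/v2/constructed_inventories/"):
        return f"/api/v2/constructed_inventories/{inventory_id}/"
    return f"/api/v2/inventories/{inventory_id}/"
```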
Signed-off-by: Rick Elrod <rick@elrod.me>
These make targets are for starting the different daemons within the kube/docker development environment; update their names to better reflect their intention.
Also add comments above each make target to describe what it does.
Note: these comments show up when running `make help`.
Previously this was used so that a task running in the task container could reach into the web container to restart rsyslog.
Now that the web container and task container are split, there's no longer a way to do that, so rename this env var to reflect what it now does:
point to the supervisor conf file of the current running container.
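As a rough sketch of the kind of call this env var supports (the variable name, default path, and service name below are assumptions for illustration, not the real ones):

```python
import os
import subprocess

# Restart a service managed by the supervisor of the *current* container,
# using the conf file the (hypothetical) env var points at.
conf_path = os.environ.get("SUPERVISOR_CONFIG_PATH", "/etc/supervisord.conf")
subprocess.run(["supervisorctl", "-c", conf_path, "restart", "awx-rsyslogd"], check=True)
```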
launch_awx.sh, which this PR renames, is also now only used for launching the awx web container; rename it to reflect its purpose.
Also remove the no-longer-needed creation of the rsyslog conf, as rsyslog is no longer in the web container.
Update Dockerfile.j2
supervisor.conf.j2 is the template for the web container's supervisor.conf; rename it to supervisor_web.conf to make it clearer that it is used for the web container.
get_local_queuename will return the pod name of the instance.
Now that web and task are in different pods, when the web container queues a task it will be put into a queue without a task worker to execute it.
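For illustration, a per-pod queue name could be derived like this (a sketch under the assumption that the pod name is exposed via HOSTNAME, not the actual implementation):

```python
import os
import socket

# In Kubernetes the pod name is commonly exposed as HOSTNAME, so this queue
# name is per-pod: a task queued from the web pod lands on a queue that no
# task worker is listening to once web and task run in separate pods.
def get_local_queuename() -> str:
    return os.environ.get("HOSTNAME", socket.gethostname())
```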
Works by adding a dedicated producer in wsrelay that looks for
local django channels messages with group "metrics". The producer
sends these to the consumer running in the web container.
That consumer handles each message by pushing it into the local
redis instance.
The django view that handles a request at the /api/v2/metrics
endpoint will load this data from redis, format it, and return the
response.
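A minimal sketch of the web-side handling, assuming the `redis` Python client and hypothetical key/field names (not the actual AWX code):

```python
import json

import redis

# Hypothetical handler invoked when the web-side consumer receives a relayed
# "metrics" group message: cache the payload in the local redis instance.
def handle_metrics_message(message: dict) -> None:
    conn = redis.Redis()  # local redis in the web container
    conn.set(f"awx_metrics_{message['instance']}", json.dumps(message["payload"]))

# The /api/v2/metrics view would then read the cached entries back:
def load_metrics_from_redis() -> dict:
    conn = redis.Redis()
    return {key.decode(): json.loads(conn.get(key)) for key in conn.scan_iter("awx_metrics_*")}
```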
We internally manipulate the message payload a bit (to record whether it
originated on the task side or the web side). But when we get the
message, we actually get a reference to the dict containing the payload.
Other producers in wsrelay might still be acting on the message and
deciding whether or not to relay it. So we need to manipulate and send a
*copy* of the message, and leave the original alone.
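A minimal sketch of the fix (the field name and send callable are hypothetical):

```python
import copy
from typing import Callable

def relay_metrics(message: dict, send: Callable[[dict], None]) -> None:
    # Other producers may still be inspecting `message`, so manipulate and
    # send a deep copy, leaving the shared original untouched.
    outgoing = copy.deepcopy(message)
    outgoing["origin"] = "task"  # hypothetical marker for the originating side
    send(outgoing)
```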
Signed-off-by: Rick Elrod <rick@elrod.me>
We no longer need to do this from wsrelay, as it will automatically try
to reconnect when it hears the next beacon from heartbeet.
This also cleans up the logic for what we do when we want to delete a
node we previously knew about.
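Roughly, the intent (with hypothetical names, not the actual wsrelay code) is:

```python
# Reconnect happens lazily: an unknown hostname in a heartbeet beacon triggers
# a fresh connection, and forgetting a node is handled in one place.
known_hosts: dict = {}

def on_beacon(hostname: str, connect) -> None:
    if hostname not in known_hosts:
        known_hosts[hostname] = connect(hostname)

def forget_host(hostname: str) -> None:
    conn = known_hosts.pop(hostname, None)
    if conn is not None:
        conn.close()
```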
Signed-off-by: Rick Elrod <rick@elrod.me>