Adds ability to refresh project revision on sync'd rows in the project list
SUMMARY
This PR also adds the revision to the project details view and handles updating the revision there when the project finishes syncing.
Since the project revision is not included in the websocket payload, when a project is done syncing the displayed revision may be out of date. As such, we wanted to expose that information to the user and give them the ability to "refresh" and fetch the new revision.
Here's what that flow looks like: when a particular row finishes syncing, the user sees a refresh button in place of the revision. Clicking on that refresh button goes out and fetches the updated project (and with it the potentially updated revision).
We don't do this automatically on the projects list (and instead force the user to click the refresh button) due to issues we've had in the past with the UI triggering API calls based on websocket events.
The flow when a user is on the project details view is a little different because I wasn't as worried about spamming the API with requests.
When a project finishes syncing and the user is viewing the details, I do go ahead and automatically refresh the project data (and with it, the revision).
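Under the hood the refresh is just a project detail fetch; a minimal sketch of that request outside the UI, assuming token auth and the scm_revision field on the project serializer:

```python
import requests

def fetch_project_revision(base_url, project_id, token):
    """GET the project detail to pick up the post-sync revision."""
    resp = requests.get(
        f"{base_url}/api/v2/projects/{project_id}/",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()["scm_revision"]
```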
A few other notes:
@tiagodread @akus062381 @one-t I'm almost certain this is going to break some tests because I removed the ClipboardCopyButton in favor of PF's ClipboardCopy component, which looks and behaves slightly differently from our homegrown solution. Additionally, the PF ClipboardCopy component does not expose the ouiaId prop, so I had to add a data-cy on that component. You'll likely have to use that identifier to grab the button inside in order to test the clipboard copy functionality.
Source Control Revision is a net-new detail in the project details. I think this was simply missed on our initial build of this page.
Identifiers are set on the various bits; note that the identifiers on the project rows have the id of the project appended to them for uniqueness.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
UI
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Michael Abashian <None>
Reviewed-by: Sarah Akus <sarah.akus@gmail.com>
Avoid reload on manual logout
SUMMARY
#10383
Instead of reloading, set the authRedirectTo context variable to "/logout" and handle it as a special case when routing
Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
Remove uses of ansible_virtualenv_path
Run a job with recent AWX, then inspect:
In [2]: Job.objects.order_by('-created').first().custom_virtualenv
Out[2]: '/var/lib/awx/venv/ansible'
We shouldn't do this. The job was run after the custom_virtualenv field had become entirely informative / non-functional. It is very arguable whether jobs should keep this field. For pre-migration jobs, it is an accurate record of fact, so I'm not going out of my way to argue for removal.
With this change, new jobs will record a blank value for custom_virtualenv, which is more correct, because nothing functional is derived from it anymore.
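To confirm the new behavior, the same inspection should come back blank (a sketch; whether blank surfaces as None or '' depends on the field definition):

```python
from awx.main.models import Job

# A job created after this change should no longer record the venv path.
job = Job.objects.order_by('-created').first()
assert job.custom_virtualenv in (None, '')
```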
I have carefully looked over @rebeccahhh's #10090, and I don't see any way that ansible_virtualenv_path is used. Model use is reported for templates, so that doesn't block removing the field on jobs either.
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Rebeccah Hunter <rhunter@redhat.com>
Reviewed-by: Bill Nottingham <None>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Disable cancel button on http 405
SUMMARY
for #9905
When a 405 is received, it means the job is already cancelled. Treat the
request as a success and don't show an error modal.
We disable the button to handle a rare scenario where we receive the 405 long before
the job status is properly updated to "cancelled" over websockets. We want to prevent
more cancel requests when this happens. We're disabling instead of hiding the button
because, assuming the status hasn't changed over sockets, the buttons that usually
replace the cancel button on the toolbar won't be shown yet and we don't want to
needlessly flicker and shift button positions around by rapidly loading and unloading
a different number of buttons onto the bar.
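Outside the UI, the same semantics can be sketched with requests against the cancel endpoint (token auth assumed):

```python
import requests

def request_cancel(base_url, job_id, token):
    """POST a cancel request; per the behavior above, HTTP 405 means the
    job is already cancelled, so it is treated as success."""
    resp = requests.post(
        f"{base_url}/api/v2/jobs/{job_id}/cancel/",
        headers={"Authorization": f"Bearer {token}"},
    )
    if resp.status_code == 405:
        # Already cancelled/finished; the UI disables the cancel button
        # here instead of showing an error modal.
        return True
    resp.raise_for_status()
    return True
```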
Reviewed-by: Keith Grant <keithjgrant@gmail.com>
Reviewed-by: Sarah Akus <sarah.akus@gmail.com>
Update message for missing EE for Job pages
See: #10355
Reviewed-by: Keith Grant <keithjgrant@gmail.com>
Reviewed-by: Kersom <None>
Reviewed-by: Michael Abashian <None>
Adds more filters to dashboard chart
SUMMARY
Resolves #8401 and #10356
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
UI
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Alex Corey <Alex.swansboro@gmail.com>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
add a new awx-manage command `custom_venvs`
add an awx-manage command that gets pip freeze data from custom venvs and outputs it to stdout
SUMMARY
part of #7062 - this command is a glorified pip freeze plus some extra context. People could navigate to each of their custom virtual environments and run a pip freeze themselves, but this command saves them the trouble, and everyone likes their life to be easier. The extra context lets users see the connections that their existing virtual envs have in AWX to things like organizations, jobs, inventory updates, and projects.
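The core of the command is just invoking each venv's own pip; a minimal sketch of that piece (the venv path below is hypothetical, and the association reporting is omitted):

```python
import os
import subprocess

def freeze_venv(venv_path):
    """Return `pip freeze` output for one custom venv, using the venv's own pip."""
    pip = os.path.join(venv_path, 'bin', 'pip')
    result = subprocess.run([pip, 'freeze'], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == '__main__':
    # Hypothetical path; the real command discovers venv paths itself.
    print(freeze_venv('/var/lib/awx/venv/my_custom_venv'))
```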
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
AWX VERSION
awx: 19.1.0
ADDITIONAL INFORMATION
This is built off of existing code; there is a line that gets custom venv paths from the settings module that does not seem to be working. I have written around it but want to make a note of it.
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Rebeccah Hunter <rhunter@redhat.com>
Reviewed-by: Jeff Bradberry <None>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Reviewed-by: Elijah DeLee <kdelee@redhat.com>
add docs on debugging job event perf
related to ansible/tower#4985
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Db partition analytics cmeyers2
Keep old primary key based analytics gathering for unpartitioned
tables.
Use created time on new partitioned tables.
Benchmarks with 80 million partitioned + 1.5 million unpartitioned events:

| Query | awx-manage gather_analytics --dry-run time | Micro benchmark query time* | Query-only time** |
| --- | --- | --- | --- |
| sequential index scan, multiple ::json casts, 100,000 event batches | 102m7.836s | 6s | 80 minutes |
| sequential index scan, optimized json cast, 100,000 event batches | 48m9.276s | 2.2s | 30.4 minutes |
| sequential index scan, optimized json cast, 1,000,000 event batches | 39m35.094s | 10s | 13.3 minutes |
| sequential table scan, optimized json cast, per-partition batch of 600,000*** | 36m42.081s | 11.5s | 25.5 minutes |

*Micro benchmarking consists of simply copying a query, running it manually, and observing the runtime.
**Micro benchmark time x (80 million / batch size).
***Note that this testing does NOT include the extra modified-range query that is needed for correctness. We expect this to be quite fast; it is only needed to catch edge-case events.
Reviewed-by: Ladislav Smola <lsmola@redhat.com>
Reviewed-by: Elijah DeLee <kdelee@redhat.com>
* job_created is a fake field as far as Django is concerned. Under the
hood, in postgres, this is the partition key so it is real. sqlite
doesn't support partitioning so we need to fake some things.
Specifically, we need to remove job_created from being auto-added to
get_event_queryset()
* Add pagination tests for <unified_job_name>/<id>/<job_events>?limit=x
endpoint to make sure the paginator is wired up.
* Use an initial request for max event `counter` to get the total row count,
otherwise rely on websocket message counters to update remote row count
* For running jobs, request event ranges with counters to handle events getting
saved to db out of display order
* For jobs that are no longer running, continue to use page/pageSize scheme for
paging through the job events
* trigger via jobs/<id>/job_events/?limit=10
* Can and should be used in conjunction with an indexed set of fields to
generate efficient pagination queries, e.g.
jobs/<id>/job_events?limit=10&start_line__gte=10 (see the client sketch after this list)
* If limit is not specified in the query params then the default
pagination will be used.
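A sketch of a client walking events with the new limit parameter plus an indexed field, assuming the limit-paginated response keeps the usual `results` list:

```python
import requests

def iter_job_events(base_url, job_id, token, page_size=10):
    """Walk a job's events in counter order using limit plus an indexed-field filter."""
    headers = {"Authorization": f"Bearer {token}"}
    counter = 0
    while True:
        resp = requests.get(
            f"{base_url}/api/v2/jobs/{job_id}/job_events/",
            params={"limit": page_size, "counter__gte": counter, "order_by": "counter"},
            headers=headers,
        )
        resp.raise_for_status()
        results = resp.json().get("results", [])
        if not results:
            return
        yield from results
        # Resume just past the highest counter seen so far.
        counter = results[-1]["counter"] + 1
```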
* Before, we would get the min and max pk of the set we are to gather.
This changeset removes that.
* Before, we would, basically, know the size of the set we are to gather
and would query 100,000 of those job event records at a time. That logic
is now gone.
* Now, for unpartitioned job events we gather 4 hours at a time by
created time.
* Now, for partitioned job events we gather 4 hours at a time by
modified time (see the window sketch after this list).
* The order by results in an in-memory sort that COULD blow out the
worker mem buffer and result in sorting having to take place on disk.
* This WILL happen with a default postgres 4MB mem buffer. We saw as
much as 20MB used. Note that AWX defaults postgres mem worker buffer to
3% of the DB memory on external installs and 1% on same-node installs.
So for a 16GB remote DB this would not be a problem.
* We are going to avoid this problem altogether by NOT doing a sort
when gathering. Instead, we will sort remotely, in analytics.
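A minimal sketch of the windowing; per the note above, no ORDER BY is applied while gathering:

```python
from datetime import timedelta

def four_hour_windows(start, end):
    """Yield [lower, upper) 4-hour slices covering start..end; the gatherer
    collects one window at a time instead of batching by primary key."""
    lower = start
    while lower < end:
        upper = min(lower + timedelta(hours=4), end)
        yield lower, upper
        lower = upper
```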
* Old, _unpartitioned_main_jobevent table does not have the job_created
column
* New, main_jobevent does.
* Always include the job_created column: NULL if old, job_created if new
* Bump events_table schema version from 1.2 to 1.3 because of the
job_created field
* We found that each additional `::json` cast in a query slows it down
further, by roughly 33% or more.
* This change coerces postgres into only casting once. Micro
benchmarking shows roughly a 2-3x performance boost.
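A sketch of the kind of rewrite, shown as SQL strings; main_jobevent.event_data is the real column, the extracted keys are stand-ins, and the exact query in the changeset may differ:

```python
# Before: each ->> extraction re-parses event_data from text to json.
SLOW = """
SELECT event_data::json->>'task', event_data::json->>'host'
FROM main_jobevent;
"""

# After: one way to coerce a single parse is to cast once in a
# materialized CTE (postgres 12+) and reuse the parsed value.
FAST = """
WITH parsed AS MATERIALIZED (
    SELECT event_data::json AS d FROM main_jobevent
)
SELECT d->>'task', d->>'host' FROM parsed;
"""
```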
* pre-migration jobevents live in unpartitioned table
where only created field has index
* post-migration jobevents live in partitions
where modified field has index
(and should be used to ensure no events are missing)
* when collecting job events by creation time
it is possible to miss events that were created
at one point, but actually committed to the db
much later.
* since events' modified time is set when they are
committed to the db, we shouldn't miss any job events
* selecting job events by modified time wasn't possible
beforehand because we didn't have an index for
jobevent's modified field
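In ORM terms, the per-window selection looks roughly like this (a sketch, assuming the stock JobEvent model with its modified timestamp):

```python
from awx.main.models import JobEvent

def events_modified_in_window(lower, upper):
    """Select post-migration events by the indexed `modified` column.
    Because `modified` is stamped when the row is committed, events that
    were created early but committed late still land in a later window
    instead of being skipped."""
    return JobEvent.objects.filter(modified__gte=lower, modified__lt=upper)
```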
Changes in old unpartitioned cleanup logic:
* Manually cascade-delete events related to job(s)

New partition cleanup logic, for each event type (see the sketch after this list):
* Get the event partitions that are within the cleanup date range
* Get a list of jobs to delete that are in the cutoff range
* Jobs that are running, pending, or waiting in the job list are special
* Use the special list to further filter the partition drop list
* Drop partitions
* Delete jobs
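A sketch of that ordering, with hypothetical helpers standing in for the real ORM queries and DDL:

```python
# All helpers below are hypothetical stand-ins for the real ORM/DDL calls;
# only the ordering of steps mirrors the cleanup flow described above.
def partitions_in_range(cutoff): return []   # partitions inside the cleanup date range
def jobs_in_range(cutoff): return []         # jobs older than the cutoff
def partitions_for_job(job): return []       # partitions still holding the job's events
def drop_partition(name): pass               # DROP TABLE on one partition
def delete_jobs(jobs): pass                  # bulk-delete the job rows

def cleanup_job_events(cutoff):
    partitions = partitions_in_range(cutoff)
    jobs = jobs_in_range(cutoff)
    # Running/pending/waiting jobs are special: any partition still holding
    # their events is filtered out of the drop list.
    special = [j for j in jobs if j['status'] in ('running', 'pending', 'waiting')]
    keep = {p for j in special for p in partitions_for_job(j)}
    for partition in partitions:
        if partition not in keep:
            drop_partition(partition)
    delete_jobs([j for j in jobs if j not in special])
```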