* adding roles to instance groups
added ResourceMixin to InstanceGroup and changed the filtered_queryset (see the sketch after this list)
* added necessary changes to rebuild relationship between IG and roles
* added description to InstanceGroupAccess
* preliminary ui plug for demo purposes
added inventory special logic for use_role to allow attaching instance groups
added more tests to handle those cases
* Add access_list to InstanceGroup
* scratch branch to test migration work
* refactored to shorten logic
* Added migration and removed logic that enabled Org admin permissions
* Add Obj admin role to JT, Inv, Org
* Changed tests to reflect new permissions
* refactored some of the tests
* cleaned up more tests and reworded help on InstanceGroupAccess
* Removed unnecessary delete of Route for instance group perms change
* Fix UI tests and migration
* fixed permissions on prompt for InstanceGroups
* added related object roles endpoint
* added ui/api function for options instance_groups
* separate the migrations in order to avoid issues with migrations not being finished
* changed migrations parent class to disable the activity stream error in migrations
* Added logging to migration as activitystream is disabled
* added clarifying comment to JobTemplateAccess and linted UI addition
* renamed migrations to avoid collisions
* Rename migrations to avoid collisions
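As a rough illustration of the ResourceMixin change mentioned above, a role-based filtered_queryset might look like the sketch below (the class layout and the 'read_role' name are assumptions, not the exact AWX code):

```python
# Minimal sketch, assuming InstanceGroup now mixes in ResourceMixin and
# therefore exposes the accessible_objects() RBAC helper.
from awx.main.access import BaseAccess
from awx.main.models import InstanceGroup


class InstanceGroupAccess(BaseAccess):
    """Users can see the instance groups their roles grant them access to."""

    model = InstanceGroup

    def filtered_queryset(self):
        # visibility now comes from role ancestry rather than a blanket
        # org-admin check; 'read_role' is an assumed role name
        return self.model.accessible_objects(self.user, 'read_role')
```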
* fix access problems and add bulk job max settings to the API
filter workflow job nodes better
This both improves performance by limiting the queryset for the node
sublists and fixes our access problem.
override can_read instead of modify queryset in access.py
We do this because we are not going to expose bulk jobs in the list
views, which is complicated and has poor performance implications.
Instead, we just care that the individual Workflows clients get linked
to are not broken.
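A rough sketch of the can_read approach (not the exact access.py change; check_user_access is the existing AWX helper, the rest is illustrative):

```python
# Minimal sketch: answer per-object reads through the parent workflow job
# instead of widening the list queryset for bulk-launched workflows.
from awx.main.access import BaseAccess, check_user_access
from awx.main.models import WorkflowJob, WorkflowJobNode


class WorkflowJobNodeAccess(BaseAccess):

    model = WorkflowJobNode

    def can_read(self, obj):
        # a node is readable when its parent workflow job is readable
        return check_user_access(self.user, WorkflowJob, 'read', obj.workflow_job)
```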
fix comment
remove the get functions from the conf.py for bulk api max value
comment the api expose of the bulk job variables
reformat conf.py with make black
trailing space
add more assertions to the bulk host create test
Various fixes
- Don't skip checking resource RBAC permissions for admins
Necessary to handle bad input, e.g. providing a
unified_job_template id that doesn't exist
- In awxkit, only "walk" if we get 'url' in the result (sketched below)
- Bulk host create should return url pointing to inventory,
not inventory/hosts
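A minimal sketch of that awxkit guard (maybe_walk and the page.walk call are illustrative names, not the actual awxkit API):

```python
# Only follow a returned resource when it actually carries a 'url' key;
# bulk endpoints can return summaries that have nothing to walk to.
def maybe_walk(page, result):
    if isinstance(result, dict) and 'url' in result:
        return page.walk(result['url'])  # assumed awxkit-style navigation helper
    return result
```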
don't do org check for superuser
we needed to inherit from GenericAPIView to get the options to render
correctly
add execution env support
add organization validation to the workflowjob
Update awx/api/serializers.py
Co-authored-by: Elijah DeLee <kdelee@redhat.com>
Update awx/api/serializers.py
Co-authored-by: Elijah DeLee <kdelee@redhat.com>
Provide a view that allows users to launch many jobs with one POST
request. Under the hood, this creates a workflow with a number of jobs
all in a "flat" structure -- much like a sliced job, but with arbitrary
"joblets".
For ~100 nodes we are looking at ~200 queries, which is more than the
proof of concept, but still an order of magnitude better than individual
job launches.
Still more work to do to implement other m2m objects, and we need to
address which Organization should be assigned to a WorkflowJob launched
by a BulkJob. Users need that WorkflowJob so they can step into the
workflow_job_nodes and get the status of all the contained jobs.
Also want to test when there are MANY job templates etc. in the system,
because queries like
UnifiedJobTemplate.accessible_pk_qs(request.user, 'execute_role').all()
scare me; it seems like they could get expensive.
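For illustration, a request to the bulk job launch view might look roughly like this (the endpoint path and field names are assumptions at this stage of the work):

```python
# Illustrative payload: one POST that fans out into a flat WorkflowJob whose
# nodes ("joblets") wrap each requested unified job template.
payload = {
    "name": "bulk launch example",
    "jobs": [
        {"unified_job_template": 7},
        {"unified_job_template": 8, "limit": "host1,host2"},
        {"unified_job_template": 9, "credentials": [3], "labels": [1]},
    ],
}
# e.g. POST payload to /api/v2/bulk/job_launch/ (endpoint name assumed here)
```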
use "many=True" instead of ListField
Still seeing an identical number of queries when creating 100 jobs, going to
investigate more
only validate type in nested serializer
then, we actually get the database object after we do the RBAC checks
This drops us down from hundreds of queries to launch 100 jobs,
to less than 100 queries to launch 100 jobs (I got around 24 queries to
launch 100 jobs with credentials)
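A rough sketch of that query-saving pattern (the helper name and error handling are illustrative; accessible_pk_qs is the real RBAC helper quoted above):

```python
# Minimal sketch: one RBAC-filtered query to find the allowed templates, one
# query to materialize them, instead of per-node lookups.
from awx.main.models import UnifiedJobTemplate


def fetch_allowed_templates(user, requested_ids):
    allowed = {
        ujt.pk: ujt
        for ujt in UnifiedJobTemplate.objects.filter(pk__in=requested_ids).filter(
            pk__in=UnifiedJobTemplate.accessible_pk_qs(user, 'execute_role')
        )
    }
    denied = set(requested_ids) - set(allowed)
    if denied:
        raise ValueError(f"no execute permission on unified job templates: {sorted(denied)}")
    return allowed
```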
pave way for more promptable things
add "limit" as possible prompt on launch to bulk jobs
re-organize how we add credentials to pave the way for the other m2m items
without having to repeat too much code
add labels to the bulk job
add the other fields to the workflowjobnode
move urls around
allow system admins, org admins, and inventory admins to bulk create
hosts.
Testing on an "open" licensed awx as system admin, I created 1000 hosts with 6 queries in ~ 0.15 seconds
Testing on an "open" licensed awx as organization admin, I created 1000 hosts with 11 queries in ~ 0.15 seconds
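As an illustration of the shape of a bulk host create request (the endpoint path and field names are assumptions based on this work):

```python
# Illustrative payload: all hosts target a single inventory, which keeps the
# RBAC check and the number of queries constant regardless of host count.
payload = {
    "inventory": 1,
    "hosts": [
        {"name": "host-0001", "variables": '{"ansible_connection": "local"}'},
        {"name": "host-0002"},
        # ... up to the configured bulk maximum
    ],
}
# e.g. POST payload to /api/v2/bulk/host_create/ ; the response points back at
# the inventory rather than inventory/hosts.
```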
fix org max host check
also, only return permission denied if the license is a trial
add /api/v2/bulk to list bulk apis available
add api description templates
One motivation to not take a list of hosts with mixed inventories is to
keep things simple re: RBAC and keep a constant number of queries.
If there is great clamor for accepting list of hosts to insert into
arbitrary different inventories, we could probably make it happen - we'd
need to pop the inventory off of each of the hosts, run the
HostSerializer validate, then in top level BulkHostCreateSerializer
fetch all the inventories/check permissions/org host limits for those
inventories/etc. But that makes this that much more complicated.
add test for rbac access
test also helped me find a bug in a query, fixed that
add test to assert num queries scales as expected
also move other test to dedicated file
also test with super user like I meant to
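The scaling assertion could look roughly like this pytest-django sketch (the post, admin_user, and inventory fixtures mirror the AWX test suite but are assumptions here; django_assert_max_num_queries is a standard pytest-django fixture):

```python
import pytest


@pytest.mark.django_db
def test_bulk_host_create_query_count(post, admin_user, inventory, django_assert_max_num_queries):
    hosts = [{'name': f'host-{i}'} for i in range(100)]
    # the query count should stay flat no matter how many hosts are in the list
    with django_assert_max_num_queries(20):
        post('/api/v2/bulk/host_create/', {'inventory': inventory.pk, 'hosts': hosts}, admin_user)
```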
record activity stream for the inventory
this records that a certain number of hosts were added by a certain user
we could consider whether there is any additional information we want
to include
Previously, in some cases, an InventoryUpdate sourced by an SCM project
would still run and be successful even after the project it is sourced
from failed to update. This would happen because the InventoryUpdate
would revert the project back to its last working revision. This
behavior is confusing and inconsistent with how we handle jobs (which
just refuse to launch when the project is failed).
This change pulls out the logic that the job launch serializer and
RunJob#pre_run_hook had implemented (independently) to check if the
project is in a failed state, and puts it into a method on the Project
model. This is then checked in the project launch serializer as well as
the inventory update serializer, along with
SourceControlMixin#sync_and_copy as a fallback for things that don't run
the serializer validation (such as scheduled jobs and WFJT jobs).
Signed-off-by: Rick Elrod <rick@elrod.me>
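A minimal sketch of the shared check, written here as a standalone helper (the actual method lives on the Project model, and its name and exact conditions may differ):

```python
def project_failure_reason(project):
    """Return a reason string if jobs sourced from this project must not run, else None."""
    if project.status in ('error', 'failed', 'canceled'):
        return f'Project update failed with status: {project.status}.'
    if not project.scm_revision:
        return 'Project has never been updated successfully; it has no usable revision.'
    return None
```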
This will enable us to provide more useful information for the user,
now that all user-triggered health checks are async.
Also, de-bounce the health check endpoint to not allow additional
health check tasks to be triggered when one is already in progress.
awx-web container does not have access to receptor socket, and the
execution node health check requires receptorctl.
This change runs the health check asynchronously in the task container.
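A rough sketch of the de-bounce behavior (the health_check_pending field and the task import path are assumptions about the shape of the change, not confirmed code):

```python
# Minimal sketch: skip dispatch when a user-triggered check is already pending,
# and run the actual check in the task container, which can reach receptorctl.
from awx.main.tasks.system import execution_node_health_check  # assumed import path


def trigger_health_check(instance):
    if instance.health_check_pending:
        # an async health check is already in flight for this instance
        return False
    instance.health_check_pending = True
    instance.save(update_fields=['health_check_pending'])
    execution_node_health_check.delay(instance.hostname)
    return True
```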
- allow the node_state to be set to deprovisioning
- set the links that touch the instance to removing
- only allow on K8S
- only allow to be done to execution nodes
add scaffolding for instance install_bundle endpoint
- add instance_install_bundle view (does not do anything yet)
- add `instance_install_bundle` related field to serializer
- add `/install_bundle` to instance URL
- `/install_bundle` only available for execution and hop node
- `/install_bundle` endpoint response contains a downloadable tgz with mock data (sketched below)
TODO: add actual data to the install bundle response
Signed-off-by: Hao Liu <haoli@redhat.com>
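As scaffolding, the view could look roughly like the sketch below (the class name, URL wiring, and placeholder tarball contents are illustrative; only execution and hop nodes are allowed, per the list above):

```python
import io
import tarfile

from django.http import HttpResponse
from rest_framework import status
from rest_framework.generics import GenericAPIView
from rest_framework.response import Response

from awx.main.models import Instance


class InstanceInstallBundle(GenericAPIView):

    queryset = Instance.objects.all()

    def get(self, request, *args, **kwargs):
        instance = self.get_object()
        if instance.node_type not in ('execution', 'hop'):
            return Response(status=status.HTTP_400_BAD_REQUEST)
        # build a tarball of mock data in memory for now
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode='w:gz') as tar:
            data = b'placeholder install bundle contents\n'
            info = tarfile.TarInfo(name=f'{instance.hostname}_install_bundle/README')
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        response = HttpResponse(buf.getvalue(), content_type='application/gzip')
        response['Content-Disposition'] = f'attachment; filename="{instance.hostname}_install_bundle.tar.gz"'
        return response
```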
- node_state is now read only
- node_state gets set automatically to Installed in the create view
- raise a validation error when creating on non-K8S
- allow SystemAdministrator the 'add' permission for Instances
- expose the new listener_port field
- nodes with states Provisioning, Provisioning Fail, Deprovisioning,
and Deprovisioning Fail should bypass health checks and should never
transition due to the existing machinery
- nodes with states Unavailable and Installed can transition to Ready
if they check out as healthy
- nodes in the Ready state should transition to Unavailable if they
fail a check (see the sketch after this list)
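A minimal sketch of those transition rules (the state strings are illustrative stand-ins for the Instance node_state choices):

```python
PROVISIONING_OWNED_STATES = {
    'provisioning', 'provision-fail', 'deprovisioning', 'deprovision-fail',
}


def next_node_state(current_state, healthy):
    """Return the state a node should move to after a health check, or None to leave it untouched."""
    if current_state in PROVISIONING_OWNED_STATES:
        # these states belong to the (de)provisioning machinery; health checks never move them
        return None
    if current_state in ('unavailable', 'installed') and healthy:
        return 'ready'
    if current_state == 'ready' and not healthy:
        return 'unavailable'
    return None
```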
Remove corresponding views for job instance_groups
Validate job_slice_count in API
Remove defaults from some job launch view prompts
the null default is preferable
Additionally, move the inventory-specific hacks of yesteryear
into the prompts_dict method of the WorkflowJob model
try to make it clear exactly what this is hacking and why
Correctly summarize label prompts, and add missing EE
Expand unit tests to apply more fields
add missing fields to the set of fields preserved during copy in workflow.py
Fix bug where empty workflow job vars blanked node vars (#12904)
* Fix bug where empty workflow job vars blanked node vars
* Fix bug where workflow job has no extra_vars, add test
* Add empty workflow job extra vars to assure fix
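A minimal sketch of the intended behavior (function and variable names are illustrative, not the patched AWX code):

```python
import json


def apply_workflow_vars(node_extra_vars, workflow_job_extra_vars):
    """Layer workflow job extra_vars over node extra_vars; an empty workflow value must not blank node vars."""
    merged = dict(node_extra_vars or {})
    if workflow_job_extra_vars:
        # only override when the workflow job actually supplies extra vars
        merged.update(json.loads(workflow_job_extra_vars))
    return merged
```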