* Select control node before start task
Consume capacity on control nodes for controlling tasks and consider
remaining capacity on control nodes before selecting them.
This depends on the requirement that control and hybrid nodes should all
be in the instance group named 'controlplane'. Many tests do not satisfy that
requirement. I'll update the tests in another commit.
* update tests to use controlplane
We don't start any tasks if we don't have a controlplane instance group
Due to updates to fixtures, update tests to set node type and capacity
explicitly so they get the expected result.
* Fixes for accounting of control capacity consumed
The update method is used to account for currently consumed capacity for
instance groups in the in-memory capacity tracking data structure we initialize in
after_lock_init and then update via calculate_capacity_consumed (both in
task_manager.py)
Also update fit_task_to_instance to consider control impact on instances
Trust that these functions do the right thing when looking for a
node with capacity, and cut out the redundant check of the whole group's
capacity, per Alan's recommendation.
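A minimal sketch of the mechanism described here (not the actual task_manager.py code; the class and function below are simplified stand-ins):

```python
# Illustrative only: per-instance capacity tracking and task fitting.
from dataclasses import dataclass

@dataclass
class FakeInstance:
    hostname: str
    node_type: str            # 'control', 'execution', or 'hybrid'
    capacity: int
    consumed_capacity: int = 0

    @property
    def remaining_capacity(self):
        return self.capacity - self.consumed_capacity

def fit_task_to_instance(task_impact, capacity_type, instances):
    """Pick the first instance whose type matches and that still has room."""
    allowed = {'control': ('control', 'hybrid'), 'execution': ('execution', 'hybrid')}
    for instance in instances:
        if instance.node_type not in allowed[capacity_type]:
            continue
        if instance.remaining_capacity >= task_impact:
            # Record consumption in memory so later tasks in the same
            # task-manager run see the reduced capacity.
            instance.consumed_capacity += task_impact
            return instance
    return None
```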
* Refactor now redundant code
Deal with control type tasks before we loop over the preferred instance
groups, which cuts out the need for some redundant logic.
Also, fix a bug where the execution node was not being assigned in one case.
* set job explanation on tasks that need capacity
move the job explanation for jobs that need capacity to a function
so we can re-use it in the three places we need it.
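A hedged sketch of the kind of shared helper this describes; the function name and the message wording are illustrative, not the exact code:

```python
def task_needs_capacity(task):
    """Record on the task why it did not start, so the API can surface it (illustrative)."""
    task.job_explanation = (
        "This job is not ready to start because there is not enough available capacity."
    )
    task.save(update_fields=["job_explanation"])
```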
* project updates always run on the controlplane
Instance group ordering makes no sense on project updates because they
always need to run on the control plane.
Also, since hybrid nodes should always run the control processes for the
jobs running on them as execution nodes, account for this when looking for an
execution node.
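A small sketch of that accounting idea, assuming a control-impact value like the AWX_CONTROL_NODE_TASK_IMPACT setting; the function itself is illustrative:

```python
def total_impact_on_instance(task_impact, control_impact, node_type):
    # Hybrid nodes control the same jobs they execute, so both costs land on them.
    if node_type == 'hybrid':
        return task_impact + control_impact
    return task_impact
```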
* fix misleading message
The variables and wording were both misleading; fix them to give a more accurate
description in the two different cases where this log may be emitted.
* use settings correctly
use settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME instead of a hardcoded
name
cache the controlplane_ig object during after_lock_init to avoid
an unnecessary query
eliminate mistakenly duplicated AWX_CONTROL_PLANE_TASK_IMPACT and use
only AWX_CONTROL_NODE_TASK_IMPACT
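A hedged sketch of the caching pattern, assuming the AWX models and the DEFAULT_CONTROL_PLANE_QUEUE_NAME setting named above; the method body is illustrative:

```python
from django.conf import settings

def after_lock_init(self):
    # Assumes the AWX codebase; look the control plane group up once per
    # task-manager run and reuse self.controlplane_ig afterwards.
    from awx.main.models import InstanceGroup
    self.controlplane_ig = InstanceGroup.objects.filter(
        name=settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME
    ).first()
```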
* add test for control capacity consumption
add a test to verify that when there are two jobs and only capacity for one,
one job moves into waiting and the other stays in pending
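A self-contained sketch of the behavior this test asserts, using plain dicts instead of the real TaskManager and Job models:

```python
def schedule(jobs, control_capacity, control_impact=1):
    """Move pending jobs to waiting only while control capacity remains (illustrative)."""
    consumed = 0
    for job in jobs:
        if consumed + control_impact <= control_capacity:
            job['status'] = 'waiting'
            consumed += control_impact
        # otherwise the job stays pending until a later task-manager run

def test_only_one_job_starts_when_there_is_capacity_for_one():
    jobs = [{'status': 'pending'}, {'status': 'pending'}]
    schedule(jobs, control_capacity=1)
    assert [j['status'] for j in jobs] == ['waiting', 'pending']
```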
* add test for hybrid node capacity consumption
assert that the hybrid node is used for both control and execution and
capacity is deducted correctly
* add test for task.capacity_type = control
Test that control type tasks have the right capacity consumed and
get assigned to the right instance group
Also fix lint in the tests
* jobs_running not accurate for control nodes
We can either NOT use "idle instances" for control nodes, or we need
to update the jobs_running property on the Instance model to count
jobs where the node is the controller_node.
I didn't do that because it may be an expensive query, and it would be
hard to make it match with jobs_running on the InstanceGroup which
filters on tasks assigned to the instance group.
This change chooses to stop considering "idle" control nodes an option,
since we can't accurately identify them.
Without any change, we continue to over-consume capacity on control nodes,
because this method sees all control nodes as "idle" at the beginning
of the task manager run and then only counts jobs started in that run
in the in-memory tracking. So jobs that span a number of task
manager runs build up consumed capacity, which is accurately reported
via Instance.consumed_capacity.
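For reference, a hedged sketch of the rejected alternative (counting jobs where the instance is the controller_node); the field names follow the commit text, but the query is illustrative and assumes the AWX models:

```python
from django.db.models import Q

def jobs_running_including_control(instance):
    # Assumes the AWX codebase; this is the potentially expensive query referred
    # to above, counting work this node either executes or controls.
    from awx.main.models import UnifiedJob
    return UnifiedJob.objects.filter(
        Q(execution_node=instance.hostname) | Q(controller_node=instance.hostname),
        status__in=('running', 'waiting'),
    ).count()
```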
* Reduce default task impact for control nodes
This is something we can experiment with in terms of what users
want at install time, but start with just 1 for now.
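Illustratively, the shipped default could look like this in a settings file (the exact file and installer wiring are assumptions):

```python
# Illustrative only: capacity units each job consumes on the node that controls it.
AWX_CONTROL_NODE_TASK_IMPACT = 1
```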
* update capacity docs
Describe usage of the new setting and the concept of control impact.
Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Rebeccah <rhunter@redhat.com>
--- Added 3 new sub-packages: awx.main.tasks.system, awx.main.tasks.jobs, awx.main.tasks.receptor
--- Modified the functional tests and unit tests accordingly
- the list, detail, and health check API views should not include hop nodes (see the sketch after this list)
- the Instance-InstanceGroup association views should not allow them
to be changed
- the ping view excludes them
- list_instances management command excludes them
- Instance.set_capacity_value sets hop nodes to 0 capacity
- TaskManager will exclude them from the nodes available for job execution
- TaskManager.reap_jobs_from_orphaned_instances will consider hop nodes
  to be orphaned instances
- The apply_cluster_membership_policies task will not manipulate hop nodes
- get_broadcast_hosts will ignore hop nodes
- active_count also will ignore hop nodes
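A minimal sketch of the exclusion pattern the items above describe, assuming Instance.node_type can be 'hop'; the queryset helper is hypothetical, and set_capacity_value is reduced to just the hop-node branch:

```python
HOP = 'hop'

def schedulable_instances(instance_qs):
    """Filter hop nodes out of any Instance queryset before scheduling work on it."""
    return instance_qs.exclude(node_type=HOP)

def set_capacity_value(instance):
    """Hop nodes only relay traffic, so they never contribute capacity (sketch)."""
    if instance.node_type == HOP:
        instance.capacity = 0
    return instance.capacity
```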
* Fully finalize the planned work for health checks of execution nodes
* Implementation of instance health_check endpoint
* Also apply a version conditional to node_type
* Do not use the receptor mesh to check the health of main cluster nodes
* Fix bugs from testing health check of cluster nodes, add doc
* Add a few fields to health check serializer missed before
* Light refactoring of error field processing
* Fix errors clearing error, write more unit tests
* Update health check info in docs
* Bump migration of health check after rebase
* Mark string for translation
* Add related health_check link for system auditors too
* Handle health_check cluster node timeout, add errors for peer judgement
* Exclude control-only nodes from IG policy calculations
Also, conversely, exclude execution-only nodes from
the calculations if the group in question is the controlplane
* Incorporate review comments
* Always use controlplane as project update backup IG
Before, this was done conditionally for container_group jobs;
this change makes the controlplane group a firm backstop
for project updates in all cases
* Code a little more defensively to make unit tests pass
* Fix unit tests
* Clean up added work_type processing for mesh_code branch
* track both execution and control capacity
* Remove unused execution_capacity property
* Count all forms of capacity to make test pass
* Force jobs to be on execution nodes, updates on control nodes
* Introduce capacity_type property to abstract some details out (see the sketch below)
* Update test to cover all job types at same time
* Register OpenShift nodes as control types
* Remove unqualified consumed_capacity from task manager and make unit tests work
* Update unit test to execution vs control TM logic changes
* Fix a bug in else handling for the work_type method
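As referenced above, a hedged sketch of the capacity_type abstraction; the classes here are simplified stand-ins for the real task models:

```python
class TaskSketch:
    # Ordinary jobs consume execution capacity and must land on execution
    # (or hybrid) nodes.
    capacity_type = 'execution'

class ProjectUpdateSketch(TaskSketch):
    # Project updates always run on the control plane, so they consume
    # control capacity instead.
    capacity_type = 'control'

def node_types_for(task):
    """The task manager can branch on capacity_type instead of concrete task classes."""
    allowed = {'control': ('control', 'hybrid'), 'execution': ('execution', 'hybrid')}
    return allowed[task.capacity_type]
```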
Set JT.organization with value from its project
Remove validation requiring JT.organization
Undo some of the additional org definitions in tests
Revert some tests no longer needed for feature
exclude workflow approvals from unified organization field
revert awxkit changes for providing organization
Roll back additional JT creation permission requirement
Fix up more issues by persisting organization field when project is removed
Restrict project org editing, logging, and testing
Grant removed inventory org admin permissions in migration
Add special validate_unique for job templates
this deals with enforcing name-organization uniqueness (see the sketch below)
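A minimal sketch of enforcing name-organization uniqueness in a Django validate_unique-style check; this shows the assumed shape, not the actual JobTemplate code:

```python
from django.core.exceptions import ValidationError

def validate_unique_name_per_org(model_cls, instance):
    """Reject a save if another object with the same name exists in the same organization."""
    conflicts = model_cls.objects.filter(
        name=instance.name, organization=instance.organization
    )
    if instance.pk:
        conflicts = conflicts.exclude(pk=instance.pk)
    if conflicts.exists():
        raise ValidationError(
            {'name': 'An object with this name already exists in this organization.'}
        )
```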
Add back in special message where config is unknown
when receiving 403 on job relaunch
Fix logical and performance bugs with data migration
within JT.inventory.organization make-permission-explicit migration
remove nested loops so we do .iterator() on JT queryset
in reverse migration, carefully remove execute role on JT
held by org admins of inventory organization,
as well as the execute_role holders
Use the current state of the Role model in the logic, with one notable exception
that is used to filter on ancestors;
the ancestor and descendant relationship in the migration model
is not reliable
output of this is saved as an integer list to avoid future
compatibility errors
make the parents-rebuilding logic skip over irrelevant models
this is the largest performance gain for small resource numbers
This is the old version of this feature from 2019
this allows setting the organization in the data sent
to the API when creating a JT, and exposes the field
in the UI as well
A subsequent commit changes the field from editable
to read-only, but as of this commit, the machinery
is not hooked up to infer it from project
We recently made a change so that instances no longer bind to
instance-group-specific queues; instead, each instance binds to
a direct queue for its specific hostname
(https://github.com/ansible/tower/pull/1922)
Because of this, we shouldn't *need* to reconfigure the queue bindings at
runtime anymore when group membership changes. Under our new model,
every celeryd listens on a queue named after its hostname; when the
scheduler finds a task to run, it picks an Instance in the target
Instance Group and sends the task to the queue for that Instance's
hostname.
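A hedged sketch of that dispatch model using Celery's public routing API; the task name, broker URL, and instance lookup are illustrative:

```python
from celery import Celery

app = Celery('sketch', broker='amqp://localhost//')

@app.task(name='sketch.run_job')
def run_job(job_id):
    # The worker bound to the chosen hostname queue picks this up.
    pass

def submit(job_id, chosen_hostname):
    # Every node listens on a queue named after its own hostname, so routing
    # work to an instance is just routing it to that hostname's queue.
    run_job.apply_async(args=[job_id], queue=chosen_hostname)
```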
* Before, we had a special group, tower, that ran any async work that
tower needed done. This allowed users fine-grained control over which
nodes did background work. However, this granularity was too complicated
for users. So now, all tower system work goes to a special non-user
exposed celery queue. Tower remains the fallback instance group to
execute jobs on. The tower group will be created upon install and
protected from deletion.
* Purging old task manager unit tests
* Migrating those tests to functional tests
* Updating fixtures and factories to support a change in the way the
task manager is tested
* Fix an issue with the mk_credential fixture when used in functional
tests. Previously it had trouble with multiple invocations when
persistence was used