Mirror of https://github.com/ansible/awx.git, synced 2026-02-01 01:28:09 -03:30
Consume control capacity (#11665)
* Select control node before start task

  Consume capacity on control nodes for controlling tasks and consider remaining capacity on control nodes before selecting them.

  This depends on the requirement that control and hybrid nodes should all be in the instance group named 'controlplane'. Many tests do not satisfy that requirement. I'll update the tests in another commit.

* Update tests to use controlplane

  We don't start any tasks if we don't have a controlplane instance group. Due to updates to fixtures, update tests to set node type and capacity explicitly so they get the expected result.

* Fixes for accounting of control capacity consumed

  The update method is used to account for currently consumed capacity for instance groups in the in-memory capacity tracking data structure we initialize in after_lock_init and then update via calculate_capacity_consumed (both in task_manager.py).

  Also update fit_task_to_instance to consider control impact on instances. Trust that these functions do the right thing when looking for a node with capacity, and cut out the redundant check of the whole group's capacity, per Alan's recommendation.

* Refactor now-redundant code

  Deal with control type tasks before we loop over the preferred instance groups, which cuts out the need for some redundant logic. Also, fix a bug where the execution node was not assigned in one case.

* Set job explanation on tasks that need capacity

  Move the job explanation for jobs that need capacity into a function so we can re-use it in the three places we need it.

* Project updates always run on the controlplane

  Instance group ordering makes no sense for project updates because they always need to run on the control plane. Also, since hybrid nodes should always run the control processes for the jobs running on them as execution nodes, account for this when looking for an execution node.

* Fix misleading message

  The variables and wording were both misleading; fix them to more accurately describe the two different cases in which this log may be emitted.

* Use settings correctly

  Use settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME instead of a hardcoded name. Cache the controlplane_ig object during after_lock_init to avoid an unnecessary query. Eliminate the mistakenly duplicated AWX_CONTROL_PLANE_TASK_IMPACT and use only AWX_CONTROL_NODE_TASK_IMPACT.

* Add test for control capacity consumption

  Verify that when there are two jobs and capacity for only one, one moves into waiting and the other stays in pending.

* Add test for hybrid node capacity consumption

  Assert that the hybrid node is used for both control and execution, and that capacity is deducted correctly.

* Add test for task.capacity_type = control

  Test that control type tasks have the right capacity consumed and get assigned to the right instance group. Also fix lint in the tests.

* jobs_running not accurate for control nodes

  We can either NOT use "idle instances" for control nodes, or we need to update the jobs_running property on the Instance model to count jobs where the node is the controller_node. I didn't do that because it may be an expensive query, and it would be hard to make it match jobs_running on the InstanceGroup, which filters on tasks assigned to the instance group.

  This change chooses to stop considering "idle" control nodes an option, since we can't accurately identify them. Without any change, we continue to over-consume capacity on control nodes, because this method sees all control nodes as "idle" at the beginning of the task manager run and then only counts jobs started in that run in the in-memory tracking. So jobs that last over a number of task manager runs build up consumed capacity, which is accurately reported via Instance.consumed_capacity.

* Reduce default task impact for control nodes

  This is something we can experiment with as far as what users want at install time, but start with just 1 for now.

* Update capacity docs

  Describe usage of the new setting and the concept of control impact.

Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Rebeccah <rhunter@redhat.com>
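The selection logic the commit message describes (pick a control node only if it has remaining capacity, and deduct the control impact immediately so later tasks in the same scheduling pass see it) can be sketched as follows. This is a minimal illustration, not AWX's actual task_manager.py code; the function and field names here are assumptions.

```python
# Illustrative sketch of "consume control capacity" scheduling.
# AWX_CONTROL_NODE_TASK_IMPACT defaults to 1 per this commit (assumption
# for this sketch; the real value comes from settings).
AWX_CONTROL_NODE_TASK_IMPACT = 1


def select_control_node(controlplane_nodes, consumed):
    """Pick a control node with enough remaining capacity, or None.

    `controlplane_nodes` is a list of dicts with "hostname" and "capacity";
    `consumed` is the in-memory per-hostname capacity tracking dict that
    persists across tasks within one scheduling pass.
    """
    for node in controlplane_nodes:
        hostname = node["hostname"]
        remaining = node["capacity"] - consumed.get(hostname, 0)
        if remaining >= AWX_CONTROL_NODE_TASK_IMPACT:
            # Deduct immediately so subsequent tasks in this pass
            # see the reduced capacity and don't over-subscribe the node.
            consumed[hostname] = consumed.get(hostname, 0) + AWX_CONTROL_NODE_TASK_IMPACT
            return hostname
    # No node has capacity: the task stays pending (with a job_explanation).
    return None
```

With a single node of capacity 2 and impact 1, two tasks get the node and a third is refused, mirroring the "one moves to waiting, the other stays pending" test described above.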
@@ -77,15 +77,41 @@ When a job is made to run, AWX will add `1` to the number of forks selected to c
systems with a `forks` value of `5`, then the actual `forks` value from the perspective of Job Impact will be 6.
#### Impact of Job Types in AWX

Jobs have two types of impact: task "execution" impact and task "control" impact.

For instances that are the "controller_node" for a task, the impact is set by settings.AWX_CONTROL_NODE_TASK_IMPACT and is the same regardless of job type.

For instances that are the "execution_node" for a task, the impact is calculated as follows:

Jobs and ad-hoc jobs follow the above model, `forks + 1`.

Other job types have a fixed execution impact:
* Inventory Updates: 1
* Project Updates: 1
* System Jobs: 5

For jobs that execute on the same node they are controlled by, both settings.AWX_CONTROL_NODE_TASK_IMPACT and the job's execution impact apply.
Examples, given settings.AWX_CONTROL_NODE_TASK_IMPACT is 1:

- Project updates (where the execution_node is always the same as the controller_node) have a total impact of 2.
- For container group jobs (where the execution node is not a member of the cluster), only control impact applies, and the controller node has a total task impact of 1.
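The impact rules above can be written out as a small worked example. This is a back-of-the-envelope restatement of the documented numbers, not AWX's internal implementation; the function names and the role strings are assumptions made for this sketch.

```python
# Worked example of the impact rules documented above.
AWX_CONTROL_NODE_TASK_IMPACT = 1  # the documented starting default

# Fixed execution impact for non-job task types, per the list above.
FIXED_EXECUTION_IMPACT = {
    "inventory_update": 1,
    "project_update": 1,
    "system_job": 5,
}


def execution_impact(job_type, forks=0):
    """Execution impact: forks + 1 for jobs/ad-hoc jobs, fixed otherwise."""
    if job_type in ("job", "ad_hoc_command"):
        return forks + 1
    return FIXED_EXECUTION_IMPACT[job_type]


def node_task_impact(role, job_type, forks=0):
    """Total impact one node carries for one task, given its role.

    A "hybrid" role means the node is both controller_node and
    execution_node for the task, so both impacts apply.
    """
    impact = 0
    if role in ("controller", "hybrid"):
        impact += AWX_CONTROL_NODE_TASK_IMPACT
    if role in ("execution", "hybrid"):
        impact += execution_impact(job_type, forks)
    return impact
```

For instance, a project update on its controller node (the "hybrid" case) costs 1 + 1 = 2, matching the first example above, while the controller of a container group job carries only the control impact of 1.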

### Selecting the Right settings.AWX_CONTROL_NODE_TASK_IMPACT

This setting allows you to determine how much impact controlling jobs has. This
can be helpful if you notice symptoms of your control plane exceeding desired
CPU or memory usage, as it effectively throttles how many jobs can be run
concurrently by your control plane. This is usually a concern with container
groups, which at this time effectively have infinite capacity, so it is easy to
end up with too many jobs running concurrently, overwhelming the control plane
pods with events and control processes.

If you want more throttling behavior, increase the setting.
If you want less throttling behavior, lower the setting.
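The throttling effect follows directly from the capacity arithmetic: each job a node controls holds AWX_CONTROL_NODE_TASK_IMPACT of that node's capacity, so the node's capacity divided by the impact bounds how many jobs it can control at once. A hedged sketch of that arithmetic (the function name is ours, not an AWX API):

```python
# Rough upper bound on concurrently controlled jobs for one control node:
# each controlled job consumes AWX_CONTROL_NODE_TASK_IMPACT capacity.
def max_concurrently_controlled_jobs(control_capacity, control_impact):
    if control_impact <= 0:
        raise ValueError("AWX_CONTROL_NODE_TASK_IMPACT must be positive")
    return control_capacity // control_impact
```

So raising the impact from 1 to 5 on a node with capacity 100 drops its bound from 100 to 20 concurrently controlled jobs, which is the throttling behavior described above.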

### Selecting the Right Capacity

Selecting between a memory-focused capacity algorithm and a CPU-focused capacity for your AWX use means you'll be selecting between a minimum