Compare commits

..

132 Commits

Author SHA1 Message Date
Elijah DeLee
799968460d Fixup conversion of memory and cpu settings to support k8s resource request format (#11725)
fix memory and cpu settings to support k8s resource request format

* fix conversion of memory setting to bytes

This setting has not been getting set by default, and needed some fixing
up to be compatible with setting the memory in the same way as we set it
in the operator, as well as with other changes from last year which
assume that ansible runner is returning memory in bytes.

This way we can start setting this setting in the operator, and get a
more accurate reflection of how much memory is available to the control
pod in k8s.

On platforms where services are all sharing memory, we deduct a
penalty from the memory available. On k8s we don't need to do this
because the web, redis, and task containers each have memory
allocated to them.

* Support CPU setting expressed in units used by k8s

This setting has not been getting set by default, and needed some fixing
up to be compatible with setting the CPU resource request/limits in the
same way as we set it in the resource requests/limits.

This way we can start setting this setting in the
operator, and get a more accurate reflection of how much cpu is
available to the control pod in k8s.

Because cpu on k8s can be partial cores, migrate cpu field to decimal.

k8s does not allow granularity of less than 100m (equivalent to 0.1 cores), so only
store up to 1 decimal place.

fix analytics to deal with decimal cpu

need to use DjangoJSONEncoder when Decimal fields in data passed to
json.dumps
2022-02-15 14:08:24 -05:00
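The conversions this commit describes can be sketched in plain Python. The helper names below (`k8s_memory_to_bytes`, `k8s_cpu_to_decimal`, `DecimalEncoder`) are hypothetical, not taken from the AWX codebase, and `DecimalEncoder` only stands in for the `Decimal` handling of Django's `DjangoJSONEncoder`:

```python
import json
from decimal import Decimal, ROUND_DOWN

# k8s binary (Ki/Mi/Gi/Ti) and decimal (K/M/G/T) memory suffixes.
# Order matters: two-letter suffixes must be checked first.
MEMORY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
                   "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

def k8s_memory_to_bytes(quantity: str) -> int:
    """Convert a k8s memory quantity like '2Gi' or '500M' to bytes."""
    for suffix, factor in MEMORY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count

def k8s_cpu_to_decimal(quantity: str) -> Decimal:
    """Convert a k8s CPU quantity ('100m' or '1.5') to cores,
    quantized to one decimal place (k8s granularity is 100m)."""
    if quantity.endswith("m"):
        cores = Decimal(quantity[:-1]) / Decimal(1000)
    else:
        cores = Decimal(quantity)
    return cores.quantize(Decimal("0.1"), rounding=ROUND_DOWN)

class DecimalEncoder(json.JSONEncoder):
    """Stand-in for DjangoJSONEncoder: serialize Decimal as a string
    so json.dumps does not raise TypeError."""
    def default(self, o):
        if isinstance(o, Decimal):
            return str(o)
        return super().default(o)

print(k8s_memory_to_bytes("2Gi"))   # 2147483648
print(k8s_cpu_to_decimal("100m"))   # 0.1
print(json.dumps({"cpu": k8s_cpu_to_decimal("1500m")}, cls=DecimalEncoder))
```

Passing `cls=DjangoJSONEncoder` to `json.dumps` in the analytics code is the same idea: the encoder turns `Decimal` values into JSON-safe strings instead of raising.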
Amol Gautam
3f08e26881 Merge pull request #11571 from amolgautam25/tasks-refactor-2
Added new class for Ansible Runner Callbacks
2022-02-15 10:31:32 -05:00
Alex Corey
9af2c92795 Merge pull request #11691 from AlexSCorey/11634-ContaineGroupNameFix
Fixes erroneous disabling of name input field on container and instance group forms
2022-02-14 16:14:32 -05:00
Alex Corey
dabae456d9 Merge pull request #11653 from AlexSCorey/11588-TopLevelInstances
Adds top level instances list
2022-02-14 16:06:55 -05:00
Alex Corey
c40785b6eb Fixes erroneous disabling of name input field on container and instance group forms 2022-02-14 15:47:50 -05:00
Alex Corey
e2e80313ac Refactor the health check button 2022-02-14 15:35:25 -05:00
Alex Corey
14a99a7b9e resolves advanced search button 2022-02-14 15:35:24 -05:00
Alex Corey
50e8c299c6 Adds top level instances list 2022-02-14 15:35:24 -05:00
Alex Corey
326d12382f Adds Inventory labels (#11558)
* Adds inventory labels end point

* Adds label field to inventory form
2022-02-14 15:14:08 -05:00
Kersom
1de9dddd21 Merge pull request #11724 from nixocio/ui_issue_11708
Bump node to LTS version
2022-02-14 13:11:57 -05:00
nixocio
87b1f0d0de Bump node to LTS version
Bump node to LTS version
2022-02-14 12:41:11 -05:00
Kersom
f085afd92f Merge pull request #11592 from nixocio/ui_issue_11017_utils
Modify usage of ansible_facts on advanced search
2022-02-14 10:30:45 -05:00
Elijah DeLee
604cbc1737 Consume control capacity (#11665)
* Select control node before start task

Consume capacity on control nodes for controlling tasks and consider
remaining capacity on control nodes before selecting them.

This depends on the requirement that control and hybrid nodes should all
be in the instance group named 'controlplane'. Many tests do not satisfy that
requirement. I'll update the tests in another commit.

* update tests to use controlplane

We don't start any tasks if we don't have a controlplane instance group

Due to updates to fixtures, update tests to set node type and capacity
explicitly so they get expected result.

* Fixes for accounting of control capacity consumed

Update method is used to account for currently consumed capacity for
instance groups in the in-memory capacity tracking data structure we initialize in
after_lock_init and then update via calculate_capacity_consumed (both in
task_manager.py)

Also update fit_task_to_instance to consider control impact on instances

Trust that these functions do the right thing when looking for a
node with capacity, and cut out the redundant check of the whole group's
capacity, per Alan's recommendation.

* Refactor now redundant code

Deal with control type tasks before we loop over the preferred instance
groups, which cuts out the need for some redundant logic.

Also, fix a bug where I was missing assigning the execution node in one case!

* set job explanation on tasks that need capacity

move the job explanation for jobs that need capacity to a function
so we can re-use it in the three places we need it.

* project updates always run on the controlplane

Instance group ordering makes no sense on project updates because they
always need to run on the control plane.

Also, since hybrid nodes should always run the control processes for the
jobs running on them as execution nodes, account for this when looking for an
execution node.

* fix misleading message

the variables and wording were both misleading; fix them to give a more
accurate description in the two different cases where this log may be emitted.

* use settings correctly

use settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME instead of a hardcoded
name
cache the controlplane_ig object during the after lock init to avoid
an unnecessary query
eliminate mistakenly duplicated AWX_CONTROL_PLANE_TASK_IMPACT and use
only AWX_CONTROL_NODE_TASK_IMPACT

* add test for control capacity consumption

add test to verify that when there are 2 jobs and only capacity for one
that one will move into waiting and the other stays in pending

* add test for hybrid node capacity consumption

assert that the hybrid node is used for both control and execution and
capacity is deducted correctly

* add test for task.capacity_type = control

Test that control type tasks have the right capacity consumed and
get assigned to the right instance group

Also fix lint in the tests

* jobs_running not accurate for control nodes

We can either NOT use "idle instances" for control nodes, or we need
to update the jobs_running property on the Instance model to count
jobs where the node is the controller_node.

I didn't do that because it may be an expensive query, and it would be
hard to make it match with jobs_running on the InstanceGroup which
filters on tasks assigned to the instance group.

This change stops considering "idle" control nodes an option,
since we can't accurately identify them.

Without any change, we keep over-consuming capacity on control nodes,
because this method sees all control nodes as "idle" at the beginning
of the task manager run and then only counts jobs started in that run
in the in-memory tracking. So jobs which last over a number of task
manager runs build up consumed capacity, which is accurately reported
via Instance.consumed_capacity

* Reduce default task impact for control nodes

This is something we can experiment with as far as what users
want at install time, but start with just 1 for now.

* update capacity docs

Describe usage of the new setting and the concept of control impact.

Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Rebeccah <rhunter@redhat.com>
2022-02-14 10:13:22 -05:00
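The accounting this commit describes can be illustrated with a toy sketch: control tasks consume a small fixed impact on the controlling node, and a hybrid node absorbs both the control and the execution impact. None of these names come from task_manager.py; this is an assumption-laden illustration, not the actual implementation:

```python
# Illustrative default, matching the "start with just 1" decision above.
CONTROL_NODE_TASK_IMPACT = 1

class Node:
    def __init__(self, name, node_type, capacity):
        self.name, self.node_type, self.capacity = name, node_type, capacity
        self.consumed = 0

    @property
    def remaining(self):
        return self.capacity - self.consumed

def assign_task(task_impact, control_nodes, execution_nodes):
    """Pick a control node with remaining capacity, then an execution
    node; a hybrid control node may also execute the task itself.
    Returns (control_node, execution_node) or None if the task must
    stay in pending."""
    control = next((n for n in control_nodes
                    if n.remaining >= CONTROL_NODE_TASK_IMPACT), None)
    if control is None:
        return None  # no control capacity: task stays in pending
    control.consumed += CONTROL_NODE_TASK_IMPACT
    if control.node_type == "hybrid" and control.remaining >= task_impact:
        # Hybrid nodes run the control process for jobs they execute,
        # so both impacts land on the same node.
        control.consumed += task_impact
        return control, control
    execution = next((n for n in execution_nodes
                      if n.remaining >= task_impact), None)
    if execution:
        execution.consumed += task_impact
        return control, execution
    control.consumed -= CONTROL_NODE_TASK_IMPACT  # roll back
    return None
```

With capacity for only one job, a second call returns None and the second task would stay in pending, mirroring the test scenario described above.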
Shane McDonald
60b6faff19 Merge pull request #11655 from ivarmu/devel
Let an organization admin add new users to its Tower organization
2022-02-12 19:35:51 -05:00
Rebeccah Hunter
b26c1c16b9 Merge pull request #11728 from ansible/node_state_unhealthy_to_error
[mesh viz] change the term unhealthy to error
2022-02-11 16:02:43 -05:00
Rebeccah
c2bf9d94be change the term unhealthy to error 2022-02-11 15:42:33 -05:00
Brandon Sharp
ea09adbbf3 Add await to handleLaunch (#11649)
* Add async to handleLaunch

* Fix package-lock

Co-authored-by: Wambugu  Kironji <wkironji@redhat.com>
2022-02-11 13:40:20 -05:00
Seth Foster
9d0de57fae Merge pull request #11717 from fosterseth/emit_event_detail_metrics
Add metric for number of events emitted over websocket broadcast
2022-02-11 12:52:16 -05:00
nixocio
da733538c4 Modify usage of ansible_facts on advanced search
Modify usage of ansible_facts on advanced search, once `ansible_facts`
key is selected render a text input allowing the user to type special
query expected for ansible_facts.

This change will add more flexibility to the usage of ansible_facts when
creating a smart inventory.

See: https://github.com/ansible/awx/issues/11017
2022-02-11 10:24:04 -05:00
Seth Foster
6db7cea148 variable name changes 2022-02-10 10:57:00 -05:00
Seth Foster
3993aa9524 Add metric for number of events emitted over websocket broadcast 2022-02-09 21:57:01 -05:00
Alex Corey
6f9d4d89cd Adds credential password step to ad hoc commands wizard (#11598) 2022-02-09 15:59:50 -05:00
Amol Gautam
443bdc1234 Decoupled callback functions from BaseTask Class
--- Removed all callback functions from 'jobs.py' and put them in a new file '/awx/main/tasks/callback.py'
--- Modified unit tests that were moved
--- Moved 'update_model' from jobs.py to /awx/main/utils/update_model.py
2022-02-09 13:46:32 -05:00
Ivan Aragonés Muniesa
9cd43d044e let an organization admin add new users to its Tower organization 2022-02-09 18:59:53 +01:00
Kersom
f8e680867b Merge pull request #11710 from nixocio/ui_npm_audit
Run npm audit fix
2022-02-09 12:48:54 -05:00
Rebeccah Hunter
96a5540083 Merge pull request #11632 from ansible/minikube-docs-part-2
update minikube dev env docs with newer keywords for instantiate-awx-deployment.yml
2022-02-09 11:44:43 -05:00
Shane McDonald
750e1bd80a Merge pull request #11342 from shanemcd/custom-uwsgi-mount-path
Allow for running AWX at non-root path (URL prefixing)
2022-02-09 10:37:04 -05:00
Jeff Bradberry
a12f161be5 Merge pull request #11711 from jbradberry/firehose-with-partitioning
Fix the firehose job creation script
2022-02-09 10:07:47 -05:00
Jeff Bradberry
04568ea830 Fix the firehose job creation script
to account for the changes made due to the job event table partitioning work.
2022-02-09 09:49:17 -05:00
nixocio
3be0b527d6 Run npm audit fix
Run npm audit fix

See: https://github.com/ansible/awx/issues/11709
2022-02-09 09:03:20 -05:00
Kersom
afc0732a32 Merge pull request #11568 from nixocio/ui_rs5
Bump react scripts to 5.0
2022-02-09 07:49:43 -05:00
nixocio
9703fb06fc Bump react scripts to 5.0
Bump react scripts to 5.0

See: https://github.com/ansible/awx/issues/11543

Bump eslint

Bump eslint and related plugins

Add @babel/core

Add @babel/core, remove babel/core.

Rename .eslintrc to .eslintrc.json

Rename .eslintrc to .eslintrc.json

Add extra plugin

Move babel-plugin-macros to dev dependencies

Move babel-plugin-macros to dev dependencies

Add preset-react

Add preset-react

Fixing lint errors

Fixing lint errors

Run eslint --fix

Run eslint --fix

Turn no-restricted-exports off

Turn no-restricted-exports off

Revert "Run eslint --fix"

This reverts commit e760885b6c199f2ca18091088cb79bfa77c1d3ed.

Run --fix

Run --fix

Fix lint errors

Also bump specificity of Select CSS border component to avoid bug of
missing borders.

Also update API tests related to licenses.
2022-02-08 11:12:51 -05:00
Shane McDonald
54cbf13219 Merge pull request #11696 from sean-m-sullivan/awx_collection_role_update_v2
add execution_environment_admin to role module
2022-02-08 10:12:00 -05:00
Shane McDonald
6774a12c67 Merge pull request #11694 from shanemcd/scoped-schema
Scope schema.json to target branch
2022-02-08 09:48:08 -05:00
Sean Sullivan
94e53d988b add execution administrator to role module 2022-02-08 09:44:50 -05:00
Shane McDonald
22d47ea8c4 Update port binding for UI dev tooling
Jake says "Folks sometimes run the ui dev server independently of the tools_awx container"

Co-authored-by: Jake McDermott <9753817+jakemcdermott@users.noreply.github.com>
2022-02-08 08:33:21 -05:00
Sarah Akus
73bba00cc6 Merge pull request #11670 from keithjgrant/11628-missing-job-output
Display all job type events in job output
2022-02-07 18:04:18 -05:00
Shane McDonald
6ed429ada2 Scope api schema.json to target branch 2022-02-07 17:54:01 -05:00
Keith J. Grant
d2c2d459c4 display all job type events in job output 2022-02-07 14:48:39 -08:00
John Westcott IV
c8b906ffb7 Workflow changes (#11692)
Modifying workflows to install python for make commands
Squashing CI tasks to remove repeated steps
Modifying pre-commit.sh to not fail if there are no python file changes
2022-02-07 15:42:35 -05:00
Shane McDonald
264f1d6638 Merge pull request #11685 from shanemcd/skip-pytest-7.0.0
Skip pytest 7.0.0
2022-02-04 16:09:42 -05:00
Shane McDonald
16c7908adc Skip pytest 7.0.0
A test was failing with:

    from importlib.readers import FileReader
E   ModuleNotFoundError: No module named 'importlib.readers'
2022-02-04 15:48:18 -05:00
Sarabraj Singh
c9d05d7d4a Merge pull request #11474 from sarabrajsingh/supervisord-rsyslog-event-listener-buff
adding event handler specific to when awx-rsyslog throws PROCESS_LOG_STDERR
2022-02-04 11:59:51 -05:00
Sarabraj Singh
ec7e4488dc adding event handler specific to when awx-rsyslog throws PROCESS_LOG_STDERR errors based on 4XX http errors; increased clarity in stderr log messages; removed useless None initializations 2022-02-04 11:18:45 -05:00
Alex Corey
72f440acf5 Merge pull request #11675 from AlexSCorey/11630-WrongtooltipDocs
Fix tooltip documentation in settings
2022-02-04 10:23:11 -05:00
Alan Rominger
21bf698c81 Merge pull request #11617 from AlanCoding/task_job_id
Fix error on timeout with non-job types
2022-02-04 09:41:25 -05:00
Shane McDonald
489ee30e54 Simplify code that generates named URLS 2022-02-03 19:00:07 -05:00
Shane McDonald
2abab0772f Bind port for UI live reload tooling in development environment
This allows for running:

```
docker exec -ti tools_awx_1 npm --prefix=awx/ui start
```
2022-02-03 19:00:07 -05:00
Shane McDonald
0bca0fabaa Fix bug in named url middleware when running at non-root path
The most notable change here is the removal of the conditional in
process_request. I don't know why we were preferring REQUEST_URI over
PATH_INFO. When the app is running at /, they are always the same as far as I
can tell. However, when using SCRIPT_NAME, this was incorrectly setting path and
path_info to /myprefix/myprefix/.
2022-02-03 19:00:07 -05:00
Shane McDonald
93ac3fea43 Make UI work when not running at root path 2022-02-03 19:00:07 -05:00
Shane McDonald
c72b71a43a Use relative paths for UI assets
Found at https://create-react-app.dev/docs/deployment/#serving-the-same-build-from-different-paths
2022-02-03 19:00:07 -05:00
Shane McDonald
9e8c40598c Allow for overriding UWSGI mount path
This is just one piece of the puzzle as I try to add support for URL prefixing.
2022-02-03 19:00:07 -05:00
Shane McDonald
4ded4afb7d Move production UWSGI config to a file 2022-02-03 19:00:07 -05:00
Seth Foster
801c45da6d Merge pull request #11681 from fosterseth/fix_cleanup_named_pipe
remove any named pipes before unzipping artifacts
2022-02-03 15:43:05 -05:00
srinathman
278b356a18 Update saml.md (#11663)
* Update saml.md

- Updated link to python documentation
- Added instructions for superadmin permissions

Co-authored-by: John Westcott IV <john.westcott.iv@redhat.com>
2022-02-03 13:33:50 -05:00
Shane McDonald
a718e01dbf Merge pull request #11676 from shanemcd/automate-labels
Automate labels with GHA
2022-02-03 10:53:15 -05:00
Shane McDonald
8e6cdde861 Automate labels 2022-02-03 09:45:00 -05:00
Alex Corey
62b0c2b647 Fix tooltip documentation 2022-02-02 16:18:41 -05:00
Seth Foster
1cd30ceb31 remove any named pipes before unzipping artifacts 2022-02-02 15:54:31 -05:00
Shane McDonald
15c7a3f85b Merge pull request #11673 from ansible/fix_dockerfile_kube_dev_deps
Includes gettext on build-deps for multi-stage builds
2022-02-02 15:31:54 -05:00
Alex Corey
d977aff8cf Merge pull request #11668 from nixocio/ui_issue_11582
Fix TypeError cannot read property of null
2022-02-02 14:46:04 -05:00
Marcelo Moreira de Mello
e3b44c3950 Includes gettext on build-deps for multi-stage builds 2022-02-02 14:12:27 -05:00
nixocio
ba035efc91 Fix TypeError cannot read property of null
```
> x = null
null
> x?.contains
undefined
> x.contains
Uncaught TypeError: Cannot read property 'contains' of null
```

See: https://github.com/ansible/awx/issues/11582
2022-02-02 13:54:37 -05:00
Sarah Akus
76cfd7784a Merge pull request #11517 from AlexSCorey/11236-ExpandCollapseAll
Adds expand collapse all functionality on job output page.
2022-02-02 09:43:13 -05:00
Alex Corey
3e6875ce1d Adds expand collapse all functionality on job output page. 2022-02-02 09:26:08 -05:00
Shane McDonald
1ab7aa0fc4 Merge pull request #11662 from simaishi/remove_tower_setup_script
Remove ansible-tower-setup script
2022-02-01 15:25:00 -05:00
Shane McDonald
5950e0bfcb Merge pull request #11643 from john-westcott-iv/github_meta_changes
GitHub meta changes
2022-02-01 13:15:40 -05:00
Satoe Imaishi
ac540d3d3f Remove tower-setup script - no longer used 2022-02-01 12:51:02 -05:00
Rebeccah Hunter
848ddc5f3e Merge pull request #10912 from rh-dluong/add_org_alias_to_org_mapping
Add organization_alias to Org Mapping as intended
2022-02-01 11:44:48 -05:00
Marliana Lara
30d1d63813 Add wf node list item info popover (#11587) 2022-02-01 11:10:24 -05:00
dluong
9781a9094f Added functionality so that a user can add an organization alias to the org mapping, so the user doesn't have to match the SAML attr exactly to the org name 2022-02-01 09:46:37 -05:00
Kersom
ab3de5898d Merge pull request #11646 from jainnikhil30/fix_jobs_id
add job id to the jobs details page
2022-02-01 08:45:51 -05:00
Nikhil Jain
7ff8a3764b add job id to the jobs details page 2022-02-01 10:34:02 +05:30
Tiago Góes
32d6d746b3 Merge pull request #11638 from jakemcdermott/fix-prompted-inventory-role-level
Only display usable inventories for launch prompt
2022-01-31 17:48:28 -03:00
Shane McDonald
ecf9a0827d Merge pull request #11618 from fosterseth/ps_in_dev_image
Install ps in dev image
2022-01-31 12:42:59 -05:00
John Westcott IV
a9a7fac308 Removing the Installer option in issues and pr templates 2022-01-31 10:56:59 -05:00
Alan Rominger
54b5884943 Merge pull request #11642 from AlanCoding/new_black_rule
Fix newly-added black rules
2022-01-31 10:01:50 -05:00
John Westcott IV
1fb38137dc Adding Collection and Installer category to issues/prs 2022-01-30 14:01:25 -05:00
John Westcott IV
2d6192db75 Adding triage label to any new issue 2022-01-30 13:59:37 -05:00
Jeff Bradberry
9ecceb4a1e Merge pull request #11639 from jbradberry/fix-updater-script
Deal properly with comments in requirements_git.txt
2022-01-30 10:16:22 -05:00
Alan Rominger
6b25fcaa80 Fix newly-added black rules 2022-01-29 23:17:58 -05:00
Jeff Bradberry
c5c83a4240 Deal properly with comments in requirements_git.txt
The updater.sh script was expecting that _every_ line in this file was
a repo reference.
2022-01-28 17:30:42 -05:00
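The fix described above amounts to skipping comment and blank lines instead of treating every line as a repo reference. The real change is in the updater.sh shell script; this is just a minimal Python sketch of the same filter, with made-up file contents:

```python
def repo_references(lines):
    """Yield only lines that look like repo references, skipping
    blank lines and '#' comments."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        yield line

# Hypothetical requirements_git.txt contents for illustration.
content = """\
# pinned until upstream fixes the build
git+https://github.com/ansible/ansible-runner.git@devel

git+https://github.com/ansible/receptor.git@devel
"""
print(list(repo_references(content.splitlines())))
```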
Jake McDermott
5e0eb5ab97 Only display usable inventories for launch prompt 2022-01-28 16:13:19 -05:00
Alan Rominger
2de5ffc8d9 Merge pull request #11627 from AlanCoding/fast_heartbeat
Prevent duplicate query in local health check
2022-01-28 13:19:56 -05:00
Elijah DeLee
3b2fe39a0a update another part of minikube dev env docs
vars in ansible/instantiate-awx-deployment.yml in the awx-operator repo appear to have been updated; when we used the `tower_...` vars, they did not apply
2022-01-27 23:31:20 -05:00
Alan Rominger
285ff080d0 Prevent duplicate query in local health check 2022-01-27 15:27:07 -05:00
Jeff Bradberry
627bde9e9e Merge pull request #11614 from jbradberry/register_peers_warn_2cycles
Only do a warning on 2-cycles for the register_peers command
2022-01-27 10:25:19 -05:00
Shane McDonald
ef7d5e6004 Merge pull request #11621 from ansible/update-minikube-dev-env-docs
Update minikube dev environment docs
2022-01-27 09:56:50 -05:00
Elijah DeLee
598c8a1c4d Update minikube docs
Replace reference to a non-existent playbook with current directions from awx-operator
Also add some tips about how to interact with the deployment
2022-01-27 08:37:14 -05:00
Seth Foster
b3c20ee0ae Install ps in dev image 2022-01-26 18:12:52 -05:00
Alan Rominger
cd8d382038 Fix error on timeout with non-job types 2022-01-26 17:00:59 -05:00
Shane McDonald
b678d61318 Merge pull request #11569 from zjzh/devel
Update ad_hoc_commands.py
2022-01-26 16:51:30 -05:00
Brian Coca
43c8231f7d fix deprecated indentation and type (#11599)
* fix deprecated indentation and type

This was breaking docs build for any plugins that used this fragment

fixes #10776
2022-01-26 16:10:02 -05:00
Shane McDonald
db401e0daa Merge pull request #11616 from shanemcd/hostname
Install hostname in dev image
2022-01-26 15:04:07 -05:00
Shane McDonald
675d4c5f2b Install hostname in dev image 2022-01-26 14:39:57 -05:00
Jeff Bradberry
fdbf3ed279 Only do a warning on 2-cycles for the register_peers command
It has no way of knowing whether a later command will fix the
situation, and this will come up in the installer.  Let's just trust
the pre-flight checks.
2022-01-26 11:50:57 -05:00
Shane McDonald
5660f9ac59 Merge pull request #11514 from shanemcd/python39
Upgrade to Python 3.9
2022-01-26 10:59:14 -05:00
Alex Corey
546e63aa4c Merge pull request #11581 from AlexSCorey/UpdateReleaseNotes
Adds more detail to the AWX release notes
2022-01-26 10:43:52 -05:00
Alex Corey
ddbd143793 Adds more detail to the AWX release notes 2022-01-26 09:52:40 -05:00
Shane McDonald
35ba321546 Unpin virtualenv version 2022-01-25 17:41:38 -05:00
Shane McDonald
2fe7fe30f8 Remove epel
This doesnt seem to be needed anymore
2022-01-25 17:39:42 -05:00
Alan Rominger
8d4d1d594b Merge pull request #11608 from AlanCoding/mount_awx_devel
Mount awx_devel in execution nodes for developer utility
2022-01-25 16:42:56 -05:00
Alan Rominger
c86fafbd7e Mount awx_devel in execution nodes for developer utility 2022-01-25 12:28:26 -05:00
Jeff Bradberry
709c439afc Merge pull request #11591 from ansible/enable-hop-nodes-endpoints
Turn off the filtering of hop nodes from the Instance endpoints
2022-01-25 12:03:23 -05:00
Sarah Akus
4cdc88e4bb Merge pull request #11534 from marshmalien/7678-inv-sync-link
Link from sync status icon to prefiltered list of inventory source sync jobs
2022-01-25 12:03:09 -05:00
Jeff Bradberry
7c550a76a5 Make sure to filter out control-plane nodes in inspect_execution_nodes
Also, make sure that the cluster host doesn't get marked as lost by
this machinery.
2022-01-25 11:06:20 -05:00
Marcelo Moreira de Mello
cfabbcaaf6 Merge pull request #11602 from ansible/avoid_project_updates_on_create_preload_data
Avoid Project.objects.get_or_create() in create_preload_data
2022-01-24 18:20:29 -05:00
Marcelo Moreira de Mello
7ae6286152 Avoid Project.objects.get_or_create() in create_preload_data
The Django ORM method get_or_create() does not call save() directly;
it calls create() [1].

The create method ignores the skip_update=True option, which then
will trigger a project update, however the EE was not yet created
in the database.

To avoid this problem, we just check the existence of the default
project and create it with save(skip_update=True) manually.
2022-01-24 17:59:29 -05:00
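A plain-Python sketch of the workaround, with a fake Project model standing in for the Django one (the real code uses the ORM, but the control flow is the same idea): checking existence and saving with `skip_update=True` avoids the project update that `get_or_create()` → `create()` would trigger.

```python
class Project:
    """Toy stand-in for the Django model, for illustration only."""
    _db = {}

    def __init__(self, name):
        self.name = name

    def save(self, skip_update=False):
        Project._db[self.name] = self
        if not skip_update:
            # This is the side effect we must avoid: the EE row the
            # update needs does not exist in the database yet.
            self.start_project_update()

    def start_project_update(self):
        raise RuntimeError("EE not in database yet")

    @classmethod
    def exists(cls, name):
        return name in cls._db

# Instead of get_or_create(), which would call create() and trigger
# the update, check existence and save manually with skip_update=True:
if not Project.exists("Demo Project"):
    Project("Demo Project").save(skip_update=True)

assert Project.exists("Demo Project")
```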
Jeff Bradberry
fd9c28c960 Adjust register_queue command to not allow hop nodes to be added 2022-01-24 17:40:55 -05:00
Jeff Bradberry
fa9ee96f7f Adjust the list_instances command to show hop nodes
with appropriate attributes removed or added.
2022-01-24 17:22:12 -05:00
Jeff Bradberry
334c33ca07 Handle receptorctl advertisements for hop nodes
counting it towards their heartbeat.  Also, leave off the link to the
health check endpoint from hop node Instances.
2022-01-24 16:51:45 -05:00
Keith Grant
85cc67fb4e Update status icons (#11561)
* update StatusLabels on job detail

* change StatusIcon to use PF circle icons

* change status icon to status label on host event modal

* update status label on wf job output

* update tests for status label changes

* fix default status icon color
2022-01-24 14:01:02 -05:00
Shane McDonald
af9eb7c374 Update timezone test 2022-01-24 12:21:28 -05:00
Shane McDonald
44968cc01e Upgrade to Python 3.9 2022-01-24 12:21:20 -05:00
Shane McDonald
af69b25eaa Merge pull request #11332 from shanemcd/bump-deps
Security-related updates for some Python dependencies.
2022-01-24 12:13:53 -05:00
Shane McDonald
eb33b95083 Merge pull request #11548 from shanemcd/revert-11428
Revert "Make awx-python script available in k8s app images"
2022-01-24 12:10:01 -05:00
Marcelo Moreira de Mello
aa9124e072 Merge pull request #11566 from ansible/expose_isolate_path_podman_O
Support user customization of EE mount options and mount paths
2022-01-21 22:41:23 -05:00
Marcelo Moreira de Mello
c086fad945 Added verbosity to molecule logs 2022-01-21 21:30:49 -05:00
Marcelo Moreira de Mello
0fef88c358 Support user customization of container mount options and mount paths 2022-01-21 17:12:32 -05:00
Jeff Bradberry
56f8f8d3f4 Turn off the filtering of hop nodes from the Instance endpoints
except for the health check.
2022-01-21 15:19:59 -05:00
John Westcott IV
5bced09fc5 Handling different types of response.data (#11576) 2022-01-21 15:16:09 -05:00
Jake McDermott
b4e9ff7ce0 Merge pull request #11573 from nixocio/ui_rename_files
Rename remaining .jsx files to .js
2022-01-21 10:55:06 -05:00
Alex Corey
208cbabb31 Merge pull request #11580 from jakemcdermott/readme-update-templates-2
Update ui dev readme
2022-01-21 10:50:01 -05:00
Jake McDermott
2fb5cfd55d Update ui dev readme 2022-01-21 10:31:35 -05:00
Jake McDermott
582036ba45 Merge pull request #11579 from jakemcdermott/readme-update-templates
Update ui dev readme, templates
2022-01-21 10:12:50 -05:00
Jake McDermott
e06f9f5438 Update ui dev readme, templates 2022-01-21 09:55:54 -05:00
nixocio
461876da93 Rename remaining .jsx files to .js
Rename remaining .jsx files to .js
2022-01-20 14:17:32 -05:00
zzj
16d39bb72b Update ad_hoc_commands.py
refactor code with a set comprehension, which is more concise and efficient
2022-01-20 18:50:33 +08:00
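For illustration, this is the kind of refactor the commit describes (the data here is made up, not from ad_hoc_commands.py): a set comprehension replaces an explicit loop that builds a set.

```python
hosts = ["web1", "web2", "web1", "db1"]

# Before: explicit loop building a set.
unique_loop = set()
for h in hosts:
    unique_loop.add(h.upper())

# After: set comprehension, more concise and idiomatic.
unique = {h.upper() for h in hosts}

assert unique == unique_loop == {"WEB1", "WEB2", "DB1"}
```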
Shane McDonald
9d636cad29 Revert "Make awx-python script available in k8s app images"
This reverts commit 88bbd43314.
2022-01-15 10:38:50 -05:00
Marliana Lara
11cc7e37e1 Add prefiltered link to inventory source sync jobs 2022-01-13 11:48:40 -05:00
Shane McDonald
aad150cf1d Pin rsa package to latest version 2021-11-16 09:02:11 +00:00
Shane McDonald
39370f1eab Security-related updates for some Python dependencies. 2021-11-14 08:45:49 +00:00
350 changed files with 30441 additions and 35879 deletions


@@ -16,7 +16,7 @@ https://www.ansible.com/security
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- API
- UI
- Installer
- Collection
##### SUMMARY
<!-- Briefly describe the problem. -->


@@ -1,26 +1,24 @@
---
name: Bug Report
description: Create a report to help us improve
labels:
- bug
body:
- type: markdown
attributes:
value: |
Issues are for **concrete, actionable bugs and feature requests** only. For debugging help or technical support, please use:
- The #ansible-awx channel on irc.libera.chat
- https://groups.google.com/forum/#!forum/awx-project
- The awx project mailing list, https://groups.google.com/forum/#!forum/awx-project
- type: checkboxes
id: terms
attributes:
label: Please confirm the following
options:
- label: I agree to follow this project's [code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
required: true
- label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
required: true
- label: I understand that AWX is open source software provided for free and that I am not entitled to status updates or other assurances.
- label: I understand that AWX is open source software provided for free and that I might not receive a timely response.
required: true
- type: textarea
@@ -39,6 +37,15 @@ body:
validations:
required: true
- type: checkboxes
id: components
attributes:
label: Select the relevant components
options:
- label: UI
- label: API
- label: Docs
- type: dropdown
id: awx-install-method
attributes:


@@ -25,6 +25,7 @@ the change does.
<!--- Name of the module/plugin/module/task -->
- API
- UI
- Collection
##### AWX VERSION
<!--- Paste verbatim output from `make VERSION` between quotes below -->

.github/issue_labeler.yml vendored Normal file

@@ -0,0 +1,12 @@
needs_triage:
- '.*'
"type:bug":
- "Please confirm the following"
"type:enhancement":
- "Feature Idea"
"component:ui":
- "\\[X\\] UI"
"component:api":
- "\\[X\\] API"
"component:docs":
- "\\[X\\] Docs"

.github/pr_labeler.yml vendored Normal file

@@ -0,0 +1,14 @@
"component:api":
- any: ['awx/**/*', '!awx/ui/*']
"component:ui":
- any: ['awx/ui/**/*']
"component:docs":
- any: ['docs/**/*']
"component:cli":
- any: ['awxkit/**/*']
"component:collection":
- any: ['awx_collection/**/*']


@@ -5,14 +5,48 @@ env:
on:
pull_request:
jobs:
api-test:
common_tests:
name: ${{ matrix.tests.name }}
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
strategy:
fail-fast: false
matrix:
tests:
- name: api-test
command: /start_tests.sh
label: Run API Tests
- name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters
label: Run API Linters
- name: api-swagger
command: /start_tests.sh swagger
label: Generate API Reference
- name: awx-collection
command: /start_tests.sh test_collection_all
label: Run Collection Tests
- name: api-schema
label: Check API Schema
command: /start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}
- name: ui-lint
label: Run UI Linters
command: make ui-lint
- name: ui-test
label: Run UI Tests
command: make ui-test
steps:
- uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
@@ -25,154 +59,11 @@ jobs:
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Run API Tests
- name: ${{ matrix.tests.label }}
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} /start_tests.sh
api-lint:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} ${{ matrix.tests.command }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Run API Linters
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} /var/lib/awx/venv/awx/bin/tox -e linters
api-swagger:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Generate API Reference
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} /start_tests.sh swagger
awx-collection:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Run Collection Tests
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} /start_tests.sh test_collection_all
api-schema:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Check API Schema
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} /start_tests.sh detect-schema-change
ui-lint:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Run UI Linters
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} make ui-lint
ui-test:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v2
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Run UI Tests
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} make ui-test
awx-operator:
runs-on: ubuntu-latest
steps:
@@ -207,7 +98,7 @@ jobs:
ansible-galaxy collection install -r molecule/requirements.yml
sudo rm -f $(which kustomize)
make kustomize
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule test -s kind
KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind
env:
AWX_TEST_IMAGE: awx
AWX_TEST_VERSION: ci


@@ -13,6 +13,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin


@@ -18,6 +18,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Install system deps
run: sudo apt-get install -y gettext

.github/workflows/label_issue.yml

@@ -0,0 +1,22 @@
name: Label Issue
on:
issues:
types:
- opened
- reopened
- edited
jobs:
triage:
runs-on: ubuntu-latest
name: Label Issue
steps:
- name: Label Issue
uses: github/issue-labeler@v2.4.1
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
not-before: 2021-12-07T07:00:00Z
configuration-path: .github/issue_labeler.yml
enable-versioned-regex: 0

.github/workflows/label_pr.yml

@@ -0,0 +1,20 @@
name: Label PR
on:
pull_request_target:
types:
- opened
- reopened
- synchronize
jobs:
triage:
runs-on: ubuntu-latest
name: Label PR
steps:
- name: Label PR
uses: actions/labeler@v3
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr_labeler.yml


@@ -43,6 +43,14 @@ jobs:
with:
path: awx
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Checkout awx-logos
uses: actions/checkout@v2
with:


@@ -14,6 +14,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
@@ -39,6 +47,6 @@ jobs:
run: |
ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
ansible localhost -c local -m aws_s3 \
-a 'src=${{ github.workspace }}/schema.json bucket=awx-public-ci-files object=schema.json mode=put permission=public-read'
-a "src=${{ github.workspace }}/schema.json bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=put permission=public-read"


@@ -1,5 +1,4 @@
PYTHON ?= python3.8
PYTHON_VERSION = $(shell $(PYTHON) -c "from distutils.sysconfig import get_python_version; print(get_python_version())")
PYTHON ?= python3.9
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
@@ -44,7 +43,7 @@ I18N_FLAG_FILE = .i18n_built
receiver test test_unit test_coverage coverage_html \
dev_build release_build sdist \
ui-release ui-devel \
VERSION docker-compose-sources \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit
clean-tmp:
@@ -266,16 +265,16 @@ api-lint:
awx-link:
[ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/setup.py egg_info_dev
cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/python$(PYTHON_VERSION)/site-packages/awx.egg-link
cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/$(PYTHON)/site-packages/awx.egg-link
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
PYTEST_ARGS ?= -n auto
# Run all API unit tests.
test:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider -n auto $(TEST_DIRS)
PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider $(PYTEST_ARGS) $(TEST_DIRS)
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
@@ -367,7 +366,7 @@ clean-ui:
rm -rf $(UI_BUILD_FLAG_FILE)
awx/ui/node_modules:
NODE_OPTIONS=--max-old-space-size=4096 $(NPM_BIN) --prefix awx/ui --loglevel warn ci
NODE_OPTIONS=--max-old-space-size=6144 $(NPM_BIN) --prefix awx/ui --loglevel warn ci
$(UI_BUILD_FLAG_FILE): awx/ui/node_modules
$(PYTHON) tools/scripts/compilemessages.py
@@ -471,8 +470,9 @@ docker-compose-runtest: awx/projects docker-compose-sources
docker-compose-build-swagger: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
SCHEMA_DIFF_BASE_BRANCH ?= devel
detect-schema-change: genschema
curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
# Ignore differences in whitespace with -b
diff -u -b reference-schema.json schema.json
@@ -530,6 +530,9 @@ psql-container:
VERSION:
@echo "awx: $(VERSION)"
PYTHON_VERSION:
@echo "$(PYTHON)" | sed 's:python::'
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml -e receptor_image=$(RECEPTOR_IMAGE)
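The new `PYTHON_VERSION` target derives the version string from the `$(PYTHON)` variable name itself, instead of shelling out to the interpreter via `distutils` as the removed line did. The `sed` expression simply deletes the literal `python` substring:

```shell
# Equivalent of the Makefile target: strip the "python" prefix from $(PYTHON).
PYTHON="python3.9"
echo "$PYTHON" | sed 's:python::'   # prints "3.9"
```

This also removes the dependency on `distutils.sysconfig`, which is deprecated in newer Pythons.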


@@ -214,8 +214,15 @@ class APIView(views.APIView):
'user_name': request.user,
'url_path': request.path,
'remote_addr': request.META.get('REMOTE_ADDR', None),
'error': response.data.get('error', response.status_text),
}
if type(response.data) is dict:
msg_data['error'] = response.data.get('error', response.status_text)
elif type(response.data) is list:
msg_data['error'] = ", ".join(list(map(lambda x: x.get('error', response.status_text), response.data)))
else:
msg_data['error'] = response.status_text
try:
status_msg = getattr(settings, 'API_400_ERROR_LOG_FORMAT').format(**msg_data)
except Exception as e:


@@ -379,19 +379,22 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
def _get_related(self, obj):
return {} if obj is None else self.get_related(obj)
def _generate_named_url(self, url_path, obj, node):
url_units = url_path.split('/')
def _generate_friendly_id(self, obj, node):
reset_counters()
named_url = node.generate_named_url(obj)
url_units[4] = named_url
return '/'.join(url_units)
return node.generate_named_url(obj)
def get_related(self, obj):
res = OrderedDict()
view = self.context.get('view', None)
if view and (hasattr(view, 'retrieve') or view.request.method == 'POST') and type(obj) in settings.NAMED_URL_GRAPH:
original_url = self.get_url(obj)
res['named_url'] = self._generate_named_url(original_url, obj, settings.NAMED_URL_GRAPH[type(obj)])
original_path = self.get_url(obj)
path_components = original_path.lstrip('/').rstrip('/').split('/')
friendly_id = self._generate_friendly_id(obj, settings.NAMED_URL_GRAPH[type(obj)])
path_components[-1] = friendly_id
new_path = '/' + '/'.join(path_components) + '/'
res['named_url'] = new_path
if getattr(obj, 'created_by', None):
res['created_by'] = self.reverse('api:user_detail', kwargs={'pk': obj.created_by.pk})
if getattr(obj, 'modified_by', None):
@@ -862,7 +865,7 @@ class UnifiedJobSerializer(BaseSerializer):
if 'elapsed' in ret:
if obj and obj.pk and obj.started and not obj.finished:
td = now() - obj.started
ret['elapsed'] = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10 ** 6) / (10 ** 6 * 1.0)
ret['elapsed'] = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / (10**6 * 1.0)
ret['elapsed'] = float(ret['elapsed'])
# Because this string is saved in the db in the source language,
# it must be marked for translation after it is pulled from the db, not when set
@@ -1640,7 +1643,25 @@ class BaseSerializerWithVariables(BaseSerializer):
return vars_validate_or_raise(value)
class InventorySerializer(BaseSerializerWithVariables):
class LabelsListMixin(object):
def _summary_field_labels(self, obj):
label_list = [{'id': x.id, 'name': x.name} for x in obj.labels.all()[:10]]
if has_model_field_prefetched(obj, 'labels'):
label_ct = len(obj.labels.all())
else:
if len(label_list) < 10:
label_ct = len(label_list)
else:
label_ct = obj.labels.count()
return {'count': label_ct, 'results': label_list}
def get_summary_fields(self, obj):
res = super(LabelsListMixin, self).get_summary_fields(obj)
res['labels'] = self._summary_field_labels(obj)
return res
class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
show_capabilities = ['edit', 'delete', 'adhoc', 'copy']
capabilities_prefetch = ['admin', 'adhoc', {'copy': 'organization.inventory_admin'}]
@@ -1681,6 +1702,7 @@ class InventorySerializer(BaseSerializerWithVariables):
object_roles=self.reverse('api:inventory_object_roles_list', kwargs={'pk': obj.pk}),
instance_groups=self.reverse('api:inventory_instance_groups_list', kwargs={'pk': obj.pk}),
copy=self.reverse('api:inventory_copy', kwargs={'pk': obj.pk}),
labels=self.reverse('api:inventory_label_list', kwargs={'pk': obj.pk}),
)
)
if obj.organization:
@@ -2750,24 +2772,6 @@ class OrganizationCredentialSerializerCreate(CredentialSerializerCreate):
fields = ('*', '-user', '-team')
class LabelsListMixin(object):
def _summary_field_labels(self, obj):
label_list = [{'id': x.id, 'name': x.name} for x in obj.labels.all()[:10]]
if has_model_field_prefetched(obj, 'labels'):
label_ct = len(obj.labels.all())
else:
if len(label_list) < 10:
label_ct = len(label_list)
else:
label_ct = obj.labels.count()
return {'count': label_ct, 'results': label_list}
def get_summary_fields(self, obj):
res = super(LabelsListMixin, self).get_summary_fields(obj)
res['labels'] = self._summary_field_labels(obj)
return res
class JobOptionsSerializer(LabelsListMixin, BaseSerializer):
class Meta:
fields = (
@@ -4787,7 +4791,7 @@ class InstanceNodeSerializer(BaseSerializer):
def get_node_state(self, obj):
if not obj.enabled:
return "disabled"
return "unhealthy" if obj.errors else "healthy"
return "error" if obj.errors else "healthy"
class InstanceSerializer(BaseSerializer):
@@ -4833,7 +4837,8 @@ class InstanceSerializer(BaseSerializer):
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
res['health_check'] = self.reverse('api:instance_health_check', kwargs={'pk': obj.pk})
if obj.node_type != 'hop':
res['health_check'] = self.reverse('api:instance_health_check', kwargs={'pk': obj.pk})
return res
def get_consumed_capacity(self, obj):


@@ -20,6 +20,7 @@ from awx.api.views import (
InventoryAccessList,
InventoryObjectRolesList,
InventoryInstanceGroupsList,
InventoryLabelList,
InventoryCopy,
)
@@ -41,6 +42,7 @@ urls = [
url(r'^(?P<pk>[0-9]+)/access_list/$', InventoryAccessList.as_view(), name='inventory_access_list'),
url(r'^(?P<pk>[0-9]+)/object_roles/$', InventoryObjectRolesList.as_view(), name='inventory_object_roles_list'),
url(r'^(?P<pk>[0-9]+)/instance_groups/$', InventoryInstanceGroupsList.as_view(), name='inventory_instance_groups_list'),
url(r'^(?P<pk>[0-9]+)/labels/$', InventoryLabelList.as_view(), name='inventory_label_list'),
url(r'^(?P<pk>[0-9]+)/copy/$', InventoryCopy.as_view(), name='inventory_copy'),
]


@@ -157,6 +157,7 @@ from awx.api.views.inventory import ( # noqa
InventoryAccessList,
InventoryObjectRolesList,
InventoryJobTemplateList,
InventoryLabelList,
InventoryCopy,
)
from awx.api.views.mesh_visualizer import MeshVisualizer # noqa
@@ -365,14 +366,11 @@ class InstanceList(ListAPIView):
serializer_class = serializers.InstanceSerializer
search_fields = ('hostname',)
def get_queryset(self):
return super().get_queryset().exclude(node_type='hop')
class InstanceDetail(RetrieveUpdateAPIView):
name = _("Instance Detail")
queryset = models.Instance.objects.exclude(node_type='hop')
model = models.Instance
serializer_class = serializers.InstanceSerializer
def update(self, request, *args, **kwargs):
@@ -418,10 +416,14 @@ class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAtta
class InstanceHealthCheck(GenericAPIView):
name = _('Instance Health Check')
queryset = models.Instance.objects.exclude(node_type='hop')
model = models.Instance
serializer_class = serializers.InstanceHealthCheckSerializer
permission_classes = (IsSystemAdminOrAuditor,)
def get_queryset(self):
# FIXME: For now, we don't have a good way of checking the health of a hop node.
return super().get_queryset().exclude(node_type='hop')
def get(self, request, *args, **kwargs):
obj = self.get_object()
data = self.get_serializer(data=request.data).to_representation(obj)


@@ -16,17 +16,21 @@ from rest_framework.response import Response
from rest_framework import status
# AWX
from awx.main.models import (
ActivityStream,
Inventory,
JobTemplate,
Role,
User,
InstanceGroup,
InventoryUpdateEvent,
InventoryUpdate,
from awx.main.models import ActivityStream, Inventory, JobTemplate, Role, User, InstanceGroup, InventoryUpdateEvent, InventoryUpdate
from awx.main.models.label import Label
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
SubListAPIView,
SubListAttachDetachAPIView,
ResourceAccessList,
CopyAPIView,
DeleteLastUnattachLabelMixin,
SubListCreateAttachDetachAPIView,
)
from awx.api.generics import ListCreateAPIView, RetrieveUpdateDestroyAPIView, SubListAPIView, SubListAttachDetachAPIView, ResourceAccessList, CopyAPIView
from awx.api.serializers import (
InventorySerializer,
@@ -35,6 +39,7 @@ from awx.api.serializers import (
InstanceGroupSerializer,
InventoryUpdateEventSerializer,
JobTemplateSerializer,
LabelSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, ControlledByScmMixin
@@ -152,6 +157,30 @@ class InventoryJobTemplateList(SubListAPIView):
return qs.filter(inventory=parent)
class InventoryLabelList(DeleteLastUnattachLabelMixin, SubListCreateAttachDetachAPIView, SubListAPIView):
model = Label
serializer_class = LabelSerializer
parent_model = Inventory
relationship = 'labels'
def post(self, request, *args, **kwargs):
# If a label already exists in the database, attach it instead of erroring out
# that it already exists
if 'id' not in request.data and 'name' in request.data and 'organization' in request.data:
existing = Label.objects.filter(name=request.data['name'], organization_id=request.data['organization'])
if existing.exists():
existing = existing[0]
request.data['id'] = existing.id
del request.data['name']
del request.data['organization']
if Label.objects.filter(inventory_labels=self.kwargs['pk']).count() > 100:
return Response(
dict(msg=_('Maximum number of labels for {} reached.'.format(self.parent_model._meta.verbose_name_raw))), status=status.HTTP_400_BAD_REQUEST
)
return super(InventoryLabelList, self).post(request, *args, **kwargs)
class InventoryCopy(CopyAPIView):
model = Inventory


@@ -13,6 +13,9 @@ from django.utils.translation import ugettext_lazy as _
from rest_framework.fields import BooleanField, CharField, ChoiceField, DictField, DateTimeField, EmailField, IntegerField, ListField, NullBooleanField # noqa
from rest_framework.serializers import PrimaryKeyRelatedField # noqa
# AWX
from awx.main.constants import CONTAINER_VOLUMES_MOUNT_TYPES, MAX_ISOLATED_PATH_COLON_DELIMITER
logger = logging.getLogger('awx.conf.fields')
# Use DRF fields to convert/validate settings:
@@ -109,6 +112,49 @@ class StringListPathField(StringListField):
self.fail('type_error', input_type=type(paths))
class StringListIsolatedPathField(StringListField):
# Valid formats
# '/etc/pki/ca-trust'
# '/etc/pki/ca-trust:/etc/pki/ca-trust'
# '/etc/pki/ca-trust:/etc/pki/ca-trust:O'
default_error_messages = {
'type_error': _('Expected list of strings but got {input_type} instead.'),
'path_error': _('{path} is not a valid path choice. You must provide an absolute path.'),
'mount_error': _('{scontext} is not a valid mount option. Allowed types are {mount_types}'),
'syntax_error': _('Invalid syntax. A string HOST-DIR[:CONTAINER-DIR[:OPTIONS]] is expected but got {path}.'),
}
def to_internal_value(self, paths):
if isinstance(paths, (list, tuple)):
for p in paths:
if not isinstance(p, str):
self.fail('type_error', input_type=type(p))
if not p.startswith('/'):
self.fail('path_error', path=p)
if p.count(':'):
if p.count(':') > MAX_ISOLATED_PATH_COLON_DELIMITER:
self.fail('syntax_error', path=p)
try:
src, dest, scontext = p.split(':')
except ValueError:
scontext = 'z'
src, dest = p.split(':')
finally:
for sp in [src, dest]:
if not len(sp):
self.fail('syntax_error', path=sp)
if not sp.startswith('/'):
self.fail('path_error', path=sp)
if scontext not in CONTAINER_VOLUMES_MOUNT_TYPES:
self.fail('mount_error', scontext=scontext, mount_types=CONTAINER_VOLUMES_MOUNT_TYPES)
return super(StringListIsolatedPathField, self).to_internal_value(sorted(paths))
else:
self.fail('type_error', input_type=type(paths))
class URLField(CharField):
# these lines set up a custom regex that allow numbers in the
# top-level domain
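The `StringListIsolatedPathField` hunk above accepts `HOST-DIR[:CONTAINER-DIR[:OPTIONS]]` with at most two colons and defaults the mount option to `z` when it is omitted. A standalone sketch of the same split logic — this is an illustration, not the actual AWX field class, and how a bare `HOST-DIR` (no colon) is eventually mounted is an assumption here:

```python
# Sketch of the HOST-DIR[:CONTAINER-DIR[:OPTIONS]] parse shown in the diff above.
CONTAINER_VOLUMES_MOUNT_TYPES = ['z', 'O']
MAX_ISOLATED_PATH_COLON_DELIMITER = 2

def parse_isolated_path(p: str):
    """Return (src, dest, option) for an isolated-path string."""
    if not p.startswith('/'):
        raise ValueError(f'{p} is not an absolute path')
    if p.count(':') > MAX_ISOLATED_PATH_COLON_DELIMITER:
        raise ValueError(f'too many colons in {p}')
    if ':' not in p:
        # Assumption: a bare path mounts to the same location with the default option.
        return (p, p, 'z')
    try:
        src, dest, scontext = p.split(':')
    except ValueError:
        # Only one colon: the option was omitted, default to 'z' as the field does.
        src, dest = p.split(':')
        scontext = 'z'
    for sp in (src, dest):
        if not sp or not sp.startswith('/'):
            raise ValueError(f'{sp!r} is not an absolute path')
    if scontext not in CONTAINER_VOLUMES_MOUNT_TYPES:
        raise ValueError(f'{scontext} is not an allowed mount option')
    return (src, dest, scontext)
```

The field itself additionally sorts the list before storing it (`sorted(paths)`), so the setting is order-stable across saves.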


@@ -853,7 +853,12 @@ class InventoryAccess(BaseAccess):
"""
model = Inventory
prefetch_related = ('created_by', 'modified_by', 'organization')
prefetch_related = (
'created_by',
'modified_by',
'organization',
Prefetch('labels', queryset=Label.objects.all().order_by('name')),
)
def filtered_queryset(self, allowed=None, ad_hoc=None):
return self.model.accessible_objects(self.user, 'read_role')


@@ -211,7 +211,7 @@ def projects_by_scm_type(since, **kwargs):
return counts
@register('instance_info', '1.1', description=_('Cluster topology and capacity'))
@register('instance_info', '1.2', description=_('Cluster topology and capacity'))
def instance_info(since, include_hostnames=False, **kwargs):
info = {}
instances = models.Instance.objects.values_list('hostname').values(


@@ -90,7 +90,7 @@ def package(target, data, timestamp):
if isinstance(item, str):
f.add(item, arcname=f'./{name}')
else:
buf = json.dumps(item).encode('utf-8')
buf = json.dumps(item, cls=DjangoJSONEncoder).encode('utf-8')
info = tarfile.TarInfo(f'./{name}')
info.size = len(buf)
info.mtime = timestamp.timestamp()
@@ -230,7 +230,7 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
try:
last_entry = max(last_entries.get(key) or last_gather, until - timedelta(weeks=4))
results = (func(since or last_entry, collection_type=collection_type, until=until), func.__awx_analytics_version__)
json.dumps(results) # throwaway check to see if the data is json-serializable
json.dumps(results, cls=DjangoJSONEncoder) # throwaway check to see if the data is json-serializable
data[filename] = results
except Exception:
logger.exception("Could not generate metric {}".format(filename))
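The switch to `DjangoJSONEncoder` matters because the cpu field migrated to `Decimal`, which the stdlib encoder rejects outright. A minimal stand-in that reproduces just the `Decimal` handling (Django's real encoder also covers datetimes, UUIDs, and more):

```python
import json
from decimal import Decimal

class DecimalFriendlyEncoder(json.JSONEncoder):
    """Stand-in for DjangoJSONEncoder's Decimal handling: emit Decimals as strings."""
    def default(self, o):
        if isinstance(o, Decimal):
            return str(o)
        return super().default(o)

data = {'cpu': Decimal('1.5')}
# json.dumps(data) raises TypeError: Object of type Decimal is not JSON serializable
print(json.dumps(data, cls=DecimalFriendlyEncoder))  # {"cpu": "1.5"}
```

This is why the "throwaway check" in `gather()` also needs `cls=DjangoJSONEncoder` — otherwise the serializability probe fails for any metric containing the new decimal cpu value.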


@@ -160,6 +160,7 @@ class Metrics:
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),


@@ -72,8 +72,8 @@ register(
'HTTP headers and meta keys to search to determine remote host '
'name or IP. Add additional items to this list, such as '
'"HTTP_X_FORWARDED_FOR", if behind a reverse proxy. '
'See the "Proxy Support" section of the Adminstrator guide for '
'more details.'
'See the "Proxy Support" section of the AAP Installation guide '
'for more details.'
),
category=_('System'),
category_slug='system',
@@ -259,10 +259,14 @@ register(
register(
'AWX_ISOLATION_SHOW_PATHS',
field_class=fields.StringListField,
field_class=fields.StringListIsolatedPathField,
required=False,
label=_('Paths to expose to isolated jobs'),
help_text=_('List of paths that would otherwise be hidden to expose to isolated jobs. Enter one path per line.'),
help_text=_(
'List of paths that would otherwise be hidden to expose to isolated jobs. Enter one path per line. '
'Volumes will be mounted from the execution node to the container. '
'The supported format is HOST-DIR[:CONTAINER-DIR[:OPTIONS]]. '
),
category=_('Jobs'),
category_slug='jobs',
)


@@ -85,3 +85,10 @@ RECEPTOR_PENDING = 'ansible-runner-???'
# Naming pattern for AWX jobs in /tmp folder, like /tmp/awx_42_xiwm
# also update awxkit.api.pages.unified_jobs if changed
JOB_FOLDER_PREFIX = 'awx_%s_'
# :z option tells Podman that two containers share the volume content with r/w
# :O option tells Podman to mount the directory from the host as temporary storage using the overlay file system.
# see podman-run manpage for further details
# /HOST-DIR:/CONTAINER-DIR:OPTIONS
CONTAINER_VOLUMES_MOUNT_TYPES = ['z', 'O']
MAX_ISOLATED_PATH_COLON_DELIMITER = 2


@@ -22,6 +22,7 @@ import psutil
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
from awx.main.utils.common import convert_mem_str_to_bytes
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -319,7 +320,8 @@ class AutoscalePool(WorkerPool):
if self.max_workers is None:
settings_absmem = getattr(settings, 'SYSTEM_TASK_ABS_MEM', None)
if settings_absmem is not None:
total_memory_gb = int(settings_absmem)
# There are 1073741824 bytes in a gigabyte. Convert bytes to gigabytes by dividing by 2**30
total_memory_gb = convert_mem_str_to_bytes(settings_absmem) // 2**30
else:
total_memory_gb = (psutil.virtual_memory().total >> 30) + 1 # noqa: round up
# 5 workers per GB of total memory
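`convert_mem_str_to_bytes` (imported above) lets `SYSTEM_TASK_ABS_MEM` be expressed in k8s resource-request units ("2Gi", "2G", or plain bytes) instead of a bare gigabyte count. Its body is not visible in this diff; the following is an illustrative sketch under the assumption that it follows the usual Kubernetes suffix conventions:

```python
# Hypothetical sketch of a k8s-style memory parser; the real
# awx.main.utils.common.convert_mem_str_to_bytes may differ in detail.
# Binary suffixes are listed first so "2Gi" matches "Gi" before "G".
SUFFIXES = {
    'Ki': 2**10, 'Mi': 2**20, 'Gi': 2**30, 'Ti': 2**40,
    'K': 10**3, 'M': 10**6, 'G': 10**9, 'T': 10**12,
}

def convert_mem_str_to_bytes(mem: str) -> int:
    mem = str(mem).strip()
    for suffix, factor in SUFFIXES.items():
        if mem.endswith(suffix):
            return int(float(mem[:-len(suffix)]) * factor)
    return int(mem)  # no suffix: the value is already in bytes
```

With this in place, the dispatcher's `// 2**30` division recovers whole gigabytes for the 5-workers-per-GB sizing rule.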


@@ -116,19 +116,20 @@ class CallbackBrokerWorker(BaseWorker):
def flush(self, force=False):
now = tz_now()
if force or (time.time() - self.last_flush) > settings.JOB_EVENT_BUFFER_SECONDS or any([len(events) >= 1000 for events in self.buff.values()]):
bulk_events_saved = 0
singular_events_saved = 0
metrics_bulk_events_saved = 0
metrics_singular_events_saved = 0
metrics_events_batch_save_errors = 0
metrics_events_broadcast = 0
for cls, events in self.buff.items():
logger.debug(f'{cls.__name__}.objects.bulk_create({len(events)})')
for e in events:
if not e.created:
e.created = now
e.modified = now
duration_to_save = time.perf_counter()
metrics_duration_to_save = time.perf_counter()
try:
cls.objects.bulk_create(events)
bulk_events_saved += len(events)
metrics_bulk_events_saved += len(events)
except Exception:
# if an exception occurs, we should re-attempt to save the
# events one-by-one, because something in the list is
@@ -137,22 +138,24 @@ class CallbackBrokerWorker(BaseWorker):
for e in events:
try:
e.save()
singular_events_saved += 1
metrics_singular_events_saved += 1
except Exception:
logger.exception('Database Error Saving Job Event')
duration_to_save = time.perf_counter() - duration_to_save
metrics_duration_to_save = time.perf_counter() - metrics_duration_to_save
for e in events:
if not getattr(e, '_skip_websocket_message', False):
metrics_events_broadcast += 1
emit_event_detail(e)
self.buff = {}
self.last_flush = time.time()
# only update metrics if we saved events
if (bulk_events_saved + singular_events_saved) > 0:
if (metrics_bulk_events_saved + metrics_singular_events_saved) > 0:
self.subsystem_metrics.inc('callback_receiver_batch_events_errors', metrics_events_batch_save_errors)
self.subsystem_metrics.inc('callback_receiver_events_insert_db_seconds', duration_to_save)
self.subsystem_metrics.inc('callback_receiver_events_insert_db', bulk_events_saved + singular_events_saved)
self.subsystem_metrics.observe('callback_receiver_batch_events_insert_db', bulk_events_saved)
self.subsystem_metrics.inc('callback_receiver_events_in_memory', -(bulk_events_saved + singular_events_saved))
self.subsystem_metrics.inc('callback_receiver_events_insert_db_seconds', metrics_duration_to_save)
self.subsystem_metrics.inc('callback_receiver_events_insert_db', metrics_bulk_events_saved + metrics_singular_events_saved)
self.subsystem_metrics.observe('callback_receiver_batch_events_insert_db', metrics_bulk_events_saved)
self.subsystem_metrics.inc('callback_receiver_events_in_memory', -(metrics_bulk_events_saved + metrics_singular_events_saved))
self.subsystem_metrics.inc('callback_receiver_events_broadcast', metrics_events_broadcast)
if self.subsystem_metrics.should_pipe_execute() is True:
self.subsystem_metrics.pipe_execute()


@@ -25,13 +25,17 @@ class Command(BaseCommand):
if not Organization.objects.exists():
o, _ = Organization.objects.get_or_create(name='Default')
p, _ = Project.objects.get_or_create(
name='Demo Project',
scm_type='git',
scm_url='https://github.com/ansible/ansible-tower-samples',
scm_update_on_launch=True,
scm_update_cache_timeout=0,
)
# Avoid calling directly the get_or_create() to bypass project update
p = Project.objects.filter(name='Demo Project', scm_type='git').first()
if not p:
p = Project(
name='Demo Project',
scm_type='git',
scm_url='https://github.com/ansible/ansible-tower-samples',
scm_update_on_launch=True,
scm_update_cache_timeout=0,
)
p.organization = o
p.save(skip_update=True)


@@ -11,13 +11,16 @@ class Ungrouped(object):
policy_instance_percentage = None
policy_instance_minimum = None
def __init__(self):
self.qs = Instance.objects.filter(rampart_groups__isnull=True)
@property
def instances(self):
return Instance.objects.filter(rampart_groups__isnull=True).exclude(node_type='hop')
return self.qs
@property
def capacity(self):
return sum(x.capacity for x in self.instances)
return sum(x.capacity for x in self.instances.all())
class Command(BaseCommand):
@@ -29,26 +32,29 @@ class Command(BaseCommand):
groups = list(InstanceGroup.objects.all())
ungrouped = Ungrouped()
if len(ungrouped.instances):
if len(ungrouped.instances.all()):
groups.append(ungrouped)
for instance_group in groups:
fmt = '[{0.name} capacity={0.capacity}'
if instance_group.policy_instance_percentage:
fmt += ' policy={0.policy_instance_percentage}%'
if instance_group.policy_instance_minimum:
fmt += ' policy>={0.policy_instance_minimum}'
print((fmt + ']').format(instance_group))
for x in instance_group.instances.all():
for ig in groups:
policy = ''
if ig.policy_instance_percentage:
policy = f' policy={ig.policy_instance_percentage}%'
if ig.policy_instance_minimum:
policy = f' policy>={ig.policy_instance_minimum}'
print(f'[{ig.name} capacity={ig.capacity}{policy}]')
for x in ig.instances.all():
color = '\033[92m'
if x.capacity == 0:
if x.capacity == 0 and x.node_type != 'hop':
color = '\033[91m'
if x.enabled is False:
if not x.enabled:
color = '\033[90m[DISABLED] '
if no_color:
color = ''
fmt = '\t' + color + '{0.hostname} capacity={0.capacity} node_type={0.node_type} version={1}'
if x.capacity:
fmt += ' heartbeat="{0.modified:%Y-%m-%d %H:%M:%S}"'
print((fmt + '\033[0m').format(x, x.version or '?'))
print('')
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.modified:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}\033[0m')
print()


@@ -1,3 +1,5 @@
import warnings
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
@@ -45,7 +47,7 @@ class Command(BaseCommand):
peers = set(options['peers'] or options['exact'])
incoming = set(InstanceLink.objects.filter(target=nodes[options['source']]).values_list('source__hostname', flat=True))
if peers & incoming:
raise CommandError(f"Source node {options['source']} cannot link to nodes already peering to it: {peers & incoming}.")
warnings.warn(f"Source node {options['source']} should not link to nodes already peering to it: {peers & incoming}.")
if options['peers']:
missing_peers = set(options['peers']) - set(nodes)


@@ -53,14 +53,14 @@ class RegisterQueue:
def add_instances_to_group(self, ig):
changed = False
instance_list_unique = set([x.strip() for x in self.hostname_list if x])
instance_list_unique = {x for x in (x.strip() for x in self.hostname_list) if x}
instances = []
for inst_name in instance_list_unique:
instance = Instance.objects.filter(hostname=inst_name)
instance = Instance.objects.filter(hostname=inst_name).exclude(node_type='hop')
if instance.exists():
instances.append(instance[0])
else:
raise InstanceNotFound("Instance does not exist: {}".format(inst_name), changed)
raise InstanceNotFound("Instance does not exist or cannot run jobs: {}".format(inst_name), changed)
ig.instances.add(*instances)

View File

@@ -243,7 +243,13 @@ class InstanceGroupManager(models.Manager):
for t in tasks:
# TODO: dock capacity for isolated job management tasks running in queue
impact = t.task_impact
if t.status == 'waiting' or not t.execution_node:
control_groups = []
if t.controller_node:
control_groups = instance_ig_mapping.get(t.controller_node, [])
if not control_groups:
logger.warn(f"No instance group found for {t.controller_node}, capacity consumed may be inaccurate.")
if t.status == 'waiting' or (not t.execution_node and not t.is_container_group_task):
# Subtract capacity from any peer groups that share instances
if not t.instance_group:
impacted_groups = []
@@ -260,6 +266,12 @@ class InstanceGroupManager(models.Manager):
graph[group_name][f'consumed_{capacity_type}_capacity'] += impact
if breakdown:
graph[group_name]['committed_capacity'] += impact
for group_name in control_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
graph[group_name][f'consumed_control_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
if breakdown:
graph[group_name]['committed_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
elif t.status == 'running':
# Subtract capacity from all groups that contain the instance
if t.execution_node not in instance_ig_mapping:
@@ -271,6 +283,7 @@ class InstanceGroupManager(models.Manager):
impacted_groups = []
else:
impacted_groups = instance_ig_mapping[t.execution_node]
for group_name in impacted_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
@@ -279,6 +292,12 @@ class InstanceGroupManager(models.Manager):
graph[group_name][f'consumed_{capacity_type}_capacity'] += impact
if breakdown:
graph[group_name]['running_capacity'] += impact
for group_name in control_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
graph[group_name][f'consumed_control_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
if breakdown:
graph[group_name]['running_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
else:
logger.error('Programming error, %s not in ["running", "waiting"]', t.log_format)
return graph

View File

@@ -180,11 +180,7 @@ class URLModificationMiddleware(MiddlewareMixin):
return '/'.join(url_units)
def process_request(self, request):
if hasattr(request, 'environ') and 'REQUEST_URI' in request.environ:
old_path = urllib.parse.urlsplit(request.environ['REQUEST_URI']).path
old_path = old_path[request.path.find(request.path_info) :]
else:
old_path = request.path_info
old_path = request.path_info
new_path = self._convert_named_url(old_path)
if request.path_info != new_path:
request.environ['awx.named_url_rewritten'] = request.path

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.20 on 2022-01-18 16:46
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0156_capture_mesh_topology'),
]
operations = [
migrations.AddField(
model_name='inventory',
name='labels',
field=models.ManyToManyField(blank=True, help_text='Labels associated with this inventory.', related_name='inventory_labels', to='main.Label'),
),
]

View File

@@ -0,0 +1,19 @@
# Generated by Django 2.2.24 on 2022-02-14 17:37
from decimal import Decimal
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0157_inventory_labels'),
]
operations = [
migrations.AlterField(
model_name='instance',
name='cpu',
field=models.DecimalField(decimal_places=1, default=Decimal('0'), editable=False, max_digits=4),
),
]
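
The migration above changes `Instance.cpu` to a one-decimal-place `Decimal` so fractional k8s CPU requests (e.g. `500m`) can be stored. A minimal sketch of the kind of conversion this enables; the helper name and the truncation policy are assumptions for illustration, not AWX code:

```python
from decimal import Decimal, ROUND_DOWN

def k8s_cpu_to_decimal(quantity: str) -> Decimal:
    """Convert a k8s CPU quantity ('500m', '2', '1.5') to cores,
    truncated to one decimal place to fit the migrated field."""
    if quantity.endswith('m'):
        # millicores: 1000m == 1 core
        cores = Decimal(quantity[:-1]) / Decimal(1000)
    else:
        cores = Decimal(quantity)
    return cores.quantize(Decimal('0.1'), rounding=ROUND_DOWN)

print(k8s_cpu_to_decimal('500m'))  # → 0.5
```

Note that truncating to one decimal place matches `max_digits=4, decimal_places=1` on the field, so a request of `150m` would be stored as `0.1`.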

View File

@@ -160,9 +160,7 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
@property
def notification_templates(self):
all_orgs = set()
for h in self.hosts.all():
all_orgs.add(h.inventory.organization)
all_orgs = {h.inventory.organization for h in self.hosts.all()}
active_templates = dict(error=set(), success=set(), started=set())
base_notification_templates = NotificationTemplate.objects
for org in all_orgs:

View File

@@ -82,8 +82,10 @@ class Instance(HasPolicyEditsMixin, BaseModel):
modified = models.DateTimeField(auto_now=True)
# Fields defined in health check or heartbeat
version = models.CharField(max_length=120, blank=True)
cpu = models.IntegerField(
default=0,
cpu = models.DecimalField(
default=Decimal(0.0),
max_digits=4,
decimal_places=1,
editable=False,
)
memory = models.BigIntegerField(
@@ -145,7 +147,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
@property
def consumed_capacity(self):
return sum(x.task_impact for x in UnifiedJob.objects.filter(execution_node=self.hostname, status__in=('running', 'waiting')))
capacity_consumed = 0
if self.node_type in ('hybrid', 'execution'):
capacity_consumed += sum(x.task_impact for x in UnifiedJob.objects.filter(execution_node=self.hostname, status__in=('running', 'waiting')))
if self.node_type in ('hybrid', 'control'):
capacity_consumed += sum(
settings.AWX_CONTROL_NODE_TASK_IMPACT for x in UnifiedJob.objects.filter(controller_node=self.hostname, status__in=('running', 'waiting'))
)
return capacity_consumed
@property
def remaining_capacity(self):
@@ -195,7 +204,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
if ref_time is None:
ref_time = now()
grace_period = settings.CLUSTER_NODE_HEARTBEAT_PERIOD * 2
if self.node_type == 'execution':
if self.node_type in ('execution', 'hop'):
grace_period += settings.RECEPTOR_SERVICE_ADVERTISEMENT_PERIOD
return self.last_seen < ref_time - timedelta(seconds=grace_period)
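
Stripped of the model, the liveness check above is a single comparison against a grace window; a standalone sketch where the period values are assumed for illustration (the real ones come from settings):

```python
from datetime import datetime, timedelta

def node_seems_down(last_seen, node_type, ref_time,
                    heartbeat_period=60, advertisement_period=60):
    """A node is considered down if it has not been seen within twice the
    heartbeat period; execution and hop nodes get extra grace because their
    state arrives via receptor service advertisements."""
    grace = heartbeat_period * 2
    if node_type in ('execution', 'hop'):
        grace += advertisement_period
    return last_seen < ref_time - timedelta(seconds=grace)

ref = datetime(2022, 1, 1)
# 150s of silence: past the 120s grace for a control node,
# but within the 180s grace for an execution node
print(node_seems_down(ref - timedelta(seconds=150), 'control', ref))    # → True
print(node_seems_down(ref - timedelta(seconds=150), 'execution', ref))  # → False
```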
@@ -263,7 +272,11 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.mark_offline(perform_save=False, errors=errors)
update_fields.extend(['cpu_capacity', 'mem_capacity', 'capacity', 'errors'])
self.save(update_fields=update_fields)
# disabling activity stream will avoid extra queries, which is important for heartbeat actions
from awx.main.signals import disable_activity_stream
with disable_activity_stream():
self.save(update_fields=update_fields)
def local_health_check(self):
"""Only call this method on the instance that this record represents"""
@@ -341,15 +354,21 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
app_label = 'main'
@staticmethod
def fit_task_to_most_remaining_capacity_instance(task, instances):
def fit_task_to_most_remaining_capacity_instance(task, instances, impact=None, capacity_type=None, add_hybrid_control_cost=False):
impact = impact if impact else task.task_impact
capacity_type = capacity_type if capacity_type else task.capacity_type
instance_most_capacity = None
most_remaining_capacity = -1
for i in instances:
if i.node_type not in (task.capacity_type, 'hybrid'):
if i.node_type not in (capacity_type, 'hybrid'):
continue
if i.remaining_capacity >= task.task_impact and (
instance_most_capacity is None or i.remaining_capacity > instance_most_capacity.remaining_capacity
):
would_be_remaining = i.remaining_capacity - impact
# hybrid nodes _always_ control their own tasks
if add_hybrid_control_cost and i.node_type == 'hybrid':
would_be_remaining -= settings.AWX_CONTROL_NODE_TASK_IMPACT
if would_be_remaining >= 0 and (instance_most_capacity is None or would_be_remaining > most_remaining_capacity):
instance_most_capacity = i
most_remaining_capacity = would_be_remaining
return instance_most_capacity
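
Stripped of the AWX models, the selection logic above reduces to a best-fit scan over candidate instances; a standalone sketch, where the instance shape and the control-impact value are assumptions for illustration:

```python
from types import SimpleNamespace

AWX_CONTROL_NODE_TASK_IMPACT = 5  # assumed value for illustration

def fit_task(instances, impact, capacity_type, add_hybrid_control_cost=False):
    """Pick the instance that would have the most capacity left after
    taking the task; hybrid nodes also pay the control cost because
    they always control their own tasks."""
    best, best_remaining = None, -1
    for i in instances:
        if i.node_type not in (capacity_type, 'hybrid'):
            continue
        would_be_remaining = i.remaining_capacity - impact
        if add_hybrid_control_cost and i.node_type == 'hybrid':
            would_be_remaining -= AWX_CONTROL_NODE_TASK_IMPACT
        if would_be_remaining >= 0 and would_be_remaining > best_remaining:
            best, best_remaining = i, would_be_remaining
    return best

nodes = [
    SimpleNamespace(hostname='exec1', node_type='execution', remaining_capacity=20),
    SimpleNamespace(hostname='hybrid1', node_type='hybrid', remaining_capacity=22),
]
# hybrid1 pays the extra control cost (22 - 10 - 5 = 7 < 20 - 10 = 10), so exec1 wins
print(fit_task(nodes, impact=10, capacity_type='execution',
               add_hybrid_control_cost=True).hostname)  # → exec1
```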
@staticmethod

View File

@@ -170,6 +170,12 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
editable=False,
help_text=_('Flag indicating the inventory is being deleted.'),
)
labels = models.ManyToManyField(
"Label",
blank=True,
related_name='inventory_labels',
help_text=_('Labels associated with this inventory.'),
)
def get_absolute_url(self, request=None):
return reverse('api:inventory_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -9,6 +9,7 @@ from django.utils.translation import ugettext_lazy as _
from awx.api.versioning import reverse
from awx.main.models.base import CommonModelNameNotUnique
from awx.main.models.unified_jobs import UnifiedJobTemplate, UnifiedJob
from awx.main.models.inventory import Inventory
__all__ = ('Label',)
@@ -35,15 +36,14 @@ class Label(CommonModelNameNotUnique):
@staticmethod
def get_orphaned_labels():
return Label.objects.filter(organization=None, unifiedjobtemplate_labels__isnull=True)
return Label.objects.filter(organization=None, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True)
def is_detached(self):
return bool(Label.objects.filter(id=self.id, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True).count())
return Label.objects.filter(id=self.id, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True).exists()
def is_candidate_for_detach(self):
c1 = UnifiedJob.objects.filter(labels__in=[self.id]).count()
c2 = UnifiedJobTemplate.objects.filter(labels__in=[self.id]).count()
if (c1 + c2 - 1) == 0:
return True
else:
return False
c3 = Inventory.objects.filter(labels__in=[self.id]).count()
return (c1 + c2 + c3 - 1) == 0

View File

@@ -613,26 +613,6 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
def get_notification_friendly_name(self):
return "Project Update"
@property
def preferred_instance_groups(self):
'''
Project updates should pretty much always run on the control plane
however, we are not yet saying no to custom groupings within the control plane
Thus, we return custom groups and then unconditionally add the control plane
'''
if self.organization is not None:
organization_groups = [x for x in self.organization.instance_groups.all()]
else:
organization_groups = []
template_groups = [x for x in super(ProjectUpdate, self).preferred_instance_groups]
selected_groups = template_groups + organization_groups
controlplane_ig = self.control_plane_instance_group
if controlplane_ig and controlplane_ig[0] and controlplane_ig[0] not in selected_groups:
selected_groups += controlplane_ig
return selected_groups
def save(self, *args, **kwargs):
added_update_fields = []
if not self.job_tags:

View File

@@ -70,6 +70,7 @@ class TaskManager:
"""
instances = Instance.objects.filter(hostname__isnull=False, enabled=True).exclude(node_type='hop')
self.real_instances = {i.hostname: i for i in instances}
self.controlplane_ig = None
instances_partial = [
SimpleNamespace(
@@ -86,6 +87,8 @@ class TaskManager:
instances_by_hostname = {i.hostname: i for i in instances_partial}
for rampart_group in InstanceGroup.objects.prefetch_related('instances'):
if rampart_group.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME:
self.controlplane_ig = rampart_group
self.graph[rampart_group.name] = dict(
graph=DependencyGraph(),
execution_capacity=0,
@@ -283,39 +286,13 @@ class TaskManager:
task.send_notification_templates('running')
logger.debug('Transitioning %s to running status.', task.log_format)
schedule_task_manager()
elif rampart_group.is_container_group:
task.instance_group = rampart_group
if task.capacity_type == 'execution':
# find one real, non-containerized instance with capacity to
# act as the controller for k8s API interaction
try:
task.controller_node = Instance.choose_online_control_plane_node()
task.log_lifecycle("controller_node_chosen")
except IndexError:
logger.warning("No control plane nodes available to run containerized job {}".format(task.log_format))
return
else:
# project updates and system jobs don't *actually* run in pods, so
# just pick *any* non-containerized host and use it as the execution node
task.execution_node = Instance.choose_online_control_plane_node()
task.log_lifecycle("execution_node_chosen")
logger.debug('Submitting containerized {} to queue {}.'.format(task.log_format, task.execution_node))
# at this point we already have control/execution nodes selected for the following cases
else:
task.instance_group = rampart_group
task.execution_node = instance.hostname
task.log_lifecycle("execution_node_chosen")
if instance.node_type == 'execution':
try:
task.controller_node = Instance.choose_online_control_plane_node()
task.log_lifecycle("controller_node_chosen")
except IndexError:
logger.warning("No control plane nodes available to manage {}".format(task.log_format))
return
else:
# control plane nodes will manage jobs locally for performance and resilience
task.controller_node = task.execution_node
task.log_lifecycle("controller_node_chosen")
logger.debug('Submitting job {} to queue {} controlled by {}.'.format(task.log_format, task.execution_node, task.controller_node))
execution_node_msg = f' and execution node {task.execution_node}' if task.execution_node else ''
logger.debug(
f'Submitting job {task.log_format} controlled by {task.controller_node} to instance group {rampart_group.name}{execution_node_msg}.'
)
with disable_activity_stream():
task.celery_task_id = str(uuid.uuid4())
task.save()
@@ -323,6 +300,13 @@ class TaskManager:
if rampart_group is not None:
self.consume_capacity(task, rampart_group.name, instance=instance)
if task.controller_node:
self.consume_capacity(
task,
settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME,
instance=self.real_instances[task.controller_node],
impact=settings.AWX_CONTROL_NODE_TASK_IMPACT,
)
def post_commit():
if task.status != 'failed' and type(task) is not WorkflowJob:
@@ -497,9 +481,10 @@ class TaskManager:
task.job_explanation = job_explanation
tasks_to_update_job_explanation.append(task)
continue
preferred_instance_groups = task.preferred_instance_groups
found_acceptable_queue = False
preferred_instance_groups = task.preferred_instance_groups
if isinstance(task, WorkflowJob):
if task.unified_job_template_id in running_workflow_templates:
if not task.allow_simultaneous:
@@ -510,9 +495,36 @@ class TaskManager:
self.start_task(task, None, task.get_jobs_fail_chain(), None)
continue
# Determine if there is control capacity for the task
if task.capacity_type == 'control':
control_impact = task.task_impact + settings.AWX_CONTROL_NODE_TASK_IMPACT
else:
control_impact = settings.AWX_CONTROL_NODE_TASK_IMPACT
control_instance = InstanceGroup.fit_task_to_most_remaining_capacity_instance(
task, self.graph[settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]['instances'], impact=control_impact, capacity_type='control'
)
if not control_instance:
self.task_needs_capacity(task, tasks_to_update_job_explanation)
logger.debug(f"Skipping task {task.log_format} in pending, not enough capacity left on controlplane to control new tasks")
continue
task.controller_node = control_instance.hostname
# All task.capacity_type == 'control' jobs should run on control plane, no need to loop over instance groups
if task.capacity_type == 'control':
task.execution_node = control_instance.hostname
control_instance.remaining_capacity = max(0, control_instance.remaining_capacity - control_impact)
control_instance.jobs_running += 1
self.graph[settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]['graph'].add_job(task)
execution_instance = self.real_instances[control_instance.hostname]
self.start_task(task, self.controlplane_ig, task.get_jobs_fail_chain(), execution_instance)
found_acceptable_queue = True
continue
for rampart_group in preferred_instance_groups:
if task.capacity_type == 'execution' and rampart_group.is_container_group:
self.graph[rampart_group.name]['graph'].add_job(task)
if rampart_group.is_container_group:
control_instance.jobs_running += 1
self.graph[settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]['graph'].add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), None)
found_acceptable_queue = True
break
@@ -521,28 +533,32 @@ class TaskManager:
if settings.IS_K8S and task.capacity_type == 'execution':
logger.debug("Skipping group {}, task cannot run on control plane".format(rampart_group.name))
continue
remaining_capacity = self.get_remaining_capacity(rampart_group.name, capacity_type=task.capacity_type)
if task.task_impact > 0 and remaining_capacity <= 0:
logger.debug("Skipping group {}, remaining_capacity {} <= 0".format(rampart_group.name, remaining_capacity))
continue
# at this point we know the instance group is NOT a container group
# because if it was, it would have started the task and broke out of the loop.
execution_instance = InstanceGroup.fit_task_to_most_remaining_capacity_instance(
task, self.graph[rampart_group.name]['instances']
task, self.graph[rampart_group.name]['instances'], add_hybrid_control_cost=True
) or InstanceGroup.find_largest_idle_instance(self.graph[rampart_group.name]['instances'], capacity_type=task.capacity_type)
if execution_instance or rampart_group.is_container_group:
if not rampart_group.is_container_group:
execution_instance.remaining_capacity = max(0, execution_instance.remaining_capacity - task.task_impact)
execution_instance.jobs_running += 1
logger.debug(
"Starting {} in group {} instance {} (remaining_capacity={})".format(
task.log_format, rampart_group.name, execution_instance.hostname, remaining_capacity
)
)
if execution_instance:
task.execution_node = execution_instance.hostname
# If our execution instance is a hybrid, prefer to do control tasks there as well.
if execution_instance.node_type == 'hybrid':
control_instance = execution_instance
task.controller_node = execution_instance.hostname
if execution_instance:
execution_instance = self.real_instances[execution_instance.hostname]
control_instance.remaining_capacity = max(0, control_instance.remaining_capacity - settings.AWX_CONTROL_NODE_TASK_IMPACT)
task.log_lifecycle("controller_node_chosen")
if control_instance != execution_instance:
control_instance.jobs_running += 1
execution_instance.remaining_capacity = max(0, execution_instance.remaining_capacity - task.task_impact)
execution_instance.jobs_running += 1
task.log_lifecycle("execution_node_chosen")
logger.debug(
"Starting {} in group {} instance {} (remaining_capacity={})".format(
task.log_format, rampart_group.name, execution_instance.hostname, execution_instance.remaining_capacity
)
)
execution_instance = self.real_instances[execution_instance.hostname]
self.graph[rampart_group.name]['graph'].add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), execution_instance)
found_acceptable_queue = True
@@ -554,18 +570,21 @@ class TaskManager:
)
)
if not found_acceptable_queue:
task.log_lifecycle("needs_capacity")
job_explanation = gettext_noop("This job is not ready to start because there is not enough available capacity.")
if task.job_explanation != job_explanation:
if task.created < (tz_now() - self.time_delta_job_explanation):
# Many launched jobs are immediately blocked, but most blocks will resolve in a few seconds.
# Therefore we should only update the job_explanation after some time has elapsed to
# prevent excessive task saves.
task.job_explanation = job_explanation
tasks_to_update_job_explanation.append(task)
logger.debug("{} couldn't be scheduled on graph, waiting for next cycle".format(task.log_format))
self.task_needs_capacity(task, tasks_to_update_job_explanation)
UnifiedJob.objects.bulk_update(tasks_to_update_job_explanation, ['job_explanation'])
def task_needs_capacity(self, task, tasks_to_update_job_explanation):
task.log_lifecycle("needs_capacity")
job_explanation = gettext_noop("This job is not ready to start because there is not enough available capacity.")
if task.job_explanation != job_explanation:
if task.created < (tz_now() - self.time_delta_job_explanation):
# Many launched jobs are immediately blocked, but most blocks will resolve in a few seconds.
# Therefore we should only update the job_explanation after some time has elapsed to
# prevent excessive task saves.
task.job_explanation = job_explanation
tasks_to_update_job_explanation.append(task)
logger.debug("{} couldn't be scheduled on graph, waiting for next cycle".format(task.log_format))
def timeout_approval_node(self):
workflow_approvals = WorkflowApproval.objects.filter(status='pending')
now = tz_now()
@@ -600,16 +619,17 @@ class TaskManager:
def calculate_capacity_consumed(self, tasks):
self.graph = InstanceGroup.objects.capacity_values(tasks=tasks, graph=self.graph)
def consume_capacity(self, task, instance_group, instance=None):
def consume_capacity(self, task, instance_group, instance=None, impact=None):
impact = impact if impact else task.task_impact
logger.debug(
'{} consumed {} capacity units from {} with prior total of {}'.format(
task.log_format, task.task_impact, instance_group, self.graph[instance_group]['consumed_capacity']
task.log_format, impact, instance_group, self.graph[instance_group]['consumed_capacity']
)
)
self.graph[instance_group]['consumed_capacity'] += task.task_impact
self.graph[instance_group]['consumed_capacity'] += impact
for capacity_type in ('control', 'execution'):
if instance is None or instance.node_type in ('hybrid', capacity_type):
self.graph[instance_group][f'consumed_{capacity_type}_capacity'] += task.task_impact
self.graph[instance_group][f'consumed_{capacity_type}_capacity'] += impact
def get_remaining_capacity(self, instance_group, capacity_type='execution'):
return self.graph[instance_group][f'{capacity_type}_capacity'] - self.graph[instance_group][f'consumed_{capacity_type}_capacity']
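
A toy version of the bookkeeping in `consume_capacity` above, showing how a task's own impact and the separate control-plane deduction (`AWX_CONTROL_NODE_TASK_IMPACT`) land in different groups; the graph shape is simplified from the real one:

```python
AWX_CONTROL_NODE_TASK_IMPACT = 5  # assumed value for illustration

graph = {
    'controlplane': {'consumed_capacity': 0},
    'default': {'consumed_capacity': 0},
}

def consume_capacity(graph, group, impact):
    # the real method defaults impact to task.task_impact when not given
    graph[group]['consumed_capacity'] += impact

# an execution task of impact 10 runs on 'default' but is
# controlled from the control plane, which pays its own cost
consume_capacity(graph, 'default', impact=10)
consume_capacity(graph, 'controlplane', impact=AWX_CONTROL_NODE_TASK_IMPACT)
print(graph['default']['consumed_capacity'],
      graph['controlplane']['consumed_capacity'])  # → 10 5
```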

awx/main/tasks/callback.py (new file, 257 lines)
View File

@@ -0,0 +1,257 @@
import json
import time
import logging
from collections import deque
import os
import stat
# Django
from django.utils.timezone import now
from django.conf import settings
from django_guid.middleware import GuidMiddleware
# AWX
from awx.main.redact import UriCleaner
from awx.main.constants import MINIMAL_EVENTS
from awx.main.utils.update_model import update_model
from awx.main.queue import CallbackQueueDispatcher
logger = logging.getLogger('awx.main.tasks.callback')
class RunnerCallback:
event_data_key = 'job_id'
def __init__(self, model=None):
self.parent_workflow_job_id = None
self.host_map = {}
self.guid = GuidMiddleware.get_guid()
self.job_created = None
self.recent_event_timings = deque(maxlen=settings.MAX_WEBSOCKET_EVENT_RATE)
self.dispatcher = CallbackQueueDispatcher()
self.safe_env = {}
self.event_ct = 0
self.model = model
def update_model(self, pk, _attempt=0, **updates):
return update_model(self.model, pk, _attempt=0, **updates)
def event_handler(self, event_data):
#
# ⚠️ D-D-D-DANGER ZONE ⚠️
# This method is called once for *every event* emitted by Ansible
# Runner as a playbook runs. That means that changes to the code in
# this method are _very_ likely to introduce performance regressions.
#
# Even if this function is made on average .05s slower, it can have
# devastating performance implications for playbooks that emit
# tens or hundreds of thousands of events.
#
# Proceed with caution!
#
"""
Ansible runner puts a parent_uuid on each event, no matter what the type.
AWX only saves the parent_uuid if the event is for a Job.
"""
# cache end_line locally for RunInventoryUpdate tasks
# which generate job events from two 'streams':
# ansible-inventory and the awx.main.commands.inventory_import
# logger
if event_data.get(self.event_data_key, None):
if self.event_data_key != 'job_id':
event_data.pop('parent_uuid', None)
if self.parent_workflow_job_id:
event_data['workflow_job_id'] = self.parent_workflow_job_id
event_data['job_created'] = self.job_created
if self.host_map:
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
if isinstance(self, RunnerCallbackForProjectUpdate):
# need a better way to have this check.
# it's common for Ansible's SCM modules to print
# error messages on failure that contain the plaintext
# basic auth credentials (username + password)
# it's also common for the nested event data itself (['res']['...'])
# to contain unredacted text on failure
# this is a _little_ expensive to filter
# with regex, but project updates don't have many events,
# so it *should* have a negligible performance impact
task = event_data.get('event_data', {}).get('task_action')
try:
if task in ('git', 'svn'):
event_data_json = json.dumps(event_data)
event_data_json = UriCleaner.remove_sensitive(event_data_json)
event_data = json.loads(event_data_json)
except json.JSONDecodeError:
pass
if 'event_data' in event_data:
event_data['event_data']['guid'] = self.guid
# To prevent overwhelming the broadcast queue, skip some websocket messages
if self.recent_event_timings:
cpu_time = time.time()
first_window_time = self.recent_event_timings[0]
last_window_time = self.recent_event_timings[-1]
if event_data.get('event') in MINIMAL_EVENTS:
should_emit = True # always send some types like playbook_on_stats
elif event_data.get('stdout') == '' and event_data['start_line'] == event_data['end_line']:
should_emit = False # exclude events with no output
else:
should_emit = any(
[
# if the 30th most recent websocket message was sent over 1 second ago
cpu_time - first_window_time > 1.0,
# if the very last websocket message came in over 1/30 seconds ago
self.recent_event_timings.maxlen * (cpu_time - last_window_time) > 1.0,
# if the queue is not yet full
len(self.recent_event_timings) != self.recent_event_timings.maxlen,
]
)
if should_emit:
self.recent_event_timings.append(cpu_time)
else:
event_data.setdefault('event_data', {})
event_data['skip_websocket_message'] = True
elif self.recent_event_timings.maxlen:
self.recent_event_timings.append(time.time())
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
self.event_ct += 1
'''
Handle artifacts
'''
if event_data.get('event_data', {}).get('artifact_data', {}):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
return False
def cancel_callback(self):
"""
Ansible runner callback to tell the job when/if it is canceled
"""
unified_job_id = self.instance.pk
self.instance.refresh_from_db()
if not self.instance:
logger.error('unified job {} was deleted while running, canceling'.format(unified_job_id))
return True
if self.instance.cancel_flag or self.instance.status == 'canceled':
cancel_wait = (now() - self.instance.modified).seconds if self.instance.modified else 0
if cancel_wait > 5:
logger.warn('Request to cancel {} took {} seconds to complete.'.format(self.instance.log_format, cancel_wait))
return True
return False
def finished_callback(self, runner_obj):
"""
Ansible runner callback triggered on finished run
"""
event_data = {
'event': 'EOF',
'final_counter': self.event_ct,
'guid': self.guid,
}
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
def status_handler(self, status_data, runner_config):
"""
Ansible runner callback triggered on status transition
"""
if status_data['status'] == 'starting':
job_env = dict(runner_config.env)
'''
Take the safe environment variables and overwrite
'''
for k, v in self.safe_env.items():
if k in job_env:
job_env[k] = v
from awx.main.signals import disable_activity_stream # Circular import
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, job_args=json.dumps(runner_config.command), job_cwd=runner_config.cwd, job_env=job_env)
elif status_data['status'] == 'failed':
# For encrypted ssh_key_data, ansible-runner worker will open and write the
# ssh_key_data to a named pipe. Then, once the podman container starts, ssh-agent will
# read from this named pipe so that the key can be used in ansible-playbook.
# Once the podman container exits, the named pipe is deleted.
# However, if the podman container fails to start in the first place, e.g. the image
# name is incorrect, then this pipe is not cleaned up. Eventually ansible-runner
# processor will attempt to write artifacts to the private data dir via unstream_dir, requiring
# that it open this named pipe. This leads to a hang. Thus, before any artifacts
# are written by the processor, it's important to remove this ssh_key_data pipe.
private_data_dir = self.instance.job_env.get('AWX_PRIVATE_DATA_DIR', None)
if private_data_dir:
key_data_file = os.path.join(private_data_dir, 'artifacts', str(self.instance.id), 'ssh_key_data')
if os.path.exists(key_data_file) and stat.S_ISFIFO(os.stat(key_data_file).st_mode):
os.remove(key_data_file)
elif status_data['status'] == 'error':
result_traceback = status_data.get('result_traceback', None)
if result_traceback:
from awx.main.signals import disable_activity_stream # Circular import
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, result_traceback=result_traceback)
class RunnerCallbackForProjectUpdate(RunnerCallback):
event_data_key = 'project_update_id'
def __init__(self, *args, **kwargs):
super(RunnerCallbackForProjectUpdate, self).__init__(*args, **kwargs)
self.playbook_new_revision = None
self.host_map = {}
def event_handler(self, event_data):
super_return_value = super(RunnerCallbackForProjectUpdate, self).event_handler(event_data)
returned_data = event_data.get('event_data', {})
if returned_data.get('task_action', '') == 'set_fact':
returned_facts = returned_data.get('res', {}).get('ansible_facts', {})
if 'scm_version' in returned_facts:
self.playbook_new_revision = returned_facts['scm_version']
return super_return_value
class RunnerCallbackForInventoryUpdate(RunnerCallback):
event_data_key = 'inventory_update_id'
def __init__(self, *args, **kwargs):
super(RunnerCallbackForInventoryUpdate, self).__init__(*args, **kwargs)
self.end_line = 0
def event_handler(self, event_data):
self.end_line = event_data['end_line']
return super(RunnerCallbackForInventoryUpdate, self).event_handler(event_data)
class RunnerCallbackForAdHocCommand(RunnerCallback):
event_data_key = 'ad_hoc_command_id'
def __init__(self, *args, **kwargs):
super(RunnerCallbackForAdHocCommand, self).__init__(*args, **kwargs)
self.host_map = {}
class RunnerCallbackForSystemJob(RunnerCallback):
event_data_key = 'system_job_id'
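
The sliding-window throttle in `event_handler` above can be sketched on its own. The window size plays the role of `MAX_WEBSOCKET_EVENT_RATE` (a rate per second); the class below is illustrative, not AWX code:

```python
import time
from collections import deque

class EventThrottle:
    """Allow at most `rate` emits per second, tracked as a fixed-size
    deque of recent emit timestamps (a sliding window)."""

    def __init__(self, rate=30):
        self.recent = deque(maxlen=rate)

    def should_emit(self, now=None):
        now = now if now is not None else time.time()
        if len(self.recent) < self.recent.maxlen:
            self.recent.append(now)  # window not yet full: always emit
            return True
        oldest, newest = self.recent[0], self.recent[-1]
        # emit if the window spans more than a second, or if the last
        # emit was more than 1/rate seconds ago
        if (now - oldest) > 1.0 or self.recent.maxlen * (now - newest) > 1.0:
            self.recent.append(now)
            return True
        return False  # `rate` emits within the last second: skip this one

t = EventThrottle(rate=3)
# the first three events pass; a fourth in the same instant is skipped
print([t.should_emit(now=100.0) for _ in range(4)])  # → [True, True, True, False]
```

Unlike this sketch, the real handler never drops the event itself; it only sets `skip_websocket_message` so the event is still persisted but not broadcast.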

View File

@@ -1,5 +1,5 @@
# Python
from collections import deque, OrderedDict
from collections import OrderedDict
from distutils.dir_util import copy_tree
import errno
import functools
@@ -19,10 +19,8 @@ from uuid import uuid4
# Django
from django_guid.middleware import GuidMiddleware
from django.conf import settings
from django.db import transaction, DatabaseError
from django.utils.timezone import now
from django.db import transaction
# Runner
@@ -37,8 +35,12 @@ from gitdb.exc import BadName as BadGitName
from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV, MINIMAL_EVENTS, JOB_FOLDER_PREFIX
from awx.main.redact import UriCleaner
from awx.main.constants import (
PRIVILEGE_ESCALATION_METHODS,
STANDARD_INVENTORY_UPDATE_ENV,
JOB_FOLDER_PREFIX,
MAX_ISOLATED_PATH_COLON_DELIMITER,
)
from awx.main.models import (
Instance,
Inventory,
@@ -55,7 +57,13 @@ from awx.main.models import (
SystemJobEvent,
build_safe_env,
)
from awx.main.queue import CallbackQueueDispatcher
from awx.main.tasks.callback import (
RunnerCallback,
RunnerCallbackForAdHocCommand,
RunnerCallbackForInventoryUpdate,
RunnerCallbackForProjectUpdate,
RunnerCallbackForSystemJob,
)
from awx.main.tasks.receptor import AWXReceptorJob
from awx.main.exceptions import AwxTaskError, PostRunError, ReceptorNodeNotFound
from awx.main.utils.ansible import read_ansible_config
@@ -70,6 +78,7 @@ from awx.main.utils.common import (
from awx.conf.license import get_license
from awx.main.utils.handlers import SpecialInventoryHandler
from awx.main.tasks.system import handle_success_and_failure_notifications, update_smart_memberships_for_inventory, update_inventory_computed_fields
from awx.main.utils.update_model import update_model
from rest_framework.exceptions import PermissionDenied
from django.utils.translation import ugettext_lazy as _
@@ -99,46 +108,14 @@ class BaseTask(object):
model = None
event_model = None
abstract = True
callback_class = RunnerCallback
def __init__(self):
self.cleanup_paths = []
self.parent_workflow_job_id = None
self.host_map = {}
self.guid = GuidMiddleware.get_guid()
self.job_created = None
self.recent_event_timings = deque(maxlen=settings.MAX_WEBSOCKET_EVENT_RATE)
self.runner_callback = self.callback_class(model=self.model)
def update_model(self, pk, _attempt=0, **updates):
"""Reload the model instance from the database and update the
given fields.
"""
try:
with transaction.atomic():
# Retrieve the model instance.
instance = self.model.objects.get(pk=pk)
# Update the appropriate fields and save the model
# instance, then return the new instance.
if updates:
update_fields = ['modified']
for field, value in updates.items():
setattr(instance, field, value)
update_fields.append(field)
if field == 'status':
update_fields.append('failed')
instance.save(update_fields=update_fields)
return instance
except DatabaseError as e:
# Log out the error to the debug logger.
logger.debug('Database error updating %s, retrying in 5 ' 'seconds (retry #%d): %s', self.model._meta.object_name, _attempt + 1, e)
# Attempt to retry the update, assuming we haven't already
# tried too many times.
if _attempt < 5:
time.sleep(5)
return self.update_model(pk, _attempt=_attempt + 1, **updates)
else:
logger.error('Failed to update %s after %d retries.', self.model._meta.object_name, _attempt)
return update_model(self.model, pk, _attempt=0, **updates)
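The hunk above replaces BaseTask's inline retry loop with the shared `awx.main.utils.update_model` helper. A minimal standalone sketch of the same retry-on-transient-database-error pattern (hypothetical names, not AWX's actual helper):

```python
import time

class TransientDatabaseError(Exception):
    """Stand-in for django.db.DatabaseError in this sketch."""

def update_with_retry(fetch, updates, retries=5, delay=0.0):
    """Re-fetch an object and apply `updates`, retrying on transient DB errors.

    `fetch` returns a mutable object whose `save()` may raise
    TransientDatabaseError; all names here are hypothetical.
    """
    for attempt in range(retries + 1):
        try:
            instance = fetch()  # re-read state fresh on every attempt
            for field, value in updates.items():
                setattr(instance, field, value)
            instance.save()
            return instance
        except TransientDatabaseError:
            if attempt == retries:
                raise  # give up after the final attempt
            time.sleep(delay)
```

Centralizing this in one helper (as the diff does) keeps the retry policy consistent across every task type instead of duplicating it per class.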
def get_path_to(self, *args):
"""
@@ -147,6 +124,9 @@ class BaseTask(object):
return os.path.abspath(os.path.join(os.path.dirname(__file__), *args))
def build_execution_environment_params(self, instance, private_data_dir):
"""
Return the params structure to be passed to the container runtime
"""
if settings.IS_K8S:
return {}
@@ -158,6 +138,9 @@ class BaseTask(object):
"container_options": ['--user=root'],
}
if settings.DEFAULT_CONTAINER_RUN_OPTIONS:
params['container_options'].extend(settings.DEFAULT_CONTAINER_RUN_OPTIONS)
if instance.execution_environment.credential:
cred = instance.execution_environment.credential
if all([cred.has_input(field_name) for field_name in ('host', 'username', 'password')]):
@@ -176,9 +159,17 @@ class BaseTask(object):
if settings.AWX_ISOLATION_SHOW_PATHS:
params['container_volume_mounts'] = []
for this_path in settings.AWX_ISOLATION_SHOW_PATHS:
# Using z allows the dir to mounted by multiple containers
# Verify if a mount path and SELinux context has been passed
# Using z allows the dir to be mounted by multiple containers
# Uppercase Z restricts access (in weird ways) to 1 container at a time
params['container_volume_mounts'].append(f'{this_path}:{this_path}:z')
if this_path.count(':') == MAX_ISOLATED_PATH_COLON_DELIMITER:
src, dest, scontext = this_path.split(':')
params['container_volume_mounts'].append(f'{src}:{dest}:{scontext}')
elif this_path.count(':') == MAX_ISOLATED_PATH_COLON_DELIMITER - 1:
src, dest = this_path.split(':')
params['container_volume_mounts'].append(f'{src}:{dest}:z')
else:
params['container_volume_mounts'].append(f'{this_path}:{this_path}:z')
return params
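The new branching above accepts `AWX_ISOLATION_SHOW_PATHS` entries in `src`, `src:dest`, or `src:dest:scontext` form. A small sketch of the same colon-count dispatch, assuming the constant mirrors `MAX_ISOLATED_PATH_COLON_DELIMITER == 2` from the diff:

```python
# Assumed value mirroring MAX_ISOLATED_PATH_COLON_DELIMITER in the diff.
MAX_COLONS = 2

def mount_spec(path):
    """Expand an isolation show-path entry into a container volume mount.

    Bare paths get the shared 'z' SELinux label so the directory can be
    mounted by multiple containers; an explicit third field overrides it.
    """
    n = path.count(':')
    if n == MAX_COLONS:
        src, dest, scontext = path.split(':')
        return f'{src}:{dest}:{scontext}'
    elif n == MAX_COLONS - 1:
        src, dest = path.split(':')
        return f'{src}:{dest}:z'
    return f'{path}:{path}:z'
```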
def build_private_data(self, instance, private_data_dir):
@@ -330,7 +321,7 @@ class BaseTask(object):
script_data = instance.inventory.get_script_data(**script_params)
# maintain a list of host_name --> host_id
# so we can associate emitted events to Host objects
self.host_map = {hostname: hv.pop('remote_tower_id', '') for hostname, hv in script_data.get('_meta', {}).get('hostvars', {}).items()}
self.runner_callback.host_map = {hostname: hv.pop('remote_tower_id', '') for hostname, hv in script_data.get('_meta', {}).get('hostvars', {}).items()}
json_data = json.dumps(script_data)
path = os.path.join(private_data_dir, 'inventory')
fn = os.path.join(path, 'hosts')
@@ -424,165 +415,6 @@ class BaseTask(object):
instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version'])
def event_handler(self, event_data):
#
# ⚠️ D-D-D-DANGER ZONE ⚠️
# This method is called once for *every event* emitted by Ansible
# Runner as a playbook runs. That means that changes to the code in
# this method are _very_ likely to introduce performance regressions.
#
# Even if this function is made on average .05s slower, it can have
# devastating performance implications for playbooks that emit
# tens or hundreds of thousands of events.
#
# Proceed with caution!
#
"""
Ansible runner puts a parent_uuid on each event, no matter what the type.
AWX only saves the parent_uuid if the event is for a Job.
"""
# cache end_line locally for RunInventoryUpdate tasks
# which generate job events from two 'streams':
# ansible-inventory and the awx.main.commands.inventory_import
# logger
if isinstance(self, RunInventoryUpdate):
self.end_line = event_data['end_line']
if event_data.get(self.event_data_key, None):
if self.event_data_key != 'job_id':
event_data.pop('parent_uuid', None)
if self.parent_workflow_job_id:
event_data['workflow_job_id'] = self.parent_workflow_job_id
event_data['job_created'] = self.job_created
if self.host_map:
host = event_data.get('event_data', {}).get('host', '').strip()
if host:
event_data['host_name'] = host
if host in self.host_map:
event_data['host_id'] = self.host_map[host]
else:
event_data['host_name'] = ''
event_data['host_id'] = ''
if event_data.get('event') == 'playbook_on_stats':
event_data['host_map'] = self.host_map
if isinstance(self, RunProjectUpdate):
# it's common for Ansible's SCM modules to print
# error messages on failure that contain the plaintext
# basic auth credentials (username + password)
# it's also common for the nested event data itself (['res']['...'])
# to contain unredacted text on failure
# this is a _little_ expensive to filter
# with regex, but project updates don't have many events,
# so it *should* have a negligible performance impact
task = event_data.get('event_data', {}).get('task_action')
try:
if task in ('git', 'svn'):
event_data_json = json.dumps(event_data)
event_data_json = UriCleaner.remove_sensitive(event_data_json)
event_data = json.loads(event_data_json)
except json.JSONDecodeError:
pass
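The git/svn branch above round-trips event data through JSON so `UriCleaner` can scrub plaintext credentials. A simplified stand-in using an assumed basic-auth regex (not the real `UriCleaner` implementation):

```python
import json
import re

# Assumed pattern: strip user:password@ pairs out of URLs embedded in
# serialized event data; a simplified stand-in for UriCleaner.
BASIC_AUTH = re.compile(r'(https?://)[^/@\s]+:[^/@\s]+@')

def redact_event(event_data):
    """Round-trip event data through JSON, scrubbing basic-auth credentials."""
    try:
        cleaned = BASIC_AUTH.sub(r'\1$encrypted$:$encrypted$@', json.dumps(event_data))
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return event_data  # mirror the diff: fall back to the raw event
```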
if 'event_data' in event_data:
event_data['event_data']['guid'] = self.guid
# To prevent overwhelming the broadcast queue, skip some websocket messages
if self.recent_event_timings:
cpu_time = time.time()
first_window_time = self.recent_event_timings[0]
last_window_time = self.recent_event_timings[-1]
if event_data.get('event') in MINIMAL_EVENTS:
should_emit = True # always send some types like playbook_on_stats
elif event_data.get('stdout') == '' and event_data['start_line'] == event_data['end_line']:
should_emit = False # exclude events with no output
else:
should_emit = any(
[
# if the 30th most recent websocket message was sent over 1 second ago
cpu_time - first_window_time > 1.0,
# if the very last websocket message came in over 1/30 seconds ago
self.recent_event_timings.maxlen * (cpu_time - last_window_time) > 1.0,
# if the queue is not yet full
len(self.recent_event_timings) != self.recent_event_timings.maxlen,
]
)
if should_emit:
self.recent_event_timings.append(cpu_time)
else:
event_data.setdefault('event_data', {})
event_data['skip_websocket_message'] = True
elif self.recent_event_timings.maxlen:
self.recent_event_timings.append(time.time())
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
self.event_ct += 1
'''
Handle artifacts
'''
if event_data.get('event_data', {}).get('artifact_data', {}):
self.instance.artifacts = event_data['event_data']['artifact_data']
self.instance.save(update_fields=['artifacts'])
return False
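The deque bookkeeping above throttles websocket broadcasts so high-event-rate playbooks do not overwhelm the broadcast queue. A standalone sketch of that sliding-window decision (hypothetical class, same three conditions):

```python
from collections import deque

class EventRateLimiter:
    """Sliding-window limiter mirroring the should_emit logic above.

    Keeps timestamps of the last `rate` emitted events; an event is
    emitted if the window is not yet full, the oldest entry is over a
    second old, or the newest entry is older than 1/rate seconds.
    """
    def __init__(self, rate=30):
        self.window = deque(maxlen=rate)

    def should_emit(self, now):
        if len(self.window) != self.window.maxlen:
            emit = True  # the window is not yet full
        elif now - self.window[0] > 1.0:
            emit = True  # fewer than `rate` events in the last second
        elif self.window.maxlen * (now - self.window[-1]) > 1.0:
            emit = True  # the last emit was over 1/rate seconds ago
        else:
            emit = False
        if emit:
            self.window.append(now)
        return emit
```

Suppressed events still reach the database dispatcher in the real code; only the websocket message is skipped via `skip_websocket_message`.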
def cancel_callback(self):
"""
Ansible runner callback to tell the job when/if it is canceled
"""
unified_job_id = self.instance.pk
self.instance = self.update_model(unified_job_id)
if not self.instance:
logger.error('unified job {} was deleted while running, canceling'.format(unified_job_id))
return True
if self.instance.cancel_flag or self.instance.status == 'canceled':
cancel_wait = (now() - self.instance.modified).seconds if self.instance.modified else 0
if cancel_wait > 5:
logger.warn('Request to cancel {} took {} seconds to complete.'.format(self.instance.log_format, cancel_wait))
return True
return False
def finished_callback(self, runner_obj):
"""
Ansible runner callback triggered on finished run
"""
event_data = {
'event': 'EOF',
'final_counter': self.event_ct,
'guid': self.guid,
}
event_data.setdefault(self.event_data_key, self.instance.id)
self.dispatcher.dispatch(event_data)
def status_handler(self, status_data, runner_config):
"""
Ansible runner callback triggered on status transition
"""
if status_data['status'] == 'starting':
job_env = dict(runner_config.env)
'''
Take the safe environment variables and overwrite
'''
for k, v in self.safe_env.items():
if k in job_env:
job_env[k] = v
from awx.main.signals import disable_activity_stream # Circular import
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, job_args=json.dumps(runner_config.command), job_cwd=runner_config.cwd, job_env=job_env)
elif status_data['status'] == 'error':
result_traceback = status_data.get('result_traceback', None)
if result_traceback:
from awx.main.signals import disable_activity_stream # Circular import
with disable_activity_stream():
self.instance = self.update_model(self.instance.pk, result_traceback=result_traceback)
@with_path_cleanup
def run(self, pk, **kwargs):
"""
@@ -602,22 +434,15 @@ class BaseTask(object):
status, rc = 'error', None
extra_update_fields = {}
fact_modification_times = {}
self.event_ct = 0
self.runner_callback.event_ct = 0
'''
Needs to be an object property because status_handler uses it in a callback context
'''
self.safe_env = {}
self.safe_cred_env = {}
private_data_dir = None
# store a reference to the parent workflow job (if any) so we can include
# it in event data JSON
if self.instance.spawned_by_workflow:
self.parent_workflow_job_id = self.instance.get_workflow_job().id
self.job_created = str(self.instance.created)
try:
self.instance.send_notification_templates("running")
private_data_dir = self.build_private_data_dir(self.instance)
@@ -649,7 +474,16 @@ class BaseTask(object):
self.build_extra_vars_file(self.instance, private_data_dir)
args = self.build_args(self.instance, private_data_dir, passwords)
env = self.build_env(self.instance, private_data_dir, private_data_files=private_data_files)
self.safe_env = build_safe_env(env)
self.runner_callback.safe_env = build_safe_env(env)
self.runner_callback.instance = self.instance
# store a reference to the parent workflow job (if any) so we can include
# it in event data JSON
if self.instance.spawned_by_workflow:
self.runner_callback.parent_workflow_job_id = self.instance.get_workflow_job().id
self.runner_callback.job_created = str(self.instance.created)
credentials = self.build_credentials_list(self.instance)
@@ -657,7 +491,7 @@ class BaseTask(object):
if credential:
credential.credential_type.inject_credential(credential, env, self.safe_cred_env, args, private_data_dir)
self.safe_env.update(self.safe_cred_env)
self.runner_callback.safe_env.update(self.safe_cred_env)
self.write_args_file(private_data_dir, args)
@@ -703,16 +537,14 @@ class BaseTask(object):
if not params[v]:
del params[v]
self.dispatcher = CallbackQueueDispatcher()
self.instance.log_lifecycle("running_playbook")
if isinstance(self.instance, SystemJob):
res = ansible_runner.interface.run(
project_dir=settings.BASE_DIR,
event_handler=self.event_handler,
finished_callback=self.finished_callback,
status_handler=self.status_handler,
cancel_callback=self.cancel_callback,
event_handler=self.runner_callback.event_handler,
finished_callback=self.runner_callback.finished_callback,
status_handler=self.runner_callback.status_handler,
cancel_callback=self.runner_callback.cancel_callback,
**params,
)
else:
@@ -734,7 +566,7 @@ class BaseTask(object):
extra_update_fields['job_explanation'] = self.instance.job_explanation
# ensure failure notification sends even if playbook_on_stats event is not triggered
handle_success_and_failure_notifications.apply_async([self.instance.job.id])
handle_success_and_failure_notifications.apply_async([self.instance.id])
except ReceptorNodeNotFound as exc:
extra_update_fields['job_explanation'] = str(exc)
@@ -743,7 +575,7 @@ class BaseTask(object):
extra_update_fields['result_traceback'] = traceback.format_exc()
logger.exception('%s Exception occurred while running task', self.instance.log_format)
finally:
logger.debug('%s finished running, producing %s events.', self.instance.log_format, self.event_ct)
logger.debug('%s finished running, producing %s events.', self.instance.log_format, self.runner_callback.event_ct)
try:
self.post_run_hook(self.instance, status)
@@ -757,7 +589,7 @@ class BaseTask(object):
logger.exception('{} Post run hook errored.'.format(self.instance.log_format))
self.instance = self.update_model(pk)
self.instance = self.update_model(pk, status=status, emitted_events=self.event_ct, **extra_update_fields)
self.instance = self.update_model(pk, status=status, emitted_events=self.runner_callback.event_ct, **extra_update_fields)
try:
self.final_run_hook(self.instance, status, private_data_dir, fact_modification_times)
@@ -780,7 +612,6 @@ class RunJob(BaseTask):
model = Job
event_model = JobEvent
event_data_key = 'job_id'
def build_private_data(self, job, private_data_dir):
"""
@@ -1179,22 +1010,13 @@ class RunProjectUpdate(BaseTask):
model = ProjectUpdate
event_model = ProjectUpdateEvent
event_data_key = 'project_update_id'
callback_class = RunnerCallbackForProjectUpdate
def __init__(self, *args, job_private_data_dir=None, **kwargs):
super(RunProjectUpdate, self).__init__(*args, **kwargs)
self.playbook_new_revision = None
self.original_branch = None
self.job_private_data_dir = job_private_data_dir
def event_handler(self, event_data):
super(RunProjectUpdate, self).event_handler(event_data)
returned_data = event_data.get('event_data', {})
if returned_data.get('task_action', '') == 'set_fact':
returned_facts = returned_data.get('res', {}).get('ansible_facts', {})
if 'scm_version' in returned_facts:
self.playbook_new_revision = returned_facts['scm_version']
def build_private_data(self, project_update, private_data_dir):
"""
Return SSH private key data needed for this project update.
@@ -1595,8 +1417,8 @@ class RunProjectUpdate(BaseTask):
super(RunProjectUpdate, self).post_run_hook(instance, status)
# To avoid hangs, very important to release lock even if errors happen here
try:
if self.playbook_new_revision:
instance.scm_revision = self.playbook_new_revision
if self.runner_callback.playbook_new_revision:
instance.scm_revision = self.runner_callback.playbook_new_revision
instance.save(update_fields=['scm_revision'])
# Roles and collection folders copy to durable cache
@@ -1636,8 +1458,8 @@ class RunProjectUpdate(BaseTask):
'failed',
'canceled',
):
if self.playbook_new_revision:
p.scm_revision = self.playbook_new_revision
if self.runner_callback.playbook_new_revision:
p.scm_revision = self.runner_callback.playbook_new_revision
else:
if status == 'successful':
logger.error("{} Could not find scm revision in check".format(instance.log_format))
@@ -1673,7 +1495,7 @@ class RunInventoryUpdate(BaseTask):
model = InventoryUpdate
event_model = InventoryUpdateEvent
event_data_key = 'inventory_update_id'
callback_class = RunnerCallbackForInventoryUpdate
def build_private_data(self, inventory_update, private_data_dir):
"""
@@ -1896,6 +1718,7 @@ class RunInventoryUpdate(BaseTask):
if status != 'successful':
return # nothing to save, step out of the way to allow error reporting
inventory_update.refresh_from_db()
private_data_dir = inventory_update.job_env['AWX_PRIVATE_DATA_DIR']
expected_output = os.path.join(private_data_dir, 'artifacts', str(inventory_update.id), 'output.json')
with open(expected_output) as f:
@@ -1930,13 +1753,13 @@ class RunInventoryUpdate(BaseTask):
options['verbosity'] = inventory_update.verbosity
handler = SpecialInventoryHandler(
self.event_handler,
self.cancel_callback,
self.runner_callback.event_handler,
self.runner_callback.cancel_callback,
verbosity=inventory_update.verbosity,
job_timeout=self.get_instance_timeout(self.instance),
start_time=inventory_update.started,
counter=self.event_ct,
initial_line=self.end_line,
counter=self.runner_callback.event_ct,
initial_line=self.runner_callback.end_line,
)
inv_logger = logging.getLogger('awx.main.commands.inventory_import')
formatter = inv_logger.handlers[0].formatter
@@ -1970,7 +1793,7 @@ class RunAdHocCommand(BaseTask):
model = AdHocCommand
event_model = AdHocCommandEvent
event_data_key = 'ad_hoc_command_id'
callback_class = RunnerCallbackForAdHocCommand
def build_private_data(self, ad_hoc_command, private_data_dir):
"""
@@ -2128,7 +1951,7 @@ class RunSystemJob(BaseTask):
model = SystemJob
event_model = SystemJobEvent
event_data_key = 'system_job_id'
callback_class = RunnerCallbackForSystemJob
def build_execution_environment_params(self, system_job, private_data_dir):
return {}


@@ -411,9 +411,9 @@ class AWXReceptorJob:
streamer='process',
quiet=True,
_input=resultfile,
event_handler=self.task.event_handler,
finished_callback=self.task.finished_callback,
status_handler=self.task.status_handler,
event_handler=self.task.runner_callback.event_handler,
finished_callback=self.task.runner_callback.finished_callback,
status_handler=self.task.runner_callback.status_handler,
**self.runner_params,
)
@@ -458,7 +458,7 @@ class AWXReceptorJob:
if processor_future.done():
return processor_future.result()
if self.task.cancel_callback():
if self.task.runner_callback.cancel_callback():
result = namedtuple('result', ['status', 'rc'])
return result('canceled', 1)


@@ -436,14 +436,16 @@ def inspect_execution_nodes(instance_list):
workers = mesh_status['Advertisements']
for ad in workers:
hostname = ad['NodeID']
if not any(cmd['WorkType'] == 'ansible-runner' for cmd in ad['WorkCommands'] or []):
continue
changed = False
if hostname in node_lookup:
instance = node_lookup[hostname]
else:
logger.warn(f"Unrecognized node on mesh advertising ansible-runner work type: {hostname}")
logger.warn(f"Unrecognized node advertising on mesh: {hostname}")
continue
# Control-plane nodes are dealt with via local_health_check instead.
if instance.node_type in ('control', 'hybrid'):
continue
was_lost = instance.is_lost(ref_time=nowtime)
@@ -454,6 +456,10 @@ def inspect_execution_nodes(instance_list):
instance.last_seen = last_seen
instance.save(update_fields=['last_seen'])
# Only execution nodes should be dealt with by execution_node_health_check
if instance.node_type == 'hop':
continue
if changed:
execution_node_health_check.apply_async([hostname])
elif was_lost:
@@ -482,7 +488,6 @@ def cluster_node_heartbeat():
for inst in instance_list:
if inst.hostname == settings.CLUSTER_HOST_ID:
this_inst = inst
instance_list.remove(inst)
break
else:
(changed, this_inst) = Instance.objects.get_or_register()
@@ -492,6 +497,8 @@ def cluster_node_heartbeat():
inspect_execution_nodes(instance_list)
for inst in list(instance_list):
if inst == this_inst:
continue
if inst.is_lost(ref_time=nowtime):
lost_instances.append(inst)
instance_list.remove(inst)
@@ -506,7 +513,9 @@ def cluster_node_heartbeat():
raise RuntimeError("Cluster Host Not Found: {}".format(settings.CLUSTER_HOST_ID))
# IFF any node has a greater version than we do, then we'll shutdown services
for other_inst in instance_list:
if other_inst.version == "" or other_inst.version.startswith('ansible-runner') or other_inst.node_type == 'execution':
if other_inst.node_type in ('execution', 'hop'):
continue
if other_inst.version == "" or other_inst.version.startswith('ansible-runner'):
continue
if Version(other_inst.version.split('-', 1)[0]) > Version(awx_application_version.split('-', 1)[0]) and not settings.DEBUG:
logger.error(
@@ -530,7 +539,7 @@ def cluster_node_heartbeat():
# * It was set to 0 by another tower node running this method
# * It was set to 0 by this node, but auto deprovisioning is off
#
# If auto deprovisining is on, don't bother setting the capacity to 0
# If auto deprovisioning is on, don't bother setting the capacity to 0
# since we will delete the node anyway.
if other_inst.capacity != 0 and not settings.AWX_AUTO_DEPROVISION_INSTANCES:
other_inst.mark_offline(errors=_('Another cluster node has determined this instance to be unresponsive'))
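The heartbeat above shuts services down when a peer control node advertises a strictly newer version, comparing only the part before the first `-`. A simplified numeric-tuple sketch of that gate (AWX uses a proper Version class, so this is illustrative only):

```python
def newer_peers(my_version, peers):
    """Return hostnames of peers running a strictly newer version.

    Versions like '21.0.0-7-gabc123' are compared on the numeric part
    before the first '-', as in the heartbeat check above; empty and
    'ansible-runner'-prefixed versions are skipped, matching the diff.
    """
    def key(v):
        return tuple(int(p) for p in v.split('-', 1)[0].split('.'))
    mine = key(my_version)
    return [
        host for host, v in peers
        if v and not v.startswith('ansible-runner') and key(v) > mine
    ]
```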


@@ -15,6 +15,7 @@ from awx.main.tests.factories import (
)
from django.core.cache import cache
from django.conf import settings
def pytest_addoption(parser):
@@ -80,13 +81,44 @@ def instance_group_factory():
@pytest.fixture
def default_instance_group(instance_factory, instance_group_factory):
return create_instance_group("default", instances=[create_instance("hostA")])
def controlplane_instance_group(instance_factory, instance_group_factory):
"""There always has to be a controlplane instancegroup and at least one instance in it"""
return create_instance_group(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, create_instance('hybrid-1', node_type='hybrid', capacity=500))
@pytest.fixture
def controlplane_instance_group(instance_factory, instance_group_factory):
return create_instance_group("controlplane", instances=[create_instance("hostA")])
def default_instance_group(instance_factory, instance_group_factory):
return create_instance_group("default", instances=[create_instance("hostA", node_type='execution')])
@pytest.fixture
def control_instance():
'''Control instance in the controlplane automatic IG'''
inst = create_instance('control-1', node_type='control', capacity=500)
return inst
@pytest.fixture
def control_instance_low_capacity():
'''Control instance in the controlplane automatic IG that has low capacity'''
inst = create_instance('control-1', node_type='control', capacity=5)
return inst
@pytest.fixture
def execution_instance():
'''Execution node in the automatic default IG'''
ig = create_instance_group('default')
inst = create_instance('receptor-1', node_type='execution', capacity=500)
ig.instances.add(inst)
return inst
@pytest.fixture
def hybrid_instance():
'''Hybrid node in the default controlplane IG'''
inst = create_instance('hybrid-1', node_type='hybrid', capacity=500)
return inst
@pytest.fixture


@@ -28,12 +28,15 @@ from awx.main.models import (
#
def mk_instance(persisted=True, hostname='instance.example.org'):
def mk_instance(persisted=True, hostname='instance.example.org', node_type='hybrid', capacity=100):
if not persisted:
raise RuntimeError('creating an Instance requires persisted=True')
from django.conf import settings
return Instance.objects.get_or_create(uuid=settings.SYSTEM_UUID, hostname=hostname)[0]
instance = Instance.objects.get_or_create(uuid=settings.SYSTEM_UUID, hostname=hostname, node_type=node_type, capacity=capacity)[0]
if node_type in ('control', 'hybrid'):
mk_instance_group(name=settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, instance=instance)
return instance
def mk_instance_group(name='default', instance=None, minimum=0, percentage=0):
@@ -52,7 +55,9 @@ def mk_organization(name, description=None, persisted=True):
description = description or '{}-description'.format(name)
org = Organization(name=name, description=description)
if persisted:
mk_instance(persisted)
instances = Instance.objects.all()
if not instances:
mk_instance(persisted)
org.save()
return org


@@ -132,8 +132,8 @@ def generate_teams(organization, persisted, **kwargs):
return teams
def create_instance(name, instance_groups=None):
return mk_instance(hostname=name)
def create_instance(name, instance_groups=None, node_type='hybrid', capacity=200):
return mk_instance(hostname=name, node_type=node_type, capacity=capacity)
def create_instance_group(name, instances=None, minimum=0, percentage=0):


@@ -127,7 +127,7 @@ class TestApprovalNodes:
]
@pytest.mark.django_db
def test_approval_node_approve(self, post, admin_user, job_template):
def test_approval_node_approve(self, post, admin_user, job_template, controlplane_instance_group):
# This test ensures that a user (with permissions to do so) can APPROVE
# workflow approvals. Also asserts that trying to APPROVE approvals
# that have already been dealt with will throw an error.
@@ -152,7 +152,7 @@ class TestApprovalNodes:
post(reverse('api:workflow_approval_approve', kwargs={'pk': approval.pk}), user=admin_user, expect=400)
@pytest.mark.django_db
def test_approval_node_deny(self, post, admin_user, job_template):
def test_approval_node_deny(self, post, admin_user, job_template, controlplane_instance_group):
# This test ensures that a user (with permissions to do so) can DENY
# workflow approvals. Also asserts that trying to DENY approvals
# that have already been dealt with will throw an error.


@@ -308,7 +308,7 @@ def test_beginning_of_time(job_template):
'rrule, tz',
[
['DTSTART:20300112T210000Z RRULE:FREQ=DAILY;INTERVAL=1', 'UTC'],
['DTSTART;TZID=America/New_York:20300112T210000 RRULE:FREQ=DAILY;INTERVAL=1', 'America/New_York'],
['DTSTART;TZID=US/Eastern:20300112T210000 RRULE:FREQ=DAILY;INTERVAL=1', 'US/Eastern'],
],
)
def test_timezone_property(job_template, rrule, tz):


@@ -7,7 +7,7 @@ from awx.main.tasks.system import apply_cluster_membership_policies
@pytest.mark.django_db
def test_multi_group_basic_job_launch(instance_factory, default_instance_group, mocker, instance_group_factory, job_template_factory):
def test_multi_group_basic_job_launch(instance_factory, controlplane_instance_group, mocker, instance_group_factory, job_template_factory):
i1 = instance_factory("i1")
i2 = instance_factory("i2")
ig1 = instance_group_factory("ig1", instances=[i1])
@@ -67,7 +67,7 @@ def test_multi_group_with_shared_dependency(instance_factory, controlplane_insta
@pytest.mark.django_db
def test_workflow_job_no_instancegroup(workflow_job_template_factory, default_instance_group, mocker):
def test_workflow_job_no_instancegroup(workflow_job_template_factory, controlplane_instance_group, mocker):
wfjt = workflow_job_template_factory('anicedayforawalk').workflow_job_template
wfj = WorkflowJob.objects.create(workflow_job_template=wfjt)
wfj.status = "pending"
@@ -79,9 +79,10 @@ def test_workflow_job_no_instancegroup(workflow_job_template_factory, default_in
@pytest.mark.django_db
def test_overcapacity_blocking_other_groups_unaffected(instance_factory, default_instance_group, mocker, instance_group_factory, job_template_factory):
def test_overcapacity_blocking_other_groups_unaffected(instance_factory, controlplane_instance_group, mocker, instance_group_factory, job_template_factory):
i1 = instance_factory("i1")
i1.capacity = 1000
# need a little extra to account for controller node capacity impact
i1.capacity = 1020
i1.save()
i2 = instance_factory("i2")
ig1 = instance_group_factory("ig1", instances=[i1])
@@ -120,7 +121,7 @@ def test_overcapacity_blocking_other_groups_unaffected(instance_factory, default
@pytest.mark.django_db
def test_failover_group_run(instance_factory, default_instance_group, mocker, instance_group_factory, job_template_factory):
def test_failover_group_run(instance_factory, controlplane_instance_group, mocker, instance_group_factory, job_template_factory):
i1 = instance_factory("i1")
i2 = instance_factory("i2")
ig1 = instance_group_factory("ig1", instances=[i1])


@@ -7,19 +7,20 @@ from awx.main.scheduler import TaskManager
from awx.main.scheduler.dependency_graph import DependencyGraph
from awx.main.utils import encrypt_field
from awx.main.models import WorkflowJobTemplate, JobTemplate, Job
from awx.main.models.ha import Instance, InstanceGroup
from awx.main.models.ha import Instance
from django.conf import settings
@pytest.mark.django_db
def test_single_job_scheduler_launch(default_instance_group, job_template_factory, mocker):
instance = default_instance_group.instances.all()[0]
def test_single_job_scheduler_launch(hybrid_instance, controlplane_instance_group, job_template_factory, mocker):
instance = controlplane_instance_group.instances.all()[0]
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job_should_start"])
j = objects.jobs["job_should_start"]
j.status = 'pending'
j.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, default_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
@pytest.mark.django_db
@@ -47,7 +48,7 @@ class TestJobLifeCycle:
if expect_commit is not None:
assert mock_commit.mock_calls == expect_commit
def test_task_manager_workflow_rescheduling(self, job_template_factory, inventory, project, default_instance_group):
def test_task_manager_workflow_rescheduling(self, job_template_factory, inventory, project, controlplane_instance_group):
jt = JobTemplate.objects.create(allow_simultaneous=True, inventory=inventory, project=project, playbook='helloworld.yml')
wfjt = WorkflowJobTemplate.objects.create(name='foo')
for i in range(2):
@@ -80,7 +81,7 @@ class TestJobLifeCycle:
# no further action is necessary, so rescheduling should not happen
self.run_tm(tm, [mock.call('successful')], [])
def test_task_manager_workflow_workflow_rescheduling(self):
def test_task_manager_workflow_workflow_rescheduling(self, controlplane_instance_group):
wfjts = [WorkflowJobTemplate.objects.create(name='foo')]
for i in range(5):
wfjt = WorkflowJobTemplate.objects.create(name='foo{}'.format(i))
@@ -100,22 +101,6 @@ class TestJobLifeCycle:
self.run_tm(tm, expect_schedule=[mock.call()])
wfjts[0].refresh_from_db()
@pytest.fixture
def control_instance(self):
'''Control instance in the controlplane automatic IG'''
ig = InstanceGroup.objects.create(name='controlplane')
inst = Instance.objects.create(hostname='control-1', node_type='control', capacity=500)
ig.instances.add(inst)
return inst
@pytest.fixture
def execution_instance(self):
'''Execution node in the automatic default IG'''
ig = InstanceGroup.objects.create(name='default')
inst = Instance.objects.create(hostname='receptor-1', node_type='execution', capacity=500)
ig.instances.add(inst)
return inst
def test_control_and_execution_instance(self, project, system_job_template, job_template, inventory_source, control_instance, execution_instance):
assert Instance.objects.count() == 2
@@ -142,10 +127,78 @@ class TestJobLifeCycle:
assert uj.capacity_type == 'execution'
assert [uj.execution_node, uj.controller_node] == [execution_instance.hostname, control_instance.hostname], uj
@pytest.mark.django_db
def test_job_fails_to_launch_when_no_control_capacity(self, job_template, control_instance_low_capacity, execution_instance):
enough_capacity = job_template.create_unified_job()
insufficient_capacity = job_template.create_unified_job()
all_ujs = [enough_capacity, insufficient_capacity]
for uj in all_ujs:
uj.signal_start()
# There is only enough control capacity to run one of the jobs so one should end up in pending and the other in waiting
tm = TaskManager()
self.run_tm(tm)
for uj in all_ujs:
uj.refresh_from_db()
assert enough_capacity.status == 'waiting'
assert insufficient_capacity.status == 'pending'
assert [enough_capacity.execution_node, enough_capacity.controller_node] == [
execution_instance.hostname,
control_instance_low_capacity.hostname,
], enough_capacity
@pytest.mark.django_db
def test_hybrid_capacity(self, job_template, hybrid_instance):
enough_capacity = job_template.create_unified_job()
insufficient_capacity = job_template.create_unified_job()
expected_task_impact = enough_capacity.task_impact + settings.AWX_CONTROL_NODE_TASK_IMPACT
all_ujs = [enough_capacity, insufficient_capacity]
for uj in all_ujs:
uj.signal_start()
# There is only enough control capacity to run one of the jobs so one should end up in pending and the other in waiting
tm = TaskManager()
self.run_tm(tm)
for uj in all_ujs:
uj.refresh_from_db()
assert enough_capacity.status == 'waiting'
assert insufficient_capacity.status == 'pending'
assert [enough_capacity.execution_node, enough_capacity.controller_node] == [
hybrid_instance.hostname,
hybrid_instance.hostname,
], enough_capacity
assert expected_task_impact == hybrid_instance.consumed_capacity
@pytest.mark.django_db
def test_project_update_capacity(self, project, hybrid_instance, instance_group_factory, controlplane_instance_group):
pu = project.create_unified_job()
instance_group_factory(name='second_ig', instances=[hybrid_instance])
expected_task_impact = pu.task_impact + settings.AWX_CONTROL_NODE_TASK_IMPACT
pu.signal_start()
tm = TaskManager()
self.run_tm(tm)
pu.refresh_from_db()
assert pu.status == 'waiting'
assert [pu.execution_node, pu.controller_node] == [
hybrid_instance.hostname,
hybrid_instance.hostname,
], pu
assert expected_task_impact == hybrid_instance.consumed_capacity
# The hybrid node is in both instance groups, but the project update should
# always get assigned to the controlplane
assert pu.instance_group.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME
pu.status = 'successful'
pu.save()
assert hybrid_instance.consumed_capacity == 0
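The capacity bookkeeping these hybrid-node tests assert can be sketched with a toy model. `AWX_CONTROL_NODE_TASK_IMPACT` mirrors the setting used above, but the class and the numeric values are hypothetical illustrations, not AWX's task manager:

```python
# Minimal sketch of hybrid-node capacity accounting, assuming (as the tests
# above do) that a hybrid node pays both the job's task_impact (execution)
# and a fixed AWX_CONTROL_NODE_TASK_IMPACT (control) for each running job.
AWX_CONTROL_NODE_TASK_IMPACT = 5  # hypothetical settings value


class HybridNode:
    def __init__(self, capacity):
        self.capacity = capacity
        self.consumed_capacity = 0

    def can_run(self, task_impact):
        # a hybrid node must absorb both the execution and control impact
        needed = task_impact + AWX_CONTROL_NODE_TASK_IMPACT
        return self.consumed_capacity + needed <= self.capacity

    def start(self, task_impact):
        self.consumed_capacity += task_impact + AWX_CONTROL_NODE_TASK_IMPACT


node = HybridNode(capacity=100)
assert node.can_run(90)
node.start(90)           # consumed_capacity is now 90 + 5 = 95
assert not node.can_run(90)  # a second such job stays pending
```

This is why `test_hybrid_capacity` expects `consumed_capacity` to equal `task_impact + settings.AWX_CONTROL_NODE_TASK_IMPACT` for the one job that gets to 'waiting', while the second job blocks in 'pending'.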
@pytest.mark.django_db
def test_single_jt_multi_job_launch_blocks_last(default_instance_group, job_template_factory, mocker):
instance = default_instance_group.instances.all()[0]
def test_single_jt_multi_job_launch_blocks_last(controlplane_instance_group, job_template_factory, mocker):
instance = controlplane_instance_group.instances.all()[0]
objects = job_template_factory(
'jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job_should_start", "job_should_not_start"]
)
@@ -157,17 +210,17 @@ def test_single_jt_multi_job_launch_blocks_last(default_instance_group, job_temp
j2.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j1, default_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j1, controlplane_instance_group, [], instance)
j1.status = "successful"
j1.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j2, default_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j2, controlplane_instance_group, [], instance)
@pytest.mark.django_db
def test_single_jt_multi_job_launch_allow_simul_allowed(default_instance_group, job_template_factory, mocker):
instance = default_instance_group.instances.all()[0]
def test_single_jt_multi_job_launch_allow_simul_allowed(controlplane_instance_group, job_template_factory, mocker):
instance = controlplane_instance_group.instances.all()[0]
objects = job_template_factory(
'jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job_should_start", "job_should_not_start"]
)
@@ -184,12 +237,15 @@ def test_single_jt_multi_job_launch_allow_simul_allowed(default_instance_group,
j2.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, default_instance_group, [], instance), mock.call(j2, default_instance_group, [], instance)])
TaskManager.start_task.assert_has_calls(
[mock.call(j1, controlplane_instance_group, [], instance), mock.call(j2, controlplane_instance_group, [], instance)]
)
@pytest.mark.django_db
def test_multi_jt_capacity_blocking(default_instance_group, job_template_factory, mocker):
instance = default_instance_group.instances.all()[0]
def test_multi_jt_capacity_blocking(hybrid_instance, job_template_factory, mocker):
instance = hybrid_instance
controlplane_instance_group = instance.rampart_groups.first()
objects1 = job_template_factory('jt1', organization='org1', project='proj1', inventory='inv1', credential='cred1', jobs=["job_should_start"])
objects2 = job_template_factory('jt2', organization='org2', project='proj2', inventory='inv2', credential='cred2', jobs=["job_should_not_start"])
j1 = objects1.jobs["job_should_start"]
@@ -200,15 +256,15 @@ def test_multi_jt_capacity_blocking(default_instance_group, job_template_factory
j2.save()
tm = TaskManager()
with mock.patch('awx.main.models.Job.task_impact', new_callable=mock.PropertyMock) as mock_task_impact:
mock_task_impact.return_value = 500
mock_task_impact.return_value = 505
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
mock_job.assert_called_once_with(j1, default_instance_group, [], instance)
mock_job.assert_called_once_with(j1, controlplane_instance_group, [], instance)
j1.status = "successful"
j1.save()
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
mock_job.assert_called_once_with(j2, default_instance_group, [], instance)
mock_job.assert_called_once_with(j2, controlplane_instance_group, [], instance)
@pytest.mark.django_db
@@ -240,9 +296,9 @@ def test_single_job_dependencies_project_launch(controlplane_instance_group, job
@pytest.mark.django_db
def test_single_job_dependencies_inventory_update_launch(default_instance_group, job_template_factory, mocker, inventory_source_factory):
def test_single_job_dependencies_inventory_update_launch(controlplane_instance_group, job_template_factory, mocker, inventory_source_factory):
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job_should_start"])
instance = default_instance_group.instances.all()[0]
instance = controlplane_instance_group.instances.all()[0]
j = objects.jobs["job_should_start"]
j.status = 'pending'
j.save()
@@ -260,18 +316,18 @@ def test_single_job_dependencies_inventory_update_launch(default_instance_group,
mock_iu.assert_called_once_with(j, ii)
iu = [x for x in ii.inventory_updates.all()]
assert len(iu) == 1
TaskManager.start_task.assert_called_once_with(iu[0], default_instance_group, [j], instance)
TaskManager.start_task.assert_called_once_with(iu[0], controlplane_instance_group, [j], instance)
iu[0].status = "successful"
iu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, default_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
@pytest.mark.django_db
def test_job_dependency_with_already_updated(default_instance_group, job_template_factory, mocker, inventory_source_factory):
def test_job_dependency_with_already_updated(controlplane_instance_group, job_template_factory, mocker, inventory_source_factory):
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job_should_start"])
instance = default_instance_group.instances.all()[0]
instance = controlplane_instance_group.instances.all()[0]
j = objects.jobs["job_should_start"]
j.status = 'pending'
j.save()
@@ -293,7 +349,7 @@ def test_job_dependency_with_already_updated(default_instance_group, job_templat
mock_iu.assert_not_called()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, default_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
@pytest.mark.django_db
@@ -349,10 +405,10 @@ def test_shared_dependencies_launch(controlplane_instance_group, job_template_fa
@pytest.mark.django_db
def test_job_not_blocking_project_update(default_instance_group, job_template_factory):
def test_job_not_blocking_project_update(controlplane_instance_group, job_template_factory):
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job"])
job = objects.jobs["job"]
job.instance_group = default_instance_group
job.instance_group = controlplane_instance_group
job.status = "running"
job.save()
@@ -362,7 +418,7 @@ def test_job_not_blocking_project_update(default_instance_group, job_template_fa
proj = objects.project
project_update = proj.create_project_update()
project_update.instance_group = default_instance_group
project_update.instance_group = controlplane_instance_group
project_update.status = "pending"
project_update.save()
assert not task_manager.job_blocked_by(project_update)
@@ -373,10 +429,10 @@ def test_job_not_blocking_project_update(default_instance_group, job_template_fa
@pytest.mark.django_db
def test_job_not_blocking_inventory_update(default_instance_group, job_template_factory, inventory_source_factory):
def test_job_not_blocking_inventory_update(controlplane_instance_group, job_template_factory, inventory_source_factory):
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred', jobs=["job"])
job = objects.jobs["job"]
job.instance_group = default_instance_group
job.instance_group = controlplane_instance_group
job.status = "running"
job.save()
@@ -389,7 +445,7 @@ def test_job_not_blocking_inventory_update(default_instance_group, job_template_
inv_source.source = "ec2"
inv.inventory_sources.add(inv_source)
inventory_update = inv_source.create_inventory_update()
inventory_update.instance_group = default_instance_group
inventory_update.instance_group = controlplane_instance_group
inventory_update.status = "pending"
inventory_update.save()


@@ -24,3 +24,8 @@ def test_custom_400_error_log(caplog, post, admin_user):
with override_settings(API_400_ERROR_LOG_FORMAT="{status_code} {error}"):
post(url=reverse('api:setting_logging_test'), data={}, user=admin_user, expect=409)
assert '409 Logging not enabled' in caplog.text
# The above tests the generation function with a dict/object.
# The tower-qa test tests.api.inventories.test_inventory_update.TestInventoryUpdate.test_update_all_inventory_sources_with_nonfunctional_sources tests the function with a list.
# Someday it would be nice to test the else condition (neither a dict nor a list), but we need to find an API test that exercises it. For now it was added just as a catch-all.


@@ -1,7 +1,7 @@
import pytest
from unittest import mock
from awx.main.models import AdHocCommand, InventoryUpdate, JobTemplate, ProjectUpdate
from awx.main.models import AdHocCommand, InventoryUpdate, JobTemplate
from awx.main.models.activity_stream import ActivityStream
from awx.main.models.ha import Instance, InstanceGroup
from awx.main.tasks.system import apply_cluster_membership_policies
@@ -92,7 +92,9 @@ def test_instance_dup(org_admin, organization, project, instance_factory, instan
@pytest.mark.django_db
def test_policy_instance_few_instances(instance_factory, instance_group_factory):
i1 = instance_factory("i1")
# we need to use node_type=execution because node_type=hybrid will implicitly
# create the controlplane instance group if it doesn't already exist
i1 = instance_factory("i1", node_type='execution')
ig_1 = instance_group_factory("ig1", percentage=25)
ig_2 = instance_group_factory("ig2", percentage=25)
ig_3 = instance_group_factory("ig3", percentage=25)
@@ -113,7 +115,7 @@ def test_policy_instance_few_instances(instance_factory, instance_group_factory)
assert len(ig_4.instances.all()) == 1
assert i1 in ig_4.instances.all()
i2 = instance_factory("i2")
i2 = instance_factory("i2", node_type='execution')
count += 1
apply_cluster_membership_policies()
assert ActivityStream.objects.count() == count
@@ -334,13 +336,14 @@ def test_mixed_group_membership(instance_factory, instance_group_factory):
@pytest.mark.django_db
def test_instance_group_capacity(instance_factory, instance_group_factory):
i1 = instance_factory("i1")
i2 = instance_factory("i2")
i3 = instance_factory("i3")
node_capacity = 100
i1 = instance_factory("i1", capacity=node_capacity)
i2 = instance_factory("i2", capacity=node_capacity)
i3 = instance_factory("i3", capacity=node_capacity)
ig_all = instance_group_factory("all", instances=[i1, i2, i3])
assert ig_all.capacity == 300
assert ig_all.capacity == node_capacity * 3
ig_single = instance_group_factory("single", instances=[i1])
assert ig_single.capacity == 100
assert ig_single.capacity == node_capacity
@pytest.mark.django_db
@@ -385,16 +388,6 @@ class TestInstanceGroupOrdering:
# API does not allow setting IGs on inventory source, so ignore those
assert iu.preferred_instance_groups == [ig_inv, ig_org]
def test_project_update_instance_groups(self, instance_group_factory, project, controlplane_instance_group):
pu = ProjectUpdate.objects.create(project=project, organization=project.organization)
assert pu.preferred_instance_groups == [controlplane_instance_group]
ig_org = instance_group_factory("OrgIstGrp", [controlplane_instance_group.instances.first()])
ig_tmp = instance_group_factory("TmpIstGrp", [controlplane_instance_group.instances.first()])
project.organization.instance_groups.add(ig_org)
assert pu.preferred_instance_groups == [ig_org, controlplane_instance_group]
project.instance_groups.add(ig_tmp)
assert pu.preferred_instance_groups == [ig_tmp, ig_org, controlplane_instance_group]
def test_job_instance_groups(self, instance_group_factory, inventory, project, default_instance_group):
jt = JobTemplate.objects.create(inventory=inventory, project=project)
job = jt.create_unified_job()


@@ -3,6 +3,7 @@ from unittest import mock
from awx.main.models.label import Label
from awx.main.models.unified_jobs import UnifiedJobTemplate, UnifiedJob
from awx.main.models.inventory import Inventory
mock_query_set = mock.MagicMock()
@@ -10,43 +11,45 @@ mock_query_set = mock.MagicMock()
mock_objects = mock.MagicMock(filter=mock.MagicMock(return_value=mock_query_set))
@pytest.mark.django_db
@mock.patch('awx.main.models.label.Label.objects', mock_objects)
class TestLabelFilterMocked:
def test_get_orphaned_labels(self, mocker):
ret = Label.get_orphaned_labels()
assert mock_query_set == ret
Label.objects.filter.assert_called_with(organization=None, unifiedjobtemplate_labels__isnull=True)
Label.objects.filter.assert_called_with(organization=None, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True)
def test_is_detached(self, mocker):
mock_query_set.count.return_value = 1
mock_query_set.exists.return_value = True
label = Label(id=37)
ret = label.is_detached()
assert ret is True
Label.objects.filter.assert_called_with(id=37, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True)
mock_query_set.count.assert_called_with()
Label.objects.filter.assert_called_with(id=37, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True)
mock_query_set.exists.assert_called_with()
def test_is_detached_not(self, mocker):
mock_query_set.count.return_value = 0
mock_query_set.exists.return_value = False
label = Label(id=37)
ret = label.is_detached()
assert ret is False
Label.objects.filter.assert_called_with(id=37, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True)
mock_query_set.count.assert_called_with()
Label.objects.filter.assert_called_with(id=37, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True)
mock_query_set.exists.assert_called_with()
@pytest.mark.parametrize(
"jt_count,j_count,expected",
"jt_count,j_count,inv_count,expected",
[
(1, 0, True),
(0, 1, True),
(1, 1, False),
(1, 0, 0, True),
(0, 1, 0, True),
(0, 0, 1, True),
(1, 1, 1, False),
],
)
def test_is_candidate_for_detach(self, mocker, jt_count, j_count, expected):
def test_is_candidate_for_detach(self, mocker, jt_count, j_count, inv_count, expected):
mock_job_qs = mocker.MagicMock()
mock_job_qs.count = mocker.MagicMock(return_value=j_count)
mocker.patch.object(UnifiedJob, 'objects', mocker.MagicMock(filter=mocker.MagicMock(return_value=mock_job_qs)))
@@ -55,12 +58,18 @@ class TestLabelFilterMocked:
mock_jt_qs.count = mocker.MagicMock(return_value=jt_count)
mocker.patch.object(UnifiedJobTemplate, 'objects', mocker.MagicMock(filter=mocker.MagicMock(return_value=mock_jt_qs)))
mock_inv_qs = mocker.MagicMock()
mock_inv_qs.count = mocker.MagicMock(return_value=inv_count)
mocker.patch.object(Inventory, 'objects', mocker.MagicMock(filter=mocker.MagicMock(return_value=mock_inv_qs)))
label = Label(id=37)
ret = label.is_candidate_for_detach()
UnifiedJob.objects.filter.assert_called_with(labels__in=[label.id])
UnifiedJobTemplate.objects.filter.assert_called_with(labels__in=[label.id])
Inventory.objects.filter.assert_called_with(labels__in=[label.id])
mock_job_qs.count.assert_called_with()
mock_jt_qs.count.assert_called_with()
mock_inv_qs.count.assert_called_with()
assert ret is expected


@@ -0,0 +1,61 @@
import pytest
from unittest import mock
from awx.main.utils.common import (
convert_mem_str_to_bytes,
get_mem_effective_capacity,
get_corrected_memory,
convert_cpu_str_to_decimal_cpu,
get_cpu_effective_capacity,
get_corrected_cpu,
)
@pytest.mark.parametrize(
"value,converted_value,mem_capacity",
[
('2G', 2000000000, 19),
('4G', 4000000000, 38),
('2Gi', 2147483648, 20),
('2.1G', 1, 1), # expressing memory with non-integers is not supported, and we'll fall back to 1 fork for memory capacity.
('4Gi', 4294967296, 40),
('2M', 2000000, 1),
('3M', 3000000, 1),
('2Mi', 2097152, 1),
('2048Mi', 2147483648, 20),
('4096Mi', 4294967296, 40),
('64G', 64000000000, 610),
('64Garbage', 1, 1),
],
)
def test_SYSTEM_TASK_ABS_MEM_conversion(value, converted_value, mem_capacity):
with mock.patch('django.conf.settings') as mock_settings:
mock_settings.SYSTEM_TASK_ABS_MEM = value
mock_settings.SYSTEM_TASK_FORKS_MEM = 100
mock_settings.IS_K8S = True
assert convert_mem_str_to_bytes(value) == converted_value
assert get_corrected_memory(-1) == converted_value
assert get_mem_effective_capacity(-1) == mem_capacity
@pytest.mark.parametrize(
"value,converted_value,cpu_capacity",
[
('2', 2.0, 8),
('1.5', 1.5, 6),
('100m', 0.1, 1),
('2000m', 2.0, 8),
('4MillionCPUm', 1.0, 4), # Any suffix other than 'm' is not supported; we fall back to 1 CPU
('Random', 1.0, 4), # Any value other than an integer, a float, or millicores (e.g. 1, 1.0, or 1000m) is not supported; fall back to 1 CPU
('2505m', 2.5, 10),
('1.55', 1.6, 6),
],
)
def test_SYSTEM_TASK_ABS_CPU_conversion(value, converted_value, cpu_capacity):
with mock.patch('django.conf.settings') as mock_settings:
mock_settings.SYSTEM_TASK_ABS_CPU = value
mock_settings.SYSTEM_TASK_FORKS_CPU = 4
assert convert_cpu_str_to_decimal_cpu(value) == converted_value
assert get_corrected_cpu(-1) == converted_value
assert get_cpu_effective_capacity(-1) == cpu_capacity
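A minimal parser matching the conversions the parametrized cases above encode might look like the following. It is a sketch of the assumed semantics (decimal suffixes are powers of 1000, binary suffixes powers of 1024, CPU may be given in millicores, unparseable values fall back to 1), not AWX's actual `convert_mem_str_to_bytes` / `convert_cpu_str_to_decimal_cpu` implementation, and the default fork sizes are hypothetical:

```python
# Sketch of k8s-style resource string parsing as exercised by the test
# cases above. Illustrative only; not the AWX implementation.
from decimal import Decimal

# Binary suffixes must be checked before their one-letter decimal prefixes.
MEM_SUFFIXES = {'Gi': 2**30, 'Mi': 2**20, 'G': 10**9, 'M': 10**6}


def mem_str_to_bytes(value):
    for suffix, factor in MEM_SUFFIXES.items():
        if value.endswith(suffix):
            number = value[: -len(suffix)]
            # non-integer amounts (e.g. '2.1G') are unsupported -> 1
            return int(number) * factor if number.isdigit() else 1
    # assumption: a bare integer is already a byte count
    return int(value) if value.isdigit() else 1


def mem_effective_capacity(mem_bytes, forks_mem_mb=100):
    # at least one fork, at forks_mem_mb megabytes per fork
    return max(1, mem_bytes // (forks_mem_mb * 1024 * 1024))


def cpu_str_to_decimal(value):
    try:
        cpu = Decimal(value[:-1]) / 1000 if value.endswith('m') else Decimal(value)
    except ArithmeticError:
        return Decimal('1.0')  # unsupported format -> 1 CPU
    return round(cpu, 1)  # k8s granularity is 100m, so keep one decimal place


def cpu_effective_capacity(cpu, forks_per_cpu=4):
    return max(1, int(cpu * forks_per_cpu))
```

Against the cases above: `mem_str_to_bytes('2048Mi')` gives 2147483648 (20 forks at 100 MB per fork) and `cpu_str_to_decimal('2505m')` gives `Decimal('2.5')` (10 forks at 4 forks per CPU).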


@@ -18,6 +18,8 @@ class FakeObject(object):
class Job(FakeObject):
task_impact = 43
is_container_group_task = False
controller_node = ''
execution_node = ''
def log_format(self):
return 'job 382 (fake)'


@@ -1,5 +1,4 @@
# -*- coding: utf-8 -*-
import configparser
import json
import os
@@ -34,7 +33,7 @@ from awx.main.models import (
)
from awx.main.models.credential import HIDDEN_PASSWORD, ManagedCredentialType
from awx.main import tasks
from awx.main.tasks import jobs, system
from awx.main.utils import encrypt_field, encrypt_value
from awx.main.utils.safe_yaml import SafeLoader
from awx.main.utils.execution_environments import CONTAINER_ROOT, to_host_path
@@ -113,12 +112,12 @@ def adhoc_update_model_wrapper(adhoc_job):
def test_send_notifications_not_list():
with pytest.raises(TypeError):
tasks.system.send_notifications(None)
system.send_notifications(None)
def test_send_notifications_job_id(mocker):
with mocker.patch('awx.main.models.UnifiedJob.objects.get'):
tasks.system.send_notifications([], job_id=1)
system.send_notifications([], job_id=1)
assert UnifiedJob.objects.get.called
assert UnifiedJob.objects.get.called_with(id=1)
@@ -127,7 +126,7 @@ def test_work_success_callback_missing_job():
task_data = {'type': 'project_update', 'id': 9999}
with mock.patch('django.db.models.query.QuerySet.get') as get_mock:
get_mock.side_effect = ProjectUpdate.DoesNotExist()
assert tasks.system.handle_work_success(task_data) is None
assert system.handle_work_success(task_data) is None
@mock.patch('awx.main.models.UnifiedJob.objects.get')
@@ -138,7 +137,7 @@ def test_send_notifications_list(mock_notifications_filter, mock_job_get, mocker
mock_notifications = [mocker.MagicMock(spec=Notification, subject="test", body={'hello': 'world'})]
mock_notifications_filter.return_value = mock_notifications
tasks.system.send_notifications([1, 2], job_id=1)
system.send_notifications([1, 2], job_id=1)
assert Notification.objects.filter.call_count == 1
assert mock_notifications[0].status == "successful"
assert mock_notifications[0].save.called
@@ -168,7 +167,7 @@ def test_safe_env_returns_new_copy():
@pytest.mark.parametrize("source,expected", [(None, True), (False, False), (True, True)])
def test_openstack_client_config_generation(mocker, source, expected, private_data_dir):
update = tasks.jobs.RunInventoryUpdate()
update = jobs.RunInventoryUpdate()
credential_type = CredentialType.defaults['openstack']()
inputs = {
'host': 'https://keystone.openstack.example.org',
@@ -208,7 +207,7 @@ def test_openstack_client_config_generation(mocker, source, expected, private_da
@pytest.mark.parametrize("source,expected", [(None, True), (False, False), (True, True)])
def test_openstack_client_config_generation_with_project_domain_name(mocker, source, expected, private_data_dir):
update = tasks.jobs.RunInventoryUpdate()
update = jobs.RunInventoryUpdate()
credential_type = CredentialType.defaults['openstack']()
inputs = {
'host': 'https://keystone.openstack.example.org',
@@ -250,7 +249,7 @@ def test_openstack_client_config_generation_with_project_domain_name(mocker, sou
@pytest.mark.parametrize("source,expected", [(None, True), (False, False), (True, True)])
def test_openstack_client_config_generation_with_region(mocker, source, expected, private_data_dir):
update = tasks.jobs.RunInventoryUpdate()
update = jobs.RunInventoryUpdate()
credential_type = CredentialType.defaults['openstack']()
inputs = {
'host': 'https://keystone.openstack.example.org',
@@ -294,7 +293,7 @@ def test_openstack_client_config_generation_with_region(mocker, source, expected
@pytest.mark.parametrize("source,expected", [(False, False), (True, True)])
def test_openstack_client_config_generation_with_private_source_vars(mocker, source, expected, private_data_dir):
update = tasks.jobs.RunInventoryUpdate()
update = jobs.RunInventoryUpdate()
credential_type = CredentialType.defaults['openstack']()
inputs = {
'host': 'https://keystone.openstack.example.org',
@@ -357,7 +356,7 @@ class TestExtraVarSanitation(TestJobExecution):
job.created_by = User(pk=123, username='angry-spud')
job.inventory = Inventory(pk=123, name='example-inv')
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
fd = open(os.path.join(private_data_dir, 'env', 'extravars'))
@@ -393,7 +392,7 @@ class TestExtraVarSanitation(TestJobExecution):
def test_launchtime_vars_unsafe(self, job, private_data_dir):
job.extra_vars = json.dumps({'msg': self.UNSAFE})
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
@@ -404,7 +403,7 @@ class TestExtraVarSanitation(TestJobExecution):
def test_nested_launchtime_vars_unsafe(self, job, private_data_dir):
job.extra_vars = json.dumps({'msg': {'a': [self.UNSAFE]}})
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
@@ -415,7 +414,7 @@ class TestExtraVarSanitation(TestJobExecution):
def test_allowed_jt_extra_vars(self, job, private_data_dir):
job.job_template.extra_vars = job.extra_vars = json.dumps({'msg': self.UNSAFE})
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
@@ -427,7 +426,7 @@ class TestExtraVarSanitation(TestJobExecution):
def test_nested_allowed_vars(self, job, private_data_dir):
job.extra_vars = json.dumps({'msg': {'a': {'b': [self.UNSAFE]}}})
job.job_template.extra_vars = job.extra_vars
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
@@ -441,7 +440,7 @@ class TestExtraVarSanitation(TestJobExecution):
# `other_var=SENSITIVE`
job.job_template.extra_vars = json.dumps({'msg': self.UNSAFE})
job.extra_vars = json.dumps({'msg': 'other-value', 'other_var': self.UNSAFE})
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
@@ -456,7 +455,7 @@ class TestExtraVarSanitation(TestJobExecution):
def test_overwritten_jt_extra_vars(self, job, private_data_dir):
job.job_template.extra_vars = json.dumps({'msg': 'SAFE'})
job.extra_vars = json.dumps({'msg': self.UNSAFE})
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.build_extra_vars_file(job, private_data_dir)
@@ -472,7 +471,7 @@ class TestGenericRun:
job.websocket_emit_status = mock.Mock()
job.execution_environment = execution_environment
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
task.update_model = mock.Mock(return_value=job)
task.model.objects.get = mock.Mock(return_value=job)
@@ -494,7 +493,7 @@ class TestGenericRun:
job.send_notification_templates = mock.Mock()
job.execution_environment = execution_environment
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
task.update_model = mock.Mock(wraps=update_model_wrapper)
task.model.objects.get = mock.Mock(return_value=job)
@@ -508,45 +507,45 @@ class TestGenericRun:
assert c in task.update_model.call_args_list
def test_event_count(self):
task = tasks.jobs.RunJob()
task.dispatcher = mock.MagicMock()
task.instance = Job()
task.event_ct = 0
task = jobs.RunJob()
task.runner_callback.dispatcher = mock.MagicMock()
task.runner_callback.instance = Job()
task.runner_callback.event_ct = 0
event_data = {}
[task.event_handler(event_data) for i in range(20)]
assert 20 == task.event_ct
[task.runner_callback.event_handler(event_data) for i in range(20)]
assert 20 == task.runner_callback.event_ct
def test_finished_callback_eof(self):
task = tasks.jobs.RunJob()
task.dispatcher = mock.MagicMock()
task.instance = Job(pk=1, id=1)
task.event_ct = 17
task.finished_callback(None)
task.dispatcher.dispatch.assert_called_with({'event': 'EOF', 'final_counter': 17, 'job_id': 1, 'guid': None})
task = jobs.RunJob()
task.runner_callback.dispatcher = mock.MagicMock()
task.runner_callback.instance = Job(pk=1, id=1)
task.runner_callback.event_ct = 17
task.runner_callback.finished_callback(None)
task.runner_callback.dispatcher.dispatch.assert_called_with({'event': 'EOF', 'final_counter': 17, 'job_id': 1, 'guid': None})
def test_save_job_metadata(self, job, update_model_wrapper):
class MockMe:
pass
task = tasks.jobs.RunJob()
task.instance = job
task.safe_env = {'secret_key': 'redacted_value'}
task.update_model = mock.Mock(wraps=update_model_wrapper)
task = jobs.RunJob()
task.runner_callback.instance = job
task.runner_callback.safe_env = {'secret_key': 'redacted_value'}
task.runner_callback.update_model = mock.Mock(wraps=update_model_wrapper)
runner_config = MockMe()
runner_config.command = {'foo': 'bar'}
runner_config.cwd = '/foobar'
runner_config.env = {'switch': 'blade', 'foot': 'ball', 'secret_key': 'secret_value'}
task.status_handler({'status': 'starting'}, runner_config)
task.runner_callback.status_handler({'status': 'starting'}, runner_config)
task.update_model.assert_called_with(
task.runner_callback.update_model.assert_called_with(
1, job_args=json.dumps({'foo': 'bar'}), job_cwd='/foobar', job_env={'switch': 'blade', 'foot': 'ball', 'secret_key': 'redacted_value'}
)
def test_created_by_extra_vars(self):
job = Job(created_by=User(pk=123, username='angry-spud'))
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task._write_extra_vars_file = mock.Mock()
task.build_extra_vars_file(job, None)
@@ -563,7 +562,7 @@ class TestGenericRun:
job.extra_vars = json.dumps({'super_secret': encrypt_value('CLASSIFIED', pk=None)})
job.survey_passwords = {'super_secret': '$encrypted$'}
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task._write_extra_vars_file = mock.Mock()
task.build_extra_vars_file(job, None)
@@ -576,7 +575,7 @@ class TestGenericRun:
job = Job(project=Project(), inventory=Inventory())
job.execution_environment = execution_environment
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
task._write_extra_vars_file = mock.Mock()
@@ -595,7 +594,7 @@ class TestAdhocRun(TestJobExecution):
adhoc_job.websocket_emit_status = mock.Mock()
adhoc_job.send_notification_templates = mock.Mock()
task = tasks.jobs.RunAdHocCommand()
task = jobs.RunAdHocCommand()
task.update_model = mock.Mock(wraps=adhoc_update_model_wrapper)
task.model.objects.get = mock.Mock(return_value=adhoc_job)
task.build_inventory = mock.Mock()
@@ -619,7 +618,7 @@ class TestAdhocRun(TestJobExecution):
})
#adhoc_job.websocket_emit_status = mock.Mock()
task = tasks.jobs.RunAdHocCommand()
task = jobs.RunAdHocCommand()
#task.update_model = mock.Mock(wraps=adhoc_update_model_wrapper)
#task.build_inventory = mock.Mock(return_value='/tmp/something.inventory')
task._write_extra_vars_file = mock.Mock()
@@ -634,7 +633,7 @@ class TestAdhocRun(TestJobExecution):
def test_created_by_extra_vars(self):
adhoc_job = AdHocCommand(created_by=User(pk=123, username='angry-spud'))
task = tasks.jobs.RunAdHocCommand()
task = jobs.RunAdHocCommand()
task._write_extra_vars_file = mock.Mock()
task.build_extra_vars_file(adhoc_job, None)
@@ -693,7 +692,7 @@ class TestJobCredentials(TestJobExecution):
}
def test_username_jinja_usage(self, job, private_data_dir):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
ssh = CredentialType.defaults['ssh']()
credential = Credential(pk=1, credential_type=ssh, inputs={'username': '{{ ansible_ssh_pass }}'})
job.credentials.add(credential)
@@ -704,7 +703,7 @@ class TestJobCredentials(TestJobExecution):
@pytest.mark.parametrize("flag", ['become_username', 'become_method'])
def test_become_jinja_usage(self, job, private_data_dir, flag):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
ssh = CredentialType.defaults['ssh']()
credential = Credential(pk=1, credential_type=ssh, inputs={'username': 'joe', flag: '{{ ansible_ssh_pass }}'})
job.credentials.add(credential)
@@ -715,7 +714,7 @@ class TestJobCredentials(TestJobExecution):
assert 'Jinja variables are not allowed' in str(e.value)
def test_ssh_passwords(self, job, private_data_dir, field, password_name, expected_flag):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
ssh = CredentialType.defaults['ssh']()
credential = Credential(pk=1, credential_type=ssh, inputs={'username': 'bob', field: 'secret'})
credential.inputs[field] = encrypt_field(credential, field)
@@ -732,7 +731,7 @@ class TestJobCredentials(TestJobExecution):
assert expected_flag in ' '.join(args)
def test_net_ssh_key_unlock(self, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
net = CredentialType.defaults['net']()
credential = Credential(pk=1, credential_type=net, inputs={'ssh_key_unlock': 'secret'})
credential.inputs['ssh_key_unlock'] = encrypt_field(credential, 'ssh_key_unlock')
@@ -745,7 +744,7 @@ class TestJobCredentials(TestJobExecution):
assert 'secret' in expect_passwords.values()
def test_net_first_ssh_key_unlock_wins(self, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
for i in range(3):
net = CredentialType.defaults['net']()
credential = Credential(pk=i, credential_type=net, inputs={'ssh_key_unlock': 'secret{}'.format(i)})
@@ -759,7 +758,7 @@ class TestJobCredentials(TestJobExecution):
assert 'secret0' in expect_passwords.values()
def test_prefer_ssh_over_net_ssh_key_unlock(self, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
net = CredentialType.defaults['net']()
net_credential = Credential(pk=1, credential_type=net, inputs={'ssh_key_unlock': 'net_secret'})
net_credential.inputs['ssh_key_unlock'] = encrypt_field(net_credential, 'ssh_key_unlock')
@@ -778,7 +777,7 @@ class TestJobCredentials(TestJobExecution):
assert 'ssh_secret' in expect_passwords.values()
def test_vault_password(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
vault = CredentialType.defaults['vault']()
credential = Credential(pk=1, credential_type=vault, inputs={'vault_password': 'vault-me'})
credential.inputs['vault_password'] = encrypt_field(credential, 'vault_password')
@@ -793,7 +792,7 @@ class TestJobCredentials(TestJobExecution):
assert '--ask-vault-pass' in ' '.join(args)
def test_vault_password_ask(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
vault = CredentialType.defaults['vault']()
credential = Credential(pk=1, credential_type=vault, inputs={'vault_password': 'ASK'})
credential.inputs['vault_password'] = encrypt_field(credential, 'vault_password')
@@ -808,7 +807,7 @@ class TestJobCredentials(TestJobExecution):
assert '--ask-vault-pass' in ' '.join(args)
def test_multi_vault_password(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
vault = CredentialType.defaults['vault']()
for i, label in enumerate(['dev', 'prod', 'dotted.name']):
credential = Credential(pk=i, credential_type=vault, inputs={'vault_password': 'pass@{}'.format(label), 'vault_id': label})
@@ -831,7 +830,7 @@ class TestJobCredentials(TestJobExecution):
assert '--vault-id dotted.name@prompt' in ' '.join(args)
def test_multi_vault_id_conflict(self, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
vault = CredentialType.defaults['vault']()
for i in range(2):
credential = Credential(pk=i, credential_type=vault, inputs={'vault_password': 'some-pass', 'vault_id': 'conflict'})
@@ -844,7 +843,7 @@ class TestJobCredentials(TestJobExecution):
assert 'multiple vault credentials were specified with --vault-id' in str(e.value)
def test_multi_vault_password_ask(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
vault = CredentialType.defaults['vault']()
for i, label in enumerate(['dev', 'prod']):
credential = Credential(pk=i, credential_type=vault, inputs={'vault_password': 'ASK', 'vault_id': label})
@@ -999,7 +998,7 @@ class TestJobCredentials(TestJobExecution):
assert safe_env['VMWARE_PASSWORD'] == HIDDEN_PASSWORD
def test_openstack_credentials(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
openstack = CredentialType.defaults['openstack']()
credential = Credential(
@@ -1067,7 +1066,7 @@ class TestJobCredentials(TestJobExecution):
],
)
def test_net_credentials(self, authorize, expected_authorize, job, private_data_dir):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
net = CredentialType.defaults['net']()
inputs = {'username': 'bob', 'password': 'secret', 'ssh_key_data': self.EXAMPLE_PRIVATE_KEY, 'authorize_password': 'authorizeme'}
@@ -1135,7 +1134,7 @@ class TestJobCredentials(TestJobExecution):
assert env['TURBO_BUTTON'] == str(True)
def test_custom_environment_injectors_with_reserved_env_var(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
some_cloud = CredentialType(
kind='cloud',
@@ -1171,7 +1170,7 @@ class TestJobCredentials(TestJobExecution):
assert safe_env['MY_CLOUD_PRIVATE_VAR'] == HIDDEN_PASSWORD
def test_custom_environment_injectors_with_extra_vars(self, private_data_dir, job):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
some_cloud = CredentialType(
kind='cloud',
name='SomeCloud',
@@ -1190,7 +1189,7 @@ class TestJobCredentials(TestJobExecution):
assert hasattr(extra_vars["api_token"], '__UNSAFE__')
def test_custom_environment_injectors_with_boolean_extra_vars(self, job, private_data_dir):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
some_cloud = CredentialType(
kind='cloud',
name='SomeCloud',
@@ -1209,7 +1208,7 @@ class TestJobCredentials(TestJobExecution):
return ['successful', 0]
def test_custom_environment_injectors_with_complicated_boolean_template(self, job, private_data_dir):
task = tasks.jobs.RunJob()
task = jobs.RunJob()
some_cloud = CredentialType(
kind='cloud',
name='SomeCloud',
@@ -1230,7 +1229,7 @@ class TestJobCredentials(TestJobExecution):
"""
extra_vars that contain secret field values should be censored in the DB
"""
task = tasks.jobs.RunJob()
task = jobs.RunJob()
some_cloud = CredentialType(
kind='cloud',
name='SomeCloud',
@@ -1335,7 +1334,7 @@ class TestJobCredentials(TestJobExecution):
def test_awx_task_env(self, settings, private_data_dir, job):
settings.AWX_TASK_ENV = {'FOO': 'BAR'}
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
env = task.build_env(job, private_data_dir)
@@ -1362,7 +1361,7 @@ class TestProjectUpdateGalaxyCredentials(TestJobExecution):
def test_galaxy_credentials_ignore_certs(self, private_data_dir, project_update, ignore):
settings.GALAXY_IGNORE_CERTS = ignore
task = tasks.jobs.RunProjectUpdate()
task = jobs.RunProjectUpdate()
task.instance = project_update
env = task.build_env(project_update, private_data_dir)
if ignore:
@@ -1371,7 +1370,7 @@ class TestProjectUpdateGalaxyCredentials(TestJobExecution):
assert 'ANSIBLE_GALAXY_IGNORE' not in env
def test_galaxy_credentials_empty(self, private_data_dir, project_update):
class RunProjectUpdate(tasks.jobs.RunProjectUpdate):
class RunProjectUpdate(jobs.RunProjectUpdate):
__vars__ = {}
def _write_extra_vars_file(self, private_data_dir, extra_vars, *kw):
@@ -1390,7 +1389,7 @@ class TestProjectUpdateGalaxyCredentials(TestJobExecution):
assert not k.startswith('ANSIBLE_GALAXY_SERVER')
def test_single_public_galaxy(self, private_data_dir, project_update):
class RunProjectUpdate(tasks.jobs.RunProjectUpdate):
class RunProjectUpdate(jobs.RunProjectUpdate):
__vars__ = {}
def _write_extra_vars_file(self, private_data_dir, extra_vars, *kw):
@@ -1439,7 +1438,7 @@ class TestProjectUpdateGalaxyCredentials(TestJobExecution):
)
project_update.project.organization.galaxy_credentials.add(public_galaxy)
project_update.project.organization.galaxy_credentials.add(rh)
task = tasks.jobs.RunProjectUpdate()
task = jobs.RunProjectUpdate()
task.instance = project_update
env = task.build_env(project_update, private_data_dir)
assert sorted([(k, v) for k, v in env.items() if k.startswith('ANSIBLE_GALAXY')]) == [
@@ -1481,7 +1480,7 @@ class TestProjectUpdateCredentials(TestJobExecution):
}
def test_username_and_password_auth(self, project_update, scm_type):
task = tasks.jobs.RunProjectUpdate()
task = jobs.RunProjectUpdate()
ssh = CredentialType.defaults['ssh']()
project_update.scm_type = scm_type
project_update.credential = Credential(pk=1, credential_type=ssh, inputs={'username': 'bob', 'password': 'secret'})
@@ -1495,7 +1494,7 @@ class TestProjectUpdateCredentials(TestJobExecution):
assert 'secret' in expect_passwords.values()
def test_ssh_key_auth(self, project_update, scm_type):
task = tasks.jobs.RunProjectUpdate()
task = jobs.RunProjectUpdate()
ssh = CredentialType.defaults['ssh']()
project_update.scm_type = scm_type
project_update.credential = Credential(pk=1, credential_type=ssh, inputs={'username': 'bob', 'ssh_key_data': self.EXAMPLE_PRIVATE_KEY})
@@ -1509,7 +1508,7 @@ class TestProjectUpdateCredentials(TestJobExecution):
def test_awx_task_env(self, project_update, settings, private_data_dir, scm_type, execution_environment):
project_update.execution_environment = execution_environment
settings.AWX_TASK_ENV = {'FOO': 'BAR'}
task = tasks.jobs.RunProjectUpdate()
task = jobs.RunProjectUpdate()
task.instance = project_update
project_update.scm_type = scm_type
@@ -1524,7 +1523,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
return InventoryUpdate(pk=1, execution_environment=execution_environment, inventory_source=InventorySource(pk=1, inventory=Inventory(pk=1)))
def test_source_without_credential(self, mocker, inventory_update, private_data_dir):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
inventory_update.source = 'ec2'
inventory_update.get_cloud_credential = mocker.Mock(return_value=None)
@@ -1537,7 +1536,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert 'AWS_SECRET_ACCESS_KEY' not in env
def test_ec2_source(self, private_data_dir, inventory_update, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
aws = CredentialType.defaults['aws']()
inventory_update.source = 'ec2'
@@ -1561,7 +1560,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert safe_env['AWS_SECRET_ACCESS_KEY'] == HIDDEN_PASSWORD
def test_vmware_source(self, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
vmware = CredentialType.defaults['vmware']()
inventory_update.source = 'vmware'
@@ -1589,7 +1588,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
env["VMWARE_VALIDATE_CERTS"] == "False",
def test_azure_rm_source_with_tenant(self, private_data_dir, inventory_update, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
azure_rm = CredentialType.defaults['azure_rm']()
inventory_update.source = 'azure_rm'
@@ -1625,7 +1624,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert safe_env['AZURE_SECRET'] == HIDDEN_PASSWORD
def test_azure_rm_source_with_password(self, private_data_dir, inventory_update, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
azure_rm = CredentialType.defaults['azure_rm']()
inventory_update.source = 'azure_rm'
@@ -1654,7 +1653,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert safe_env['AZURE_PASSWORD'] == HIDDEN_PASSWORD
def test_gce_source(self, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
gce = CredentialType.defaults['gce']()
inventory_update.source = 'gce'
@@ -1684,7 +1683,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert json_data['project_id'] == 'some-project'
def test_openstack_source(self, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
openstack = CredentialType.defaults['openstack']()
inventory_update.source = 'openstack'
@@ -1724,7 +1723,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
)
def test_satellite6_source(self, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
satellite6 = CredentialType.defaults['satellite6']()
inventory_update.source = 'satellite6'
@@ -1747,7 +1746,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert safe_env["FOREMAN_PASSWORD"] == HIDDEN_PASSWORD
def test_insights_source(self, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
insights = CredentialType.defaults['insights']()
inventory_update.source = 'insights'
@@ -1776,7 +1775,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
@pytest.mark.parametrize('verify', [True, False])
def test_tower_source(self, verify, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
tower = CredentialType.defaults['controller']()
inventory_update.source = 'controller'
@@ -1804,7 +1803,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert safe_env['CONTROLLER_PASSWORD'] == HIDDEN_PASSWORD
def test_tower_source_ssl_verify_empty(self, inventory_update, private_data_dir, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
tower = CredentialType.defaults['controller']()
inventory_update.source = 'controller'
@@ -1832,7 +1831,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
assert env['TOWER_VERIFY_SSL'] == 'False'
def test_awx_task_env(self, inventory_update, private_data_dir, settings, mocker):
task = tasks.jobs.RunInventoryUpdate()
task = jobs.RunInventoryUpdate()
task.instance = inventory_update
gce = CredentialType.defaults['gce']()
inventory_update.source = 'gce'
@@ -1883,7 +1882,7 @@ def test_aquire_lock_open_fail_logged(logging_getLogger, os_open):
logger = mock.Mock()
logging_getLogger.return_value = logger
ProjectUpdate = tasks.jobs.RunProjectUpdate()
ProjectUpdate = jobs.RunProjectUpdate()
with pytest.raises(OSError):
ProjectUpdate.acquire_lock(instance)
@@ -1910,7 +1909,7 @@ def test_aquire_lock_acquisition_fail_logged(fcntl_lockf, logging_getLogger, os_
fcntl_lockf.side_effect = err
ProjectUpdate = tasks.jobs.RunProjectUpdate()
ProjectUpdate = jobs.RunProjectUpdate()
with pytest.raises(IOError):
ProjectUpdate.acquire_lock(instance)
os_close.assert_called_with(3)
@@ -1947,7 +1946,7 @@ def test_notification_job_not_finished(logging_getLogger, mocker):
logging_getLogger.return_value = logger
with mocker.patch('awx.main.models.UnifiedJob.objects.get', uj):
tasks.system.handle_success_and_failure_notifications(1)
system.handle_success_and_failure_notifications(1)
assert logger.warn.called_with(f"Failed to even try to send notifications for job '{uj}' due to job not being in finished state.")
@@ -1955,7 +1954,7 @@ def test_notification_job_finished(mocker):
uj = mocker.MagicMock(send_notification_templates=mocker.MagicMock(), finished=True)
with mocker.patch('awx.main.models.UnifiedJob.objects.get', mocker.MagicMock(return_value=uj)):
tasks.system.handle_success_and_failure_notifications(1)
system.handle_success_and_failure_notifications(1)
uj.send_notification_templates.assert_called()
@@ -1964,7 +1963,7 @@ def test_job_run_no_ee():
proj = Project(pk=1, organization=org)
job = Job(project=proj, organization=org, inventory=Inventory(pk=1))
job.execution_environment = None
task = tasks.jobs.RunJob()
task = jobs.RunJob()
task.instance = job
task.update_model = mock.Mock(return_value=job)
task.model.objects.get = mock.Mock(return_value=job)
@@ -1983,7 +1982,7 @@ def test_project_update_no_ee():
proj = Project(pk=1, organization=org)
project_update = ProjectUpdate(pk=1, project=proj, scm_type='git')
project_update.execution_environment = None
task = tasks.jobs.RunProjectUpdate()
task = jobs.RunProjectUpdate()
task.instance = project_update
with pytest.raises(RuntimeError) as e:


@@ -692,28 +692,33 @@ def parse_yaml_or_json(vars_str, silent_failure=True):
     return vars_dict


-def get_cpu_effective_capacity(cpu_count):
-    from django.conf import settings
+def convert_cpu_str_to_decimal_cpu(cpu_str):
+    """Convert a string indicating cpu units to decimal.

-    settings_abscpu = getattr(settings, 'SYSTEM_TASK_ABS_CPU', None)
-    env_abscpu = os.getenv('SYSTEM_TASK_ABS_CPU', None)
+    Useful for dealing with cpu setting that may be expressed in units compatible with
+    kubernetes.

-    if env_abscpu is not None:
-        return int(env_abscpu)
-    elif settings_abscpu is not None:
-        return int(settings_abscpu)
+    See https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units
+    """
+    cpu = cpu_str
+    millicores = False

-    settings_forkcpu = getattr(settings, 'SYSTEM_TASK_FORKS_CPU', None)
-    env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
+    if cpu_str[-1] == 'm':
+        cpu = cpu_str[:-1]
+        millicores = True

-    if env_forkcpu:
-        forkcpu = int(env_forkcpu)
-    elif settings_forkcpu:
-        forkcpu = int(settings_forkcpu)
-    else:
-        forkcpu = 4
+    try:
+        cpu = float(cpu)
+    except ValueError:
+        cpu = 1.0
+        millicores = False
+        logger.warning(f"Could not convert SYSTEM_TASK_ABS_CPU {cpu_str} to a decimal number, falling back to default of 1 cpu")

-    return cpu_count * forkcpu
+    if millicores:
+        cpu = cpu / 1000
+
+    # Per kubernetes docs, fractional CPU less than .1 are not allowed
+    return max(0.1, round(cpu, 1))
def get_corrected_cpu(cpu_count): # formerlly get_cpu_capacity
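As a quick reference for the conversion in this hunk, here is a minimal standalone sketch of the same logic (the function name `k8s_cpu_to_decimal` is mine, not AWX's, and the warning log is omitted). It assumes the Kubernetes convention that a trailing `m` denotes millicores and that anything finer than 100m rounds up to 0.1 cores:

```python
def k8s_cpu_to_decimal(cpu_str):
    """Convert a k8s-style CPU string ("2", "1.5", "500m") to decimal cores."""
    value, millicores = cpu_str, False
    if cpu_str.endswith('m'):
        value, millicores = cpu_str[:-1], True
    try:
        cpu = float(value)
    except ValueError:
        # mirror the patch: fall back to one core on unparseable input
        cpu, millicores = 1.0, False
    if millicores:
        cpu /= 1000
    # k8s does not allow granularity finer than 100m (0.1 cores)
    return max(0.1, round(cpu, 1))
```

For example, `k8s_cpu_to_decimal('500m')` yields `0.5`, while `'50m'` is clamped up to `0.1`.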
@@ -725,34 +730,70 @@ def get_corrected_cpu(cpu_count): # formerlly get_cpu_capacity
     settings_abscpu = getattr(settings, 'SYSTEM_TASK_ABS_CPU', None)
     env_abscpu = os.getenv('SYSTEM_TASK_ABS_CPU', None)

-    if env_abscpu is not None or settings_abscpu is not None:
-        return 0
+    if env_abscpu is not None:
+        return convert_cpu_str_to_decimal_cpu(env_abscpu)
+    elif settings_abscpu is not None:
+        return convert_cpu_str_to_decimal_cpu(settings_abscpu)

     return cpu_count  # no correction


-def get_mem_effective_capacity(mem_mb):
+def get_cpu_effective_capacity(cpu_count):
     from django.conf import settings

-    settings_absmem = getattr(settings, 'SYSTEM_TASK_ABS_MEM', None)
-    env_absmem = os.getenv('SYSTEM_TASK_ABS_MEM', None)
+    cpu_count = get_corrected_cpu(cpu_count)

-    if env_absmem is not None:
-        return int(env_absmem)
-    elif settings_absmem is not None:
-        return int(settings_absmem)
+    settings_forkcpu = getattr(settings, 'SYSTEM_TASK_FORKS_CPU', None)
+    env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)

-    settings_forkmem = getattr(settings, 'SYSTEM_TASK_FORKS_MEM', None)
-    env_forkmem = os.getenv('SYSTEM_TASK_FORKS_MEM', None)
-
-    if env_forkmem:
-        forkmem = int(env_forkmem)
-    elif settings_forkmem:
-        forkmem = int(settings_forkmem)
+    if env_forkcpu:
+        forkcpu = int(env_forkcpu)
+    elif settings_forkcpu:
+        forkcpu = int(settings_forkcpu)
     else:
-        forkmem = 100
+        forkcpu = 4

-    return max(1, ((mem_mb // 1024 // 1024) - 2048) // forkmem)
+    return max(1, int(cpu_count * forkcpu))
+
+
+def convert_mem_str_to_bytes(mem_str):
+    """Convert string with suffix indicating units to memory in bytes (base 2)
+
+    Useful for dealing with memory setting that may be expressed in units compatible with
+    kubernetes.
+
+    See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory
+    """
+    # If there is no suffix, the memory sourced from the request is in bytes
+    if mem_str.isdigit():
+        return int(mem_str)
+    conversions = {
+        'Ei': lambda x: x * 2**60,
+        'E': lambda x: x * 10**18,
+        'Pi': lambda x: x * 2**50,
+        'P': lambda x: x * 10**15,
+        'Ti': lambda x: x * 2**40,
+        'T': lambda x: x * 10**12,
+        'Gi': lambda x: x * 2**30,
+        'G': lambda x: x * 10**9,
+        'Mi': lambda x: x * 2**20,
+        'M': lambda x: x * 10**6,
+        'Ki': lambda x: x * 2**10,
+        'K': lambda x: x * 10**3,
+    }
+    mem = 0
+    mem_unit = None
+    for i, char in enumerate(mem_str):
+        if not char.isdigit():
+            mem_unit = mem_str[i:]
+            mem = int(mem_str[:i])
+            break
+    if not mem_unit or mem_unit not in conversions.keys():
+        error = f"Unsupported value for SYSTEM_TASK_ABS_MEM: {mem_str}, memory must be expressed in bytes or with known suffix: {conversions.keys()}. Falling back to 1 byte"
+        logger.warning(error)
+        return 1
+    return max(1, conversions[mem_unit](mem))
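The suffix table in this hunk corresponds to the following self-contained sketch (names are mine; the real function also logs a warning before falling back to 1 byte on an unknown suffix):

```python
def k8s_mem_to_bytes(mem_str):
    """Convert a k8s-style memory string ("128974848", "64Mi", "1G") to bytes."""
    # Binary suffixes (Ki, Mi, Gi, ...) are powers of 2; decimal ones (K, M, G, ...) powers of 10
    conversions = {
        'Ei': 2**60, 'E': 10**18, 'Pi': 2**50, 'P': 10**15,
        'Ti': 2**40, 'T': 10**12, 'Gi': 2**30, 'G': 10**9,
        'Mi': 2**20, 'M': 10**6, 'Ki': 2**10, 'K': 10**3,
    }
    if mem_str.isdigit():  # no suffix: the value is already bytes
        return int(mem_str)
    number, suffix = '', ''
    for i, ch in enumerate(mem_str):
        if not ch.isdigit():
            number, suffix = mem_str[:i], mem_str[i:]
            break
    if not number or suffix not in conversions:
        return 1  # the patch logs a warning and falls back to 1 byte here
    return max(1, int(number) * conversions[suffix])
```

So `'2Gi'` becomes 2147483648 bytes while `'2G'` becomes 2000000000 — the binary/decimal distinction matters at this scale.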
def get_corrected_memory(memory):
@@ -761,12 +802,48 @@ def get_corrected_memory(memory):
     settings_absmem = getattr(settings, 'SYSTEM_TASK_ABS_MEM', None)
     env_absmem = os.getenv('SYSTEM_TASK_ABS_MEM', None)

-    if env_absmem is not None or settings_absmem is not None:
-        return 0
+    # Runner returns memory in bytes
+    # so we convert memory from settings to bytes as well.
+    if env_absmem is not None:
+        return convert_mem_str_to_bytes(env_absmem)
+    elif settings_absmem is not None:
+        return convert_mem_str_to_bytes(settings_absmem)

     return memory


+def get_mem_effective_capacity(mem_bytes):
+    from django.conf import settings
+
+    mem_bytes = get_corrected_memory(mem_bytes)
+
+    settings_mem_mb_per_fork = getattr(settings, 'SYSTEM_TASK_FORKS_MEM', None)
+    env_mem_mb_per_fork = os.getenv('SYSTEM_TASK_FORKS_MEM', None)
+
+    if env_mem_mb_per_fork:
+        mem_mb_per_fork = int(env_mem_mb_per_fork)
+    elif settings_mem_mb_per_fork:
+        mem_mb_per_fork = int(settings_mem_mb_per_fork)
+    else:
+        mem_mb_per_fork = 100
+
+    # Per docs, deduct 2GB of memory from the available memory
+    # to cover memory consumption of background tasks when redis/web etc are colocated with
+    # the other control processes
+    memory_penalty_bytes = 2147483648
+    if settings.IS_K8S:
+        # In k8s, this is dealt with differently because
+        # redis and the web containers have their own memory allocation
+        memory_penalty_bytes = 0
+
+    # convert memory to megabytes because our setting of how much memory we
+    # should allocate per fork is in megabytes
+    mem_mb = (mem_bytes - memory_penalty_bytes) // 2**20
+    max_forks_based_on_memory = mem_mb // mem_mb_per_fork
+
+    return max(1, max_forks_based_on_memory)

 _inventory_updates = threading.local()
 _task_manager = threading.local()
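Taken together, the two capacity functions above reduce to the arithmetic below. This is a dependency-free sketch under the patch's defaults (4 forks per core, 100 MB per fork, 2 GiB colocated-services penalty); `is_k8s` stands in for `settings.IS_K8S` and the function names are mine:

```python
def cpu_fork_capacity(cpu_count, forks_per_cpu=4):
    # cpu_count may be fractional on k8s (e.g. 0.5 cores); floor via int(),
    # but always allow at least one fork
    return max(1, int(cpu_count * forks_per_cpu))

def mem_fork_capacity(mem_bytes, mem_mb_per_fork=100, is_k8s=False):
    # Colocated deployments reserve 2 GiB for redis/web/background tasks;
    # on k8s those containers have their own memory allocations, so no penalty
    penalty_bytes = 0 if is_k8s else 2 * 2**30
    mem_mb = (mem_bytes - penalty_bytes) // 2**20
    return max(1, mem_mb // mem_mb_per_fork)
```

For a 4 GiB pod, the penalty halves the usable memory off-k8s (20 forks) versus on-k8s (40 forks), which is exactly the motivation stated in the commit message.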


@@ -0,0 +1,40 @@
from django.db import transaction, DatabaseError
import logging
import time

logger = logging.getLogger('awx.main.tasks.utils')


def update_model(model, pk, _attempt=0, **updates):
    """Reload the model instance from the database and update the
    given fields.
    """
    try:
        with transaction.atomic():
            # Retrieve the model instance.
            instance = model.objects.get(pk=pk)

            # Update the appropriate fields and save the model
            # instance, then return the new instance.
            if updates:
                update_fields = ['modified']
                for field, value in updates.items():
                    setattr(instance, field, value)
                    update_fields.append(field)
                    if field == 'status':
                        update_fields.append('failed')
                instance.save(update_fields=update_fields)
            return instance
    except DatabaseError as e:
        # Log out the error to the debug logger.
        logger.debug('Database error updating %s, retrying in 5 seconds (retry #%d): %s', model._meta.object_name, _attempt + 1, e)

        # Attempt to retry the update, assuming we haven't already
        # tried too many times.
        if _attempt < 5:
            time.sleep(5)
            return update_model(model, pk, _attempt=_attempt + 1, **updates)
        else:
            logger.error('Failed to update %s after %d retries.', model._meta.object_name, _attempt)
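The retry pattern in `update_model` — catch the database error, sleep, and recurse with an incremented attempt counter — generalizes to a small helper. A minimal sketch of that shape (my own, not part of the patch), parameterized on a generic exception type so it runs without Django:

```python
import time

def with_retries(fn, retries=5, delay=5.0, exc=Exception):
    """Call fn(); on `exc`, sleep `delay` seconds and retry up to `retries` times."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except exc:
            if attempt == retries:
                raise  # out of attempts; update_model instead logs and returns None
            time.sleep(delay)
```

One design difference worth noting: `update_model` swallows the final failure (it logs an error and implicitly returns `None`), whereas this sketch re-raises, forcing the caller to decide.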


@@ -73,6 +73,9 @@ AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE = os.getenv('MY_POD_NAMESPACE', 'default')
# Timeout when waiting for pod to enter running state. If the pod is still in pending state , it will be terminated. Valid time units are "s", "m", "h". Example : "5m" , "10s".
AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT = "2h"
# How much capacity controlling a task costs a hybrid or control node
AWX_CONTROL_NODE_TASK_IMPACT = 1
# Internationalization
# https://docs.djangoproject.com/en/dev/topics/i18n/
#
@@ -162,7 +165,7 @@ ALLOWED_HOSTS = []
# reverse proxy.
REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST']
# If we is behind a reverse proxy/load balancer, use this setting to
# If we are behind a reverse proxy/load balancer, use this setting to
# allow the proxy IP addresses from which Tower should trust custom
# REMOTE_HOST_HEADERS header values
# REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', ''REMOTE_ADDR', 'REMOTE_HOST']
@@ -989,3 +992,8 @@ DEFAULT_EXECUTION_QUEUE_NAME = 'default'
DEFAULT_EXECUTION_QUEUE_POD_SPEC_OVERRIDE = ''
# Name of the default controlplane queue
DEFAULT_CONTROL_PLANE_QUEUE_NAME = 'controlplane'
# Extend container runtime attributes.
# For example, to disable SELinux in containers for podman
# DEFAULT_CONTAINER_RUN_OPTIONS = ['--security-opt', 'label=disable']
DEFAULT_CONTAINER_RUN_OPTIONS = []


@@ -21,7 +21,6 @@ from split_settings.tools import optional, include
# Load default settings.
from .defaults import * # NOQA
# awx-manage shell_plus --notebook
NOTEBOOK_ARGUMENTS = ['--NotebookApp.token=', '--ip', '0.0.0.0', '--port', '8888', '--allow-root', '--no-browser']


@@ -614,6 +614,7 @@ class SocialSingleOrganizationMapField(HybridDictField):
users = SocialMapField(allow_null=True, required=False)
remove_admins = fields.BooleanField(required=False)
remove_users = fields.BooleanField(required=False)
organization_alias = SocialMapField(allow_null=True, required=False)
child = _Forbidden()
@@ -723,7 +724,6 @@ class SAMLTeamAttrTeamOrgMapField(HybridDictField):
team = fields.CharField(required=True, allow_null=False)
team_alias = fields.CharField(required=False, allow_null=True)
organization = fields.CharField(required=True, allow_null=False)
organization_alias = fields.CharField(required=False, allow_null=True)
child = _Forbidden()


@@ -77,7 +77,7 @@ def _update_m2m_from_expression(user, related, expr, remove=True):
related.remove(user)
def _update_org_from_attr(user, related, attr, remove, remove_admins, remove_auditors):
def _update_org_from_attr(user, related, attr, remove, remove_admins, remove_auditors, backend):
from awx.main.models import Organization
from django.conf import settings
@@ -86,7 +86,15 @@ def _update_org_from_attr(user, related, attr, remove, remove_admins, remove_aud
for org_name in attr:
try:
if settings.SAML_AUTO_CREATE_OBJECTS:
org = Organization.objects.get_or_create(name=org_name)[0]
try:
organization_alias = backend.setting('ORGANIZATION_MAP').get(org_name).get('organization_alias')
if organization_alias is not None:
organization_name = organization_alias
else:
organization_name = org_name
except Exception:
organization_name = org_name
org = Organization.objects.get_or_create(name=organization_name)[0]
org.create_default_galaxy_credential()
else:
org = Organization.objects.get(name=org_name)
@@ -117,7 +125,12 @@ def update_user_orgs(backend, details, user=None, *args, **kwargs):
org_map = backend.setting('ORGANIZATION_MAP') or {}
for org_name, org_opts in org_map.items():
org = Organization.objects.get_or_create(name=org_name)[0]
organization_alias = org_opts.get('organization_alias')
if organization_alias:
organization_name = organization_alias
else:
organization_name = org_name
org = Organization.objects.get_or_create(name=organization_name)[0]
org.create_default_galaxy_credential()
# Update org admins from expression(s).
@@ -173,9 +186,9 @@ def update_user_orgs_by_saml_attr(backend, details, user=None, *args, **kwargs):
attr_admin_values = kwargs.get('response', {}).get('attributes', {}).get(org_map.get('saml_admin_attr'), [])
attr_auditor_values = kwargs.get('response', {}).get('attributes', {}).get(org_map.get('saml_auditor_attr'), [])
_update_org_from_attr(user, "member_role", attr_values, remove, False, False)
_update_org_from_attr(user, "admin_role", attr_admin_values, False, remove_admins, False)
_update_org_from_attr(user, "auditor_role", attr_auditor_values, False, False, remove_auditors)
_update_org_from_attr(user, "member_role", attr_values, remove, False, False, backend)
_update_org_from_attr(user, "admin_role", attr_admin_values, False, remove_admins, False, backend)
_update_org_from_attr(user, "auditor_role", attr_auditor_values, False, False, remove_auditors, backend)
def update_user_teams_by_saml_attr(backend, details, user=None, *args, **kwargs):
@@ -195,16 +208,12 @@ def update_user_teams_by_saml_attr(backend, details, user=None, *args, **kwargs)
team_name = team_name_map.get('team', None)
team_alias = team_name_map.get('team_alias', None)
organization_name = team_name_map.get('organization', None)
organization_alias = team_name_map.get('organization_alias', None)
if team_name in saml_team_names:
if not organization_name:
# Settings field validation should prevent this.
logger.error("organization name invalid for team {}".format(team_name))
continue
if organization_alias:
organization_name = organization_alias
try:
if settings.SAML_AUTO_CREATE_OBJECTS:
org = Organization.objects.get_or_create(name=organization_name)[0]
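The alias lookup in the hunk above reduces to: prefer the map entry's `organization_alias` when it is set, otherwise keep the mapped organization name. A standalone sketch of that resolution (function name mine; it mirrors the patch's broad `except`, and uses truthiness like `update_user_orgs` does, so an empty-string alias falls back to the original name):

```python
def resolve_org_name(org_name, organization_map):
    """Return the alias for org_name from the map, or org_name itself."""
    try:
        alias = organization_map.get(org_name).get('organization_alias')
    except Exception:
        # entry missing (or map malformed): fall back to the original name
        return org_name
    return alias if alias else org_name
```

Note the two hunks differ slightly: `_update_org_from_attr` checks `is not None`, while `update_user_orgs` checks truthiness; this sketch follows the latter.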


@@ -32,7 +32,16 @@ class TestSAMLMap:
def backend(self):
class Backend:
s = {
'ORGANIZATION_MAP': {'Default': {'remove': True, 'admins': 'foobar', 'remove_admins': True, 'users': 'foo', 'remove_users': True}},
'ORGANIZATION_MAP': {
'Default': {
'remove': True,
'admins': 'foobar',
'remove_admins': True,
'users': 'foo',
'remove_users': True,
'organization_alias': '',
}
},
'TEAM_MAP': {'Blue': {'organization': 'Default', 'remove': True, 'users': ''}, 'Red': {'organization': 'Default', 'remove': True, 'users': ''}},
}
@@ -77,6 +86,11 @@ class TestSAMLMap:
assert org.admin_role.members.count() == 2
assert org.member_role.members.count() == 2
# Test organization alias feature
backend.setting('ORGANIZATION_MAP')['Default']['organization_alias'] = 'Default_Alias'
update_user_orgs(backend, None, u1)
assert Organization.objects.get(name="Default_Alias") is not None
for o in Organization.objects.all():
assert o.galaxy_credentials.count() == 1
assert o.galaxy_credentials.first().name == 'Ansible Galaxy'
@@ -191,7 +205,28 @@ class TestSAMLAttr:
return MockSettings()
def test_update_user_orgs_by_saml_attr(self, orgs, users, galaxy_credential, kwargs, mock_settings):
@pytest.fixture
def backend(self):
class Backend:
s = {
'ORGANIZATION_MAP': {
'Default1': {
'remove': True,
'admins': 'foobar',
'remove_admins': True,
'users': 'foo',
'remove_users': True,
'organization_alias': 'o1_alias',
}
}
}
def setting(self, key):
return self.s[key]
return Backend()
def test_update_user_orgs_by_saml_attr(self, orgs, users, galaxy_credential, kwargs, mock_settings, backend):
with mock.patch('django.conf.settings', mock_settings):
o1, o2, o3 = orgs
u1, u2, u3 = users
@@ -224,6 +259,9 @@ class TestSAMLAttr:
assert o2.member_role.members.count() == 3
assert o3.member_role.members.count() == 1
update_user_orgs_by_saml_attr(backend, None, u1, **kwargs)
assert Organization.objects.get(name="o1_alias").member_role.members.count() == 1
for o in Organization.objects.all():
assert o.galaxy_credentials.count() == 1
assert o.galaxy_credentials.first().name == 'Ansible Galaxy'
@@ -298,11 +336,11 @@ class TestSAMLAttr:
update_user_teams_by_saml_attr(None, None, u1, **kwargs)
assert Team.objects.filter(name='Yellow', organization__name='Default4').count() == 0
assert Team.objects.filter(name='Yellow_Alias', organization__name='Default4_Alias').count() == 1
assert Team.objects.get(name='Yellow_Alias', organization__name='Default4_Alias').member_role.members.count() == 1
assert Team.objects.filter(name='Yellow_Alias', organization__name='Default4').count() == 1
assert Team.objects.get(name='Yellow_Alias', organization__name='Default4').member_role.members.count() == 1
# only Org 4 got created/updated
org = Organization.objects.get(name='Default4_Alias')
org = Organization.objects.get(name='Default4')
assert org.galaxy_credentials.count() == 1
assert org.galaxy_credentials.first().name == 'Ansible Galaxy'


@@ -79,7 +79,7 @@ class TestSAMLTeamAttrField:
'remove': True,
'saml_attr': 'foobar',
'team_org_map': [
{'team': 'Engineering', 'team_alias': 'Engineering Team', 'organization': 'Ansible', 'organization_alias': 'Awesome Org'},
{'team': 'Engineering', 'team_alias': 'Engineering Team', 'organization': 'Ansible'},
{'team': 'Engineering', 'organization': 'Ansible2'},
{'team': 'Engineering2', 'organization': 'Ansible'},
],

awx/ui/.babel.rc Normal file

@@ -7,4 +7,5 @@ build
node_modules
dist
images
-instrumented
+instrumented
+*test*.js


@@ -1,14 +1,19 @@
{
"parser": "babel-eslint",
"parser": "@babel/eslint-parser",
"ignorePatterns": ["./node_modules/"],
"parserOptions": {
"requireConfigFile": false,
"ecmaVersion": 6,
"sourceType": "module",
"ecmaFeatures": {
"jsx": true,
"modules": true
}
},
"babelOptions": {
"presets": ["@babel/preset-react"]
}
},
"plugins": ["react-hooks", "jsx-a11y", "i18next"],
"plugins": ["react-hooks", "jsx-a11y", "i18next", "@babel"],
"extends": [
"airbnb",
"prettier",
@@ -17,7 +22,7 @@
],
"settings": {
"react": {
"version": "16.5.2"
"version": "detect"
},
"import/resolver": {
"node": {
@@ -141,6 +146,9 @@
"jsx-a11y/label-has-associated-control": "off",
"react-hooks/rules-of-hooks": "error",
"react-hooks/exhaustive-deps": "warn",
"react/jsx-filename-extension": "off"
"react/jsx-filename-extension": "off",
"no-restricted-exports": "off",
"react/function-component-definition": "off",
"prefer-regex-literals": "off"
}
}


@@ -56,8 +56,8 @@ The UI is built using [ReactJS](https://reactjs.org/docs/getting-started.html) a
The AWX UI requires the following:
- Node 14.x LTS
- NPM 6.x LTS
- Node >= 16.14.0 LTS
- NPM 8.x
Run the following to install all the dependencies:


@@ -1,4 +1,4 @@
FROM node:14
FROM node:16.14.0
ARG NPMRC_FILE=.npmrc
ENV NPMRC_FILE=${NPMRC_FILE}
ARG TARGET='https://awx:8043'
@@ -6,7 +6,7 @@ ENV TARGET=${TARGET}
ENV CI=true
WORKDIR /ui
ADD .eslintignore .eslintignore
ADD .eslintrc .eslintrc
ADD .eslintrc.json .eslintrc.json
ADD .linguirc .linguirc
ADD jsconfig.json jsconfig.json
ADD public public


@@ -1,7 +1,7 @@
# AWX-PF
# AWX-UI
## Requirements
- node 14.x LTS, npm 7.x LTS, make, git
- node >= 16.14.0, npm >= 8.x, make, git
## Development
The API development server will need to be running. See [CONTRIBUTING.md](../../CONTRIBUTING.md).
@@ -17,7 +17,7 @@ npm --prefix=awx/ui start
### Build for the Development Containers
If you just want to build a ui for the container-based awx development
environment, use these make targets:
environment and do not need to work on the ui code, use these make targets:
```shell
# The ui will be reachable at https://localhost:8043 or
@@ -108,8 +108,8 @@ To run:
```shell
cd awx/awx/ui
docker build -t awx-ui-next .
docker run --name tools_ui_1 --network _sources_default --link 'tools_awx_1:awx' -e TARGET="https://awx:8043" -p '3001:3001' --rm -v $(pwd)/src:/ui/src awx-ui-next
docker build -t awx-ui .
docker run --name tools_ui_1 --network _sources_default --link 'tools_awx_1:awx' -e TARGET="https://awx:8043" -p '3001:3001' --rm -v $(pwd)/src:/ui/src awx-ui
```
**Note:** This is for CI, test systems, zuul, etc. For local development, see [usage](https://github.com/ansible/awx/blob/devel/awx/ui/README.md#Development)

39299  awx/ui/package-lock.json (generated)

File diff suppressed because it is too large.


@@ -1,8 +1,9 @@
{
"name": "ui",
"homepage": ".",
"private": true,
"engines": {
"node": "14.x"
"node": ">=16.14.0"
},
"dependencies": {
"@lingui/react": "3.9.0",
@@ -13,7 +14,6 @@
"ace-builds": "^1.4.12",
"ansi-to-html": "0.7.2",
"axios": "0.22.0",
"babel-plugin-macros": "^3.0.1",
"codemirror": "^5.47.0",
"d3": "7.1.1",
"dagre": "^0.8.4",
@@ -34,31 +34,36 @@
"styled-components": "5.3.0"
},
"devDependencies": {
"@babel/core": "^7.16.10",
"@babel/eslint-parser": "^7.16.5",
"@babel/eslint-plugin": "^7.16.5",
"@babel/plugin-syntax-jsx": "7.16.7",
"@babel/polyfill": "^7.8.7",
"@babel/preset-react": "7.16.7",
"@cypress/instrument-cra": "^1.4.0",
"@lingui/cli": "^3.7.1",
"@lingui/loader": "^3.8.3",
"@lingui/macro": "^3.7.1",
"@nteract/mockument": "^1.0.4",
"@wojtekmaj/enzyme-adapter-react-17": "0.6.5",
"babel-core": "^7.0.0-bridge.0",
"babel-plugin-macros": "3.1.0",
"enzyme": "^3.10.0",
"enzyme-adapter-react-16": "^1.14.0",
"enzyme-to-json": "^3.3.5",
"eslint": "7.30.0",
"eslint-config-airbnb": "18.2.1",
"eslint": "^8.7.0",
"eslint-config-airbnb": "19.0.4",
"eslint-config-prettier": "8.3.0",
"eslint-import-resolver-webpack": "0.11.1",
"eslint-plugin-i18next": "^5.0.0",
"eslint-plugin-import": "^2.14.0",
"eslint-plugin-jsx-a11y": "^6.4.1",
"eslint-plugin-react": "^7.11.1",
"eslint-plugin-react-hooks": "4.2.0",
"eslint-import-resolver-webpack": "0.13.2",
"eslint-plugin-i18next": "5.1.2",
"eslint-plugin-import": "2.25.4",
"eslint-plugin-jsx-a11y": "6.5.1",
"eslint-plugin-react": "7.28.0",
"eslint-plugin-react-hooks": "4.3.0",
"http-proxy-middleware": "^1.0.3",
"jest-websocket-mock": "^2.0.2",
"mock-socket": "^9.0.3",
"prettier": "2.3.2",
"react-scripts": "^4.0.3"
"react-scripts": "5.0.0"
},
"scripts": {
"prelint": "lingui compile",
@@ -66,7 +71,7 @@
"prestart-instrumented": "lingui compile",
"pretest": "lingui compile",
"pretest-watch": "lingui compile",
"start": "ESLINT_NO_DEV_ERRORS=true PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts start",
"start": "GENERATE_SOURCEMAP=false ESLINT_NO_DEV_ERRORS=true PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts start",
"start-instrumented": "ESLINT_NO_DEV_ERRORS=true DEBUG=instrument-cra PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts -r @cypress/instrument-cra start",
"build": "INLINE_RUNTIME_CHUNK=false react-scripts build",
"test": "TZ='UTC' react-scripts test --watchAll=false",


@@ -1,3 +1,4 @@
/* eslint-disable default-param-last */
import axios from 'axios';
import { encodeQueryString } from 'util/qs';
import debounce from 'util/debounce';


@@ -3,7 +3,7 @@ import Base from '../Base';
class ActivityStream extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/activity_stream/';
this.baseUrl = 'api/v2/activity_stream/';
}
}
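A recurring change across these API model diffs is dropping the leading slash from `baseUrl`. A relative path resolves against whatever base URL the HTTP client is configured with, which would keep any path prefix on the deployment URL intact; a minimal sketch of the difference using the standard WHATWG `URL` resolution rules (the deployment URL below is hypothetical, for illustration only):

```javascript
// Sketch: why 'api/v2/...' behaves differently from '/api/v2/...' when the
// app is served under a path prefix. The base URL here is hypothetical.
const base = 'https://awx.example.com/prefix/';

// An absolute path replaces the entire path of the base URL...
const absolute = new URL('/api/v2/activity_stream/', base).href;

// ...while a relative path resolves against it, preserving the prefix.
const relative = new URL('api/v2/activity_stream/', base).href;

console.log(absolute);
console.log(relative);
```

The same resolution rule applies however the request is ultimately issued, so the relative form is the one that survives a prefixed deployment.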


@@ -4,7 +4,7 @@ import RunnableMixin from '../mixins/Runnable.mixin';
class AdHocCommands extends RunnableMixin(Base) {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/ad_hoc_commands/';
this.baseUrl = 'api/v2/ad_hoc_commands/';
}
readCredentials(id) {


@@ -3,7 +3,7 @@ import Base from '../Base';
class Applications extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/applications/';
this.baseUrl = 'api/v2/applications/';
}
readTokens(appId, params) {


@@ -3,7 +3,7 @@ import Base from '../Base';
class Auth extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/auth/';
this.baseUrl = 'api/v2/auth/';
}
}


@@ -3,7 +3,7 @@ import Base from '../Base';
class Config extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/config/';
this.baseUrl = 'api/v2/config/';
this.read = this.read.bind(this);
}


@@ -3,7 +3,7 @@ import Base from '../Base';
class CredentialInputSources extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/credential_input_sources/';
this.baseUrl = 'api/v2/credential_input_sources/';
}
}


@@ -3,7 +3,7 @@ import Base from '../Base';
class CredentialTypes extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/credential_types/';
this.baseUrl = 'api/v2/credential_types/';
}
async loadAllTypes(


@@ -20,7 +20,7 @@ describe('CredentialTypesAPI', () => {
expect(mockHttp.get).toHaveBeenCalledTimes(1);
expect(mockHttp.get.mock.calls[0]).toEqual([
`/api/v2/credential_types/`,
`api/v2/credential_types/`,
{ params: { page_size: 200 } },
]);
expect(types).toEqual(typesData);
@@ -41,11 +41,11 @@ describe('CredentialTypesAPI', () => {
expect(mockHttp.get).toHaveBeenCalledTimes(2);
expect(mockHttp.get.mock.calls[0]).toEqual([
`/api/v2/credential_types/`,
`api/v2/credential_types/`,
{ params: { page_size: 200 } },
]);
expect(mockHttp.get.mock.calls[1]).toEqual([
`/api/v2/credential_types/`,
`api/v2/credential_types/`,
{ params: { page_size: 200, page: 2 } },
]);
expect(types).toHaveLength(4);


@@ -3,7 +3,7 @@ import Base from '../Base';
class Credentials extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/credentials/';
this.baseUrl = 'api/v2/credentials/';
this.readAccessList = this.readAccessList.bind(this);
this.readAccessOptions = this.readAccessOptions.bind(this);


@@ -3,7 +3,7 @@ import Base from '../Base';
class Dashboard extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/dashboard/';
this.baseUrl = 'api/v2/dashboard/';
}
readJobGraph(params) {


@@ -3,7 +3,7 @@ import Base from '../Base';
class ExecutionEnvironments extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/execution_environments/';
this.baseUrl = 'api/v2/execution_environments/';
}
readUnifiedJobTemplates(id, params) {


@@ -3,7 +3,7 @@ import Base from '../Base';
class Groups extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/groups/';
this.baseUrl = 'api/v2/groups/';
this.associateHost = this.associateHost.bind(this);
this.createHost = this.createHost.bind(this);


@@ -3,7 +3,7 @@ import Base from '../Base';
class Hosts extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/hosts/';
this.baseUrl = 'api/v2/hosts/';
this.readFacts = this.readFacts.bind(this);
this.readAllGroups = this.readAllGroups.bind(this);


@@ -3,7 +3,7 @@ import Base from '../Base';
class InstanceGroups extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/instance_groups/';
this.baseUrl = 'api/v2/instance_groups/';
this.associateInstance = this.associateInstance.bind(this);
this.disassociateInstance = this.disassociateInstance.bind(this);


@@ -3,7 +3,7 @@ import Base from '../Base';
class Instances extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/instances/';
this.baseUrl = 'api/v2/instances/';
this.readHealthCheckDetail = this.readHealthCheckDetail.bind(this);
this.healthCheck = this.healthCheck.bind(this);


@@ -4,7 +4,7 @@ import InstanceGroupsMixin from '../mixins/InstanceGroups.mixin';
class Inventories extends InstanceGroupsMixin(Base) {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/inventories/';
this.baseUrl = 'api/v2/inventories/';
this.readAccessList = this.readAccessList.bind(this);
this.readAccessOptions = this.readAccessOptions.bind(this);
@@ -116,6 +116,20 @@ class Inventories extends InstanceGroupsMixin(Base) {
values
);
}
associateLabel(id, label, orgId) {
return this.http.post(`${this.baseUrl}${id}/labels/`, {
name: label.name,
organization: orgId,
});
}
disassociateLabel(id, label) {
return this.http.post(`${this.baseUrl}${id}/labels/`, {
id: label.id,
disassociate: true,
});
}
}
export default Inventories;
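The hunk above adds label association helpers to the Inventories model. A self-contained sketch of the payloads they post to the inventory's `labels/` endpoint (the mock `http` object and the sample ids/names are hypothetical, for illustration only):

```javascript
// Minimal sketch of the new Inventories label helpers, using a hypothetical
// mock http object to capture what gets posted.
class MockHttp {
  constructor() {
    this.calls = [];
  }
  post(url, data) {
    this.calls.push({ url, data });
    return Promise.resolve({ data });
  }
}

class Inventories {
  constructor(http) {
    this.http = http;
    this.baseUrl = 'api/v2/inventories/';
  }
  // Associating by name lets the API create the label in the given
  // organization if it does not already exist.
  associateLabel(id, label, orgId) {
    return this.http.post(`${this.baseUrl}${id}/labels/`, {
      name: label.name,
      organization: orgId,
    });
  }
  // Disassociation posts the existing label id with the disassociate flag.
  disassociateLabel(id, label) {
    return this.http.post(`${this.baseUrl}${id}/labels/`, {
      id: label.id,
      disassociate: true,
    });
  }
}

const http = new MockHttp();
const api = new Inventories(http);
api.associateLabel(42, { name: 'env:prod' }, 1);
api.disassociateLabel(42, { id: 7 });
console.log(JSON.stringify(http.calls, null, 2));
```

Both helpers post to the same sub-resource URL; only the body distinguishes association from disassociation.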


@@ -3,7 +3,7 @@ import Base from '../Base';
class InventoryScripts extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/inventory_scripts/';
this.baseUrl = 'api/v2/inventory_scripts/';
}
}


@@ -8,7 +8,7 @@ class InventorySources extends LaunchUpdateMixin(
) {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/inventory_sources/';
this.baseUrl = 'api/v2/inventory_sources/';
this.createSchedule = this.createSchedule.bind(this);
this.createSyncStart = this.createSyncStart.bind(this);


@@ -4,7 +4,7 @@ import RunnableMixin from '../mixins/Runnable.mixin';
class InventoryUpdates extends RunnableMixin(Base) {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/inventory_updates/';
this.baseUrl = 'api/v2/inventory_updates/';
this.createSyncCancel = this.createSyncCancel.bind(this);
}


@@ -8,7 +8,7 @@ class JobTemplates extends SchedulesMixin(
) {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/job_templates/';
this.baseUrl = 'api/v2/job_templates/';
this.createSchedule = this.createSchedule.bind(this);
this.launch = this.launch.bind(this);


@@ -4,7 +4,7 @@ import RunnableMixin from '../mixins/Runnable.mixin';
class Jobs extends RunnableMixin(Base) {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/jobs/';
this.baseUrl = 'api/v2/jobs/';
this.jobEventSlug = '/job_events/';
}


@@ -3,7 +3,7 @@ import Base from '../Base';
class Labels extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/labels/';
this.baseUrl = 'api/v2/labels/';
}
}


@@ -3,7 +3,7 @@ import Base from '../Base';
class Me extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/me/';
this.baseUrl = 'api/v2/me/';
}
}


@@ -3,7 +3,7 @@ import Base from '../Base';
class Metrics extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/metrics/';
this.baseUrl = 'api/v2/metrics/';
}
}
export default Metrics;


@@ -3,7 +3,7 @@ import Base from '../Base';
class NotificationTemplates extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/notification_templates/';
this.baseUrl = 'api/v2/notification_templates/';
}
test(id) {

Some files were not shown because too many files have changed in this diff.