Compare commits


793 Commits

Author SHA1 Message Date
Alex Corey
d87c091eea Refactors the project form 2022-10-05 15:43:01 -04:00
Shane McDonald
63fd18edcb Merge pull request #12736 from Sunidhi-Gaonkar1/devel
Adding ppc64le support parameters
2022-10-04 08:37:38 -04:00
Rick Elrod
208254ab81 A few super minor nits in api views/serializers (#12996)
Signed-off-by: Rick Elrod <rick@elrod.me>
2022-10-03 19:24:57 -05:00
John Westcott IV
534763727f Merge pull request #12728 from john-westcott-iv/ig_fallback
Adding prevent_instance_group_fallback
2022-10-03 10:47:51 -04:00
Elijah DeLee
8333b0cf66 fix name to be consistent (#12975)
* fix name to be consistent

this is not a mean, it's the last value,
so say that in the name

* add remaining capacity to dashboard

also make legends pretty with nice names
2022-09-29 16:52:12 -04:00
John Westcott IV
d1588b94b0 Updating migration file again 2022-09-29 14:20:49 -04:00
Sarabraj Singh
2dcc7ec749 implementing Alan's recommendations for ig_fallback 2022-09-29 14:19:37 -04:00
John Westcott IV
2d756959d3 Altering prefered_instance_groups for ad_hoc_commands and inventory objects 2022-09-29 14:19:37 -04:00
John Westcott IV
e6518a1d1c Updating the migration id 2022-09-29 14:19:37 -04:00
John Westcott IV
84d00722b9 Add prevent_instance_group_fallback to awxkit 2022-09-29 14:19:37 -04:00
John Westcott IV
a95a76ec56 Fixing warnings from rebase 2022-09-29 14:19:37 -04:00
John Westcott IV
420b3c8b84 Adding prevent instance group fallback to inventory and JT detail screens 2022-09-29 14:19:37 -04:00
John Westcott IV
5ba0bf3a64 Fixing UI tests 2022-09-29 14:19:37 -04:00
John Westcott IV
7031753a6d Updating migration file 2022-09-29 14:19:37 -04:00
John Westcott IV
6415671d93 Creating options (like job template) on inventory screen 2022-09-29 14:19:37 -04:00
John Westcott IV
e5fd42c4da Removing debug message and adding help details about empty groups 2022-09-29 14:19:36 -04:00
John Westcott IV
0f675cd375 Updating modules for prevent_instance_group_fallback 2022-09-29 14:19:36 -04:00
John Westcott IV
a85268f74a Fixing inventory help text 2022-09-29 14:19:36 -04:00
John Westcott IV
0983bd8dc0 Adding prevent_instance_group_fallback 2022-09-29 14:19:36 -04:00
Hao Liu
87c65c9997 Merge pull request #12976 from TheRealHaoLiu/seperate-vars-from-inventory
instance install bundle group vars
2022-09-28 17:56:44 -04:00
Rick Elrod
1b46805373 [ui] Don't double-entity encode on event stdout (#12950)
- stdout output on events was being double HTML entity encoded meaning
  that all output with < and > was shown as literal "&lt;" and "&gt;"

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-28 16:35:17 -05:00
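To make the bug above concrete, here is a minimal Python sketch of double entity encoding, with `html.escape` standing in for whatever encoding layer the UI applied; the fix is simply to encode exactly once:

```python
import html

stdout = "exit < 0 means error > 0"

once = html.escape(stdout)   # what the browser should receive: "&lt;" / "&gt;"
twice = html.escape(once)    # the bug: "&" is re-escaped, so "&lt;" becomes "&amp;lt;"

print(once)   # exit &lt; 0 means error &gt; 0
print(twice)  # exit &amp;lt; 0 means error &amp;gt; 0 -- renders literally as "&lt;"
```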
Hao Liu
d48e31b928 instance install bundle group vars
split out the customer-modifiable variables in the install bundle into a vars file

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-09-28 17:25:38 -04:00
Lila Yasin
ea51e137eb Merge pull request #12461 from andreadecorte/fix_doc
Fix notification doc for Workflow Job Template module
2022-09-28 15:20:44 -04:00
Elijah DeLee
d9f5193a18 move grafana/prometheus docs to own README (#12960)
* move grafana/prometheus docs to own README
2022-09-28 14:05:05 -04:00
Elijah DeLee
710b02a443 always display awx_status_total
this way we don't have null data in monitoring data
this makes writing alerts and dashboards easier
2022-09-28 14:02:57 -04:00
Jeff Bradberry
5b5aac675b Merge pull request #12959 from ansible/new-health-check-started
Add a new Instance.health_check_started field
2022-09-28 10:58:43 -04:00
Jeff Bradberry
6b0618b244 Merge pull request #12968 from ansible/instance-serializer-defaults
Make sure to include field defaults for Instance node_type and node_state
2022-09-28 10:53:31 -04:00
kialam
ceea0a0a39 Add tooltips to Instance form; change name field to host name. (#12912) 2022-09-28 10:22:49 -03:00
Rebeccah Hunter
6b86c450b1 Merge pull request #12967 from rebeccahhh/fix_grafana_dashboard
I broke Grafana's dashboard visuals, so now I am fixing it.
2022-09-28 08:09:06 -04:00
Alan Rominger
1a696c4f25 Merge pull request #12864 from AlanCoding/project_groups
Avoid cache warning for dispatching control type tasks
2022-09-27 20:00:12 -04:00
Alex Corey
34501fee24 Removes references to current_user (#12818)
* Remove references to current user id in the cookie

* Removes current_user data from the cookie on api side
2022-09-27 20:15:57 -03:00
Jeff Bradberry
5aa55d7347 Make sure to include field defaults for Instance node_type and node_state 2022-09-27 17:15:45 -04:00
Jeff Bradberry
65179d9cd0 Add a new Instance.health_check_started field
This will enable us to provide more useful information for the user,
now that all user-triggered health checks are async.

Also, de-bounce the health check endpoint to not allow additional
health check tasks to be triggered when one is already in progress.
2022-09-27 17:09:41 -04:00
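A minimal sketch of the de-bounce described above, assuming a timestamp field and an async dispatch helper; the names and the five-minute staleness window are illustrative, not taken from the commit:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=5)  # assumed staleness window

def maybe_start_health_check(instance, dispatch_health_check):
    # Skip dispatching a new async health check while a recently started
    # one is presumed to still be in progress.
    now = datetime.now(timezone.utc)
    started = instance.health_check_started
    if started is not None and now - started < STALE_AFTER:
        return False  # de-bounced: a check is already running
    instance.health_check_started = now
    dispatch_health_check(instance)
    return True
```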
Rick Elrod
42109fb45a [collection] Remove instance defaults from docs (#12964)
We don't specify defaults in the module (because doing so breaks Instance
updates: AWX thinks we are trying to change things to be the
default).

- Update the docs to remove the defaults that no longer exist
- Update tests to make them pass (oops)
- Fix tangentially related typo in Kind development docs

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-27 15:18:38 -05:00
Sarah Akus
ca46aec483 Merge pull request #12955 from AlexSCorey/12903-MeshScalingUICleanup
Normal Users no longer see Instances in side nav
2022-09-27 16:08:53 -04:00
Alex Corey
2e9956c9fc Prevents unauthorized users from seeing instances list link in side nav 2022-09-27 15:51:23 -04:00
Alan Rominger
5648d9d96f Avoid cache warning for dispatching control type tasks 2022-09-27 15:18:13 -04:00
kialam
2b2ddb68cf Merge pull request #12962 from kialam/fix-403-local-proxy-error
Remove changeOrigin proxy setting.
2022-09-27 09:36:26 -07:00
Kia Lam
12e8608f98 Remove changeOrigin proxy setting. 2022-09-27 09:16:00 -07:00
Rebeccah
eaad749cc9 I broke Grafana with my rename, so now I'm fixing it and adding a better overall name that is less focused on alerts. 2022-09-27 11:58:43 -04:00
Sarah Akus
4ffa577d05 Merge pull request #12874 from mabashian/wf-inv-permissions
Fixed bug where inventory field was erroneously disabled on WFJT form
2022-09-27 11:27:28 -04:00
mabashian
7143777638 Fixes unit tests after updating the Inventory Lookup 2022-09-27 10:55:26 -04:00
mabashian
cc6eaa7f44 Fixes bug where inventory field was erroneously disabled on WFJT form
We were disabling the field when a user did not have sufficient permissions to create an Inventory.  I updated this logic to check if a user has use permissions on the selected inventory before disabling the field.
2022-09-27 10:55:25 -04:00
Alex Corey
84fa19f2ad Merge pull request #12953 from mabashian/ui-makefile-force
Pass --force when installing ui deps to get around dependency resolution warnings
2022-09-26 16:30:51 -04:00
mabashian
c101619d08 Pass --force when installing ui deps to get around dependency resolution warnings 2022-09-26 15:41:59 -04:00
kialam
cdd2282282 Merge pull request #12915 from kialam/fix-legend-and-tooltip-overflow-topology-view
Add scroll overflow for legend and tooltip in Topology View.
2022-09-26 11:45:36 -07:00
kialam
6e57bc47aa Merge pull request #12943 from kialam/add-locators
Add locators for QE.
2022-09-26 11:15:12 -07:00
Kia Lam
a1a4f26f19 Add scroll overflow for legend and tooltip in Topology View. 2022-09-26 11:05:19 -07:00
Kia Lam
fb4a7373a1 Add locators for QE. 2022-09-26 10:54:13 -07:00
Hao Liu
9c2185c68f Merge pull request #12744 from ansible/feature-mesh-scaling
[feature] Ability to add execution nodes at runtime
2022-09-26 10:59:46 -04:00
Rebeccah Hunter
a66b27edff Merge pull request #12908 from rebeccahhh/devel
new example grafana alert rule
2022-09-26 10:49:49 -04:00
Hao Liu
2dcb127d4e Merge pull request #12945 from TheRealHaoLiu/fix-import-order-partially
Fix import order partially
2022-09-26 09:35:41 -04:00
Hao Liu
790998335c Merge pull request #12947 from TheRealHaoLiu/fix-nit
Fix: remove unnecessary comment
2022-09-26 09:29:43 -04:00
Rebeccah
88f0ab0233 add new alert rule that fires when the error rate exceeds a threshold; also fix
a typo in the URL and in the Grafana alert rule

Important learning: no newlines in rules/equations

It turns out datasourceUid can be set in prometheus_source.yml, and it can be anything we want, so I have set it to awx_alert. The PBFAnumbersetc value it was set to before was an autogenerated UID, and the rule would actually work with that generated value, but because we want it to make sense, we're setting the value in prometheus_source.yml.

Finally, update the docs to reflect the Grafana docs and how to export new rules a user might want to add.

Co-authored-by: Elijah DeLee <kdelee@redhat.com>
2022-09-23 15:05:57 -04:00
Hao Liu
3ad7913353 Fix: remove unnecessary comment 2022-09-23 12:12:27 -04:00
Hao Liu
795569227a Fix import ordering partially
Signed-off-by: Hao Liu <haoli@redhat.com>
2022-09-23 11:50:09 -04:00
Alex Corey
93f50b5211 Fixes credential form test button (#12844) 2022-09-23 11:07:01 -04:00
Seth Foster
c53228daf5 Set initial value node_type and node_state 2022-09-23 09:46:16 -04:00
Seth Foster
5b7a359c91 Add doc for adding execution node 2022-09-23 09:46:16 -04:00
Hao Liu
01b41afa0f include template yml in sdist 2022-09-23 09:46:16 -04:00
Rick Elrod
bf8ba63860 Add instance module to controller action group
Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-23 09:46:16 -04:00
Rick Elrod
ba26909dc5 Restrict node_state and node_type choices
Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-23 09:46:16 -04:00
Rick Elrod
7d645c8ff6 [collection] Add 'instance' module
Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-23 09:46:16 -04:00
Jeff Bradberry
b879cbc2ec Prevent any edits to hop nodes
to retain the behavior that they had pre-mesh-scaling.
2022-09-23 09:46:15 -04:00
Hao Liu
af8b5243a3 Update requirements.yml 2022-09-23 09:46:15 -04:00
Hao Liu
4bf612851f ignore template file from yamllint 2022-09-23 09:46:15 -04:00
Hao Liu
ada0d45654 put install bundle file in templates dir
also enable Copr repo in the playbook

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-09-23 09:46:15 -04:00
Alex Corey
c153ac9d3b Adds unit tests for RemoveInstanceButton 2022-09-23 09:46:15 -04:00
Kia Lam
78cc9fb019 Fix missing details message in Topology view. 2022-09-23 09:46:15 -04:00
Seth Foster
301807466d Only get receptor.conf lock in k8s environment
- Writing to receptor.conf only takes place in K8S, so only get a
lock if IS_K8S is true
2022-09-23 09:46:15 -04:00
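A sketch of the idea, assuming a simple `IS_K8S` flag and POSIX file locking; this is illustrative, not AWX's actual locking code:

```python
import fcntl

IS_K8S = True  # assumption: stands in for the deployment flag the commit names

def write_receptor_config(path, content):
    # Only K8S deployments rewrite receptor.conf, so only they need to
    # serialize concurrent writers with a file lock.
    if not IS_K8S:
        return
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # exclusive lock for the write
        try:
            f.write(content)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```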
Seth Foster
e0c9013d9c Prevent altering certain fields on Instance
- Prevents changing hostname, listener_port, or node_type for instances
that already exist
- API default node_type is execution
- API default node_state is installed
2022-09-23 09:46:15 -04:00
Kia Lam
9c6aa93093 Remove action items from Instance peers list. 2022-09-23 09:46:15 -04:00
Kia Lam
4a41098b24 Add health check toast notification for Instance list and detail views. 2022-09-23 09:46:15 -04:00
Kia Lam
0510978516 Use reusable HealthCheckAlert component. 2022-09-23 09:46:15 -04:00
Kia Lam
6009d98163 Modify proxy config to allow UI to point to named sites. 2022-09-23 09:46:15 -04:00
Alex Corey
532ad777a3 Resolves peers list search bug 2022-09-23 09:46:15 -04:00
Kia Lam
b4edfc24ac Add more helper unit tests. 2022-09-23 09:46:14 -04:00
Jeff Bradberry
0e578534fa Update the instance install bundle requirements.yml
to point to the 0.1.0 release of ansible.receptor.
2022-09-23 09:46:14 -04:00
Alex Corey
6619cc39f7 properly deprovisions instance 2022-09-23 09:46:14 -04:00
Kia Lam
d4b25058cd Add update node logic; fix JSX formatting on SVG elements. 2022-09-23 09:46:14 -04:00
Kia Lam
c1ba769b20 Add enabled and disabled node states to legend. 2022-09-23 09:46:14 -04:00
Kia Lam
fd10d83893 Account for node state of 'unavailable' in the UI. 2022-09-23 09:46:14 -04:00
Hao Liu
b1168ce77d update receptor collection role name in install bundle 2022-09-23 09:46:14 -04:00
Seth Foster
1fde9c4f0c add firewall rules to control node 2022-09-23 09:46:14 -04:00
Kia Lam
03685e51b5 Fix Instance Detail StatusLabel to show node_state. 2022-09-23 09:46:14 -04:00
Jeff Bradberry
08c18d71bf Move InstanceLink creation and updating to the async tasks
So that they get applied in situations that do not go through the API.
2022-09-23 09:46:14 -04:00
Seth Foster
dfe6ce1ba8 remove tests that assume health check runs in view 2022-09-23 09:46:14 -04:00
Seth Foster
eaa4f2483f Run instance health check in task container
awx-web container does not have access to receptor socket, and the
execution node health check requires receptorctl.

This change runs the health check asynchronously in the task container.
2022-09-23 09:46:14 -04:00
Jeff Bradberry
68a44529b6 Register pages for the Instance peers and install bundle endpoints
This includes exposing a new interface for Page objects, Page.bytes,
to return the full bytestring contents of the response.
2022-09-23 09:46:14 -04:00
Alex Corey
25afb8477e Adds functionality to deprovision an instance from list and details view 2022-09-23 09:46:14 -04:00
Jeff Bradberry
f3a9d4db07 Assign a default queue to wait_for_jobs() 2022-09-23 09:46:14 -04:00
Kia Lam
cb49eec2b5 Allow k8s to create Instance Groups. 2022-09-23 09:46:13 -04:00
Kia Lam
3333080616 Remove 'hop' node type from Add Instance form. 2022-09-23 09:46:13 -04:00
Kia Lam
e2b9352dad Replace Chip with Label component for IG labels. 2022-09-23 09:46:13 -04:00
Kia Lam
da945eed93 Fix node state. 2022-09-23 09:46:13 -04:00
Jeff Bradberry
ebd200380a Resolve a deadlock in write_receptor_config() 2022-09-23 09:46:13 -04:00
Jeff Bradberry
1b650d6927 When deprovisioning a node, kick off a task that waits on running jobs
After all jobs on the node are complete, delete the node then
broadcast the write_receptor_config task.

Also, make sure that write_receptor_config updates the state of links
that are in 'adding' state.
2022-09-23 09:46:13 -04:00
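The deprovisioning flow above, as a hedged sketch; every callable here is an illustrative stand-in rather than AWX's actual task API:

```python
import time

def deprovision_when_idle(node, running_job_count, delete_node, broadcast,
                          poll_seconds=30):
    # Wait until the node has no running jobs, then delete it and tell
    # every control node to rewrite its receptor config.
    while running_job_count(node) > 0:
        time.sleep(poll_seconds)
    delete_node(node)
    broadcast("write_receptor_config")  # also moves links out of 'adding' state
```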
Jeff Bradberry
b6946c7e35 Update API to support setting instances to Deprovisioning
- allow the node_state to be set to deprovisioning
- set the links that touch the instance to removing
- only allow on K8S
- only allow to be done to execution nodes
2022-09-23 09:46:13 -04:00
Hao Liu
0b1891d82a generate complete install bundle
```
➜  34.213.5.206_install_bundle git:(instance-install-bundle-content) ✗ tree
.
├── install_receptor.yml
├── inventory.yml
├── receptor
│   ├── tls
│   │   ├── ca
│   │   │   └── receptor-ca.crt
│   │   ├── receptor.crt
│   │   └── receptor.key
│   └── work-public-key.pem
└── requirements.yml
```

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-09-23 09:46:13 -04:00
Jeff Bradberry
3bc86ca8cb Follow up on new execution node creation
- hop nodes are descoped
- links need to be created on execution node creation
- expose the 'edit' capabilities on the instance serializer
2022-09-23 09:46:13 -04:00
Kia Lam
dba03616f4 Fix unit tests. 2022-09-23 09:46:13 -04:00
Kia Lam
a59aa44249 Update status label to reflect instance node states. 2022-09-23 09:46:13 -04:00
Seth Foster
3b024a057f Allow work signing for execution node (#12771)
- work-signing added to the generated receptor config
- During receptor task submission, signwork is True when submitting to
  an execution node
2022-09-23 09:46:13 -04:00
Kia Lam
e1c33935fb Properly show Peers tab in UI. 2022-09-23 09:46:13 -04:00
Kia Lam
8ebeeaf148 Add correct permissions for memory capacity slider. 2022-09-23 09:46:13 -04:00
Kia Lam
28f24c8811 Represent enabled field in Topology View:
- use dotted circles to represent `enabled: false`
- use solid circle stroke to represent `enabled: true`
- excise places where `Unavailable` node state is used in the UI.
2022-09-23 09:46:12 -04:00
Kia Lam
89a6162dcd Add new node details; update legend. 2022-09-23 09:46:12 -04:00
Alex Corey
7e627e1d1e Adds Instance Peers Tab and update Instance Details view with more data (#12655)
* Adds InstancePeers tab and updates details view

* attempt to fix failing api tests
2022-09-23 09:46:12 -04:00
Jeff Bradberry
0465a10df5 Deal with exceptions when running execution_node_health_check (#12733) 2022-09-23 09:46:12 -04:00
Hao Liu
5051224781 conditionally show install_bundle link for instances (#12679)
- only show install_bundle link for k8s
- only show install_bundle link for execution and hop nodes
2022-09-23 09:46:12 -04:00
TheRealHaoLiu
7956fc3c31 add instance install bundle endpoint
add scaffolding for instance install_bundle endpoint

- add instance_install_bundle view (does not do anything yet)
- add `instance_install_bundle` related field to serializer
- add `/install_bundle` to instance URL
- `/install_bundle` only available for execution and hop nodes
- `/install_bundle` endpoint response contains a downloadable tgz with mock data

TODO: add actual data to the install bundle response

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-09-23 09:46:12 -04:00
Shane McDonald
9b034ad574 generate control node receptor.conf
when a new remote execution/hop node is added
regenerate the receptor.conf for all control nodes to
peer out to the new remote execution node

Signed-off-by: Hao Liu <haoli@redhat.com>
Co-Authored-By: Seth Foster <fosterseth@users.noreply.github.com>
Co-Authored-By: Shane McDonald <me@shanemcd.com>
2022-09-23 09:46:12 -04:00
Kia Lam
4bf9925cf7 Topology changes:
- add new node and link states
- add directionality to links
- update icons
2022-09-23 09:46:12 -04:00
Alex Corey
d2c63a9b36 Adds tests 2022-09-23 09:46:12 -04:00
Alex Corey
5d3a19e542 Adds Instance Add form 2022-09-23 09:46:12 -04:00
Jeff Bradberry
e4518f7b13 Changes in posting constraints due to rescoping to OCP/K8S-only
- node_state is now read only
- node_state gets set automatically to Installed in the create view
- raise a validation error when creating on non-K8S
- allow SystemAdministrator the 'add' permission for Instances
- expose the new listener_port field
2022-09-23 09:46:12 -04:00
Sarabraj Singh
350efc12f5 machinery to allow POSTing payloads to instances/ endpoint 2022-09-23 09:46:12 -04:00
Jeff Bradberry
604fac2295 Update task management to only do things with ready instances 2022-09-23 09:46:11 -04:00
Jeff Bradberry
24bfacb654 Check state when processing receptorctl advertisements
Nodes that show up and were in one of the unready states need to be
transitioned to ready, even if the logic in Instance.is_lost was not
met.
2022-09-23 09:46:11 -04:00
Jeff Bradberry
3bcd539b3d Make sure that the health checks handle the state transitions properly
- nodes with states Provisioning, Provisioning Fail, Deprovisioning,
  and Deprovisioning Fail should bypass health checks and should never
  transition due to the existing machinery
- nodes with states Unavailable and Installed can transition to Ready
  if they check out as healthy
- nodes in the Ready state should transition to Unavailable if they
  fail a check
2022-09-23 09:46:11 -04:00
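The transition rules in that list read as a small state machine; a sketch (the state spellings are illustrative):

```python
# States managed by the provisioning machinery; health checks never move them.
BYPASS = {"provisioning", "provision-fail", "deprovisioning", "deprovision-fail"}

def next_state(current, healthy):
    if current in BYPASS:
        return current
    if current in ("unavailable", "installed") and healthy:
        return "ready"
    if current == "ready" and not healthy:
        return "unavailable"
    return current
```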
Jeff Bradberry
81e68cb9bf Update node and link registration to put them in the right state
'Installed' for the nodes, 'Established' for the links.
2022-09-23 09:46:11 -04:00
Jeff Bradberry
a575f17db5 Add the state fields and the peer relationships to the serializers 2022-09-23 09:46:11 -04:00
Jeff Bradberry
2fba3db48f Add state fields to Instance and InstanceLink
Also, listener_port to Instance.
2022-09-23 09:46:11 -04:00
Alan Rominger
ff6fb32297 Merge pull request #12875 from ansible/feature-prompt-on-launch-on-templates
Feature prompt on launch on templates
2022-09-23 09:16:02 -04:00
Oleksii Baranov
4c64fb3323 Ensure schedule collection test has enough hosts for slices 2022-09-22 16:08:23 -04:00
John Westcott IV
1cfbc02d98 Collection test fixes from prompting changes
DNE can sometimes be dne depending on versions; fixing the test to find either

Adding additional node to Demo Inventory for job slice counting
2022-09-22 16:08:23 -04:00
Alan Rominger
e231e08869 Fix bug with missing parent field and diff with parent
Remove corresponding views for job instance_groups

Validate job_slice_count in API

Remove defaults from some job launch view prompts
  the null default is preferable
2022-09-22 16:08:23 -04:00
mabashian
e069150fbf Removes fetching of default instance groups in the UI on launch and schedule/node creation 2022-09-22 16:08:23 -04:00
Alan Rominger
61093b2532 Treat instance_groups prompt as template-less 2022-09-22 16:08:22 -04:00
mabashian
23f4f7bb00 Remove duplicate Limit detail on schedule
Bumps migration number from 0168 to 0169

Make labels and IGs requests synchronously when getting launch data

Moves label creation out to a util
2022-09-22 16:08:22 -04:00
Alan Rominger
816e491d17 Fix another bug applying extra_vars to incompatible job types 2022-09-22 16:08:22 -04:00
John Westcott IV
dca27b59c9 Fixing is_detached methods' filters 2022-09-22 16:08:22 -04:00
Sarabraj Singh
7de5f77262 adding test coverage to ensure that FIELDS_TO_PRESERVE_AT_COPY is behaving as expected for WFJTs 2022-09-22 16:08:22 -04:00
John Westcott IV
86e7151508 Get more specific as to which timeout caused the issue 2022-09-22 16:08:21 -04:00
John Westcott IV
75597cf29c Altering --timeout from awxkit to --action-timeout to remove conflict with new launch timeout 2022-09-22 16:08:21 -04:00
Oleksii Baranov
d07177be9c Add additional schedule fields for new prompts 2022-09-22 16:08:21 -04:00
Alan Rominger
b38e08174a Write logic to combine workflow labels, IGs with nodes
Additionally, move the inventory-specific hacks of yesteryear
  into the prompts_dict method of the WorkflowJob model
  try to make it clear exactly what this is hacking and why

Correctly summarize label prompts, and add missing EE

Expand unit tests to apply more fields

adding missing fields to preserve during copy to workflow.py

Fix bug where empty workflow job vars blanked node vars (#12904)

* Fix bug where empty workflow job vars blanked node vars

* Fix bug where workflow job has no extra_vars, add test

* Add empty workflow job extra vars to assure fix
2022-09-22 16:08:07 -04:00
John Westcott IV
b501b30db4 Changing label functions to account for new relationships
Removing unreferenced get_orphaned_labels

Forcing forks and job_slice_count to be >=0
2022-09-22 16:08:06 -04:00
Alan Rominger
64dad61b29 Add support for instance_groups and labels on schedule create 2022-09-22 16:08:06 -04:00
Sarabraj Singh
2369dc9621 adding fix for labels pushdown on workflow job nodes 2022-09-22 16:08:06 -04:00
Alan Rominger
ef90adb67e Complete consolidation of the label views 2022-09-22 16:08:06 -04:00
John Westcott IV
a528a78e0e Fixing serializers per review
Removing try/except around instance_groups

Removing redefined execution_environment

Reordering labels/creds/igs/ee/etc

Removing special treatment for EEs when doing setattrs

Adding help_text to execution environments

Adding EE serializer on JobCreateScheduleSerializer
2022-09-22 16:07:53 -04:00
Oleksii Baranov
ffe970aee5 Added instance_groups method to the awxkit models
Also added additional payload fields to the wfjt model.
2022-09-22 15:58:16 -04:00
Oleksii Baranov
4579ab0d60 Add new add_label method to the wfjt node and schedules awxkit models 2022-09-22 15:58:16 -04:00
John Westcott IV
efeeeefd4c Removing labels and instance_groups from the job serializer page as top level items (still in summary fields) 2022-09-22 15:58:16 -04:00
John Westcott IV
c1b20a8ba7 Removing non-functional lines 2022-09-22 15:58:15 -04:00
mabashian
2a30a9b10f Add more ui unit test coverage for prompt changes
Flips default job/skip tags value from empty string to null on WF form
2022-09-22 15:58:15 -04:00
Alan Rominger
34e8087aee DRY edits to access classes for new prompts
Remove if-not-data conditional from WFJTnode.can_change
these are canonical for can_add, but this looks like a bug

Change JTaccess.can_unattach to call same method in super()
  previously called can_attach, which is problematic

Better consolidate launch config m2m related checks

Test and fix pre-existing WFJT node RBAC bug

recognize not-provided instance group list on launch, avoiding bug where it fell back to default

fix bug where timeout field was saved on WFJT nodes after creating approval node

remove labels from schedule serializer summary_fields

remove unnecessary prefetch of credentials from WFJT node queryset
2022-09-22 15:58:15 -04:00
mabashian
ead56bfa1b Adds elements and identifiers for cypress tests
Properly display instance groups and labels on node details view
2022-09-22 15:58:15 -04:00
John Westcott IV
d63c940e2f Changing migration from 0167 to 0168
Fixing linting error
2022-09-22 15:58:12 -04:00
mabashian
e05eaeccab Fixes for various prompt related ui issues
Fixes bug where Forks showed up in both default values and prompted values in launch summary

Fixes prompting IGs with defaults on launch

Make job tags and skip tags full width on workflow form

Fixes bug where we attempted to fetch instance groups for workflows

Fetch default instance groups from jt/schedule for schedule form prompt

Grab default IGs when adding a node that prompts for them

Adds support for saving labels on a new wf node

Fix linting errors

Fixes for various prompt on launch related issues

Adds support for saving instance groups on a new node

Adds support for saving instance groups when editing an existing node

Fix workflowReducer test

Updates useSelected to handle a non-empty starting state

Fixes visualizerNode tests

Fix visualizer test

Second batch of prompt related ui issues:

Fixes bug saving existing node when instance groups is not promptable

Fixes bug removing newly added label

Adds onError function to label prompt

Fixes tooltips on the other prompts step

Properly fetch all labels to show on schedule details
2022-09-22 15:55:02 -04:00
John Westcott IV
e076f1ee2a Making labels additive and not adding a many-to-many item to the config if it is already in the parent 2022-09-22 15:39:49 -04:00
Alan Rominger
68e11d2b81 Add WorkflowJob.instance_groups and distinguish from char_prompts
This removes a loop that ran on import
  the loop was giving the wrong behavior
  and it initialized too many fields as char_prompts fields

With this, we will now enumerate the char_prompts type fields manually
2022-09-22 15:39:49 -04:00
mabashian
697193d3d6 Extends LabelSelect to have a custom chip render. This allows us to disable labels that cannot be removed on job launch 2022-09-22 15:39:49 -04:00
John Westcott IV
4f5596eb0c Adding unit/functional tests, fixing tests
Making common class for LabelList

Fixing related field name

Fixing get_effective_slice_ct to look for correct field and also override _eager_field
2022-09-22 15:39:16 -04:00
mabashian
42a7866da9 Cleanup UI linting, tests, and import
Cleans up UI linting errors

Fix broken UI unit tests

Adds missing LabelsMixin import
2022-09-22 15:37:31 -04:00
John Westcott IV
809df74050 Adding EE/IG/labels/forks/timeout/job_slice_count to schedules
Modifying schedules to work with related fields

Updating awx.awx.workflow_job_template_node
2022-09-22 15:35:27 -04:00
Oleksii Baranov
2e217ed466 Add awxkit optional fields for new prompts
Added additional fields for awxkit to support prompts:
 * ee
 * labels
 * forks
 * timeout
 * ig
 * job_slices
2022-09-22 15:23:57 -04:00
mabashian
d5d24e421b Leverage the IG mixin on the schedules model
Move associate/disassociate label methods into mixin

Move label/IG saving out to related endpoints off of a schedule
2022-09-22 15:23:12 -04:00
Sarabraj Singh
663ef2cc64 adding prompt-to-launch field on Labels field in Workflow Templates; with necessary UI and testing changes
Co-authored-by: Keith Grant <keithjgrant@gmail.com>
2022-09-22 15:18:47 -04:00
mabashian
4e665ca77f Change ask_job_slicing_on_launch to ask_job_slice_count_on_launch to match api
Adds support for prompting labels on launch in the UI

Fix execution environment prompting in UI

Round out support for prompting all the things on JT launch

Adds timeout to job details

Adds fetchAllLabels to JT/WFJT data models

Moves labels methods out to a mixin so they can be shared across JTs/WFJTs/Schedules

Fixes bug where ee was not being sent on launch

Adds the ability to prompt for ee's, ig's, labels, timeout and job slicing to schedules

Fixes bug where saving schedule form without opening the prompt would throw errors

Adds support for IGs and labels to workflow node prompting

Adds support for label prompting to node modal

Fix job template form tests
2022-09-22 15:18:23 -04:00
John Westcott IV
33c0fb79d6 JT param everything (#12646)
* Making almost all fields promptable on job templates and config models
* Adding EE, IG and label access checks
* Changing jobs preferred instance group function to handle the new IG cache field
* Adding new ask fields to job template modules
* Address unit/functional tests
* Adding migration file
2022-09-22 15:16:12 -04:00
mabashian
04d0e3915c Refactors EE Lookup to support prompting. Adds prompting for EE to JT form
Adds prompt on launch buttons to labels, forks, job slicing, timeout, and instance groups

Adds prompting for labels on workflow job template

Updates flags that denote when prompting is necessary in various places

Adds prompting support for timeout, job slicing, forks, labels, instance groups and execution environments to the prompt details

Show prompted ee, forks, job slice and labels on schedule details

Adds support for ee, labels, forks, job slicing and timeout prompting to the node view modal

Add default values when prompting for ee's, forks, job slicing and timeout

Adds launch prompt step for execution environments

Adds fields for timeout, job slicing and forks to other prompts step of launch
2022-09-22 15:16:08 -04:00
Alex Corey
a27680f7e9 Merge pull request #12727 from akira6592/improve-badge
Improves visibility of workflow approval notification bell
2022-09-22 10:13:13 -04:00
Alex Corey
4072b2786a Merge pull request #12935 from AlexSCorey/fixDependabotWorflow
Fixes workflow that updates dependabot PRs
2022-09-22 09:35:31 -04:00
Sunidhi-Gaonkar1
d0b95c063b Adding ppc64le support parameters 2022-09-22 15:28:01 +05:30
Alex Corey
948d300f43 Fixes workflow that updates dependabot PRs 2022-09-21 12:47:35 -04:00
Rick Elrod
1b9326888e [proj signing] Fix error message, rename action (#12926)
- Fix out of scope variable in error message in the action plugin
- Rename action plugin from playbook_integrity to verify_project

Refs #12887 which pointed out the out of scope variable

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-20 14:25:40 -05:00
Jessica Steurer
d67aef9d8e Merge pull request #12885 from john-westcott-iv/remove_extra_plugin_routing
Remove extra redirects from the runtime.yml
2022-09-19 16:58:12 -03:00
Jessica Steurer
358024d029 Merge pull request #12849 from AlexSCorey/12413-trackSCMInventory
Adds project revision hash to inventory source views
2022-09-19 14:19:41 -03:00
Sarah Akus
9df447fe75 Merge pull request #12778 from keithjgrant/12542-schedule-exceptions
Schedule exceptions
2022-09-15 16:28:56 -04:00
Keith J. Grant
7e7991bb63 adjust DetailList spacing when two appear in succession 2022-09-15 09:37:03 -07:00
Keith J. Grant
35e9d00beb improve frequency validation performance 2022-09-14 15:33:00 -07:00
Elijah DeLee
461b5221f3 Add graphs for job event processing to dashboard 2022-09-14 16:23:53 -04:00
Elijah DeLee
10d06f219d add alerting rule to grafana
This rule alerts if the redis queue is larger than the rolling
average event insertion rate/second * 120. In other words, if the redis
queue holds more events than it appears we can process in two minutes.

It appears it has to meet this condition for 60 seconds to start firing.

Future commits will address how to configure contact points like slack.

shout out to @jainnikhil30 and @rebeccahhh who figured this out in jam
session this morning.
2022-09-14 16:23:53 -04:00
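The alert condition reduces to a one-line comparison; a sketch of the arithmetic (the 60-second sustained-firing duration lives in the Grafana rule itself, not here):

```python
def should_alert(queue_depth, inserts_per_second, window_seconds=120):
    # Fire when the redis queue holds more events than the rolling average
    # insertion rate says we can drain in two minutes.
    return queue_depth > inserts_per_second * window_seconds
```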
s-hertel
ecc4f46334 Remove extra collection redirects from the runtime.yml. The keys in plugin_routing should not be fully qualified plugin names. 2022-09-14 16:01:02 -04:00
Jessica Steurer
a227fea5ef Merge pull request #12868 from keithjgrant/12853-ws-event-duplication
Don't add ws events twice to job output
2022-09-14 16:02:07 -03:00
Jessica Steurer
3f4d0bc15d Merge pull request #12788 from AlexSCorey/5941-Translations
Ensures that strings in helpText files do not miss being translated
2022-09-14 12:02:51 -03:00
Rick Elrod
0812425671 [ui] Minor tweak to capitalize GPG properly (#12734)
"GPG Public Key", not "Gpg Public Key"

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-14 01:37:09 +00:00
Alex Corey
94344c0214 Merge pull request #12859 from AlexSCorey/updateCanIUse-lite
updates CanIUseLite
2022-09-13 13:48:20 -04:00
Keith J. Grant
16da9b784a add schedule integration test locators 2022-09-12 16:30:46 -07:00
Keith J. Grant
1e952bab95 fix error message on new schedules with no instances 2022-09-12 12:58:25 -07:00
Jake Jackson
484db004db Update Kind Docs (#12865)
* update kind docs formatting and update some commands

* add tested on fedora update
2022-09-12 13:04:04 -04:00
Alex Corey
7465d7685f updates CanIUseLite 2022-09-09 11:17:54 -04:00
Alex Corey
15fd5559a7 Adds scm track to inventory updates, refactors job detail view in UI 2022-09-09 11:15:39 -04:00
Seth Foster
f0c125efb3 Merge pull request #12762 from akira6592/fix-doc-link
fix link of Patternfly style guide
2022-09-09 09:52:00 -04:00
Keith J. Grant
2d39b81e12 don't add ws events twice to job output 2022-09-08 16:09:02 -07:00
Akira Yokochi
1044d34d98 fix link on doc 2022-09-08 22:49:11 +00:00
Rick Elrod
63567fcc52 [sig validation] better error for job template run (#12735)
When launching a job template, if the last project update failed due to
signature validation, show an error that actually says that.

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-08 02:13:41 -05:00
akira6592
492ef6cf64 fix import order 2022-09-08 13:22:57 +09:00
akira6592
9041dc9dcd use NotificationBadge instead of Badge on header 2022-09-08 13:22:56 +09:00
akira6592
78973f845b made it easier to notice unapproved 2022-09-08 13:22:56 +09:00
Matthew Jones
cea8c16064 Merge pull request #12724 from mtward/issue-11605
Fix: preserve_existing_hosts flag in awx.awx.group module retains only 25 existing hosts when adding a new host to an inventory group, related #11605
2022-09-07 20:23:58 -04:00
John Westcott IV
e7c97923a3 Merge pull request #12785 from jangel97/devel
Fix list_instances command
* Change from modified to last seen
2022-09-07 14:48:38 -04:00
Keith J. Grant
078c3ae6d8 add schedule form validation to ensure at least one occurrence 2022-09-07 10:33:16 -07:00
Rick Elrod
1ab3dba476 Add "cryptography" kind to CredentialType (#12842)
This was missed when we landed #12813. Adds cryptography
kind to the CredentialType allowed kinds list, which now
produces the proper error message when attempting to PUT
to modify the managed credential type.

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-07 12:22:47 -05:00
Alan Rominger
15964dc395 Merge pull request #11745 from AlanCoding/cancel_rework_no_close
Close database connections while processing job output
2022-09-06 15:45:29 -04:00
Keith Grant
b83b65da16 clear output follow mode flag on search (#12791) 2022-09-06 15:15:06 -04:00
Alan Rominger
430f1986c7 Merge pull request #12830 from AlanCoding/dev_stuff
Fix LDAP volume conditional, better metrics interval
2022-09-06 11:51:51 -04:00
Alex Corey
c589f8776c Fixes possible missed translation 2022-09-06 11:26:41 -04:00
Jose Angel Morena
82679ce9a3 replace modified with last_seen in heartbeat 2022-09-06 17:14:19 +02:00
Lila Yasin
6d2e28bfb0 [collection] Add GPG key information to inputs and credential types in documentation. (#12817) 2022-09-06 10:05:36 -05:00
Luiz Costa
7a4da5a8fa Add GPG credential support to awxkit 2022-09-06 10:05:36 -05:00
Rick Elrod
c475a7b6c0 [ui] make signature cred. field be project-global (#12695)
Rather than only allowing the signature credential to be specified on
projects using git, allow it to be specified on any project at all.

This moves the field to always show, and moves it out of the git
subform.

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-06 10:05:36 -05:00
Rick Elrod
32bb603554 Update action plugin to use ansible-sign library
Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-06 10:05:36 -05:00
Rick Elrod
8d71292d1a Integrity checking on project sync
Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-06 10:05:36 -05:00
Veda Periwal
e896dc1aa7 Add Content Signature Validation Credential field to Projects Form page and Projects Detail page 2022-09-06 10:05:36 -05:00
Hao Liu
f5a2246817 add new managed credential type for gpg pub key
add new managed credential type for gpg pub key
add migration file to setup managed credential types to add the new credential type

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-09-06 10:05:36 -05:00
Hao Liu
c467b6ea13 add signature_validation_credential to Project
add new column to `main_project` table
- `signature_validation_credential`

update project module for awx_collection
- added input arg for `signature_validation_credential`

Co-Authored-By: Lila Yasin  <89486372+djyasin@users.noreply.github.com>
2022-09-06 10:05:36 -05:00
Alex Corey
1636f6b196 Merge pull request #12835 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/patternfly-4.210.2
Bump @patternfly/patternfly from 4.202.1 to 4.210.2 in /awx/ui
2022-09-06 10:33:00 -04:00
Alex Corey
5da528ffbb Merge pull request #12834 from ansible/dependabot/npm_and_yarn/awx/ui/devel/ace-builds-1.10.1
Bump ace-builds from 1.8.1 to 1.10.1 in /awx/ui
2022-09-06 10:30:46 -04:00
Alex Corey
2e65ae49a5 Merge pull request #12806 from ansible/dependabot/npm_and_yarn/awx/ui/devel/luxon-3.0.3
Bump luxon from 3.0.1 to 3.0.3 in /awx/ui
2022-09-06 10:15:08 -04:00
Alex Corey
d06bc815f8 Merge pull request #12807 from ansible/dependabot/npm_and_yarn/awx/ui/devel/dompurify-2.4.0
Bump dompurify from 2.3.10 to 2.4.0 in /awx/ui
2022-09-06 10:14:28 -04:00
dependabot[bot]
0290784f9b Bump @patternfly/patternfly from 4.202.1 to 4.210.2 in /awx/ui
Bumps [@patternfly/patternfly](https://github.com/patternfly/patternfly) from 4.202.1 to 4.210.2.
- [Release notes](https://github.com/patternfly/patternfly/releases)
- [Changelog](https://github.com/patternfly/patternfly/blob/main/RELEASE-NOTES.md)
- [Commits](https://github.com/patternfly/patternfly/compare/prerelease-v4.202.1...prerelease-v4.210.2)

---
updated-dependencies:
- dependency-name: "@patternfly/patternfly"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-06 14:13:52 +00:00
dependabot[bot]
1cc52afc42 Bump ace-builds from 1.8.1 to 1.10.1 in /awx/ui
Bumps [ace-builds](https://github.com/ajaxorg/ace-builds) from 1.8.1 to 1.10.1.
- [Release notes](https://github.com/ajaxorg/ace-builds/releases)
- [Changelog](https://github.com/ajaxorg/ace-builds/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ajaxorg/ace-builds/compare/v1.8.1...v1.10.1)

---
updated-dependencies:
- dependency-name: ace-builds
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-06 14:13:17 +00:00
Alex Corey
88f7f987cd Merge pull request #12810 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-table-4.100.8
Bump @patternfly/react-table from 4.93.1 to 4.100.8 in /awx/ui
2022-09-06 10:12:01 -04:00
Alan Rominger
f512971991 Add project sync to job cancel chain 2022-09-05 22:29:19 -04:00
Alan Rominger
53de245877 Fix LDAP volume conditional, better metrics interval 2022-09-04 22:33:12 -04:00
Shane McDonald
749622427c Merge pull request #12825 from shanemcd/extend-includes
Extend black excludes instead of overriding
2022-09-02 15:40:41 -04:00
Alan Rominger
725d6fa896 Merge pull request #12820 from AlanCoding/five_seconds
Make the metrics default sampling interval 5s
2022-09-02 15:21:57 -04:00
Shane McDonald
a107bb684c Extend black excludes instead of overriding
By default it will ignore things in .gitignore, which we want
2022-09-02 15:11:45 -04:00
Alan Rominger
ccbc8ce7de Make the metrics default sampling interval 5s 2022-09-02 13:38:49 -04:00
Shane McDonald
260e1d4f2d Make static asset location consistent across all deployments (#12819) 2022-09-02 17:12:06 +00:00
Shane McDonald
1afa49f3ff Merge pull request #12632 from TheRealHaoLiu/kind-k8s-devel
Add documentation for running development environment in kind
2022-09-02 12:12:01 -04:00
Rick Elrod
6f88ea1dc7 Common Inventory slicing method for job slices
- Extract how slicing is done from Inventory#get_script_data and pull it
  into a new method, Inventory#get_sliced_hosts
- Make use of this method in Inventory#get_script_data
- Make use of this method in Job#_get_inventory_hosts (used by
  Job#start_job_fact_cache and Job#finish_job_fact_cache).

This fixes an issue (namely in Tower 4.1) where job slicing with fact
caching enabled doesn't save facts for all hosts.

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-09-01 16:15:07 -05:00
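A sketch of one standard way to slice hosts consistently, assuming slice k of N takes every Nth host starting at position k-1, so slices are disjoint and together cover every host; the real method works against querysets, but the arithmetic would be the same:

```python
def get_sliced_hosts(hosts, slice_number, slice_count):
    # Slice k of N: every Nth host starting at position k-1.
    if slice_count <= 1:
        return list(hosts)
    return list(hosts)[slice_number - 1::slice_count]

# get_sliced_hosts(["a", "b", "c", "d", "e"], 2, 3) -> ["b", "e"]
```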
Alan Rominger
c59bbdecdb Refactor canceling to work through messaging and signals, not database
If a cancel was attempted before, still allow attempting another cancel;
in this case, attempt to send the SIGTERM signal again.
Keep clicking, you might help!

Replace other cancel_callbacks with sigterm watcher
  adapt special inventory mechanism for this too

Get rid of the cancel_watcher method with exception in main thread

Handle academic case of sigterm race condition

Process cancelation as control signal

Fully connect cancel method and run_dispatcher to control

Never transition workflows directly to canceled, add logs
2022-09-01 15:20:31 -04:00
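The sigterm-watcher idea in miniature, as an illustrative sketch (not the actual dispatcher code):

```python
import signal

class SigtermWatcher:
    # The job-runner loop polls cancel_requested between units of work
    # instead of querying the database for a cancel flag.
    def __init__(self):
        self.cancel_requested = False
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        self.cancel_requested = True
```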
Matthew Jones
f9428c10b9 Merge pull request #12803 from matburt/fix_cleanup_schedules
Fix an issue where default cleanup schedules only run once
2022-09-01 10:40:11 -04:00
dependabot[bot]
1ca054f43d Bump @patternfly/react-table from 4.93.1 to 4.100.8 in /awx/ui
Bumps [@patternfly/react-table](https://github.com/patternfly/patternfly-react) from 4.93.1 to 4.100.8.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-table@4.93.1...@patternfly/react-table@4.100.8)

---
updated-dependencies:
- dependency-name: "@patternfly/react-table"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-01 08:09:44 +00:00
dependabot[bot]
374f76b527 Bump dompurify from 2.3.10 to 2.4.0 in /awx/ui
Bumps [dompurify](https://github.com/cure53/DOMPurify) from 2.3.10 to 2.4.0.
- [Release notes](https://github.com/cure53/DOMPurify/releases)
- [Commits](https://github.com/cure53/DOMPurify/compare/2.3.10...2.4.0)

---
updated-dependencies:
- dependency-name: dompurify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-01 08:08:10 +00:00
dependabot[bot]
f9dd5e0f1c Bump luxon from 3.0.1 to 3.0.3 in /awx/ui
Bumps [luxon](https://github.com/moment/luxon) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/moment/luxon/releases)
- [Changelog](https://github.com/moment/luxon/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moment/luxon/compare/3.0.1...3.0.3)

---
updated-dependencies:
- dependency-name: luxon
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-09-01 08:07:33 +00:00
Matthew Jones
bb7509498e Fix an issue where default cleanup schedules only run once
This looks like an oversight that has existed for a long time. We intend to run these on a pretty regular basis.
2022-08-31 20:10:20 -04:00
Keith Grant
8a06ffbe15 poll for events processing completion (#12689) 2022-08-31 16:03:35 -04:00
Hao Liu
8ad948f268 Merge pull request #12797 from TheRealHaoLiu/remove-helm-from-dockerfile
remove helm from dockerfile template
2022-08-31 14:18:25 -04:00
Hao Liu
73f808dee7 remove helm from dockerfile template
Signed-off-by: Hao Liu <haoli@redhat.com>
2022-08-31 13:48:30 -04:00
Shane McDonald
fecab52f86 Merge pull request #12796 from shanemcd/fix-tests
Prevent openldap from getting downgraded during build
2022-08-31 13:34:04 -04:00
Shane McDonald
609c67d85e Prevent openldap from getting downgraded during build
We noticed here that openldap was getting downgraded, which caused our test suite to blow up: https://github.com/ansible/awx/runs/8118323342?check_suite_focus=true
2022-08-31 13:09:29 -04:00
Keith J. Grant
0005d249c0 update tests 2022-08-30 15:44:52 -07:00
Hao Liu
8828ea706e add make target for building custom awx kube image (#12789) 2022-08-30 20:19:36 +00:00
Shane McDonald
4070ef3f33 Merge pull request #12787 from shanemcd/pre-build-ui
Speed up image build when UI is pre-built on host
2022-08-30 15:51:43 -04:00
Keith Grant
39f6e2fa32 fix TypeError when config is undefined (#12697) 2022-08-30 15:11:45 -04:00
Shane McDonald
1dfdff4a9e Speed up image build when UI is pre-built on host 2022-08-30 12:36:25 -04:00
Alan Rominger
310e354164 Merge pull request #12769 from AlanCoding/self_conn
Fix sanity check to use the relevant active connection
2022-08-29 20:36:48 -04:00
Keith J. Grant
dda2931e60 fix exception frequency placeholder text 2022-08-29 13:43:49 -07:00
Alan Rominger
6d207d2490 Merge pull request #12754 from kdelee/fix_metrics_consumed_capacity
calculate consumed capacity the same way in metrics
2022-08-29 16:37:53 -04:00
Alan Rominger
01037fa561 Fix sanity check to use the relevant active connection 2022-08-29 16:33:07 -04:00
Alan Rominger
61f3e5cbed Merge pull request #12702 from AlanCoding/poll_cancel
Check exit conditions in loop waiting for project flock
2022-08-29 16:29:39 -04:00
Alan Rominger
44995e944a Merge pull request #12766 from AlanCoding/lazy_no_more
Revert "Merge pull request #12584 from AlanCoding/lazy_workers"
2022-08-29 16:06:50 -04:00
Keith J. Grant
4a92fcfc62 add schedule exceptions to details 2022-08-29 11:55:32 -07:00
Elijah DeLee
d3f15f5784 Merge pull request #4 from AlanCoding/elijah_metrics
Minor changes to instance loop structure
2022-08-29 14:33:46 -04:00
Alan Rominger
2437a84b48 Minor changes to instance loop structure 2022-08-29 14:28:50 -04:00
Shane McDonald
696f099940 Merge pull request #12749 from shanemcd/not-so-aggressive
Make error handling less aggressive when checking status of dispatcher task
2022-08-29 11:50:56 -04:00
Shane McDonald
3f0f538c40 Merge pull request #12759 from shanemcd/auto-prom
Automate bootstrapping of Prometheus in the development environment
2022-08-29 11:25:13 -04:00
Shane McDonald
66529d0f70 Automate bootstrapping of Prometheus in the development environment 2022-08-29 09:39:44 -04:00
Alan Rominger
974f845059 Revert "Merge pull request #12584 from AlanCoding/lazy_workers"
This reverts commit 64157f7207, reversing
changes made to 9e8ba6ca09.
2022-08-28 23:04:13 -04:00
Keith J. Grant
f6b3413a11 add schedule exemptions to form 2022-08-26 16:00:08 -07:00
Shane McDonald
b4ef687b60 Merge pull request #12760 from shanemcd/another-domino-falls
Fix browsable API in development environment
2022-08-26 17:43:37 -04:00
Shane McDonald
2ef531b2dc Fix browsable API in development environment
Fallout from https://github.com/ansible/awx/pull/12722
2022-08-26 17:19:16 -04:00
Elijah DeLee
125801ec5b add panel to grafana dashboard for capacity
also reorganize so there are two columns of panels, not
just one long skinny set of panels
2022-08-26 15:42:40 -04:00
Shane McDonald
691d9d7dc4 Merge pull request #12755 from shanemcd/fix-dev-env-admin-pw
Fix auto-generated dev env admin password
2022-08-26 13:33:43 -04:00
Shane McDonald
5ca898541f Fix auto-generated dev env admin password
Fallout from https://github.com/ansible/awx/pull/12753
2022-08-26 13:07:46 -04:00
Shane McDonald
24821ff030 Merge pull request #12753 from shanemcd/custom-dev-env-admin-pw
Allow for setting custom admin password in dev environment
2022-08-26 11:55:17 -04:00
Elijah DeLee
99815f8962 calculate consumed capacity the same way in metrics
We should be consistent about this. This also takes us from doing as
many queries to the UnifiedJob table as we have instances to doing one
query to the UnifiedJob table (and both do one query to the Instances table)
2022-08-26 11:40:36 -04:00
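A hedged sketch of the single grouped query, using the Django ORM; the model and field names here are illustrative, not AWX's actual schema:

```python
from django.db.models import Sum

def consumed_capacity_by_node(UnifiedJob):
    # One query: group active jobs by node and sum their capacity impact,
    # instead of one filtered query per instance.
    rows = (
        UnifiedJob.objects.filter(status__in=("running", "waiting"))
        .values("execution_node")
        .annotate(consumed=Sum("task_impact"))  # illustrative field name
    )
    return {row["execution_node"]: row["consumed"] or 0 for row in rows}
```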
Shane McDonald
d752e6ce6d Allow for setting custom admin password in dev environment 2022-08-26 11:29:11 -04:00
Shane McDonald
457dd890cb Make error handling less aggressive when checking status of dispatcher task 2022-08-26 09:05:38 -04:00
Christian Adams
4fbf5e9e2f Merge pull request #12731 from rooftopcellist/fix-messages-target
Fix make target for compiling api strings
2022-08-24 17:01:43 -04:00
Christian M. Adams
687b4ac71d Fix make target for compiling api strings 2022-08-24 16:36:25 -04:00
John Westcott IV
a1b364f80c Configuring Keycloak to also do OIDC (#12700) 2022-08-24 07:08:39 -04:00
mtward
271938c5fc Update group.py 2022-08-23 15:06:11 -04:00
Jessica Steurer
ff49cc5636 Merge pull request #12552 from whitej6/jlw-generic-oidc
Implement Generic OIDC Provider
2022-08-23 15:38:43 -03:00
Shane McDonald
9946e644c8 Merge pull request #12722 from shanemcd/fix-static-root
Fix STATIC_ROOT in defaults
2022-08-23 12:58:12 -04:00
Shane McDonald
1ed7a50755 Fix STATIC_ROOT in defaults
Reasoning:

- This is breaking the UI in official image builds of devel
- This is always being overridden in our packaging
- PROJECTS_ROOT and JOBOUTPUT_ROOT also hardcode /var/lib/awx
2022-08-23 12:39:54 -04:00
Jeremy White
9f3396d867 rebasing 2022-08-23 09:51:04 -05:00
John Westcott IV
bcd018707a Adding ability to auto-apply community label to PRs and Issues (#12718) 2022-08-23 07:08:24 -04:00
Shane McDonald
a462978433 Merge pull request #12699 from shanemcd/remove-settings-py-during-build
Remove need for settings.py during image build
2022-08-22 14:13:36 -04:00
Shane McDonald
6d11003975 Remove need for settings.py during image build 2022-08-22 13:46:42 -04:00
Shane McDonald
017e474325 Merge pull request #12704 from shanemcd/dynamic-log-config
Consolidate and refactor logging configuration code
2022-08-22 13:31:28 -04:00
Alex Corey
5d717af778 Merge pull request #12713 from AlexSCorey/CustomizeDependatPRBodies
Edits existing PR body
2022-08-22 12:24:25 -04:00
Alex Corey
8d08ac559d Puts new pr string on a new line 2022-08-22 12:05:43 -04:00
Shane McDonald
4e24867a0b Merge pull request #12703 from shanemcd/ded-code
Delete unused playbook profiling code
2022-08-22 11:33:37 -04:00
Alex Corey
2b4b8839d1 Edits existing PR body 2022-08-22 11:31:49 -04:00
Yuki Yamashita
dba33f9ef5 Replace gethostbyname with getaddrinfo for plugin IPv6 support, related #11450 (#12561)
Co-authored-by: yukiy <yyamashi@redhat.com>
2022-08-22 11:07:10 -03:00
Julen Landa Alustiza
db2649d7ba Merge pull request #12706 from ansible/revert-12692-mop_up
Revert "Fix errors in websocket code due to missing template"
2022-08-22 15:53:35 +02:00
Alan Rominger
edc3da85cc Revert "Fix errors in websocket code due to missing template" 2022-08-20 19:09:57 -04:00
Alan Rominger
2357e24d1d Merge pull request #12701 from AlanCoding/no_more_schedules
Make schedule teardown more reliable
2022-08-20 07:05:21 -04:00
Shane McDonald
e4d1056450 Change log level for UnifiedJob#log_lifecycle 2022-08-19 17:56:17 -04:00
Shane McDonald
37d9c9eb1b Consolidate and refactor logging configuration code 2022-08-19 17:16:27 -04:00
Shane McDonald
d42a85714a Delete unused playbook profiling code
We haven't had this feature since pre-AWX 18 (since EEs were introduced) and I can't find any other reference to this.
2022-08-19 17:03:22 -04:00
Alan Rominger
88bf03c6bf Check exit conditions in loop waiting for project flock 2022-08-19 16:08:56 -04:00
Alan Rominger
4b8a56be39 Make schedule teardown more reliable 2022-08-19 15:42:00 -04:00
Alan Rominger
2aa99234f4 Merge pull request #12692 from AlanCoding/mop_up
Fix errors in websocket code due to missing template
2022-08-19 14:46:10 -04:00
Michael Abashian
bf9f1b1d56 Added more context to subscription details and rearrange the order of some of the fields (#12649)
* Adds more context to subscription details and rearranges some of the fields

* Fixes broken unit test after updating subscription details
2022-08-19 09:41:23 -04:00
Alan Rominger
704e4781d9 Fix errors in websocket code due to missing template 2022-08-18 14:05:06 -04:00
Alan Rominger
4a8613ce4c Avoid updating modified_by from None to None (#11838)
This should help the case of inventory updates in particular
  where imported hosts are managed by the system
2022-08-18 11:39:29 -04:00
Alan Rominger
e87fabe6bb Submit job to dispatcher as part of transaction (#12573)
Make it so that submitting a task to the dispatcher happens as part of the transaction.
  this applies to dispatcher task "publishers" which NOTIFY the pg_notify queue
  if the transaction is not successful, it will not be sent, as per postgres docs

This keeps current behavior for pg_notify listeners
  practically, this only applies for the awx-manage run_dispatcher service
  this requires creating a separate connection and keeping it long-lived
  arbitrary code will occasionally close the main connection, which would stop listening

Stop sending the waiting status websocket message
  this is required because the ordering cannot be maintained with other changes here
  the instance group data is moved to the running websocket message payload

Move call to create_partition from task manager to pre_run_hook
  mock this in relevant unit tests
2022-08-18 09:43:53 -04:00
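The transactional-publisher property rests on NOTIFY being transactional in postgres; a minimal psycopg2 sketch (the channel and payload shape are illustrative):

```python
import json
import psycopg2  # assumed driver; any transactional pg client works

def submit_task(conn, channel, task):
    # `with conn` commits on success and rolls back on error; a rolled-back
    # transaction also discards the NOTIFY, so an unsaved job is never dispatched.
    with conn:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_notify(%s, %s)", (channel, json.dumps(task)))
```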
Alan Rominger
532aa83555 Merge pull request #11833 from AlanCoding/facts_update_fields
Use update_fields for Ansible facts update
2022-08-17 22:37:45 -04:00
Alan Rominger
d87bb973d5 Merge pull request #12090 from AlanCoding/mind_your_own_business
Avoid parent instance update when status was unchanged
2022-08-17 22:29:31 -04:00
Alan Rominger
a72da3bd1a Merge pull request #12582 from AlanCoding/clean_and_forget
Move reaper logic into worker, avoiding bottlenecks
2022-08-17 18:53:47 -04:00
Alan Rominger
56df3f0c2a Merge pull request #12671 from AlanCoding/cut_the_line
Avoid dependency manager for jobs with no deps
2022-08-17 18:50:52 -04:00
Alan Rominger
e0c59d12c1 Change data structure so we can conditionally reap waiting jobs 2022-08-17 16:00:30 -04:00
Alan Rominger
7645cc2707 Remove mocks for reap method that was removed 2022-08-17 15:43:29 -04:00
Alan Rominger
6719010050 Add back in cleanup call 2022-08-17 15:42:48 -04:00
Alan Rominger
ccd46a1c0f Move reaper logic into worker, avoiding bottlenecks 2022-08-17 15:42:47 -04:00
Alex Corey
cc1e349ea8 Merge pull request #12604 from ansible/dependabot/npm_and_yarn/awx/ui/devel/ace-builds-1.8.1
Bump ace-builds from 1.6.0 to 1.8.1 in /awx/ui
2022-08-17 14:11:27 -04:00
Alex Corey
e509d5f1de Merge pull request #12606 from ansible/dependabot/npm_and_yarn/awx/ui/devel/dompurify-2.3.10
Bump dompurify from 2.3.8 to 2.3.10 in /awx/ui
2022-08-17 14:10:51 -04:00
Alan Rominger
4fca27c664 Merge pull request #12289 from AlanCoding/idle_help
Correct help text for job idle timeout
2022-08-17 13:55:44 -04:00
Alan Rominger
51be22aebd Merge pull request #12668 from AlanCoding/graph_tweaks
Remove an old metrics field and add a new one to dashboard
2022-08-17 13:49:17 -04:00
Alan Rominger
54b21e5872 Avoid dependency manager for jobs with no deps 2022-08-17 13:32:59 -04:00
Alan Rominger
85beb9eb70 Merge pull request #12676 from AlanCoding/forward_picks
Stability fixes, and related logging for slowdowns in dispatcher task processing
2022-08-17 13:32:34 -04:00
Alan Rominger
56739ac246 Use delay_update to set error message, according to merge note 2022-08-17 11:45:40 -04:00
Alan Rominger
1ea3c564df Apply a failed status if cancel_flag is not set 2022-08-17 11:42:09 -04:00
Alan Rominger
621833ef0e Add extra workers if computing based on memory
Co-authored-by: Elijah DeLee <kdelee@redhat.com>
2022-08-17 11:41:59 -04:00
Shane McDonald
16be38bb54 Allow for passing custom job_explanation to reaper methods
Co-authored-by: Alan Rominger <arominge@redhat.com>
2022-08-17 11:41:49 -04:00
Shane McDonald
c5976e2584 Add setting for missed heartbeats before marking node offline 2022-08-17 11:39:30 -04:00
Shane McDonald
3c51cb130f Add grace period settings for task manager timeout, and pod / job waiting reapers
Co-authored-by: Alan Rominger <arominge@redhat.com>
2022-08-17 11:39:01 -04:00
Shane McDonald
c649809eb2 Remove debug method that calls cleanup
- It's unclear why this was here.
- Removing it doesn't appear to cause any problems.
- It still gets called during heartbeats.
2022-08-17 11:35:43 -04:00
Alan Rominger
43a53f41dd Add logs about heartbeat skew
Co-authored-by: Shane McDonald <me@shanemcd.com>
2022-08-17 11:33:59 -04:00
Alan Rominger
a3fef27002 Add logs to debug waiting bottlenecking 2022-08-17 11:33:49 -04:00
Alan Rominger
cfc1255812 Merge pull request #12442 from AlanCoding/waiting_reaper
Fix reaper false-positives on waiting jobs that are waiting for a worker
2022-08-17 11:20:05 -04:00
Alan Rominger
278db2cdde Split reaper for running and waiting jobs
Avoid running jobs that have already been reaped

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Remove unnecessary extra actions

Fix waiting jobs in other cases of reaping
2022-08-17 10:53:29 -04:00
Alan Rominger
64157f7207 Merge pull request #12584 from AlanCoding/lazy_workers
Wait 60 seconds before scaling down a worker
2022-08-17 10:18:19 -04:00
Alan Rominger
9e8ba6ca09 Merge pull request #12494 from AlanCoding/revival
Register system again if deleted by another pod
2022-08-17 10:12:39 -04:00
Alan Rominger
268ab128d7 Merge pull request #12527 from AlanCoding/offline_db
Further resiliency changes, specifically focused on case of database going offline
2022-08-17 10:10:50 -04:00
Alan Rominger
fad5934c1e Merge pull request #12356 from AlanCoding/copytree_neo
Replace git shallow clone with shutil.copytree
2022-08-17 10:07:28 -04:00
Alan Rominger
c9e3873a28 Use update_fields for Ansible facts update 2022-08-17 08:22:41 -04:00
Jessica Steurer
6a19aabd44 feature_request_form_update (#12625)
* Feature_update

* Feature_update

* update-feature-request

* update-edit
2022-08-17 08:52:30 -03:00
Alan Rominger
11e63e2e89 Remove an old metrics field and add a new one to dashboard 2022-08-16 22:37:27 -04:00
Hao Liu
7c885dcadb add help command to make (#12669)
add `make help`
that prints all available make targets
help text generated from comments above the make target starting with `##`

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-08-16 20:36:47 -04:00
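For readers unfamiliar with the pattern the commit above describes, here is a toy re-implementation in Python (the real target lives in the Makefile itself; this just illustrates the convention of pairing each target with the `##` comment lines directly above it):

```python
import re


def print_make_help(makefile_path: str = 'Makefile') -> None:
    """Toy sketch: print each make target next to the `##` comment
    lines found directly above it in the Makefile."""
    pending = []  # `##` comment lines collected since the last target
    target_re = re.compile(r'^([A-Za-z0-9_.-]+):')
    with open(makefile_path) as f:
        for line in f:
            if line.startswith('##'):
                pending.append(line[2:].strip())
                continue
            match = target_re.match(line)
            if match and pending:
                print(f"{match.group(1):<24} {' '.join(pending)}")
            pending = []


if __name__ == '__main__':
    print_make_help()
```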
John Westcott IV
b84a192bad Altering events relationship to hosts to increase performance (#12447)
Removing cascade on delete at model level that could cause locking issues.
2022-08-16 12:03:05 -04:00
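A minimal Django sketch of the kind of change the commit above describes, with hypothetical model names (the real AWX models and the exact `on_delete` strategy chosen may differ): a database-level cascade on a huge event table can lock every related row when a host is deleted, so the relationship is relaxed at the model level.

```python
from django.db import models


class Host(models.Model):
    name = models.CharField(max_length=512)


class JobEvent(models.Model):
    # Before: on_delete=models.CASCADE, which deletes (and locks) every
    # related event row whenever a host is deleted.
    # After: DO_NOTHING, leaving cleanup to a background process so host
    # deletion no longer contends with event inserts.
    host = models.ForeignKey(
        Host,
        null=True,
        on_delete=models.DO_NOTHING,
        related_name='job_events',
    )
```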
Elijah DeLee
35afb10add fix use of distinct on query made by the UI
When on the screen in the UI that loads the job events, the UI includes
a filter to exclude job events where stdout = ''. Because this is a
TextField and was not in the allow list, we were applying DISTINCT to
the query. This made it perform very poorly for large jobs, especially
on the query that gets the count and cannot put a LIMIT on the query.

Also correctly prefetch the related job_template data on the view to
cut down the number of queries we make from around 50 to under 10.

We need to analyze other similar views for other prefetch type
optimizations we should make.
2022-08-16 10:08:33 -04:00
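A hedged sketch of both fixes in the commit above, assuming a Django `JobEvent` model with a `stdout` TextField and a foreign key to `job` (names illustrative, not AWX's exact code):

```python
def job_events_for_ui(JobEvent, job_id):
    """Fetch job events the way the UI list view does.

    1) With `stdout` on the allow list, filtering on it no longer forces
       .distinct() onto the queryset; DISTINCT over large text rows made
       the COUNT query (which cannot take a LIMIT) crawl.
    2) select_related pulls the related job/template data in the same
       query, cutting the view's query count from ~50 to under 10.
    """
    return (
        JobEvent.objects.filter(job_id=job_id)
        .exclude(stdout='')
        .select_related('job')
    )
```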
Hao Liu
13fc845bcc develop AWX on MacOS using K8S
Add instructions for AWX development on MacOS using a Kind cluster

Signed-off-by: Hao Liu <haoli@redhat.com>
2022-08-15 22:48:23 -04:00
Alan Rominger
f1bd1f1dfc Merge pull request #12658 from AlanCoding/more_panels
Add more graphs for task manager refactor
2022-08-15 16:07:43 -04:00
Sarah Akus
67c9e1a0cb Merge pull request #12650 from matburt/fix_default_adhoc_verbosity
Fixed a bug where the initial form value of verbosity isn't respected
2022-08-15 15:48:49 -04:00
Alan Rominger
f6da9a5073 Add more graphs for task manager refactor 2022-08-15 15:29:34 -04:00
Seth Foster
38a0950f46 Merge pull request #12656 from fosterseth/metrics_tm_on_commit
Add metric for task manager on_commit calls
2022-08-15 13:54:34 -04:00
Seth Foster
55d295c2a6 Add metric to measure task manager transaction, including on_commit calls 2022-08-15 12:44:29 -04:00
Elijah DeLee
be45919ee4 have postgres log to console in dev env
also log slow queries and link to documentation for other possible
settings
2022-08-15 12:09:17 -04:00
mabashian
0a4a9f96c2 Explicitly set the default value for verbosity to 0, which corresponds to Normal 2022-08-12 14:03:36 -04:00
Matthew Jones
1ae1da3f9c Fix a bug where the form value of verbosity isn't respected 2022-08-12 09:29:31 -04:00
Keith Grant
cae2c06190 Complex schedules UI (#12445)
* refactor ScheduleFormFields into own file

* refactor ScheduleForm

* wip complex schedules form

* build rruleset from inputs

* update schedule form validation for multiple repeat frequencies

* add basic rrule set parsing when opening schedule form

* complex schedule bugfixes, handle edge cases, etc

* fix schedule saving/parsing for single-occurrence schedules

* working with timezone issues

* fix rrule until times to be in UTC

* update tests for new schedule form format

* update ouiaIds

* tweak schedules spacing

* update ScheduleForm tests

* show message for unsupported schedule types

* default schedules to browser timezone

* show error type/message in ErrorDetail

* shows frequencies on ScheduleDetails view

* handles nullish values
2022-08-11 16:55:52 -04:00
John Westcott IV
993dd61024 Forcing an unbind for a django-auth-ldap sticky session to the LDAP server (#12367)
* Forcing an unbind for a django-auth-ldap sticky session to the LDAP server

* Forcing _connection_bound to false after closing, and modifying exception logging
2022-08-11 16:46:41 -03:00
Alan Rominger
ea07aef73e Correct help text for job idle timeout 2022-08-11 09:39:29 -04:00
John Westcott IV
268a4ad32d Modifying reaper of administrative work units to allow for change from Controller to Hybrid nodes (#12614) 2022-08-11 09:03:35 -03:00
Sean Sullivan
3712af4df8 update role to provide better error messages (#12599) 2022-08-11 07:09:11 -04:00
Sean Sullivan
8cf75fce8c Update awx collection workflow nodes to look for type (#12597) 2022-08-11 07:08:27 -04:00
Alan Rominger
46be2d9e5b Replace git shallow clone with shutil.copytree
Introduce build_project_dir method
  the base method will create an empty project dir for workdir

Share code between job and inventory tasks with new mixin
  combine rest of pre_run_hook logic
  structure to hold lock for entire sync process

force sync to run for inventory updates due to UI issues

Remove reference to removed scm_last_revision field
2022-08-10 16:18:56 -04:00
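A minimal sketch of the swap described above, with assumed paths: instead of shallow-cloning the synced project into the job's private data directory, just copy the tree.

```python
import shutil


def build_project_dir(source_path: str, private_data_dir: str) -> str:
    """Copy the already-synced project tree into the job's workdir.

    Replaces the old `git clone --depth 1 file://...` approach; the
    paths here are illustrative, not the exact ones AWX uses.
    """
    dest = f'{private_data_dir}/project'
    # symlinks=True keeps links as links, mirroring what a clone produced
    shutil.copytree(source_path, dest, symlinks=True)
    return dest
```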
Alan Rominger
998000bfbe Surface correct error from bulk_create on unrecoverable error 2022-08-10 16:16:57 -04:00
Alan Rominger
43a50cc62c Fix event counting in error handling path 2022-08-10 16:16:57 -04:00
Alan Rominger
30f556f845 Further resiliency changes focused on offline database
Make logs from database outage more manageable

Raise exception if update_model never recovers from problem
2022-08-10 16:16:57 -04:00
Alan Rominger
c5985c4c81 Change lazy worker method name and adjust log 2022-08-10 16:12:03 -04:00
Alan Rominger
a9170236e1 Wait 60 seconds before scaling down a worker 2022-08-10 16:12:03 -04:00
Seth Foster
85a5b58d18 Merge pull request #12629 from fosterseth/task_manager_refactor_squashed
Task manager refactor
2022-08-10 16:02:05 -04:00
Seth Foster
6fb3c8daa8 Merge pull request #44 from AlanCoding/one_of_seths_own
Inherit from our own APIView, not rest framework
2022-08-10 15:38:14 -04:00
Alan Rominger
a0103acbef Inherit from our own APIView, not rest framework 2022-08-10 15:31:19 -04:00
Alan Rominger
f7e6a32444 Optimize task manager with debug toolbar, adjust prefetch (#12588) 2022-08-10 10:05:13 -04:00
Alex Corey
7bbc256ff1 Merge pull request #12637 from AlexSCorey/12636-WorkflowApprovalTranslations
Fixes lack of translation on workflow approval list item actions
2022-08-09 15:47:34 -04:00
Alex Corey
64f62d6755 fixes translation issue 2022-08-09 15:30:08 -04:00
Alex Corey
b4cfe868fb Merge pull request #12546 from mabashian/6018-node-alias
Fix bug where node alias is not remaining after changing the template on a wf node
2022-08-09 10:16:46 -04:00
Alex Corey
8d8681580d Merge pull request #12548 from AlexSCorey/12512-UpdateWorkflowApprovalToolbar
Refactors and redesigns workflow approval to improve UX
2022-08-09 10:02:27 -04:00
Alex Corey
8892cf2622 Adds toast to workflow approval on cancel 2022-08-09 09:40:34 -04:00
Alan Rominger
585d3f4e2a Register system again if deleted by another pod
Avoid cases where a missing instance
  would throw an error on startup
  this gives the heartbeat time to register it
2022-08-08 22:36:17 -04:00
Alex Corey
2c9a0444e6 Easier review workflow output (#12459)
* Adds new tab component and positions it properly on screen

* Adds filtering, and navigation to node outputs
2022-08-08 16:13:51 -04:00
Alan Rominger
279cebcef3 Merge pull request #12586 from AlanCoding/connections_graph
Add a graph to show database connections being used
2022-08-08 15:49:20 -04:00
Seth Foster
e6f8852b05 Cache task_impact
task_impact is now a field on the database
It is calculated and set during create_unified_job

set task_impact on .save for adhoc commands
2022-08-05 14:33:47 -04:00
Alan Rominger
d06a3f060d Block sliced workflow jobs on any job type from their JT (#12551) 2022-08-05 14:33:45 -04:00
Seth Foster
957b2b7188 Cache preferred instance groups
When creating unified job, stash the list of pk values from the
instance groups returned from preferred_instance_groups so that the
task management system does not need to call out to this method
repeatedly.

.preferred_instance_groups_cache is the new field
2022-08-05 14:33:28 -04:00
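A rough sketch of the caching described in the commit above; only the field name comes from the commit, and the surrounding job-creation helper is hypothetical:

```python
def create_unified_job(template):
    """Stash instance group pks at creation time so the task manager can
    read a plain list instead of re-running preferred_instance_groups."""
    job = template.create_unified_job_instance()  # hypothetical helper
    job.preferred_instance_groups_cache = [
        ig.pk for ig in template.preferred_instance_groups
    ]
    job.save(update_fields=['preferred_instance_groups_cache'])
    return job
```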
Alan Rominger
b94b3a1e91 [task_manager_refactor] Move approval node expiration logic into queryset (#12502)
Instead of loading all pending Workflow Approvals in the task manager,
  run a query that will only return the expired approvals
  and directly expire everything returned by that query

Cache expires time as a new field in order to simplify WorkflowApproval filter
2022-08-05 14:33:27 -04:00
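A sketch of the queryset-side expiration, assuming the new cached `expires` column and Django-style managers (status and field values are illustrative):

```python
from django.utils import timezone


def expire_overdue_approvals(WorkflowApproval) -> int:
    """Let the database find expired approvals instead of iterating every
    pending WorkflowApproval in Python; returns the number expired."""
    return WorkflowApproval.objects.filter(
        status='pending',
        expires__lte=timezone.now(),
    ).update(status='failed', timed_out=True)
```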
Elijah DeLee
7776a81e22 add job to dependency graph in start task
We always add the job to the graph right before calling start task.
Reduce complexity of proper operation by just doing this in start_task,
because if you call start_task, you need to add it to the dependency
graph
2022-08-05 14:33:26 -04:00
Elijah DeLee
bf89093fac unify call pattern for get_tasks 2022-08-05 14:33:26 -04:00
Elijah DeLee
76d76d13b0 Start pending workflows in TaskManager
we had tried doing this in the WorkflowManager, but we decided that
we want to handle ALL pending jobs and "soft blockers" to jobs with the
TaskManager/DependencyGraph and not duplicate that logic in the
WorkflowManager.
2022-08-05 14:33:26 -04:00
Elijah DeLee
e603c23b40 fix sliced jobs blocking logic in dependency graph
We have to look at the sliced job's unified_job_template_id
Now, task_blocked_by works for sliced jobs too.
2022-08-05 14:33:26 -04:00
Alan Rominger
8af4dd5988 Fix unintended slice job blocking 2022-08-05 14:33:25 -04:00
Seth Foster
0a47d05d26 split schedule_task_manager into 3
each call to schedule_task_manager becomes one of

ScheduleTaskManager
ScheduleDependencyManager
ScheduleWorkflowManager
2022-08-05 14:33:25 -04:00
Seth Foster
b3eb9e0193 pid kill each of the 3 task managers on timeout 2022-08-05 14:33:25 -04:00
Elijah DeLee
b26d2ab0e9 fix looking at wrong id for wf allow_simultaneous 2022-08-05 14:33:25 -04:00
Elijah DeLee
7eb0c7dd28 exit task manager loops early if we are timed out
add settings to define task manager timeout and grace period

This gives us still TASK_MANAGER_TIMEOUT_GRACE_PERIOD amount of time to
get out of the task manager.

Also, apply start task limit in WorkflowManager to starting pending
workflows
2022-08-05 14:33:24 -04:00
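The early-exit pattern described above looks roughly like this (setting values assumed; `start_task` is passed in purely for illustration):

```python
import time

TASK_MANAGER_TIMEOUT = 300              # assumed value, seconds
TASK_MANAGER_TIMEOUT_GRACE_PERIOD = 60  # assumed value, seconds


def run_manager_loop(tasks, start_task):
    """Start as many tasks as possible, but bail out before the hard
    timeout so the manager exits the loop under its own control."""
    deadline = time.monotonic() + TASK_MANAGER_TIMEOUT
    for task in tasks:
        if time.monotonic() >= deadline:
            # the grace period is the remaining window before the
            # dispatcher would kill the manager process by pid
            break
        start_task(task)
```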
Elijah DeLee
236c1df676 fix lint errors 2022-08-05 14:33:24 -04:00
Seth Foster
ff118f2177 Manage pending workflow jobs in Workflow Manager
get_tasks uses UnifiedJob
Additionally, make local overrides run after development settings
2022-08-05 14:31:48 -04:00
Elijah DeLee
29d91da1d2 we can do all the work in one loop
more than saving the loop, we save building the WorkflowDag twice which
makes LOTS of queries!!!

Also, do a bulk update on the WorkflowJobNodes instead of saving in a
loop :fear:
2022-08-05 14:31:48 -04:00
Elijah DeLee
ad08eafb9a add debug views for task manager(s)
implement https://github.com/ansible/awx/issues/12446
in development environment, enable set of views that run
the task manager(s).

Also introduce a setting that disables any calls to schedule()
that do not originate from the debug views when in the development
environment. With guards on both the development-environment check and
the setting, I think we're pretty safe this won't get triggered
unintentionally.

use MODE to determine if we are in devel env

Also, move test for skipping task managers to the tasks file
2022-08-05 14:31:24 -04:00
Seth Foster
431b9370df Split TaskManager into
- DependencyManager spawns dependencies if necessary
- WorkflowManager processes running workflows to see if a new job is
  ready to spawn
- TaskManager starts tasks if unblocked and has execution capacity
2022-08-05 14:29:02 -04:00
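In outline, the split looks like the sketch below (hypothetical, simplified signatures, not the real class bodies):

```python
class DependencyManager:
    """Spawns project/inventory updates a pending job depends on."""

    def schedule(self, pending_jobs):
        ...


class WorkflowManager:
    """Walks running workflows to see if a next node is ready to spawn."""

    def schedule(self, running_workflows):
        ...


class TaskManager:
    """Starts tasks that are unblocked, while execution capacity remains."""

    def schedule(self, pending_tasks):
        ...
```

Each manager can then be scheduled, timed out, and killed independently, which is what the ScheduleTaskManager/ScheduleDependencyManager/ScheduleWorkflowManager split noted earlier in this log enables.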
Alex Corey
3e93eefe62 Merge pull request #12618 from vedaperi/3999-NotificationHelpText
Add Help Text with documentation link to Notification Templates page
2022-08-05 10:41:07 -04:00
John Westcott IV
782667a34e Allow multiple values in SOCIAL_AUTH_SAML_USER_FLAGS_BY_ATTR.is_*_[value|role] settings (#12558) 2022-08-05 10:39:50 -04:00
dependabot[bot]
90524611ea Bump ace-builds from 1.6.0 to 1.8.1 in /awx/ui
Bumps [ace-builds](https://github.com/ajaxorg/ace-builds) from 1.6.0 to 1.8.1.
- [Release notes](https://github.com/ajaxorg/ace-builds/releases)
- [Changelog](https://github.com/ajaxorg/ace-builds/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ajaxorg/ace-builds/compare/v1.6.0...v1.8.1)

---
updated-dependencies:
- dependency-name: ace-builds
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-05 14:39:49 +00:00
dependabot[bot]
583086ae62 Bump dompurify from 2.3.8 to 2.3.10 in /awx/ui
Bumps [dompurify](https://github.com/cure53/DOMPurify) from 2.3.8 to 2.3.10.
- [Release notes](https://github.com/cure53/DOMPurify/releases)
- [Commits](https://github.com/cure53/DOMPurify/compare/2.3.8...2.3.10)

---
updated-dependencies:
- dependency-name: dompurify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-05 14:39:23 +00:00
Alex Corey
19c24cba10 Merge pull request #12602 from ansible/dependabot/npm_and_yarn/awx/ui/devel/prop-types-15.8.1
Bump prop-types from 15.7.2 to 15.8.1 in /awx/ui
2022-08-04 09:56:23 -04:00
Jeff Bradberry
5290c692c1 Merge pull request #12620 from jbradberry/even-narrower-reload
Restrict files that trigger a reload
2022-08-04 09:21:31 -04:00
Jeff Bradberry
90a19057d5 Restrict files that trigger a reload
to files explicitly ending in '.py' that do not start with a dot.
This will avoid Emacs lockfiles from triggering the restart.
2022-08-03 18:23:48 -04:00
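The filter amounts to a one-line predicate; a sketch (the real check lives in the autoreload tooling):

```python
import os


def should_trigger_reload(path: str) -> bool:
    """True only for real Python sources: Emacs lockfiles such as
    `.#views.py` start with a dot and must not restart the dev server."""
    name = os.path.basename(path)
    return name.endswith('.py') and not name.startswith('.')
```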
dependabot[bot]
a05c328081 Bump prop-types from 15.7.2 to 15.8.1 in /awx/ui
Bumps [prop-types](https://github.com/facebook/prop-types) from 15.7.2 to 15.8.1.
- [Release notes](https://github.com/facebook/prop-types/releases)
- [Changelog](https://github.com/facebook/prop-types/blob/main/CHANGELOG.md)
- [Commits](https://github.com/facebook/prop-types/compare/v15.7.2...v15.8.1)

---
updated-dependencies:
- dependency-name: prop-types
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-03 16:09:07 +00:00
Alex Corey
6d9e353a4e Merge pull request #12603 from ansible/dependabot/npm_and_yarn/awx/ui/devel/rrule-2.7.1
Bump rrule from 2.7.0 to 2.7.1 in /awx/ui
2022-08-03 12:06:51 -04:00
Alex Corey
82c062eab9 Merge pull request #12605 from ansible/dependabot/npm_and_yarn/awx/ui/devel/luxon-3.0.1
Bump luxon from 2.4.0 to 3.0.1 in /awx/ui
2022-08-03 12:06:32 -04:00
vedaperi
c0d59801d5 Add help text to Notification Templates form and detail with link to documentation 2022-08-02 18:15:56 -07:00
Alex Corey
93ea8a0919 Adds toast to detail view and fixes non-disabled action button on list view 2022-08-02 17:18:29 -04:00
Rebeccah Hunter
67f1ab2237 Merge pull request #12609 from john-westcott-iv/oracle_awx_triage_reply
Adding triage response for inquiries around Oracle's version of AWX
2022-08-01 13:53:02 -04:00
John Westcott IV
71be8fadcb Adding GitHub check to ensure PRs have the proper X/Y/Z flags (#12577)
* Adding GitHub check to ensure PRs have the proper X/Y/Z flags
* Changing the Z release wording
2022-08-01 12:59:01 -04:00
John Westcott IV
c41becec13 Adding triage response for inquiries around Oracle's version of AWX 2022-08-01 12:00:48 -04:00
mabashian
6d0d8e57a4 Fix bug where node alias is not remaining after changing the template on a wf node 2022-08-01 11:28:50 -04:00
Shane McDonald
6446b627ad Merge pull request #12608 from shanemcd/fix-k8s-dev-env
Fix Kubernetes dev environment + update docs
2022-08-01 11:11:45 -04:00
Shane McDonald
fcebd188a6 Fix Kubernetes dev environment + update docs 2022-08-01 10:45:10 -04:00
Alex Corey
1fca505b61 Refactors and redesigns workflow approval to improve UX 2022-08-01 09:59:53 -04:00
dependabot[bot]
a0e9c30b4a Bump luxon from 2.4.0 to 3.0.1 in /awx/ui
Bumps [luxon](https://github.com/moment/luxon) from 2.4.0 to 3.0.1.
- [Release notes](https://github.com/moment/luxon/releases)
- [Changelog](https://github.com/moment/luxon/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moment/luxon/compare/2.4.0...3.0.1)

---
updated-dependencies:
- dependency-name: luxon
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-01 08:11:23 +00:00
dependabot[bot]
bc94dc0257 Bump rrule from 2.7.0 to 2.7.1 in /awx/ui
Bumps [rrule](https://github.com/jakubroztocil/rrule) from 2.7.0 to 2.7.1.
- [Release notes](https://github.com/jakubroztocil/rrule/releases)
- [Changelog](https://github.com/jakubroztocil/rrule/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jakubroztocil/rrule/compare/v2.7.0...v2.7.1)

---
updated-dependencies:
- dependency-name: rrule
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-01 08:10:38 +00:00
Shane McDonald
65771b7629 Merge pull request #12562 from shanemcd/auto-install-setuptools-scm
Automatically install setuptools-scm in script called from Makefile
2022-07-31 17:17:26 -04:00
Keith Grant
86a67abbce Merge pull request #12531 from jtmelhorn/devel
[#12478] Change Inventory "Status" column header to "Sync Status"
2022-07-29 15:50:08 -07:00
Keith Grant
d555093325 Fix job output follow mode & scrolling (#12555)
* reworks/fixes follow mode

* reduces batch size for better job output perceived performance

* improves job output scroll button behavior
2022-07-28 15:26:25 -04:00
John Westcott IV
95a099acc5 Adding remove_superuser and remove_system_auditors to the SAML user attribute map (#12522) 2022-07-28 14:38:16 -04:00
John Westcott IV
d1fc2702ec Adding subscriptions module and adding pool_id to license module (#12560) 2022-07-28 12:16:47 -04:00
Alan Rominger
3aa8320fc7 Add a graph to show database connections being used 2022-07-28 11:52:36 -04:00
John Westcott IV
734899228b Updating CONTRIBUTING guide (#12565) 2022-07-27 09:59:09 -04:00
Rick Elrod
87f729c642 [FieldLookupBackend] limit iexact to string fields (#12569)
Change:
- Case-insensitive search only makes sense on strings, so check the
  type of the field we are searching and ensure it is a string field
  (TextField, CharField, or some subclass thereof).

- This prevents a 500 error when a user uses iexact on, e.g., an
  integer field. Now, a 400 Bad Request is returned instead.

Test Plan:
- Added simple unit tests for iexact

Tickets:
- Fixes #9222

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-07-26 12:46:50 -05:00
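A hedged sketch of the guard described above (AWX's FieldLookupBackend is more involved; this just shows the type check that turns the 500 into a 400):

```python
from django.db import models


def validate_iexact_field(field) -> None:
    """Raise ValueError -- surfaced to the client as 400 Bad Request --
    when `iexact` is attempted on a non-string field such as an integer."""
    if not isinstance(field, (models.CharField, models.TextField)):
        raise ValueError(
            f'{field.name} is not a text field and does not support iexact'
        )
```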
John Westcott IV
62fc3994fb Modifying SAML adapter to not auto-add default galaxy creds to orgs on login (#12504)
* Modifying SAML adapter to not auto-add default galaxy creds to orgs on login

* Adding test, fixing old tests and moving add_default_galaxy_credential to pipeline
2022-07-25 17:16:22 -03:00
Shane McDonald
0d097964be Automatically install setuptools-scm in script called from Makefile 2022-07-22 12:59:39 -04:00
Christian Adams
9f8b3948e1 Merge pull request #12147 from rooftopcellist/bump-receptor-1.2.3
Bump Receptorctl to 1.2.3
2022-07-21 11:45:27 -04:00
Jessica Steurer
1ce8240192 Merge pull request #12528 from vedaperi/12436-RemoveUpdateOnProjectUpdate
Remove update_on_project_update
2022-07-20 16:14:23 -03:00
Jeff Bradberry
1bcfc8f28e Merge pull request #12544 from jbradberry/awxkit-fix-no-content
Suppress 204 No Content results causing an error during import
2022-07-20 10:48:02 -04:00
vedaperi
71925de902 Enhanced detail component (#12432)
* Enhanced detail component to handle cases with no values, and refactored components that use detail component.

* Add optional chaining operators where necessary to pass test cases

* add test cases to test suites of modified files

Co-authored-by: Veda Periwal <vperiwal@vperiwal-mac.attlocal.net>
2022-07-19 17:17:27 -04:00
Aditya Mulik
54057f1c80 Merge pull request #12467 from adityamulik/localization_scripts
Localization Scripts for AWX UI & API
2022-07-19 16:40:10 -04:00
Aditya Mulik
ae388d943d Merge pull request #12541 from adityamulik/translations_updated_2022-07-18_20_51_59
Pushing updated strings for localization
2022-07-19 16:39:44 -04:00
Alan Rominger
2d310dc4e5 Optimize object creation by getting fewer empty relationships (#12508)
This optimizes the ActivityStreamSerializer by only getting many-to-many
  relationships that are speculatively non-empty
  based on information we have in other fields

We run this every time we create an object as an on_commit action
  so it is expected this will have a major impact on response times for launching jobs
2022-07-19 14:27:51 -04:00
Jeff Bradberry
fe1a767f4f Suppress 204 No Content results causing an error during import 2022-07-19 12:25:24 -04:00
adityamulik
8c6581d80a Pushing updated strings for localization 2022-07-18 20:52:59 -04:00
Jessica Steurer
33e445f4f6 Merge pull request #12489 from kialam/vendor-d3.js-webworker
Remove external script call to D3.js.
2022-07-18 19:10:50 -03:00
Kia Lam
9bcb60d9e0 Remove d3 csp declaration. 2022-07-18 08:57:03 -07:00
Kia Lam
40109d58c7 Host d3 files needed for webworker. 2022-07-18 08:57:02 -07:00
Kia Lam
2ef3f5f9e8 Remove external script call to D3.js. 2022-07-18 08:57:02 -07:00
John Westcott IV
389c4a3180 Adding fields to job_metadata for workflows and approval nodes (#12255) 2022-07-18 16:53:49 +02:00
Justin Melhorn
bee48671cd [#12478] Change Inventory "Status" column header to "Sync Status"
Signed-off-by: Justin Melhorn <jtmelhorn@gmail.com>
2022-07-17 16:38:24 -04:00
Veda Periwal
21f551f48a Remove update_on_project_update from inventory sources form and corresponding files 2022-07-15 11:18:16 -07:00
Alex Corey
cbb019ed09 Merge pull request #12510 from AlexSCorey/11822-JobOutputDocumentation-Overview
Adds Overview of job output with some images to help.
2022-07-15 10:52:47 -04:00
Alex Corey
bf5dfdaba7 Adds Overview of job output with some images to help. 2022-07-15 10:32:41 -04:00
Jessica Steurer
0f7f8af9b8 Merge pull request #12346 from john-westcott-iv/dependabot_fixes
Updating pyjwt per dependabot
2022-07-15 10:42:24 -03:00
Sarabraj Singh
0237402390 Merge pull request #12509 from sarabrajsingh/docs/awx-release-docs-refactoring
buffed docs for awx release and canonical triage responses
2022-07-15 08:21:58 -04:00
Hao Liu
84d7fa882d Merge pull request #12513 from TheRealHaoLiu/fix-workflow-job-template-export
fix WorkflowJobTemplate export
2022-07-14 14:44:58 -04:00
Sarabraj Singh
cd2fae3471 buffed docs for AWX Release and canonical Triage responses 2022-07-14 14:13:18 -04:00
John Westcott IV
8be64145f9 Updating pyjwt per dependabot 2022-07-14 08:35:46 -04:00
djyasin
23d28fb4c8 Merge pull request #12457 from djyasin/feature/bu-metrics-added-forks-in-unified-jobs-table
Added forks to unified jobs table.
2022-07-13 11:33:19 -04:00
Lila
aeffd6f393 Bumped up version number of the collector. 2022-07-13 09:59:41 -04:00
djyasin
ab6b4bad03 Merge branch 'ansible:devel' into devel 2022-07-13 09:53:22 -04:00
Hao Liu
769c253ac2 fix WorkflowJobTemplate export where WorkflowApprovalTemplate is not properly exported
fixes https://github.com/ansible/awx/issues/7946
- added WorkflowApprovalTemplate page type to allow URL registration
- added resource regex that associates the resource URL with WorkflowApprovalTemplate
- registered the new resource regex with the WorkflowApprovalTemplate page type
- modified `DEPENDENT_EXPORT` handling (as insisted upon by @jbradberry)
- added special case handling for WorkflowApprovalTemplate due to its unique nature

unique nature of WorkflowApprovalTemplate
- when exporting a WorkflowJobTemplate with an approval node, the WorkflowJobTemplateNode needs to contain a related "create_approval_template"; the POST data for "create_approval_template" needs to come from the "workflow_approval_template"
- during the export of a WorkflowJobTemplateNode that is an approval node, we need to get the data from "workflow_approval_template" and use it to populate "create_approval_template"

Co-Authored-By: Jeff Bradberry <685957+jbradberry@users.noreply.github.com>
Signed-off-by: Hao Liu <haoli@redhat.com>
2022-07-12 19:48:02 -04:00
Michael Abashian
8031b3d402 Translate contents of Hosts Automated field as a single string (#12480)
* Translate contents of Hosts Automated field as a single string

* Adds unit test case for hiding Hosts automated detail when no value is present
2022-07-12 15:24:33 -04:00
Sarabraj Singh
bd93ac7edd Merge pull request #12505 from sarabrajsingh/bugfix/add-setuptools-scm-dependency-to-workflow
added setuptools-scm dependency to promote.yml workflow
2022-07-12 10:21:10 -04:00
John Westcott IV
37ff9913d3 Adding GOOGLE_APPLICATION_CREDENTIALS env var (#12389)
* Adding GOOGLE_APPLICATION_CREDENTIALS env var
* Updating tests
2022-07-12 08:51:02 -04:00
Sarabraj Singh
9cb44a7e52 added setuptools-scm dependency to promote.yml workflow 2022-07-11 17:10:29 -04:00
John Westcott IV
6279295541 Updating workflow job template collection test (#12468)
Adding additional use case

Fixing error with workflow calling itself

Adding better cleanup of assets created as part of the test
2022-07-11 17:07:07 -03:00
John Westcott IV
de17cff39c Modified triage replies (#12473)
Split the no-progress reply into issue and PR versions

added community.general standard response
2022-07-11 12:43:30 -04:00
Alex Corey
22ca49e673 Merge pull request #12493 from AlexSCorey/bumpCodeMirror
Bump code mirror
2022-07-11 09:43:54 -04:00
Tom Page
008a4b4d30 Fix workflow job template webhook credential bug - #12324 (#12325)
Signed-off-by: tompage1994@hotmail.co.uk <tpage@redhat.com>
2022-07-11 09:13:44 -03:00
Alex Corey
8d4089c7f3 Bumps code mirror and adds license files 2022-07-08 15:09:54 -04:00
vedaperi
e296d0adad Add Help Text with documentation link to Schedules page (#12448)
* Added help text to schedule form and detail with link to documentation

* Added test cases for help text in schedule form and detail

* Add help text to schedule form and detail with link to documentation

Co-authored-by: Veda Periwal <vperiwal@vperiwal-mac.attlocal.net>
2022-07-08 15:06:50 -04:00
Aditya Mulik
df38650aee Localization Scripts for AWX UI & API 2022-07-08 11:44:56 -04:00
Alex Corey
401b30b3ed Merge pull request #12451 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/patternfly-4.202.1
Bump @patternfly/patternfly from 4.196.7 to 4.202.1 in /awx/ui
2022-07-08 08:13:30 -04:00
Alex Corey
20cc54694c Merge pull request #12454 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-icons-4.75.1
Bump @patternfly/react-icons from 4.49.19 to 4.75.1 in /awx/ui
2022-07-08 08:12:58 -04:00
dependabot[bot]
e6ec0952fb Bump @patternfly/patternfly from 4.196.7 to 4.202.1 in /awx/ui
Bumps [@patternfly/patternfly](https://github.com/patternfly/patternfly) from 4.196.7 to 4.202.1.
- [Release notes](https://github.com/patternfly/patternfly/releases)
- [Changelog](https://github.com/patternfly/patternfly/blob/main/RELEASE-NOTES.md)
- [Commits](https://github.com/patternfly/patternfly/compare/prerelease-v4.196.7...prerelease-v4.202.1)

---
updated-dependencies:
- dependency-name: "@patternfly/patternfly"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-06 20:03:52 +00:00
dependabot[bot]
db1dec3a98 Bump @patternfly/react-icons from 4.49.19 to 4.75.1 in /awx/ui
Bumps [@patternfly/react-icons](https://github.com/patternfly/patternfly-react) from 4.49.19 to 4.75.1.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-icons@4.49.19...@patternfly/react-icons@4.75.1)

---
updated-dependencies:
- dependency-name: "@patternfly/react-icons"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-06 20:03:40 +00:00
Alex Corey
1853d3850e Merge pull request #12450 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-table-4.93.1
Bump @patternfly/react-table from 4.83.1 to 4.93.1 in /awx/ui
2022-07-06 16:02:18 -04:00
Andrea Decorte
a8e3c37bb9 Fix notification doc for Workflow Job Template module
Signed-off-by: Andrea Decorte <adecorte@redhat.com>
2022-07-04 09:34:58 +02:00
Lila
1e57c84383 Added forks to unified jobs table.
Co-authored-by: sarabrajsingh <singh.sarabraj@gmail.com>
2022-07-01 10:30:48 -04:00
dependabot[bot]
3cf120c6a7 Bump @patternfly/react-table from 4.83.1 to 4.93.1 in /awx/ui
Bumps [@patternfly/react-table](https://github.com/patternfly/patternfly-react) from 4.83.1 to 4.93.1.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-table@4.83.1...@patternfly/react-table@4.93.1)

---
updated-dependencies:
- dependency-name: "@patternfly/react-table"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 08:06:59 +00:00
Alan Rominger
fd671ecc9d Give specific messages if job was killed due to SIGTERM or SIGKILL (#12435)
* Reap jobs on dispatcher startup to increase clarity, replace existing reaping logic

* Exit jobs if receiving SIGTERM signal

* Fix unwanted reaping on shutdown, let subprocess close out

* Add some sanity tests for signal module

* Add a log for an unhandled dispatcher error

* Refine wording of error messages

Co-authored-by: Elijah DeLee <kdelee@redhat.com>
2022-06-30 13:20:08 -04:00
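A minimal sketch of the two halves of the change above (message strings and names assumed): SIGTERM can be caught and turned into a clean exit, while SIGKILL cannot, so it has to be inferred from the subprocess's return code after the fact.

```python
import signal


class SignalExit(Exception):
    """Raised from the handler so the job unwinds and reports cleanly."""


def install_sigterm_handler():
    def on_sigterm(signum, frame):
        raise SignalExit('Task was canceled due to receiving SIGTERM')

    signal.signal(signal.SIGTERM, on_sigterm)


def explain_return_code(rc: int) -> str:
    """Translate a subprocess return code into a job_explanation string."""
    if rc == -signal.SIGKILL:  # negative rc means "killed by signal N"
        return 'Job was killed by SIGKILL, possibly the OOM killer'
    return f'Job exited with return code {rc}'
```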
Shane McDonald
a0d5f1fb03 Merge pull request #12428 from djyasin/updating_setuppy
Updated setup.py --version to python3 -m setuptools_scm.
2022-06-30 12:17:54 -04:00
Alex Corey
ff882a322b Merge pull request #12412 from AlexSCorey/11994-FailedJobErrorMessage
Adds a failure message to job output when job failed and no events exist
2022-06-29 11:40:44 -04:00
Tom Page
b70231f7d0 Allow modification of schedule if there are two of the same name (#12407) 2022-06-28 20:23:54 -03:00
Alex Corey
93d1aa0a9d Adds a failure message to job output when job failed and no events exist. 2022-06-28 18:30:37 -04:00
Alex Corey
c586f8bbc6 Removes references to Ansible Tower in favor of Ansible Controller (#12422) 2022-06-28 14:35:32 -04:00
Alex Corey
26912a06d1 Merge pull request #12424 from AlexSCorey/11433-UpdateLaunchButtonTest
Updates irrelevant test
2022-06-28 14:31:26 -04:00
Alex Corey
218a3d333b updates test 2022-06-28 14:14:12 -04:00
Seth Foster
d2013bd416 Merge pull request #12366 from fosterseth/remove_update_on_project_update
Remove deprecated field update_on_project_update
2022-06-28 13:15:57 -04:00
Shane McDonald
6a3f9690b0 Remove setup.py entirely 2022-06-27 14:15:32 -04:00
Jeff Bradberry
d59b6f834c Merge pull request #12431 from jbradberry/fix-ugettext-deprecation
Fix a ugettext deprecation that snuck back in
2022-06-27 13:58:07 -04:00
Shane McDonald
cbea36745e Transition from setup.py to setup.cfg 2022-06-27 13:30:01 -04:00
Jeff Bradberry
ae7be525e1 Fix a ugettext deprecation that snuck back in
at some point after the Django 3.2 upgrade.
2022-06-27 13:27:35 -04:00
jainnikhil30
5062ce1e61 add database connection to the metrics endpoint (#12427)
* add database connection to the metrics endpoint

* bump the counts collector version to 1.2

* check for postgresql as the database so as to not break the tests
2022-06-27 09:37:23 -04:00
Alex Corey
566665ee8c Merge pull request #12417 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-core-4.221.3
Bump @patternfly/react-core from 4.214.1 to 4.221.3 in /awx/ui
2022-06-27 09:36:58 -04:00
Alex Corey
96423af160 Merge pull request #12419 from ansible/dependabot/npm_and_yarn/awx/ui/devel/react-router-dom-5.3.3
Bump react-router-dom from 5.2.0 to 5.3.3 in /awx/ui
2022-06-27 09:36:22 -04:00
Alex Corey
a01bef8d2c Merge pull request #12420 from ansible/dependabot/npm_and_yarn/awx/ui/devel/lingui/react-3.14.0
Bump @lingui/react from 3.13.3 to 3.14.0 in /awx/ui
2022-06-27 09:35:40 -04:00
Seth Foster
0522233892 remove update_on_project_update from InventorySource 2022-06-24 15:27:08 -04:00
Lila
63ea6bb5b3 Updated setup.py --version to python3 -m setuptools_scm. 2022-06-24 10:22:56 -04:00
Sarah Akus
c2715d7c29 Merge pull request #12378 from john-westcott-iv/winrm_debug_5925
Making verbosity list and options a constant and adding WinRM debug
2022-06-24 09:06:14 -04:00
Alan Rominger
783b744bdb Pass combined artifacts from nested workflows into downstream nodes (#12223)
* Track combined artifacts on workflow jobs

* Avoid schema change for passing nested workflow artifacts

* Basic support for nested workflow artifacts, add test

* Forgot that only does not work with polymorphic

* Remove incorrect field

* Consolidate logic and prevent recursion with UJ artifacts method

* Stop trying to do precedence by status, filter for obvious ones

* Review comments about sets

* Fix up bug with convergence node paths and artifacts
2022-06-23 16:54:53 -03:00
Alex Corey
f7982a0d64 Merge pull request #12421 from AlexSCorey/updateAxios
Bumps Axios and Adds license files
2022-06-23 13:07:28 -04:00
Sarabraj Singh
2147ac226e Merge pull request #12408 from sarabrajsingh/feature/new-awx-cli-import-export-error-codes
[new] bubble up an error code when something goes wrong with import/export
2022-06-23 10:58:14 -04:00
Alex Corey
6cc22786bc Adds license files 2022-06-23 09:26:34 -04:00
dependabot[bot]
861a9f581e Bump @lingui/react from 3.13.3 to 3.14.0 in /awx/ui
Bumps [@lingui/react](https://github.com/lingui/js-lingui) from 3.13.3 to 3.14.0.
- [Release notes](https://github.com/lingui/js-lingui/releases)
- [Changelog](https://github.com/lingui/js-lingui/blob/main/CHANGELOG.md)
- [Commits](https://github.com/lingui/js-lingui/compare/v3.13.3...v3.14.0)

---
updated-dependencies:
- dependency-name: "@lingui/react"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-23 12:34:58 +00:00
dependabot[bot]
e57a8183ba Bump react-router-dom from 5.2.0 to 5.3.3 in /awx/ui
Bumps [react-router-dom](https://github.com/remix-run/react-router/tree/HEAD/packages/react-router-dom) from 5.2.0 to 5.3.3.
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Commits](https://github.com/remix-run/react-router/commits/v5.3.3/packages/react-router-dom)

---
updated-dependencies:
- dependency-name: react-router-dom
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-23 12:34:20 +00:00
dependabot[bot]
8a7163ffad Bump @patternfly/react-core from 4.214.1 to 4.221.3 in /awx/ui
Bumps [@patternfly/react-core](https://github.com/patternfly/patternfly-react) from 4.214.1 to 4.221.3.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-core@4.214.1...@patternfly/react-core@4.221.3)

---
updated-dependencies:
- dependency-name: "@patternfly/react-core"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-23 12:32:57 +00:00
Alex Corey
439b351c95 Merge pull request #12392 from nixocio/update_bot_user
Update user dependabot
2022-06-23 08:31:16 -04:00
Alex Corey
14afab918e Creates a verbosity select dropdown and moves the options constant into the same file 2022-06-23 08:28:37 -04:00
Alex Corey
ef8d4e73ae Creates a verbosity select dropdown and moves the options constant into the same file 2022-06-22 14:04:12 -04:00
John Westcott IV
61f483ae32 Fixing UI general test 2022-06-22 14:04:12 -04:00
John Westcott IV
21bed7473d Making verbosity list and options a constant and adding WinRM debug to everything 2022-06-22 14:04:11 -04:00
John Westcott IV
31d8ddcf84 Updating release docs (#12403)
Adding standard subject line to triage_replies.md

Removing PR commit generated change log in favor of github auto-commit log

Updating some images

Adding AWX matrix channel to IRC notifications

Adding references between operator and AWX releases
2022-06-22 12:36:54 -04:00
Seth Foster
9419270897 Merge pull request #12393 from fosterseth/subsystem_metrics_delete_redis_keys
Subsystem metrics reset_values should remove all redis keys
2022-06-22 11:34:20 -04:00
Alex Corey
f755d93a58 Merge pull request #12373 from AlexSCorey/updateJS-Yaml
Updates js-yaml to 4.x and updates files.
2022-06-22 11:25:52 -04:00
Sarabraj Singh
05df2ebad2 bubble up an error code when something goes wrong with import/export
(cherry picked from commit babd6f0975)
2022-06-22 10:29:01 -04:00
Jeff Bradberry
b44442c460 Merge pull request #12351 from AlexSCorey/5673-t-importExportSchedules
Adds import export to awx cli for schedules as a top level object
2022-06-22 10:13:56 -04:00
Shane McDonald
989b389ba4 Merge pull request #12397 from sean-m-sullivan/awx_license_delete
add state to awx license module
2022-06-22 09:20:29 -04:00
Sarabraj Singh
5bd4aade0e Merge pull request #12404 from ansible/revert-12335-feature/awx-cli-import-export-error-codes
Revert "import/export error codes when something bad happens"
2022-06-21 22:01:46 -04:00
Jessica Steurer
470910b612 Merge pull request #12309 from jbradberry/cli-multiple-extra-vars
Allow for multiple --extra_vars or --variables flags in awx-cli
2022-06-21 19:34:25 -03:00
Sarabraj Singh
dbb81551c8 Revert "import/export error codes when something bad happens" 2022-06-21 17:36:21 -04:00
Sarabraj Singh
f7c5cb2979 Merge pull request #12335 from sarabrajsingh/feature/awx-cli-import-export-error-codes
import/export error codes when something bad happens
2022-06-21 16:49:03 -04:00
Sarabraj Singh
babd6f0975 bubble up an error code when something goes wrong with import/export 2022-06-21 15:53:59 -04:00
sean-m-sullivan
7bcceb7e98 add state to awx license module 2022-06-21 13:07:16 -04:00
Seth Foster
c92619a2dc Subsystem metrics reset_values should remove all redis keys 2022-06-16 16:54:37 -04:00
Alan Rominger
923cc671db Merge pull request #12391 from AlanCoding/compose_graphs
Do the grafana thing in docker-compose templating itself
2022-06-16 16:23:36 -04:00
Alan Rominger
db105c21e4 Set default false values 2022-06-16 15:46:42 -04:00
Alan Rominger
372aa36207 Make the prometheus config file ignored by git 2022-06-16 15:42:10 -04:00
Alan Rominger
173318764b Remove existing yml file for prometheus 2022-06-16 15:37:18 -04:00
Alan Rominger
1dd535a859 Remove old way of doing grafana graphs 2022-06-16 15:31:45 -04:00
nixocio
e7d37b26f3 Update user dependabot
Update user dependabot
2022-06-16 15:31:39 -04:00
Alan Rominger
f4ef7d6927 Add volumes to the clean command 2022-06-16 14:03:22 -04:00
Elijah DeLee
7cbe112e4e possible work around for 500 on /api/v2/metrics (#12376)
we've observed this in development and some users have reported experiencing 500's on /api/v2/metrics because of a key error here where a metric is missing from a certain instance
2022-06-16 13:15:25 -04:00
Alan Rominger
c441db2aab docs wording edits and depends_on 2022-06-16 12:07:26 -04:00
Alan Rominger
fb292d9706 Move visualization containers into docker-compose 2022-06-16 10:25:02 -04:00
Sarah Akus
35a5f93182 Merge pull request #12323 from AlexSCorey/5857-t-SanitizeLoginHTML
Removes sanitize html in favor of DOMPurify library
2022-06-16 09:59:21 -04:00
Jessica Steurer
116dc0c480 Merge pull request #12340 from john-westcott-iv/shedule_timezone_12255
Add documentation around schedule timezone change
2022-06-15 15:34:49 -03:00
Alex Corey
b87ba1c53d Merge pull request #12382 from nixocio/ui_close_css
Update css var
2022-06-15 11:56:47 -04:00
Alex Corey
59691b71bb Merge pull request #12360 from nixocio/ui_issue_5012
Add column to display resource related to a schedule
2022-06-15 11:53:33 -04:00
Alex Corey
cc0bb3e401 Merge pull request #12365 from ansible/dependabot/npm_and_yarn/awx/ui/devel/ace-builds-1.6.0
Bump ace-builds from 1.5.1 to 1.6.0 in /awx/ui
2022-06-15 11:46:53 -04:00
nixocio
7ef90bd9f4 Update css var
Update css var
2022-06-15 11:37:04 -04:00
John Westcott IV
f820c49b82 Fixing typo in ISSUE_TEMPLATE.md (#12381) 2022-06-15 10:34:22 -04:00
Jessica Steurer
ac62d86f2a Merge pull request #12361 from kialam/refresh-data-lookup-modal
Allow lookup modals to refresh when opened.
2022-06-15 09:40:40 -03:00
John Westcott IV
b9e67e7972 Allowing blank issues with a template for testing purposes only (#12377) 2022-06-14 17:17:07 -04:00
Jeff Bradberry
48a2ebd48c Merge pull request #12271 from HampusLundqvist/gitlab-webhooks-fixes-#12268
return event_status on push, tag push, and merge gitlab webhook events
2022-06-14 17:12:27 -04:00
Sarah Akus
ee13ddd87d Merge pull request #12332 from nixocio/ui_issue_8097
Add typeahead for single choice surveys
2022-06-14 15:20:38 -04:00
Seth Foster
3fcf7429a3 Merge pull request #12246 from fosterseth/fix_haproxy_startup_error
use haproxy 2.3 with maxconn set to avoid startup failures
2022-06-14 14:41:14 -04:00
Sarah Akus
51a8790d56 Merge pull request #12348 from nixocio/ui_issue_111987
Update project status to reflect project sync related to job template
2022-06-14 14:41:01 -04:00
Jessica Steurer
c231e4d05e Merge pull request #12370 from nixocio/ui_issue_11795
Add column org to template list
2022-06-14 14:28:56 -03:00
Seth Foster
987e5a084d use haproxy 2.3 with maxconn set to avoid startup failures 2022-06-14 13:09:40 -04:00
Seth Foster
70ac7b2920 Merge pull request #12352 from fosterseth/docs_subsystem_metrics
Add docs for subsystem metrics
2022-06-14 13:05:21 -04:00
Alex Corey
bda335cb19 Updates js-yaml to 4.x and updates files. 2022-06-14 12:24:40 -04:00
Seth Foster
30c060cb27 Merge pull request #12235 from fosterseth/subsystem_metrics_task_manager
Subsystem metrics for task manager
2022-06-14 12:02:54 -04:00
Kersom
9b0a2b0b76 Merge pull request #12312 from nixocio/ui_issue_11167_rebased
Update logout/login redirect for different users
2022-06-14 11:55:05 -04:00
Seth Foster
2f82b75748 Add subsystem metrics for task manager 2022-06-14 11:00:11 -04:00
Sarah Akus
84fcd2ff00 Merge pull request #12363 from nixocio/ui_issue_5195
Modify position of tooltip for management job list
2022-06-14 10:29:49 -04:00
Jeff Bradberry
3bc0c53e37 Merge pull request #12368 from jbradberry/narrower-autoreload
Narrow down the inotifywait criteria for reloading the dev environment
2022-06-14 10:13:41 -04:00
Alex Corey
bc2dbcfce8 Merge pull request #12344 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/patternfly-4.196.7
Bump @patternfly/patternfly from 4.194.4 to 4.196.7 in /awx/ui
2022-06-13 16:58:48 -04:00
nixocio
876edf54a3 Modify position of tooltip for management job list
Modify position of tooltip for management job list. Also, remove
duplicated tooltip.
2022-06-13 16:42:43 -04:00
nixocio
b31bf8fab1 Add column org to template list
Add column org to template list

See: https://github.com/ansible/awx/issues/11795
2022-06-13 16:37:32 -04:00
Jeff Bradberry
e8b2998578 Narrow down the inotifywait criteria for reloading the dev environment
- listen specifically within awx/awx, so that changes in awxkit or
  awx_collection don't trigger spurious reloads
- expand the exclude pattern to ignore the test directories
2022-06-13 16:08:20 -04:00
nixocio
8a92a01652 Add column to display resource related to a schedule
Add column to display what resource is related to a schedule

See: https://github.com/ansible/awx/issues/5012
2022-06-13 14:28:44 -04:00
Seth Foster
705f86f8cf Merge pull request #12287 from fosterseth/fix_children_summary_not_tree
detect if job events are tree-like and collapsible
2022-06-13 14:27:39 -04:00
Alex Corey
9ab6a6d57e Merge pull request #11429 from akelling/patch-1
Update README.md
2022-06-13 14:19:16 -04:00
Sarah Akus
791eb4c1e1 Merge pull request #12349 from nixocio/ui_issue_12092
Add loading state when saving a visualizer
2022-06-13 14:06:34 -04:00
dependabot[bot]
870ca29388 Bump ace-builds from 1.5.1 to 1.6.0 in /awx/ui
Bumps [ace-builds](https://github.com/ajaxorg/ace-builds) from 1.5.1 to 1.6.0.
- [Release notes](https://github.com/ajaxorg/ace-builds/releases)
- [Changelog](https://github.com/ajaxorg/ace-builds/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ajaxorg/ace-builds/compare/v1.5.1...v1.6.0)

---
updated-dependencies:
- dependency-name: ace-builds
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-13 18:00:10 +00:00
Kersom
816518cfab Merge pull request #12302 from ansible/dependabot/npm_and_yarn/awx/ui/devel/react-ace-10.1.0
Bump react-ace from 9.4.0 to 10.1.0 in /awx/ui
2022-06-13 13:58:55 -04:00
Alex Corey
9e981583a6 Merge branch 'devel' into patch-1 2022-06-13 13:55:02 -04:00
Alex Corey
d6fb8d6cd7 Update tools/docker-compose/README.md
Co-authored-by: Shane McDonald <me@shanemcd.com>
2022-06-13 13:53:48 -04:00
Sarah Akus
7dbf5f7138 Merge pull request #12358 from nixocio/ui_issue_5883
Hide add access button based on the user profile for credentials
2022-06-13 13:38:36 -04:00
dependabot[bot]
aaec9487e6 Bump react-ace from 9.4.0 to 10.1.0 in /awx/ui
Bumps [react-ace](https://github.com/securingsincity/react-ace) from 9.4.0 to 10.1.0.
- [Release notes](https://github.com/securingsincity/react-ace/releases)
- [Changelog](https://github.com/securingsincity/react-ace/blob/main/CHANGELOG.md)
- [Commits](https://github.com/securingsincity/react-ace/compare/v9.4.0...v10.1.0)

---
updated-dependencies:
- dependency-name: react-ace
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-13 17:37:54 +00:00
Kia Lam
96fa881df1 Fix unit test. 2022-06-13 08:59:31 -07:00
Seth Foster
b7057fdc3e Add docs for subsystem metrics 2022-06-13 11:49:56 -04:00
nixocio
2679c99cad Add loading state when saving a visualizer
Add loading state when saving a visualizer

See: https://github.com/ansible/awx/issues/12092
2022-06-13 10:47:27 -04:00
Jessica Steurer
ea3a8d4912 Merge pull request #12306 from ansible/10961-webhook-notification-does-not-allow-for-use-of-jinja-statements
Duplication of PR for Jinja2 rendering
2022-06-13 09:38:42 -03:00
John Westcott IV
63d9cd7b57 .github folder maintenance (#12327)
* Removing old awxbot files
* Removing security bug report as GitHub now shows the security policy from /SECURITY.md
* Changing feature_request from md to yml
* Adding additional options to bug report components and install method
* Removing old ISSUE_TEMPLATE.md
* Changing issue type and adding additional components
* Removing auto-generated change log
* Adding awx_collection and cli components
* Changing content search pattern for type labels
* Changing from collection to awx_collection tag and adding dependencies tag
* Adding unicode bug to bug report to match feature unicode character
* Changing bug to bug or docs
* Remove docker on * and boot2docker in favor of docker development environment
* Create top level issue with: CoC, Enterprise, Top level help
* Remove old CODEOWNERS file
2022-06-13 07:44:15 -04:00
Kia Lam
b692bbaa12 Allow lookup modals to refresh when opened. 2022-06-10 14:44:53 -07:00
John Westcott IV
186af73e5d Fixing slashes for copy/paste of links (#12359) 2022-06-10 14:29:12 -04:00
John Westcott IV
fddf292d47 Additional changes from review 2022-06-10 10:26:24 -04:00
John Westcott IV
1180634ba7 Fixing UI checks 2022-06-10 10:26:23 -04:00
John Westcott IV
9abdafe101 Removing read_only as its the default setting 2022-06-10 10:26:23 -04:00
John Westcott IV
48ebcd5918 Fixing assertion of schedule_zoneinfo 2022-06-10 10:26:23 -04:00
John Westcott IV
fe6d0ce9cc Adding help text to until and timezone fields 2022-06-10 10:26:23 -04:00
John Westcott IV
62dabcae63 Removing unneeded function 2022-06-10 10:26:23 -04:00
Keith J. Grant
0b63af8d4d add schedules timezone link warning to UI 2022-06-10 10:26:23 -04:00
John Westcott IV
b05ebe9623 Starting UI change to warn if linked TZ is selected 2022-06-10 10:26:23 -04:00
John Westcott IV
c836fafb61 modifying schedules API to return a list of links 2022-06-10 10:26:23 -04:00
nixocio
96330f608d Hide add access based on the user profile for credentials
* Show add access button if the user is a system admin
* Hide access button if the user is credential admin, org admin, but the
  credential does not belong to any org.
* Show access button if the user is a credential admin, org admin, and
  the credential is associated to an org.
* Show access button if the user is an org admin and the credential is
  associated to the org.

All those permutations are allowed by the API RBAC.
This PR updates the UX to not allow the user to attempt to perform any
action that would raise an error when modifying access to the
credentials.
2022-06-10 10:09:18 -04:00
Kersom
23aaf5b3ad Add cancel button to workflow job output (#12338)
Add cancel button to workflow job output

See: https://github.com/ansible/awx/issues/10514
2022-06-09 20:16:07 -04:00
Kersom
a3e86dcd73 Hide management job for non system admin as node choice (#12341)
Hide management job for non system admin as node type choice. Also, fix
unit tests related to this change.

See: https://github.com/ansible/awx/issues/12334
Also: https://github.com/ansible/awx/pull/10572
2022-06-09 20:15:03 -04:00
Alan Rominger
81b8028ea2 Merge pull request #12355 from AlanCoding/autoreload_once
Make awx-autoreloader work faster for large code changes
2022-06-09 15:19:17 -04:00
Alan Rominger
a4bfb032ff Make awx-autoreloader work faster for large code changes 2022-06-09 14:52:03 -04:00
Keith J. Grant
2704b202bf check for is_tree flag from children summary response 2022-06-09 14:25:39 -04:00
Seth Foster
550d9d5e42 detect if job events are tree-like and collapsible in the UI 2022-06-09 14:25:39 -04:00
John Westcott IV
ab2d05a07d Update replies documentation (#12305)
Adding heads and a couple standard replies and rewording other replies.
2022-06-09 13:41:53 -04:00
Alan Rominger
4543f6935f Only do substitutions for container path conversions with resolved paths (#12313)
* Resolve paths as much as possible before doing replacements

* Move unused method out of main code, test symlink
2022-06-09 11:36:29 -04:00
Alan Rominger
78d3d6dc94 Merge pull request #12219 from AlanCoding/really_skip
Change Demo Project status to successful
2022-06-09 11:19:57 -04:00
Alex Corey
02e7424f51 Adds import export to awx cli for schedules as a top level object 2022-06-09 09:47:50 -04:00
Andrea Decorte
2d6ca4cbb1 Update role module example (#12295)
Update example to use current parameter for workflows
instead of the deprecated one.

Signed-off-by: Andrea Decorte <adecorte@redhat.com>
2022-06-09 09:38:55 -04:00
Aine Riordan
e244644a1d Fix typo in application module example (#12187) 2022-06-09 09:38:34 -04:00
Jessica Steurer
d216457c09 Merge pull request #12320 from nixocio/ui_issue_2899
Pre-fill project for job template from query params
2022-06-09 10:24:29 -03:00
nixocio
20a1da61c0 Update project status to reflect project sync related to job template
Update project status to reflect project update sync related to job
template that was launched with branch override.

We were displaying status of project sync itself, not from the project
update job as expected.

Also, rename `Project Status` to be `Project Update Status`.

See: https://github.com/ansible/awx/issues/11987
2022-06-08 13:41:45 -04:00
Jessica Steurer
bf7ab1ede7 Merge pull request #12315 from djyasin/job_tag_characters
Job tag characters
2022-06-08 12:09:18 -03:00
Alex Corey
3b6b449545 Removes unneeded license files 2022-06-08 10:04:25 -04:00
Alex Corey
781cf531e6 Removes sanitize html in favor of DOMPurify library 2022-06-08 10:04:25 -04:00
dependabot[bot]
9b7475247c Bump @patternfly/patternfly from 4.194.4 to 4.196.7 in /awx/ui
Bumps [@patternfly/patternfly](https://github.com/patternfly/patternfly) from 4.194.4 to 4.196.7.
- [Release notes](https://github.com/patternfly/patternfly/releases)
- [Changelog](https://github.com/patternfly/patternfly/blob/main/RELEASE-NOTES.md)
- [Commits](https://github.com/patternfly/patternfly/compare/prerelease-v4.194.4...prerelease-v4.196.7)

---
updated-dependencies:
- dependency-name: "@patternfly/patternfly"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-08 14:00:52 +00:00
Alex Corey
44dc7f8d1d Merge pull request #12333 from ansible/dependabot/npm_and_yarn/awx/ui/devel/rrule-2.7.0
Bump rrule from 2.6.4 to 2.7.0 in /awx/ui
2022-06-08 09:59:39 -04:00
Kersom
60eaf9e235 Provide feedback when a health check is being performed (#12330)
Provide feedback when a health check is being performed
2022-06-07 16:27:46 -04:00
Jessica Steurer
f5102ed24d Merge pull request #12102 from john-westcott-iv/allow_fqcn
Respect optional fully qualified collection name (ansible.builtin.) for playbook identification
2022-06-07 16:44:36 -03:00
Jessica Steurer
309178e4e2 Merge pull request #12331 from kialam/fix-worker-json-404
Allow worker files to be loaded as blob objects.
2022-06-07 16:33:59 -03:00
Rebeccah Hunter
76ffdbb993 Merge pull request #12308 from rebeccahhh/job_event_lag
Metrics for callback receiver job event lag
2022-06-07 11:50:17 -04:00
nixocio
d8037618c8 Update logout/login redirect for different users
* Logout as User A and Login as User B redirects to `/home`
* Logout as User A and Login as User A redirects to `/home`
* Allow session to timeout as User A and Login as User A redirects to User A's last location

See: https://github.com/ansible/awx/issues/11167
2022-06-07 09:48:41 -04:00
Alex Corey
e94e15977c Merge pull request #12328 from ansible/dependabot/npm_and_yarn/awx/ui/async-2.6.4
Bump async from 2.6.3 to 2.6.4 in /awx/ui
2022-06-07 09:13:47 -04:00
John Westcott IV
f37951249f Adding options fqcn (ansible.builtin.) to playbook identification 2022-06-06 17:32:37 -04:00
Jeff Bradberry
9191079dda Merge pull request #11921 from jbradberry/fix-export-reconstruct-endpoint
Look up the correct top-level resource name when reconstructing foreign keys
2022-06-06 17:08:02 -04:00
Keith Grant
fdd560747d Persistent list filters (#12229)
* add PersistentFilters component

* add PersistentFilters test

* add persistent filters to all list pages

* update tests

* clear sessionStorage on logout

* fix persistent filter on wfjt detail; cleanup
2022-06-06 16:56:45 -04:00
Jeff Bradberry
faa5df19ca Merge pull request #12252 from jbradberry/fix-analytics-unicode
Double escape all unicode escape sequences in job events data
2022-06-06 16:41:06 -04:00
Rebeccah
5f9326b131 added average event processing metric (in seconds) that can be served to
grafana via prometheus.

This metric is a good indicator of how far behind the callback receiver
is. The higher the load, the further behind the receiver falls and the
greater the number of seconds the metric will display.

This number being high may indicate the need for horizontal scaling in
the control plane or vertically scaling the number of callback
receivers.
2022-06-06 15:14:56 -04:00
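Conceptually the metric above is just mean(processed_at - created_at) over recent events; a toy sketch (the real implementation feeds AWX's Redis-backed subsystem metrics):

```python
import time


class EventLagMetric:
    """Running average of seconds between event creation and processing."""

    def __init__(self):
        self.total_lag = 0.0
        self.count = 0

    def observe(self, event_created_ts: float) -> None:
        self.total_lag += time.time() - event_created_ts
        self.count += 1

    @property
    def average_seconds(self) -> float:
        # A persistently high value suggests scaling the control plane
        # or the number of callback receiver workers, as noted above.
        return self.total_lag / self.count if self.count else 0.0
```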
dependabot[bot]
8e389d40b4 Bump rrule from 2.6.4 to 2.7.0 in /awx/ui
Bumps [rrule](https://github.com/jakubroztocil/rrule) from 2.6.4 to 2.7.0.
- [Release notes](https://github.com/jakubroztocil/rrule/releases)
- [Changelog](https://github.com/jakubroztocil/rrule/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jakubroztocil/rrule/commits)

---
updated-dependencies:
- dependency-name: rrule
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-06 18:58:46 +00:00
nixocio
e62c77e783 Add typeahead for single choice surveys
Add typeahead for single choice surveys, also fix a couple of missing
translations for Select component.

See: https://github.com/ansible/awx/issues/8097
2022-06-06 13:57:00 -04:00
Kia Lam
48b3a43ec2 Allow worker files to be loaded as blob objects. 2022-06-06 10:47:30 -07:00
Lila
5f783fd5ee Revised job_tags to handle more than 1024 characters. 2022-06-06 13:28:22 -04:00
dependabot[bot]
e112cf93c2 Bump async from 2.6.3 to 2.6.4 in /awx/ui
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)

---
updated-dependencies:
- dependency-name: async
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-06 13:51:52 +00:00
Alex Corey
d9f26a411e Merge pull request #12318 from ansible/dependabot/npm_and_yarn/awx/ui/node-forge-1.3.1
Bump node-forge from 1.2.1 to 1.3.1 in /awx/ui
2022-06-05 14:25:42 -04:00
Kersom
ea84e7a491 Merge pull request #12322 from nixocio/fix_typo
Fix typo
2022-06-03 22:46:06 -04:00
Alex Corey
7fab619fed Merge pull request #12317 from ansible/dependabot/npm_and_yarn/awx/ui/ejs-3.1.8
Bump ejs from 3.1.6 to 3.1.8 in /awx/ui
2022-06-03 16:13:35 -04:00
nixocio
699a35b88a Fix typo
Fix typo on triage replies
2022-06-03 15:22:49 -04:00
nixocio
8095adb945 Pre-fill project for job template from query params
Pre-fill project when creating JT from Project -> Job Templates
List
2022-06-03 11:32:01 -04:00
Hampus Lundqvist
8d36712860 return status on event types defined in ref_keys 2022-06-03 16:10:44 +02:00
dependabot[bot]
0db34d0498 Bump node-forge from 1.2.1 to 1.3.1 in /awx/ui
Bumps [node-forge](https://github.com/digitalbazaar/forge) from 1.2.1 to 1.3.1.
- [Release notes](https://github.com/digitalbazaar/forge/releases)
- [Changelog](https://github.com/digitalbazaar/forge/blob/main/CHANGELOG.md)
- [Commits](https://github.com/digitalbazaar/forge/compare/v1.2.1...v1.3.1)

---
updated-dependencies:
- dependency-name: node-forge
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-03 14:06:45 +00:00
dependabot[bot]
7ab254e5e3 Bump ejs from 3.1.6 to 3.1.8 in /awx/ui
Bumps [ejs](https://github.com/mde/ejs) from 3.1.6 to 3.1.8.
- [Release notes](https://github.com/mde/ejs/releases)
- [Changelog](https://github.com/mde/ejs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mde/ejs/compare/v3.1.6...v3.1.8)

---
updated-dependencies:
- dependency-name: ejs
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-03 14:06:14 +00:00
Alex Corey
dd7ab459e2 Merge pull request #12196 from AlexSCorey/popoversInventoryAndInventorySource
Adds popover text for Inventory and InventorySources
2022-06-03 10:01:36 -04:00
Alex Corey
33df2e8aa4 Adds popover text for Inventory and InventorySources 2022-06-03 09:38:45 -04:00
Jessica Steurer
39b8fd433b Merge pull request #12251 from nixocio/ui_issue_11196
Add controller_node to job details page
2022-06-03 08:57:29 -03:00
Kersom
c31d74100d Add host description in a couple of screens (#12292)
Add host description in a couple of screens

See: https://github.com/ansible/awx/issues/3348
Also: https://github.com/ansible/awx/issues/9363
2022-06-02 15:40:41 -04:00
Alan Rominger
3af89c1e2b Merge pull request #12307 from AlanCoding/twilio
Upgrade twilio dependency to pick up fix
2022-06-02 13:48:34 -04:00
John Westcott IV
1d35bba8c3 Variablizing the awx_template_version for building to allow release process to update the version in the module_util (#12248) 2022-06-02 12:28:57 -04:00
djyasin
c3c3e24875 Merge pull request #12314 from john-westcott-iv/add_irc_msg_to_release
Adding irc bullhorn to release process
2022-06-02 11:57:32 -04:00
John Westcott IV
ab9c97b158 Adding irc bullhorn to release process 2022-06-02 11:30:57 -04:00
nixocio
5e700c992d Add controller_node to job details page
Add controller_node to job details page. Modify serializers to make
controller_node available to the UI.

See: https://github.com/ansible/awx/issues/11196
Also: https://github.com/ansible/awx/issues/12132
2022-06-02 11:21:06 -04:00
Seth Foster
b548ad21a9 Merge pull request #12240 from fosterseth/make_prometheus_grafana
Add prometheus and grafana make commands for local environment
2022-06-01 17:55:43 -04:00
Jeff Bradberry
127016d36b Allow for multiple --extra_vars or --variables flags in awx-cli
This is particularly useful when you are using the @filepath version
of the flag, since otherwise there would be no way to issue the
command with multiple vars files.

Also, add `-e` as an alias to `--extra_vars`
2022-06-01 13:24:24 -04:00
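For reference, the standard way to make a flag repeatable in a Python CLI is `argparse` with `action='append'`; a minimal sketch of the pattern (not awx-cli's actual parser):

```python
import argparse

parser = argparse.ArgumentParser()
# Each -e/--extra_vars occurrence appends to a list, so several
# @filepath vars files can be passed in one command.
parser.add_argument('-e', '--extra_vars', action='append', default=[])

args = parser.parse_args(['-e', '@vars1.yml', '--extra_vars', '@vars2.yml'])
print(args.extra_vars)  # ['@vars1.yml', '@vars2.yml']
```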
kialam
3d0391173b Add popover help text to job details and ad hoc job details (#12261)
* Add popover text to Job Details page.

* Add module documentation links to ad hoc job detail page.

* Add forks help text to job details.
2022-06-01 13:00:59 -04:00
kialam
ce560bcd5f Cleanup some text strings files to return object literals (#12269)
* Cleanup some text strings files to return object literals instead of arrow functions.

* Fix render.

* Fix unit tests.
2022-06-01 12:10:55 -04:00
Alan Rominger
d553c37d7d Upgrade twilio dependency to pick up fix 2022-06-01 11:35:43 -04:00
John Maynard
8a5e89e24b Switch Jinja2 environment for rendering before testing JSON to ImmutableSandboxedEnvironment
Render Jinja template before checking for valid JSON
2022-06-01 11:10:15 -04:00
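Jinja2's sandbox module provides `ImmutableSandboxedEnvironment`; a minimal render-then-validate sequence might look like this (illustrative sketch, not the actual AWX code path):

```python
import json
from jinja2.sandbox import ImmutableSandboxedEnvironment

env = ImmutableSandboxedEnvironment()

def renders_to_valid_json(template_str: str, variables: dict) -> bool:
    """Render the template in the sandbox first, then test the result for valid JSON."""
    rendered = env.from_string(template_str).render(**variables)
    try:
        json.loads(rendered)
        return True
    except json.JSONDecodeError:
        return False

print(renders_to_valid_json('{"forks": {{ forks }}}', {'forks': 5}))  # True
```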
Kersom
8c3e289170 Merge pull request #12178 from Tioborto/feat/add-token-description-column
feat: add token description column
2022-06-01 10:17:28 -04:00
Seth Foster
9364c8e562 typo 2022-05-31 17:18:45 -04:00
Seth Foster
5831949ebf maxconn 2022-05-31 17:16:27 -04:00
Seth Foster
7fe98a670f haproxy 2022-05-31 17:12:19 -04:00
Seth Foster
6f68f3cba6 Add make prometheus and make grafana commands to dev environment 2022-05-31 17:07:15 -04:00
Alex Corey
4dc956c76f Merge pull request #12275 from ansible/dependabot/npm_and_yarn/awx/ui/devel/ace-builds-1.5.1
Bump ace-builds from 1.4.12 to 1.5.1 in /awx/ui
2022-05-31 10:32:25 -04:00
Alex Corey
11a56117eb Merge pull request #12284 from ansible/dependabot/npm_and_yarn/awx/ui/devel/codemirror-5.65.4
Bump codemirror from 5.61.0 to 5.65.4 in /awx/ui
2022-05-31 10:31:51 -04:00
Alex Corey
10eed6286a Merge pull request #12285 from ansible/dependabot/npm_and_yarn/awx/ui/devel/styled-components-5.3.5
Bump styled-components from 5.3.0 to 5.3.5 in /awx/ui
2022-05-31 10:31:09 -04:00
Jessica Steurer
d36befd9ce Merge pull request #12283 from jainnikhil30/add_forks_to_job_details
add forks to the job details
2022-05-26 18:03:29 -03:00
Jessica Steurer
0c4ddc7f6f Merge pull request #12280 from nixocio/ui_issue_12279
Allow to copy entity within the minute
2022-05-26 14:09:35 -03:00
nixocio
3ef9679de3 Allow to copy entity within the minute
Allow to copy entity within the minute - add seconds and milliseconds as part of the name
of the copied entity.

See: https://github.com/ansible/awx/issues/12279
2022-05-25 16:35:22 -04:00
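A common way to keep copy names unique within the same minute is to embed a timestamp down to milliseconds; a hypothetical sketch of the naming pattern, not the UI's actual code:

```python
from datetime import datetime

def copy_name(original: str) -> str:
    """Append seconds and milliseconds so two copies made within
    the same minute still receive distinct names."""
    stamp = datetime.now().strftime('%H:%M:%S.%f')[:-3]  # trim microseconds to ms
    return f'{original} @ {stamp}'

print(copy_name('Demo Job Template'))  # e.g. 'Demo Job Template @ 16:35:22.123'
```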
dependabot[bot]
d36441489a Bump styled-components from 5.3.0 to 5.3.5 in /awx/ui
Bumps [styled-components](https://github.com/styled-components/styled-components) from 5.3.0 to 5.3.5.
- [Release notes](https://github.com/styled-components/styled-components/releases)
- [Changelog](https://github.com/styled-components/styled-components/blob/main/CHANGELOG.md)
- [Commits](https://github.com/styled-components/styled-components/compare/v5.3.0...v5.3.5)

---
updated-dependencies:
- dependency-name: styled-components
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-25 18:46:16 +00:00
Alex Corey
d26c12dd7c Merge pull request #12243 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/patternfly-4.194.4
Bump @patternfly/patternfly from 4.192.1 to 4.194.4 in /awx/ui
2022-05-25 14:44:27 -04:00
dependabot[bot]
7fa7ed3658 Bump @patternfly/patternfly from 4.192.1 to 4.194.4 in /awx/ui
Bumps [@patternfly/patternfly](https://github.com/patternfly/patternfly) from 4.192.1 to 4.194.4.
- [Release notes](https://github.com/patternfly/patternfly/releases)
- [Changelog](https://github.com/patternfly/patternfly/blob/main/RELEASE-NOTES.md)
- [Commits](https://github.com/patternfly/patternfly/compare/prerelease-v4.192.1...prerelease-v4.194.4)

---
updated-dependencies:
- dependency-name: "@patternfly/patternfly"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-25 18:16:48 +00:00
Jessica Steurer
2c68e7a3d2 Merge pull request #12247 from nixocio/ui_issue_12129
Add job_explanation to job details page
2022-05-25 14:45:39 -03:00
Alex Corey
0c9b1c3c79 Merge pull request #12274 from ansible/dependabot/npm_and_yarn/awx/ui/devel/lingui/react-3.13.3
Bump @lingui/react from 3.9.0 to 3.13.3 in /awx/ui
2022-05-25 12:09:01 -04:00
dependabot[bot]
e10b0e513e Bump @lingui/react from 3.9.0 to 3.13.3 in /awx/ui
Bumps [@lingui/react](https://github.com/lingui/js-lingui) from 3.9.0 to 3.13.3.
- [Release notes](https://github.com/lingui/js-lingui/releases)
- [Changelog](https://github.com/lingui/js-lingui/blob/main/CHANGELOG.md)
- [Commits](https://github.com/lingui/js-lingui/compare/v3.9.0...v3.13.3)

---
updated-dependencies:
- dependency-name: "@lingui/react"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-25 15:49:56 +00:00
dependabot[bot]
68c66edada Bump ace-builds from 1.4.12 to 1.5.1 in /awx/ui
Bumps [ace-builds](https://github.com/ajaxorg/ace-builds) from 1.4.12 to 1.5.1.
- [Release notes](https://github.com/ajaxorg/ace-builds/releases)
- [Changelog](https://github.com/ajaxorg/ace-builds/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ajaxorg/ace-builds/compare/v1.4.12...v1.5.1)

---
updated-dependencies:
- dependency-name: ace-builds
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-25 15:49:30 +00:00
dependabot[bot]
6eb17e7af7 Bump codemirror from 5.61.0 to 5.65.4 in /awx/ui
Bumps [codemirror](https://github.com/codemirror/CodeMirror) from 5.61.0 to 5.65.4.
- [Release notes](https://github.com/codemirror/CodeMirror/releases)
- [Changelog](https://github.com/codemirror/CodeMirror/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codemirror/CodeMirror/compare/5.61.0...5.65.4)

---
updated-dependencies:
- dependency-name: codemirror
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-25 15:49:11 +00:00
Alex Corey
9a24da3098 Merge pull request #12281 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-table-4.83.1
Bump @patternfly/react-table from 4.75.2 to 4.83.1 in /awx/ui
2022-05-25 11:48:18 -04:00
Nikhil
8ed0543b8b add forks to the job details 2022-05-25 20:07:38 +05:30
dependabot[bot]
73a84444d1 Bump @patternfly/react-table from 4.75.2 to 4.83.1 in /awx/ui
Bumps [@patternfly/react-table](https://github.com/patternfly/patternfly-react) from 4.75.2 to 4.83.1.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-table@4.75.2...@patternfly/react-table@4.83.1)

---
updated-dependencies:
- dependency-name: "@patternfly/react-table"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-24 19:40:54 +00:00
Alex Corey
451767c179 Merge pull request #12207 from ansible/dependabot/npm_and_yarn/awx/ui/devel/eslint-plugin-i18next-5.2.1
Bump eslint-plugin-i18next from 5.1.2 to 5.2.1 in /awx/ui
2022-05-24 15:39:40 -04:00
Alan Rominger
8366386126 Merge pull request #12260 from AlanCoding/callback_status
Fix the callback receiver --status command
2022-05-24 15:26:02 -04:00
Alex Corey
997686a2ea Merge pull request #12257 from AlexSCorey/updateDependabot
Dependabot runs monthly and only makes PRs for production dependencies
2022-05-24 09:35:09 -04:00
HampusLundqvist
f02212b1fe return event_status on all gitlab webhook types 2022-05-23 22:13:00 +02:00
Jessica Steurer
2ba68ef5d0 Merge pull request #12249 from keithjgrant/filter-ws-jobs
use qs params when fetching new/updated jobs to preserve filters
2022-05-19 17:24:52 -03:00
djyasin
2041665880 Merge pull request #12227 from ansible/vaultcredentialsbug
Prevent edit of  vault ID once credential is created.
2022-05-19 15:13:41 -04:00
Alan Rominger
1e6ca01686 Fix the callback receiver --status command 2022-05-19 15:00:49 -04:00
Alex Corey
e15a76e7aa Dependabot runs monthly and only makes PRs for production dependencies 2022-05-19 11:16:51 -04:00
Alex Corey
64db44acef Adds popover for Notification Templates and Instance group details (#12197) 2022-05-18 19:04:21 -04:00
Keith J. Grant
9972389a8d fetch relevant jobs based on WS events 2022-05-18 14:40:18 -07:00
Seth Foster
e0b1274eee Merge pull request #12094 from sean-m-sullivan/wait
update awx collection wait interval to 2
2022-05-18 15:00:24 -04:00
Jeff Bradberry
973facebba Double escape all unicode escape sequences in job events data
when collecting it for analytics.
2022-05-18 12:00:03 -04:00
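Double-escaping means prefixing each `\uXXXX` escape with an extra backslash so a later decoding pass leaves the original sequence intact; a minimal sketch of the idea (not the analytics collector's actual code):

```python
import re

def double_escape_unicode(text: str) -> str:
    """Turn every literal \\uXXXX sequence into \\\\uXXXX so a later
    decode cannot reinterpret it as a unicode escape."""
    return re.sub(r'\\(u[0-9a-fA-F]{4})', r'\\\\\1', text)

raw = 'msg: \\u26a0 warning'       # contains the literal characters \u26a0
print(double_escape_unicode(raw))  # the backslash is now doubled
```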
sean-m-sullivan
df649e2c56 update awx collection wait interval to 2 2022-05-18 09:57:40 -04:00
nixocio
a778017efb Add job_explanation to job details page
Add job_explanation to the job details page

See: https://github.com/ansible/awx/issues/12129
2022-05-18 09:16:39 -04:00
Keith J. Grant
6a9305818e use qs params when fetching new/updated jobs to preserve filters 2022-05-17 14:57:57 -07:00
Alexandre Bortoluzzi
2669904c72 fix: header row style 2022-05-17 23:04:34 +02:00
Kersom
35529b5eeb Add help text popovers to /#/applications details fields (#12222)
Add help text popovers to /#/applications details fields

See: https://github.com/ansible/awx/issues/11873
2022-05-17 20:11:51 +00:00
Sarah Akus
d55ed8713c Merge pull request #12239 from kialam/fix-12228-edit-deleted-wf-node
Fix on save error message for wf approval nodes.
2022-05-17 12:07:45 -04:00
Kersom
7973f28bed Merge pull request #12237 from ansible/dependabot/npm_and_yarn/awx/ui/devel/mock-socket-9.1.3
Bump mock-socket from 9.0.3 to 9.1.3 in /awx/ui
2022-05-17 11:31:45 -04:00
dependabot[bot]
8189964cce Bump mock-socket from 9.0.3 to 9.1.3 in /awx/ui
Bumps [mock-socket](https://github.com/thoov/mock-socket) from 9.0.3 to 9.1.3.
- [Release notes](https://github.com/thoov/mock-socket/releases)
- [Changelog](https://github.com/thoov/mock-socket/blob/master/CHANGELOG.md)
- [Commits](https://github.com/thoov/mock-socket/commits)

---
updated-dependencies:
- dependency-name: mock-socket
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-17 14:24:27 +00:00
Sarah Akus
ee4c901dc7 Merge pull request #12210 from ansible/dependabot/npm_and_yarn/awx/ui/devel/react-error-boundary-3.1.4
Bump react-error-boundary from 3.1.3 to 3.1.4 in /awx/ui
2022-05-17 10:17:44 -04:00
Lila
78220cad82 Disables ability to edit vault ID on the UI side. 2022-05-16 16:56:29 -04:00
Lila
40279bc6c0 Wrote corresponding tests.
Updated verbiage to be more in line with existing messages.
2022-05-16 16:55:49 -04:00
Lila
f6fb46d99e Prevent edit of vault ID once credential is created and added check to ensure user is actually trying to change vault id. 2022-05-16 16:54:03 -04:00
Kia Lam
954b32941e Fix on save error message for wf approval nodes. 2022-05-16 11:17:23 -07:00
Seth Foster
48b016802c Merge pull request #12049 from fosterseth/awxkit_import_help_text
Improve awxkit import -h
2022-05-16 11:59:44 -04:00
Alex Corey
35aa5dd79f Merge pull request #12212 from ansible/dependabot/npm_and_yarn/awx/ui/devel/luxon-2.4.0
Bump luxon from 2.0.1 to 2.4.0 in /awx/ui
2022-05-16 09:44:09 -04:00
JST
237402068c Merge pull request #12073 from fosterseth/scm_invsrc_project_update
SCM inv source should trigger project update
2022-05-16 09:17:45 -03:00
Kersom
31dda6e9d6 Add help text popovers to /#/execution_environments details fields (#12224)
Add help text popovers to /#/execution_environments details fields

See: https://github.com/ansible/awx/issues/11874
2022-05-13 14:53:36 -04:00
Alan Rominger
bca6e00e37 Change Demo Project status to successful 2022-05-12 16:14:09 -04:00
Sarah Akus
1c9b4af61d Merge pull request #12213 from nixocio/ui_issue_5727
Add details related workflow job on the workflow approval details
2022-05-12 16:02:25 -04:00
Seth Foster
eba4a3f1c2 in case we fail a job in task manager, we need to add the project update to the inventoryupdate.source_project field 2022-05-12 15:21:17 -04:00
Seth Foster
0ae9fe3624 if dependency fails, fail job in task manager 2022-05-12 14:00:13 -04:00
Seth Foster
1b662fcca5 SCM inv source trigger project update
- scm based inventory sources should launch project updates prior to
running inventory updates for that source.
- fixes scenario where a job is based on projectA, but the inventory
source is based on projectB. Running the job will likely trigger a
sync for projectA, but not projectB.

comments
2022-05-12 14:00:12 -04:00
John Westcott IV
cfdba959dd Falling back to project.status if the last project sync job was deleted (#12215) 2022-05-12 12:22:04 -04:00
John Westcott IV
78660ad0a2 Updated dependencies to reduce issues with dependabot and container scanning (#12180)
Modify updater.sh to remove the local path references.
2022-05-12 09:25:36 -04:00
kialam
70697869d7 Merge pull request #12220 from kialam/add-popover-detail-job-templates
Fix pop over text for job template details page.
2022-05-11 18:34:42 -07:00
Kia Lam
10e55108ef Fix pop over text for job template details page. 2022-05-11 16:14:58 -07:00
JST
d4223b8877 Merge pull request #12204 from kialam/add-popover-detail-job-templates
Add popover text to JT and WJT details pages.
2022-05-11 17:39:47 -03:00
Shane McDonald
9537d148d7 Merge pull request #12175 from TheRealHaoLiu/change-ee-container-volume-selinux-label
change SELinux label for EE volume mount
2022-05-11 16:00:02 -04:00
Kia Lam
a133a14b70 Fix unit tests. 2022-05-11 12:29:32 -07:00
Jeff Bradberry
4ca9e9577b Merge pull request #12216 from jangel97/devel
add param all_pages to method export_assets
2022-05-11 14:51:50 -04:00
Jose Angel Morena
44986fad36 set all_pages to True by default in get_method 2022-05-11 19:54:26 +02:00
Jose Angel Morena
eb2fca86b6 set all_pages to True by default in get_method 2022-05-11 19:52:32 +02:00
nixocio
458a1fc035 Add details related workflow job on the workflow approval details
Add details related workflow job on the workflow approval details

Remove not used prop isLoading, fix, and expand unit-tests related to
workflow approval details.
2022-05-11 13:32:59 -04:00
Kia Lam
6e87b29e92 Add help text to JT and WJT forms. 2022-05-11 09:10:22 -07:00
Kia Lam
be1d0c525c Add popover text to JT and WJT details pages. 2022-05-11 09:10:21 -07:00
Alex Corey
0787cb4fc2 Merge pull request #12185 from AlexSCorey/8690-SortSchedulesByType
Adds sorting by type on the schedules list
2022-05-11 10:57:10 -04:00
dependabot[bot]
19063a2d90 Bump luxon from 2.0.1 to 2.4.0 in /awx/ui
Bumps [luxon](https://github.com/moment/luxon) from 2.0.1 to 2.4.0.
- [Release notes](https://github.com/moment/luxon/releases)
- [Changelog](https://github.com/moment/luxon/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moment/luxon/compare/2.0.1...2.4.0)

---
updated-dependencies:
- dependency-name: luxon
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-11 14:40:47 +00:00
Alex Corey
e8e2f820d2 Merge pull request #12153 from ansible/dependabot/npm_and_yarn/awx/ui/devel/d3-7.4.4
Bump d3 from 7.1.1 to 7.4.4 in /awx/ui
2022-05-11 10:35:29 -04:00
Alan Rominger
aaad634483 Only use in-memory cache for database settings, set ttl=5 (#12166)
* Only use in-memory cache for database settings

Make necessary adjustments to monkeypatch
  as it is very vulnerable to recursion
  Remove migration exception that is now redundant

Clear cache if a setting is changed

* Use dedicated middleware for setting cache stuff
  Clear cache for each request

* Add tests for in-memory cache
2022-05-10 21:58:22 -04:00
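The general shape of a short-lived in-memory settings cache can be sketched with `cachetools.TTLCache` (hypothetical names; not the middleware AWX actually ships):

```python
from cachetools import TTLCache

# Database-backed settings live in process memory for at most 5 seconds.
_settings_cache = TTLCache(maxsize=2048, ttl=5)

def get_setting(name, loader):
    """Return a cached setting, loading it from the database on a miss."""
    try:
        return _settings_cache[name]
    except KeyError:
        _settings_cache[name] = value = loader(name)
        return value

def clear_settings_cache():
    """Invoked when a setting changes, and per request by the middleware."""
    _settings_cache.clear()
```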
dependabot[bot]
dfa4127bae Bump react-error-boundary from 3.1.3 to 3.1.4 in /awx/ui
Bumps [react-error-boundary](https://github.com/bvaughn/react-error-boundary) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/bvaughn/react-error-boundary/releases)
- [Changelog](https://github.com/bvaughn/react-error-boundary/blob/master/CHANGELOG.md)
- [Commits](https://github.com/bvaughn/react-error-boundary/compare/v3.1.3...v3.1.4)

---
updated-dependencies:
- dependency-name: react-error-boundary
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-10 21:14:25 +00:00
Jeff Bradberry
f3725c714a Merge pull request #12119 from kimbernator/devel
Remove hardcoded public schema in cleanup_jobs.py
2022-05-10 17:14:11 -04:00
dependabot[bot]
cef3ed01ac Bump eslint-plugin-i18next from 5.1.2 to 5.2.1 in /awx/ui
Bumps [eslint-plugin-i18next](https://github.com/edvardchen/eslint-plugin-i18next) from 5.1.2 to 5.2.1.
- [Release notes](https://github.com/edvardchen/eslint-plugin-i18next/releases)
- [Changelog](https://github.com/edvardchen/eslint-plugin-i18next/blob/main/CHANGELOG.md)
- [Commits](https://github.com/edvardchen/eslint-plugin-i18next/compare/v5.1.2...v5.2.1)

---
updated-dependencies:
- dependency-name: eslint-plugin-i18next
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-10 21:12:30 +00:00
Alex Corey
fc1a3f46f9 Merge pull request #12154 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-table-4.75.2
Bump @patternfly/react-table from 4.67.19 to 4.75.2 in /awx/ui
2022-05-10 17:10:16 -04:00
Sarabraj Singh
bfa5feb51b Merge pull request #12205 from sarabrajsingh/revert-and-fix-12186
Revert and fix 12186
2022-05-10 15:29:56 -04:00
Sarabraj Singh
4c0813bd69 deleting folder contents using find command 2022-05-10 14:43:27 -04:00
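The fix swaps `rm -Rf` on the directory itself for a `find`-based deletion of its contents; in Python, the equivalent "empty the folder but keep it" idiom might look like this sketch:

```python
import os
import shutil

def clear_directory(path: str) -> None:
    """Delete everything inside `path` while leaving the directory itself intact."""
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            shutil.rmtree(entry.path)
        else:
            os.unlink(entry.path)
```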
Sarabraj Singh
9b0b0f2a5f Revert "fixing rm -Rf logic to delete contents of folder but leave parent folder intact"
This reverts commit df2d303ab0.
2022-05-10 14:06:09 -04:00
JST
e87c121f8f Merge pull request #12156 from mabashian/large-workflow-crash
Don't repeatedly traverse workflow nodes when finding ancestors
2022-05-10 14:41:49 -03:00
Seth Foster
65dfc424bc Improve help text for import and export 2022-05-10 13:18:40 -04:00
Sarabraj Singh
dfea9cc526 Merge pull request #12186 from sarabrajsingh/bugfix-delete-on-update-11733
fixing rm -Rf logic to delete contents of folder
2022-05-09 16:08:39 -04:00
Rebeccah Hunter
0d97a0364a Merge pull request #12170 from ansible/give_us_the_deets
Update triage_replies give us more info
2022-05-09 15:22:57 -04:00
kialam
1da57a4a12 Merge pull request #12191 from kialam/fix-12188-undefined-wf-approval-list
Fix deleted wf approval node name.
2022-05-09 11:16:21 -07:00
Rebeccah Hunter
b73078e9db Merge pull request #11373 from rebeccahhh/fix-settings_cache_threading_awx
add lock to cachetools usage
2022-05-09 13:56:16 -04:00
Kia Lam
b17f22cd38 Fix unit tests. 2022-05-09 10:55:51 -07:00
Alex Corey
7b225057ce Merge pull request #12198 from AlexSCorey/fixPRLabeler
Prevents the api label from being added to UI only PRs
2022-05-09 13:25:51 -04:00
Alex Corey
8242078c06 Prevents the api label from being added to UI only PRs 2022-05-09 11:17:22 -04:00
John Westcott IV
a86740c3c9 Adding ability to start and plumb splunk instance (#12183) 2022-05-09 09:50:28 -04:00
Kia Lam
cbde56549d Fix deleted wf approval node name. 2022-05-06 13:51:16 -07:00
CWollinger
385a94866c add tooltip for checkbox in DataListToolbar (#12133)
Signed-off-by: CWollinger <CWollinger@web.de>
2022-05-06 16:36:07 -04:00
chris meyers
21972c91dd add lock to cachetools usage
* We observed daphne giving tracebacks when accessing logging settings.
  Originally, configure tower in tower settings was not a suspect because
  daphne is not multi-process. We've had issues with configure tower in
  tower settings and multi-process before. We later learned that Daphne
  is multi-threaded. Configure tower in tower was back to being a
  suspect. We constructed a minimal reproducer to show that multiple
  threads accessing settings can cause the same traceback that we saw in
  daphne. See
  https://gist.github.com/chrismeyersfsu/7aa4bdcf76e435efd617cb078c64d413
  for that reproducer. These fixes stop the recurrence.
2022-05-06 16:24:36 -04:00
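Because cachetools containers are not thread-safe on their own, the usual fix is to guard every cache access with a lock; a minimal sketch of the pattern (hypothetical names, not the AWX settings code):

```python
import threading
from cachetools import TTLCache

_cache = TTLCache(maxsize=1024, ttl=60)
_cache_lock = threading.Lock()

def cached_lookup(key, loader):
    """Thread-safe read-through cache: concurrent threads (as under a
    multi-threaded daphne) cannot corrupt the cache internals."""
    with _cache_lock:
        if key in _cache:
            return _cache[key]
    value = loader(key)
    with _cache_lock:
        _cache[key] = value
    return value
```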
JST
36d3f9afdb Merge pull request #12184 from marshmalien/2912-prefill-playbook
Autopopulate playbook field when there is one resource
2022-05-06 17:18:18 -03:00
Sarabraj Singh
df2d303ab0 fixing rm -Rf logic to delete contents of folder but leave parent folder intact 2022-05-06 15:41:34 -04:00
Alex Corey
05eba350b7 Adds sorting by type on the schedules list. Also adds functionality for bulk_data command to create schedules 2022-05-06 09:45:45 -04:00
Alexandre Bortoluzzi
1e12e12578 style: prettier file 2022-05-06 14:32:54 +02:00
Alexandre Bortoluzzi
bbdab82433 fix: user token list item tests 2022-05-06 14:26:10 +02:00
kialam
f7be6b6423 Remove timezone formatting for date picker entry. (#12163) 2022-05-05 16:46:38 -04:00
Marliana Lara
ba358eaa4f Autopopulate playbook field when there is one resource 2022-05-05 16:12:26 -04:00
JST
162e09972f Merge pull request #12172 from keithjgrant/11869-users-help-text
Add help text to user token detail
2022-05-05 15:35:14 -03:00
JST
2cfccdbe16 Merge pull request #12158 from nixocio/ui_issue_11862
Add help text popovers to /#/credentials details fields
2022-05-05 14:00:15 -03:00
Kersom
434fa7b7be Merge pull request #12161 from nixocio/ui_css_details
Adding popover for details is showing breaking of words
2022-05-05 16:34:49 +03:00
Sarah Akus
2f8bdf1eab Merge pull request #12173 from kialam/fix-12167-unresponsive-datepicker
Upgrade @patternfly/react-core.
2022-05-05 09:16:13 -04:00
Alexandre Bortoluzzi
e1705738a1 fix: french vocabulary 2022-05-05 12:51:35 +02:00
Alexandre Bortoluzzi
4cfb8fe482 feat: display token description on user tokens list page 2022-05-05 12:51:17 +02:00
Hao Liu
d52d2af4b4 change SELinux label for EE volume mount
- The `z` option indicates that the bind mount content is shared among multiple containers.
- The `Z` option indicates that the bind mount content is private and unshared.

If multiple containers attempt to mount the same directory, the `Z` option will cause a race condition where only the last container started will have access to the files.

Ref: https://docs.docker.com/storage/bind-mounts/#configure-the-selinux-label
Signed-off-by: Hao Liu <haoli@redhat.com>
2022-05-04 21:31:34 -04:00
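With the Docker SDK for Python, the shared (`z`) versus private (`Z`) relabeling is chosen through the volume `mode`; a hedged sketch (image name and paths are hypothetical):

```python
import docker

client = docker.from_env()
client.containers.run(
    'quay.io/example/awx-ee:latest',
    volumes={
        # 'z': shared relabel, safe when several containers mount the same dir.
        # 'Z' would relabel the content private to one container, so with
        # concurrent containers only the last one started could read it.
        '/var/lib/awx/projects': {'bind': '/runner/project', 'mode': 'z'},
    },
)
```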
Kia Lam
97fd3832d4 Upgrade @patternfly/react-core. 2022-05-04 14:19:21 -07:00
Keith J. Grant
3cedd0e0bd add help text to user token detail 2022-05-04 13:23:28 -07:00
Rebeccah Hunter
507b1898ce Update triage_replies.md 2022-05-04 15:28:28 -04:00
Kersom
e3fe9010b7 Merge pull request #12152 from ansible/dependabot/npm_and_yarn/awx/ui/devel/testing-library/react-12.1.5
Bump @testing-library/react from 12.1.4 to 12.1.5 in /awx/ui
2022-05-04 20:58:20 +03:00
Kersom
2c350b8b90 Merge pull request #12151 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/patternfly-4.192.1
Bump @patternfly/patternfly from 4.183.1 to 4.192.1 in /awx/ui
2022-05-04 11:04:25 -04:00
nixocio
d74e258079 Add help text popovers to /#/credentials details fields
Add help text popovers to /#/credentials details fields

See: https://github.com/ansible/awx/issues/11862
2022-05-04 09:29:39 -04:00
nixocio
b03cabd314 Adding popover for details is showing breaking of words
Now that we are adding popovers for details pages, I noticed a couple of
strings wrapping in odd places; update the CSS to avoid that.

Also `word-break: break-word` was deprecated.
2022-05-03 16:54:32 -04:00
Keith Grant
6a63af83c0 Merge pull request #12150 from keithjgrant/add-old-version-message-to-triage-replies
add old version message to triage replies
2022-05-03 11:47:07 -07:00
Alan Rominger
452744b67e Delay update of artifacts and error fields until final job save (#11832)
* Delay update of artifacts until final job save

Save tracebacks from receptor module to callback object

Move receptor traceback check up to be more logical

Use new mock_me fixture to avoid DB call with me method

Update the special runner message to the delay_update pattern

* Move special runner message into post-processing of callback fields
2022-05-03 14:42:50 -04:00
mabashian
703a68d4fe Don't repeatedly traverse workflow nodes when finding ancestors 2022-05-03 13:39:03 -04:00
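One standard way to avoid re-walking shared subgraphs when collecting a node's ancestors is memoization; the sketch below shows the general technique over hypothetical data structures, not the actual UI patch:

```python
def collect_ancestors(node_id, parents_of, memo=None):
    """Return all ancestors of node_id, caching per-node results so shared
    subgraphs in a large workflow are traversed only once."""
    if memo is None:
        memo = {}
    if node_id in memo:
        return memo[node_id]
    ancestors = set()
    for parent in parents_of.get(node_id, ()):
        ancestors.add(parent)
        ancestors |= collect_ancestors(parent, parents_of, memo)
    memo[node_id] = ancestors
    return ancestors

# parents_of maps each node to its direct parents (hypothetical workflow).
parents_of = {'d': ['b', 'c'], 'b': ['a'], 'c': ['a']}
print(collect_ancestors('d', parents_of))  # {'a', 'b', 'c'}
```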
dependabot[bot]
557893e4b0 Bump @patternfly/react-table from 4.67.19 to 4.75.2 in /awx/ui
Bumps [@patternfly/react-table](https://github.com/patternfly/patternfly-react) from 4.67.19 to 4.75.2.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-table@4.67.19...@patternfly/react-table@4.75.2)

---
updated-dependencies:
- dependency-name: "@patternfly/react-table"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-03 16:39:23 +00:00
dependabot[bot]
d7051fb6ce Bump d3 from 7.1.1 to 7.4.4 in /awx/ui
Bumps [d3](https://github.com/d3/d3) from 7.1.1 to 7.4.4.
- [Release notes](https://github.com/d3/d3/releases)
- [Changelog](https://github.com/d3/d3/blob/main/CHANGES.md)
- [Commits](https://github.com/d3/d3/compare/v7.1.1...v7.4.4)

---
updated-dependencies:
- dependency-name: d3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-03 16:38:37 +00:00
dependabot[bot]
867c50da19 Bump @testing-library/react from 12.1.4 to 12.1.5 in /awx/ui
Bumps [@testing-library/react](https://github.com/testing-library/react-testing-library) from 12.1.4 to 12.1.5.
- [Release notes](https://github.com/testing-library/react-testing-library/releases)
- [Changelog](https://github.com/testing-library/react-testing-library/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testing-library/react-testing-library/compare/v12.1.4...v12.1.5)

---
updated-dependencies:
- dependency-name: "@testing-library/react"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-03 16:38:14 +00:00
dependabot[bot]
e8d76ec272 Bump @patternfly/patternfly from 4.183.1 to 4.192.1 in /awx/ui
Bumps [@patternfly/patternfly](https://github.com/patternfly/patternfly) from 4.183.1 to 4.192.1.
- [Release notes](https://github.com/patternfly/patternfly/releases)
- [Changelog](https://github.com/patternfly/patternfly/blob/main/RELEASE-NOTES.md)
- [Commits](https://github.com/patternfly/patternfly/compare/prerelease-v4.183.1...prerelease-v4.192.1)

---
updated-dependencies:
- dependency-name: "@patternfly/patternfly"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-03 16:37:37 +00:00
Keith J. Grant
c102c61532 add old version message to triage replies 2022-05-03 08:20:45 -07:00
John Westcott IV
adb2b0da89 Adding standard message for AWX release (#12105) 2022-05-03 09:24:48 -04:00
JST
3610008699 Merge pull request #12145 from marshmalien/4057-project-scm-type-details
Add SCM Type detail to job detail view
2022-05-03 10:09:35 -03:00
kialam
3b44838dde Merge pull request #12146 from nixocio/ui_all
Update dependabot
2022-05-02 14:46:58 -07:00
nixocio
0205d7deab Update dependabot
dependency-type to be direct
2022-05-02 17:15:54 -04:00
Marliana Lara
dd47829bdb Add SCM Type detail to job details 2022-05-02 16:25:46 -04:00
JST
e7e72d13a9 Merge pull request #12137 from keithjgrant/10831-inventory-source-project-validation
remove incorrect form error message in inv source
2022-05-02 17:00:15 -03:00
kialam
4bbdf1ec8a Merge pull request #12138 from kialam/add-directory-dependabot
Add directory destination to dependabot yaml file
2022-05-02 12:30:53 -07:00
Kia Lam
4596df449e Add directory. 2022-05-02 12:07:50 -07:00
Christian M. Adams
2b0846e8a2 Bump Receptorctl to 1.2.3 2022-05-02 14:41:04 -04:00
Kersom
ecbb636ba1 Merge pull request #12136 from nixocio/ui_fix_css_schedules
Align items on schedule form
2022-05-02 14:29:57 -04:00
Keith J. Grant
e3aed9dad4 remove incorrect form error message in inv source 2022-05-02 10:34:33 -07:00
JST
213983a322 Merge pull request #12078 from AlexSCorey/12058-CleanUpReactWarnings
Cleans up some console warnings.
2022-05-02 13:13:20 -03:00
nixocio
2977084787 Align items on schedule form
Align items on schedule form
2022-05-02 11:11:34 -04:00
Sarah Akus
b6362a63cc Merge pull request #12134 from ansible/add-deleted-inventory-locator-for-JT-detail
add new locator for deleted inventory in JT detail screen
2022-04-29 16:18:45 -04:00
akus062381
7517ba820b add new locator for deleted inventory 2022-04-29 15:57:32 -04:00
Alan Rominger
29d60844a8 Fix notification timing issue by sending in the latter of 2 events (#12110)
* Track host_status_counts and use that to process notifications

* Remove now unused setting

* Back out changes to callback class not needed after all

* Skirt the need for duck typing by leaning on the cached field

* Delete tests for deleted task

* Revert "Back out changes to callback class not needed after all"

This reverts commit 3b8ae350d218991d42bffd65ce4baac6f41926b2.

* Directly hardcode stats_event_type for callback class

* Fire notifications if stats event was never sent

* Remove test content for deleted methods

* Add placeholder for when no hosts matched

* Make field default be None, denote events processed with empty dict

* Make UI process null value for host_status_counts

* Fix tracking of EOF dispatch for system jobs

* Reorganize EVENT_MAP into class properties

* Consolidate conditional I missed from EVENT_MAP refactor

* Give up on the null condition, also applies for empty hosts

* Remove cls position argument not being used

* Move wrapup method out of class, add tests
2022-04-29 13:54:31 -04:00
Alex Corey
41b0607d7e Merge pull request #12108 from marshmalien/org-host-credTypes-helpText
Add organization, host, and credential type detail view help text
2022-04-29 13:13:41 -04:00
John Westcott IV
13f7166a30 Fixing write location of ssh_key-data-cert.pub (#12122) 2022-04-29 12:22:09 -04:00
Sarah Akus
0cc9b84ead Merge pull request #11998 from mabashian/344-host-count
Adds total host count to inv and smart inv details views
2022-04-29 10:37:26 -04:00
JST
68ee4311bf Merge pull request #12128 from mabashian/11990-schedule-prompt-tags-v2
Add tags/skip tags to the list of things that will cause the Prompt button to be displayed on the schedule form
2022-04-28 17:23:13 -03:00
Alex Corey
6e6c3f676e Merge pull request #12120 from AlexSCorey/addDependabot
Add dependabot for ui
2022-04-28 15:43:03 -04:00
John Westcott IV
c67f50831b Modifying schedules API to allow for rrulesets #5733 (#12043)
* Added schedule_rruleset lookup plugin for awx.awx
* Added DB migration for rrule size
* Updated schedule docs
* The schedule API endpoint will now return an array of errors on rule validation to try and inform the user of all errors instead of just the first
2022-04-28 15:38:20 -04:00
Alex Corey
50ef234bd6 Update .github/dependabot.yml
Co-authored-by: Marliana Lara <marliana.lara@gmail.com>
2022-04-28 15:14:08 -04:00
Jeff Bradberry
2bef5ce09b Merge pull request #12099 from jbradberry/add-content-type-option-header
Add the X-Content-Type-Options nosniff header
2022-04-28 14:41:02 -04:00
Seth Foster
a49c4796f4 Merge pull request #12115 from sean-m-sullivan/workflow_node_updates
update workflow nodes to allow workflows and system jobs
2022-04-28 14:12:33 -04:00
Seth Foster
9eab9586e5 Merge pull request #12114 from sean-m-sullivan/awx_collection_alias
update awx collection workflow module
2022-04-28 13:42:30 -04:00
mabashian
cd35787a86 Adds total host count to inv and smart inv details views 2022-04-28 11:40:27 -04:00
mabashian
cbe84ff4f3 Add tags/skip tags to the list of things that will cause the Prompt button to be displayed on the schedule form 2022-04-28 11:33:46 -04:00
Alex Corey
410f38eccf add dependabot for ui 2022-04-28 09:30:54 -04:00
Sarah Akus
b885fc2d86 Merge pull request #12123 from marshmalien/12109-fix-user-role-association
Fix user role association in access modal
2022-04-27 19:23:53 -04:00
JST
4c93f5794a Merge pull request #12098 from nixocio/ui_work_flow
Fix broken job WFJT details when related JT is deleted
2022-04-27 17:47:28 -03:00
sean-m-sullivan
456bb75dcb update awx collection workflow module 2022-04-27 16:32:37 -04:00
sean-m-sullivan
02fd8b0d20 update workflow nodes 2022-04-27 16:18:00 -04:00
Marliana Lara
fbe6c80f86 Fix user role association in access modal 2022-04-27 16:01:15 -04:00
Jeremy Kimber
3d5f302d10 remove hardcoded public schema in cleanup_jobs.py 2022-04-27 12:45:15 -05:00
Sarah Akus
856a2c1734 Merge pull request #12107 from keithjgrant/12101-job-output-single-item-pagination
fix off-by-one error in job output pagination
2022-04-27 10:43:20 -04:00
John Westcott IV
4277b73438 Adding /etc/supervisord.conf to sosreports (#12104) 2022-04-27 10:34:45 -04:00
Alex Corey
2888f9f8d0 Cleans up some console warnings. 2022-04-26 17:17:41 -04:00
Jeff Bradberry
68221cdcbe Merge pull request #12106 from jbradberry/django-bump
Bump Django to 3.2.13
2022-04-26 15:07:52 -04:00
Sean Sullivan
f50501cc2a update awx.awx collection to allow remote project. (#12093) 2022-04-26 15:07:29 -04:00
Marliana Lara
c84fac65e0 Add organization, host, and credential type detail view help text. 2022-04-26 11:50:36 -04:00
Jeff Bradberry
d64c457b3d Bump Django to 3.2.13 2022-04-26 10:34:28 -04:00
Keith J. Grant
1bd5a880dc fix off-by-one error in job output pagination 2022-04-25 13:09:44 -07:00
Jeff Bradberry
47d5a89f40 Add the X-Content-Type-Options nosniff header 2022-04-25 13:45:16 -04:00
nixocio
6060e7e29f Fix broken job WFJT details when related JT is deleted
Fix broken job WFJT details when related JT is deleted
2022-04-25 12:33:12 -04:00
Alan Rominger
29702400f1 Avoid parent instance update when status was unchanged 2022-04-22 09:07:03 -04:00
Jeff Bradberry
b562d5cc88 Look up the correct top-level resource name when reconstructing foreign keys
during an awx-cli export.
2022-03-18 10:32:33 -04:00
Andrew Kelling
dfde30798e Update README.md
Cleaned up wording
2021-12-07 11:59:11 -07:00
781 changed files with 52138 additions and 28561 deletions


@@ -1,3 +1,2 @@
awx/ui/node_modules
Dockerfile
.git

.github/BOTMETA.yml

@@ -1,17 +0,0 @@
---
files:
awx/ui/:
labels: component:ui
maintainers: $team_ui
awx/api/:
labels: component:api
maintainers: $team_api
awx/main/:
labels: component:api
maintainers: $team_api
installer/:
labels: component:installer
macros:
team_api: wwitzel3 matburt chrismeyersfsu cchurch AlanCoding ryanpetrello rooftopcellist
team_ui: jlmitch5 jaredevantabor mabashian marshmalien benthomasson jakemcdermott

.github/CODEOWNERS

@@ -1 +0,0 @@
workflows/e2e_test.yml @tiagodread @shanemcd @jakemcdermott


@@ -6,17 +6,37 @@ practices regarding responsible disclosure, see
https://www.ansible.com/security
-->
<!--
PLEASE DO NOT USE A BLANK TEMPLATE IN THE AWX REPO.
This is a legacy template used for internal testing ONLY.
Any issues opened with this template will be automatically closed.
Instead use the bug or feature request.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
- Feature Idea
- Documentation
- Breaking Change
- New or Enhanced Feature
- Bug, Docs Fix or other nominal change
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- API
- UI
- Collection
- Docs
- CLI
- Other
##### SUMMARY
<!-- Briefly describe the problem. -->


@@ -1,13 +1,12 @@
---
name: Bug Report
description: Create a report to help us improve
description: "🐞 Create a report to help us improve"
body:
- type: markdown
attributes:
value: |
Issues are for **concrete, actionable bugs and feature requests** only. For debugging help or technical support, please use:
- The #ansible-awx channel on irc.libera.chat
- The awx project mailing list, https://groups.google.com/forum/#!forum/awx-project
Bug Report issues are for **concrete, actionable bugs** only.
For debugging help or technical support, please see the [Get Involved section of our README](https://github.com/ansible/awx#get-involved)
- type: checkboxes
id: terms
@@ -24,7 +23,7 @@ body:
- type: textarea
id: summary
attributes:
label: Summary
label: Bug Summary
description: Briefly describe the problem.
validations:
required: false
@@ -45,6 +44,9 @@ body:
- label: UI
- label: API
- label: Docs
- label: Collection
- label: CLI
- label: Other
- type: dropdown
id: awx-install-method
@@ -57,9 +59,8 @@ body:
- minikube
- openshift
- minishift
- docker on linux
- docker for mac
- boot2docker
- docker development environment
- N/A
validations:
required: true

.github/ISSUE_TEMPLATE/config.yml

@@ -0,0 +1,12 @@
---
blank_issues_enabled: true
contact_links:
- name: For debugging help or technical support
url: https://github.com/ansible/awx#get-involved
about: For general debugging or technical support please see the Get Involved section of our readme.
- name: 📝 Ansible Code of Conduct
url: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
about: AWX uses the Ansible Code of Conduct; ❤ Be nice to other members of the community. ☮ Behave.
- name: 💼 For Enterprise
url: https://www.ansible.com/products/engine?utm_medium=github&utm_source=issue_template_chooser
about: Red Hat offers support for the Ansible Automation Platform


@@ -1,17 +0,0 @@
---
name: "✨ Feature request"
about: Suggest an idea for this project
---
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://web.libera.chat/?channels=#ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Feature Idea
##### SUMMARY
<!-- Briefly describe the problem or desired enhancement. -->


@@ -0,0 +1,88 @@
---
name: ✨ Feature request
description: Suggest an idea for this project
body:
- type: markdown
attributes:
value: |
Feature Request issues are for **feature requests** only.
For debugging help or technical support, please see the [Get Involved section of our README](https://github.com/ansible/awx#get-involved)
- type: checkboxes
id: terms
attributes:
label: Please confirm the following
options:
- label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
required: true
- label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
required: true
- label: I understand that AWX is open source software provided for free and that I might not receive a timely response.
required: true
- type: dropdown
id: feature-type
attributes:
label: Feature type
description: >-
What kind of feature is this?
multiple: false
options:
- "New Feature"
- "Enhancement to Existing Feature"
validations:
required: true
- type: textarea
id: summary
attributes:
label: Feature Summary
description: Briefly describe the desired enhancement.
validations:
required: true
- type: checkboxes
id: components
attributes:
label: Select the relevant components
options:
- label: UI
- label: API
- label: Docs
- label: Collection
- label: CLI
- label: Other
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to reproduce
description: >-
Describe the necessary steps to understand the scenario of the requested enhancement.
Include all the steps that will help the developer and QE team understand what you are requesting.
validations:
required: true
- type: textarea
id: current-results
attributes:
label: Current results
description: What is currently happening in the scenario?
validations:
required: true
- type: textarea
id: sugested-results
attributes:
label: Suggested feature result
description: What is the result this new feature will bring?
validations:
required: true
- type: textarea
id: additional-information
attributes:
label: Additional information
description: Please provide any other information you think is relevant that could help us understand your feature request.
validations:
required: false


@@ -1,9 +0,0 @@
---
name: "\U0001F525 Security bug report"
about: How to report security vulnerabilities
---
For all security related bugs, email security@ansible.com instead of using this issue tracker and you will receive a prompt response.
For more information on the Ansible community's practices regarding responsible disclosure, see https://www.ansible.com/security


@@ -1,9 +0,0 @@
Bug Report: type:bug
Bugfix Pull Request: type:bug
Feature Request: type:enhancement
Feature Pull Request: type:enhancement
UI: component:ui
API: component:api
Installer: component:installer
Docs Pull Request: component:docs
Documentation: component:docs


@@ -1,11 +1,3 @@
<!--- changelog-entry
# Fill in 'msg' below to have an entry automatically added to the next release changelog.
# Leaving 'msg' blank will not generate a changelog entry for this PR.
# Please ensure this is a simple (and readable) one-line string.
---
msg: ""
-->
##### SUMMARY
<!--- Describe the change, including rationale and design decisions -->
@@ -17,15 +9,18 @@ the change does.
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Pull Request
- Bugfix Pull Request
- Docs Pull Request
- Breaking Change
- New or Enhanced Feature
- Bug, Docs Fix or other nominal change
##### COMPONENT NAME
<!--- Name of the module/plugin/module/task -->
- API
- UI
- Collection
- CLI
- Docs
- Other
##### AWX VERSION
<!--- Paste verbatim output from `make VERSION` between quotes below -->

.github/dependabot.yml

@@ -0,0 +1,19 @@
version: 2
updates:
- package-ecosystem: "npm"
directory: "/awx/ui"
schedule:
interval: "monthly"
open-pull-requests-limit: 5
allow:
- dependency-type: "production"
reviewers:
- "AlexSCorey"
- "keithjgrant"
- "kialam"
- "mabashian"
- "marshmalien"
labels:
- "component:ui"
- "dependencies"
target-branch: "devel"


@@ -1,12 +1,16 @@
needs_triage:
- '.*'
"type:bug":
- "Please confirm the following"
- "Bug Summary"
"type:enhancement":
- "Feature Idea"
- "Feature Summary"
"component:ui":
- "\\[X\\] UI"
"component:api":
- "\\[X\\] API"
"component:docs":
- "\\[X\\] Docs"
"component:awx_collection":
- "\\[X\\] Collection"
"component:cli":
- "\\[X\\] awxkit"


@@ -1,14 +1,19 @@
"component:api":
- any: ['awx/**/*', '!awx/ui/*']
- any: ["awx/**/*", "!awx/ui/**"]
"component:ui":
- any: ['awx/ui/**/*']
- any: ["awx/ui/**/*"]
"component:docs":
- any: ['docs/**/*']
- any: ["docs/**/*"]
"component:cli":
- any: ['awxkit/**/*']
- any: ["awxkit/**/*"]
"component:collection":
- any: ['awx_collection/**/*']
"component:awx_collection":
- any: ["awx_collection/**/*"]
"dependencies":
- any: ["awx/ui/package.json"]
- any: ["awx/requirements/*.txt"]
- any: ["awx/requirements/requirements.in"]


@@ -1,31 +1,114 @@
## General
- For the roundup of all the different mailing lists available from AWX, Ansible, and beyond visit: https://docs.ansible.com/ansible/latest/community/communication.html
- For the roundup of all the different mailing lists available from AWX, Ansible, and beyond visit: https://docs.ansible.com/ansible/latest/community/communication.html
- Hello, we think your question is answered in our FAQ. Does this: https://www.ansible.com/products/awx-project/faq cover your question?
- You can find the latest documentation here: https://docs.ansible.com/automation-controller/latest/html/userguide/index.html
## Visit our mailing list
- Hello, your question seems like a good one to ask on our mailing list at https://groups.google.com/g/awx-project. You can also join #ansible-awx on https://libera.chat/ and ask your question there.
## Create an issue
- Hello, thanks for reaching out on list. We think this merits an issue on our Github, https://github.com/ansible/awx/issues. If you could open an issue up on Github it will get tagged and integrated into our planning and workflow. All future work will be tracked there.
## Create a Pull Request
- Hello, we think your idea is good, please consider contributing a PR for this, following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
## PRs/Issues
## Receptor
- You can find the receptor docs here: https://receptor.readthedocs.io/en/latest/
- Hello, your issue seems related to receptor, could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
### Visit our mailing list
- Hello, this appears to be less of a bug report or feature request and more of a question. Could you please ask this on our mailing list? See https://github.com/ansible/awx/#get-involved for information for ways to connect with us.
## Ansible Engine not AWX
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on the Ansible-devel specific mailing list: https://groups.google.com/g/ansible-devel
### Denied Submission
- Hi! \
\
Thanks very much for your submission to AWX. It means a lot to us that you have taken time to contribute. \
\
At this time we do not want to merge this PR. Our reasons for this are: \
\
(A) INSERT ITEM HERE \
\
Please know that we are always up for discussion but this project is very active. Because of this, we're unlikely to see comments made on closed PRs, and we lock them after some time. If you or anyone else has any further questions, please let us know by using any of the communication methods listed in the page below: \
\
https://github.com/ansible/awx/#get-involved \
\
In the future, sometimes starting a discussion on the development list prior to implementing a feature can make getting things included a little easier, but it is not always necessary. \
\
Thank you once again for this and your interest in AWX!
### No Progress Issue
- Hi! \
\
Thank you very much for this issue. It means a lot to us that you have taken time to contribute by opening this report. \
\
On this issue, there were comments added but it has been some time since then without response. At this time we are closing this issue. If you get time to address the comments we can reopen the issue if you can contact us by using any of the communication methods listed in the page below: \
\
https://github.com/ansible/awx/#get-involved \
\
Thank you once again for this and your interest in AWX!
### No Progress PR
- Hi! \
\
Thank you very much for your submission to AWX. It means a lot to us that you have taken time to contribute. \
\
On this PR, changes were requested but it has been some time since then. We think this PR has merit but without the requested changes we are unable to merge it. At this time we are closing your PR. If you get time to address the changes you are welcome to open another PR or we can reopen this PR upon request if you contact us by using any of the communication methods listed in the page below: \
\
https://github.com/ansible/awx/#get-involved \
\
Thank you once again for this and your interest in AWX!
## Common
### Give us more info
- Hello, we'd love to help, but we need a little more information about the problem you're having. Screenshots, log outputs, or any reproducers would be very helpful.
### Code of Conduct
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html
### EE Contents / Community General
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://ansible-builder.readthedocs.io/en/stable/ \
\
The Ansible Community is looking at building an EE that corresponds to all of the collections inside the ansible package. That may help you if and when it happens; see https://github.com/ansible-community/community-topics/issues/31 for details.
## Mailing List Triage
### Create an issue
- Hello, thanks for reaching out on list. We think this merits an issue on our Github, https://github.com/ansible/awx/issues. If you could open an issue up on Github it will get tagged and integrated into our planning and workflow. All future work will be tracked there. Issues should include as much information as possible, including screenshots, log outputs, or any reproducers.
### Create a Pull Request
- Hello, we think your idea is good! Please consider contributing a PR for this following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
### Receptor
- You can find the receptor docs here: https://receptor.readthedocs.io/en/latest/
- Hello, your issue seems related to receptor. Could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
### Ansible Engine not AWX
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on the Ansible-devel specific mailing list: https://groups.google.com/g/ansible-devel
- Hello, your question seems to be about using Ansible, not about AWX. https://groups.google.com/g/ansible-project is the best place to visit for user questions about Ansible. Thanks!
## Ansible Galaxy not AWX
- Hey there, that sounds like an FAQ question, did this: https://www.ansible.com/products/awx-project/faq cover your question?
### Ansible Galaxy not AWX
- Hey there. That sounds like an FAQ question. Did this: https://www.ansible.com/products/awx-project/faq cover your question?
## Contributing Guidelines
- AWX: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
### Contributing Guidelines
- AWX: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
- AWX-Operator: https://github.com/ansible/awx-operator/blob/devel/CONTRIBUTING.md
## Code of Conduct
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html
### Oracle AWX
We'd be happy to help if you can reproduce this with AWX since we do not have Oracle's Linux Automation Manager. If you need help with this specific version of Oracle's Linux Automation Manager, you will need to contact Oracle for support.
### AWX Release
Subject: Announcing AWX Xa.Ya.za and AWX-Operator Xb.Yb.zb
- Hi all, \
\
We're happy to announce that the next release of AWX, version <b>`Xa.Ya.za`</b> is now available! \
In addition AWX Operator version <b>`Xb.Yb.zb`</b> has also been released! \
\
Please see the releases pages for more details: \
AWX: https://github.com/ansible/awx/releases/tag/Xa.Ya.za \
Operator: https://github.com/ansible/awx-operator/releases/tag/Xb.Yb.zb \
\
The AWX team.
## Try latest version
- Hello, this issue pertains to an older version of AWX. Try upgrading to the latest version and let us know if that resolves your issue.


@@ -111,6 +111,15 @@ jobs:
repository: ansible/awx-operator
path: awx-operator
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Install playbook dependencies
run: |
python3 -m pip install docker


@@ -19,3 +19,34 @@ jobs:
not-before: 2021-12-07T07:00:00Z
configuration-path: .github/issue_labeler.yml
enable-versioned-regex: 0
community:
runs-on: ubuntu-latest
name: Label Issue - Community
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org
uses: jannekem/run-python-script-action@v1
id: check_user
with:
script: |
import requests
headers = {'Accept': 'application/vnd.github+json', 'Authorization': 'token ${{ secrets.GITHUB_TOKEN }}'}
response = requests.get('${{ fromJson(toJson(github.event.issue.user.url)) }}/orgs?per_page=100', headers=headers)
is_member = False
for org in response.json():
if org['login'] == 'ansible':
is_member = True
if is_member:
print("User is member")
else:
print("User is community")
- name: Add community label if not a member
if: contains(steps.check_user.outputs.stdout, 'community')
uses: andymckay/labeler@e6c4322d0397f3240f0e7e30a33b5c5df2d39e90
with:
add-labels: "community"
repo-token: ${{ secrets.GITHUB_TOKEN }}
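
The inline script above boils down to a simple membership lookup against the GitHub REST API. Here is a hedged standalone sketch of that check; the `is_ansible_org_member` helper and the example user URL are illustrative, not part of the workflow:

```python
import os

import requests


def is_ansible_org_member(user_orgs_url: str, token: str) -> bool:
    """Return True if 'ansible' appears in the user's visible org memberships."""
    headers = {
        'Accept': 'application/vnd.github+json',
        'Authorization': f'token {token}',
    }
    response = requests.get(f'{user_orgs_url}?per_page=100', headers=headers)
    response.raise_for_status()
    # any() stops at the first match; the inline script scans the whole list,
    # which is equivalent for responses under the 100-item page size.
    return any(org.get('login') == 'ansible' for org in response.json())


if __name__ == '__main__':
    # Example user URL; the workflow takes this from the issue event payload.
    print(is_ansible_org_member('https://api.github.com/users/octocat/orgs', os.environ['GITHUB_TOKEN']))
```

Note that the `/users/{username}/orgs` endpoint only lists publicly visible memberships, so an Ansible org member whose membership is private would still be labeled `community`.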


@@ -18,3 +18,34 @@ jobs:
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr_labeler.yml
community:
runs-on: ubuntu-latest
name: Label PR - Community
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v4
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org
uses: jannekem/run-python-script-action@v1
id: check_user
with:
script: |
import requests
headers = {'Accept': 'application/vnd.github+json', 'Authorization': 'token ${{ secrets.GITHUB_TOKEN }}'}
response = requests.get('${{ fromJson(toJson(github.event.pull_request.user.url)) }}/orgs?per_page=100', headers=headers)
is_member = False
for org in response.json():
if org['login'] == 'ansible':
is_member = True
if is_member:
print("User is member")
else:
print("User is community")
- name: Add community label if not a member
if: contains(steps.check_user.outputs.stdout, 'community')
uses: andymckay/labeler@e6c4322d0397f3240f0e7e30a33b5c5df2d39e90
with:
add-labels: "community"
repo-token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/pr_body_check.yml (new file, 45 lines)

@@ -0,0 +1,45 @@
---
name: PR Check
env:
BRANCH: ${{ github.base_ref || 'devel' }}
on:
pull_request:
types: [opened, edited, reopened, synchronize]
jobs:
pr-check:
name: Scan PR description for semantic versioning keywords
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Write PR body to a file
run: |
cat >> pr.body << __SOME_RANDOM_PR_EOF__
${{ github.event.pull_request.body }}
__SOME_RANDOM_PR_EOF__
- name: Display the received body for troubleshooting
run: cat pr.body
# We want to write these out individually just in case the options were joined on a single line
- name: Check for each of the lines
run: |
grep "Bug, Docs Fix or other nominal change" pr.body > Z
grep "New or Enhanced Feature" pr.body > Y
grep "Breaking Change" pr.body > X
exit 0
# We exit 0 and set the shell to prevent the grep exit codes from failing this step
# See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash {0}
- name: Check for exactly one item
run: |
if [ $(cat X Y Z | wc -l) != 1 ] ; then
echo "The PR body must contain exactly one of [ 'Bug, Docs Fix or other nominal change', 'New or Enhanced Feature', 'Breaking Change' ]"
echo "We counted $(cat X Y Z | wc -l)"
echo "See the default PR body for examples"
exit 255;
else
exit 0;
fi
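
The check above sums the line counts of three greps (captured in files Z, Y, and X) and requires exactly one hit overall. A minimal Python sketch of the same rule, with `check_pr_body` as a hypothetical helper name:

```python
KEYWORDS = (
    "Bug, Docs Fix or other nominal change",
    "New or Enhanced Feature",
    "Breaking Change",
)


def check_pr_body(body: str) -> None:
    # Count keyword/line pairs, mirroring the sum of the three greps' line counts.
    hits = sum(1 for line in body.splitlines() for kw in KEYWORDS if kw in line)
    if hits != 1:
        raise SystemExit(f"The PR body must contain exactly one of {list(KEYWORDS)}; counted {hits}")


check_pr_body("Adds a new widget.\nNew or Enhanced Feature\n")  # passes silently
```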


@@ -21,7 +21,7 @@ jobs:
- name: Install dependencies
run: |
python${{ env.py_version }} -m pip install wheel twine
python${{ env.py_version }} -m pip install wheel twine setuptools-scm
- name: Set official collection namespace
run: echo collection_namespace=awx >> $GITHUB_ENV
@@ -33,7 +33,7 @@ jobs:
- name: Build collection and publish to galaxy
run: |
COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz
@@ -70,4 +70,4 @@ jobs:
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:latest
docker push quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository }}:latest


@@ -100,23 +100,10 @@ jobs:
AWX_TEST_IMAGE: ${{ github.repository }}
AWX_TEST_VERSION: ${{ github.event.inputs.version }}
- name: Generate changelog
uses: shanemcd/simple-changelog-generator@v1
id: changelog
with:
repo: "${{ github.repository }}"
- name: Write changelog to file
run: |
cat << 'EOF' > /tmp/awx-changelog
${{ steps.changelog.outputs.changelog }}
EOF
- name: Create draft release for AWX
working-directory: awx
run: |
ansible-playbook -v tools/ansible/stage.yml \
-e changelog_path=/tmp/awx-changelog \
-e repo=${{ github.repository }} \
-e awx_image=ghcr.io/${{ github.repository }} \
-e version=${{ github.event.inputs.version }} \


@@ -0,0 +1,29 @@
---
name: Dependency PR Update
on:
pull_request:
types: [labeled, opened, reopened]
jobs:
pr-check:
name: Update Dependabot PRs
if: contains(github.event.pull_request.labels.*.name, 'dependencies') && contains(github.event.pull_request.labels.*.name, 'component:ui')
runs-on: ubuntu-latest
steps:
- name: Checkout branch
uses: actions/checkout@v3
- name: Update PR Body
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
OWNER: ${{ github.repository_owner }}
REPO: ${{ github.event.repository.name }}
PR: ${{github.event.pull_request.number}}
PR_BODY: ${{github.event.pull_request.body}}
run: |
gh pr checkout ${{ env.PR }}
echo "${{ env.PR_BODY }}" > my_pr_body.txt
echo "" >> my_pr_body.txt
echo "Bug, Docs Fix or other nominal change" >> my_pr_body.txt
gh pr edit ${{env.PR}} --body-file my_pr_body.txt
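
The run step above makes Dependabot PRs pass the new `pr_body_check` workflow by appending the nominal-change qualifier to the PR body before `gh pr edit` writes it back. The string manipulation amounts to this sketch (`qualify_dependabot_body` is a hypothetical name):

```python
def qualify_dependabot_body(body: str) -> str:
    # Append the qualifier line that the pr_body_check workflow expects.
    return body.rstrip('\n') + '\n\nBug, Docs Fix or other nominal change\n'


print(qualify_dependabot_body("Bumps lodash from 4.17.20 to 4.17.21."))
```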

.gitignore (1 line changed)

@@ -38,7 +38,6 @@ awx/ui/build
awx/ui/.env.local
awx/ui/instrumented
rsyslog.pid
tools/prometheus/data
tools/docker-compose/ansible/awx_dump.sql
tools/docker-compose/Dockerfile
tools/docker-compose/_build


@@ -8,6 +8,8 @@ ignore: |
awx/ui/test/e2e/tests/smoke-vars.yml
awx/ui/node_modules
tools/docker-compose/_sources
# django template files
awx/api/templates/instance_install_bundle/**
extends: default


@@ -19,16 +19,17 @@ Have questions about this document or anything not covered here? Come chat with
- [Purging containers and images](#purging-containers-and-images)
- [Pre commit hooks](#pre-commit-hooks)
- [What should I work on?](#what-should-i-work-on)
- [Translations](#translations)
- [Submitting Pull Requests](#submitting-pull-requests)
- [PR Checks run by Zuul](#pr-checks-run-by-zuul)
- [Reporting Issues](#reporting-issues)
- [Getting Help](#getting-help)
## Things to know prior to submitting code
- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push docs](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.libera.chat, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
@@ -42,8 +43,7 @@ The AWX development environment workflow and toolchain uses Docker and the docke
Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker themselves maintain a separate repo that tracks more closely to the latest releases.
For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows)
respectively.
For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows) respectively.
For Linux platforms, refer to the following from Docker:
@@ -79,17 +79,13 @@ See the [README.md](./tools/docker-compose/README.md) for docs on how to build t
### Building API Documentation
AWX includes support for building [Swagger/OpenAPI
documentation](https://swagger.io). To build the documentation locally, run:
AWX includes support for building [Swagger/OpenAPI documentation](https://swagger.io). To build the documentation locally, run:
```bash
(container)/awx_devel$ make swagger
```
This will write a file named `swagger.json` that contains the API specification
in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
This will write a file named `swagger.json` that contains the API specification in OpenAPI format. A variety of online tools are available for translating this data into more consumable formats (such as HTML). http://editor.swagger.io is an example of one such service.
### Accessing the AWX web interface
@@ -115,20 +111,30 @@ While you can use environment variables to skip the pre-commit hooks GitHub will
## What should I work on?
We have a ["good first issue" label](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) we put on some issues that might be a good starting point for new contributors.
Fixing bugs and updating the documentation are always appreciated, so reviewing the backlog of issues is always a good place to start.
For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).
If it has someone assigned to it, then that person is responsible for working on the enhancement. If you feel like you could contribute, reach out to that person.
Fixing bugs, adding translations, and updating the documentation are always appreciated, so reviewing the backlog of issues is always a good place to start. For extra information on debugging tools, see [Debugging](./docs/debugging/).
**NOTES**
> Issue assignment will only be done for maintainers of the project. If you decide to work on an issue, please feel free to add a comment in the issue to let others know that you are working on it; but know that we will accept the first pull request from whoever is able to fix an issue. Once your PR is accepted we can add you as an assignee to an issue upon request.
**NOTE**
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the `#ansible-awx` channel on irc.libera.chat, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
**NOTE**
> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](./awx/ui/README.md).
### Translations
At this time we do not accept PRs that add new language translations, as we have an automated process for generating our translations. Translations require constant care as new strings are added and changed in the code base, so the .po files are overwritten during every translation release cycle. We also can't support a large number of translations in AWX, as it's an open source project and each language adds time and cost to maintain. If you would like to see AWX translated into a new language, please create an issue and ask others you know to upvote it. Our translation team will review the needs of the community and see what they can do about supporting additional languages.
If you find an issue with an existing translation, please see the [Reporting Issues](#reporting-issues) section to open an issue and our translation team will work with you on a resolution.
## Submitting Pull Requests
Fixes and Features for AWX will go through the Github pull request process. Submit your pull request (PR) against the `devel` branch.
@@ -152,28 +158,14 @@ We like to keep our commit history clean, and will require resubmission of pull
Sometimes it might take us a while to fully review your PR. We try to keep the `devel` branch in good working order, and so we review requests carefully. Please be patient.
All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.
## PR Checks run by Zuul
Zuul jobs for awx are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.
Zuul runs the following checks that must pass:
1. `tox-awx-api-lint`
2. `tox-awx-ui-lint`
3. `tox-awx-api`
4. `tox-awx-ui`
5. `tox-awx-swagger`
Zuul runs the following checks that are non-voting (can not pass but serve to inform PR reviewers):
1. `tox-awx-detect-schema-change`
This check generates the schema and diffs it against a reference copy of the `devel` version of the schema.
Reviewers should inspect the `job-output.txt.gz` related to the check if there is a failure (grep for `diff -u -b` to find the beginning of the diff).
If the schema change is expected and makes sense in relation to the changes made by the PR, then you are good to go!
If not, the schema changes should be fixed, but this decision must be enforced by reviewers.
When your PR is initially submitted, the checks will not be run until a maintainer allows them to be. Once a maintainer has done a quick review of your work, the PR will have the linter and unit tests run against it via GitHub Actions, and the status reported in the PR.
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issues, we ask that you please view our [Issues guide](./ISSUES.md).
## Getting Help
If you require additional assistance, please reach out to us at `#ansible-awx` on irc.libera.chat, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
For extra information on debugging tools, see [Debugging](./docs/debugging/).


@@ -3,7 +3,7 @@ recursive-include awx *.po
recursive-include awx *.mo
recursive-include awx/static *
recursive-include awx/templates *.html
recursive-include awx/api/templates *.md *.html
recursive-include awx/api/templates *.md *.html *.yml
recursive-include awx/ui/build *.html
recursive-include awx/ui/build *
recursive-include awx/playbooks *.yml

Makefile (136 lines changed)

@@ -5,8 +5,8 @@ NPM_BIN ?= npm
CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION := $(shell $(PYTHON) setup.py --version)
COLLECTION_VERSION := $(shell $(PYTHON) setup.py --version | cut -d . -f 1-3)
VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py)
COLLECTION_VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
# NOTE: This defaults the container image version to the branch that's active
COMPOSE_TAG ?= $(GIT_BRANCH)
@@ -15,6 +15,12 @@ MAIN_NODE_TYPE ?= hybrid
KEYCLOAK ?= false
# If set to true docker-compose will also start an ldap instance
LDAP ?= false
# If set to true docker-compose will also start a splunk instance
SPLUNK ?= false
# If set to true docker-compose will also start a prometheus instance
PROMETHEUS ?= false
# If set to true docker-compose will also start a grafana instance
GRAFANA ?= false
VENV_BASE ?= /var/lib/awx/venv
@@ -43,7 +49,7 @@ I18N_FLAG_FILE = .i18n_built
.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
develop refresh adduser migrate dbchange \
receiver test test_unit test_coverage coverage_html \
dev_build release_build sdist \
sdist \
ui-release ui-devel \
VERSION PYTHON_VERSION docker-compose-sources \
.git/hooks/pre-commit
@@ -66,7 +72,7 @@ clean-languages:
rm -f $(I18N_FLAG_FILE)
find ./awx/locale/ -type f -regex ".*\.mo$" -delete
# Remove temporary build files, compiled Python files.
## Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
rm -rf awx/public
rm -rf awx/lib/site-packages
@@ -88,7 +94,7 @@ clean-api:
clean-awxkit:
rm -rf awxkit/*.egg-info awxkit/.tox awxkit/build/*
# convenience target to assert environment variables are defined
## convenience target to assert environment variables are defined
guard-%:
@if [ "$${$*}" = "" ]; then \
echo "The required environment variable '$*' is not set"; \
@@ -111,7 +117,7 @@ virtualenv_awx:
fi; \
fi
# Install third-party requirements needed for AWX's environment.
## Install third-party requirements needed for AWX's environment.
# this does not use system site packages intentionally
requirements_awx: virtualenv_awx
if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
@@ -130,7 +136,7 @@ requirements_dev: requirements_awx requirements_awx_dev
requirements_test: requirements
# "Install" awx package in development mode.
## "Install" awx package in development mode.
develop:
@if [ "$(VIRTUAL_ENV)" ]; then \
pip uninstall -y awx; \
@@ -147,21 +153,21 @@ version_file:
fi; \
$(PYTHON) -c "import awx; print(awx.__version__)" > /var/lib/awx/.awx_version; \
# Refresh development environment after pulling new code.
## Refresh development environment after pulling new code.
refresh: clean requirements_dev version_file develop migrate
# Create Django superuser.
## Create Django superuser.
adduser:
$(MANAGEMENT_COMMAND) createsuperuser
# Create database tables and apply any new migrations.
## Create database tables and apply any new migrations.
migrate:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(MANAGEMENT_COMMAND) migrate --noinput
# Run after making changes to the models to create a new migration.
## Run after making changes to the models to create a new migration.
dbchange:
$(MANAGEMENT_COMMAND) makemigrations
@@ -198,7 +204,7 @@ uwsgi: collectstatic
--logformat "%(addr) %(method) %(uri) - %(proto) %(status)"
awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel "$(DEV_RELOAD_COMMAND)"
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
daphne:
@if [ "$(VENV_BASE)" ]; then \
@@ -212,7 +218,7 @@ wsbroadcast:
fi; \
$(PYTHON) manage.py run_wsbroadcast
# Run to start the background task dispatcher for development.
## Run to start the background task dispatcher for development.
dispatcher:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -220,7 +226,7 @@ dispatcher:
$(PYTHON) manage.py run_dispatcher
# Run to start the zeromq callback receiver
## Run to start the zeromq callback receiver
receiver:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -267,12 +273,12 @@ api-lint:
yamllint -s .
awx-link:
[ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/setup.py egg_info_dev
[ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/tools/scripts/egg_info_dev
cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/$(PYTHON)/site-packages/awx.egg-link
TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
PYTEST_ARGS ?= -n auto
# Run all API unit tests.
## Run all API unit tests.
test:
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -286,6 +292,7 @@ COLLECTION_TEST_TARGET ?=
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
COLLECTION_TEMPLATE_VERSION ?= false
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
@@ -313,7 +320,7 @@ awx_collection_build: $(shell find awx_collection -type f)
-e collection_package=$(COLLECTION_PACKAGE) \
-e collection_namespace=$(COLLECTION_NAMESPACE) \
-e collection_version=$(COLLECTION_VERSION) \
-e '{"awx_template_version":false}'
-e '{"awx_template_version": $(COLLECTION_TEMPLATE_VERSION)}'
ansible-galaxy collection build awx_collection_build --force --output-path=awx_collection_build
build_collection: awx_collection_build
@@ -334,23 +341,24 @@ test_unit:
fi; \
py.test awx/main/tests/unit awx/conf/tests/unit awx/sso/tests/unit
# Run all API unit tests with coverage enabled.
## Run all API unit tests with coverage enabled.
test_coverage:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
py.test --create-db --cov=awx --cov-report=xml --junitxml=./reports/junit.xml $(TEST_DIRS)
# Output test coverage as HTML (into htmlcov directory).
## Output test coverage as HTML (into htmlcov directory).
coverage_html:
coverage html
# Run API unit tests across multiple Python/Django versions with Tox.
## Run API unit tests across multiple Python/Django versions with Tox.
test_tox:
tox -v
# Make fake data
DATA_GEN_PRESET = ""
## Make fake data
bulk_data:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -371,9 +379,10 @@ clean-ui:
rm -rf $(UI_BUILD_FLAG_FILE)
awx/ui/node_modules:
NODE_OPTIONS=--max-old-space-size=6144 $(NPM_BIN) --prefix awx/ui --loglevel warn ci
NODE_OPTIONS=--max-old-space-size=6144 $(NPM_BIN) --prefix awx/ui --loglevel warn --force ci
$(UI_BUILD_FLAG_FILE): awx/ui/node_modules
$(UI_BUILD_FLAG_FILE):
$(MAKE) awx/ui/node_modules
$(PYTHON) tools/scripts/compilemessages.py
$(NPM_BIN) --prefix awx/ui --loglevel warn run compile-strings
$(NPM_BIN) --prefix awx/ui --loglevel warn run build
@@ -417,21 +426,13 @@ ui-test-general:
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui/ test-general --runInBand
# Build a pip-installable package into dist/ with a timestamped version number.
dev_build:
$(PYTHON) setup.py dev_build
# Build a pip-installable package into dist/ with the release version number.
release_build:
$(PYTHON) setup.py release_build
HEADLESS ?= no
ifeq ($(HEADLESS), yes)
dist/$(SDIST_TAR_FILE):
else
dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE)
endif
$(PYTHON) setup.py $(SDIST_COMMAND)
$(PYTHON) -m build -s
ln -sf $(SDIST_TAR_FILE) dist/awx.tar.gz
sdist: dist/$(SDIST_TAR_FILE)
@@ -452,6 +453,11 @@ COMPOSE_OPTS ?=
CONTROL_PLANE_NODE_COUNT ?= 1
EXECUTION_NODE_COUNT ?= 2
MINIKUBE_CONTAINER_GROUP ?= false
EXTRA_SOURCES_ANSIBLE_OPTS ?=
ifneq ($(ADMIN_PASSWORD),)
EXTRA_SOURCES_ANSIBLE_OPTS := -e admin_password=$(ADMIN_PASSWORD) $(EXTRA_SOURCES_ANSIBLE_OPTS)
endif
docker-compose-sources: .git/hooks/pre-commit
@if [ $(MINIKUBE_CONTAINER_GROUP) = true ]; then\
@@ -466,7 +472,11 @@ docker-compose-sources: .git/hooks/pre-commit
-e execution_node_count=$(EXECUTION_NODE_COUNT) \
-e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
-e enable_keycloak=$(KEYCLOAK) \
-e enable_ldap=$(LDAP)
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) $(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
@@ -500,7 +510,7 @@ docker-compose-container-group-clean:
fi
rm -rf tools/docker-compose-minikube/_sources/
# Base development image build
## Base development image build
docker-compose-build:
ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True -e receptor_image=$(RECEPTOR_IMAGE)
DOCKER_BUILDKIT=1 docker build -t $(DEVEL_IMAGE_NAME) \
@@ -514,20 +524,17 @@ docker-clean:
fi
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm tools_awx_db
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose
# Docker Development Environment with Elastic Stack Connected
## Docker Development Environment with Elastic Stack Connected
docker-compose-elk: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
prometheus:
docker run -u0 --net=tools_default --link=`docker ps | egrep -o "tools_awx(_run)?_([^ ]+)?"`:awxweb --volume `pwd`/tools/prometheus:/prometheus --name prometheus -d -p 0.0.0.0:9090:9090 prom/prometheus --web.enable-lifecycle --config.file=/prometheus/prometheus.yml
docker-compose-container-group:
MINIKUBE_CONTAINER_GROUP=true make docker-compose
@@ -558,26 +565,34 @@ Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
-e template_dest=_build_kube_dev \
-e receptor_image=$(RECEPTOR_IMAGE)
## Build awx_kube_devel image for development on local Kubernetes environment.
awx-kube-dev-build: Dockerfile.kube-dev
DOCKER_BUILDKIT=1 docker build -f Dockerfile.kube-dev \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
## Build awx image for deployment on Kubernetes environment.
awx-kube-build: Dockerfile
DOCKER_BUILDKIT=1 docker build -f Dockerfile \
--build-arg VERSION=$(VERSION) \
--build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
--build-arg HEADLESS=$(HEADLESS) \
-t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
# Translation TASKS
# --------------------------------------
# generate UI .pot file, an empty template of strings yet to be translated
## generate UI .pot file, an empty template of strings yet to be translated
pot: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-template --clean
# generate UI .po files for each locale (will update translated strings for `en`)
## generate UI .po files for each locale (will update translated strings for `en`)
po: $(UI_BUILD_FLAG_FILE)
$(NPM_BIN) --prefix awx/ui --loglevel warn run extract-strings -- --clean
# generate API django .pot .po
LANG = "en-us"
LANG = "en_us"
## generate API django .pot .po
messages:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
@@ -586,3 +601,38 @@ messages:
print-%:
@echo $($*)
# HELP related targets
# --------------------------------------
HELP_FILTER=.PHONY
## Display help targets
help:
@printf "Available targets:\n"
@make -s help/generate | grep -vE "\w($(HELP_FILTER))"
## Display help for all targets
help/all:
@printf "Available targets:\n"
@make -s help/generate
## Generate help output from MAKEFILE_LIST
help/generate:
@awk '/^[-a-zA-Z_0-9%:\\\.\/]+:/ { \
helpMessage = match(lastLine, /^## (.*)/); \
if (helpMessage) { \
helpCommand = $$1; \
helpMessage = substr(lastLine, RSTART + 3, RLENGTH); \
gsub("\\\\", "", helpCommand); \
gsub(":+$$", "", helpCommand); \
printf " \x1b[32;01m%-35s\x1b[0m %s\n", helpCommand, helpMessage; \
} else { \
helpCommand = $$1; \
gsub("\\\\", "", helpCommand); \
gsub(":+$$", "", helpCommand); \
printf " \x1b[32;01m%-35s\x1b[0m %s\n", helpCommand, "No help available"; \
} \
} \
{ lastLine = $$0 }' $(MAKEFILE_LIST) | sort -u
@printf "\n"
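
The `help/generate` target pairs each `##` doc comment with the target defined on the following line. For illustration, a rough Python equivalent of that awk scan (a sketch; `extract_help` is a hypothetical name):

```python
import re


def extract_help(makefile_text: str) -> dict:
    """Map each make target to the '## ' doc comment on the line above it."""
    help_map, last_line = {}, ""
    target_re = re.compile(r"^([-a-zA-Z_0-9%\\./]+):")
    for line in makefile_text.splitlines():
        match = target_re.match(line)
        if match:
            doc = last_line[3:] if last_line.startswith("## ") else "No help available"
            help_map[match.group(1)] = doc
        last_line = line
    return help_map


sample = "## Create Django superuser.\nadduser:\n\t$(MANAGEMENT_COMMAND) createsuperuser\n"
print(extract_help(sample))  # {'adduser': 'Create Django superuser.'}
```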


@@ -6,9 +6,40 @@ import os
import sys
import warnings
from pkg_resources import get_distribution
__version__ = get_distribution('awx').version
def get_version():
version_from_file = get_version_from_file()
if version_from_file:
return version_from_file
else:
from setuptools_scm import get_version
version = get_version(root='..', relative_to=__file__)
return version
def get_version_from_file():
vf = version_file()
if vf:
with open(vf, 'r') as file:
return file.read().strip()
def version_file():
current_dir = os.path.dirname(os.path.abspath(__file__))
version_file = os.path.join(current_dir, '..', 'VERSION')
if os.path.exists(version_file):
return version_file
try:
import pkg_resources
__version__ = pkg_resources.get_distribution('awx').version
except pkg_resources.DistributionNotFound:
__version__ = get_version()
__all__ = ['__version__']
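
Taken together, the new logic resolves the version in three steps: installed package metadata first, then a `VERSION` file one directory above the package, and finally `setuptools_scm` against the git checkout. A condensed, runnable sketch of that fallback chain (simplified; not the AWX source itself):

```python
import os


def resolve_version(package: str = 'awx') -> str:
    # 1. Prefer the installed distribution's metadata.
    try:
        import pkg_resources
        return pkg_resources.get_distribution(package).version
    except Exception:  # DistributionNotFound when running from a bare source tree
        pass
    # 2. Fall back to a VERSION file one directory above this module.
    version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'VERSION')
    if os.path.exists(version_file):
        with open(version_file) as f:
            return f.read().strip()
    # 3. Last resort: derive the version from git metadata via setuptools_scm.
    from setuptools_scm import get_version
    return get_version(root='..', relative_to=__file__)
```
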
@@ -21,7 +52,6 @@ try:
except ImportError: # pragma: no cover
MODE = 'production'
import hashlib
try:
@@ -78,9 +108,10 @@ def oauth2_getattribute(self, attr):
# Custom method to override
# oauth2_provider.settings.OAuth2ProviderSettings.__getattribute__
from django.conf import settings
from oauth2_provider.settings import DEFAULTS
val = None
if 'migrate' not in sys.argv:
if (isinstance(attr, str)) and (attr in DEFAULTS) and (not attr.startswith('_')):
# certain Django OAuth Toolkit migrations actually reference
# setting lookups for references to model classes (e.g.,
# oauth2_settings.REFRESH_TOKEN_MODEL)
@@ -159,7 +190,7 @@ def manage():
sys.stdout.write('%s\n' % __version__)
# If running as a user without permission to read settings, display an
# error message. Allow --help to still work.
elif settings.SECRET_KEY == 'permission-denied':
elif not os.getenv('SKIP_SECRET_KEY_CHECK', False) and settings.SECRET_KEY == 'permission-denied':
if len(sys.argv) == 1 or len(sys.argv) >= 2 and sys.argv[1] in ('-h', '--help', 'help'):
execute_from_command_line(sys.argv)
sys.stdout.write('\n')


@@ -157,7 +157,7 @@ class FieldLookupBackend(BaseFilterBackend):
# A list of fields that we know can be filtered on without the possibility
# of introducing duplicates
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField)
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
def get_fields_from_lookup(self, model, lookup):
@@ -232,6 +232,9 @@ class FieldLookupBackend(BaseFilterBackend):
re.compile(value)
except re.error as e:
raise ValueError(e.args[0])
elif new_lookup.endswith('__iexact'):
if not isinstance(field, (CharField, TextField)):
raise ValueError(f'{field.name} is not a text field and cannot be filtered by case-insensitive search')
elif new_lookup.endswith('__search'):
related_model = getattr(field, 'related_model', None)
if not related_model:
@@ -258,8 +261,8 @@ class FieldLookupBackend(BaseFilterBackend):
search_filters = {}
needs_distinct = False
# Can only have two values: 'AND', 'OR'
# If 'AND' is used, an iterm must satisfy all condition to show up in the results.
# If 'OR' is used, an item just need to satisfy one condition to appear in results.
# If 'AND' is used, an item must satisfy all conditions to show up in the results.
# If 'OR' is used, an item just needs to satisfy one condition to appear in results.
search_filter_relation = 'OR'
for key, values in request.query_params.lists():
if key in self.RESERVED_NAMES:
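
The new `__iexact` branch rejects case-insensitive lookups against non-text fields before they reach the ORM. A minimal sketch of just that guard, assuming Django is installed (`validate_lookup` is a hypothetical helper, not the AWX API):

```python
from django.db.models import CharField, IntegerField, TextField


def validate_lookup(field, new_lookup: str) -> None:
    """Reject case-insensitive lookups on fields that are not text-backed."""
    if new_lookup.endswith('__iexact') and not isinstance(field, (CharField, TextField)):
        raise ValueError(f'{field.name} is not a text field and cannot be filtered by case-insensitive search')


validate_lookup(CharField(name='hostname'), 'hostname__iexact')  # accepted
# validate_lookup(IntegerField(name='forks'), 'forks__iexact')   # would raise ValueError
```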


@@ -63,7 +63,6 @@ __all__ = [
'SubDetailAPIView',
'ResourceAccessList',
'ParentMixin',
'DeleteLastUnattachLabelMixin',
'SubListAttachDetachAPIView',
'CopyAPIView',
'BaseUsersList',
@@ -98,7 +97,6 @@ class LoggedLoginView(auth_views.LoginView):
current_user = UserSerializer(self.request.user)
current_user = smart_str(JSONRenderer().render(current_user.data))
current_user = urllib.parse.quote('%s' % current_user, '')
ret.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
@@ -775,28 +773,6 @@ class SubListAttachDetachAPIView(SubListCreateAttachDetachAPIView):
return {'id': None}
class DeleteLastUnattachLabelMixin(object):
"""
Models for which you want the last instance to be deleted from the database
when the last disassociate is called should inherit from this class. Further,
the model should implement is_detached()
"""
def unattach(self, request, *args, **kwargs):
(sub_id, res) = super(DeleteLastUnattachLabelMixin, self).unattach_validate(request)
if res:
return res
res = super(DeleteLastUnattachLabelMixin, self).unattach_by_id(request, sub_id)
obj = self.model.objects.get(id=sub_id)
if obj.is_detached():
obj.delete()
return res
class SubDetailAPIView(ParentMixin, generics.RetrieveAPIView, GenericAPIView):
pass

View File

@@ -29,7 +29,6 @@ from django.utils.translation import gettext_lazy as _
from django.utils.encoding import force_str
from django.utils.text import capfirst
from django.utils.timezone import now
from django.utils.functional import cached_property
# Django REST Framework
from rest_framework.exceptions import ValidationError, PermissionDenied
@@ -155,6 +154,7 @@ SUMMARIZABLE_FK_FIELDS = {
'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed'),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'kubernetes', 'credential_type_id'),
'signature_validation_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'credential_type_id'),
'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed', 'type', 'canceled_on'),
'job_template': DEFAULT_SUMMARY_FIELDS,
'workflow_job_template': DEFAULT_SUMMARY_FIELDS,
@@ -615,7 +615,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
def validate(self, attrs):
attrs = super(BaseSerializer, self).validate(attrs)
try:
# Create/update a model instance and run it's full_clean() method to
# Create/update a model instance and run its full_clean() method to
# do any validation implemented on the model class.
exclusions = self.get_validation_exclusions(self.instance)
obj = self.instance or self.Meta.model()
@@ -1471,6 +1471,7 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
'allow_override',
'custom_virtualenv',
'default_environment',
'signature_validation_credential',
) + (
'last_update_failed',
'last_updated',
@@ -1607,7 +1608,6 @@ class ProjectUpdateSerializer(UnifiedJobSerializer, ProjectOptionsSerializer):
class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
host_status_counts = serializers.SerializerMethodField(help_text=_('A count of hosts uniquely assigned to each status.'))
playbook_counts = serializers.SerializerMethodField(help_text=_('A count of all plays and tasks for the job run.'))
class Meta:
@@ -1622,14 +1622,6 @@ class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
return data
def get_host_status_counts(self, obj):
try:
counts = obj.project_update_events.only('event_data').get(event='playbook_on_stats').get_host_status_counts()
except ProjectUpdateEvent.DoesNotExist:
counts = {}
return counts
class ProjectUpdateListSerializer(ProjectUpdateSerializer, UnifiedJobListSerializer):
class Meta:
@@ -1688,6 +1680,7 @@ class InventorySerializer(LabelsListMixin, BaseSerializerWithVariables):
'total_inventory_sources',
'inventory_sources_with_failures',
'pending_deletion',
'prevent_instance_group_fallback',
)
def get_related(self, obj):
@@ -2082,7 +2075,7 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
class Meta:
model = InventorySource
fields = ('*', 'name', 'inventory', 'update_on_launch', 'update_cache_timeout', 'source_project', 'update_on_project_update') + (
fields = ('*', 'name', 'inventory', 'update_on_launch', 'update_cache_timeout', 'source_project') + (
'last_update_failed',
'last_updated',
) # Backwards compatibility.
@@ -2145,11 +2138,6 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
raise serializers.ValidationError(_("Cannot use manual project for SCM-based inventory."))
return value
def validate_update_on_project_update(self, value):
if value and self.instance and self.instance.schedules.exists():
raise serializers.ValidationError(_("Setting not compatible with existing schedules."))
return value
def validate_inventory(self, value):
if value and value.kind == 'smart':
raise serializers.ValidationError({"detail": _("Cannot create Inventory Source for Smart Inventory")})
@@ -2200,7 +2188,7 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None:
raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")})
else:
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path', 'update_on_project_update']))
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path']))
if redundant_scm_fields:
raise serializers.ValidationError({"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))})
@@ -2245,7 +2233,7 @@ class InventoryUpdateSerializer(UnifiedJobSerializer, InventorySourceOptionsSeri
'source_project_update',
'custom_virtualenv',
'instance_group',
'-controller_node',
'scm_revision',
)
def get_related(self, obj):
@@ -2320,7 +2308,6 @@ class InventoryUpdateDetailSerializer(InventoryUpdateSerializer):
class InventoryUpdateListSerializer(InventoryUpdateSerializer, UnifiedJobListSerializer):
class Meta:
model = InventoryUpdate
fields = ('*', '-controller_node') # field removal undone by UJ serializer
class InventoryUpdateCancelSerializer(InventoryUpdateSerializer):
@@ -2673,6 +2660,13 @@ class CredentialSerializer(BaseSerializer):
return credential_type
def validate_inputs(self, inputs):
if self.instance and self.instance.credential_type.kind == "vault":
if 'vault_id' in inputs and inputs['vault_id'] != self.instance.inputs['vault_id']:
raise ValidationError(_('Vault IDs cannot be changed once they have been created.'))
return inputs
class CredentialSerializerCreate(CredentialSerializer):
@@ -2930,6 +2924,12 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
'ask_verbosity_on_launch',
'ask_inventory_on_launch',
'ask_credential_on_launch',
'ask_execution_environment_on_launch',
'ask_labels_on_launch',
'ask_forks_on_launch',
'ask_job_slice_count_on_launch',
'ask_timeout_on_launch',
'ask_instance_groups_on_launch',
'survey_enabled',
'become_enabled',
'diff_mode',
@@ -2938,6 +2938,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
'job_slice_count',
'webhook_service',
'webhook_credential',
'prevent_instance_group_fallback',
)
read_only_fields = ('*', 'custom_virtualenv')
@@ -3107,7 +3108,6 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
class JobDetailSerializer(JobSerializer):
host_status_counts = serializers.SerializerMethodField(help_text=_('A count of hosts uniquely assigned to each status.'))
playbook_counts = serializers.SerializerMethodField(help_text=_('A count of all plays and tasks for the job run.'))
custom_virtualenv = serializers.ReadOnlyField()
@@ -3123,14 +3123,6 @@ class JobDetailSerializer(JobSerializer):
return data
def get_host_status_counts(self, obj):
try:
counts = obj.get_event_queryset().only('event_data').get(event='playbook_on_stats').get_host_status_counts()
except JobEvent.DoesNotExist:
counts = {}
return counts
class JobCancelSerializer(BaseSerializer):
@@ -3201,7 +3193,7 @@ class JobRelaunchSerializer(BaseSerializer):
return attrs
class JobCreateScheduleSerializer(BaseSerializer):
class JobCreateScheduleSerializer(LabelsListMixin, BaseSerializer):
can_schedule = serializers.SerializerMethodField()
prompts = serializers.SerializerMethodField()
@@ -3227,14 +3219,17 @@ class JobCreateScheduleSerializer(BaseSerializer):
try:
config = obj.launch_config
ret = config.prompts_dict(display=True)
if 'inventory' in ret:
ret['inventory'] = self._summarize('inventory', ret['inventory'])
if 'credentials' in ret:
all_creds = [self._summarize('credential', cred) for cred in ret['credentials']]
ret['credentials'] = all_creds
for field_name in ('inventory', 'execution_environment'):
if field_name in ret:
ret[field_name] = self._summarize(field_name, ret[field_name])
for field_name, singular in (('credentials', 'credential'), ('instance_groups', 'instance_group')):
if field_name in ret:
ret[field_name] = [self._summarize(singular, obj) for obj in ret[field_name]]
if 'labels' in ret:
ret['labels'] = self._summary_field_labels(config)
return ret
except JobLaunchConfig.DoesNotExist:
return {'all': _('Unknown, job may have been ran before launch configurations were saved.')}
return {'all': _('Unknown, job may have been run before launch configurations were saved.')}
class AdHocCommandSerializer(UnifiedJobSerializer):
@@ -3319,21 +3314,10 @@ class AdHocCommandSerializer(UnifiedJobSerializer):
class AdHocCommandDetailSerializer(AdHocCommandSerializer):
host_status_counts = serializers.SerializerMethodField(help_text=_('A count of hosts uniquely assigned to each status.'))
class Meta:
model = AdHocCommand
fields = ('*', 'host_status_counts')
def get_host_status_counts(self, obj):
try:
counts = obj.ad_hoc_command_events.only('event_data').get(event='playbook_on_stats').get_host_status_counts()
except AdHocCommandEvent.DoesNotExist:
counts = {}
return counts
class AdHocCommandCancelSerializer(AdHocCommandSerializer):
@@ -3415,6 +3399,9 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
limit = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
scm_branch = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
skip_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
job_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
class Meta:
model = WorkflowJobTemplate
fields = (
@@ -3433,6 +3420,11 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
'webhook_service',
'webhook_credential',
'-execution_environment',
'ask_labels_on_launch',
'ask_skip_tags_on_launch',
'ask_tags_on_launch',
'skip_tags',
'job_tags',
)
def get_related(self, obj):
@@ -3476,7 +3468,7 @@ class WorkflowJobTemplateSerializer(JobTemplateMixin, LabelsListMixin, UnifiedJo
# process char_prompts, these are not direct fields on the model
mock_obj = self.Meta.model()
for field_name in ('scm_branch', 'limit'):
for field_name in ('scm_branch', 'limit', 'skip_tags', 'job_tags'):
if field_name in attrs:
setattr(mock_obj, field_name, attrs[field_name])
attrs.pop(field_name)
@@ -3502,6 +3494,9 @@ class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
limit = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
scm_branch = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
skip_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
job_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
class Meta:
model = WorkflowJob
fields = (
@@ -3521,6 +3516,8 @@ class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
'webhook_service',
'webhook_credential',
'webhook_guid',
'skip_tags',
'job_tags',
)
def get_related(self, obj):
@@ -3637,6 +3634,9 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
skip_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
diff_mode = serializers.BooleanField(required=False, allow_null=True, default=None)
verbosity = serializers.ChoiceField(allow_null=True, required=False, default=None, choices=VERBOSITY_CHOICES)
forks = serializers.IntegerField(required=False, allow_null=True, min_value=0, default=None)
job_slice_count = serializers.IntegerField(required=False, allow_null=True, min_value=0, default=None)
timeout = serializers.IntegerField(required=False, allow_null=True, default=None)
exclude_errors = ()
class Meta:
@@ -3652,13 +3652,21 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
'skip_tags',
'diff_mode',
'verbosity',
'execution_environment',
'forks',
'job_slice_count',
'timeout',
)
def get_related(self, obj):
res = super(LaunchConfigurationBaseSerializer, self).get_related(obj)
if obj.inventory_id:
res['inventory'] = self.reverse('api:inventory_detail', kwargs={'pk': obj.inventory_id})
if obj.execution_environment_id:
res['execution_environment'] = self.reverse('api:execution_environment_detail', kwargs={'pk': obj.execution_environment_id})
res['labels'] = self.reverse('api:{}_labels_list'.format(get_type_for_model(self.Meta.model)), kwargs={'pk': obj.pk})
res['credentials'] = self.reverse('api:{}_credentials_list'.format(get_type_for_model(self.Meta.model)), kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:{}_instance_groups_list'.format(get_type_for_model(self.Meta.model)), kwargs={'pk': obj.pk})
return res
def _build_mock_obj(self, attrs):
@@ -4110,7 +4118,6 @@ class SystemJobEventSerializer(AdHocCommandEventSerializer):
class JobLaunchSerializer(BaseSerializer):
# Representational fields
passwords_needed_to_start = serializers.ReadOnlyField()
can_start_without_user_input = serializers.BooleanField(read_only=True)
@@ -4133,6 +4140,12 @@ class JobLaunchSerializer(BaseSerializer):
skip_tags = serializers.CharField(required=False, write_only=True, allow_blank=True)
limit = serializers.CharField(required=False, write_only=True, allow_blank=True)
verbosity = serializers.ChoiceField(required=False, choices=VERBOSITY_CHOICES, write_only=True)
execution_environment = serializers.PrimaryKeyRelatedField(queryset=ExecutionEnvironment.objects.all(), required=False, write_only=True)
labels = serializers.PrimaryKeyRelatedField(many=True, queryset=Label.objects.all(), required=False, write_only=True)
forks = serializers.IntegerField(required=False, write_only=True, min_value=0)
job_slice_count = serializers.IntegerField(required=False, write_only=True, min_value=0)
timeout = serializers.IntegerField(required=False, write_only=True)
instance_groups = serializers.PrimaryKeyRelatedField(many=True, queryset=InstanceGroup.objects.all(), required=False, write_only=True)
class Meta:
model = JobTemplate
@@ -4160,6 +4173,12 @@ class JobLaunchSerializer(BaseSerializer):
'ask_verbosity_on_launch',
'ask_inventory_on_launch',
'ask_credential_on_launch',
'ask_execution_environment_on_launch',
'ask_labels_on_launch',
'ask_forks_on_launch',
'ask_job_slice_count_on_launch',
'ask_timeout_on_launch',
'ask_instance_groups_on_launch',
'survey_enabled',
'variables_needed_to_start',
'credential_needed_to_start',
@@ -4167,6 +4186,12 @@ class JobLaunchSerializer(BaseSerializer):
'job_template_data',
'defaults',
'verbosity',
'execution_environment',
'labels',
'forks',
'job_slice_count',
'timeout',
'instance_groups',
)
read_only_fields = (
'ask_scm_branch_on_launch',
@@ -4179,6 +4204,12 @@ class JobLaunchSerializer(BaseSerializer):
'ask_verbosity_on_launch',
'ask_inventory_on_launch',
'ask_credential_on_launch',
'ask_execution_environment_on_launch',
'ask_labels_on_launch',
'ask_forks_on_launch',
'ask_job_slice_count_on_launch',
'ask_timeout_on_launch',
'ask_instance_groups_on_launch',
)
def get_credential_needed_to_start(self, obj):
@@ -4203,6 +4234,17 @@ class JobLaunchSerializer(BaseSerializer):
if cred.credential_type.managed and 'vault_id' in cred.credential_type.defined_fields:
cred_dict['vault_id'] = cred.get_input('vault_id', default=None)
defaults_dict.setdefault(field_name, []).append(cred_dict)
elif field_name == 'execution_environment':
if obj.execution_environment_id:
defaults_dict[field_name] = {'id': obj.execution_environment.id, 'name': obj.execution_environment.name}
else:
defaults_dict[field_name] = {}
elif field_name == 'labels':
for label in obj.labels.all():
label_dict = {'id': label.id, 'name': label.name}
defaults_dict.setdefault(field_name, []).append(label_dict)
elif field_name == 'instance_groups':
defaults_dict[field_name] = []
else:
defaults_dict[field_name] = getattr(obj, field_name)
return defaults_dict
@@ -4225,6 +4267,15 @@ class JobLaunchSerializer(BaseSerializer):
elif template.project.status in ('error', 'failed'):
errors['playbook'] = _("Missing a revision to run due to failed project update.")
latest_update = template.project.project_updates.last()
if latest_update is not None and latest_update.failed:
failed_validation_tasks = latest_update.project_update_events.filter(
event='runner_on_failed',
play="Perform project signature/checksum verification",
)
if failed_validation_tasks:
errors['playbook'] = _("Last project update failed due to signature validation failure.")
# cannot run a playbook without an inventory
if template.inventory and template.inventory.pending_deletion is True:
errors['inventory'] = _("The inventory associated with this Job Template is being deleted.")
@@ -4301,6 +4352,10 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
scm_branch = serializers.CharField(required=False, write_only=True, allow_blank=True)
workflow_job_template_data = serializers.SerializerMethodField()
labels = serializers.PrimaryKeyRelatedField(many=True, queryset=Label.objects.all(), required=False, write_only=True)
skip_tags = serializers.CharField(required=False, write_only=True, allow_blank=True)
job_tags = serializers.CharField(required=False, write_only=True, allow_blank=True)
class Meta:
model = WorkflowJobTemplate
fields = (
@@ -4320,8 +4375,22 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
'workflow_job_template_data',
'survey_enabled',
'ask_variables_on_launch',
'ask_labels_on_launch',
'labels',
'ask_skip_tags_on_launch',
'ask_tags_on_launch',
'skip_tags',
'job_tags',
)
read_only_fields = (
'ask_inventory_on_launch',
'ask_variables_on_launch',
'ask_skip_tags_on_launch',
'ask_labels_on_launch',
'ask_limit_on_launch',
'ask_scm_branch_on_launch',
'ask_tags_on_launch',
)
read_only_fields = ('ask_inventory_on_launch', 'ask_variables_on_launch')
def get_survey_enabled(self, obj):
if obj:
@@ -4329,10 +4398,15 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return False
def get_defaults(self, obj):
defaults_dict = {}
for field_name in WorkflowJobTemplate.get_ask_mapping().keys():
if field_name == 'inventory':
defaults_dict[field_name] = dict(name=getattrd(obj, '%s.name' % field_name, None), id=getattrd(obj, '%s.pk' % field_name, None))
elif field_name == 'labels':
for label in obj.labels.all():
label_dict = {"id": label.id, "name": label.name}
defaults_dict.setdefault(field_name, []).append(label_dict)
else:
defaults_dict[field_name] = getattr(obj, field_name)
return defaults_dict
@@ -4341,6 +4415,7 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return dict(name=obj.name, id=obj.id, description=obj.description)
def validate(self, attrs):
template = self.instance
accepted, rejected, errors = template._accept_or_ignore_job_kwargs(**attrs)
@@ -4358,6 +4433,7 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
WFJT_inventory = template.inventory
WFJT_limit = template.limit
WFJT_scm_branch = template.scm_branch
super(WorkflowJobLaunchSerializer, self).validate(attrs)
template.extra_vars = WFJT_extra_vars
template.inventory = WFJT_inventory
@@ -4502,7 +4578,10 @@ class NotificationTemplateSerializer(BaseSerializer):
body = messages[event].get('body', {})
if body:
try:
potential_body = json.loads(body)
rendered_body = (
sandbox.ImmutableSandboxedEnvironment(undefined=DescriptiveUndefined).from_string(body).render(JobNotificationMixin.context_stub())
)
potential_body = json.loads(rendered_body)
if not isinstance(potential_body, dict):
error_list.append(
_("Webhook body for '{}' should be a json dictionary. Found type '{}'.".format(event, type(potential_body).__name__))
@@ -4645,69 +4724,74 @@ class SchedulePreviewSerializer(BaseSerializer):
# We reject rrules if:
# - DTSTART is not include
# - INTERVAL is not included
# - SECONDLY is used
# - TZID is used
# - BYDAY prefixed with a number (MO is good but not 20MO)
# - BYYEARDAY
# - BYWEEKNO
# - Multiple DTSTART or RRULE elements
# - Can't contain both COUNT and UNTIL
# - COUNT > 999
# - Multiple DTSTART
# - No RRULE is included (at least one is required)
# - EXDATE or RDATE is included
# For any rule in the ruleset:
# - INTERVAL is not included
# - SECONDLY is used
# - BYDAY prefixed with a number (MO is good but not 20MO)
# - Can't contain both COUNT and UNTIL
# - COUNT > 999
def validate_rrule(self, value):
rrule_value = value
multi_by_month_day = r".*?BYMONTHDAY[\:\=][0-9]+,-*[0-9]+"
multi_by_month = r".*?BYMONTH[\:\=][0-9]+,[0-9]+"
by_day_with_numeric_prefix = r".*?BYDAY[\:\=][0-9]+[a-zA-Z]{2}"
match_count = re.match(r".*?(COUNT\=[0-9]+)", rrule_value)
match_multiple_dtstart = re.findall(r".*?(DTSTART(;[^:]+)?\:[0-9]+T[0-9]+Z?)", rrule_value)
match_native_dtstart = re.findall(r".*?(DTSTART:[0-9]+T[0-9]+) ", rrule_value)
match_multiple_rrule = re.findall(r".*?(RRULE\:)", rrule_value)
match_multiple_rrule = re.findall(r".*?(RULE\:[^\s]*)", rrule_value)
errors = []
if not len(match_multiple_dtstart):
raise serializers.ValidationError(_('Valid DTSTART required in rrule. Value should start with: DTSTART:YYYYMMDDTHHMMSSZ'))
errors.append(_('Valid DTSTART required in rrule. Value should start with: DTSTART:YYYYMMDDTHHMMSSZ'))
if len(match_native_dtstart):
raise serializers.ValidationError(_('DTSTART cannot be a naive datetime. Specify ;TZINFO= or YYYYMMDDTHHMMSSZZ.'))
errors.append(_('DTSTART cannot be a naive datetime. Specify ;TZINFO= or YYYYMMDDTHHMMSSZZ.'))
if len(match_multiple_dtstart) > 1:
raise serializers.ValidationError(_('Multiple DTSTART is not supported.'))
if not len(match_multiple_rrule):
raise serializers.ValidationError(_('RRULE required in rrule.'))
if len(match_multiple_rrule) > 1:
raise serializers.ValidationError(_('Multiple RRULE is not supported.'))
if 'interval' not in rrule_value.lower():
raise serializers.ValidationError(_('INTERVAL required in rrule.'))
if 'secondly' in rrule_value.lower():
raise serializers.ValidationError(_('SECONDLY is not supported.'))
if re.match(multi_by_month_day, rrule_value):
raise serializers.ValidationError(_('Multiple BYMONTHDAYs not supported.'))
if re.match(multi_by_month, rrule_value):
raise serializers.ValidationError(_('Multiple BYMONTHs not supported.'))
if re.match(by_day_with_numeric_prefix, rrule_value):
raise serializers.ValidationError(_("BYDAY with numeric prefix not supported."))
if 'byyearday' in rrule_value.lower():
raise serializers.ValidationError(_("BYYEARDAY not supported."))
if 'byweekno' in rrule_value.lower():
raise serializers.ValidationError(_("BYWEEKNO not supported."))
if 'COUNT' in rrule_value and 'UNTIL' in rrule_value:
raise serializers.ValidationError(_("RRULE may not contain both COUNT and UNTIL"))
if match_count:
count_val = match_count.groups()[0].strip().split("=")
if int(count_val[1]) > 999:
raise serializers.ValidationError(_("COUNT > 999 is unsupported."))
errors.append(_('Multiple DTSTART is not supported.'))
if "rrule:" not in rrule_value.lower():
errors.append(_('One or more rules required in rrule.'))
if "exdate:" in rrule_value.lower():
raise serializers.ValidationError(_('EXDATE not allowed in rrule.'))
if "rdate:" in rrule_value.lower():
raise serializers.ValidationError(_('RDATE not allowed in rrule.'))
for a_rule in match_multiple_rrule:
if 'interval' not in a_rule.lower():
errors.append("{0}: {1}".format(_('INTERVAL required in rrule'), a_rule))
elif 'secondly' in a_rule.lower():
errors.append("{0}: {1}".format(_('SECONDLY is not supported'), a_rule))
if re.match(by_day_with_numeric_prefix, a_rule):
errors.append("{0}: {1}".format(_("BYDAY with numeric prefix not supported"), a_rule))
if 'COUNT' in a_rule and 'UNTIL' in a_rule:
errors.append("{0}: {1}".format(_("RRULE may not contain both COUNT and UNTIL"), a_rule))
match_count = re.match(r".*?(COUNT\=[0-9]+)", a_rule)
if match_count:
count_val = match_count.groups()[0].strip().split("=")
if int(count_val[1]) > 999:
errors.append("{0}: {1}".format(_("COUNT > 999 is unsupported"), a_rule))
try:
Schedule.rrulestr(rrule_value)
except Exception as e:
import traceback
logger.error(traceback.format_exc())
raise serializers.ValidationError(_("rrule parsing failed validation: {}").format(e))
errors.append(_("rrule parsing failed validation: {}").format(e))
if errors:
raise serializers.ValidationError(errors)
return value
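
Under the rewritten validator, a single value may carry several rules, as long as each rule includes INTERVAL, avoids SECONDLY and numeric BYDAY prefixes, does not combine COUNT with UNTIL or exceed COUNT=999, and the value as a whole contains no EXDATE or RDATE. A sketch of such a multi-rule value parsed directly with `dateutil` (note: AWX stores DTSTART and RRULE space-separated, while `dateutil.rrule.rrulestr` expects newline-separated lines):

```python
from itertools import islice

from dateutil.rrule import rrulestr

value = (
    "DTSTART;TZID=America/New_York:20220310T050000\n"
    "RRULE:FREQ=DAILY;INTERVAL=1;COUNT=5\n"
    "RRULE:FREQ=WEEKLY;INTERVAL=2;BYDAY=MO;COUNT=4"
)
ruleset = rrulestr(value, forceset=True)
# Print the first few occurrences produced by the combined rules.
print(list(islice(ruleset, 5)))
```
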
class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSerializer):
show_capabilities = ['edit', 'delete']
timezone = serializers.SerializerMethodField()
until = serializers.SerializerMethodField()
timezone = serializers.SerializerMethodField(
help_text=_(
'The timezone this schedule runs in. This field is extracted from the RRULE. If the timezone in the RRULE is a link to another timezone, the link will be reflected in this field.'
),
)
until = serializers.SerializerMethodField(
help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end, an empty string will be returned'),
)
class Meta:
model = Schedule
@@ -4741,6 +4825,8 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
if isinstance(obj.unified_job_template, SystemJobTemplate):
summary_fields['unified_job_template']['job_type'] = obj.unified_job_template.job_type
# We are not showing instance groups on summary fields because JTs don't either
if 'inventory' in summary_fields:
return summary_fields
@@ -4761,13 +4847,6 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
raise serializers.ValidationError(_('Inventory Source must be a cloud resource.'))
elif type(value) == Project and value.scm_type == '':
raise serializers.ValidationError(_('Manual Project cannot have a schedule set.'))
elif type(value) == InventorySource and value.source == 'scm' and value.update_on_project_update:
raise serializers.ValidationError(
_(
'Inventory sources with `update_on_project_update` cannot be scheduled. '
'Schedule its source project `{}` instead.'.format(value.source_project.name)
)
)
return value
def validate(self, attrs):
@@ -4782,7 +4861,7 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
class InstanceLinkSerializer(BaseSerializer):
class Meta:
model = InstanceLink
fields = ('source', 'target')
fields = ('source', 'target', 'link_state')
source = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
target = serializers.SlugRelatedField(slug_field="hostname", read_only=True)
@@ -4791,63 +4870,80 @@ class InstanceLinkSerializer(BaseSerializer):
class InstanceNodeSerializer(BaseSerializer):
class Meta:
model = Instance
fields = ('id', 'hostname', 'node_type', 'node_state')
node_state = serializers.SerializerMethodField()
def get_node_state(self, obj):
if not obj.enabled:
return "disabled"
return "error" if obj.errors else "healthy"
fields = ('id', 'hostname', 'node_type', 'node_state', 'enabled')
class InstanceSerializer(BaseSerializer):
show_capabilities = ['edit']
consumed_capacity = serializers.SerializerMethodField()
percent_capacity_remaining = serializers.SerializerMethodField()
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that ' 'are targeted for this instance'), read_only=True)
jobs_running = serializers.IntegerField(help_text=_('Count of jobs in the running or waiting state that are targeted for this instance'), read_only=True)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance'), read_only=True)
health_check_pending = serializers.SerializerMethodField()
class Meta:
model = Instance
read_only_fields = ('uuid', 'hostname', 'version', 'node_type')
read_only_fields = ('ip_address', 'uuid', 'version')
fields = (
"id",
"type",
"url",
"related",
"uuid",
"hostname",
"created",
"modified",
"last_seen",
"last_health_check",
"errors",
'id',
'hostname',
'type',
'url',
'related',
'summary_fields',
'uuid',
'created',
'modified',
'last_seen',
'health_check_started',
'health_check_pending',
'last_health_check',
'errors',
'capacity_adjustment',
"version",
"capacity",
"consumed_capacity",
"percent_capacity_remaining",
"jobs_running",
"jobs_total",
"cpu",
"memory",
"cpu_capacity",
"mem_capacity",
"enabled",
"managed_by_policy",
"node_type",
'version',
'capacity',
'consumed_capacity',
'percent_capacity_remaining',
'jobs_running',
'jobs_total',
'cpu',
'memory',
'cpu_capacity',
'mem_capacity',
'enabled',
'managed_by_policy',
'node_type',
'node_state',
'ip_address',
'listener_port',
)
extra_kwargs = {
'node_type': {'initial': Instance.Types.EXECUTION, 'default': Instance.Types.EXECUTION},
'node_state': {'initial': Instance.States.INSTALLED, 'default': Instance.States.INSTALLED},
}
def get_related(self, obj):
res = super(InstanceSerializer, self).get_related(obj)
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if settings.IS_K8S and obj.node_type in (Instance.Types.EXECUTION,):
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
if obj.node_type != 'hop':
res['health_check'] = self.reverse('api:instance_health_check', kwargs={'pk': obj.pk})
return res
def get_summary_fields(self, obj):
summary = super().get_summary_fields(obj)
# use this handle to distinguish between a listView and a detailView
if self.is_detail_view:
summary['links'] = InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source').filter(source=obj), many=True).data
return summary
def get_consumed_capacity(self, obj):
return obj.consumed_capacity
@@ -4857,10 +4953,54 @@ class InstanceSerializer(BaseSerializer):
else:
return float("{0:.2f}".format(((float(obj.capacity) - float(obj.consumed_capacity)) / (float(obj.capacity))) * 100))
def validate(self, attrs):
if self.instance.node_type == 'hop':
raise serializers.ValidationError(_('Hop node instances may not be changed.'))
return attrs
def get_health_check_pending(self, obj):
return obj.health_check_pending
def validate(self, data):
if self.instance:
if self.instance.node_type == Instance.Types.HOP:
raise serializers.ValidationError("Hop node instances may not be changed.")
else:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only create instances on Kubernetes or OpenShift.")
return data
def validate_node_type(self, value):
if not self.instance:
if value not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only create execution nodes.")
else:
if self.instance.node_type != value:
raise serializers.ValidationError("Cannot change node type.")
return value
def validate_node_state(self, value):
if self.instance:
if value != self.instance.node_state:
if not settings.IS_K8S:
raise serializers.ValidationError("Can only change the state on Kubernetes or OpenShift.")
if value != Instance.States.DEPROVISIONING:
raise serializers.ValidationError("Can only change instances to the 'deprovisioning' state.")
if self.instance.node_type not in (Instance.Types.EXECUTION,):
raise serializers.ValidationError("Can only deprovision execution nodes.")
else:
if value and value != Instance.States.INSTALLED:
raise serializers.ValidationError("Can only create instances in the 'installed' state.")
return value
def validate_hostname(self, value):
if self.instance and self.instance.hostname != value:
raise serializers.ValidationError("Cannot change hostname.")
return value
def validate_listener_port(self, value):
if self.instance and self.instance.listener_port != value:
raise serializers.ValidationError("Cannot change listener port.")
return value
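
Taken together, these validators mean an instance can only be created on a Kubernetes/OpenShift (IS_K8S) deployment, only as an execution node, and only in the 'installed' state, and that hostname and listener_port are immutable afterwards. A sketch of such a create request using the requests library; the host, credentials, and port are hypothetical:

import requests

resp = requests.post(
    "https://awx.example.org/api/v2/instances/",
    auth=("admin", "password"),  # hypothetical credentials
    json={
        "hostname": "execution-node-01.example.org",
        "node_type": "execution",   # the only node_type allowed on create
        "node_state": "installed",  # the only node_state allowed on create
        "listener_port": 27199,
    },
    verify=False,
)
print(resp.status_code, resp.json().get("id"))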
class InstanceHealthCheckSerializer(BaseSerializer):
@@ -5036,8 +5176,7 @@ class ActivityStreamSerializer(BaseSerializer):
object_association = serializers.SerializerMethodField(help_text=_("When present, shows the field name of the role or relationship that changed."))
object_type = serializers.SerializerMethodField(help_text=_("When present, shows the model on which the role or relationship was defined."))
@cached_property
def _local_summarizable_fk_fields(self):
def _local_summarizable_fk_fields(self, obj):
summary_dict = copy.copy(SUMMARIZABLE_FK_FIELDS)
# Special requests
summary_dict['group'] = summary_dict['group'] + ('inventory_id',)
@@ -5057,7 +5196,13 @@ class ActivityStreamSerializer(BaseSerializer):
('workflow_approval', ('id', 'name', 'unified_job_id')),
('instance', ('id', 'hostname')),
]
return field_list
# Optimization - do not attempt to summarize all fields, pare down to only relations that exist
if not obj:
return field_list
existing_association_types = [obj.object1, obj.object2]
if 'user' in existing_association_types:
existing_association_types.append('role')
return [entry for entry in field_list if entry[0] in existing_association_types]
class Meta:
model = ActivityStream
@@ -5141,7 +5286,7 @@ class ActivityStreamSerializer(BaseSerializer):
data = {}
if obj.actor is not None:
data['actor'] = self.reverse('api:user_detail', kwargs={'pk': obj.actor.pk})
for fk, __ in self._local_summarizable_fk_fields:
for fk, __ in self._local_summarizable_fk_fields(obj):
if not hasattr(obj, fk):
continue
m2m_list = self._get_related_objects(obj, fk)
@@ -5198,7 +5343,7 @@ class ActivityStreamSerializer(BaseSerializer):
def get_summary_fields(self, obj):
summary_fields = OrderedDict()
for fk, related_fields in self._local_summarizable_fk_fields:
for fk, related_fields in self._local_summarizable_fk_fields(obj):
try:
if not hasattr(obj, fk):
continue


@@ -0,0 +1,21 @@
receptor_verify: true
receptor_tls: true
receptor_work_commands:
ansible-runner:
command: ansible-runner
params: worker
allowruntimeparams: true
verifysignature: true
custom_worksign_public_keyfile: receptor/work-public-key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/receptor-ca.crt
receptor_user: awx
receptor_group: awx
receptor_protocol: 'tcp'
receptor_listener: true
receptor_port: {{ instance.listener_port }}
receptor_dependencies:
- podman
- crun
- python39-pip


@@ -0,0 +1,18 @@
{% verbatim %}
---
- hosts: all
become: yes
tasks:
- name: Create the receptor user
user:
name: "{{ receptor_user }}"
shell: /bin/bash
- name: Enable Copr repo for Receptor
command: dnf copr enable ansible-awx/receptor -y
- import_role:
name: ansible.receptor.setup
- name: Install ansible-runner
pip:
name: ansible-runner
executable: pip3.9
{% endverbatim %}


@@ -0,0 +1,7 @@
---
all:
hosts:
remote-execution:
ansible_host: {{ instance.hostname }}
ansible_user: <username> # user provided
ansible_ssh_private_key_file: ~/.ssh/id_rsa


@@ -0,0 +1,6 @@
---
collections:
- name: ansible.receptor
source: https://github.com/ansible/receptor-collection/
type: git
version: 0.1.1

awx/api/urls/debug.py Normal file

@@ -0,0 +1,17 @@
from django.urls import re_path
from awx.api.views.debug import (
DebugRootView,
TaskManagerDebugView,
DependencyManagerDebugView,
WorkflowManagerDebugView,
)
urls = [
re_path(r'^$', DebugRootView.as_view(), name='debug'),
re_path(r'^task_manager/$', TaskManagerDebugView.as_view(), name='task_manager'),
re_path(r'^dependency_manager/$', DependencyManagerDebugView.as_view(), name='dependency_manager'),
re_path(r'^workflow_manager/$', WorkflowManagerDebugView.as_view(), name='workflow_manager'),
]
__all__ = ['urls']


@@ -3,7 +3,15 @@
from django.urls import re_path
from awx.api.views import InstanceList, InstanceDetail, InstanceUnifiedJobsList, InstanceInstanceGroupsList, InstanceHealthCheck
from awx.api.views import (
InstanceList,
InstanceDetail,
InstanceUnifiedJobsList,
InstanceInstanceGroupsList,
InstanceHealthCheck,
InstanceInstallBundle,
InstancePeersList,
)
urls = [
@@ -12,6 +20,8 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/jobs/$', InstanceUnifiedJobsList.as_view(), name='instance_unified_jobs_list'),
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', InstanceInstanceGroupsList.as_view(), name='instance_instance_groups_list'),
re_path(r'^(?P<pk>[0-9]+)/health_check/$', InstanceHealthCheck.as_view(), name='instance_health_check'),
re_path(r'^(?P<pk>[0-9]+)/peers/$', InstancePeersList.as_view(), name='instance_peers_list'),
re_path(r'^(?P<pk>[0-9]+)/install_bundle/$', InstanceInstallBundle.as_view(), name='instance_install_bundle'),
]
__all__ = ['urls']


@@ -3,7 +3,7 @@
from django.urls import re_path
from awx.api.views import LabelList, LabelDetail
from awx.api.views.labels import LabelList, LabelDetail
urls = [re_path(r'^$', LabelList.as_view(), name='label_list'), re_path(r'^(?P<pk>[0-9]+)/$', LabelDetail.as_view(), name='label_detail')]


@@ -3,7 +3,7 @@
from django.urls import re_path
from awx.api.views import ScheduleList, ScheduleDetail, ScheduleUnifiedJobsList, ScheduleCredentialsList
from awx.api.views import ScheduleList, ScheduleDetail, ScheduleUnifiedJobsList, ScheduleCredentialsList, ScheduleLabelsList, ScheduleInstanceGroupList
urls = [
@@ -11,6 +11,8 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/$', ScheduleDetail.as_view(), name='schedule_detail'),
re_path(r'^(?P<pk>[0-9]+)/jobs/$', ScheduleUnifiedJobsList.as_view(), name='schedule_unified_jobs_list'),
re_path(r'^(?P<pk>[0-9]+)/credentials/$', ScheduleCredentialsList.as_view(), name='schedule_credentials_list'),
re_path(r'^(?P<pk>[0-9]+)/labels/$', ScheduleLabelsList.as_view(), name='schedule_labels_list'),
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', ScheduleInstanceGroupList.as_view(), name='schedule_instance_groups_list'),
]
__all__ = ['urls']


@@ -2,9 +2,9 @@
# All Rights Reserved.
from __future__ import absolute_import, unicode_literals
from django.conf import settings
from django.urls import include, re_path
from awx import MODE
from awx.api.generics import LoggedLoginView, LoggedLogoutView
from awx.api.views import (
ApiRootView,
@@ -145,7 +145,12 @@ urlpatterns = [
re_path(r'^logout/$', LoggedLogoutView.as_view(next_page='/api/', redirect_field_name='next'), name='logout'),
re_path(r'^o/', include(oauth2_root_urls)),
]
if settings.SETTINGS_MODULE == 'awx.settings.development':
if MODE == 'development':
# Only include these if we are in the development environment
from awx.api.swagger import SwaggerSchemaView
urlpatterns += [re_path(r'^swagger/$', SwaggerSchemaView.as_view(), name='swagger_view')]
from awx.api.urls.debug import urls as debug_urls
urlpatterns += [re_path(r'^debug/', include(debug_urls))]


@@ -10,6 +10,8 @@ from awx.api.views import (
WorkflowJobNodeFailureNodesList,
WorkflowJobNodeAlwaysNodesList,
WorkflowJobNodeCredentialsList,
WorkflowJobNodeLabelsList,
WorkflowJobNodeInstanceGroupsList,
)
@@ -20,6 +22,8 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/failure_nodes/$', WorkflowJobNodeFailureNodesList.as_view(), name='workflow_job_node_failure_nodes_list'),
re_path(r'^(?P<pk>[0-9]+)/always_nodes/$', WorkflowJobNodeAlwaysNodesList.as_view(), name='workflow_job_node_always_nodes_list'),
re_path(r'^(?P<pk>[0-9]+)/credentials/$', WorkflowJobNodeCredentialsList.as_view(), name='workflow_job_node_credentials_list'),
re_path(r'^(?P<pk>[0-9]+)/labels/$', WorkflowJobNodeLabelsList.as_view(), name='workflow_job_node_labels_list'),
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', WorkflowJobNodeInstanceGroupsList.as_view(), name='workflow_job_node_instance_groups_list'),
]
__all__ = ['urls']


@@ -11,6 +11,8 @@ from awx.api.views import (
WorkflowJobTemplateNodeAlwaysNodesList,
WorkflowJobTemplateNodeCredentialsList,
WorkflowJobTemplateNodeCreateApproval,
WorkflowJobTemplateNodeLabelsList,
WorkflowJobTemplateNodeInstanceGroupsList,
)
@@ -21,6 +23,8 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/failure_nodes/$', WorkflowJobTemplateNodeFailureNodesList.as_view(), name='workflow_job_template_node_failure_nodes_list'),
re_path(r'^(?P<pk>[0-9]+)/always_nodes/$', WorkflowJobTemplateNodeAlwaysNodesList.as_view(), name='workflow_job_template_node_always_nodes_list'),
re_path(r'^(?P<pk>[0-9]+)/credentials/$', WorkflowJobTemplateNodeCredentialsList.as_view(), name='workflow_job_template_node_credentials_list'),
re_path(r'^(?P<pk>[0-9]+)/labels/$', WorkflowJobTemplateNodeLabelsList.as_view(), name='workflow_job_template_node_labels_list'),
re_path(r'^(?P<pk>[0-9]+)/instance_groups/$', WorkflowJobTemplateNodeInstanceGroupsList.as_view(), name='workflow_job_template_node_instance_groups_list'),
re_path(r'^(?P<pk>[0-9]+)/create_approval_template/$', WorkflowJobTemplateNodeCreateApproval.as_view(), name='workflow_job_template_node_create_approval'),
]


@@ -22,6 +22,7 @@ from django.conf import settings
from django.core.exceptions import FieldError, ObjectDoesNotExist
from django.db.models import Q, Sum
from django.db import IntegrityError, ProgrammingError, transaction, connection
from django.db.models.fields.related import ManyToManyField, ForeignKey
from django.shortcuts import get_object_or_404
from django.utils.safestring import mark_safe
from django.utils.timezone import now
@@ -68,7 +69,6 @@ from awx.api.generics import (
APIView,
BaseUsersList,
CopyAPIView,
DeleteLastUnattachLabelMixin,
GenericAPIView,
ListAPIView,
ListCreateAPIView,
@@ -85,6 +85,7 @@ from awx.api.generics import (
SubListCreateAttachDetachAPIView,
SubListDestroyAPIView,
)
from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.versioning import reverse
from awx.main import models
from awx.main.utils import (
@@ -93,7 +94,7 @@ from awx.main.utils import (
get_object_or_400,
getattrd,
get_pk_from_dict,
schedule_task_manager,
ScheduleWorkflowManager,
ignore_inventory_computed_fields,
)
from awx.main.utils.encryption import encrypt_value
@@ -115,13 +116,28 @@ from awx.api.metadata import RoleMetadata
from awx.main.constants import ACTIVE_STATES, SURVEY_TYPE_MAPPING
from awx.main.scheduler.dag_workflow import WorkflowDAG
from awx.api.views.mixin import (
ControlledByScmMixin,
InstanceGroupMembershipMixin,
OrganizationCountsMixin,
RelatedJobsPreventDeleteMixin,
UnifiedJobDeletionMixin,
NoTruncateMixin,
)
from awx.api.views.instance_install_bundle import InstanceInstallBundle # noqa
from awx.api.views.inventory import ( # noqa
InventoryList,
InventoryDetail,
InventoryUpdateEventsList,
InventoryList,
InventoryDetail,
InventoryActivityStreamList,
InventoryInstanceGroupsList,
InventoryAccessList,
InventoryObjectRolesList,
InventoryJobTemplateList,
InventoryLabelList,
InventoryCopy,
)
from awx.api.views.mesh_visualizer import MeshVisualizer # noqa
from awx.api.views.organization import ( # noqa
OrganizationList,
OrganizationDetail,
@@ -145,21 +161,6 @@ from awx.api.views.organization import ( # noqa
OrganizationAccessList,
OrganizationObjectRolesList,
)
from awx.api.views.inventory import ( # noqa
InventoryList,
InventoryDetail,
InventoryUpdateEventsList,
InventoryList,
InventoryDetail,
InventoryActivityStreamList,
InventoryInstanceGroupsList,
InventoryAccessList,
InventoryObjectRolesList,
InventoryJobTemplateList,
InventoryLabelList,
InventoryCopy,
)
from awx.api.views.mesh_visualizer import MeshVisualizer # noqa
from awx.api.views.root import ( # noqa
ApiRootView,
ApiOAuthAuthorizationRootView,
@@ -174,7 +175,6 @@ from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, Gitlab
from awx.api.pagination import UnifiedJobEventPagination
from awx.main.utils import set_environ
logger = logging.getLogger('awx.api.views')
@@ -359,7 +359,7 @@ class DashboardJobsGraphView(APIView):
return Response(dashboard_data)
class InstanceList(ListAPIView):
class InstanceList(ListCreateAPIView):
name = _("Instances")
model = models.Instance
@@ -398,6 +398,17 @@ class InstanceUnifiedJobsList(SubListAPIView):
return qs
class InstancePeersList(SubListAPIView):
name = _("Instance Peers")
parent_model = models.Instance
model = models.Instance
serializer_class = serializers.InstanceSerializer
parent_access = 'read'
search_fields = {'hostname'}
relationship = 'peers'
class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAttachDetachAPIView):
name = _("Instance's Instance Groups")
@@ -440,40 +451,21 @@ class InstanceHealthCheck(GenericAPIView):
def post(self, request, *args, **kwargs):
obj = self.get_object()
if obj.health_check_pending:
return Response({'msg': f"Health check was already in progress for {obj.hostname}."}, status=status.HTTP_200_OK)
if obj.node_type == 'execution':
# Note: hop nodes are already excluded by the get_queryset method
obj.health_check_started = now()
obj.save(update_fields=['health_check_started'])
if obj.node_type == models.Instance.Types.EXECUTION:
from awx.main.tasks.system import execution_node_health_check
runner_data = execution_node_health_check(obj.hostname)
obj.refresh_from_db()
data = self.get_serializer(data=request.data).to_representation(obj)
# Add in some extra unsaved fields
for extra_field in ('transmit_timing', 'run_timing'):
if extra_field in runner_data:
data[extra_field] = runner_data[extra_field]
execution_node_health_check.apply_async([obj.hostname])
else:
from awx.main.tasks.system import cluster_node_health_check
if settings.CLUSTER_HOST_ID == obj.hostname:
cluster_node_health_check(obj.hostname)
else:
cluster_node_health_check.apply_async([obj.hostname], queue=obj.hostname)
start_time = time.time()
prior_check_time = obj.last_health_check
while time.time() - start_time < 50.0:
obj.refresh_from_db(fields=['last_health_check'])
if obj.last_health_check != prior_check_time:
break
if time.time() - start_time < 1.0:
time.sleep(0.1)
else:
time.sleep(1.0)
else:
obj.mark_offline(errors=_('Health check initiated by user determined this instance to be unresponsive'))
obj.refresh_from_db()
data = self.get_serializer(data=request.data).to_representation(obj)
return Response(data, status=status.HTTP_200_OK)
cluster_node_health_check.apply_async([obj.hostname], queue=obj.hostname)
return Response({'msg': f"Health check is running for {obj.hostname}."}, status=status.HTTP_200_OK)
class InstanceGroupList(ListCreateAPIView):
@@ -537,6 +529,7 @@ class ScheduleList(ListCreateAPIView):
name = _("Schedules")
model = models.Schedule
serializer_class = serializers.ScheduleSerializer
ordering = ('id',)
class ScheduleDetail(RetrieveUpdateDestroyAPIView):
@@ -577,8 +570,7 @@ class ScheduleZoneInfo(APIView):
swagger_topic = 'System Configuration'
def get(self, request):
zones = [{'name': zone} for zone in models.Schedule.get_zoneinfo()]
return Response(zones)
return Response({'zones': models.Schedule.get_zoneinfo(), 'links': models.Schedule.get_zoneinfo_links()})
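
The zoneinfo response now carries both the zone names and the mapping of timezone links, rather than a bare list of name objects. A sketch of reading it (the endpoint path and credentials here are assumptions):

import requests

r = requests.get("https://awx.example.org/api/v2/schedules/zoneinfo/", auth=("admin", "password"), verify=False)
data = r.json()
print(data["zones"][:3])  # e.g. the first few zone names
print(data["links"])      # aliases mapped to the zones they point at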
class LaunchConfigCredentialsBase(SubListAttachDetachAPIView):
@@ -618,6 +610,19 @@ class ScheduleCredentialsList(LaunchConfigCredentialsBase):
parent_model = models.Schedule
class ScheduleLabelsList(LabelSubListCreateAttachDetachView):
parent_model = models.Schedule
class ScheduleInstanceGroupList(SubListAttachDetachAPIView):
model = models.InstanceGroup
serializer_class = serializers.InstanceGroupSerializer
parent_model = models.Schedule
relationship = 'instance_groups'
class ScheduleUnifiedJobsList(SubListAPIView):
model = models.UnifiedJob
@@ -1675,7 +1680,7 @@ class HostList(HostRelatedSearchMixin, ListCreateAPIView):
return Response(dict(error=_(str(e))), status=status.HTTP_400_BAD_REQUEST)
class HostDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
class HostDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
always_allow_superuser = False
model = models.Host
@@ -1709,7 +1714,7 @@ class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIVie
return qs
class HostGroupsList(ControlledByScmMixin, SubListCreateAttachDetachAPIView):
class HostGroupsList(SubListCreateAttachDetachAPIView):
'''the list of groups a host is directly a member of'''
model = models.Group
@@ -1825,7 +1830,7 @@ class EnforceParentRelationshipMixin(object):
return super(EnforceParentRelationshipMixin, self).create(request, *args, **kwargs)
class GroupChildrenList(ControlledByScmMixin, EnforceParentRelationshipMixin, SubListCreateAttachDetachAPIView):
class GroupChildrenList(EnforceParentRelationshipMixin, SubListCreateAttachDetachAPIView):
model = models.Group
serializer_class = serializers.GroupSerializer
@@ -1871,7 +1876,7 @@ class GroupPotentialChildrenList(SubListAPIView):
return qs.exclude(pk__in=except_pks)
class GroupHostsList(HostRelatedSearchMixin, ControlledByScmMixin, SubListCreateAttachDetachAPIView):
class GroupHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIView):
'''the list of hosts directly below a group'''
model = models.Host
@@ -1935,7 +1940,7 @@ class GroupActivityStreamList(SubListAPIView):
return qs.filter(Q(group=parent) | Q(host__in=parent.hosts.all()))
class GroupDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
class GroupDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = models.Group
serializer_class = serializers.GroupSerializer
@@ -2382,10 +2387,13 @@ class JobTemplateLaunch(RetrieveAPIView):
for field, ask_field_name in modified_ask_mapping.items():
if not getattr(obj, ask_field_name):
data.pop(field, None)
elif field == 'inventory':
elif isinstance(getattr(obj.__class__, field).field, ForeignKey):
data[field] = getattrd(obj, "%s.%s" % (field, 'id'), None)
elif field == 'credentials':
data[field] = [cred.id for cred in obj.credentials.all()]
elif isinstance(getattr(obj.__class__, field).field, ManyToManyField):
if field == 'instance_groups':
data[field] = []
continue
data[field] = [item.id for item in getattr(obj, field).all()]
else:
data[field] = getattr(obj, field)
return data
@@ -2398,9 +2406,8 @@ class JobTemplateLaunch(RetrieveAPIView):
"""
modern_data = data.copy()
id_fd = '{}_id'.format('inventory')
if 'inventory' not in modern_data and id_fd in modern_data:
modern_data['inventory'] = modern_data[id_fd]
if 'inventory' not in modern_data and 'inventory_id' in modern_data:
modern_data['inventory'] = modern_data['inventory_id']
# credential passwords were historically provided as top-level attributes
if 'credential_passwords' not in modern_data:
@@ -2720,28 +2727,9 @@ class JobTemplateCredentialsList(SubListCreateAttachDetachAPIView):
return super(JobTemplateCredentialsList, self).is_valid_relation(parent, sub, created)
class JobTemplateLabelList(DeleteLastUnattachLabelMixin, SubListCreateAttachDetachAPIView):
class JobTemplateLabelList(LabelSubListCreateAttachDetachView):
model = models.Label
serializer_class = serializers.LabelSerializer
parent_model = models.JobTemplate
relationship = 'labels'
def post(self, request, *args, **kwargs):
# If a label already exists in the database, attach it instead of erroring out
# that it already exists
if 'id' not in request.data and 'name' in request.data and 'organization' in request.data:
existing = models.Label.objects.filter(name=request.data['name'], organization_id=request.data['organization'])
if existing.exists():
existing = existing[0]
request.data['id'] = existing.id
del request.data['name']
del request.data['organization']
if models.Label.objects.filter(unifiedjobtemplate_labels=self.kwargs['pk']).count() > 100:
return Response(
dict(msg=_('Maximum number of labels for {} reached.'.format(self.parent_model._meta.verbose_name_raw))), status=status.HTTP_400_BAD_REQUEST
)
return super(JobTemplateLabelList, self).post(request, *args, **kwargs)
class JobTemplateCallback(GenericAPIView):
@@ -2967,6 +2955,22 @@ class WorkflowJobNodeCredentialsList(SubListAPIView):
relationship = 'credentials'
class WorkflowJobNodeLabelsList(SubListAPIView):
model = models.Label
serializer_class = serializers.LabelSerializer
parent_model = models.WorkflowJobNode
relationship = 'labels'
class WorkflowJobNodeInstanceGroupsList(SubListAttachDetachAPIView):
model = models.InstanceGroup
serializer_class = serializers.InstanceGroupSerializer
parent_model = models.WorkflowJobNode
relationship = 'instance_groups'
class WorkflowJobTemplateNodeList(ListCreateAPIView):
model = models.WorkflowJobTemplateNode
@@ -2985,6 +2989,19 @@ class WorkflowJobTemplateNodeCredentialsList(LaunchConfigCredentialsBase):
parent_model = models.WorkflowJobTemplateNode
class WorkflowJobTemplateNodeLabelsList(LabelSubListCreateAttachDetachView):
parent_model = models.WorkflowJobTemplateNode
class WorkflowJobTemplateNodeInstanceGroupsList(SubListAttachDetachAPIView):
model = models.InstanceGroup
serializer_class = serializers.InstanceGroupSerializer
parent_model = models.WorkflowJobTemplateNode
relationship = 'instance_groups'
class WorkflowJobTemplateNodeChildrenBaseList(EnforceParentRelationshipMixin, SubListCreateAttachDetachAPIView):
model = models.WorkflowJobTemplateNode
@@ -3197,13 +3214,17 @@ class WorkflowJobTemplateLaunch(RetrieveAPIView):
data['extra_vars'] = extra_vars
modified_ask_mapping = models.WorkflowJobTemplate.get_ask_mapping()
modified_ask_mapping.pop('extra_vars')
for field_name, ask_field_name in obj.get_ask_mapping().items():
for field, ask_field_name in modified_ask_mapping.items():
if not getattr(obj, ask_field_name):
data.pop(field_name, None)
elif field_name == 'inventory':
data[field_name] = getattrd(obj, "%s.%s" % (field_name, 'id'), None)
data.pop(field, None)
elif isinstance(getattr(obj.__class__, field).field, ForeignKey):
data[field] = getattrd(obj, "%s.%s" % (field, 'id'), None)
elif isinstance(getattr(obj.__class__, field).field, ManyToManyField):
data[field] = [item.id for item in getattr(obj, field).all()]
else:
data[field_name] = getattr(obj, field_name)
data[field] = getattr(obj, field)
return data
def post(self, request, *args, **kwargs):
@@ -3392,7 +3413,7 @@ class WorkflowJobCancel(RetrieveAPIView):
obj = self.get_object()
if obj.can_cancel:
obj.cancel()
schedule_task_manager()
ScheduleWorkflowManager().schedule()
return Response(status=status.HTTP_202_ACCEPTED)
else:
return self.http_method_not_allowed(request, *args, **kwargs)
@@ -3690,15 +3711,21 @@ class JobCreateSchedule(RetrieveAPIView):
extra_data=config.extra_data,
survey_passwords=config.survey_passwords,
inventory=config.inventory,
execution_environment=config.execution_environment,
char_prompts=config.char_prompts,
credentials=set(config.credentials.all()),
labels=set(config.labels.all()),
instance_groups=list(config.instance_groups.all()),
)
if not request.user.can_access(models.Schedule, 'add', schedule_data):
raise PermissionDenied()
creds_list = schedule_data.pop('credentials')
related_fields = ('credentials', 'labels', 'instance_groups')
related = [schedule_data.pop(relationship) for relationship in related_fields]
schedule = models.Schedule.objects.create(**schedule_data)
schedule.credentials.add(*creds_list)
for relationship, items in zip(related_fields, related):
for item in items:
getattr(schedule, relationship).add(item)
data = serializers.ScheduleSerializer(schedule, context=self.get_serializer_context()).data
data.serializer.instance = None # hack to avoid permissions.py assuming this is Job model
@@ -3840,7 +3867,7 @@ class JobJobEventsList(BaseJobEventsList):
def get_queryset(self):
job = self.get_parent_object()
self.check_parent_access(job)
return job.get_event_queryset().select_related('host').order_by('start_line')
return job.get_event_queryset().prefetch_related('job__job_template', 'host').order_by('start_line')
class JobJobEventsChildrenSummary(APIView):
@@ -3849,7 +3876,7 @@ class JobJobEventsChildrenSummary(APIView):
meta_events = ('debug', 'verbose', 'warning', 'error', 'system_warning', 'deprecated')
def get(self, request, **kwargs):
resp = dict(children_summary={}, meta_event_nested_uuid={}, event_processing_finished=False)
resp = dict(children_summary={}, meta_event_nested_uuid={}, event_processing_finished=False, is_tree=True)
job = get_object_or_404(models.Job, pk=kwargs['pk'])
if not job.event_processing_finished:
return Response(resp)
@@ -3869,13 +3896,41 @@ class JobJobEventsChildrenSummary(APIView):
# key is counter of meta events (i.e. verbose), value is uuid of the assigned parent
map_meta_counter_nested_uuid = {}
# collapsible tree view in the UI only makes sense for a tree-like
# hierarchy. If Ansible is run with a strategy like free or host_pinned, then
# events can be out of sequential order, and no longer follow a tree structure
# E1
# E2
# E3
# E4 <- parent is E3
# E5 <- parent is E1
# in the above, there is no clear way to collapse E1, because E5 comes after
# E3, which occurs after E1. Thus the tree view should be disabled.
# mark the last seen uuid at a given level (0-3)
# if a parent uuid is not in this list, then we know the events are not tree-like
# and return a response with is_tree: False
level_current_uuid = [None, None, None, None]
prev_non_meta_event = events[0]
for i, e in enumerate(events):
if not e['event'] in JobJobEventsChildrenSummary.meta_events:
prev_non_meta_event = e
if not e['uuid']:
continue
if not e['event'] in JobJobEventsChildrenSummary.meta_events:
level = models.JobEvent.LEVEL_FOR_EVENT[e['event']]
level_current_uuid[level] = e['uuid']
# if setting level 1, for example, set levels 2 and 3 back to None
for u in range(level + 1, len(level_current_uuid)):
level_current_uuid[u] = None
puuid = e['parent_uuid']
if puuid and puuid not in level_current_uuid:
# improper tree detected, so bail out early
resp['is_tree'] = False
return Response(resp)
# if event is verbose (or debug, etc), we need to "assign" it a
# parent. This code looks at the event level of the previous
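
A toy illustration of the tree check described in the comments above (not AWX code): remember the last uuid seen at each event level, and if an event's parent_uuid is not among them, the events cannot be rendered as a collapsible tree.

def is_tree(events, level_for_event):
    level_current_uuid = [None, None, None, None]
    for e in events:
        level = level_for_event[e['event']]
        level_current_uuid[level] = e['uuid']
        # entering level N invalidates anything previously seen deeper than N
        for u in range(level + 1, len(level_current_uuid)):
            level_current_uuid[u] = None
        puuid = e.get('parent_uuid')
        if puuid and puuid not in level_current_uuid:
            return False  # out-of-order parents, e.g. free or host_pinned strategies
    return True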
@@ -4401,18 +4456,6 @@ class NotificationDetail(RetrieveAPIView):
serializer_class = serializers.NotificationSerializer
class LabelList(ListCreateAPIView):
model = models.Label
serializer_class = serializers.LabelSerializer
class LabelDetail(RetrieveUpdateAPIView):
model = models.Label
serializer_class = serializers.LabelSerializer
class ActivityStreamList(SimpleListAPIView):
model = models.ActivityStream

awx/api/views/debug.py Normal file

@@ -0,0 +1,68 @@
from collections import OrderedDict
from django.conf import settings
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from awx.api.generics import APIView
from awx.main.scheduler import TaskManager, DependencyManager, WorkflowManager
class TaskManagerDebugView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Task'
def get(self, request):
TaskManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
msg = f"Running {self.prefix} manager. To disable other triggers to the {self.prefix} manager, set AWX_DISABLE_TASK_MANAGERS to True"
else:
msg = f"AWX_DISABLE_TASK_MANAGERS is True, this view is the only way to trigger the {self.prefix} manager"
return Response(msg)
class DependencyManagerDebugView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Dependency'
def get(self, request):
DependencyManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
msg = f"Running {self.prefix} manager. To disable other triggers to the {self.prefix} manager, set AWX_DISABLE_TASK_MANAGERS to True"
else:
msg = f"AWX_DISABLE_TASK_MANAGERS is True, this view is the only way to trigger the {self.prefix} manager"
return Response(msg)
class WorkflowManagerDebugView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
prefix = 'Workflow'
def get(self, request):
WorkflowManager().schedule()
if not settings.AWX_DISABLE_TASK_MANAGERS:
msg = f"Running {self.prefix} manager. To disable other triggers to the {self.prefix} manager, set AWX_DISABLE_TASK_MANAGERS to True"
else:
msg = f"AWX_DISABLE_TASK_MANAGERS is True, this view is the only way to trigger the {self.prefix} manager"
return Response(msg)
class DebugRootView(APIView):
_ignore_model_permissions = True
exclude_from_schema = True
permission_classes = [AllowAny]
def get(self, request, format=None):
'''List of available debug urls'''
data = OrderedDict()
data['task_manager'] = '/api/debug/task_manager/'
data['dependency_manager'] = '/api/debug/dependency_manager/'
data['workflow_manager'] = '/api/debug/workflow_manager/'
return Response(data)
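
Because these views use AllowAny and the root view lists the available paths, triggering a manager by hand is a plain GET against a development deployment (sketch; the host is hypothetical):

import requests

for manager in ("task_manager", "dependency_manager", "workflow_manager"):
    r = requests.get(f"https://awx.example.org/api/debug/{manager}/", verify=False)
    print(manager, r.json())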


@@ -0,0 +1,199 @@
# Copyright (c) 2018 Red Hat, Inc.
# All Rights Reserved.
import datetime
import io
import ipaddress
import os
import tarfile
import asn1
from awx.api import serializers
from awx.api.generics import GenericAPIView, Response
from awx.api.permissions import IsSystemAdminOrAuditor
from awx.main import models
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509 import DNSName, IPAddress, ObjectIdentifier, OtherName
from cryptography.x509.oid import NameOID
from django.http import HttpResponse
from django.template.loader import render_to_string
from django.utils.translation import gettext_lazy as _
from rest_framework import status
# Red Hat has an OID namespace (RHANANA). Receptor has its own designation under that.
RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
# generate install bundle for the instance
# install bundle directory structure
# ├── install_receptor.yml (playbook)
# ├── inventory.yml
# ├── group_vars
# │ └── all.yml
# ├── receptor
# │ ├── tls
# │ │ ├── ca
# │ │ │ └── receptor-ca.crt
# │ │ ├── receptor.crt
# │ │ └── receptor.key
# │ └── work-public-key.pem
# └── requirements.yml
class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle')
model = models.Instance
serializer_class = serializers.InstanceSerializer
permission_classes = (IsSystemAdminOrAuditor,)
def get(self, request, *args, **kwargs):
instance_obj = self.get_object()
if instance_obj.node_type not in ('execution',):
return Response(
data=dict(msg=_('Install bundle can only be generated for execution nodes.')),
status=status.HTTP_400_BAD_REQUEST,
)
with io.BytesIO() as f:
with tarfile.open(fileobj=f, mode='w:gz') as tar:
# copy /etc/receptor/tls/ca/receptor-ca.crt to receptor/tls/ca in the tar file
tar.add(
os.path.realpath('/etc/receptor/tls/ca/receptor-ca.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/receptor-ca.crt"
)
# copy /etc/receptor/signing/work-public-key.pem to receptor/work-public-key.pem
tar.add('/etc/receptor/signing/work-public-key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work-public-key.pem")
# generate and write the receptor key to receptor/tls/receptor.key in the tar file
key, cert = generate_receptor_tls(instance_obj)
key_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.key")
key_tarinfo.size = len(key)
tar.addfile(key_tarinfo, io.BytesIO(key))
cert_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/receptor/tls/receptor.crt")
cert_tarinfo.size = len(cert)
tar.addfile(cert_tarinfo, io.BytesIO(cert))
# generate and write install_receptor.yml to the tar file
playbook = generate_playbook().encode('utf-8')
playbook_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/install_receptor.yml")
playbook_tarinfo.size = len(playbook)
tar.addfile(playbook_tarinfo, io.BytesIO(playbook))
# generate and write inventory.yml to the tar file
inventory_yml = generate_inventory_yml(instance_obj).encode('utf-8')
inventory_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/inventory.yml")
inventory_yml_tarinfo.size = len(inventory_yml)
tar.addfile(inventory_yml_tarinfo, io.BytesIO(inventory_yml))
# generate and write group_vars/all.yml to the tar file
group_vars = generate_group_vars_all_yml(instance_obj).encode('utf-8')
group_vars_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/group_vars/all.yml")
group_vars_tarinfo.size = len(group_vars)
tar.addfile(group_vars_tarinfo, io.BytesIO(group_vars))
# generate and write requirements.yml to the tar file
requirements_yml = generate_requirements_yml().encode('utf-8')
requirements_yml_tarinfo = tarfile.TarInfo(f"{instance_obj.hostname}_install_bundle/requirements.yml")
requirements_yml_tarinfo.size = len(requirements_yml)
tar.addfile(requirements_yml_tarinfo, io.BytesIO(requirements_yml))
# respond with the tarfile
f.seek(0)
response = HttpResponse(f.read(), status=status.HTTP_200_OK)
response['Content-Disposition'] = f"attachment; filename={instance_obj.hostname}_install_bundle.tar.gz"
return response
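
A sketch of downloading the generated bundle; the endpoint requires a system admin or auditor, and the host, instance id, and credentials below are hypothetical:

import requests

resp = requests.get(
    "https://awx.example.org/api/v2/instances/5/install_bundle/",
    auth=("admin", "password"),
    verify=False,
)
resp.raise_for_status()
with open("node_install_bundle.tar.gz", "wb") as f:
    f.write(resp.content)  # tar.gz laid out as in the directory sketch above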
def generate_playbook():
return render_to_string("instance_install_bundle/install_receptor.yml")
def generate_requirements_yml():
return render_to_string("instance_install_bundle/requirements.yml")
def generate_inventory_yml(instance_obj):
return render_to_string("instance_install_bundle/inventory.yml", context=dict(instance=instance_obj))
def generate_group_vars_all_yml(instance_obj):
return render_to_string("instance_install_bundle/group_vars/all.yml", context=dict(instance=instance_obj))
def generate_receptor_tls(instance_obj):
# generate private key for the receptor
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# encode receptor hostname to asn1
hostname = instance_obj.hostname
encoder = asn1.Encoder()
encoder.start()
encoder.write(hostname.encode(), nr=asn1.Numbers.UTF8String)
hostname_asn1 = encoder.output()
san_params = [
DNSName(hostname),
OtherName(ObjectIdentifier(RECEPTOR_OID), hostname_asn1),
]
try:
san_params.append(IPAddress(ipaddress.IPv4Address(hostname)))
except ipaddress.AddressValueError:
pass
# generate certificate for the receptor
csr = (
x509.CertificateSigningRequestBuilder()
.subject_name(
x509.Name(
[
x509.NameAttribute(NameOID.COMMON_NAME, hostname),
]
)
)
.add_extension(
x509.SubjectAlternativeName(san_params),
critical=False,
)
.sign(key, hashes.SHA256())
)
# sign csr with the receptor ca key from /etc/receptor/ca/receptor-ca.key
with open('/etc/receptor/tls/ca/receptor-ca.key', 'rb') as f:
ca_key = serialization.load_pem_private_key(
f.read(),
password=None,
)
with open('/etc/receptor/tls/ca/receptor-ca.crt', 'rb') as f:
ca_cert = x509.load_pem_x509_certificate(f.read())
cert = (
x509.CertificateBuilder()
.subject_name(csr.subject)
.issuer_name(ca_cert.issuer)
.public_key(csr.public_key())
.serial_number(x509.random_serial_number())
.not_valid_before(datetime.datetime.utcnow())
.not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=10))
.add_extension(
csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).value,
critical=csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).critical,
)
.sign(ca_key, hashes.SHA256())
)
key = key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption(),
)
cert = cert.public_bytes(
encoding=serialization.Encoding.PEM,
)
return key, cert
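
As a quick sanity check, the returned PEM bytes can be inspected to confirm the SAN entries built above; a sketch using the same cryptography package, where `cert` is the second value returned by generate_receptor_tls:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

cert_obj = x509.load_pem_x509_certificate(cert)
san = cert_obj.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
print(san.get_values_for_type(x509.DNSName))  # should include the node hostname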


@@ -18,8 +18,6 @@ from rest_framework import status
# AWX
from awx.main.models import ActivityStream, Inventory, JobTemplate, Role, User, InstanceGroup, InventoryUpdateEvent, InventoryUpdate
from awx.main.models.label import Label
from awx.api.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
@@ -27,9 +25,8 @@ from awx.api.generics import (
SubListAttachDetachAPIView,
ResourceAccessList,
CopyAPIView,
DeleteLastUnattachLabelMixin,
SubListCreateAttachDetachAPIView,
)
from awx.api.views.labels import LabelSubListCreateAttachDetachView
from awx.api.serializers import (
@@ -39,9 +36,8 @@ from awx.api.serializers import (
InstanceGroupSerializer,
InventoryUpdateEventSerializer,
JobTemplateSerializer,
LabelSerializer,
)
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin, ControlledByScmMixin
from awx.api.views.mixin import RelatedJobsPreventDeleteMixin
from awx.api.pagination import UnifiedJobEventPagination
@@ -75,7 +71,7 @@ class InventoryList(ListCreateAPIView):
serializer_class = InventorySerializer
class InventoryDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Inventory
serializer_class = InventorySerializer
@@ -157,28 +153,9 @@ class InventoryJobTemplateList(SubListAPIView):
return qs.filter(inventory=parent)
class InventoryLabelList(DeleteLastUnattachLabelMixin, SubListCreateAttachDetachAPIView, SubListAPIView):
class InventoryLabelList(LabelSubListCreateAttachDetachView):
model = Label
serializer_class = LabelSerializer
parent_model = Inventory
relationship = 'labels'
def post(self, request, *args, **kwargs):
# If a label already exists in the database, attach it instead of erroring out
# that it already exists
if 'id' not in request.data and 'name' in request.data and 'organization' in request.data:
existing = Label.objects.filter(name=request.data['name'], organization_id=request.data['organization'])
if existing.exists():
existing = existing[0]
request.data['id'] = existing.id
del request.data['name']
del request.data['organization']
if Label.objects.filter(inventory_labels=self.kwargs['pk']).count() > 100:
return Response(
dict(msg=_('Maximum number of labels for {} reached.'.format(self.parent_model._meta.verbose_name_raw))), status=status.HTTP_400_BAD_REQUEST
)
return super(InventoryLabelList, self).post(request, *args, **kwargs)
class InventoryCopy(CopyAPIView):

awx/api/views/labels.py Normal file

@@ -0,0 +1,71 @@
# AWX
from awx.api.generics import SubListCreateAttachDetachAPIView, RetrieveUpdateAPIView, ListCreateAPIView
from awx.main.models import Label
from awx.api.serializers import LabelSerializer
# Django
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.response import Response
from rest_framework.status import HTTP_400_BAD_REQUEST
class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
"""
For related labels lists like /api/v2/inventories/N/labels/
We want the last label instance to be deleted from the database
when the last disassociate happens.
Subclasses need to define parent_model
"""
model = Label
serializer_class = LabelSerializer
relationship = 'labels'
def unattach(self, request, *args, **kwargs):
(sub_id, res) = super().unattach_validate(request)
if res:
return res
res = super().unattach_by_id(request, sub_id)
obj = self.model.objects.get(id=sub_id)
if obj.is_detached():
obj.delete()
return res
def post(self, request, *args, **kwargs):
# If a label already exists in the database, attach it instead of erroring out
# that it already exists
if 'id' not in request.data and 'name' in request.data and 'organization' in request.data:
existing = Label.objects.filter(name=request.data['name'], organization_id=request.data['organization'])
if existing.exists():
existing = existing[0]
request.data['id'] = existing.id
del request.data['name']
del request.data['organization']
# Give a 400 error if we have attached too many labels to this object
label_filter = self.parent_model._meta.get_field(self.relationship).remote_field.name
if Label.objects.filter(**{label_filter: self.kwargs['pk']}).count() > 100:
return Response(dict(msg=_(f'Maximum number of labels for {self.parent_model._meta.verbose_name_raw} reached.')), status=HTTP_400_BAD_REQUEST)
return super().post(request, *args, **kwargs)
class LabelDetail(RetrieveUpdateAPIView):
model = Label
serializer_class = LabelSerializer
class LabelList(ListCreateAPIView):
name = _("Labels")
model = Label
serializer_class = LabelSerializer
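
Because of the attach-or-create handling in post() above, a client may send either an existing label id or a name/organization pair; a sketch (hypothetical host, ids, and credentials):

import requests

# Attaches the existing label if one with this name/organization exists,
# otherwise creates it; exceeding 100 labels on the parent yields a 400.
resp = requests.post(
    "https://awx.example.org/api/v2/inventories/42/labels/",
    auth=("admin", "password"),
    json={"name": "production", "organization": 1},
    verify=False,
)
print(resp.status_code)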


@@ -10,13 +10,12 @@ from django.shortcuts import get_object_or_404
from django.utils.timezone import now
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import SAFE_METHODS
from rest_framework.exceptions import PermissionDenied
from rest_framework.response import Response
from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.utils import get_object_or_400, parse_yaml_or_json
from awx.main.utils import get_object_or_400
from awx.main.models.ha import Instance, InstanceGroup
from awx.main.models.organization import Team
from awx.main.models.projects import Project
@@ -186,35 +185,6 @@ class OrganizationCountsMixin(object):
return full_context
class ControlledByScmMixin(object):
"""
Special method to reset SCM inventory commit hash
if anything that it manages changes.
"""
def _reset_inv_src_rev(self, obj):
if self.request.method in SAFE_METHODS or not obj:
return
project_following_sources = obj.inventory_sources.filter(update_on_project_update=True, source='scm')
if project_following_sources:
# Allow inventory changes unrelated to variables
if self.model == Inventory and (
not self.request or not self.request.data or parse_yaml_or_json(self.request.data.get('variables', '')) == parse_yaml_or_json(obj.variables)
):
return
project_following_sources.update(scm_last_revision='')
def get_object(self):
obj = super(ControlledByScmMixin, self).get_object()
self._reset_inv_src_rev(obj)
return obj
def get_parent_object(self):
obj = super(ControlledByScmMixin, self).get_parent_object()
self._reset_inv_src_rev(obj)
return obj
class NoTruncateMixin(object):
def get_serializer_context(self):
context = super().get_serializer_context()


@@ -204,7 +204,7 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
return h.hexdigest()
def get_event_status_api(self):
if self.get_event_type() != 'Merge Request Hook':
if self.get_event_type() not in self.ref_keys.keys():
return
project = self.request.data.get('project', {})
repo_url = project.get('web_url')


@@ -1,7 +1,6 @@
# Python
import contextlib
import logging
import sys
import threading
import time
import os
@@ -31,7 +30,7 @@ from awx.conf.models import Setting
logger = logging.getLogger('awx.conf.settings')
SETTING_MEMORY_TTL = 5 if 'callback_receiver' in ' '.join(sys.argv) else 0
SETTING_MEMORY_TTL = 5
# Store a special value to indicate when a setting is not set in the database.
SETTING_CACHE_NOTSET = '___notset___'
@@ -81,7 +80,7 @@ def _ctit_db_wrapper(trans_safe=False):
yield
except DBError as exc:
if trans_safe:
level = logger.exception
level = logger.warning
if isinstance(exc, ProgrammingError):
if 'relation' in str(exc) and 'does not exist' in str(exc):
# this generally means we can't fetch Tower configuration
@@ -90,7 +89,7 @@ def _ctit_db_wrapper(trans_safe=False):
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
level = logger.debug
level('Database settings are not available, using defaults.')
level(f'Database settings are not available, using defaults. error: {str(exc)}')
else:
logger.exception('Error modifying something related to database settings.')
finally:
@@ -234,6 +233,8 @@ class SettingsWrapper(UserSettingsHolder):
self.__dict__['_awx_conf_init_readonly'] = False
self.__dict__['cache'] = EncryptedCacheProxy(cache, registry)
self.__dict__['registry'] = registry
self.__dict__['_awx_conf_memoizedcache'] = cachetools.TTLCache(maxsize=2048, ttl=SETTING_MEMORY_TTL)
self.__dict__['_awx_conf_memoizedcache_lock'] = threading.Lock()
# record the current pid so we compare it post-fork for
# processes like the dispatcher and callback receiver
@@ -396,12 +397,20 @@ class SettingsWrapper(UserSettingsHolder):
def SETTINGS_MODULE(self):
return self._get_default('SETTINGS_MODULE')
@cachetools.cached(cache=cachetools.TTLCache(maxsize=2048, ttl=SETTING_MEMORY_TTL))
@cachetools.cachedmethod(
cache=lambda self: self.__dict__['_awx_conf_memoizedcache'],
key=lambda *args, **kwargs: SettingsWrapper.hashkey(*args, **kwargs),
lock=lambda self: self.__dict__['_awx_conf_memoizedcache_lock'],
)
def _get_local_with_cache(self, name):
"""Get value while accepting the in-memory cache if key is available"""
with _ctit_db_wrapper(trans_safe=True):
return self._get_local(name)
def __getattr__(self, name):
value = empty
if name in self.all_supported_settings:
with _ctit_db_wrapper(trans_safe=True):
value = self._get_local(name)
value = self._get_local_with_cache(name)
if value is not empty:
return value
return self._get_default(name)
@@ -475,6 +484,23 @@ class SettingsWrapper(UserSettingsHolder):
set_on_default = getattr(self.default_settings, 'is_overridden', lambda s: False)(setting)
return set_locally or set_on_default
@classmethod
def hashkey(cls, *args, **kwargs):
"""
Usage of @cachetools.cached has changed to @cachetools.cachedmethod
The previous cachetools decorator called the hash function and passed in (self, key).
The new cachetools decorator calls the hash function with just (key).
Ideally, we would continue to pass self; however, the cachetools decorator interface
does not allow us to.
This hashkey function is to maintain that the key generated looks like
('<SettingsWrapper>', key). The thought is that maybe it is important to namespace
our cache to the SettingsWrapper scope in case some other usage of this cache exists.
I cannot think of how any other system could and would use our private cache, but
for safety's sake we are ensuring the key schema does not change.
"""
return cachetools.keys.hashkey(f"<{cls.__name__}>", *args, **kwargs)
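
Outside of AWX, the same caching pattern looks like the following self-contained sketch: a per-instance TTL cache shared through cachetools.cachedmethod, guarded by a lock, with a namespaced key.

import threading
import cachetools

class Config:
    def __init__(self):
        self._cache = cachetools.TTLCache(maxsize=128, ttl=5)
        self._lock = threading.Lock()

    @cachetools.cachedmethod(
        cache=lambda self: self._cache,
        key=lambda *args, **kwargs: cachetools.keys.hashkey("<Config>", *args, **kwargs),
        lock=lambda self: self._lock,
    )
    def lookup(self, name):
        print(f"computing {name}")  # printed only on a cache miss
        return name.upper()

cfg = Config()
cfg.lookup("awx_var")  # miss: computes and stores
cfg.lookup("awx_var")  # hit: served from the TTL cache for up to 5 seconds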
def __getattr_without_cache__(self, name):
# Django 1.10 added an optimization to settings lookup:


@@ -28,6 +28,9 @@ def handle_setting_change(key, for_delete=False):
cache_keys = {Setting.get_cache_key(k) for k in setting_keys}
cache.delete_many(cache_keys)
# if we have changed a setting, we want to avoid mucking with the in-memory cache entirely
settings._awx_conf_memoizedcache.clear()
# Send setting_changed signal with new value for each setting.
for setting_key in setting_keys:
setting_changed.send(sender=Setting, setting=setting_key, value=getattr(settings, setting_key, None), enter=not bool(for_delete))


@@ -8,6 +8,8 @@ import codecs
from uuid import uuid4
import time
from unittest import mock
from django.conf import LazySettings
from django.core.cache.backends.locmem import LocMemCache
from django.core.exceptions import ImproperlyConfigured
@@ -299,3 +301,33 @@ def test_readonly_sensitive_cache_data_is_encrypted(settings):
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
@pytest.mark.defined_in_file(AWX_VAR='DEFAULT')
def test_in_memory_cache_only_for_registered_settings(settings):
"Test that we only make use of the in-memory TTL cache for registered settings"
settings._awx_conf_memoizedcache.clear()
settings.MIDDLEWARE
assert len(settings._awx_conf_memoizedcache) == 0 # does not cache MIDDLEWARE
settings.registry.register('AWX_VAR', field_class=fields.CharField, category=_('System'), category_slug='system')
settings._wrapped.__dict__['all_supported_settings'] = ['AWX_VAR'] # because it is cached_property
settings._awx_conf_memoizedcache.clear()
assert settings.AWX_VAR == 'DEFAULT'
assert len(settings._awx_conf_memoizedcache) == 1 # caches registered settings
@pytest.mark.defined_in_file(AWX_VAR='DEFAULT')
def test_in_memory_cache_works(settings):
settings._awx_conf_memoizedcache.clear()
settings.registry.register('AWX_VAR', field_class=fields.CharField, category=_('System'), category_slug='system')
settings._wrapped.__dict__['all_supported_settings'] = ['AWX_VAR']
settings._awx_conf_memoizedcache.clear()
with mock.patch('awx.conf.settings.SettingsWrapper._get_local', return_value='DEFAULT') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_called_once_with('AWX_VAR')
with mock.patch.object(settings, '_get_local') as mock_get:
assert settings.AWX_VAR == 'DEFAULT'
mock_get.assert_not_called()


@@ -1440,7 +1440,7 @@ msgstr "指定した認証情報は無効 (HTTP 401) です。"
#: awx/api/views/root.py:193 awx/api/views/root.py:234
msgid "Unable to connect to proxy server."
msgstr "プロキシサーバーに接続できません。"
msgstr "プロキシサーバーに接続できません。"
#: awx/api/views/root.py:195 awx/api/views/root.py:236
msgid "Could not connect to subscription service."
@@ -1976,7 +1976,7 @@ msgstr "リモートホスト名または IP を判別するために検索す
#: awx/main/conf.py:85
msgid "Proxy IP Allowed List"
msgstr "プロキシ IP 許可リスト"
msgstr "プロキシ IP 許可リスト"
#: awx/main/conf.py:87
msgid ""
@@ -2198,7 +2198,7 @@ msgid ""
"Follow symbolic links when scanning for playbooks. Be aware that setting "
"this to True can lead to infinite recursion if a link points to a parent "
"directory of itself."
msgstr "Playbook スキャンするときは、シンボリックリンクをたどってください。リンクがそれ自体の親ディレクトリーをしている場合は、こを True に定すると無限再帰が発生する可能性があることに注意してください。"
msgstr "Playbook スキャン時にシンボリックリンクをたどります。リンクが親ディレクトリーを参照している場合は、この設定を True に定すると無限再帰が発生する可能性があります。"
#: awx/main/conf.py:337
msgid "Ignore Ansible Galaxy SSL Certificate Verification"
@@ -2499,7 +2499,7 @@ msgstr "Insights for Ansible Automation Platform の最終収集日。"
msgid ""
"Last gathered entries for expensive collectors for Insights for Ansible "
"Automation Platform."
msgstr "Insights for Ansible Automation Platform の高価なコレクター最後に収集されたエントリー"
msgstr "Insights for Ansible Automation Platform でコストがかかっているコレクターに関して最後に収集されたエントリー"
#: awx/main/conf.py:686
msgid "Insights for Ansible Automation Platform Gather Interval"
@@ -3692,7 +3692,7 @@ msgstr "タスクの開始"
#: awx/main/models/events.py:189
msgid "Variables Prompted"
msgstr "変数のプロモート"
msgstr "提示される変数"
#: awx/main/models/events.py:190
msgid "Gathering Facts"
@@ -3741,15 +3741,15 @@ msgstr "エラー"
#: awx/main/models/execution_environments.py:17
msgid "Always pull container before running."
msgstr "実行前に必ずコンテナーをプルしてください。"
msgstr "実行前に必ずコンテナーをプルする"
#: awx/main/models/execution_environments.py:18
msgid "Only pull the image if not present before running."
msgstr "実行する前に、存在しない場合のみイメージをプルしてください。"
msgstr "イメージが存在しない場合のみ実行前にプルする"
#: awx/main/models/execution_environments.py:19
msgid "Never pull container before running."
msgstr "実行前にコンテナーをプルしないでください。"
msgstr "実行前にコンテナーをプルしない"
#: awx/main/models/execution_environments.py:29
msgid ""
@@ -5228,7 +5228,7 @@ msgid ""
"SSL) or \"ldaps://ldap.example.com:636\" (SSL). Multiple LDAP servers may be "
"specified by separating with spaces or commas. LDAP authentication is "
"disabled if this parameter is empty."
msgstr "\"ldap://ldap.example.com:389\" (非 SSL) または \"ldaps://ldap.example.com:636\" (SSL) などの LDAP サーバーに接続する URI です。複数の LDAP サーバーをスペースまたはンマで区切って指定できます。LDAP 認証は、このパラメーターが空の場合は無効になります。"
msgstr "\"ldap://ldap.example.com:389\" (非 SSL) または \"ldaps://ldap.example.com:636\" (SSL) などの LDAP サーバーに接続する URI です。複数の LDAP サーバーをスペースまたはンマで区切って指定できます。LDAP 認証は、このパラメーターが空の場合は無効になります。"
#: awx/sso/conf.py:170 awx/sso/conf.py:187 awx/sso/conf.py:198
#: awx/sso/conf.py:209 awx/sso/conf.py:226 awx/sso/conf.py:244
@@ -6236,4 +6236,5 @@ msgstr "%s が現在アップグレード中です。"
#: awx/ui/urls.py:24
msgid "This page will refresh when complete."
msgstr "このページは完了すると更新されます。"
msgstr "このページは完了すると更新されます。"


@@ -956,7 +956,7 @@ msgstr "인스턴스 그룹의 인스턴스"
#: awx/api/views/__init__.py:450
msgid "Schedules"
msgstr "일정"
msgstr "스케줄"
#: awx/api/views/__init__.py:464
msgid "Schedule Recurrence Rule Preview"
@@ -3261,7 +3261,7 @@ msgstr "JSON 또는 YAML 구문을 사용하여 인젝터를 입력합니다.
#: awx/main/models/credential/__init__.py:412
#, python-format
msgid "adding %s credential type"
msgstr "인증 정보 유형 %s 추가 중"
msgstr "인증 정보 유형 %s 추가 중"
#: awx/main/models/credential/__init__.py:590
#: awx/main/models/credential/__init__.py:672
@@ -6236,4 +6236,5 @@ msgstr "%s 현재 업그레이드 중입니다."
#: awx/ui/urls.py:24
msgid "This page will refresh when complete."
msgstr "완료되면 이 페이지가 새로 고침됩니다."
msgstr "완료되면 이 페이지가 새로 고침됩니다."


@@ -348,7 +348,7 @@ msgstr "SCM track_submodules 只能用于 git 项目。"
msgid ""
"Only Container Registry credentials can be associated with an Execution "
"Environment"
msgstr "只有容器 registry 凭证可以与执行环境关联"
msgstr "只有容器注册表凭证可以与执行环境关联"
#: awx/api/serializers.py:1440
msgid "Cannot change the organization of an execution environment"
@@ -629,7 +629,7 @@ msgstr "不支持在不替换的情况下在启动时删除 {} 凭证。提供
#: awx/api/serializers.py:4338
msgid "The inventory associated with this Workflow is being deleted."
msgstr "与此 Workflow 关联的清单将被删除。"
msgstr "与此工作流关联的清单将被删除。"
#: awx/api/serializers.py:4405
msgid "Message type '{}' invalid, must be either 'message' or 'body'"
@@ -3229,7 +3229,7 @@ msgstr "云"
#: awx/main/models/credential/__init__.py:336
#: awx/main/models/credential/__init__.py:1113
msgid "Container Registry"
msgstr "容器 Registry"
msgstr "容器注册表"
#: awx/main/models/credential/__init__.py:337
msgid "Personal Access Token"
@@ -3560,7 +3560,7 @@ msgstr "身份验证 URL"
#: awx/main/models/credential/__init__.py:1120
msgid "Authentication endpoint for the container registry."
msgstr "容器 registry 的身份验证端点。"
msgstr "容器注册表的身份验证端点。"
#: awx/main/models/credential/__init__.py:1130
msgid "Password or Token"
@@ -3764,7 +3764,7 @@ msgstr "镜像位置"
msgid ""
"The full image location, including the container registry, image name, and "
"version tag."
msgstr "完整镜像位置,包括容器 registry、镜像名称和版本标签。"
msgstr "完整镜像位置,包括容器注册表、镜像名称和版本标签。"
#: awx/main/models/execution_environments.py:51
msgid "Pull image before running?"
@@ -6238,4 +6238,5 @@ msgstr "%s 当前正在升级。"
#: awx/ui/urls.py:24
msgid "This page will refresh when complete."
msgstr "完成后,此页面会刷新。"
msgstr "完成后,此页面会刷新。"


@@ -12,7 +12,7 @@ from django.conf import settings
from django.db.models import Q, Prefetch
from django.contrib.auth.models import User
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import ObjectDoesNotExist
from django.core.exceptions import ObjectDoesNotExist, FieldDoesNotExist
# Django REST Framework
from rest_framework.exceptions import ParseError, PermissionDenied
@@ -281,13 +281,23 @@ class BaseAccess(object):
"""
return True
def assure_relationship_exists(self, obj, relationship):
if '.' in relationship:
return # not attempting validation for complex relationships now
try:
obj._meta.get_field(relationship)
except FieldDoesNotExist:
raise NotImplementedError(f'The relationship {relationship} does not exist for model {type(obj)}')
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
self.assure_relationship_exists(obj, relationship)
if skip_sub_obj_read_check:
return self.can_change(obj, None)
else:
return bool(self.can_change(obj, None) and self.user.can_access(type(sub_obj), 'read', sub_obj))
def can_unattach(self, obj, sub_obj, relationship, data=None):
self.assure_relationship_exists(obj, relationship)
return self.can_change(obj, data)
def check_related(self, field, Model, data, role_field='admin_role', obj=None, mandatory=False):
@@ -328,6 +338,8 @@ class BaseAccess(object):
role = getattr(resource, role_field, None)
if role is None:
# Handle special case where resource does not have direct roles
if role_field == 'read_role':
return self.user.can_access(type(resource), 'read', resource)
access_method_type = {'admin_role': 'change', 'execute_role': 'start'}[role_field]
return self.user.can_access(type(resource), access_method_type, resource, None)
return self.user in role
@@ -499,6 +511,21 @@ class BaseAccess(object):
return False
class UnifiedCredentialsMixin(BaseAccess):
"""
The credentials many-to-many is a standard relationship for JT, jobs, and others.
Permission to attach always requires use permission on the credential, and permission to unattach requires admin on the parent object.
"""
@check_superuser
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == 'credentials':
if not isinstance(sub_obj, Credential):
raise RuntimeError(f'Can only attach credentials to credentials relationship, got {type(sub_obj)}')
return self.can_change(obj, None) and (self.user in sub_obj.use_role)
return super().can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
class NotificationAttachMixin(BaseAccess):
"""For models that can have notifications attached
@@ -552,7 +579,8 @@ class InstanceAccess(BaseAccess):
return super(InstanceAccess, self).can_unattach(obj, sub_obj, relationship, data=data)
def can_add(self, data):
return False
return self.user.is_superuser
def can_change(self, obj, data):
return False
@@ -1031,7 +1059,7 @@ class GroupAccess(BaseAccess):
return bool(obj and self.user in obj.inventory.admin_role)
class InventorySourceAccess(NotificationAttachMixin, BaseAccess):
class InventorySourceAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAccess):
"""
I can see inventory sources whenever I can see their inventory.
I can change inventory sources whenever I can change their inventory.
@@ -1075,18 +1103,6 @@ class InventorySourceAccess(NotificationAttachMixin, BaseAccess):
return self.user in obj.inventory.update_role
return False
@check_superuser
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return obj and obj.inventory and self.user in obj.inventory.admin_role and self.user in sub_obj.use_role
return super(InventorySourceAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return obj and obj.inventory and self.user in obj.inventory.admin_role
return super(InventorySourceAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
class InventoryUpdateAccess(BaseAccess):
"""
@@ -1485,7 +1501,7 @@ class ProjectUpdateAccess(BaseAccess):
return obj and self.user in obj.project.admin_role
class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAccess):
"""
I can see job templates when:
- I have read role for the job template.
@@ -1549,8 +1565,7 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
if self.user not in inventory.use_role:
return False
ee = get_value(ExecutionEnvironment, 'execution_environment')
if ee and not self.user.can_access(ExecutionEnvironment, 'read', ee):
if not self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role'):
return False
project = get_value(Project, 'project')
@@ -1600,10 +1615,8 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
if self.changes_are_non_sensitive(obj, data):
return True
if data.get('execution_environment'):
ee = get_object_from_data('execution_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
if not self.check_related('execution_environment', ExecutionEnvironment, data, obj=obj, role_field='read_role'):
return False
for required_field, cls in (('inventory', Inventory), ('project', Project)):
is_mandatory = True
@@ -1667,17 +1680,13 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
if not obj.organization:
return False
return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return self.user in obj.admin_role and self.user in sub_obj.use_role
return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
@check_superuser
def can_unattach(self, obj, sub_obj, relationship, *args, **kwargs):
if relationship == "instance_groups":
return self.can_attach(obj, sub_obj, relationship, *args, **kwargs)
if relationship == 'credentials' and isinstance(sub_obj, Credential):
return self.user in obj.admin_role
return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
return super(JobTemplateAccess, self).can_unattach(obj, sub_obj, relationship, *args, **kwargs)
class JobAccess(BaseAccess):
@@ -1824,7 +1833,7 @@ class SystemJobAccess(BaseAccess):
return False # no relaunching of system jobs
class JobLaunchConfigAccess(BaseAccess):
class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
"""
Launch configs must have permissions checked for
- relaunching
@@ -1832,63 +1841,69 @@ class JobLaunchConfigAccess(BaseAccess):
In order to create a new object with a copy of this launch config, I need:
- use access to related inventory (if present)
- read access to Execution Environment (if present), unless the specified ee is already in the template
- use role to many-related credentials (if any present)
- read access to many-related labels (if any present), unless the specified label is already in the template
- read access to many-related instance groups (if any present), unless the specified instance group is already in the template
"""
model = JobLaunchConfig
select_related = 'job'
prefetch_related = ('credentials', 'inventory')
def _unusable_creds_exist(self, qs):
return qs.exclude(pk__in=Credential._accessible_pk_qs(Credential, self.user, 'use_role')).exists()
M2M_CHECKS = {'credentials': Credential, 'labels': Label, 'instance_groups': InstanceGroup}
def has_credentials_access(self, obj):
# user has access if no related credentials exist that the user lacks use role for
return not self._unusable_creds_exist(obj.credentials)
def _related_filtered_queryset(self, cls):
if cls is Label:
return LabelAccess(self.user).filtered_queryset()
elif cls is InstanceGroup:
return InstanceGroupAccess(self.user).filtered_queryset()
else:
return cls._accessible_pk_qs(cls, self.user, 'use_role')
def has_obj_m2m_access(self, obj):
for relationship, cls in self.M2M_CHECKS.items():
if getattr(obj, relationship).exclude(pk__in=self._related_filtered_queryset(cls)).exists():
return False
return True
@check_superuser
def can_add(self, data, template=None):
# This is a special case, we don't check related many-to-many elsewhere
# launch RBAC checks use this
if 'credentials' in data and data['credentials'] or 'reference_obj' in data:
if 'reference_obj' in data:
prompted_cred_qs = data['reference_obj'].credentials.all()
else:
# If given model objects, only use the primary key from them
cred_pks = [cred.pk for cred in data['credentials']]
if template:
for cred in template.credentials.all():
if cred.pk in cred_pks:
cred_pks.remove(cred.pk)
prompted_cred_qs = Credential.objects.filter(pk__in=cred_pks)
if self._unusable_creds_exist(prompted_cred_qs):
if 'reference_obj' in data:
if not self.has_obj_m2m_access(data['reference_obj']):
return False
return self.check_related('inventory', Inventory, data, role_field='use_role')
else:
for relationship, cls in self.M2M_CHECKS.items():
if relationship in data and data[relationship]:
# If given model objects, only use the primary key from them
sub_obj_pks = [sub_obj.pk for sub_obj in data[relationship]]
if template:
for sub_obj in getattr(template, relationship).all():
if sub_obj.pk in sub_obj_pks:
sub_obj_pks.remove(sub_obj.pk)
if cls.objects.filter(pk__in=sub_obj_pks).exclude(pk__in=self._related_filtered_queryset(cls)).exists():
return False
return self.check_related('inventory', Inventory, data, role_field='use_role') and self.check_related(
'execution_environment', ExecutionEnvironment, data, role_field='read_role'
)
@check_superuser
def can_use(self, obj):
return self.check_related('inventory', Inventory, {}, obj=obj, role_field='use_role', mandatory=True) and self.has_credentials_access(obj)
return (
self.has_obj_m2m_access(obj)
and self.check_related('inventory', Inventory, {}, obj=obj, role_field='use_role', mandatory=True)
and self.check_related('execution_environment', ExecutionEnvironment, {}, obj=obj, role_field='read_role')
)
def can_change(self, obj, data):
return self.check_related('inventory', Inventory, data, obj=obj, role_field='use_role')
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if isinstance(sub_obj, Credential) and relationship == 'credentials':
return self.user in sub_obj.use_role
else:
raise NotImplementedError('Only credentials can be attached to launch configurations.')
def can_unattach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if isinstance(sub_obj, Credential) and relationship == 'credentials':
if skip_sub_obj_read_check:
return True
else:
return self.user in sub_obj.read_role
else:
raise NotImplementedError('Only credentials can be attached to launch configurations.')
return self.check_related('inventory', Inventory, data, obj=obj, role_field='use_role') and self.check_related(
'execution_environment', ExecutionEnvironment, data, obj=obj, role_field='read_role'
)
class WorkflowJobTemplateNodeAccess(BaseAccess):
class WorkflowJobTemplateNodeAccess(UnifiedCredentialsMixin, BaseAccess):
"""
I can see/use a WorkflowJobTemplateNode if I have read permission
to associated Workflow Job Template
@@ -1911,7 +1926,7 @@ class WorkflowJobTemplateNodeAccess(BaseAccess):
"""
model = WorkflowJobTemplateNode
prefetch_related = ('success_nodes', 'failure_nodes', 'always_nodes', 'unified_job_template', 'credentials', 'workflow_job_template')
prefetch_related = ('success_nodes', 'failure_nodes', 'always_nodes', 'unified_job_template', 'workflow_job_template')
def filtered_queryset(self):
return self.model.objects.filter(workflow_job_template__in=WorkflowJobTemplate.accessible_objects(self.user, 'read_role'))
@@ -1923,7 +1938,8 @@ class WorkflowJobTemplateNodeAccess(BaseAccess):
return (
self.check_related('workflow_job_template', WorkflowJobTemplate, data, mandatory=True)
and self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role')
and JobLaunchConfigAccess(self.user).can_add(data)
and self.check_related('inventory', Inventory, data, role_field='use_role')
and self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role')
)
def wfjt_admin(self, obj):
@@ -1932,17 +1948,14 @@ class WorkflowJobTemplateNodeAccess(BaseAccess):
else:
return self.user in obj.workflow_job_template.admin_role
def ujt_execute(self, obj):
def ujt_execute(self, obj, data=None):
if not obj.unified_job_template:
return True
return self.check_related('unified_job_template', UnifiedJobTemplate, {}, obj=obj, role_field='execute_role', mandatory=True)
return self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, role_field='execute_role', mandatory=True)
def can_change(self, obj, data):
if not data:
return True
# should not be able to edit the prompts if lacking access to UJT or WFJT
return self.ujt_execute(obj) and self.wfjt_admin(obj) and JobLaunchConfigAccess(self.user).can_change(obj, data)
return self.ujt_execute(obj, data=data) and self.wfjt_admin(obj) and JobLaunchConfigAccess(self.user).can_change(obj, data)
def can_delete(self, obj):
return self.wfjt_admin(obj)
@@ -1955,29 +1968,14 @@ class WorkflowJobTemplateNodeAccess(BaseAccess):
return True
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if not self.wfjt_admin(obj):
return False
if relationship == 'credentials':
# Need permission to related template to attach a credential
if not self.ujt_execute(obj):
return False
return JobLaunchConfigAccess(self.user).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
elif relationship in ('success_nodes', 'failure_nodes', 'always_nodes'):
return self.check_same_WFJT(obj, sub_obj)
else:
raise NotImplementedError('Relationship {} not understood for WFJT nodes.'.format(relationship))
if relationship in ('success_nodes', 'failure_nodes', 'always_nodes'):
return self.wfjt_admin(obj) and self.check_same_WFJT(obj, sub_obj)
return super().can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
def can_unattach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if not self.wfjt_admin(obj):
return False
if relationship == 'credentials':
if not self.ujt_execute(obj):
return False
return JobLaunchConfigAccess(self.user).can_unattach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
elif relationship in ('success_nodes', 'failure_nodes', 'always_nodes'):
return self.check_same_WFJT(obj, sub_obj)
else:
raise NotImplementedError('Relationship {} not understood for WFJT nodes.'.format(relationship))
def can_unattach(self, obj, sub_obj, relationship, data=None):
if relationship in ('success_nodes', 'failure_nodes', 'always_nodes'):
return self.wfjt_admin(obj)
return super().can_unattach(obj, sub_obj, relationship, data=None)
class WorkflowJobNodeAccess(BaseAccess):
@@ -2052,13 +2050,10 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'workflow_admin_role').exists()
if data.get('execution_environment'):
ee = get_object_from_data('execution_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True) and self.check_related(
'inventory', Inventory, data, role_field='use_role'
return bool(
self.check_related('organization', Organization, data, role_field='workflow_admin_role', mandatory=True)
and self.check_related('inventory', Inventory, data, role_field='use_role')
and self.check_related('execution_environment', ExecutionEnvironment, data, role_field='read_role')
)
def can_copy(self, obj):
@@ -2104,14 +2099,10 @@ class WorkflowJobTemplateAccess(NotificationAttachMixin, BaseAccess):
if self.user.is_superuser:
return True
if data and data.get('execution_environment'):
ee = get_object_from_data('execution_environment', ExecutionEnvironment, data)
if not self.user.can_access(ExecutionEnvironment, 'read', ee):
return False
return (
self.check_related('organization', Organization, data, role_field='workflow_admin_role', obj=obj)
and self.check_related('inventory', Inventory, data, role_field='use_role', obj=obj)
and self.check_related('execution_environment', ExecutionEnvironment, data, obj=obj, role_field='read_role')
and self.user in obj.admin_role
)
@@ -2518,7 +2509,7 @@ class UnifiedJobAccess(BaseAccess):
return super(UnifiedJobAccess, self).get_queryset().filter(workflowapproval__isnull=True)
class ScheduleAccess(BaseAccess):
class ScheduleAccess(UnifiedCredentialsMixin, BaseAccess):
"""
I can see a schedule if I can see its related unified job; I can create or update them if I have write access
"""
@@ -2559,12 +2550,6 @@ class ScheduleAccess(BaseAccess):
def can_delete(self, obj):
return self.can_change(obj, {})
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
return JobLaunchConfigAccess(self.user).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
def can_unattach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
return JobLaunchConfigAccess(self.user).can_unattach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
class NotificationTemplateAccess(BaseAccess):
"""


@@ -12,12 +12,11 @@ from django.contrib.sessions.models import Session
from django.utils.timezone import now, timedelta
from django.utils.translation import gettext_lazy as _
from psycopg2.errors import UntranslatableCharacter
from awx.conf.license import get_license
from awx.main.utils import get_awx_version, camelcase_to_underscore, datetime_hook
from awx.main import models
from awx.main.analytics import register
from awx.main.scheduler.task_manager_models import TaskManagerInstances
"""
This module is used to define metrics collected by awx.main.analytics.gather()
@@ -131,7 +130,7 @@ def config(since, **kwargs):
}
@register('counts', '1.1', description=_('Counts of objects such as organizations, inventories, and projects'))
@register('counts', '1.2', description=_('Counts of objects such as organizations, inventories, and projects'))
def counts(since, **kwargs):
counts = {}
for cls in (
@@ -174,6 +173,13 @@ def counts(since, **kwargs):
.count()
)
counts['pending_jobs'] = models.UnifiedJob.objects.exclude(launch_type='sync').filter(status__in=('pending',)).count()
if connection.vendor == 'postgresql':
with connection.cursor() as cursor:
cursor.execute(f"select count(*) from pg_stat_activity where datname=\'{connection.settings_dict['NAME']}\'")
counts['database_connections'] = cursor.fetchone()[0]
else:
# We should be using postgresql, but if that ever changes we should update the value below
counts['database_connections'] = 1
return counts
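
For reference, a hypothetical excerpt of the counts() payload after this change (values invented):

# Illustrative shape of the new key in the counts payload
counts_example = {
    'pending_jobs': 0,
    'database_connections': 12,  # from pg_stat_activity on postgresql, hard-coded to 1 otherwise
}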
@@ -230,25 +236,25 @@ def projects_by_scm_type(since, **kwargs):
@register('instance_info', '1.2', description=_('Cluster topology and capacity'))
def instance_info(since, include_hostnames=False, **kwargs):
info = {}
instances = models.Instance.objects.values_list('hostname').values(
'uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'hostname', 'enabled'
)
for instance in instances:
consumed_capacity = sum(x.task_impact for x in models.UnifiedJob.objects.filter(execution_node=instance['hostname'], status__in=('running', 'waiting')))
# Use same method that the TaskManager does to compute consumed capacity without querying all running jobs for each Instance
active_tasks = models.UnifiedJob.objects.filter(status__in=['running', 'waiting']).only('task_impact', 'controller_node', 'execution_node')
tm_instances = TaskManagerInstances(active_tasks, instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled'])
for tm_instance in tm_instances.instances_by_hostname.values():
instance = tm_instance.obj
instance_info = {
'uuid': instance['uuid'],
'version': instance['version'],
'capacity': instance['capacity'],
'cpu': instance['cpu'],
'memory': instance['memory'],
'managed_by_policy': instance['managed_by_policy'],
'enabled': instance['enabled'],
'consumed_capacity': consumed_capacity,
'remaining_capacity': instance['capacity'] - consumed_capacity,
'uuid': instance.uuid,
'version': instance.version,
'capacity': instance.capacity,
'cpu': instance.cpu,
'memory': instance.memory,
'managed_by_policy': instance.managed_by_policy,
'enabled': instance.enabled,
'consumed_capacity': tm_instance.consumed_capacity,
'remaining_capacity': instance.capacity - tm_instance.consumed_capacity,
}
if include_hostnames is True:
instance_info['hostname'] = instance['hostname']
info[instance['uuid']] = instance_info
instance_info['hostname'] = instance.hostname
info[instance.uuid] = instance_info
return info
@@ -378,10 +384,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
WHERE ({tbl}.{where_column} > '{since.isoformat()}' AND {tbl}.{where_column} <= '{until.isoformat()}')) TO STDOUT WITH CSV HEADER'''
return query
try:
return _copy_table(table='events', query=query(f"{tbl}.event_data::jsonb"), path=full_path)
except UntranslatableCharacter:
return _copy_table(table='events', query=query(f"replace({tbl}.event_data::text, '\\u0000', '')::jsonb"), path=full_path)
return _copy_table(table='events', query=query(fr"replace({tbl}.event_data, '\u', '\u005cu')::jsonb"), path=full_path)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
@@ -394,7 +397,7 @@ def events_table_partitioned_modified(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, 'main_jobevent', 'modified', project_job_created=True, **kwargs)
@register('unified_jobs_table', '1.3', format='csv', description=_('Data on jobs run'), expensive=four_hour_slicing)
@register('unified_jobs_table', '1.4', format='csv', description=_('Data on jobs run'), expensive=four_hour_slicing)
def unified_jobs_table(since, full_path, until, **kwargs):
unified_job_query = '''COPY (SELECT main_unifiedjob.id,
main_unifiedjob.polymorphic_ctype_id,
@@ -420,7 +423,8 @@ def unified_jobs_table(since, full_path, until, **kwargs):
main_unifiedjob.job_explanation,
main_unifiedjob.instance_group_id,
main_unifiedjob.installed_collections,
main_unifiedjob.ansible_version
main_unifiedjob.ansible_version,
main_job.forks
FROM main_unifiedjob
JOIN django_content_type ON main_unifiedjob.polymorphic_ctype_id = django_content_type.id
LEFT JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id


@@ -3,6 +3,7 @@ from prometheus_client import CollectorRegistry, Gauge, Info, generate_latest
from awx.conf.license import get_license
from awx.main.utils import get_awx_version
from awx.main.models import UnifiedJob
from awx.main.analytics.collectors import (
counts,
instance_info,
@@ -126,6 +127,8 @@ def metrics():
LICENSE_INSTANCE_TOTAL = Gauge('awx_license_instance_total', 'Total number of managed hosts provided by your license', registry=REGISTRY)
LICENSE_INSTANCE_FREE = Gauge('awx_license_instance_free', 'Number of remaining managed hosts provided by your license', registry=REGISTRY)
DATABASE_CONNECTIONS = Gauge('awx_database_connections_total', 'Number of connections to database', registry=REGISTRY)
license_info = get_license()
SYSTEM_INFO.info(
{
@@ -163,10 +166,13 @@ def metrics():
USER_SESSIONS.labels(type='user').set(current_counts['active_user_sessions'])
USER_SESSIONS.labels(type='anonymous').set(current_counts['active_anonymous_sessions'])
DATABASE_CONNECTIONS.set(current_counts['database_connections'])
all_job_data = job_counts(None)
statuses = all_job_data.get('status', {})
for status, value in statuses.items():
STATUS.labels(status=status).set(value)
states = set(dict(UnifiedJob.STATUS_CHOICES).keys()) - set(['new'])
for state in states:
STATUS.labels(status=state).set(statuses.get(state, 0))
RUNNING_JOBS.set(current_counts['running_jobs'])
PENDING_JOBS.set(current_counts['pending_jobs'])


@@ -8,7 +8,7 @@ from django.apps import apps
from awx.main.consumers import emit_channel_notification
root_key = 'awx_metrics'
logger = logging.getLogger('awx.main.wsbroadcast')
logger = logging.getLogger('awx.main.analytics')
class BaseM:
@@ -16,16 +16,22 @@ class BaseM:
self.field = field
self.help_text = help_text
self.current_value = 0
self.metric_has_changed = False
def clear_value(self, conn):
def reset_value(self, conn):
conn.hset(root_key, self.field, 0)
self.current_value = 0
def inc(self, value):
self.current_value += value
self.metric_has_changed = True
def set(self, value):
self.current_value = value
self.metric_has_changed = True
def get(self):
return self.current_value
def decode(self, conn):
value = conn.hget(root_key, self.field)
@@ -34,7 +40,9 @@ class BaseM:
def to_prometheus(self, instance_data):
output_text = f"# HELP {self.field} {self.help_text}\n# TYPE {self.field} gauge\n"
for instance in instance_data:
output_text += f'{self.field}{{node="{instance}"}} {instance_data[instance][self.field]}\n'
if self.field in instance_data[instance]:
# on upgrade, if there are stale instances, we can end up with issues where new metrics are not present
output_text += f'{self.field}{{node="{instance}"}} {instance_data[instance][self.field]}\n'
return output_text
@@ -46,8 +54,10 @@ class FloatM(BaseM):
return 0.0
def store_value(self, conn):
conn.hincrbyfloat(root_key, self.field, self.current_value)
self.current_value = 0
if self.metric_has_changed:
conn.hincrbyfloat(root_key, self.field, self.current_value)
self.current_value = 0
self.metric_has_changed = False
class IntM(BaseM):
@@ -58,8 +68,10 @@ class IntM(BaseM):
return 0
def store_value(self, conn):
conn.hincrby(root_key, self.field, self.current_value)
self.current_value = 0
if self.metric_has_changed:
conn.hincrby(root_key, self.field, self.current_value)
self.current_value = 0
self.metric_has_changed = False
class SetIntM(BaseM):
@@ -70,10 +82,9 @@ class SetIntM(BaseM):
return 0
def store_value(self, conn):
# do not set value if it has not changed since last time this was called
if self.current_value is not None:
if self.metric_has_changed:
conn.hset(root_key, self.field, self.current_value)
self.current_value = None
self.metric_has_changed = False
class SetFloatM(SetIntM):
@@ -94,13 +105,13 @@ class HistogramM(BaseM):
self.sum = IntM(field + '_sum', '')
super(HistogramM, self).__init__(field, help_text)
def clear_value(self, conn):
def reset_value(self, conn):
conn.hset(root_key, self.field, 0)
self.inf.clear_value(conn)
self.sum.clear_value(conn)
self.inf.reset_value(conn)
self.sum.reset_value(conn)
for b in self.buckets_to_keys.values():
b.clear_value(conn)
super(HistogramM, self).clear_value(conn)
b.reset_value(conn)
super(HistogramM, self).reset_value(conn)
def observe(self, value):
for b in self.buckets:
@@ -136,7 +147,7 @@ class HistogramM(BaseM):
class Metrics:
def __init__(self, auto_pipe_execute=True, instance_name=None):
def __init__(self, auto_pipe_execute=False, instance_name=None):
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
self.conn = redis.Redis.from_url(settings.BROKER_URL)
self.last_pipe_execute = time.time()
@@ -152,8 +163,14 @@ class Metrics:
Instance = apps.get_model('main', 'Instance')
if instance_name:
self.instance_name = instance_name
elif settings.IS_TESTING():
self.instance_name = "awx_testing"
else:
self.instance_name = Instance.objects.me().hostname
try:
self.instance_name = Instance.objects.me().hostname
except Exception as e:
self.instance_name = settings.CLUSTER_HOST_ID
logger.info(f'Instance {self.instance_name} seems to be unregistered, error: {e}')
# metric name, help_text
METRICSLIST = [
@@ -161,15 +178,39 @@ class Metrics:
IntM('callback_receiver_events_popped_redis', 'Number of events popped from redis'),
IntM('callback_receiver_events_in_memory', 'Current number of events in memory (in transfer from redis to db)'),
IntM('callback_receiver_batch_events_errors', 'Number of times batch insertion failed'),
FloatM('callback_receiver_events_insert_db_seconds', 'Time spent saving events to database'),
FloatM('callback_receiver_events_insert_db_seconds', 'Total time spent saving events to database'),
IntM('callback_receiver_events_insert_db', 'Number of events batch inserted into database'),
IntM('callback_receiver_events_broadcast', 'Number of events broadcast to other control plane nodes'),
HistogramM(
'callback_receiver_batch_events_insert_db', 'Number of events batch inserted into database', settings.SUBSYSTEM_METRICS_BATCH_INSERT_BUCKETS
),
SetFloatM('callback_receiver_event_processing_avg_seconds', 'Average processing time per event per callback receiver batch'),
FloatM('subsystem_metrics_pipe_execute_seconds', 'Time spent saving metrics to redis'),
IntM('subsystem_metrics_pipe_execute_calls', 'Number of calls to pipe_execute'),
FloatM('subsystem_metrics_send_metrics_seconds', 'Time spent sending metrics to other nodes'),
SetFloatM('task_manager_get_tasks_seconds', 'Time spent in loading tasks from db'),
SetFloatM('task_manager_start_task_seconds', 'Time spent starting task'),
SetFloatM('task_manager_process_running_tasks_seconds', 'Time spent processing running tasks'),
SetFloatM('task_manager_process_pending_tasks_seconds', 'Time spent processing pending tasks'),
SetFloatM('task_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('task_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('task_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('task_manager_tasks_started', 'Number of tasks started'),
SetIntM('task_manager_running_processed', 'Number of running tasks processed'),
SetIntM('task_manager_pending_processed', 'Number of pending tasks processed'),
SetIntM('task_manager_tasks_blocked', 'Number of tasks blocked from running'),
SetFloatM('task_manager_commit_seconds', 'Time spent in db transaction, including on_commit calls'),
SetFloatM('dependency_manager_get_tasks_seconds', 'Time spent loading pending tasks from db'),
SetFloatM('dependency_manager_generate_dependencies_seconds', 'Time spent generating dependencies for pending tasks'),
SetFloatM('dependency_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('dependency_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('dependency_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetIntM('dependency_manager_pending_processed', 'Number of pending tasks processed'),
SetFloatM('workflow_manager__schedule_seconds', 'Time spent in running the entire _schedule'),
IntM('workflow_manager__schedule_calls', 'Number of calls to _schedule, after lock is acquired'),
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
]
# turn metric list into dictionary with the metric name as a key
self.METRICS = {}
@@ -179,29 +220,39 @@ class Metrics:
# track last time metrics were sent to other nodes
self.previous_send_metrics = SetFloatM('send_metrics_time', 'Timestamp of previous send_metrics call')
def clear_values(self):
def reset_values(self):
# intended to be called once on app startup to reset all metric
# values to 0
for m in self.METRICS.values():
m.clear_value(self.conn)
m.reset_value(self.conn)
self.metrics_have_changed = True
self.conn.delete(root_key + "_lock")
for m in self.conn.scan_iter(root_key + '_instance_*'):
self.conn.delete(m)
def inc(self, field, value):
if value != 0:
self.METRICS[field].inc(value)
self.metrics_have_changed = True
if self.auto_pipe_execute is True and self.should_pipe_execute() is True:
if self.auto_pipe_execute is True:
self.pipe_execute()
def set(self, field, value):
self.METRICS[field].set(value)
self.metrics_have_changed = True
if self.auto_pipe_execute is True and self.should_pipe_execute() is True:
if self.auto_pipe_execute is True:
self.pipe_execute()
def get(self, field):
return self.METRICS[field].get()
def decode(self, field):
return self.METRICS[field].decode(self.conn)
def observe(self, field, value):
self.METRICS[field].observe(value)
self.metrics_have_changed = True
if self.auto_pipe_execute is True and self.should_pipe_execute() is True:
if self.auto_pipe_execute is True:
self.pipe_execute()
def serialize_local_metrics(self):
@@ -249,8 +300,8 @@ class Metrics:
def send_metrics(self):
# more than one thread could be calling this at the same time, so should
# get acquire redis lock before sending metrics
lock = self.conn.lock(root_key + '_lock', thread_local=False)
# acquire redis lock before sending metrics
lock = self.conn.lock(root_key + '_lock')
if not lock.acquire(blocking=False):
return
try:
@@ -266,7 +317,12 @@ class Metrics:
self.previous_send_metrics.set(current_time)
self.previous_send_metrics.store_value(self.conn)
finally:
lock.release()
try:
lock.release()
except Exception as exc:
# After system failures, we might throw redis.exceptions.LockNotOwnedError
# this is to avoid printing a Traceback, and importantly, to avoid raising an exception into the parent context
logger.warning(f'Error releasing subsystem metrics redis lock, error: {str(exc)}')
def load_other_metrics(self, request):
# data received from other nodes are stored in their own keys
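
With auto_pipe_execute now defaulting to False, callers are expected to batch updates and flush explicitly. A short usage sketch under that assumption (instance_name passed to skip the Instance lookup):

import awx.main.analytics.subsystem_metrics as s_metrics

m = s_metrics.Metrics(instance_name='awx_testing')
m.inc('callback_receiver_events_popped_redis', 10)
m.set('task_manager_tasks_started', 3)
if m.should_pipe_execute():  # same buffering guard the callback receiver applies
    m.pipe_execute()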

View File

@@ -446,7 +446,7 @@ register(
label=_('Default Job Idle Timeout'),
help_text=_(
'If no output is detected from ansible in this number of seconds the execution will be terminated. '
'Use value of 0 to used default idle_timeout is 600s.'
'Use value of 0 to indicate that no idle timeout should be imposed.'
),
category=_('Jobs'),
category_slug='jobs',


@@ -4,6 +4,7 @@ import select
from contextlib import contextmanager
from django.conf import settings
from django.db import connection as pg_connection
NOT_READY = ([], [], [])
@@ -15,7 +16,6 @@ def get_local_queuename():
class PubSub(object):
def __init__(self, conn):
assert conn.autocommit, "Connection must be in autocommit mode."
self.conn = conn
def listen(self, channel):
@@ -31,6 +31,9 @@ class PubSub(object):
cur.execute('SELECT pg_notify(%s, %s);', (channel, payload))
def events(self, select_timeout=5, yield_timeouts=False):
if not self.conn.autocommit:
raise RuntimeError('Listening for events can only be done in autocommit mode')
while True:
if select.select([self.conn], [], [], select_timeout) == NOT_READY:
if yield_timeouts:
@@ -45,11 +48,32 @@ class PubSub(object):
@contextmanager
def pg_bus_conn():
conf = settings.DATABASES['default']
conn = psycopg2.connect(dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf.get("OPTIONS", {}))
# Django connection.cursor().connection doesn't have autocommit=True on
conn.set_session(autocommit=True)
def pg_bus_conn(new_connection=False):
'''
Any listeners probably want to establish a new database connection,
separate from the Django connection used for queries, because that will prevent
losing connection to the channel whenever a .close() happens.
Any publishers probably want to use the existing connection
so that messages follow postgres transaction rules
https://www.postgresql.org/docs/current/sql-notify.html
'''
if new_connection:
conf = settings.DATABASES['default']
conn = psycopg2.connect(
dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf.get("OPTIONS", {})
)
# Django connection.cursor().connection doesn't have autocommit=True on by default
conn.set_session(autocommit=True)
else:
if pg_connection.connection is None:
pg_connection.connect()
if pg_connection.connection is None:
raise RuntimeError('Unexpectedly could not connect to postgres for pg_notify actions')
conn = pg_connection.connection
pubsub = PubSub(conn)
yield pubsub
conn.close()
if new_connection:
conn.close()
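
A usage sketch of the two modes described in the docstring; the channel name and handle() are illustrative:

from awx.main.dispatch import pg_bus_conn

# Listener: dedicated autocommit connection so a Django .close() cannot drop the channel
with pg_bus_conn(new_connection=True) as conn:
    conn.listen('my_channel')
    for event in conn.events(select_timeout=5, yield_timeouts=True):
        if event is None:
            continue  # select timed out; keep waiting
        handle(event)  # hypothetical handler

# Publisher: reuse the Django connection so NOTIFY follows transaction rules
with pg_bus_conn() as conn:
    conn.notify('my_channel', 'payload')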


@@ -37,18 +37,24 @@ class Control(object):
def running(self, *args, **kwargs):
return self.control_with_reply('running', *args, **kwargs)
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
def control_with_reply(self, command, timeout=5):
def control_with_reply(self, command, timeout=5, extra_data=None):
logger.warning('checking {} {} for {}'.format(self.service, command, self.queuename))
reply_queue = Control.generate_reply_queue_name()
self.result = None
with pg_bus_conn() as conn:
with pg_bus_conn(new_connection=True) as conn:
conn.listen(reply_queue)
conn.notify(self.queuename, json.dumps({'control': command, 'reply_to': reply_queue}))
send_data = {'control': command, 'reply_to': reply_queue}
if extra_data:
send_data.update(extra_data)
conn.notify(self.queuename, json.dumps(send_data))
for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
if reply is None:
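
A hypothetical call site for the new cancel control message, assuming the usual Control('dispatcher') construction; the task id is invented. The reply is the list of uuids a worker acknowledged and signaled:

canceled = Control('dispatcher').cancel(task_ids=['3f8ab2c0'], timeout=5)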


@@ -16,13 +16,14 @@ from queue import Full as QueueFull, Empty as QueueEmpty
from django.conf import settings
from django.db import connection as django_connection, connections
from django.core.cache import cache as django_cache
from django.utils.timezone import now as tz_now
from django_guid import set_guid
from jinja2 import Template
import psutil
from awx.main.models import UnifiedJob
from awx.main.dispatch import reaper
from awx.main.utils.common import convert_mem_str_to_bytes, get_mem_effective_capacity
from awx.main.utils.common import convert_mem_str_to_bytes, get_mem_effective_capacity, log_excess_runtime
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -328,12 +329,16 @@ class AutoscalePool(WorkerPool):
# Get same number as max forks based on memory, this function takes memory as bytes
self.max_workers = get_mem_effective_capacity(total_memory_gb * 2**30)
# add magic prime number of extra workers to ensure
# we have a few extra workers to run the heartbeat
self.max_workers += 7
# max workers can't be less than min_workers
self.max_workers = max(self.min_workers, self.max_workers)
def debug(self, *args, **kwargs):
self.cleanup()
return super(AutoscalePool, self).debug(*args, **kwargs)
# the task manager enforces settings.TASK_MANAGER_TIMEOUT on its own
# but if the task takes longer than the time defined here, we will force it to stop here
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
@property
def should_grow(self):
@@ -351,6 +356,7 @@ class AutoscalePool(WorkerPool):
def debug_meta(self):
return 'min={} max={}'.format(self.min_workers, self.max_workers)
@log_excess_runtime(logger)
def cleanup(self):
"""
Perform some internal account and cleanup. This is run on
@@ -359,8 +365,6 @@ class AutoscalePool(WorkerPool):
1. Discover worker processes that exited, and recover messages they
were handling.
2. Clean up unnecessary, idle workers.
3. Check to see if the database says this node is running any tasks
that aren't actually running. If so, reap them.
IMPORTANT: this function is one of the few places in the dispatcher
(aside from setting lookups) where we talk to the database. As such,
@@ -401,13 +405,15 @@ class AutoscalePool(WorkerPool):
# the task manager to never do more work
current_task = w.current_task
if current_task and isinstance(current_task, dict):
if current_task.get('task', '').endswith('tasks.run_task_manager'):
endings = ['tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager']
current_task_name = current_task.get('task', '')
if any(current_task_name.endswith(e) for e in endings):
if 'started' not in current_task:
w.managed_tasks[current_task['uuid']]['started'] = time.time()
age = time.time() - current_task['started']
w.managed_tasks[current_task['uuid']]['age'] = age
if age > (60 * 5):
logger.error(f'run_task_manager has held the advisory lock for >5m, sending SIGTERM to {w.pid}') # noqa
if age > self.task_manager_timeout:
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGTERM to {w.pid}')
os.kill(w.pid, signal.SIGTERM)
for m in orphaned:
@@ -417,13 +423,17 @@ class AutoscalePool(WorkerPool):
idx = random.choice(range(len(self.workers)))
self.write(idx, m)
# if the database says a job is running on this node, but it's *not*,
# then reap it
running_uuids = []
for worker in self.workers:
worker.calculate_managed_tasks()
running_uuids.extend(list(worker.managed_tasks.keys()))
reaper.reap(excluded_uuids=running_uuids)
def add_bind_kwargs(self, body):
bind_kwargs = body.pop('bind_kwargs', [])
body.setdefault('kwargs', {})
if 'dispatch_time' in bind_kwargs:
body['kwargs']['dispatch_time'] = tz_now().isoformat()
if 'worker_tasks' in bind_kwargs:
worker_tasks = {}
for worker in self.workers:
worker.calculate_managed_tasks()
worker_tasks[worker.pid] = list(worker.managed_tasks.keys())
body['kwargs']['worker_tasks'] = worker_tasks
def up(self):
if self.full:
@@ -438,6 +448,8 @@ class AutoscalePool(WorkerPool):
if 'guid' in body:
set_guid(body['guid'])
try:
if isinstance(body, dict) and body.get('bind_kwargs'):
self.add_bind_kwargs(body)
# when the cluster heartbeat occurs, clean up internally
if isinstance(body, dict) and 'cluster_node_heartbeat' in body['task']:
self.cleanup()
@@ -452,6 +464,10 @@ class AutoscalePool(WorkerPool):
w.put(body)
break
else:
task_name = 'unknown'
if isinstance(body, dict):
task_name = body.get('task')
logger.warn(f'Workers maxed, queuing {task_name}, load: {sum(len(w.managed_tasks) for w in self.workers)} / {len(self.workers)}')
return super(AutoscalePool, self).write(preferred_queue, body)
except Exception:
for conn in connections.all():


@@ -2,6 +2,7 @@ import inspect
import logging
import sys
import json
import time
from uuid import uuid4
from django.conf import settings
@@ -49,13 +50,21 @@ class task:
@task(queue='tower_broadcast')
def announce():
print("Run this everywhere!")
# The special parameter bind_kwargs tells the main dispatcher process to add certain kwargs
@task(bind_kwargs=['dispatch_time'])
def print_time(dispatch_time=None):
print(f"Time I was dispatched: {dispatch_time}")
"""
def __init__(self, queue=None):
def __init__(self, queue=None, bind_kwargs=None):
self.queue = queue
self.bind_kwargs = bind_kwargs
def __call__(self, fn=None):
queue = self.queue
bind_kwargs = self.bind_kwargs
class PublisherMixin(object):
@@ -75,10 +84,12 @@ class task:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name}
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
guid = get_guid()
if guid:
obj['guid'] = guid
if bind_kwargs:
obj['bind_kwargs'] = bind_kwargs
obj.update(**kw)
if callable(queue):
queue = queue()
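
Building on the docstring example, a sketch of the other bind kwarg wired up in the pool changes above, worker_tasks; the module path for the task decorator is assumed:

from awx.main.dispatch.publish import task

@task(bind_kwargs=['worker_tasks'])
def report_worker_load(worker_tasks=None):
    # worker_tasks maps each worker pid to the uuids of its managed tasks
    for pid, uuids in (worker_tasks or {}).items():
        print(pid, len(uuids))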


@@ -2,6 +2,7 @@ from datetime import timedelta
import logging
from django.db.models import Q
from django.conf import settings
from django.utils.timezone import now as tz_now
from django.contrib.contenttypes.models import ContentType
@@ -10,28 +11,76 @@ from awx.main.models import Instance, UnifiedJob, WorkflowJob
logger = logging.getLogger('awx.main.dispatch')
def reap_job(j, status):
if UnifiedJob.objects.get(id=j.id).status not in ('running', 'waiting'):
def startup_reaping():
"""
If this particular instance is starting, then we know that any running jobs are invalid
so we will reap those jobs as a special action here
"""
try:
me = Instance.objects.me()
except RuntimeError as e:
logger.warning(f'Local instance is not registered, not running startup reaper: {e}')
return
jobs = UnifiedJob.objects.filter(status='running', controller_node=me.hostname)
job_ids = []
for j in jobs:
job_ids.append(j.id)
reap_job(
j,
'failed',
job_explanation='Task was marked as running at system start up. The system must not have shut down properly, so it has been marked as failed.',
)
if job_ids:
logger.error(f'Unified jobs {job_ids} were reaped on dispatch startup')
def reap_job(j, status, job_explanation=None):
j.refresh_from_db(fields=['status', 'job_explanation'])
status_before = j.status
if status_before not in ('running', 'waiting'):
# just in case, don't reap jobs that aren't running
return
j.status = status
j.start_args = '' # blank field to remove encrypted passwords
j.job_explanation += ' '.join(
(
'Task was marked as running but was not present in',
'the job queue, so it has been marked as failed.',
)
)
if j.job_explanation:
j.job_explanation += ' ' # Separate messages for readability
if job_explanation is None:
j.job_explanation += 'Task was marked as running but was not present in the job queue, so it has been marked as failed.'
else:
j.job_explanation += job_explanation
j.save(update_fields=['status', 'start_args', 'job_explanation'])
if hasattr(j, 'send_notification_templates'):
j.send_notification_templates('failed')
j.websocket_emit_status(status)
logger.error('{} is no longer running; reaping'.format(j.log_format))
logger.error(f'{j.log_format} is no longer {status_before}; reaping')
def reap(instance=None, status='failed', excluded_uuids=[]):
def reap_waiting(instance=None, status='failed', job_explanation=None, grace_period=None, excluded_uuids=None, ref_time=None):
"""
Reap all jobs in waiting|running for this instance.
Reap all jobs in waiting for this instance.
"""
if grace_period is None:
grace_period = settings.JOB_WAITING_GRACE_PERIOD + settings.TASK_MANAGER_TIMEOUT
me = instance
if me is None:
try:
me = Instance.objects.me()
except RuntimeError as e:
logger.warning(f'Local instance is not registered, not running reaper: {e}')
return
if ref_time is None:
ref_time = tz_now()
jobs = UnifiedJob.objects.filter(status='waiting', modified__lte=ref_time - timedelta(seconds=grace_period), controller_node=me.hostname)
if excluded_uuids:
jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
for j in jobs:
reap_job(j, status, job_explanation=job_explanation)
def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None):
"""
Reap all jobs in running for this instance.
"""
me = instance
if me is None:
@@ -40,12 +89,11 @@ def reap(instance=None, status='failed', excluded_uuids=[]):
except RuntimeError as e:
logger.warning(f'Local instance is not registered, not running reaper: {e}')
return
now = tz_now()
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
jobs = UnifiedJob.objects.filter(
(Q(status='running') | Q(status='waiting', modified__lte=now - timedelta(seconds=60)))
& (Q(execution_node=me.hostname) | Q(controller_node=me.hostname))
& ~Q(polymorphic_ctype_id=workflow_ctype_id)
).exclude(celery_task_id__in=excluded_uuids)
Q(status='running') & (Q(execution_node=me.hostname) | Q(controller_node=me.hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
)
if excluded_uuids:
jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
for j in jobs:
reap_job(j, status)
reap_job(j, status, job_explanation=job_explanation)
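
A sketch of how the three entry points divide the work; the call sites are assumed from the pool changes above:

from awx.main.dispatch import reaper

reaper.startup_reaping()  # at dispatcher boot: fail jobs still marked running on this node
running_uuids = ['3f8ab2c0']  # illustrative: uuids the worker pool is actually executing
reaper.reap_waiting(excluded_uuids=running_uuids)  # stale 'waiting' jobs past the grace period
reaper.reap(excluded_uuids=running_uuids)          # 'running' jobs no worker is executing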


@@ -17,6 +17,7 @@ from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -62,7 +63,7 @@ class AWXConsumerBase(object):
def control(self, body):
logger.warning(f'Received control signal:\n{body}')
control = body.get('control')
if control in ('status', 'running'):
if control in ('status', 'running', 'cancel'):
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
@@ -71,6 +72,17 @@ class AWXConsumerBase(object):
for worker in self.pool.workers:
worker.calculate_managed_tasks()
msg.extend(worker.managed_tasks.keys())
elif control == 'cancel':
msg = []
task_ids = set(body['task_ids'])
for worker in self.pool.workers:
task = worker.current_task
if task and task['uuid'] in task_ids:
logger.warn(f'Sending SIGTERM to task id={task["uuid"]}, task={task.get("task")}, args={task.get("args")}')
os.kill(worker.pid, signal.SIGTERM)
msg.append(task['uuid'])
if task_ids and not msg:
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
with pg_bus_conn() as conn:
conn.notify(reply_queue, json.dumps(msg))
@@ -81,6 +93,9 @@ class AWXConsumerBase(object):
logger.error('unrecognized control message: {}'.format(control))
def process_task(self, body):
if isinstance(body, dict):
body['time_ack'] = time.time()
if 'control' in body:
try:
return self.control(body)
@@ -101,6 +116,7 @@ class AWXConsumerBase(object):
self.total_messages += 1
self.record_statistics()
@log_excess_runtime(logger)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
try:
@@ -149,7 +165,7 @@ class AWXConsumerPG(AWXConsumerBase):
while True:
try:
with pg_bus_conn() as conn:
with pg_bus_conn(new_connection=True) as conn:
for queue in self.queues:
conn.listen(queue)
if init is False:
@@ -169,8 +185,9 @@ class AWXConsumerPG(AWXConsumerBase):
logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s")
self.pg_down_time = time.time()
self.pg_is_down = True
if time.time() - self.pg_down_time > self.pg_max_wait:
logger.warning(f"Postgres event consumer has not recovered in {self.pg_max_wait} s, exiting")
current_downtime = time.time() - self.pg_down_time
if current_downtime > self.pg_max_wait:
logger.exception(f"Postgres event consumer has not recovered in {current_downtime} s, exiting")
raise
# Wait for a second before next attempt, but still listen for any shutdown signals
for i in range(10):
@@ -179,6 +196,10 @@ class AWXConsumerPG(AWXConsumerBase):
time.sleep(0.1)
for conn in db.connections.all():
conn.close_if_unusable_or_obsolete()
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in dispatcher main loop')
raise
class BaseWorker(object):


@@ -4,10 +4,12 @@ import os
import signal
import time
import traceback
import datetime
from django.conf import settings
from django.utils.functional import cached_property
from django.utils.timezone import now as tz_now
from django.db import DatabaseError, OperationalError, connection as django_connection
from django.db import DatabaseError, OperationalError, transaction, connection as django_connection
from django.db.utils import InterfaceError, InternalError
from django_guid import set_guid
@@ -16,8 +18,8 @@ import psutil
import redis
from awx.main.consumers import emit_channel_notification
from awx.main.models import JobEvent, AdHocCommandEvent, ProjectUpdateEvent, InventoryUpdateEvent, SystemJobEvent, UnifiedJob, Job
from awx.main.tasks.system import handle_success_and_failure_notifications
from awx.main.models import JobEvent, AdHocCommandEvent, ProjectUpdateEvent, InventoryUpdateEvent, SystemJobEvent, UnifiedJob
from awx.main.constants import ACTIVE_STATES
from awx.main.models.events import emit_event_detail
from awx.main.utils.profiling import AWXProfiler
import awx.main.analytics.subsystem_metrics as s_metrics
@@ -26,6 +28,32 @@ from .base import BaseWorker
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
def job_stats_wrapup(job_identifier, event=None):
"""Fill in the unified job host_status_counts, fire off notifications if needed"""
try:
# empty dict (versus default of None) can still indicate that events have been processed
# for job types like system jobs, and jobs with no hosts matched
host_status_counts = {}
if event:
host_status_counts = event.get_host_status_counts()
# Update host_status_counts while holding the row lock
with transaction.atomic():
uj = UnifiedJob.objects.select_for_update().get(pk=job_identifier)
uj.host_status_counts = host_status_counts
uj.save(update_fields=['host_status_counts'])
uj.log_lifecycle("stats_wrapup_finished")
# If the status was a finished state before this update was made, send notifications
# If not, we will send notifications when the status changes
if uj.status not in ACTIVE_STATES:
uj.send_notification_templates('succeeded' if uj.status == 'successful' else 'failed')
except Exception:
logger.exception('Worker failed to save stats or emit notifications: Job {}'.format(job_identifier))
class CallbackBrokerWorker(BaseWorker):
"""
A worker implementation that deserializes callback event data and persists
@@ -44,7 +72,6 @@ class CallbackBrokerWorker(BaseWorker):
def __init__(self):
self.buff = {}
self.pid = os.getpid()
self.redis = redis.Redis.from_url(settings.BROKER_URL)
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.queue_pop = 0
@@ -53,6 +80,11 @@ class CallbackBrokerWorker(BaseWorker):
for key in self.redis.keys('awx_callback_receiver_statistics_*'):
self.redis.delete(key)
@cached_property
def pid(self):
"""This needs to be obtained after forking, or else it will give the parent process"""
return os.getpid()
def read(self, queue):
try:
res = self.redis.blpop(self.queue_name, timeout=1)
@@ -120,32 +152,49 @@ class CallbackBrokerWorker(BaseWorker):
metrics_singular_events_saved = 0
metrics_events_batch_save_errors = 0
metrics_events_broadcast = 0
metrics_events_missing_created = 0
metrics_total_job_event_processing_seconds = datetime.timedelta(seconds=0)
for cls, events in self.buff.items():
logger.debug(f'{cls.__name__}.objects.bulk_create({len(events)})')
for e in events:
e.modified = now # this can be set before created because now is set above on line 149
if not e.created:
e.created = now
e.modified = now
metrics_events_missing_created += 1
else: # only calculate the seconds if the created time already has been set
metrics_total_job_event_processing_seconds += e.modified - e.created
metrics_duration_to_save = time.perf_counter()
try:
cls.objects.bulk_create(events)
metrics_bulk_events_saved += len(events)
except Exception:
except Exception as exc:
logger.warning(f'Error in events bulk_create, will try saving individually (up to 5 consecutive errors): {str(exc)}')
# if an exception occurs, we should re-attempt to save the
# events one-by-one, because something in the list is
# broken/stale
consecutive_errors = 0
events_saved = 0
metrics_events_batch_save_errors += 1
for e in events:
try:
e.save()
metrics_singular_events_saved += 1
except Exception:
logger.exception('Database Error Saving Job Event')
events_saved += 1
consecutive_errors = 0
except Exception as exc_indv:
consecutive_errors += 1
logger.info(f'Database Error Saving individual Job Event, error {str(exc_indv)}')
if consecutive_errors >= 5:
raise
metrics_singular_events_saved += events_saved
if events_saved == 0:
raise
metrics_duration_to_save = time.perf_counter() - metrics_duration_to_save
for e in events:
if not getattr(e, '_skip_websocket_message', False):
metrics_events_broadcast += 1
emit_event_detail(e)
if getattr(e, '_notification_trigger_event', False):
job_stats_wrapup(getattr(e, e.JOB_REFERENCE), event=e)
self.buff = {}
self.last_flush = time.time()
# only update metrics if we saved events
@@ -156,6 +205,11 @@ class CallbackBrokerWorker(BaseWorker):
self.subsystem_metrics.observe('callback_receiver_batch_events_insert_db', metrics_bulk_events_saved)
self.subsystem_metrics.inc('callback_receiver_events_in_memory', -(metrics_bulk_events_saved + metrics_singular_events_saved))
self.subsystem_metrics.inc('callback_receiver_events_broadcast', metrics_events_broadcast)
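# Note: the average computed below covers only events that arrived with a created
# timestamp; events counted in metrics_events_missing_created contributed no
# processing-time delta and are subtracted out of the denominator.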
self.subsystem_metrics.set(
'callback_receiver_event_processing_avg_seconds',
metrics_total_job_event_processing_seconds.total_seconds()
/ (metrics_bulk_events_saved + metrics_singular_events_saved - metrics_events_missing_created),
)
if self.subsystem_metrics.should_pipe_execute() is True:
self.subsystem_metrics.pipe_execute()
@@ -165,47 +219,32 @@ class CallbackBrokerWorker(BaseWorker):
if flush:
self.last_event = ''
if not flush:
event_map = {
'job_id': JobEvent,
'ad_hoc_command_id': AdHocCommandEvent,
'project_update_id': ProjectUpdateEvent,
'inventory_update_id': InventoryUpdateEvent,
'system_job_id': SystemJobEvent,
}
job_identifier = 'unknown job'
for key, cls in event_map.items():
if key in body:
job_identifier = body[key]
for cls in (JobEvent, AdHocCommandEvent, ProjectUpdateEvent, InventoryUpdateEvent, SystemJobEvent):
if cls.JOB_REFERENCE in body:
job_identifier = body[cls.JOB_REFERENCE]
break
self.last_event = f'\n\t- {cls.__name__} for #{job_identifier} ({body.get("event", "")} {body.get("uuid", "")})' # noqa
notification_trigger_event = bool(body.get('event') == cls.WRAPUP_EVENT)
if body.get('event') == 'EOF':
try:
if 'guid' in body:
set_guid(body['guid'])
final_counter = body.get('final_counter', 0)
logger.info('Event processing is finished for Job {}, sending notifications'.format(job_identifier))
logger.info('Starting EOF event processing for Job {}'.format(job_identifier))
# EOF events are sent when stdout for the running task is
# closed. don't actually persist them to the database; we
# just use them to report `summary` websocket events as an
# approximation for when a job is "done"
emit_channel_notification('jobs-summary', dict(group_name='jobs', unified_job_id=job_identifier, final_counter=final_counter))
# Additionally, when we've processed all events, we should
# have all the data we need to send out success/failure
# notification templates
uj = UnifiedJob.objects.get(pk=job_identifier)
if isinstance(uj, Job):
# *actual playbooks* send their success/failure
# notifications in response to the playbook_on_stats
# event handling code in main.models.events
pass
elif hasattr(uj, 'send_notification_templates'):
handle_success_and_failure_notifications.apply_async([uj.id])
if notification_trigger_event:
job_stats_wrapup(job_identifier)
except Exception:
logger.exception('Worker failed to emit notifications: Job {}'.format(job_identifier))
logger.exception('Worker failed to perform EOF tasks: Job {}'.format(job_identifier))
finally:
self.subsystem_metrics.inc('callback_receiver_events_in_memory', -1)
set_guid('')
@@ -215,9 +254,12 @@ class CallbackBrokerWorker(BaseWorker):
event = cls.create_from_data(**body)
if skip_websocket_message:
if skip_websocket_message: # if this event sends websocket messages, fire them off on flush
event._skip_websocket_message = True
if notification_trigger_event: # if this is an Ansible stats event, ensure notifications on flush
event._notification_trigger_event = True
self.buff.setdefault(cls, []).append(event)
retries = 0
@@ -225,17 +267,18 @@ class CallbackBrokerWorker(BaseWorker):
try:
self.flush(force=flush)
break
except (OperationalError, InterfaceError, InternalError):
except (OperationalError, InterfaceError, InternalError) as exc:
if retries >= self.MAX_RETRIES:
logger.exception('Worker could not re-establish database connectivity, giving up on one or more events.')
return
delay = 60 * retries
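# linear backoff: 0s before the first retry, then 60s, 120s, ... until MAX_RETRIES is exceeded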
logger.exception('Database Error Saving Job Event, retry #{i} in {delay} seconds:'.format(i=retries + 1, delay=delay))
logger.warning(f'Database Error Flushing Job Events, retry #{retries + 1} in {delay} seconds: {str(exc)}')
django_connection.close()
time.sleep(delay)
retries += 1
except DatabaseError:
logger.exception('Database Error Saving Job Event')
logger.exception('Database Error Flushing Job Events')
django_connection.close()
break
except Exception as exc:
tb = traceback.format_exc()

View File

@@ -3,6 +3,7 @@ import logging
import importlib
import sys
import traceback
import time
from kubernetes.config import kube_config
@@ -60,8 +61,19 @@ class TaskWorker(BaseWorker):
# the callable is a class, e.g., RunJob; instantiate and
# return its `run()` method
_call = _call().run
log_extra = ''
logger_method = logger.debug
if ('time_ack' in body) and ('time_pub' in body):
time_publish = body['time_ack'] - body['time_pub']
time_waiting = time.time() - body['time_ack']
if time_waiting > 5.0 or time_publish > 5.0:
# If the task took a very long time to process, add this information to the log
log_extra = f' took {time_publish:.4f}s to ack, {time_waiting:.4f}s in local dispatcher'
logger_method = logger.info
# don't print kwargs, they often contain launch-time secrets
logger.debug('task {} starting {}(*{})'.format(uuid, task, args))
logger_method(f'task {uuid} starting {task}(*{args}){log_extra}')
return _call(*args, **kwargs)
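A toy illustration of the two latencies being measured, using hypothetical timestamps:

time_pub, time_ack = 100.0, 103.5   # hypothetical epoch seconds: task published, then acknowledged
now_ts = 110.0                      # hypothetical current time inside the worker
time_publish = time_ack - time_pub  # 3.5s between publish and ack
time_waiting = now_ts - time_ack    # 6.5s queued in the local dispatcher before a worker picked it up
# both exceed the 5.0s threshold, so the task-start line would be logged at INFO
# with the extra timing detail instead of the usual DEBUG message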
def perform_work(self, body):

View File

@@ -103,7 +103,7 @@ class DeleteMeta:
with connection.cursor() as cursor:
query = "SELECT inhrelid::regclass::text AS child FROM pg_catalog.pg_inherits"
query += f" WHERE inhparent = 'public.{tbl_name}'::regclass"
query += f" WHERE inhparent = '{tbl_name}'::regclass"
query += f" AND TO_TIMESTAMP(LTRIM(inhrelid::regclass::text, '{tbl_name}_'), 'YYYYMMDD_HH24') < '{self.cutoff}'"
query += " ORDER BY inhrelid::regclass::text"

View File

@@ -32,8 +32,10 @@ class Command(BaseCommand):
name='Demo Project',
scm_type='git',
scm_url='https://github.com/ansible/ansible-tower-samples',
scm_update_on_launch=True,
scm_update_cache_timeout=0,
status='successful',
scm_revision='347e44fea036c94d5f60e544de006453ee5c71ad',
playbook_files=['hello_world.yml'],
)
p.organization = o

View File

@@ -862,7 +862,7 @@ class Command(BaseCommand):
overwrite_vars=bool(options.get('overwrite_vars', False)),
)
inventory_update = inventory_source.create_inventory_update(
_eager_fields=dict(job_args=json.dumps(sys.argv), job_env=dict(os.environ.items()), job_cwd=os.getcwd())
_eager_fields=dict(status='running', job_args=json.dumps(sys.argv), job_env=dict(os.environ.items()), job_cwd=os.getcwd())
)
data = AnsibleInventoryLoader(source=source, verbosity=verbosity).load()

View File

@@ -54,7 +54,7 @@ class Command(BaseCommand):
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.modified:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}\033[0m')
print()

View File

@@ -27,7 +27,9 @@ class Command(BaseCommand):
)
def handle(self, **options):
# provides a mapping of hostname to Instance objects
nodes = Instance.objects.in_bulk(field_name='hostname')
if options['source'] not in nodes:
raise CommandError(f"Host {options['source']} is not a registered instance.")
if not (options['peers'] or options['disconnect'] or options['exact'] is not None):
@@ -57,7 +59,9 @@ class Command(BaseCommand):
results = 0
for target in options['peers']:
_, created = InstanceLink.objects.get_or_create(source=nodes[options['source']], target=nodes[target])
_, created = InstanceLink.objects.update_or_create(
source=nodes[options['source']], target=nodes[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
)
if created:
results += 1
@@ -80,7 +84,9 @@ class Command(BaseCommand):
links = set(InstanceLink.objects.filter(source=nodes[options['source']]).values_list('target__hostname', flat=True))
removals, _ = InstanceLink.objects.filter(source=nodes[options['source']], target__hostname__in=links - peers).delete()
for target in peers - links:
_, created = InstanceLink.objects.get_or_create(source=nodes[options['source']], target=nodes[target])
_, created = InstanceLink.objects.update_or_create(
source=nodes[options['source']], target=nodes[target], defaults={'link_state': InstanceLink.States.ESTABLISHED}
)
if created:
additions += 1

View File

@@ -1,13 +1,14 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import logging
import yaml
from django.conf import settings
from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
@@ -30,7 +31,16 @@ class Command(BaseCommand):
'--reload',
dest='reload',
action='store_true',
help=('cause the dispatcher to recycle all of its worker processes;' 'running jobs will run to completion first'),
help=('cause the dispatcher to recycle all of its worker processes; running jobs will run to completion first'),
)
parser.add_argument(
'--cancel',
dest='cancel',
help=(
'Cancel a particular task id. Takes either a single id string, or a JSON list of multiple ids. '
'Can take in output from the --running argument as input to cancel all tasks. '
'Only running tasks can be canceled; queued tasks must be started before they can be canceled.'
),
)
def handle(self, *arg, **options):
@@ -42,6 +52,16 @@ class Command(BaseCommand):
return
if options.get('reload'):
return Control('dispatcher').control({'control': 'reload'})
if options.get('cancel'):
cancel_str = options.get('cancel')
try:
cancel_data = yaml.safe_load(cancel_str)
except Exception:
cancel_data = [cancel_str]
if not isinstance(cancel_data, list):
cancel_data = [cancel_str]
print(Control('dispatcher').cancel(cancel_data))
return
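As an illustration of what --cancel accepts (hypothetical task ids), the YAML round-trip above treats a bare id and a JSON list of ids the same way:

import yaml

for cancel_str in ('123', '["123", "456"]'):
    try:
        cancel_data = yaml.safe_load(cancel_str)
    except Exception:
        cancel_data = [cancel_str]
    if not isinstance(cancel_data, list):
        cancel_data = [cancel_str]  # a bare scalar falls back to the raw argument string
    print(cancel_data)  # -> ['123'], then ['123', '456']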
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
@@ -53,7 +73,6 @@ class Command(BaseCommand):
# (like the node heartbeat)
periodic.run_continuously()
reaper.reap()
consumer = None
try:

View File

@@ -95,8 +95,13 @@ class Command(BaseCommand):
# database migrations are still running
from awx.main.models.ha import Instance
executor = MigrationExecutor(connection)
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
try:
executor = MigrationExecutor(connection)
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
except Exception as exc:
logger.info(f'Error on startup of run_wsbroadcast (error: {exc}), retry in 10s...')
time.sleep(10)
return
# In containerized deployments, migrations happen in the task container,
# and the services running there don't start until migrations are

View File

@@ -129,10 +129,13 @@ class InstanceManager(models.Manager):
# if instance was not retrieved by uuid and hostname was, use the hostname
instance = self.filter(hostname=hostname)
from awx.main.models import Instance
# Return existing instance
if instance.exists():
instance = instance.first() # in the unusual occasion that there is more than one, only get one
update_fields = []
instance.node_state = Instance.States.INSTALLED # Wait for it to show up on the mesh
update_fields = ['node_state']
# if instance was retrieved by uuid and hostname has changed, update hostname
if instance.hostname != hostname:
logger.warning("passed in hostname {0} is different from the original hostname {1}, updating to {0}".format(hostname, instance.hostname))
@@ -141,6 +144,7 @@ class InstanceManager(models.Manager):
# if any other fields are to be updated
if instance.ip_address != ip_address:
instance.ip_address = ip_address
update_fields.append('ip_address')
if instance.node_type != node_type:
instance.node_type = node_type
update_fields.append('node_type')
@@ -151,12 +155,12 @@ class InstanceManager(models.Manager):
return (False, instance)
# Create new instance, and fill in default values
create_defaults = dict(capacity=0)
create_defaults = {'node_state': Instance.States.INSTALLED, 'capacity': 0}
if defaults is not None:
create_defaults.update(defaults)
uuid_option = {}
if uuid is not None:
uuid_option = dict(uuid=uuid)
uuid_option = {'uuid': uuid}
if node_type == 'execution' and 'version' not in create_defaults:
create_defaults['version'] = RECEPTOR_PENDING
instance = self.create(hostname=hostname, ip_address=ip_address, node_type=node_type, **create_defaults, **uuid_option)

View File

@@ -26,6 +26,17 @@ logger = logging.getLogger('awx.main.middleware')
perf_logger = logging.getLogger('awx.analytics.performance')
class SettingsCacheMiddleware(MiddlewareMixin):
"""
Clears the in-memory settings cache at the beginning of a request.
We do this so that a script can POST to /api/v2/settings/all/ and then
right away GET /api/v2/settings/all/ and see the updated value.
"""
def process_request(self, request):
settings._awx_conf_memoizedcache.clear()
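A sketch of the read-after-write pattern this enables, with a hypothetical host, credentials, and setting name (the HTTP method follows the docstring):

import requests

session = requests.Session()
session.auth = ('admin', 'password')                  # hypothetical credentials
url = 'https://awx.example.org/api/v2/settings/all/'  # hypothetical host

session.post(url, json={'SOME_SETTING': True})        # hypothetical setting name
print(session.get(url).json())                        # reflects the write immediately,
                                                      # since the memoized cache is cleared per request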
class TimingMiddleware(threading.local, MiddlewareMixin):
dest = '/var/log/tower/profile'

View File

@@ -0,0 +1,18 @@
# Generated by Django 3.2.12 on 2022-04-18 21:29
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0159_deprecate_inventory_source_UoPU_field'),
]
operations = [
migrations.AlterField(
model_name='schedule',
name='rrule',
field=models.TextField(help_text="A value representing the schedule's iCal recurrence rule."),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 3.2.12 on 2022-04-27 02:16
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0160_alter_schedule_rrule'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='host_status_counts',
field=models.JSONField(blank=True, default=None, editable=False, help_text='Playbook stats from the Ansible playbook_on_stats event.', null=True),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 3.2.13 on 2022-05-02 21:27
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0161_unifiedjob_host_status_counts'),
]
operations = [
migrations.AlterField(
model_name='unifiedjob',
name='dependent_jobs',
field=models.ManyToManyField(editable=False, related_name='unifiedjob_blocked_jobs', to='main.UnifiedJob'),
),
]

View File

@@ -0,0 +1,23 @@
# Generated by Django 3.2.13 on 2022-06-02 18:15
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0162_alter_unifiedjob_dependent_jobs'),
]
operations = [
migrations.AlterField(
model_name='job',
name='job_tags',
field=models.TextField(blank=True, default=''),
),
migrations.AlterField(
model_name='jobtemplate',
name='job_tags',
field=models.TextField(blank=True, default=''),
),
]

View File

@@ -0,0 +1,40 @@
# Generated by Django 3.2.13 on 2022-06-21 21:29
from django.db import migrations
import logging
logger = logging.getLogger("awx")
def forwards(apps, schema_editor):
InventorySource = apps.get_model('main', 'InventorySource')
sources = InventorySource.objects.filter(update_on_project_update=True)
for src in sources:
if src.update_on_launch == False:
src.update_on_launch = True
src.save(update_fields=['update_on_launch'])
logger.info(f"Setting update_on_launch to True for {src}")
proj = src.source_project
if proj and proj.scm_update_on_launch is False:
proj.scm_update_on_launch = True
proj.save(update_fields=['scm_update_on_launch'])
logger.warning(f"Setting scm_update_on_launch to True for {proj}")
class Migration(migrations.Migration):
dependencies = [
('main', '0163_convert_job_tags_to_textfield'),
]
operations = [
migrations.RunPython(forwards, migrations.RunPython.noop),
migrations.RemoveField(
model_name='inventorysource',
name='scm_last_revision',
),
migrations.RemoveField(
model_name='inventorysource',
name='update_on_project_update',
),
]

View File

@@ -0,0 +1,35 @@
# Generated by Django 3.2.13 on 2022-08-10 14:03
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0164_remove_inventorysource_update_on_project_update'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='preferred_instance_groups_cache',
field=models.JSONField(
blank=True, default=None, editable=False, help_text='A cached list with pk values from preferred instance groups.', null=True
),
),
migrations.AddField(
model_name='unifiedjob',
name='task_impact',
field=models.PositiveIntegerField(default=0, editable=False, help_text='Number of forks an instance consumes when running this job.'),
),
migrations.AddField(
model_name='workflowapproval',
name='expires',
field=models.DateTimeField(
default=None,
editable=False,
help_text='The time this approval will expire. This is the created time plus timeout, used for filtering.',
null=True,
),
),
]

View File

@@ -0,0 +1,40 @@
# Generated by Django 3.2.13 on 2022-07-06 13:19
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0165_task_manager_refactor'),
]
operations = [
migrations.AlterField(
model_name='adhoccommandevent',
name='host',
field=models.ForeignKey(
db_constraint=False,
default=None,
editable=False,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='ad_hoc_command_events',
to='main.host',
),
),
migrations.AlterField(
model_name='jobevent',
name='host',
field=models.ForeignKey(
db_constraint=False,
default=None,
editable=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name='job_events_as_primary_host',
to='main.host',
),
),
]

View File

@@ -0,0 +1,57 @@
# Generated by Django 3.2.13 on 2022-08-24 14:02
from django.db import migrations, models
import django.db.models.deletion
from awx.main.models import CredentialType
from awx.main.utils.common import set_current_apps
def setup_tower_managed_defaults(apps, schema_editor):
set_current_apps(apps)
CredentialType.setup_tower_managed_defaults(apps)
class Migration(migrations.Migration):
dependencies = [
('main', '0166_alter_jobevent_host'),
]
operations = [
migrations.AddField(
model_name='project',
name='signature_validation_credential',
field=models.ForeignKey(
blank=True,
default=None,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='projects_signature_validation',
to='main.credential',
help_text='An optional credential used for validating files in the project against unexpected changes.',
),
),
migrations.AlterField(
model_name='credentialtype',
name='kind',
field=models.CharField(
choices=[
('ssh', 'Machine'),
('vault', 'Vault'),
('net', 'Network'),
('scm', 'Source Control'),
('cloud', 'Cloud'),
('registry', 'Container Registry'),
('token', 'Personal Access Token'),
('insights', 'Insights'),
('external', 'External'),
('kubernetes', 'Kubernetes'),
('galaxy', 'Galaxy/Automation Hub'),
('cryptography', 'Cryptography'),
],
max_length=32,
),
),
migrations.RunPython(setup_tower_managed_defaults),
]

View File

@@ -0,0 +1,25 @@
# Generated by Django 3.2.13 on 2022-09-08 16:03
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0167_project_signature_validation_credential'),
]
operations = [
migrations.AddField(
model_name='inventoryupdate',
name='scm_revision',
field=models.CharField(
blank=True,
default='',
editable=False,
help_text='The SCM Revision from the Project used for this inventory update. Only applicable to inventories sourced from SCM.',
max_length=1024,
verbose_name='SCM Revision',
),
),
]

View File

@@ -0,0 +1,225 @@
# Generated by Django 3.2.13 on 2022-09-15 14:07
import awx.main.fields
import awx.main.utils.polymorphic
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0168_inventoryupdate_scm_revision'),
]
operations = [
migrations.AddField(
model_name='joblaunchconfig',
name='execution_environment',
field=models.ForeignKey(
blank=True,
default=None,
help_text='The container image to be used for execution.',
null=True,
on_delete=awx.main.utils.polymorphic.SET_NULL,
related_name='joblaunchconfig_as_prompt',
to='main.executionenvironment',
),
),
migrations.AddField(
model_name='joblaunchconfig',
name='labels',
field=models.ManyToManyField(related_name='joblaunchconfig_labels', to='main.Label'),
),
migrations.AddField(
model_name='jobtemplate',
name='ask_execution_environment_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='jobtemplate',
name='ask_forks_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='jobtemplate',
name='ask_instance_groups_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='jobtemplate',
name='ask_job_slice_count_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='jobtemplate',
name='ask_labels_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='jobtemplate',
name='ask_timeout_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='schedule',
name='execution_environment',
field=models.ForeignKey(
blank=True,
default=None,
help_text='The container image to be used for execution.',
null=True,
on_delete=awx.main.utils.polymorphic.SET_NULL,
related_name='schedule_as_prompt',
to='main.executionenvironment',
),
),
migrations.AddField(
model_name='schedule',
name='labels',
field=models.ManyToManyField(related_name='schedule_labels', to='main.Label'),
),
migrations.AddField(
model_name='workflowjobnode',
name='execution_environment',
field=models.ForeignKey(
blank=True,
default=None,
help_text='The container image to be used for execution.',
null=True,
on_delete=awx.main.utils.polymorphic.SET_NULL,
related_name='workflowjobnode_as_prompt',
to='main.executionenvironment',
),
),
migrations.AddField(
model_name='workflowjobnode',
name='labels',
field=models.ManyToManyField(related_name='workflowjobnode_labels', to='main.Label'),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_labels_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_skip_tags_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='workflowjobtemplate',
name='ask_tags_on_launch',
field=awx.main.fields.AskForField(blank=True, default=False),
),
migrations.AddField(
model_name='workflowjobtemplatenode',
name='execution_environment',
field=models.ForeignKey(
blank=True,
default=None,
help_text='The container image to be used for execution.',
null=True,
on_delete=awx.main.utils.polymorphic.SET_NULL,
related_name='workflowjobtemplatenode_as_prompt',
to='main.executionenvironment',
),
),
migrations.AddField(
model_name='workflowjobtemplatenode',
name='labels',
field=models.ManyToManyField(related_name='workflowjobtemplatenode_labels', to='main.Label'),
),
migrations.CreateModel(
name='WorkflowJobTemplateNodeBaseInstanceGroupMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
('instancegroup', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.instancegroup')),
('workflowjobtemplatenode', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.workflowjobtemplatenode')),
],
),
migrations.CreateModel(
name='WorkflowJobNodeBaseInstanceGroupMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
('instancegroup', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.instancegroup')),
('workflowjobnode', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.workflowjobnode')),
],
),
migrations.CreateModel(
name='WorkflowJobInstanceGroupMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
('instancegroup', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.instancegroup')),
('workflowjobnode', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.workflowjob')),
],
),
migrations.CreateModel(
name='ScheduleInstanceGroupMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
('instancegroup', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.instancegroup')),
('schedule', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.schedule')),
],
),
migrations.CreateModel(
name='JobLaunchConfigInstanceGroupMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
('instancegroup', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.instancegroup')),
('joblaunchconfig', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.joblaunchconfig')),
],
),
migrations.AddField(
model_name='joblaunchconfig',
name='instance_groups',
field=awx.main.fields.OrderedManyToManyField(
blank=True, editable=False, related_name='joblaunchconfigs', through='main.JobLaunchConfigInstanceGroupMembership', to='main.InstanceGroup'
),
),
migrations.AddField(
model_name='schedule',
name='instance_groups',
field=awx.main.fields.OrderedManyToManyField(
blank=True, editable=False, related_name='schedule_instance_groups', through='main.ScheduleInstanceGroupMembership', to='main.InstanceGroup'
),
),
migrations.AddField(
model_name='workflowjob',
name='instance_groups',
field=awx.main.fields.OrderedManyToManyField(
blank=True,
editable=False,
related_name='workflow_job_instance_groups',
through='main.WorkflowJobInstanceGroupMembership',
to='main.InstanceGroup',
),
),
migrations.AddField(
model_name='workflowjobnode',
name='instance_groups',
field=awx.main.fields.OrderedManyToManyField(
blank=True,
editable=False,
related_name='workflow_job_node_instance_groups',
through='main.WorkflowJobNodeBaseInstanceGroupMembership',
to='main.InstanceGroup',
),
),
migrations.AddField(
model_name='workflowjobtemplatenode',
name='instance_groups',
field=awx.main.fields.OrderedManyToManyField(
blank=True,
editable=False,
related_name='workflow_job_template_node_instance_groups',
through='main.WorkflowJobTemplateNodeBaseInstanceGroupMembership',
to='main.InstanceGroup',
),
),
]

View File

@@ -0,0 +1,79 @@
# Generated by Django 3.2.13 on 2022-08-02 17:53
import django.core.validators
from django.db import migrations, models
def forwards(apps, schema_editor):
# All existing InstanceLink objects need to be in the state
# 'Established', which is the default, so nothing needs to be done
# for that.
Instance = apps.get_model('main', 'Instance')
for instance in Instance.objects.all():
instance.node_state = 'ready' if not instance.errors else 'unavailable'
instance.save(update_fields=['node_state'])
class Migration(migrations.Migration):
dependencies = [
('main', '0169_jt_prompt_everything_on_launch'),
]
operations = [
migrations.AddField(
model_name='instance',
name='listener_port',
field=models.PositiveIntegerField(
blank=True,
default=27199,
help_text='Port that Receptor will listen for incoming connections on.',
validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(65535)],
),
),
migrations.AddField(
model_name='instance',
name='node_state',
field=models.CharField(
choices=[
('provisioning', 'Provisioning'),
('provision-fail', 'Provisioning Failure'),
('installed', 'Installed'),
('ready', 'Ready'),
('unavailable', 'Unavailable'),
('deprovisioning', 'De-provisioning'),
('deprovision-fail', 'De-provisioning Failure'),
],
default='ready',
help_text='Indicates the current life cycle stage of this instance.',
max_length=16,
),
),
migrations.AddField(
model_name='instancelink',
name='link_state',
field=models.CharField(
choices=[('adding', 'Adding'), ('established', 'Established'), ('removing', 'Removing')],
default='established',
help_text='Indicates the current life cycle stage of this peer link.',
max_length=16,
),
),
migrations.AlterField(
model_name='instance',
name='node_type',
field=models.CharField(
choices=[
('control', 'Control plane node'),
('execution', 'Execution plane node'),
('hybrid', 'Controller and execution'),
('hop', 'Message-passing node, no execution capability'),
],
default='hybrid',
help_text='Role that this node plays in the mesh.',
max_length=16,
),
),
migrations.RunPython(forwards, reverse_code=migrations.RunPython.noop),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 3.2.13 on 2022-09-26 20:54
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0170_node_and_link_state'),
]
operations = [
migrations.AddField(
model_name='instance',
name='health_check_started',
field=models.DateTimeField(editable=False, help_text='The last time a health check was initiated on this instance.', null=True),
),
]

View File

@@ -0,0 +1,29 @@
# Generated by Django 3.2.13 on 2022-09-29 18:10
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0171_add_health_check_started'),
]
operations = [
migrations.AddField(
model_name='inventory',
name='prevent_instance_group_fallback',
field=models.BooleanField(
default=False,
help_text='If enabled, the inventory will prevent adding any organization instance groups to the list of preferred instance groups to run associated job templates on. If this setting is enabled and you provided an empty list, the global instance groups will be applied.',
),
),
migrations.AddField(
model_name='jobtemplate',
name='prevent_instance_group_fallback',
field=models.BooleanField(
default=False,
help_text='If enabled, the job template will prevent adding any inventory or organization instance groups to the list of preferred instance groups to run on. If this setting is enabled and you provided an empty list, the global instance groups will be applied.',
),
),
]

View File

@@ -36,7 +36,7 @@ def create_clearsessions_jt(apps, schema_editor):
if created:
sched = Schedule(
name='Cleanup Expired Sessions',
rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,
description='Cleans out expired browser sessions',
enabled=True,
created=now_dt,
@@ -69,7 +69,7 @@ def create_cleartokens_jt(apps, schema_editor):
if created:
sched = Schedule(
name='Cleanup Expired OAuth 2 Tokens',
rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,
description='Removes expired OAuth 2 access and refresh tokens',
enabled=True,
created=now_dt,

View File

@@ -90,6 +90,9 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
extra_vars_dict = VarsDictProperty('extra_vars', True)
def _set_default_dependencies_processed(self):
self.dependencies_processed = True
def clean_inventory(self):
inv = self.inventory
if not inv:
@@ -178,12 +181,12 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
def get_passwords_needed_to_start(self):
return self.passwords_needed_to_start
@property
def task_impact(self):
def _get_task_impact(self):
# NOTE: We sorta have to assume the host count matches and that forks default to 5
from awx.main.models.inventory import Host
count_hosts = Host.objects.filter(enabled=True, inventory__ad_hoc_commands__pk=self.pk).count()
if self.inventory:
count_hosts = self.inventory.total_hosts
else:
count_hosts = 5
return min(count_hosts, 5 if self.forks == 0 else self.forks) + 1
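For a feel of the arithmetic (hypothetical counts): with 12 hosts in the inventory and forks left at 0, the impact is min(12, 5) + 1 == 6; with forks=20 it is min(12, 20) + 1 == 13; and with no inventory at all the host count is assumed to be 5.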
def copy(self):
@@ -207,23 +210,32 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
def save(self, *args, **kwargs):
update_fields = kwargs.get('update_fields', [])
def add_to_update_fields(name):
if name not in update_fields:
update_fields.append(name)
if not self.preferred_instance_groups_cache:
self.preferred_instance_groups_cache = self._get_preferred_instance_group_cache()
add_to_update_fields("preferred_instance_groups_cache")
if not self.name:
self.name = Truncator(u': '.join(filter(None, (self.module_name, self.module_args)))).chars(512)
if 'name' not in update_fields:
update_fields.append('name')
add_to_update_fields("name")
if self.task_impact == 0:
self.task_impact = self._get_task_impact()
add_to_update_fields("task_impact")
super(AdHocCommand, self).save(*args, **kwargs)
@property
def preferred_instance_groups(self):
if self.inventory is not None and self.inventory.organization is not None:
organization_groups = [x for x in self.inventory.organization.instance_groups.all()]
else:
organization_groups = []
selected_groups = []
if self.inventory is not None:
inventory_groups = [x for x in self.inventory.instance_groups.all()]
else:
inventory_groups = []
selected_groups = inventory_groups + organization_groups
for instance_group in self.inventory.instance_groups.all():
selected_groups.append(instance_group)
if not self.inventory.prevent_instance_group_fallback and self.inventory.organization is not None:
for instance_group in self.inventory.organization.instance_groups.all():
selected_groups.append(instance_group)
if not selected_groups:
return self.global_instance_groups
return selected_groups
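In effect (hypothetical groups): with inventory instance groups [ig1] and organization instance groups [ig2], the command prefers [ig1, ig2]; enabling prevent_instance_group_fallback on the inventory narrows that to [ig1]; and if nothing is selected at all, the global instance groups apply.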

View File

@@ -316,16 +316,17 @@ class PrimordialModel(HasEditsMixin, CreatedModifiedModel):
user = get_current_user()
if user and not user.id:
user = None
if not self.pk and not self.created_by:
if (not self.pk) and (user is not None) and (not self.created_by):
self.created_by = user
if 'created_by' not in update_fields:
update_fields.append('created_by')
# Update modified_by if any editable fields have changed
new_values = self._get_fields_snapshot()
if (not self.pk and not self.modified_by) or self._values_have_edits(new_values):
self.modified_by = user
if 'modified_by' not in update_fields:
update_fields.append('modified_by')
if self.modified_by != user:
self.modified_by = user
if 'modified_by' not in update_fields:
update_fields.append('modified_by')
super(PrimordialModel, self).save(*args, **kwargs)
self._prior_values_store = new_values

View File

@@ -336,6 +336,7 @@ class CredentialType(CommonModelNameNotUnique):
('external', _('External')),
('kubernetes', _('Kubernetes')),
('galaxy', _('Galaxy/Automation Hub')),
('cryptography', _('Cryptography')),
)
kind = models.CharField(max_length=32, choices=KIND_CHOICES)
@@ -1171,6 +1172,25 @@ ManagedCredentialType(
},
)
ManagedCredentialType(
namespace='gpg_public_key',
kind='cryptography',
name=gettext_noop('GPG Public Key'),
inputs={
'fields': [
{
'id': 'gpg_public_key',
'label': gettext_noop('GPG Public Key'),
'type': 'string',
'secret': True,
'multiline': True,
'help_text': gettext_noop('GPG Public Key used to validate content signatures.'),
},
],
'required': ['gpg_public_key'],
},
)
class CredentialInputSource(PrimordialModel):
class Meta:

View File

@@ -35,6 +35,7 @@ def gce(cred, env, private_data_dir):
container_path = to_container_path(path, private_data_dir)
env['GCE_CREDENTIALS_FILE_PATH'] = container_path
env['GCP_SERVICE_ACCOUNT_FILE'] = container_path
env['GOOGLE_APPLICATION_CREDENTIALS'] = container_path
# Handle env variables for new module types.
# This includes gcp_compute inventory plugin and

View File

@@ -6,7 +6,7 @@ from collections import defaultdict
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from django.db import models, DatabaseError, connection
from django.db import models, DatabaseError
from django.utils.dateparse import parse_datetime
from django.utils.text import Truncator
from django.utils.timezone import utc, now
@@ -25,7 +25,6 @@ analytics_logger = logging.getLogger('awx.analytics.job_events')
logger = logging.getLogger('awx.main.models.events')
__all__ = ['JobEvent', 'ProjectUpdateEvent', 'AdHocCommandEvent', 'InventoryUpdateEvent', 'SystemJobEvent']
@@ -126,6 +125,7 @@ class BasePlaybookEvent(CreatedModifiedModel):
'host_name',
'verbosity',
]
WRAPUP_EVENT = 'playbook_on_stats'
class Meta:
abstract = True
@@ -384,14 +384,6 @@ class BasePlaybookEvent(CreatedModifiedModel):
job.get_event_queryset().filter(uuid__in=changed).update(changed=True)
job.get_event_queryset().filter(uuid__in=failed).update(failed=True)
# send success/failure notifications when we've finished handling the playbook_on_stats event
from awx.main.tasks.system import handle_success_and_failure_notifications # circular import
def _send_notifications():
handle_success_and_failure_notifications.apply_async([job.id])
connection.on_commit(_send_notifications)
for field in ('playbook', 'play', 'task', 'role'):
value = force_str(event_data.get(field, '')).strip()
if value != getattr(self, field):
@@ -470,6 +462,7 @@ class JobEvent(BasePlaybookEvent):
"""
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['job_id', 'workflow_job_id', 'job_created']
JOB_REFERENCE = 'job_id'
objects = DeferJobCreatedManager()
@@ -492,13 +485,18 @@ class JobEvent(BasePlaybookEvent):
editable=False,
db_index=False,
)
# When we partitioned the table we accidentally "lost" the foreign key constraint.
# However, this turned out to be fine, because the cascade-on-delete at the Django layer was causing DB issues.
# We are going to leave this as a foreign key, but mark it as having no DB-level relation and
# prevent cascading on delete.
host = models.ForeignKey(
'Host',
related_name='job_events_as_primary_host',
null=True,
default=None,
on_delete=models.SET_NULL,
on_delete=models.DO_NOTHING,
editable=False,
db_constraint=False,
)
host_name = models.CharField(
max_length=1024,
@@ -600,6 +598,7 @@ UnpartitionedJobEvent._meta.db_table = '_unpartitioned_' + JobEvent._meta.db_tab
class ProjectUpdateEvent(BasePlaybookEvent):
VALID_KEYS = BasePlaybookEvent.VALID_KEYS + ['project_update_id', 'workflow_job_id', 'job_created']
JOB_REFERENCE = 'project_update_id'
objects = DeferJobCreatedManager()
@@ -641,6 +640,7 @@ class BaseCommandEvent(CreatedModifiedModel):
"""
VALID_KEYS = ['event_data', 'created', 'counter', 'uuid', 'stdout', 'start_line', 'end_line', 'verbosity']
WRAPUP_EVENT = 'EOF'
class Meta:
abstract = True
@@ -736,6 +736,8 @@ class BaseCommandEvent(CreatedModifiedModel):
class AdHocCommandEvent(BaseCommandEvent):
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['ad_hoc_command_id', 'event', 'host_name', 'host_id', 'workflow_job_id', 'job_created']
WRAPUP_EVENT = 'playbook_on_stats' # exception to BaseCommandEvent
JOB_REFERENCE = 'ad_hoc_command_id'
objects = DeferJobCreatedManager()
@@ -796,6 +798,10 @@ class AdHocCommandEvent(BaseCommandEvent):
editable=False,
db_index=False,
)
# We need to keep this as a FK in the model because AdHocCommand uses a ManyToMany field
# to hosts through adhoc_events. But in https://github.com/ansible/awx/pull/8236/ we
# removed the nulling of the field in case of a host going away before an event is saved,
# so this needs to stay SET_NULL at the ORM level.
host = models.ForeignKey(
'Host',
related_name='ad_hoc_command_events',
@@ -803,6 +809,7 @@ class AdHocCommandEvent(BaseCommandEvent):
default=None,
on_delete=models.SET_NULL,
editable=False,
db_constraint=False,
)
host_name = models.CharField(
max_length=1024,
@@ -836,6 +843,7 @@ UnpartitionedAdHocCommandEvent._meta.db_table = '_unpartitioned_' + AdHocCommand
class InventoryUpdateEvent(BaseCommandEvent):
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['inventory_update_id', 'workflow_job_id', 'job_created']
JOB_REFERENCE = 'inventory_update_id'
objects = DeferJobCreatedManager()
@@ -881,6 +889,7 @@ UnpartitionedInventoryUpdateEvent._meta.db_table = '_unpartitioned_' + Inventory
class SystemJobEvent(BaseCommandEvent):
VALID_KEYS = BaseCommandEvent.VALID_KEYS + ['system_job_id', 'job_created']
JOB_REFERENCE = 'system_job_id'
objects = DeferJobCreatedManager()

View File

@@ -5,13 +5,14 @@ from decimal import Decimal
import logging
import os
from django.core.validators import MinValueValidator
from django.core.validators import MinValueValidator, MaxValueValidator
from django.db import models, connection
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from django.utils.translation import gettext_lazy as _
from django.conf import settings
from django.utils.timezone import now, timedelta
from django.db.models import Sum
import redis
from solo.models import SingletonModel
@@ -58,6 +59,15 @@ class InstanceLink(BaseModel):
source = models.ForeignKey('Instance', on_delete=models.CASCADE, related_name='+')
target = models.ForeignKey('Instance', on_delete=models.CASCADE, related_name='reverse_peers')
class States(models.TextChoices):
ADDING = 'adding', _('Adding')
ESTABLISHED = 'established', _('Established')
REMOVING = 'removing', _('Removing')
link_state = models.CharField(
choices=States.choices, default=States.ESTABLISHED, max_length=16, help_text=_("Indicates the current life cycle stage of this peer link.")
)
class Meta:
unique_together = ('source', 'target')
@@ -104,6 +114,11 @@ class Instance(HasPolicyEditsMixin, BaseModel):
editable=False,
help_text=_('Last time instance ran its heartbeat task for main cluster nodes. Last known connection to receptor mesh for execution nodes.'),
)
health_check_started = models.DateTimeField(
null=True,
editable=False,
help_text=_("The last time a health check was initiated on this instance."),
)
last_health_check = models.DateTimeField(
null=True,
editable=False,
@@ -126,13 +141,33 @@ class Instance(HasPolicyEditsMixin, BaseModel):
default=0,
editable=False,
)
NODE_TYPE_CHOICES = [
("control", "Control plane node"),
("execution", "Execution plane node"),
("hybrid", "Controller and execution"),
("hop", "Message-passing node, no execution capability"),
]
node_type = models.CharField(default='hybrid', choices=NODE_TYPE_CHOICES, max_length=16)
class Types(models.TextChoices):
CONTROL = 'control', _("Control plane node")
EXECUTION = 'execution', _("Execution plane node")
HYBRID = 'hybrid', _("Controller and execution")
HOP = 'hop', _("Message-passing node, no execution capability")
node_type = models.CharField(default=Types.HYBRID, choices=Types.choices, max_length=16, help_text=_("Role that this node plays in the mesh."))
class States(models.TextChoices):
PROVISIONING = 'provisioning', _('Provisioning')
PROVISION_FAIL = 'provision-fail', _('Provisioning Failure')
INSTALLED = 'installed', _('Installed')
READY = 'ready', _('Ready')
UNAVAILABLE = 'unavailable', _('Unavailable')
DEPROVISIONING = 'deprovisioning', _('De-provisioning')
DEPROVISION_FAIL = 'deprovision-fail', _('De-provisioning Failure')
node_state = models.CharField(
choices=States.choices, default=States.READY, max_length=16, help_text=_("Indicates the current life cycle stage of this instance.")
)
listener_port = models.PositiveIntegerField(
blank=True,
default=27199,
validators=[MinValueValidator(1), MaxValueValidator(65535)],
help_text=_("Port that Receptor will listen for incoming connections on."),
)
peers = models.ManyToManyField('self', symmetrical=False, through=InstanceLink, through_fields=('source', 'target'))
@@ -149,10 +184,13 @@ class Instance(HasPolicyEditsMixin, BaseModel):
def consumed_capacity(self):
capacity_consumed = 0
if self.node_type in ('hybrid', 'execution'):
capacity_consumed += sum(x.task_impact for x in UnifiedJob.objects.filter(execution_node=self.hostname, status__in=('running', 'waiting')))
capacity_consumed += (
UnifiedJob.objects.filter(execution_node=self.hostname, status__in=('running', 'waiting')).aggregate(Sum("task_impact"))["task_impact__sum"]
or 0
)
if self.node_type in ('hybrid', 'control'):
capacity_consumed += sum(
settings.AWX_CONTROL_NODE_TASK_IMPACT for x in UnifiedJob.objects.filter(controller_node=self.hostname, status__in=('running', 'waiting'))
capacity_consumed += (
settings.AWX_CONTROL_NODE_TASK_IMPACT * UnifiedJob.objects.filter(controller_node=self.hostname, status__in=('running', 'waiting')).count()
)
return capacity_consumed
@@ -174,6 +212,14 @@ class Instance(HasPolicyEditsMixin, BaseModel):
def jobs_total(self):
return UnifiedJob.objects.filter(execution_node=self.hostname).count()
@property
def health_check_pending(self):
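# Pending means a health check was initiated but has not yet produced a result:
# never started -> False; started but never completed -> True; otherwise pending
# exactly when the most recent start postdates the last completed check.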
if self.health_check_started is None:
return False
if self.last_health_check is None:
return True
return self.health_check_started > self.last_health_check
def get_cleanup_task_kwargs(self, **kwargs):
"""
Produce options to use for the command: ansible-runner worker cleanup
@@ -203,24 +249,28 @@ class Instance(HasPolicyEditsMixin, BaseModel):
return True
if ref_time is None:
ref_time = now()
grace_period = settings.CLUSTER_NODE_HEARTBEAT_PERIOD * 2
grace_period = settings.CLUSTER_NODE_HEARTBEAT_PERIOD * settings.CLUSTER_NODE_MISSED_HEARTBEAT_TOLERANCE
if self.node_type in ('execution', 'hop'):
grace_period += settings.RECEPTOR_SERVICE_ADVERTISEMENT_PERIOD
return self.last_seen < ref_time - timedelta(seconds=grace_period)
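For scale (hypothetical settings): with CLUSTER_NODE_HEARTBEAT_PERIOD=60 and CLUSTER_NODE_MISSED_HEARTBEAT_TOLERANCE=4, a node is considered lost once last_seen is more than 240s old; execution and hop nodes get RECEPTOR_SERVICE_ADVERTISEMENT_PERIOD added on top of that window.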
def mark_offline(self, update_last_seen=False, perform_save=True, errors=''):
if self.cpu_capacity == 0 and self.mem_capacity == 0 and self.capacity == 0 and self.errors == errors and (not update_last_seen):
return
if self.node_state not in (Instance.States.READY, Instance.States.UNAVAILABLE, Instance.States.INSTALLED):
return []
if self.node_state == Instance.States.UNAVAILABLE and self.errors == errors and (not update_last_seen):
return []
self.node_state = Instance.States.UNAVAILABLE
self.cpu_capacity = self.mem_capacity = self.capacity = 0
self.errors = errors
if update_last_seen:
self.last_seen = now()
update_fields = ['node_state', 'capacity', 'cpu_capacity', 'mem_capacity', 'errors']
if update_last_seen:
update_fields += ['last_seen']
if perform_save:
update_fields = ['capacity', 'cpu_capacity', 'mem_capacity', 'errors']
if update_last_seen:
update_fields += ['last_seen']
self.save(update_fields=update_fields)
return update_fields
def set_capacity_value(self):
"""Sets capacity according to capacity adjustment rule (no save)"""
@@ -274,8 +324,12 @@ class Instance(HasPolicyEditsMixin, BaseModel):
if not errors:
self.refresh_capacity_fields()
self.errors = ''
if self.node_state in (Instance.States.UNAVAILABLE, Instance.States.INSTALLED):
self.node_state = Instance.States.READY
update_fields.append('node_state')
else:
self.mark_offline(perform_save=False, errors=errors)
fields_to_update = self.mark_offline(perform_save=False, errors=errors)
update_fields.extend(fields_to_update)
update_fields.extend(['cpu_capacity', 'mem_capacity', 'capacity'])
# disabling the activity stream avoids extra queries, which is important for heartbeat actions
@@ -292,7 +346,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
# playbook event data; we should consider this a zero capacity event
redis.Redis.from_url(settings.BROKER_URL).ping()
except redis.ConnectionError:
errors = _('Failed to connect ot Redis')
errors = _('Failed to connect to Redis')
self.save_health_data(awx_application_version, get_cpu_count(), get_mem_in_bytes(), update_last_seen=True, errors=errors)
@@ -384,6 +438,20 @@ def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs
@receiver(post_save, sender=Instance)
def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
if settings.IS_K8S and instance.node_type in (Instance.Types.EXECUTION,):
if instance.node_state == Instance.States.DEPROVISIONING:
from awx.main.tasks.receptor import remove_deprovisioned_node # prevents circular import
# wait for jobs on the node to complete, then delete the
# node and kick off write_receptor_config
connection.on_commit(lambda: remove_deprovisioned_node.apply_async([instance.hostname]))
if instance.node_state == Instance.States.INSTALLED:
from awx.main.tasks.receptor import write_receptor_config # prevents circular import
# broadcast to all control instances to update their receptor configs
connection.on_commit(lambda: write_receptor_config.apply_async(queue='tower_broadcast_all'))
if created or instance.has_policy_changes():
schedule_policy_task()
@@ -430,3 +498,58 @@ class InventoryInstanceGroupMembership(models.Model):
default=None,
db_index=True,
)
class JobLaunchConfigInstanceGroupMembership(models.Model):
joblaunchconfig = models.ForeignKey('JobLaunchConfig', on_delete=models.CASCADE)
instancegroup = models.ForeignKey('InstanceGroup', on_delete=models.CASCADE)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)
class ScheduleInstanceGroupMembership(models.Model):
schedule = models.ForeignKey('Schedule', on_delete=models.CASCADE)
instancegroup = models.ForeignKey('InstanceGroup', on_delete=models.CASCADE)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)
class WorkflowJobTemplateNodeBaseInstanceGroupMembership(models.Model):
workflowjobtemplatenode = models.ForeignKey('WorkflowJobTemplateNode', on_delete=models.CASCADE)
instancegroup = models.ForeignKey('InstanceGroup', on_delete=models.CASCADE)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)
class WorkflowJobNodeBaseInstanceGroupMembership(models.Model):
workflowjobnode = models.ForeignKey('WorkflowJobNode', on_delete=models.CASCADE)
instancegroup = models.ForeignKey('InstanceGroup', on_delete=models.CASCADE)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)
class WorkflowJobInstanceGroupMembership(models.Model):
workflowjobnode = models.ForeignKey('WorkflowJob', on_delete=models.CASCADE)
instancegroup = models.ForeignKey('InstanceGroup', on_delete=models.CASCADE)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)

View File

@@ -63,7 +63,7 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
an inventory source contains lists and hosts.
"""
FIELDS_TO_PRESERVE_AT_COPY = ['hosts', 'groups', 'instance_groups']
FIELDS_TO_PRESERVE_AT_COPY = ['hosts', 'groups', 'instance_groups', 'prevent_instance_group_fallback']
KIND_CHOICES = [
('', _('Hosts have a direct link to this inventory.')),
('smart', _('Hosts for inventory generated using the host_filter property.')),
@@ -175,6 +175,16 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
related_name='inventory_labels',
help_text=_('Labels associated with this inventory.'),
)
prevent_instance_group_fallback = models.BooleanField(
default=False,
help_text=(
"If enabled, the inventory will prevent adding any organization "
"instance groups to the list of preferred instances groups to run "
"associated job templates on."
"If this setting is enabled and you provided an empty list, the global instance "
"groups will be applied."
),
)
def get_absolute_url(self, request=None):
return reverse('api:inventory_detail', kwargs={'pk': self.pk}, request=request)
@@ -236,6 +246,12 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
raise ParseError(_('Slice number must be 1 or higher.'))
return (number, step)
def get_sliced_hosts(self, host_queryset, slice_number, slice_count):
if slice_count > 1 and slice_number > 0:
offset = slice_number - 1
host_queryset = host_queryset[offset::slice_count]
return host_queryset
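A quick sketch of the slicing arithmetic on a plain list (hypothetical hosts); slice_number is 1-indexed, and each slice takes every slice_count-th element starting from its offset:

hosts = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'h7']
slice_count = 3
for slice_number in (1, 2, 3):
    offset = slice_number - 1
    print(slice_number, hosts[offset::slice_count])
# 1 ['h1', 'h4', 'h7']
# 2 ['h2', 'h5']
# 3 ['h3', 'h6']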
def get_script_data(self, hostvars=False, towervars=False, show_all=False, slice_number=1, slice_count=1):
hosts_kw = dict()
if not show_all:
@@ -243,10 +259,8 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
fetch_fields = ['name', 'id', 'variables', 'inventory_id']
if towervars:
fetch_fields.append('enabled')
hosts = self.hosts.filter(**hosts_kw).order_by('name').only(*fetch_fields)
if slice_count > 1 and slice_number > 0:
offset = slice_number - 1
hosts = hosts[offset::slice_count]
host_queryset = self.hosts.filter(**hosts_kw).order_by('name').only(*fetch_fields)
hosts = self.get_sliced_hosts(host_queryset, slice_number, slice_count)
data = dict()
all_group = data.setdefault('all', dict())
@@ -337,9 +351,12 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
else:
active_inventory_sources = self.inventory_sources.filter(source__in=CLOUD_INVENTORY_SOURCES)
failed_inventory_sources = active_inventory_sources.filter(last_job_failed=True)
total_hosts = active_hosts.count()
# if total_hosts has changed, set update_task_impact to True
update_task_impact = total_hosts != self.total_hosts
computed_fields = {
'has_active_failures': bool(failed_hosts.count()),
'total_hosts': active_hosts.count(),
'total_hosts': total_hosts,
'hosts_with_active_failures': failed_hosts.count(),
'total_groups': active_groups.count(),
'has_inventory_sources': bool(active_inventory_sources.count()),
@@ -357,6 +374,14 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
computed_fields.pop(field)
if computed_fields:
iobj.save(update_fields=computed_fields.keys())
if update_task_impact:
# if total hosts count has changed, re-calculate task_impact for any
# job that is still in pending for this inventory, since task_impact
# is cached on task creation and used in task management system
tasks = self.jobs.filter(status="pending")
for t in tasks:
t.task_impact = t._get_task_impact()
UnifiedJob.objects.bulk_update(tasks, ['task_impact'])
logger.debug("Finished updating inventory computed fields, pk={0}, in " "{1:.3f} seconds".format(self.pk, time.time() - start_time))
def websocket_emit_status(self, status):
@@ -985,22 +1010,11 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
default=None,
null=True,
)
scm_last_revision = models.CharField(
max_length=1024,
blank=True,
default='',
editable=False,
)
update_on_project_update = models.BooleanField(
default=False,
help_text=_(
'This field is deprecated and will be removed in a future release. '
'In future release, functionality will be migrated to source project update_on_launch.'
),
)
update_on_launch = models.BooleanField(
default=False,
)
update_cache_timeout = models.PositiveIntegerField(
default=0,
)
@@ -1038,14 +1052,6 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
self.name = 'inventory source (%s)' % replace_text
if 'name' not in update_fields:
update_fields.append('name')
-        # Reset revision if SCM source has changed parameters
-        if self.source == 'scm' and not is_new_instance:
-            before_is = self.__class__.objects.get(pk=self.pk)
-            if before_is.source_path != self.source_path or before_is.source_project_id != self.source_project_id:
-                # Reset the scm_revision if file changed to force update
-                self.scm_last_revision = ''
-                if 'scm_last_revision' not in update_fields:
-                    update_fields.append('scm_last_revision')
# Do the actual save.
super(InventorySource, self).save(*args, **kwargs)
@@ -1054,10 +1060,6 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
if replace_text in self.name:
self.name = self.name.replace(replace_text, str(self.pk))
super(InventorySource, self).save(update_fields=['name'])
-        if self.source == 'scm' and is_new_instance and self.update_on_project_update:
-            # Schedule a new Project update if one is not already queued
-            if self.source_project and not self.source_project.project_updates.filter(status__in=['new', 'pending', 'waiting']).exists():
-                self.update()
if not getattr(_inventory_updates, 'is_updating', False):
if self.inventory is not None:
self.inventory.update_computed_fields()
@@ -1147,25 +1149,6 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
)
return dict(error=list(error_notification_templates), started=list(started_notification_templates), success=list(success_notification_templates))
-    def clean_update_on_project_update(self):
-        if (
-            self.update_on_project_update is True
-            and self.source == 'scm'
-            and InventorySource.objects.filter(Q(inventory=self.inventory, update_on_project_update=True, source='scm') & ~Q(id=self.id)).exists()
-        ):
-            raise ValidationError(_("More than one SCM-based inventory source with update on project update per-inventory not allowed."))
-        return self.update_on_project_update
-    def clean_update_on_launch(self):
-        if self.update_on_project_update is True and self.source == 'scm' and self.update_on_launch is True:
-            raise ValidationError(
-                _(
-                    "Cannot update SCM-based inventory source on launch if set to update on project update. "
-                    "Instead, configure the corresponding source project to update on launch."
-                )
-            )
-        return self.update_on_launch
def clean_source_path(self):
if self.source != 'scm' and self.source_path:
raise ValidationError(_("Cannot set source_path if not SCM type."))
@@ -1218,6 +1201,14 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
default=None,
null=True,
)
+    scm_revision = models.CharField(
+        max_length=1024,
+        blank=True,
+        default='',
+        editable=False,
+        verbose_name=_('SCM Revision'),
+        help_text=_('The SCM revision from the project used for this inventory update. Only applicable to inventories sourced from SCM.'),
+    )
@property
def is_container_group_task(self):
@@ -1262,8 +1253,7 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
return UnpartitionedInventoryUpdateEvent
return InventoryUpdateEvent
-    @property
-    def task_impact(self):
+    def _get_task_impact(self):
         return 1
# InventoryUpdate credential required
@@ -1288,26 +1278,23 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
@property
def preferred_instance_groups(self):
-        if self.inventory_source.inventory is not None and self.inventory_source.inventory.organization is not None:
-            organization_groups = [x for x in self.inventory_source.inventory.organization.instance_groups.all()]
-        else:
-            organization_groups = []
+        selected_groups = []
         if self.inventory_source.inventory is not None:
-            inventory_groups = [x for x in self.inventory_source.inventory.instance_groups.all()]
-        else:
-            inventory_groups = []
-        selected_groups = inventory_groups + organization_groups
+            # Add the inventory sources IG to the selected IGs first
+            for instance_group in self.inventory_source.inventory.instance_groups.all():
+                selected_groups.append(instance_group)
+            # If the inventory allows for fallback and we have an organization then also append the orgs IGs to the end of the list
+            if (
+                not getattr(self.inventory_source.inventory, 'prevent_instance_group_fallback', False)
+                and self.inventory_source.inventory.organization is not None
+            ):
+                for instance_group in self.inventory_source.inventory.organization.instance_groups.all():
+                    selected_groups.append(instance_group)
if not selected_groups:
return self.global_instance_groups
return selected_groups
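The net effect of the flag, reduced to a toy function (group names are hypothetical): inventory groups come first, organization groups are appended only when fallback is allowed, and an empty result falls back to the global instance groups, exactly as the help text promises:

def pick_groups(inventory_groups, org_groups, prevent_fallback, global_groups):
    selected = list(inventory_groups)
    if not prevent_fallback:
        selected += org_groups
    return selected or global_groups

print(pick_groups([], [], True, ['default']))                   # ['default']
print(pick_groups(['inv-ig'], ['org-ig'], True, ['default']))   # ['inv-ig']
print(pick_groups(['inv-ig'], ['org-ig'], False, ['default']))  # ['inv-ig', 'org-ig']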
def cancel(self, job_explanation=None, is_chain=False):
res = super(InventoryUpdate, self).cancel(job_explanation=job_explanation, is_chain=is_chain)
if res:
if self.launch_type != 'scm' and self.source_project_update:
self.source_project_update.cancel(job_explanation=job_explanation)
return res
class CustomInventoryScript(CommonModelNameNotUnique, ResourceMixin):
class Meta:


@@ -43,8 +43,8 @@ from awx.main.models.notifications import (
NotificationTemplate,
JobNotificationMixin,
)
-from awx.main.utils import parse_yaml_or_json, getattr_dne, NullablePromptPseudoField
-from awx.main.fields import ImplicitRoleField, AskForField, JSONBlob
+from awx.main.utils import parse_yaml_or_json, getattr_dne, NullablePromptPseudoField, polymorphic
+from awx.main.fields import ImplicitRoleField, AskForField, JSONBlob, OrderedManyToManyField
from awx.main.models.mixins import (
ResourceMixin,
SurveyJobTemplateMixin,
@@ -130,8 +130,7 @@ class JobOptions(BaseModel):
)
)
)
-    job_tags = models.CharField(
-        max_length=1024,
+    job_tags = models.TextField(
         blank=True,
         default='',
     )
@@ -204,7 +203,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
playbook) to an inventory source with a given credential.
"""
-    FIELDS_TO_PRESERVE_AT_COPY = ['labels', 'instance_groups', 'credentials', 'survey_spec']
+    FIELDS_TO_PRESERVE_AT_COPY = ['labels', 'instance_groups', 'credentials', 'survey_spec', 'prevent_instance_group_fallback']
FIELDS_TO_DISCARD_AT_COPY = ['vault_credential', 'credential']
SOFT_UNIQUE_TOGETHER = [('polymorphic_ctype', 'name', 'organization')]
@@ -228,15 +227,6 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
blank=True,
default=False,
)
ask_limit_on_launch = AskForField(
blank=True,
default=False,
)
ask_tags_on_launch = AskForField(blank=True, default=False, allows_field='job_tags')
ask_skip_tags_on_launch = AskForField(
blank=True,
default=False,
)
ask_job_type_on_launch = AskForField(
blank=True,
default=False,
@@ -245,12 +235,27 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
blank=True,
default=False,
)
ask_inventory_on_launch = AskForField(
ask_credential_on_launch = AskForField(blank=True, default=False, allows_field='credentials')
ask_execution_environment_on_launch = AskForField(
blank=True,
default=False,
)
ask_forks_on_launch = AskForField(
blank=True,
default=False,
)
ask_job_slice_count_on_launch = AskForField(
blank=True,
default=False,
)
ask_timeout_on_launch = AskForField(
blank=True,
default=False,
)
ask_instance_groups_on_launch = AskForField(
blank=True,
default=False,
)
ask_credential_on_launch = AskForField(blank=True, default=False, allows_field='credentials')
ask_scm_branch_on_launch = AskForField(blank=True, default=False, allows_field='scm_branch')
job_slice_count = models.PositiveIntegerField(
blank=True,
default=1,
@@ -269,6 +274,15 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
'admin_role',
],
)
+    prevent_instance_group_fallback = models.BooleanField(
+        default=False,
+        help_text=(
+            "If enabled, the job template will prevent adding any inventory or organization "
+            "instance groups to the list of preferred instance groups to run on. "
+            "If this setting is enabled and you provide an empty list, the global instance "
+            "groups will be applied."
+        ),
+    )
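A hypothetical usage sketch of the new flag over the REST API (the host, template id, and token below are placeholders, not values from this change):

import requests

resp = requests.patch(
    "https://awx.example.com/api/v2/job_templates/42/",
    headers={"Authorization": "Bearer <TOKEN>"},
    json={"prevent_instance_group_fallback": True},
)
resp.raise_for_status()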
@classmethod
def _get_unified_job_class(cls):
@@ -277,7 +291,17 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
@classmethod
def _get_unified_job_field_names(cls):
         return set(f.name for f in JobOptions._meta.fields) | set(
-            ['name', 'description', 'organization', 'survey_passwords', 'labels', 'credentials', 'job_slice_number', 'job_slice_count', 'execution_environment']
+            [
+                'name',
+                'description',
+                'organization',
+                'survey_passwords',
+                'labels',
+                'credentials',
+                'job_slice_number',
+                'job_slice_count',
+                'execution_environment',
+            ]
         )
@property
@@ -315,10 +339,13 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
actual_inventory = self.inventory
if self.ask_inventory_on_launch and 'inventory' in kwargs:
actual_inventory = kwargs['inventory']
+        actual_slice_count = self.job_slice_count
+        if self.ask_job_slice_count_on_launch and 'job_slice_count' in kwargs:
+            actual_slice_count = kwargs['job_slice_count']
         if actual_inventory:
-            return min(self.job_slice_count, actual_inventory.hosts.count())
+            return min(actual_slice_count, actual_inventory.hosts.count())
         else:
-            return self.job_slice_count
+            return actual_slice_count
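Restated as a standalone function (names are illustrative): the effective slice count is the prompted value when prompting is allowed, otherwise the template's, and it is capped by the host count whenever an inventory is known:

def effective_slice_ct(template_count, prompted_count, ask_on_launch, host_count=None):
    count = prompted_count if (ask_on_launch and prompted_count is not None) else template_count
    return count if host_count is None else min(count, host_count)

print(effective_slice_ct(5, 3, True, host_count=2))      # 2: not enough hosts to slice further
print(effective_slice_ct(5, None, True, host_count=10))  # 5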
def save(self, *args, **kwargs):
update_fields = kwargs.get('update_fields', [])
@@ -426,10 +453,15 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
field = self._meta.get_field(field_name)
if isinstance(field, models.ManyToManyField):
-                old_value = set(old_value.all())
-                new_value = set(kwargs[field_name]) - old_value
-                if not new_value:
-                    continue
+                if field_name == 'instance_groups':
+                    # Instance groups are ordered so we can't make a set out of them
+                    old_value = old_value.all()
+                elif field_name == 'credentials':
+                    # Credentials have a weird pattern because of how they are layered
+                    old_value = set(old_value.all())
+                    new_value = set(kwargs[field_name]) - old_value
+                    if not new_value:
+                        continue
if new_value == old_value:
# no-op case: Fields the same as template's value
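The reason for the instance_groups special case, in two lines: a set comparison would hide a pure reordering, which matters for an ordered relation (group names below are hypothetical):

old = ['controlplane', 'default']
new = ['default', 'controlplane']
print(set(old) == set(new))  # True  -- a set comparison misses the reorder
print(old == new)            # False -- a list comparison preserves order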
@@ -450,6 +482,10 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
rejected_data[field_name] = new_value
errors_dict[field_name] = _('Project does not allow override of branch.')
continue
+                elif field_name == 'job_slice_count' and (new_value > 1) and (self.get_effective_slice_ct(kwargs) <= 1):
+                    rejected_data[field_name] = new_value
+                    errors_dict[field_name] = _('Job inventory does not have enough hosts for slicing')
+                    continue
# accepted prompt
prompted_data[field_name] = new_value
else:
@@ -601,6 +637,19 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
def get_ui_url(self):
return urljoin(settings.TOWER_URL_BASE, "/#/jobs/playbook/{}".format(self.pk))
+    def _set_default_dependencies_processed(self):
+        """
+        This sets the initial value of dependencies_processed
+        and here we use this as a shortcut to avoid the DependencyManager for jobs that do not need it
+        """
+        if (not self.project) or self.project.scm_update_on_launch:
+            self.dependencies_processed = False
+        elif (not self.inventory) or self.inventory.inventory_sources.filter(update_on_launch=True).exists():
+            self.dependencies_processed = False
+        else:
+            # No dependencies to process
+            self.dependencies_processed = True
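For the common case where both a project and an inventory exist, the decision reduces to one boolean; a simplified sketch (the real method also treats a missing project or inventory as needing the dependency manager):

def dependencies_processed(project_updates_on_launch, any_inv_source_updates_on_launch):
    # Dependencies are pre-processed only when nothing would need an implicit update
    return not (project_updates_on_launch or any_inv_source_updates_on_launch)

print(dependencies_processed(False, False))  # True: DependencyManager can be skipped
print(dependencies_processed(True, False))   # False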
@property
def event_class(self):
if self.has_unpartitioned_events:
@@ -645,8 +694,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
raise ParseError(_('{status_value} is not a valid status option.').format(status_value=status))
return self._get_hosts(**kwargs)
-    @property
-    def task_impact(self):
+    def _get_task_impact(self):
if self.launch_type == 'callback':
count_hosts = 2
else:
@@ -744,25 +792,27 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
return "$hidden due to Ansible no_log flag$"
return artifacts
+    def get_effective_artifacts(self, **kwargs):
+        """Return unified job artifacts (from set_stats) to pass downstream in workflows"""
+        if isinstance(self.artifacts, dict):
+            return self.artifacts
+        return {}
@property
def is_container_group_task(self):
return bool(self.instance_group and self.instance_group.is_container_group)
@property
def preferred_instance_groups(self):
-        if self.organization is not None:
-            organization_groups = [x for x in self.organization.instance_groups.all()]
-        else:
-            organization_groups = []
-        if self.inventory is not None:
-            inventory_groups = [x for x in self.inventory.instance_groups.all()]
-        else:
-            inventory_groups = []
-        if self.job_template is not None:
-            template_groups = [x for x in self.job_template.instance_groups.all()]
-        else:
-            template_groups = []
-        selected_groups = template_groups + inventory_groups + organization_groups
+        # If the user specified instance groups those will be handled by the unified_job.create_unified_job
+        # This function handles only the defaults for a template w/o user specification
+        selected_groups = []
+        for obj_type in ['job_template', 'inventory', 'organization']:
+            if getattr(self, obj_type) is not None:
+                for instance_group in getattr(self, obj_type).instance_groups.all():
+                    selected_groups.append(instance_group)
+                if getattr(getattr(self, obj_type), 'prevent_instance_group_fallback', False):
+                    break
if not selected_groups:
return self.global_instance_groups
return selected_groups
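The same walk as a self-contained sketch: levels are consulted in job-template → inventory → organization order, each existing level contributes its groups, and a level that sets prevent_instance_group_fallback stops the walk even if it contributed nothing, which is how an empty pinned list ends up on the global groups:

def preferred_groups(levels, global_groups):
    # levels: (instance_groups, prevent_fallback) tuples, or None if the object is absent
    selected = []
    for level in levels:
        if level is None:
            continue
        groups, prevent_fallback = level
        selected.extend(groups)
        if prevent_fallback:
            break
    return selected or global_groups

jt, inv, org = (['jt-ig'], True), (['inv-ig'], False), (['org-ig'], False)
print(preferred_groups([jt, inv, org], ['default']))          # ['jt-ig']: fallback cut off
print(preferred_groups([([], True), inv, org], ['default']))  # ['default']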
@@ -797,7 +847,8 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
def _get_inventory_hosts(self, only=['name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id']):
if not self.inventory:
return []
-        return self.inventory.hosts.only(*only)
+        host_queryset = self.inventory.hosts.only(*only)
+        return self.inventory.get_sliced_hosts(host_queryset, self.job_slice_number, self.job_slice_count)
def start_job_fact_cache(self, destination, modification_times, timeout=None):
self.log_lifecycle("start_job_fact_cache")
@@ -842,7 +893,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
continue
host.ansible_facts = ansible_facts
host.ansible_facts_modified = now()
-                host.save()
+                host.save(update_fields=['ansible_facts', 'ansible_facts_modified'])
system_tracking_logger.info(
'New fact for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)),
extra=dict(
@@ -888,10 +939,36 @@ class LaunchTimeConfigBase(BaseModel):
# This is a solution to the nullable CharField problem, specific to prompting
char_prompts = JSONBlob(default=dict, blank=True)
-    def prompts_dict(self, display=False):
+    # Define fields that are not really fields, but alias to char_prompts lookups
+    limit = NullablePromptPseudoField('limit')
+    scm_branch = NullablePromptPseudoField('scm_branch')
+    job_tags = NullablePromptPseudoField('job_tags')
+    skip_tags = NullablePromptPseudoField('skip_tags')
+    diff_mode = NullablePromptPseudoField('diff_mode')
+    job_type = NullablePromptPseudoField('job_type')
+    verbosity = NullablePromptPseudoField('verbosity')
+    forks = NullablePromptPseudoField('forks')
+    job_slice_count = NullablePromptPseudoField('job_slice_count')
+    timeout = NullablePromptPseudoField('timeout')
+    # NOTE: additional fields are assumed to exist but must be defined in subclasses
+    # due to technical limitations
+    SUBCLASS_FIELDS = (
+        'instance_groups',  # needs a through model defined
+        'extra_vars',  # alternates between extra_vars and extra_data
+        'credentials',  # already a unified job and unified JT field
+        'labels',  # already a unified job and unified JT field
+        'execution_environment',  # already a unified job and unified JT field
+    )
+    def prompts_dict(self, display=False, for_cls=None):
         data = {}
+        if for_cls:
+            cls = for_cls
+        else:
+            cls = JobTemplate
         # Some types may have different prompts, but always subset of JT prompts
-        for prompt_name in JobTemplate.get_ask_mapping().keys():
+        for prompt_name in cls.get_ask_mapping().keys():
try:
field = self._meta.get_field(prompt_name)
except FieldDoesNotExist:
@@ -899,18 +976,23 @@ class LaunchTimeConfigBase(BaseModel):
if isinstance(field, models.ManyToManyField):
if not self.pk:
continue # unsaved object can't have related many-to-many
-                prompt_val = set(getattr(self, prompt_name).all())
-                if len(prompt_val) > 0:
-                    data[prompt_name] = prompt_val
+                prompt_values = list(getattr(self, prompt_name).all())
+                # Many to manys can't distinguish between None and []
+                # Because of this, from a config perspective, we assume [] is none and we don't save [] into the config
+                if len(prompt_values) > 0:
+                    data[prompt_name] = prompt_values
             elif prompt_name == 'extra_vars':
                 if self.extra_vars:
+                    extra_vars = {}
                     if display:
-                        data[prompt_name] = self.display_extra_vars()
+                        extra_vars = self.display_extra_vars()
                     else:
-                        data[prompt_name] = self.extra_vars
+                        extra_vars = self.extra_vars
                     # Depending on model, field type may save and return as string
-                    if isinstance(data[prompt_name], str):
-                        data[prompt_name] = parse_yaml_or_json(data[prompt_name])
+                    if isinstance(extra_vars, str):
+                        extra_vars = parse_yaml_or_json(extra_vars)
+                    if extra_vars:
+                        data['extra_vars'] = extra_vars
                 if self.survey_passwords and not display:
                     data['survey_passwords'] = self.survey_passwords
else:
@@ -920,15 +1002,6 @@ class LaunchTimeConfigBase(BaseModel):
return data
-for field_name in JobTemplate.get_ask_mapping().keys():
-    if field_name == 'extra_vars':
-        continue
-    try:
-        LaunchTimeConfigBase._meta.get_field(field_name)
-    except FieldDoesNotExist:
-        setattr(LaunchTimeConfigBase, field_name, NullablePromptPseudoField(field_name))
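For orientation, a hypothetical example of the shape prompts_dict() might return for a saved launch configuration: scalar prompts come from the char_prompts-backed aliases, many-to-many prompts appear only when non-empty, and extra_vars is parsed into a real dict (all values below are invented):

example_prompts = {
    'limit': 'webservers',
    'job_type': 'check',
    'instance_groups': ['default'],                 # omitted entirely when the list is empty
    'extra_vars': {'version': '1.2.3'},             # parsed from YAML/JSON if stored as a string
    'survey_passwords': {'secret': '$encrypted$'},  # included only when display=False
}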
class LaunchTimeConfig(LaunchTimeConfigBase):
"""
Common model for all objects that save details of a saved launch config
@@ -947,8 +1020,18 @@ class LaunchTimeConfig(LaunchTimeConfigBase):
blank=True,
)
)
-    # Credentials needed for non-unified job / unified JT models
+    # Fields needed for non-unified job / unified JT models, because they are defined on unified models
     credentials = models.ManyToManyField('Credential', related_name='%(class)ss')
+    labels = models.ManyToManyField('Label', related_name='%(class)s_labels')
+    execution_environment = models.ForeignKey(
+        'ExecutionEnvironment',
+        null=True,
+        blank=True,
+        default=None,
+        on_delete=polymorphic.SET_NULL,
+        related_name='%(class)s_as_prompt',
+        help_text="The container image to be used for execution.",
+    )
@property
def extra_vars(self):
@@ -992,6 +1075,11 @@ class JobLaunchConfig(LaunchTimeConfig):
editable=False,
)
+    # Instance Groups needed for non-unified job / unified JT models
+    instance_groups = OrderedManyToManyField(
+        'InstanceGroup', related_name='%(class)ss', blank=True, editable=False, through='JobLaunchConfigInstanceGroupMembership'
+    )
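OrderedManyToManyField is an AWX custom field; a rough guess at the underlying pattern (illustrative Django only, not AWX's actual through model) is a membership row that carries a sort position:

from django.db import models

class JobLaunchConfigInstanceGroupMembership(models.Model):
    # Hypothetical sketch: one row per (config, group) pair, ordered by position
    joblaunchconfig = models.ForeignKey('JobLaunchConfig', on_delete=models.CASCADE)
    instancegroup = models.ForeignKey('InstanceGroup', on_delete=models.CASCADE)
    position = models.PositiveIntegerField(null=True, default=None, db_index=True)

    class Meta:
        ordering = ('position',)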
def has_user_prompts(self, template):
"""
Returns True if any fields exist in the launch config that are
@@ -1208,6 +1296,9 @@ class SystemJob(UnifiedJob, SystemJobOptions, JobNotificationMixin):
extra_vars_dict = VarsDictProperty('extra_vars', True)
+    def _set_default_dependencies_processed(self):
+        self.dependencies_processed = True
@classmethod
def _get_parent_field_name(cls):
return 'system_job_template'
@@ -1233,8 +1324,7 @@ class SystemJob(UnifiedJob, SystemJobOptions, JobNotificationMixin):
return UnpartitionedSystemJobEvent
return SystemJobEvent
-    @property
-    def task_impact(self):
+    def _get_task_impact(self):
         return 5
@property


@@ -10,6 +10,8 @@ from awx.api.versioning import reverse
from awx.main.models.base import CommonModelNameNotUnique
from awx.main.models.unified_jobs import UnifiedJobTemplate, UnifiedJob
from awx.main.models.inventory import Inventory
+from awx.main.models.schedules import Schedule
+from awx.main.models.workflow import WorkflowJobTemplateNode, WorkflowJobNode
__all__ = ('Label',)
@@ -34,16 +36,22 @@ class Label(CommonModelNameNotUnique):
def get_absolute_url(self, request=None):
return reverse('api:label_detail', kwargs={'pk': self.pk}, request=request)
@staticmethod
def get_orphaned_labels():
return Label.objects.filter(organization=None, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True)
     def is_detached(self):
-        return Label.objects.filter(id=self.id, unifiedjob_labels__isnull=True, unifiedjobtemplate_labels__isnull=True, inventory_labels__isnull=True).exists()
+        return Label.objects.filter(
+            id=self.id,
+            unifiedjob_labels__isnull=True,
+            unifiedjobtemplate_labels__isnull=True,
+            inventory_labels__isnull=True,
+            schedule_labels__isnull=True,
+            workflowjobtemplatenode_labels__isnull=True,
+            workflowjobnode_labels__isnull=True,
+        ).exists()
     def is_candidate_for_detach(self):
-        c1 = UnifiedJob.objects.filter(labels__in=[self.id]).count()
-        c2 = UnifiedJobTemplate.objects.filter(labels__in=[self.id]).count()
-        c3 = Inventory.objects.filter(labels__in=[self.id]).count()
-        return (c1 + c2 + c3 - 1) == 0
+        count = UnifiedJob.objects.filter(labels__in=[self.id]).count()  # Both Jobs and WFJobs
+        count += UnifiedJobTemplate.objects.filter(labels__in=[self.id]).count()  # Both JTs and WFJT
+        count += Inventory.objects.filter(labels__in=[self.id]).count()
+        count += Schedule.objects.filter(labels__in=[self.id]).count()
+        count += WorkflowJobTemplateNode.objects.filter(labels__in=[self.id]).count()
+        count += WorkflowJobNode.objects.filter(labels__in=[self.id]).count()
+        return (count - 1) == 0
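The counting rule in miniature: a label can be detached from the caller when the caller is the only remaining attachment, i.e. the total across all six querysets minus the one being removed is zero:

def is_candidate_for_detach(attachment_counts):
    # attachment_counts: per-model usage counts, including the object detaching
    return sum(attachment_counts) - 1 == 0

print(is_candidate_for_detach([1, 0, 0, 0, 0, 0]))  # True: only the caller holds it
print(is_candidate_for_detach([1, 2, 0, 0, 0, 0]))  # False: still used elsewhere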
