Compare commits


367 Commits

Author SHA1 Message Date
Michael Abashian
bbb436ddbb Merge pull request #13872 from mabashian/remove-codemirror
Removes unused codemirror dependency
2023-04-18 15:27:12 -04:00
Michael Abashian
abf915fafe Removes more unnecessary licenses 2023-04-18 15:06:19 -04:00
Michael Abashian
481814991e Remove codemirror licenses 2023-04-18 15:06:18 -04:00
Michael Abashian
e94ee8f8d7 Removes unused codemirror dependency 2023-04-18 15:06:18 -04:00
John Westcott IV
e660f62a59 Merge pull request #13875 from john-westcott-iv/fix_assumed_databases
Fixing issue where we assumed DATABASES would be defined
2023-04-18 14:21:17 -04:00
Keith Grant
a2a04002b6 Merge pull request #13869 from keithjgrant/persistent-filter-race-condition
Rework PersistentFilter to avoid double API call
2023-04-18 11:13:19 -07:00
John Westcott IV
93117c8264 Fixing issue where we assumed DATABASES would be defined 2023-04-18 13:57:17 -04:00
Keith J. Grant
b8118ac86a remove outdated tests 2023-04-18 10:04:28 -07:00
Keith J. Grant
c08f1ddcaa rework PersistentFilter to avoid double API call 2023-04-18 10:04:28 -07:00
John Westcott IV
32f7dfece1 Changing check for all in awx.awx.export (#13854) 2023-04-18 10:29:25 -03:00
Alan Rominger
886ba1ea7f Merge pull request #13860 from AlanCoding/move_test
Move integration tests to be consistent with the rest
2023-04-14 10:36:44 -04:00
Alex Corey
b128f05a37 Merge pull request #11076 from tongtie/fix-choose-project-scmType-manual-international
fix: Internationalization causes the project to be unable to choose manual option
2023-04-14 09:57:08 -04:00
Alan Rominger
36c9c9cdc4 Move integration tests to be consistent with the rest 2023-04-14 09:51:53 -04:00
Alan Rominger
342e9197b8 Customize application_name for different connections in dispatcher service (#13074)
* Introduce new method in settings, import in-line w NOQA mark

* Further refine the app_name to use shorter service names like dispatcher

* Clean up listener logic, change some names
2023-04-13 22:36:36 -04:00
John Westcott IV
2205664fb4 Merge pull request #13857 from john-westcott-iv/add_tacacs_plus
Adding tacacs+ container for testing
2023-04-13 16:15:32 -04:00
John Westcott IV
7cdf471894 Fix sat instance var (#13851)
* add the fallback satellite_instance_var_id

* Removing unnecessary whitespace

---------

Co-authored-by: Nikhil Jain <jainnikhil30@gmail.com>
2023-04-13 17:14:06 -03:00
John Westcott IV
8719648ff5 Adding tacacs+ container for testing 2023-04-13 15:02:08 -04:00
Dien Nguyen
c1455ee125 bugfix: add scm_branch to optional_args for workflow_launch (#13254)
* add scm_branch to optional_args

* add in limits

* Update workflow_launch.py

remove json from import to pass linting.

---------

Co-authored-by: dien nguyen <nguyen.d@gmail.comn>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-04-13 15:36:38 -03:00
Joe Garcia
11d5e5c7d4 Fixes #13402 allow user defined key retrieval from CYBR (#13411)
* Fixed #13402 allow user defined key retrieval from CYBR

* Add default value to object_property

* Raise ValueError if object_property not in response

* Raise KeyError instead of ValueError
2023-04-13 13:11:37 -04:00
John Westcott IV
fba4e06c50 Adding basic validation for local passwords (#13789)
* Adding basic validation for local passwords

* Adding edit screen

* Fixing tests
2023-04-13 10:02:52 -03:00
Hao Liu
12a4c301b8 Merge pull request #13721 from sscheib-rh/feat-add_secret_field_dsv_lookup
Add missing filtering mechanism for the Thycotic Devops Vault credential lookup
2023-04-13 08:58:59 -04:00
Hao Liu
8a1cdf859e Merge pull request #12627 from vician/tss-domain
Added domain entry and authorizer for TSS
2023-04-12 16:33:46 -04:00
Steffen Scheib
2f68317e5f Fixing api-lint error 2023-04-12 16:07:00 -04:00
Steffen Scheib
0f4bac7aed Add missing filtering mechanism for the Thycotic Devops Vault credential lookup 2023-04-12 16:07:00 -04:00
John Westcott IV
e42461d96f Merge pull request #13807 from sean-m-sullivan/credential_doc
update credential list examples in awx collection
2023-04-12 15:40:06 -04:00
sean-m-sullivan
9b716235a2 update credential list examples in awx collection 2023-04-12 15:19:11 -04:00
John Westcott IV
eb704dbaad Merge pull request #13838 from john-westcott-iv/oweel_additional_tests
Added more tests for different modules
2023-04-12 13:14:37 -04:00
John Westcott IV
9b390a624f Merge pull request #13831 from slemrmartin/analytics-api-permissions
Analytics API: Permissions for System Auditor
2023-04-12 10:37:26 -04:00
Martin Slemr
0046ce5e69 Analytics API: Permissions for System Auditor 2023-04-12 15:40:12 +02:00
Hao Liu
b80d0ae85b Merge pull request #13840 from AlanCoding/one_less_connection
Get rid of 1 perpetually unused connection in our app
2023-04-12 09:30:51 -04:00
Hao Liu
1c0142f75c Merge pull request #13841 from AlanCoding/tower_processes
Add run-clear-cache to tower-processes for auto-reload
2023-04-12 08:54:34 -04:00
Alan Rominger
1ea6d15ee3 Add run-clear-cache to tower-processes for auto-reload 2023-04-11 17:05:41 -04:00
Alan Rominger
3cd5d59d87 Get rid of 1 perpetually unused connection in our app 2023-04-11 17:04:59 -04:00
Alexander Komarov
d32a5905e8 Remove unused imports 2023-04-11 16:23:03 -04:00
Alexander Komarov
e53a5da91e Add more tests for different modules 2023-04-11 16:21:50 -04:00
Hao Liu
1a56272eaf Merge pull request #13767 from Ladas/analytics_export_subscription_id
Analytics export other subs attrs
2023-04-11 15:55:26 -04:00
John Westcott IV
3975028bd4 Merge pull request #12952 from sashashura/patch-1
ci: workflows security hardening
2023-04-11 15:51:07 -04:00
Seth Foster
1c51ef8a69 Store serialized metrics locally (#13833) 2023-04-11 15:06:48 -04:00
Michael Abashian
6b0fe8d137 Merge pull request #13766 from tanganellilore/fix_lang
Fix locale UI error
2023-04-11 14:51:55 -04:00
Michael Abashian
23f3ab6a66 Merge branch 'devel' into fix_lang 2023-04-11 11:41:12 -04:00
Seth Foster
ffa3cd1fff Add troubleshooting to execution node docs (#13826) 2023-04-11 10:58:11 -04:00
John Westcott IV
236de7e209 Merge pull request #13827 from john-westcott-iv/remove_future_pin
Unpinning python library for future
2023-04-11 08:16:53 -04:00
Ladislav Smola
4e5cce8d15 Analytics export other subs attrs
We'll also export subscription_id, since pool_id is not
enough in certain cases.

Then also export usage and account number
2023-04-10 21:47:32 -04:00
John Westcott IV
6c9e2502a5 Unpinning future 2023-04-10 12:25:15 -04:00
Michael Abashian
0b1b866128 Fixes bug where attempting to edit a schedule with stringified extra_data threw error (#13795) 2023-04-10 09:33:25 -03:00
Hao Liu
80ebe13841 Merge pull request #13825 from TheRealHaoLiu/fix-dependency-conflict
Fix importlib-metadata dependency conflict
2023-04-07 13:17:49 -04:00
Hao Liu
328880609b Fix importlib-metadata dependency conflict
Rerun requirements/updator.sh to regenerate requirements.txt and fix the conflict introduced by https://github.com/ansible/ansible-runner/pull/1224
2023-04-07 11:48:34 -04:00
John Westcott IV
71c307ab8a Merge pull request #13808 from ansible/feature_on-premise-analytics
Proxy analytics requests through AWX API
2023-04-07 11:46:14 -04:00
John Westcott IV
3ce68ced1e Merge pull request #13809 from ansible/feature_usage-collection-pt2
Enhance usage metrics collection
2023-04-07 11:44:59 -04:00
Martin Slemr
20817789bd HostMetric task param check 2023-04-07 08:56:03 -04:00
Salma Kochay
2b63b55b34 UI test fixes for hiding subscription details 2023-04-07 08:56:03 -04:00
Salma Kochay
64923e12fc show/hide host metric subscription details 2023-04-07 08:56:03 -04:00
Martin Slemr
6d4f92e1e8 HostMetric Cleanup task 2023-04-07 08:56:03 -04:00
Martin Slemr
fff6fa7d7a Additional Licensing values 2023-04-07 08:56:03 -04:00
Martin Slemr
44db4587be Analytics upload: HostMetrics hybrid sync 2023-04-07 08:56:03 -04:00
Martin Slemr
dc0958150a Adding analytics to root API page 2023-04-07 08:54:56 -04:00
John Westcott IV
9f27436c75 Adding basic unit/functional tests 2023-04-07 08:54:56 -04:00
John Westcott IV
e60869e653 Consolidating similar methods 2023-04-07 08:54:56 -04:00
John Westcott IV
51e19d9d0b Adding all endpoints to /api/v2/analytics/ 2023-04-07 08:54:56 -04:00
Martin Slemr
0fea29ad4d Analytics API: OPTIONS proxy and response links update 2023-04-07 08:54:56 -04:00
Martin Slemr
0a40b758c3 Analytics API: Paths, headers and Error handling 2023-04-07 08:54:56 -04:00
Martin Slemr
1191458d80 Analytics API: Basics 2023-04-07 08:54:56 -04:00
Hao Liu
c0491a7b10 Merge pull request #13816 from TheRealHaoLiu/workaround-failed-make-requirements_awx
Temporary workaround for make requirements_awx failure and fix license test
2023-04-07 00:07:13 -04:00
Hao Liu
14e613bc92 Fix failed license check
psycopg2 also starts with psycopg

Co-Authored-By: Gabriel Muniz <gmuniz@redhat.com>
2023-04-06 23:35:24 -04:00
Hao Liu
98e37383c2 Temporary workaround for make requirements_awx failure 2023-04-06 22:14:51 -04:00
John Westcott IV
9e336d55e4 Merge pull request #13805 from john-westcott-iv/fix_closing_colors
Do not add closing color tags if --no-color was specified
2023-04-06 08:41:49 -04:00
John Westcott IV
0e68caf0f7 Do not add closing color tags if --no-color was specified 2023-04-05 12:03:15 -04:00
Hao Liu
c9c150b5a6 Merge pull request #13799 from TheRealHaoLiu/fix-supervisor-conf-file
Fix supervisor conf file inconsistency
2023-04-05 11:07:05 -04:00
Hao Liu
f97605430b Merge pull request #13804 from TheRealHaoLiu/heartbeet-logging
Add log handler and file for heartbeet
2023-04-05 11:06:32 -04:00
Hao Liu
454f31f6a4 Add log handler and file for heartbeet 2023-04-05 10:38:35 -04:00
Hao Liu
f62bf6a4c3 Fix supervisor conf file inconsistency 2023-04-05 10:32:02 -04:00
John Westcott IV
a0dafbfd8c Merge pull request #13803 from john-westcott-iv/try_and_fix_checks
Adding import of centos repo key for dnf
2023-04-05 10:04:55 -04:00
John Westcott IV
b5c052b2e6 Adding import of centos repo key for dnf 2023-04-05 09:38:02 -04:00
Rick Elrod
1e690fcd7f Only use constr. inv URL when req comes from it (#13797)
When the API request is for /inventories/id use that as the URL in the
API response. When the request is for /constructed_inventories/id use
that.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-04-04 15:26:52 +00:00
Lorenzo Tanganelli
479d0c2b12 add instance_groups on cli and awx.awx.role (#13784) 2023-04-04 10:09:48 -04:00
Lorenzo Tanganelli
ede185504c fix js error in case of locale not exists 2023-04-03 21:03:14 +02:00
Alan Rominger
2db29e5ce2 Merge pull request #13786 from AlanCoding/refresh_refresh_refresh
Fix docker-clean target, accounting for slashes
2023-03-30 14:20:04 -04:00
Alan Rominger
7bb0d32be1 Fix docker-clean file, accounting for slashes 2023-03-30 13:46:15 -04:00
Hao Liu
acb22f0131 Merge pull request #13423 from ansible/feature_web-task-split
Allow web and task container to be deployed in separate deployment on Kubernetes
2023-03-30 12:52:22 -04:00
Rick Elrod
4f99a170be Nix websocket docs for now
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-30 08:48:50 -04:00
Hao Liu
17f5c4b8e6 Modify dev make target name to clarify intention
These make targets start the different daemons within the kube/docker development environment; the names were updated to better reflect that intention.

Also added comments above each make target describing what it does.

Note: these comments show up when running `make help`
2023-03-30 08:47:18 -04:00
Oleksii Baranov
598f9e2a55 Add host_metrics page to the awxkit 2023-03-30 08:46:17 +02:00
Hao Liu
d33573b29c Merge pull request #13603 from jjwatt/jjwatt-fix-clean-languages 2023-03-29 22:49:13 -04:00
Hao Liu
bc55bcf3a2 Rename SUPERVISOR_CONFIG_PATH
Previously this was used so that a task running in the task container could reach into the web container to restart rsyslog.

Now that the web container and task container are split there is no longer a way to do that, so the env var was renamed to reflect what it now references:

the supervisor conf file of the currently running container
2023-03-29 22:09:19 -04:00
Hao Liu
6c0c1f6853 Rename launch script for launching awx web
launch_awx.sh, which this PR renames, is now only used for launching the awx web container; the rename reflects its purpose.

Also remove the no-longer-needed creation of the rsyslog conf, as rsyslog is no longer in the web container.

Update Dockerfile.j2
2023-03-29 22:09:19 -04:00
Hao Liu
0cc02d311f Rename supervisor.conf.j2 to be descriptive
supervisor.conf.j2 is the template for the web container's supervisor.conf; renaming it to supervisor_web.conf makes it clearer that it is used for the web container
2023-03-29 22:09:19 -04:00
jessicamack
13b9a6c5e3 Remove unused import
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
Lila
ac2f2039f5 Fix cache-clear for kube dev env
Missing conditional for when running in kube development environment
2023-03-29 22:09:19 -04:00
Hao Liu
c8c8ed1775 Raise ValueError when no ready and enabled task instance 2023-03-29 22:09:19 -04:00
thedoubl3j
6267469709 remove rsyslog_configurer from dispatcher as it is already being handled, add rsyslog_configurer to tower_processes 2023-03-29 22:09:19 -04:00
Lila
a1e39f71fc Removed errant comments. 2023-03-29 22:09:19 -04:00
Hao Liu
4b0acaf7a1 Add back missing rsyslog.conf file 2023-03-29 22:09:19 -04:00
Hao Liu
968267287b Catch SynchronousOnlyOperation and get setting async
If trying to get a setting from an async context (in daphne), catch the SynchronousOnlyOperation error and retry in a thread
2023-03-29 22:09:19 -04:00
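The pattern described in that commit can be sketched roughly as below; this is an illustrative example, not the actual AWX code, and the helper name is made up.

from concurrent.futures import ThreadPoolExecutor

from django.core.exceptions import SynchronousOnlyOperation


def get_setting_safely(settings, name):
    # Illustrative only: attempt the normal (synchronous) settings lookup first.
    try:
        return getattr(settings, name)
    except SynchronousOnlyOperation:
        # We were called from an async context (e.g. under daphne), so retry
        # the same lookup in a worker thread where ORM access is allowed.
        with ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(getattr, settings, name).result()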
Hao Liu
25303ee625 Only select task instances that are ready and enabled
When selecting a queue for a task to run in, only select task instances that are ready and enabled
2023-03-29 22:09:19 -04:00
jessicamack
8c5e2237f4 import typing to fix lint issue
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
57d009199d removed unused imports. fix exception message
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
24cbf39a93 fix heartbeet ascii lint issue
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
95f1ef70a7 update licenses to include new requirement
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
jessicamack
680e2bcc0a remove out of date test code
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:19 -04:00
Hao Liu
cd3f7666be add get_task_queuename
get_local_queuename returns the pod name of the instance

Now that web and task run in different pods, when the web container queues a task it would be put into a queue without a task worker to execute it
2023-03-29 22:09:19 -04:00
Hao Liu
049fb4eff5 fix job relaunch error
AttributeError: 'Settings' object has no attribute 'INSTALL_UUID'
2023-03-29 22:09:19 -04:00
Hao Liu
7cef4e6db7 clear settings cache after changing DISABLE_LOCAL_AUTH 2023-03-29 22:09:19 -04:00
jessicamack
da004da68a make reconfigure_rsyslog a task
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
jessicamack
b29f2f88d0 updated tests to be in line with clear_setting_cache changes
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
jessicamack
52a8a90c0e remove changes used for dev testing
Signed-off-by: jessicamack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
Hao Liu
7cb890b603 minor fix-up due to merge conflict 2023-03-29 22:09:18 -04:00
Jessica Mack
78652bdd71 add functionality back to cache clear method
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
Jessica Mack
29d222be83 removed rsyslog queue, updated logger level
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:09:18 -04:00
Lila Yasin
e7fa730f81 Removed some commented out code and adjusted a few loggers to make more sense contextually. (#13424) 2023-03-29 22:09:18 -04:00
Seth Foster
33f070081c Send subsystem metrics via wsrelay (#13333)
Works by adding a dedicated producer in wsrelay that looks for
local django channels message with group "metrics". The producer
sends this to the consumer running in the web container.

The consumer running in the web container handles the message by
pushing it into the local redis instance.

The django view that handles a request at the /api/v2/metrics
endpoint will load this data from redis, format it, and return the
response.
2023-03-29 22:09:18 -04:00
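A minimal sketch of the producer side described in that commit, assuming Django Channels; the group name "metrics" comes from the commit text, while the message type and function name are placeholders.

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer


def emit_subsystem_metrics(payload):
    # Publish the serialized metrics to the local "metrics" channels group;
    # a dedicated producer in wsrelay can then forward it to the web container.
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        "metrics",
        {"type": "internal.message", "text": payload},
    )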
Rick Elrod
44463402a8 [wsrelay] attempt to standardize logging levels
This needs some work, but it's a start.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
93c2c56612 [wsrelay] Copy the message payload before we relay
We internally manipulate the message payload a bit (to know whether we
are originating it on the task side or the web system is originating
it). But when we get the message, we actually get a reference to the
dict containing the payload.

Other producers in wsrelay might still be acting on the message and
deciding whether or not to relay it. So we need to manipulate and send a
*copy* of the message, and leave the original alone.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
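The fix described above amounts to something like this sketch (the marker field is hypothetical): mutate and relay a copy, leaving the shared payload untouched for other producers.

import copy


def relay_message(message, send):
    # Other wsrelay producers may still inspect `message`, so never mutate it.
    outgoing = copy.deepcopy(message)
    outgoing["needs_relay"] = False  # hypothetical marker added before relaying
    send(outgoing)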
Rick Elrod
91bf49cdb3 Remove auto-reconnect logic from wsrelay
We no longer need to do this from wsrelay, as it will automatically try
to reconnect when it hears the next beacon from heartbeet.

This also cleans up the logic for what we do when we want to delete a
node we previously knew about.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
704759d29a add wsrelay to tower-processes
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
513f433f17 Add comment for new psycopg dep
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
5f41003fb1 Prevent looping issue when task/web share a Redis
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
2e0f25150c Start of heartbeet daemon
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
4f5bc992a0 fix merge from devel - wsbroadcast -> wsrelay
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
a9e7508e92 WIP: Make wsrelay listen for pg_notify heartbeat
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
1c2eb22956 Remove some debug code and modify logging a bit
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Rick Elrod
a987249ca6 dedent a block that was clearly meant to be de-dented
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-29 22:09:18 -04:00
Shane McDonald
ab6d56c24e initial PoC for wsrelay
Checkpoint
2023-03-29 22:04:43 -04:00
Jessica Mack
c4ce5d0afa updated supervisor to include cache-clear
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
43f4872fec these methods don't need to be class methods
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
cb31973d59 switched to using the built in task processing
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
9f959ca3d4 removed unneeded launch file and Dockerfile change
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
454d6d28e7 mock additional pg_notify use in test
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
8b70fef743 removed unused import
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
026b8f05d7 added launch file, docker, and supervisor changes
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Jessica Mack
d8e591cd69 added cache-clear service. update dispatcher queues
Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Hao Liu
38cc193aea update launch_awx_rsyslog.sh permission to +x (#13399)
Signed-off-by: Hao Liu <haoli@redhat.com>
2023-03-29 22:04:43 -04:00
Lila Yasin
65b3e0226d Created new rsyslog launch file. (#13327)
* Created new rsyslog launch file.
* Rsyslog conf work.
* Refining how we're calling rsyslog conf.
* Removed rsyslog so it no longer launches in the web container.
* Added the new launch_awx_rsyslog.sh to the /usr/bin
2023-03-29 22:04:43 -04:00
jessicamack
b5e04a4cb3 AWX code changes for rsyslog decoupling (#13222)
* add management command and logging for new daemon
* switch tasks over to calling pg_notify
* add daemon to docker-compose and supervisor
* renamed handle_setting_changes and moved notify call
* removed initial rsyslog configure from dispatcher
* add logging and clear cache before reconfigure
* add notify to delete
* moved pg_notify to own function
* update tests impacted by rsyslog change
* changed over to new pg_notify method

Signed-off-by: Jessica Mack <jmack@redhat.com>
2023-03-29 22:04:43 -04:00
Christian Adams
c89c2892c4 Merge pull request #13749 from fosterseth/mintls13false
Allow TLS 1.2 for Receptor connections
2023-03-29 19:20:09 -04:00
Alan Rominger
5080a5530c Merge pull request #13448 from ansible/feature_constructed-inventory
Allow for using Ansible's `constructed` inventory plugin to dynamically group hosts from AWX inventories
2023-03-29 09:27:21 -04:00
Rick Elrod
77743ef406 [collection] Example for constructed inventories (#13755)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Marliana Lara
f792fea048 Add more constructed inventory hint examples 2023-03-28 11:20:24 -05:00
Alan Rominger
16ad27099e [constructed-inventory] Save facts on model for original host (#13700)
* Save facts on model for original host

Redirect to original host for ansible facts

Use current inventory hosts for facts instance_id filter
Thanks to Gabe for identifying this bug

* Fix spelling of queryset

Co-authored-by: Rick Elrod <rick@elrod.me>

* Fix sign error with facts expiry - from review

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Alan Rominger
3f5a4cb6f1 [constructed-inventory] Backlink events to real hosts and summaries to both hosts (#13718)
* Backlink events to real hosts and summaries to both hosts

* Prevent error when original host is deleted during job run

* No duplicate entries, review suggestion from Rick

* Change word tense in help text, dict style adjustments

From code review

Co-authored-by: Rick Elrod <rick@elrod.me>

* Back out new variable for constructed host id

---------

Co-authored-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Alan Rominger
b88d9f4731 Force overwrite all vars for constructed inventory (#13731) 2023-03-28 11:20:24 -05:00
Alan Rominger
62b79b1959 Point constructed inventory URL to special view (#13730) 2023-03-28 11:20:24 -05:00
Alan Rominger
be5a2bbe61 Fail inventory updates with unmatched limits (#13726) 2023-03-28 11:20:24 -05:00
Rick Elrod
84edbed5ec [constructed-inventory] Fix some validation for constructed inv sources (#13727)
- When updating, we need the original object so we can make sure we
  aren't changing things we shouldn't be.
- We want to allow source_vars and limit, but not much else.
- We want to block everything else (at least, if it doesn't match what
  is in the original object...to allow the collection to work properly).
- Add two functional tests.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Alan Rominger
aa631a1ba7 [constructed-inventory] Allow filtering based on facts (#13678)
* initial functional filter-on-facts functionality

* Move facts to its own module to make interface more coherent

* Update test
2023-03-28 11:20:24 -05:00
Alan Rominger
771b831da8 Fail constructed inventory if ANY source is unparsed 2023-03-28 11:20:24 -05:00
Alan Rominger
ce4c1c11b3 Remove towervars from constructed inventory hosts (#13686) 2023-03-28 11:20:24 -05:00
Marliana Lara
054a70bda4 Filter constructed inventory hosts from smart inventory host lookup 2023-03-28 11:20:24 -05:00
Rick Elrod
ab0463bf2a Ordered m2m for Inventory/Inventory relationship (#13602)
Including changes to our custom Ordered m2m field which previously broke
if the source and target model was the same.

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-03-28 11:20:24 -05:00
Marliana Lara
2bffddb5fb Add constructed inventory edit form 2023-03-28 11:20:24 -05:00
Marliana Lara
d576e65858 Add constructed inventory add form 2023-03-28 11:20:24 -05:00
Marliana Lara
e3d167dfd1 Hide constructed and smart inventories in Inventory Lookup 2023-03-28 11:20:24 -05:00
Alex Corey
ba9533f0e2 Adds constructed inventory groups and related groups. 2023-03-28 11:20:24 -05:00
Alex Corey
e7a739c3d7 Creates constructed inventory host lists by reusing, and renaming smart inventory host list components. 2023-03-28 11:20:24 -05:00
Marliana Lara
ab3a9a0364 Update inventory details after inventory source sync 2023-03-28 11:20:24 -05:00
Marliana Lara
7dd1bc04c4 Add constructed inventory detail's sync button 2023-03-28 11:20:24 -05:00
Gabe Muniz
8c4e943af0 refactored to use is_valid_relation instead of post 2023-03-28 11:20:24 -05:00
Gabe Muniz
7112da9cdc Various validations for const. inv. serialization
- prevent constructed inventory host,group,inventory_source creation
- disable deleting constructed inventory hosts
- remove the ability to add constructed inventory sources
- remove ability to add constructed inventories to constructed inventories
- block updates to constructed source type
- added tests for group/host/source creation
2023-03-28 11:20:24 -05:00
Marliana Lara
7a74437651 Add constructed inventory CRUD and subtab routes
* Add constructed inventory API model
 * Add constructed inventory detail view
 * Add util to switch inventory url based on "kind"
2023-03-28 11:20:24 -05:00
Hao Liu
e22967d28d add constructed kind to inventory module
- add kind 'constructed' to inventory module
- add 'input_inventories' field to inventory module

Co-authored-by: Rick Elrod <rick@elrod.me>
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-28 11:20:24 -05:00
Gabe Muniz
df6bb5a8b8 Refactor original hosts, add related field
Also rename source_inventories to input_inventories
2023-03-28 11:20:24 -05:00
Gabe Muniz
aa06940df5 force kind to readonly field and set kind to constructed in create 2023-03-28 11:20:24 -05:00
Alan Rominger
3e5467b472 [constructed-inventory] Add constructed inventory docs and do minor field updates (#13487)
* Add constructed inventory docs and do minor field updates

Add verbosity field to the constructed views

automatically set update_on_launch for the auto-created constructed inventory source
2023-03-28 11:20:24 -05:00
Alan Rominger
c2fe06dd95 [constructed-inventory] Use control plane EE for constructed inventory and hack temporary image (#13474)
* Use control plane EE for constructed inventory and hack temporary image

* Update page registry to work with new endpoints
2023-03-28 11:20:24 -05:00
Gabe Muniz
510f54b904 adding limit to inventory_source collection module 2023-03-28 11:20:24 -05:00
Alan Rominger
57e005b775 Start on new constructed inventory API view
Make the GET function work at most basic level

Basic functionality of updating working

Add functional test for the GET and PATCH views

Add constructed inventory list view for direct creation

Add limit field to constructed inventory serializer
2023-03-28 11:20:24 -05:00
Gabe Muniz
aad260bb41 edit new migration for deprecation of host_filter 2023-03-28 11:20:24 -05:00
Gabe Muniz
e3d39a2728 push limit to inventory sources
move limit field from InventorySourceSerializer to InventorySourceOptionsSerializer (#13464)

InventorySourceOptionsSerializer is the parent for both InventorySourceSerializer and InventoryUpdateSerializer

The limit option need to be exposed to both inventory_source and inventory_update

Co-Authored-By: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2023-03-28 11:17:17 -05:00
Alan Rominger
f59ced57bc Model and task changes for constructed inventory
Add in required setting about empty groups
2023-03-28 11:17:17 -05:00
Hao Liu
7f085e159f Merge pull request #13712 from ansible/feature_usage-collection
Allow soft deletion of HostMetrics and add usage collection utility
2023-03-28 12:16:02 -04:00
Seth Foster
db2253601d Allow TLS 1.2 for Receptor connections
- Required for FIPS environment where TLS 1.3 is
not supported
- TLS 1.3 can still be used if the nodes
both agree to use it during the handshake.
2023-03-27 11:07:30 -04:00
Klaas Demter
32a5186eea Fixes #6556 Expose SOCIAL_AUTH_USERNAME_IS_FULL_EMAIL (#13641)
Signed-off-by: Klaas Demter <Klaas-@users.noreply.github.com>
2023-03-27 11:30:40 -03:00
Aparna Karve
c30c9cbdbe Remove --until option 2023-03-23 14:13:16 -04:00
Martin Slemr
8ec6e556a1 HostMetricSummaryMonthly API commented out 2023-03-23 14:13:16 -04:00
Hao Liu
382f98ceed Fixing migration files 2023-03-23 14:13:03 -04:00
Aparna Karve
fbd5d79428 Added internal batch processing for up to 10k rows
For --rows_per_file if > 10k, rows would be fetched in batches of 10k
2023-03-23 14:06:56 -04:00
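A rough illustration of the batching described above (the 10k figure is from the commit; the helper itself is hypothetical):

def iter_in_batches(queryset, batch_size=10_000):
    # Yield successive slices of the queryset so no more than batch_size
    # rows are held in memory while writing each output file.
    offset = 0
    while True:
        rows = list(queryset[offset:offset + batch_size])
        if not rows:
            break
        yield rows
        offset += batch_size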
Aparna Karve
878008a9c5 make rows_per_file optional parameter
Removed 2 sql statements that gave the info on row count
which warranted many other changes
2023-03-23 14:06:56 -04:00
Aparna Karve
132fe5e443 Remove pandas use csv. Also, remove anonymization 2023-03-23 14:06:56 -04:00
Aparna Karve
311cea5a4a CLI for host usage collection 2023-03-23 14:06:56 -04:00
Zita Nemeckova
88bb6e5a6a Fix test failure 2023-03-23 14:06:56 -04:00
Zita Nemeckova
c117ca66d5 Show HostMetrics only for specific subscription
SUBSCRIPTION_USAGE_MODEL: 'unique_managed_hosts'

Fixes https://issues.redhat.com/browse/AA-1613
2023-03-23 14:06:56 -04:00
Zita Nemeckova
c20e8eb712 Prettier 2023-03-23 14:06:56 -04:00
Zita Nemeckova
5be90fd36b Do not show deleted host metrics 2023-03-23 14:06:56 -04:00
Zita Nemeckova
32a56311e6 Fix linting issues 2023-03-23 14:06:56 -04:00
Zita Nemeckova
610f75fcb1 Update routeConfig test to be according to RBAC 2023-03-23 14:06:56 -04:00
Zita Nemeckova
179868dff2 Add possibility to select and delete HostMetrics 2023-03-23 14:06:56 -04:00
Zita Nemeckova
9f3c4f6240 RBAC: only superuser and auditor can see HostMetrics 2023-03-23 14:06:56 -04:00
Zita Nemeckova
d40fdd77ad Fix filter to take only hostname__icontains and disable advanced search 2023-03-23 14:06:56 -04:00
Zita Nemeckova
9135ff2f77 Add HostMetrics routes to the test 2023-03-23 14:06:56 -04:00
Zita Nemeckova
8d46d32944 UI 2023-03-23 14:06:56 -04:00
Martin Slemr
ae0c1730bb Subscription_usage_model in analytics/config.json 2023-03-23 14:06:55 -04:00
Martin Slemr
9badbf0b4e Compliance computation settings 2023-03-23 14:06:55 -04:00
Martin Slemr
7285d82f00 HostMetric migration 2023-03-23 14:06:55 -04:00
Alan Rominger
e38f87eb1d Remove custom API filters and suggest solution via templates 2023-03-23 14:06:55 -04:00
Martin Slemr
e6050804f9 HostMetric review,migration,permissions 2023-03-23 14:06:55 -04:00
Martin Slemr
f919178734 HostMetricSummaryMonthly API and Migrations 2023-03-23 14:06:55 -04:00
Martin Slemr
05f918e666 HostMetric compliance computation 2023-03-23 14:06:55 -04:00
Martin Slemr
b18ad77035 Host Metrics update/soft delete 2023-03-23 14:06:55 -04:00
Martin Slemr
d80759cd7a HostMetrics migration 2023-03-23 14:06:55 -04:00
Martin Slemr
ef4e77d78f Host Metrics List API 2023-03-23 14:06:55 -04:00
Shane McDonald
bf98f62654 Merge pull request #13705 from jainnikhil30/dont_use_githubusercontent
Don't use githubusercontent for containers.conf and podman-containers.conf
2023-03-23 11:58:58 -04:00
Marliana Lara
1f9925cf51 Fix automation analytics link in license page (#13225) 2023-03-23 08:02:16 -03:00
Hao Liu
4bf8366687 Merge pull request #13743 from TheRealHaoLiu/ui-next-non-phony
Turn ui-next make targets non-PHONY
2023-03-22 21:05:18 -04:00
Hao Liu
21b4755587 Turn make ui-next target non-PHONY
This allows you to pre-build ui_next outside of the container, and it won't try to rebuild it when you build the awx image.

`make ui-next` will no longer rebuild if awx/ui_next/build exists
2023-03-22 20:38:54 -04:00
Seth Foster
b4163dd00f Update node affinity description (#13741) 2023-03-22 20:54:08 +00:00
Hao Liu
6908f415a1 Merge pull request #13660 from ansible/feature_ui-next
Introducing tech preview of the new AWX UI
2023-03-21 14:09:47 -04:00
Hao Liu
746cd4bf77 Add note to indicate ui-next is imported target 2023-03-21 13:43:13 -04:00
Hao Liu
39ea162aa9 Update UI_NEXT help text in UI 2023-03-21 13:43:13 -04:00
Hao Liu
5bd00adb59 Update UI_NEXT README
also cleanup some small things
2023-03-21 13:43:13 -04:00
Alan Rominger
28b1c62275 Fix bug with awx collection manual type alias (#13671)
* Fix bug with manual type alias

* Add unit test for creating manual project with path
2023-03-20 15:26:34 -04:00
Vishali Sanghishetty
f3cdf368df Merge pull request #13693 from mabashian/12651-workflow-convergence
Fixes bug where editing a node always defaulted to all convergence
2023-03-20 15:08:52 -04:00
Michael Abashian
4302348e8e Fixes bug where editing a node always defaulted to all convergence 2023-03-20 14:33:44 -04:00
Hao Liu
cd6cb3352e fail UI_NEXT make src if variable not set 2023-03-20 14:05:58 -04:00
Hao Liu
d1895bb92e PHONY all UI_NEXT build target
- they were all PHONY to start with, and all targets are written to be rerunnable
2023-03-20 14:05:58 -04:00
Hao Liu
8d47644659 Move placeholder index_awx.html out of build dir
- move placeholder index_awx.html out of ui_next build dir
- copy index_awx.html to build dir during development bootstrap if UI_NEXT has not been built
2023-03-20 14:05:58 -04:00
Oleksii Baranov
46227f14a1 Add logging and reduce migration to one operation 2023-03-20 14:19:30 +01:00
Oleksii Baranov
2d114a4d16 Add migration for new cyberark plugin names 2023-03-20 14:19:30 +01:00
lucas-benedito
7deddabea6 8049-expose execution node var for playbook (#13418)
Expose execution node var for playbook

---------

Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-03-17 15:12:25 -04:00
Gabriel Muniz
e15f4de0dd Fix race with heartbeat and reaper logic (#13713)
* Fix race with heartbeat and reaper logic

* Fix tests to fail when over drift over heartbeat time

* replaced modified with started time for reap() code and added test

* fixed logic bug and cleaned up tests

* Added comments to tests to call out reasoning
2023-03-17 14:24:31 -04:00
Kia Lam
f558957538 Commit .po files. 2023-03-17 09:41:29 -07:00
John Westcott IV
fa3920d3a3 Adding default index_awx.html in case user forgets to build ui-next 2023-03-17 11:11:22 -04:00
Hao Liu
48a04bff5a add new UI icons 2023-03-16 23:37:30 -04:00
Kia Lam
c30760aaa9 Fix brandname in banner. 2023-03-16 23:37:30 -04:00
Michael Abashian
3636c5e95e Adds missing mock for fetching the brand name 2023-03-16 23:37:30 -04:00
Hao Liu
ae0d868681 make dev-env test pass 2023-03-16 23:37:30 -04:00
Hao Liu
edbed92c95 Refine UI_NEXT Makefile and update README 2023-03-16 23:37:30 -04:00
Hao Liu
b75b098ee9 throw 404 when UI_NEXT false 2023-03-16 23:34:30 -04:00
Michael Abashian
4f2f345e23 Fix use of brandName 2023-03-16 23:34:30 -04:00
Michael Abashian
41a4551c91 Only show tech preview banner when config.ui_next is true. Use brandName variable in tech preview banner. 2023-03-16 23:34:30 -04:00
Hao Liu
229dbe0905 Add ui_next to /api/v2/config
- Add ui_next to /api/v2/config
- enable the banner to show up for normal users, since /api/v2/settings is only available to admin users
2023-03-16 23:34:30 -04:00
Michael Abashian
d137086870 Adds UI bits for new UI_NEXT system setting 2023-03-16 23:34:30 -04:00
Hao Liu
f53aa2d26b Build and serve UI_NEXT
- Add new makefile for building ui_next
- Add setting to toggle ui_next
- Add URL path for displaying ui_next
- Update collectstatic and template dir config to serve ui_next
2023-03-16 23:34:30 -04:00
Kia Lam
42c848b57b Add banner to dashboard page.
Co-Authored-By: kialam <2293210+kialam@users.noreply.github.com>
2023-03-16 23:23:21 -04:00
Vishali Sanghishetty
3e6e0463b9 Merge pull request #13708 from marshmalien/13675-code-editor-lastYaml
Match CodeMirror mode to value type on initialization
2023-03-16 16:39:21 -04:00
John Westcott IV
ededc61a71 Merge pull request #13621 from Vaibhavg4651/master
Update constants.js
2023-03-16 11:16:22 -04:00
Marliana Lara
3747f5b097 Match codemirror mode (YAML/JSON) with the value on initialization 2023-03-16 11:09:46 -04:00
jainnikhil30
64b0e09e87 don't use githubusercontent for containers.conf and podman-containers.conf 2023-03-16 18:04:20 +05:30
Michael Abashian
790ccd984c Turn off auto completion on the login form (#13471) 2023-03-16 08:03:48 -03:00
Rick Elrod
5d0849d746 [tests] Some survey tests were being skipped (#13703)
The class that contained these tests wasn't named Test*, so the tests in
it weren't running. Fix that and fix the tests in it so that they pass.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-03-15 22:49:27 -05:00
Michael Abashian
7f1750324f Adds support for a pseudolocalization and lang query params (#13661)
* Adds support for a pseudolocalization query param to check to see whether a string has been marked for translation

* Adds support for passing a lang param to force rendering in a particular language

* Remove unused import
2023-03-15 08:06:19 -03:00
Gabriel Muniz
a63067da38 Add instance groups roles (#13584)
* adding roles to instance groups
added ResourceMixin to Instancegroup and changed the filtered_queryset

* added necessary changes to rebuild relationship between IG and roles

* added description to InstanceGroupAccess

* preliminary ui plug for demo purposes

* preliminary ui plug for demo purposes
added inventory special logic for use_role to allow attaching instance groups
added more tests to handle those cases

* Add access_list to InstanceGroup

* scratch branch to test migration work

* refactored to shorten logic

* Added migration and am removing logic that enabled Org admin permissions

* Add Obj admin role to JT, Inv, Org

* Changed tests to reflect new permissions

* refactored some of the tests

* cleaned up more tests and reworded help on InstanceGroupAccess

* Removed unnecessary delete of Route for instance group perms change

* Fix UI tests and migration

* fixed permissions on prompt for InstanceGroups

* added related object roles endpoint

* added ui/api function for options instance_groups

* separate the migrations in order to avoid issues with migrations not being finished

* changed migrations parent class to disable the activity stream error in migrations

* Added logging to migration as activitystream is disabled

* added clarifying comment to jobtemplateaccess and linted UI addition

* renamed migrations to avoid collisions

* Rename migrations to avoid collisions
2023-03-14 21:37:22 -04:00
Alan Rominger
7a45048463 Merge pull request #13591 from AlanCoding/templates_galore
Update templates for feature removals
2023-03-14 16:30:15 -04:00
Alan Rominger
97a5e87448 Update templates for feature removals
MOVE the config template v1 to v2
delete other v1 views since v1 is deleted

the host fact gather collection over time was removed

also the job start view was removed

Insights integration was changed and the host insights
  view no longer exists

Slightly modernize config help
2023-03-14 09:40:48 -04:00
Gabriel Muniz
11475590e7 Merge pull request #13648 from gamuniz/update_inventory_import
Update inventory import to cancel on failure from cli.
2023-03-13 21:34:05 -04:00
John Westcott IV
7e88a735ad Merge pull request #13427 from dlyog/devel
Fix for Issue Thycotic SSH Key Template #13384
2023-03-13 09:53:20 -04:00
Gabriel Muniz
2f3e65d4ef Merge pull request #13679 from gamuniz/fix_migration_collision
Fix migration name collision
2023-03-12 18:24:14 -04:00
Gabe Muniz
cc18c1220a Fix migration name collision 2023-03-12 18:01:54 -04:00
Sarah Akus
d2aa1b94e3 Merge pull request #13644 from fosterseth/inv_source_scm_branch
Add scm_branch to inventory source and inventory update
2023-03-11 10:57:21 -05:00
Seth Foster
a97c1b46c0 Merge pull request #13670 from fosterseth/wait_for_pg
docker-compose wait for pg to be ready
2023-03-10 16:35:27 -05:00
Seth Foster
6a3282a689 docker-compose wait for PG to be ready
- periodically ping postgres on port 5432 and only start
migrations if successful.
- prevents crash loop when attempting migrations before
postgres is ready.
2023-03-10 16:13:19 -05:00
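The actual change lives in the compose/launch tooling, but the behavior is roughly this hypothetical helper: keep probing port 5432 and only proceed to migrations once PostgreSQL answers.

import socket
import time


def wait_for_postgres(host="postgres", port=5432, timeout=60):
    # Retry a plain TCP connection until postgres accepts it or we give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)
    raise RuntimeError(f"PostgreSQL at {host}:{port} not reachable after {timeout}s")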
Seth Foster
be27d89895 Merge pull request #13677 from fosterseth/fix_testautoscaling
TestAutoScaling wait for process to stop
2023-03-10 16:13:05 -05:00
Seth Foster
160508c907 TestAutoScaling wait for process to stop 2023-03-10 15:51:28 -05:00
Sarah Akus
5a3900a927 Merge pull request #13667 from akus062381/change-to-makefile
update Makefile to account for being inside or outside of a container
2023-03-09 08:54:09 -05:00
akus062381
f2bfaf7aca fixed 2023-03-08 19:59:25 -05:00
akus062381
d1cf7245f7 change Makefile 2023-03-08 19:33:37 -05:00
jainnikhil30
0de7551477 comment everything related to instance group, will add back once 13584 goes in
linting

linting again

Use the correct role on org permission check

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Update docs/bulk_api.md

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Update docs/bulk_api.md

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Update awx/main/access.py

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Update awx/main/access.py

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Update docs/bulk_api.md

Co-authored-by: Alan Rominger <arominge@redhat.com>

fix collection test (#19)

improve readability of through model object creation (#18)

lower num jobs/hosts in tests (#20)

we can test query scaling at lower numbers, to reduce
load in tests. We suspect this was causing some flake
in the tests on PRs

adjust the num of queries
2023-03-08 12:58:12 -05:00
Alan Rominger
ac99708952 Serializer RBAC and structure review changes (#17)
* Bulk launch serializer RBAC and code structure review

Use WJ node as base in bulk job launch child
  remove fields we get for free this way

Minor translation marking

Consolidate bulk API permission methods
  split out permission check for each UJT type

Code consolidation for org check method

add a save before starting the workflow job
2023-03-08 12:58:12 -05:00
jainnikhil30
47b7bbeda7 make the max host default to 100
Make the max host default 100. We are seeing that with a moderate number of hosts (i.e. 500 hosts with a few host variables each) the request exceeds nginx's max message size and nginx rejects it.
We are therefore keeping the value small so that it doesn't fail with a decent number of host variables either.

remove the 999 hosts test because the default max is 100

fix the credential check

fix the instance groups and execution env permission checks
2023-03-08 12:58:12 -05:00
Elijah DeLee
bca0f2dd47 evaluate max bulk settings in validate and improve OPTIONS (#16)
* evaluate max bulk settings in validate...

instead of in a class attribute. This makes them load at request time
instead of at app start-up time, which fixes problems with tests.
It should also be a better user experience: if admins actually do change
the setting, it will apply without restarting the django app on each instance

* improve OPTIONS by not manually declaring fields

alan pointed this out
2023-03-08 12:58:12 -05:00
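The pattern being described, roughly sketched below; the setting name and limit are assumptions for illustration, not necessarily the real ones.

from django.conf import settings
from rest_framework import serializers


class BulkHostCreateSerializer(serializers.Serializer):
    hosts = serializers.ListField(child=serializers.DictField())

    def validate_hosts(self, hosts):
        # Read the limit inside validate() so it is evaluated per request,
        # picking up settings changes without restarting the app.
        limit = getattr(settings, "BULK_HOST_MAX_CREATE", 100)  # assumed setting name
        if len(hosts) > limit:
            raise serializers.ValidationError(f"Cannot create more than {limit} hosts at once.")
        return hosts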
Elijah DeLee
3efc7d5bc4 fix access problems (#15)
* fix access problems and add  Add bulk job max settings to api

filter workflow job nodes better

This will both improve performance by limiting the queryset for the node
sublists as well as fix our access problem.

override can_read instead of modify queryset in access.py

We do this because we are not going to expose bulk jobs to the list
views, which is complicated and has poor performance implications.

Instead, we just care about individual Workflows that clients get linked
to not being broken.

fix comment

remove the get functions from the conf.py for bulk api max value

comment the api expose of the bulk job variables

reformat conf.py with make black

trailing space

add more assertion to the bulk host create test
2023-03-08 12:58:12 -05:00
Seth Foster
4b9ca3deee Resolve id inventory and organization (#14) 2023-03-08 12:58:12 -05:00
jainnikhil30
f622d3a1e6 add more functional test for related fields on bulk job and some other minor fixes
fix the functional test

lint fix

functional test fixes
2023-03-08 12:58:12 -05:00
jainnikhil30
ede1b9af92 add more functional tests for prompted fields and fix the lint test
check label permission and fix lint (#13)

* set created by and launch type correctly

This makes "launched_by" get computed right in the tests.

Mysteriously this seemed to work from API browser, but
this seems more correct to have it work this way, and makes
tests actually work.

For "manual" launch types the attribute used to populate "launched_by"
is "created_by". And we already have "is_bulk_job" to indicate that the
job is a bulk job. So lets just use this.

* check label is in an organization you can read
2023-03-08 12:58:12 -05:00
Elijah DeLee
2becc5dda9 add assertion to test on number of queries made (#9)
* add assertions around access to resulting job

there is a problem getting the job w/ the user that launched it

add more assertions to bulk tests (#11)

dig more into the results and assert on results
also, use a fixture that already implemented the "max queries" thing

fix ansible collection sanity tests (#12)
2023-03-08 12:58:12 -05:00
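The kind of assertion being added, sketched with pytest-django's django_assert_max_num_queries fixture; the URL, payload, and query budget are placeholders, not the real test.

import json


def test_bulk_host_create_query_count(admin_client, django_assert_max_num_queries):
    payload = {"inventory": 1, "hosts": [{"name": f"host-{i}"} for i in range(100)]}
    # The query count should stay flat regardless of how many hosts are posted.
    with django_assert_max_num_queries(30):
        admin_client.post(
            "/api/v2/bulk/host_create/",
            data=json.dumps(payload),
            content_type="application/json",
        )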
jainnikhil30
7aad16964c removing the duplicate BulkView import 2023-03-08 12:58:12 -05:00
Nikhil
b1af27c4f6 add more docs on the bulk job launch feature
better error message
2023-03-08 12:58:12 -05:00
Alan Rominger
7cb16ef91d Make the bulk endpoint templates work in API browser
Various fixes

- Don't skip checking resource RBAC permissions for admins
Necessary to handle bad input, e.g. providing a
unified_job_template id that doesn't exist

- In awxkit, only "walk" if we get 'url' in the result

- Bulk host create should return url pointing to inventory,
not inventory/hosts

dont do org check for superuser
2023-03-08 12:58:12 -05:00
Nikhil
9358d59f20 remove char_prompts and survey password from bulk job
fix the api-lint

fix the api-lint

add the description to the bulk job launch module params

 add the description for the description field

 add the description for the description field

add docs for the bulk api

fix the models on the bulk api serializers

fix some of the issues highlighted in the code review

better use of role model

remove comments

better error message

revert the PrimaryKeyRelatedField for unified_job_template and inventory
2023-03-08 12:58:12 -05:00
Elijah DeLee
9e037f1a02 fixup return values for bulk launch and host create in awxkit
Enabled the params bulk job

make black

make black again

Fixed inventory and organization input params for bulk modules

add collection integration tests

Fix cli return errors

fix test completeness
2023-03-08 12:58:12 -05:00
Nikhil
266ebe5501 add the extra vars support and configuration for max job and hosts
dont do org validation on superuser

make black
2023-03-08 12:58:12 -05:00
Seth Foster
ce5270434c added awx collection support for bulk api
return more context for bulk host create

now return list of minimal info about host objects

[
    {
        "name": "lakjdsafoiaweirnladlk",
        "enabled": true,
        "instance_id": "",
        "description": "",
        "variables": "",
        "id": 4593,
        "url": "/api/v2/hosts/4593/",
        "inventory": "/api/v2/inventories/1/"
    }
]

Updated tests, but needed to work around some weird behavior with
sqlite. Apparently it behaves differently around assigning ID's to the
result of bulk_create, and that messed up my use of `reverse` to look
up the url of the hosts
2023-03-08 12:58:12 -05:00
Seth Foster
34834252ff awxkit cli support
fixes for awx cli
2023-03-08 12:58:12 -05:00
Nikhil
861ba8a727 add some helper functions in validate and some other minor fixes
make black changes

increase the number of queries to 30

fix the flake failure

add functional changes for bulk job launch and some minor fixes

pull changes
2023-03-08 12:58:12 -05:00
Elijah DeLee
02e5ba5f94 Move view around and inherit from right view to get OPTIONS
we needed to inherit from GenericAPIView to get the options to render
correctly

q!

add execution env support

add organization validation to the workflowjob

Update awx/api/serializers.py

Co-authored-by: Elijah DeLee <kdelee@redhat.com>

Update awx/api/serializers.py

Co-authored-by: Elijah DeLee <kdelee@redhat.com>
2023-03-08 12:58:12 -05:00
Elijah DeLee
81ba6c0234 add migration for is bulk job 2023-03-08 12:58:12 -05:00
Elijah DeLee
5c47c24e28 Introduce bulk jobs
Provide a view that allows users to launch many jobs with one POST
request. Under the hood, this creates a workflow with a number of jobs
all in a "flat" structure --  much like a sliced job, but with arbitrary
"joblets".

For ~ 100 nodes looking at ~ 200 some queries, which is more than the
proof of concept, but still an order of magnitude better than individual
job launches.

Still more work to implement other m2m objects, and need to address what
Organization should be assigned to a WorkflowJob launched by a BulkJob.

They need this so they can step into the workflow_job_nodes and get the
status of all the containing jobs.

Also want to test when there are MANY job templates etc in the system
because queries like
UnifiedJobTemplate.accessible_pk_qs(request.user, 'execute_role').all()
scare me; it seems like it could be a lot of things.

use "many=True" instead of ListField

Still seeing identical number of queries when creating 100 jobs, going to
investigate more

only validate type in nested serializer

then, we actually get the database object after we do the RBAC checks
This drops us down from hundreds of queries to launch 100 jobs,
to less than 100 queries to launch 100 jobs (I got around 24 queries to
launch 100 jobs with credentials)

pave way for more promptable things

add "limit" as possible prompt on launch to bulk jobs
re-organize how we add credentials to pave way for the other m2m items
not having to repeat too much code

add labels to the bulk job

add the other fields to the workflowjobnode

move urls around
2023-03-08 12:58:12 -05:00
Elijah DeLee
752289e175 create new bulk host create endpoint
allow system admins, org admins, and inventory admins to bulk create
hosts.

Testing on an "open" licensed awx as system admin, I created 1000 hosts with 6 queries in ~ 0.15 seconds
Testing on an "open" licensed awx as organization admin, I created 1000 hosts with 11 queries in ~ 0.15 seconds

fix org max host check

also only do permission denied if license is a trial

add /api/v2/bulk to list bulk apis available

add api description templates

One motivation to not take a list of hosts with mixed inventories is to
keep things simple re: RBAC and keeping a constant number of queries.

If there is great clamor for accepting list of hosts to insert into
arbitrary different inventories, we could probably make it happen - we'd
need to pop the inventory off of each of the hosts, run the
HostSerializer validate, then in top level BulkHostCreateSerializer
fetch all the inventories/check permissions/org host limits for those
inventories/etc. But that makes this that much more complicated.

add test for rbac access

test also helped me find a bug in a query, fixed that

add test to assert num queries scales as expected

also move other test to dedicated file

also test with super user like I meant to

record activity stream for the inventory

this records that a certain number of hosts were added by a certain user
we could consider if there is any other additional information we want
to include
2023-03-08 12:58:12 -05:00
Hao Liu
a24aaba6bc Merge pull request #13663 from slemrmartin/fix-analytics-collectors
Fix analytics collector
2023-03-08 12:03:16 -05:00
Martin Slemr
349785550c Fix analytics collector 2023-03-08 17:33:23 +01:00
Seth Foster
ab6511a833 fix ui lint 2023-03-07 17:50:09 -05:00
Vidya Nambiar
a7b4c03188 Show scm_branch if project allows branch override 2023-03-07 17:50:08 -05:00
Seth Foster
a5f9506f49 spelling
add allow_override to source_project
2023-03-07 17:50:06 -05:00
Gabe Muniz
8e6f4fae80 enable scm branch ui work 2023-03-07 17:50:06 -05:00
Seth Foster
a952ab0a75 Add scm_branch to inventory source and inventory update
add scm_branch as optional field awxkit
2023-03-07 17:49:57 -05:00
Hao Liu
7cca6c4cd9 Merge pull request #13656 from TheRealHaoLiu/feature-branch-build
Automatically build image for feature branch
2023-03-07 16:53:55 -05:00
Hao Liu
3945db60eb Automatically build image for feature branch
- also will now publish awx image for devel
2023-03-07 16:24:53 -05:00
Hao Liu
252b0dda9f Merge pull request #13633 from TheRealHaoLiu/phony-dockerfile
[fix] Dockerfile collision between awx-kube-build and docker-compose-build
2023-03-07 15:42:58 -05:00
Martin Slemr
0a2f1622f6 Analytics: instance_info.json v1.3 (#13408) 2023-03-07 14:24:04 -03:00
Jesse Wattenbarger
00817d6b89 Merge pull request #13604 from jjwatt/jjwatt-make-foreach
Change docker-clean build rule in Makefile
2023-03-07 11:54:25 -05:00
Alan Rominger
06808ef4c4 Merge pull request #13608 from AlanCoding/keepalive
Use ansible-runner change to get periodic keep-alive messages in K8S
2023-03-06 14:34:37 -05:00
Gabe Muniz
3aba5b5a04 Revert EE selectable work in favor of rewriting later 2023-03-03 16:34:43 -05:00
Gabe Muniz
5c19efdc32 Add execution environment args and pass to inv source 2023-03-03 16:02:46 -05:00
Hao Liu
f0c967c1b2 Merge pull request #13645 from TheRealHaoLiu/fix-websocket
Revert "Remove trailing $ from websocket_urlpatterns to work with cus…
2023-03-02 21:36:21 -05:00
Hao Liu
2ca0b7bc01 Revert "Remove trailing $ from websocket_urlpatterns to work with custom path to fix #12241"
This reverts commit 5e28f5dca1.
2023-03-02 21:14:53 -05:00
Jesse Wattenbarger
217dc57c24 Change docker-clean build rule in Makefile
- Use a make foreach macro and rmi instead of grep and xargs.
2023-03-02 14:56:12 -05:00
Alex Corey
1411d11a0e Merge pull request #13506 from AlexSCorey/13422-JTTabOnCreds
Conditionally applies the job templates tab to credentials that can be on a JT
2023-03-02 13:15:48 -05:00
Alex Corey
2fe1ea94bd Conditionally applies the job templates tab to credentials that can be on a JT 2023-03-02 12:57:20 -05:00
Hao Liu
a47cfc55ab Merge pull request #13574 from tomsiewert/use-compose-plugin
Make docker-compose command configurable in Makefile
2023-03-01 15:41:33 -05:00
Hao Liu
0eb9de02f3 Merge pull request #13627 from infamousjoeg/fix-13597-webservice_id-default
Fixes #13597 webservice_id default value added
2023-03-01 15:29:53 -05:00
Lila Yasin
39ee4285ce Working on running spellcheck on everything ahead of merging the shellcheck/code check CI addition. (#13453) 2023-03-01 10:19:00 -03:00
Christian Adams
2dcda04a9e Merge pull request #13445 from stanislav-zaprudskiy/disable_instance_command
Add `disable_instance` management command
2023-02-28 15:37:38 -05:00
Christian Adams
52d46c88e4 External users should not be able to change their password (#13491)
* Azure AD users should not be able to change their password

* Multiple auth changes

Moving get_external_user function into awx.sso.common
Altering get_external_user to not look at current config, just user object values
Altering how api/conf.py detects external auth config (and making reusable function in awx.sso.common)
Altering logic in api.serializers in _update_pasword to use awx.sso.common

* Adding unit tests

---------

Co-authored-by: John Westcott IV <john.westcott.iv@redhat.com>
2023-02-28 15:44:34 -03:00
Hao Liu
c2df22e0f0 Merge pull request #13632 from TheRealHaoLiu/reshaving-the-yak
[chore] update project_update playbook to be compliant with ansible-lint
2023-02-28 13:17:45 -05:00
Alan Rominger
90f54b98cd Update keepalive setting help_text to be more direct
Co-authored-by: Shane McDonald <me@shanemcd.com>
2023-02-28 09:04:07 -05:00
Michael Abashian
b143df3183 Fix broken UI test 2023-02-28 09:04:07 -05:00
Alan Rominger
6fa22f5be2 Add UI for the new setting 2023-02-28 09:04:07 -05:00
Alan Rominger
d5de1f9d11 Make use of new keepalive messages from ansible-runner
Make setting API configurable and process keepalive events
  when seen in the event callback

Use env var in pod spec and make it specific to K8S
2023-02-28 09:04:07 -05:00
Hao Liu
7cca39d069 change make Dockerfile to phony
awx-kube-build and docker-compose-build share the same Dockerfile

if you run awx-kube-build and then docker-compose-build in succession, the second command won't rerun the Dockerfile target, causing the image to be built with the incorrect Dockerfile
2023-02-27 20:53:07 -05:00
Hao Liu
cf21eab7f4 [chore] update project_update playbook to be compliant with ansible-lint
reshaving the yak

Co-Authored-By: Gabriel Muniz <gmuniz@redhat.com>
2023-02-27 18:32:10 -05:00
Joe Garcia
98b2f51c18 fix kwargs[] to kwargs.get() 2023-02-27 11:52:44 -05:00
Joe Garcia
327352feaf Add default value to webservice_id kwarg 2023-02-27 11:26:52 -05:00
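The two CyberArk-lookup commits above describe the same defensive pattern. A minimal sketch of it, with identifier names assumed rather than copied from the plugin:

```python
# Hypothetical sketch (names assumed, not taken from the plugin): indexing
# kwargs directly raises KeyError when the caller omits the key, while
# .get() falls back to a default value.
def lookup(**kwargs):
    # was: webservice_id = kwargs['webservice_id']  -> KeyError if absent
    webservice_id = kwargs.get('webservice_id', '')  # default value added
    return webservice_id

print(lookup())                        # '' (default)
print(lookup(webservice_id='conjur'))  # 'conjur'
```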
Alan Rominger
ccaace8b30 Merge pull request #13541 from npithonDR/devel
Fix error for byweekday in schedule_rruleset
2023-02-27 10:24:48 -05:00
Hao Liu
2902b40084 Merge pull request #13623 from TheRealHaoLiu/revert-project-update-playbook
Revert project_update.yml
2023-02-27 08:47:24 -05:00
Hao Liu
9669b9dd2f Revert project_update.yml
Due to problem found in testing reverting

019e6a52fe
2023-02-27 08:23:27 -05:00
vaibhav gupta
a6a9d3427c Update constants.js 2023-02-24 20:59:59 +05:30
Shane McDonald
d27aada817 Merge pull request #13619 from shanemcd/non-root-path-dev-env
Allow serving app from non-root path in dev env
2023-02-24 09:52:34 -05:00
Shane McDonald
2fca07ee4c Allow serving app from non-root path in dev env
Usage:

$ EXTRA_SOURCES_ANSIBLE_OPTS='-e ingress_path=/awx' make docker-compose
$ curl http://localhost:8013/awx/api/v2/ping/
2023-02-24 09:29:17 -05:00
npithonDR
335ac636b5 Merge pull request #1 from AlanCoding/npithon
Follow comments, split non-list objects
2023-02-24 08:42:00 +01:00
Shane McDonald
f4bcc03ac7 Merge pull request #12242 from adpavlov/12241-websocket-custom-path
Fix websockets when application is served from a non-root path
2023-02-23 12:25:22 -05:00
Alan Rominger
3051384f95 Follow suggestion from comment, split if NOT list 2023-02-23 12:05:32 -05:00
Alan Rominger
811ecb8673 Follow suggestion from comment, split if NOT list 2023-02-23 12:05:21 -05:00
Alexander Pavlov
5e28f5dca1 Remove trailing $ from websocket_urlpatterns to work with custom path to fix #12241
Signed-off-by: Alexander Pavlov <alexander.pavlov@amdocs.com>
2023-02-23 12:02:47 -05:00
Hao Liu
d088d36448 Merge pull request #13618 from TheRealHaoLiu/head-to-tail
[fix] switch from head to tail in project update playbook when clearing project dir
2023-02-23 11:13:03 -05:00
Hao Liu
89e41597a6 switch from head to tail
from @relrod

`head` will close the input fd when it no longer needs it (or exits). find will try to write to the closed fd and somewhere along the way, it will receive SIGPIPE as a result. This is why `yes | head -5 ` doesn't run forever.
2023-02-23 10:46:48 -05:00
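A minimal sketch, not taken from the repo, of the behaviour the commit above describes: once `head` has its five lines it exits, and when the last read end of the pipe is closed the still-running producer is killed by SIGPIPE on its next write instead of running forever.

```python
import subprocess

# 'yes' writes "y" forever; 'head -5' reads five lines and exits.
producer = subprocess.Popen(['yes'], stdout=subprocess.PIPE)
head = subprocess.run(['head', '-5'], stdin=producer.stdout,
                      capture_output=True, text=True)
producer.stdout.close()     # drop the last read end of the pipe
producer.wait()             # 'yes' is terminated by SIGPIPE
print(head.stdout, end='')  # five lines of "y"
print(producer.returncode)  # negative value: killed by a signal
```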
Hao Liu
283adc30a8 Merge pull request #13526 from TheRealHaoLiu/project_update_playbook_lint
[chore] Update project_update playbook to be compliant with ansible-lint
2023-02-22 21:39:42 -05:00
Hao Liu
019e6a52fe Update project_update playbook to be compliant with ansible-lint 2023-02-22 19:30:24 -05:00
Hao Liu
35e5610642 Merge pull request #13615 from TheRealHaoLiu/update-kind-devel-doc
update kind development environment instruction
2023-02-22 19:25:03 -05:00
Hao Liu
3a303875bb update kind development environment instruction 2023-02-22 16:18:53 -05:00
Alan Rominger
4499a50019 Merge pull request #13595 from sean-m-sullivan/devel
fix inventory prompt on launch for workflow nodes
2023-02-22 10:23:02 -05:00
Alan Rominger
3fe46e2e27 Merge pull request #13606 from AlanCoding/copy_login
Give proper 401 code to user not logged in
2023-02-21 16:31:23 -05:00
Alan Rominger
6d3f39fe92 Give proper 401 code to user not logged in 2023-02-21 13:34:29 -05:00
Alan Rominger
a3233b5fdd Merge pull request #13594 from AlanCoding/approval_collection
Add integration test and docs for workflow_approval module
2023-02-21 09:03:17 -05:00
Jesse Wattenbarger
af6549ffcd Fix a bug in clean languages
The `$` was not escaped for make or shell.
2023-02-21 07:52:49 -05:00
sean-m-sullivan
fe3aa6ce2b fix inventory prompt on launch for workflow nodes 2023-02-18 23:13:46 -05:00
Gabriel Muniz
77ec46f6cf Merge pull request #13593 from gamuniz/fix_workflowapproval_view
Make /api/v2/workflow_approvals/ endpoint read-only
2023-02-17 18:19:04 -05:00
Alan Rominger
b5f240ce70 Add integration test and docs for workflow_approval module 2023-02-17 15:10:59 -05:00
Gabe Muniz
fb2647ff7b changing the signature of workflowapprovallist
included workflow approval as a read only endpoint to pass collection tests
2023-02-17 14:57:54 -05:00
Stanislav Zaprudskiy
35fbb94aa6 Use CLUSTER_HOST_ID as default hostname argument value
Incorporates feedback from https://github.com/ansible/awx/pull/13445/files#r1106012308

Signed-off-by: Stanislav Zaprudskiy <s.zaprudskiy@sap.com>
2023-02-17 18:10:08 +01:00
Stanislav Zaprudskiy
f2ab8d637c Do not discard jobs w/ .started=None 2023-02-17 18:10:08 +01:00
Stanislav Zaprudskiy
166b586591 Support indefinitely waiting for jobs to finish
Signed-off-by: Stanislav Zaprudskiy <s.zaprudskiy@sap.com>
2023-02-17 18:10:08 +01:00
Stanislav Zaprudskiy
d1c608a281 Reformat with black
Signed-off-by: Stanislav Zaprudskiy <s.zaprudskiy@sap.com>
2023-02-17 18:10:08 +01:00
Stanislav Zaprudskiy
b4803ca894 Add disable_instance management command
Signed-off-by: Stanislav Zaprudskiy <s.zaprudskiy@sap.com>
2023-02-17 18:10:08 +01:00
Tom Siewert
ce7f597c7e Makefile: Make docker-compose command configurable
docker-compose v1 is EOL since April 2022 and hasn't received any
updates since May 2021. docker compose v2 is a complete rewrite in
Go which acts as a plugin for the main docker application.
The syntax is the same, but only the `compose` command differs.
This commit adds the ability to override the default `docker-compose`
command using `make DOCKER_COMPOSE='docker compose'`.

Signed-off-by: Tom Siewert <tom@siewert.io>
2023-02-16 14:47:39 +01:00
John Westcott IV
23a34c5dc9 Merge pull request #13466 from john-westcott-iv/ee_debugging
Enhancing debugging of `The project could not sync because there is no Execution Environment`
2023-02-16 08:11:30 -05:00
John Westcott IV
bef3da6fb2 Merge pull request #13304 from john-westcott-iv/limit_actions
Only allow promote and stage to run on the awx repo
2023-02-16 08:05:23 -05:00
Alan Rominger
7f50679e68 Do not create setting with invalid value in data migration (#13576)
* Do not create setting with invalid value in data migration

* Add test for conf app data migration
2023-02-15 14:54:46 -05:00
John Westcott IV
52d071f9d1 Merge pull request #13573 from john-westcott-iv/ldap_issue
Fixing LDAP users not being properly added to managed teams
2023-02-15 13:25:34 -05:00
John Westcott IV
26a888547d Fixing variable with duplicate name which was causing errors with LDAP team addition 2023-02-14 14:56:13 -05:00
npithonDR
951eee944c Add additional rruleset tests 2023-02-13 09:50:11 +01:00
npithonDR
4630757f5f Fix error for byweekday in schedule_rruleset
Fix error:
```
fatal: [localhost]: FAILED! => {
    "msg": "An unhandled exception occurred while running the lookup plugin 'awx.awx.schedule_rruleset'. Error was a <class 'ansible.errors.AnsibleError'>, original message: In rule 1 byweekday must only contain values in monday, tuesday, wednesday, thursday, friday, saturday, sunday. In rule 1 byweekday must only contain values in monday, tuesday, wednesday, thursday, friday, saturday, sunday"
}
```

with:
```
    - name: Build a complex schedule for every monday using the rruleset plugin
      awx.awx.schedule:
        name: "Test build complex schedule"
        state: present
        unified_job_template: "template name"
        rrule: "{{ query('awx.awx.schedule_rruleset', '2030-04-30 10:30:45', rules=rrules, timezone='Europe/Paris' ) }}"
      vars:
        rrules:
          - frequency: 'day'
            interval: 1
            byweekday: 'monday'
```
2023-02-09 09:34:10 +01:00
Tarun CHawdhury
f0c91bb1f3 Description
Fixed Linting Issue with black formatter

Signed-off-by: Tarun CHawdhury <tarunchawdhury@gmail.com>
2023-01-25 16:22:46 -05:00
Tarun Chawdhury
b1dceefac3 Description
Fixed Linting Issue

Signed-off-by: Tarun Chawdhury <tarun@taruns-air.lan>
2023-01-25 12:46:51 -08:00
Tarun Chawdhury
bb65945b4f Description
Fixed Linting Issue

Signed-off-by: Tarun Chawdhury <tarun@taruns-air.lan>
2023-01-25 12:26:12 -08:00
Tarun Chawdhury
1b8f6630bf Description
Fixed Linting Issue

Signed-off-by: Tarun Chawdhury <tarun@taruns-air.lan>
2023-01-25 10:00:43 -08:00
Tarun Chawdhury
5157838d83 Description
Fixed Linting Issue

Signed-off-by: Tarun Chawdhury <tarun@taruns-air.lan>
2023-01-25 09:21:21 -08:00
Tarun Chawdhury
ebabea54e1 Fixed Lint Issue 2023-01-25 07:24:55 -08:00
Tarun Chawdhury
0eaa7816e9 Merge branch 'ansible:devel' into devel 2023-01-25 07:18:09 -08:00
John Westcott IV
eb9431ee1f Fixing hard coded project 2023-01-24 13:50:07 -05:00
John Westcott IV
fd6605932a Adding exception if unable to find the control plane EE 2023-01-24 13:50:07 -05:00
Tarun Chawdhury
83149519f8 Add Support for Template SSH Key Retrieval. Fixes Issue #13384
Description

Thycotic has various types of Secret Templates, such as Password and SSH Key.

The Thycotic API returns a str for Password secrets but a requests.models.Response object for SSH Key secrets. The current implementation only handles the Password template, so for SSH Key retrieval the code needs to return the str extracted from the requests.models.Response.

Signed-off-by: Tarun CHawdhury <tarunchawdhury@gmail.com>
2023-01-16 10:49:44 -05:00
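A minimal sketch of the handling described in the commit above; the helper name is an assumption for illustration, not code from the plugin.

```python
import requests

def normalize_secret(value):
    # Password templates come back as plain strings; pass them through.
    # SSH Key templates come back as a requests.models.Response, so return
    # its text body instead of the Response object itself.
    if isinstance(value, requests.models.Response):
        return value.text
    return value
```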
John Westcott IV
5d96ee084d Adding endswith(awx) to stage 2022-12-08 16:36:04 -05:00
John Westcott IV
e2cee10767 Update .github/workflows/promote.yml
Co-authored-by: Shane McDonald <me@shanemcd.com>
2022-12-08 16:34:13 -05:00
John Westcott IV
31c2e1a450 Only allow promote and stage to run on the awx repo 2022-12-07 14:09:36 -05:00
Alex
b3bda415da build: harden label_issue.yml permissions
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-25 18:12:14 +02:00
Alex
21291b53fd build: harden label_pr.yml permissions
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-25 18:10:53 +02:00
Alex
3eb748ff1f build: harden promote.yml permissions
Signed-off-by: Alex <aleksandrosansan@gmail.com>
2022-09-25 18:07:10 +02:00
Martin Vician
6d2c10ad02 Added domain item and authorizer for TSS 2022-08-05 14:13:12 +01:00
tongtie
ede9d961da fix: Internationalization causes the project to be unable to choose the manual option 2021-09-14 22:20:52 +08:00
332 changed files with 19900 additions and 10029 deletions

View File

@@ -7,6 +7,7 @@ on:
branches:
- devel
- release_*
- feature_*
jobs:
push:
if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
@@ -20,6 +21,12 @@ jobs:
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Set lower case owner name
run: |
echo "OWNER_LC=${OWNER,,}" >>${GITHUB_ENV}
env:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
@@ -31,15 +38,18 @@ jobs:
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${{ github.repository_owner }}/awx_kube_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/} || :
docker pull ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/} || :
- name: Build images
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
- name: Push image
run: |
docker push ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${{ github.repository_owner }}/awx_kube_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}

View File

@@ -6,6 +6,10 @@ on:
- opened
- reopened
permissions:
contents: read # to fetch code
issues: write # to label issues
jobs:
triage:
runs-on: ubuntu-latest

View File

@@ -7,6 +7,10 @@ on:
- reopened
- synchronize
permissions:
contents: read # to determine modified files (actions/labeler)
pull-requests: write # to add labels to PRs (actions/labeler)
jobs:
triage:
runs-on: ubuntu-latest

View File

@@ -8,8 +8,12 @@ on:
release:
types: [published]
permissions:
contents: read # to fetch code (actions/checkout)
jobs:
promote:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
steps:
- name: Checkout awx

View File

@@ -21,6 +21,7 @@ on:
jobs:
stage:
if: endsWith(github.repository, '/awx')
runs-on: ubuntu-latest
permissions:
packages: write

3
.gitignore vendored
View File

@@ -161,3 +161,6 @@ use_dev_supervisor.txt
/_build/
/_build_kube_dev/
/Dockerfile.kube-dev
awx/ui_next/src
awx/ui_next/build

View File

@@ -6,6 +6,7 @@ recursive-include awx/templates *.html
recursive-include awx/api/templates *.md *.html *.yml
recursive-include awx/ui/build *.html
recursive-include awx/ui/build *
recursive-include awx/ui_next/build *
recursive-include awx/playbooks *.yml
recursive-include awx/lib/site-packages *
recursive-include awx/plugins *.ps1

106
Makefile
View File

@@ -1,4 +1,7 @@
-include awx/ui_next/Makefile
PYTHON ?= python3.9
DOCKER_COMPOSE ?= docker-compose
OFFICIAL ?= no
NODE ?= node
NPM_BIN ?= npm
@@ -34,6 +37,8 @@ SPLUNK ?= false
PROMETHEUS ?= false
# If set to true docker-compose will also start a grafana instance
GRAFANA ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
VENV_BASE ?= /var/lib/awx/venv
@@ -83,7 +88,7 @@ clean-schema:
clean-languages:
rm -f $(I18N_FLAG_FILE)
find ./awx/locale/ -type f -regex ".*\.mo$" -delete
find ./awx/locale/ -type f -regex '.*\.mo$$' -delete
## Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
@@ -203,19 +208,7 @@ uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
uwsgi -b 32768 \
--socket 127.0.0.1:8050 \
--module=awx.wsgi:application \
--home=/var/lib/awx/venv/awx \
--chdir=/awx_devel/ \
--vacuum \
--processes=5 \
--harakiri=120 --master \
--no-orphans \
--max-requests=1000 \
--stats /tmp/stats.socket \
--lazy-apps \
--logformat "%(addr) %(method) %(uri) - %(proto) %(status)"
uwsgi /etc/tower/uwsgi.ini
awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
@@ -226,12 +219,6 @@ daphne:
fi; \
daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer
wsbroadcast:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_wsbroadcast
## Run to start the background task dispatcher for development.
dispatcher:
@if [ "$(VENV_BASE)" ]; then \
@@ -239,7 +226,6 @@ dispatcher:
fi; \
$(PYTHON) manage.py run_dispatcher
## Run to start the zeromq callback receiver
receiver:
@if [ "$(VENV_BASE)" ]; then \
@@ -256,6 +242,34 @@ jupyter:
fi; \
$(MANAGEMENT_COMMAND) shell_plus --notebook
## Start the rsyslog configurer process in background in development environment.
run-rsyslog-configurer:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_rsyslog_configurer
## Start cache_clear process in background in development environment.
run-cache-clear:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_cache_clear
## Start the wsrelay process in background in development environment.
run-wsrelay:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_wsrelay
## Start the heartbeat process in background in development environment.
run-heartbeet:
@if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi; \
$(PYTHON) manage.py run_heartbeet
reports:
mkdir -p $@
@@ -422,12 +436,14 @@ ui-release: $(UI_BUILD_FLAG_FILE)
ui-devel: awx/ui/node_modules
@$(MAKE) -B $(UI_BUILD_FLAG_FILE)
mkdir -p /var/lib/awx/public/static/css
mkdir -p /var/lib/awx/public/static/js
mkdir -p /var/lib/awx/public/static/media
cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css
cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js
cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media
@if [ -d "/var/lib/awx" ] ; then \
mkdir -p /var/lib/awx/public/static/css; \
mkdir -p /var/lib/awx/public/static/js; \
mkdir -p /var/lib/awx/public/static/media; \
cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css; \
cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js; \
cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media; \
fi
ui-devel-instrumented: awx/ui/node_modules
$(NPM_BIN) --prefix awx/ui --loglevel warn run start-instrumented
@@ -454,11 +470,12 @@ ui-test-general:
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui/ test-general --runInBand
# NOTE: The make target ui-next is imported from awx/ui_next/Makefile
HEADLESS ?= no
ifeq ($(HEADLESS), yes)
dist/$(SDIST_TAR_FILE):
else
dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE)
dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE) ui-next
endif
$(PYTHON) -m build -s
ln -sf $(SDIST_TAR_FILE) dist/awx.tar.gz
@@ -504,25 +521,25 @@ docker-compose-sources: .git/hooks/pre-commit
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) $(EXTRA_SOURCES_ANSIBLE_OPTS)
-e enable_grafana=$(GRAFANA) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources
echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
docker-compose-test: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
docker-compose-runtest: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
docker-compose-build-swagger: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
SCHEMA_DIFF_BASE_BRANCH ?= devel
detect-schema-change: genschema
@@ -531,7 +548,7 @@ detect-schema-change: genschema
diff -u -b reference-schema.json schema.json
docker-compose-clean: awx/projects
docker-compose -f tools/docker-compose/_sources/docker-compose.yml rm -sf
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml rm -sf
docker-compose-container-group-clean:
@if [ -f "tools/docker-compose-minikube/_sources/minikube" ]; then \
@@ -547,10 +564,8 @@ docker-compose-build:
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
docker-clean:
$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
if [ "$(shell docker images | grep awx_devel)" ]; then \
docker images | grep awx_devel | awk '{print $$3}' | xargs docker rmi --force; \
fi
-$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
@@ -559,10 +574,10 @@ docker-refresh: docker-clean docker-compose
## Docker Development Environment with Elastic Stack Connected
docker-compose-elk: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: awx/projects docker-compose-sources
docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-container-group:
MINIKUBE_CONTAINER_GROUP=true make docker-compose
@@ -584,6 +599,7 @@ VERSION:
PYTHON_VERSION:
@echo "$(PYTHON)" | sed 's:python::'
.PHONY: Dockerfile
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml -e receptor_image=$(RECEPTOR_IMAGE)
@@ -664,3 +680,7 @@ help/generate:
} \
{ lastLine = $$0 }' $(MAKEFILE_LIST) | sort -u
@printf "\n"
## Display help for ui-next targets
help/ui-next:
@make -s help MAKEFILE_LIST="awx/ui_next/Makefile"

View File

@@ -1,5 +1,4 @@
# Django
from django.conf import settings
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -9,6 +8,7 @@ from rest_framework import serializers
from awx.conf import fields, register, register_validate
from awx.api.fields import OAuth2ProviderField
from oauth2_provider.settings import oauth2_settings
from awx.sso.common import is_remote_auth_enabled
register(
@@ -108,19 +108,8 @@ register(
def authentication_validate(serializer, attrs):
remote_auth_settings = [
'AUTH_LDAP_SERVER_URI',
'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY',
'SOCIAL_AUTH_GITHUB_KEY',
'SOCIAL_AUTH_GITHUB_ORG_KEY',
'SOCIAL_AUTH_GITHUB_TEAM_KEY',
'SOCIAL_AUTH_SAML_ENABLED_IDPS',
'RADIUS_SERVER',
'TACACSPLUS_HOST',
]
if attrs.get('DISABLE_LOCAL_AUTH', False):
if not any(getattr(settings, s, None) for s in remote_auth_settings):
raise serializers.ValidationError(_("There are no remote authentication systems configured."))
if attrs.get('DISABLE_LOCAL_AUTH', False) and not is_remote_auth_enabled():
raise serializers.ValidationError(_("There are no remote authentication systems configured."))
return attrs

View File

@@ -155,7 +155,7 @@ class FieldLookupBackend(BaseFilterBackend):
'search',
)
# A list of fields that we know can be filtered on without the possiblity
# A list of fields that we know can be filtered on without the possibility
# of introducing duplicates
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
@@ -268,7 +268,7 @@ class FieldLookupBackend(BaseFilterBackend):
continue
# HACK: make `created` available via API for the Django User ORM model
# so it keep compatiblity with other objects which exposes the `created` attr.
# so it keep compatibility with other objects which exposes the `created` attr.
if queryset.model._meta.object_name == 'User' and key.startswith('created'):
key = key.replace('created', 'date_joined')

View File

@@ -28,7 +28,7 @@ from rest_framework import generics
from rest_framework.response import Response
from rest_framework import status
from rest_framework import views
from rest_framework.permissions import AllowAny
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.negotiation import DefaultContentNegotiation
@@ -674,7 +674,7 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
location = None
created = True
# Retrive the sub object (whether created or by ID).
# Retrieve the sub object (whether created or by ID).
sub = get_object_or_400(self.model, pk=sub_id)
# Verify we have permission to attach.
@@ -822,7 +822,7 @@ def trigger_delayed_deep_copy(*args, **kwargs):
class CopyAPIView(GenericAPIView):
serializer_class = CopySerializer
permission_classes = (AllowAny,)
permission_classes = (IsAuthenticated,)
copy_return_serializer_class = None
new_in_330 = True
new_in_api_v2 = True

View File

@@ -25,6 +25,7 @@ __all__ = [
'UserPermission',
'IsSystemAdminOrAuditor',
'WorkflowApprovalPermission',
'AnalyticsPermission',
]
@@ -250,3 +251,16 @@ class IsSystemAdminOrAuditor(permissions.BasePermission):
class WebhookKeyPermission(permissions.BasePermission):
def has_object_permission(self, request, view, obj):
return request.user.can_access(view.model, 'admin', obj, request.data)
class AnalyticsPermission(permissions.BasePermission):
"""
Allows GET/POST/OPTIONS to system admins and system auditors.
"""
def has_permission(self, request, view):
if not (request.user and request.user.is_authenticated):
return False
if request.method in ["GET", "POST", "OPTIONS"]:
return request.user.is_superuser or request.user.is_system_auditor
return request.user.is_superuser

View File

@@ -60,7 +60,7 @@ class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
delattr(renderer_context['view'], '_request')
def get_raw_data_form(self, data, view, method, request):
# Set a flag on the view to indiciate to the view/serializer that we're
# Set a flag on the view to indicate to the view/serializer that we're
# creating a raw data form for the browsable API. Store the original
# request method to determine how to populate the raw data form.
if request.method in {'OPTIONS', 'DELETE'}:

View File

@@ -8,6 +8,7 @@ import logging
import re
from collections import OrderedDict
from datetime import timedelta
from uuid import uuid4
# OAuth2
from oauthlib import oauth2
@@ -55,6 +56,8 @@ from awx.main.models import (
ExecutionEnvironment,
Group,
Host,
HostMetric,
HostMetricSummaryMonthly,
Instance,
InstanceGroup,
InstanceLink,
@@ -108,13 +111,15 @@ from awx.main.utils import (
extract_ansible_vars,
encrypt_dict,
prefetch_page_capabilities,
get_external_account,
truncate_stdout,
get_licenser,
)
from awx.main.utils.filters import SmartFilter
from awx.main.utils.named_url_graph import reset_counters
from awx.main.scheduler.task_manager_models import TaskManagerModels
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.signals import update_inventory_computed_fields
from awx.main.validators import vars_validate_or_raise
@@ -124,6 +129,8 @@ from awx.api.fields import BooleanNullField, CharNullField, ChoiceNullField, Ver
# AWX Utils
from awx.api.validators import HostnameRegexValidator
from awx.sso.common import get_external_account
logger = logging.getLogger('awx.api.serializers')
# Fields that should be summarized regardless of object type.
@@ -151,11 +158,12 @@ SUMMARIZABLE_FK_FIELDS = {
'kind',
),
'host': DEFAULT_SUMMARY_FIELDS,
'constructed_host': DEFAULT_SUMMARY_FIELDS,
'group': DEFAULT_SUMMARY_FIELDS,
'default_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
'execution_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type', 'allow_override'),
'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type', 'allow_override'),
'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed'),
'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'kubernetes', 'credential_type_id'),
'signature_validation_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'credential_type_id'),
@@ -184,6 +192,11 @@ SUMMARIZABLE_FK_FIELDS = {
}
# These fields can be edited on a constructed inventory's generated source (possibly by using the constructed
# inventory's special API endpoint, but also by using the inventory sources endpoint).
CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS = ('source_vars', 'update_cache_timeout', 'limit', 'verbosity')
def reverse_gfk(content_object, request):
"""
Computes a reverse for a GenericForeignKey field.
@@ -536,7 +549,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
#
# This logic is to force rendering choice's on an uneditable field.
# Note: Consider expanding this rendering for more than just choices fields
# Note: This logic works in conjuction with
# Note: This logic works in conjunction with
if hasattr(model_field, 'choices') and model_field.choices:
was_editable = model_field.editable
model_field.editable = True
@@ -982,28 +995,31 @@ class UserSerializer(BaseSerializer):
django_validate_password(value)
if not self.instance and value in (None, ''):
raise serializers.ValidationError(_('Password required for new User.'))
# Check if a password is too long
password_max_length = User._meta.get_field('password').max_length
if len(value) > password_max_length:
raise serializers.ValidationError(_('Password max length is {}'.format(password_max_length)))
if getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH', 0) and len(value) < getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH'):
raise serializers.ValidationError(_('Password must be at least {} characters long.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH'))))
if getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS', 0) and sum(c.isdigit() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS'):
raise serializers.ValidationError(_('Password must contain at least {} digits.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS'))))
if getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER', 0) and sum(c.isupper() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER'):
raise serializers.ValidationError(
_('Password must contain at least {} uppercase characters.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER')))
)
if getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL', 0) and sum(not c.isalnum() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL'):
raise serializers.ValidationError(
_('Password must contain at least {} special characters.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL')))
)
return value
def _update_password(self, obj, new_password):
# For now we're not raising an error, just not saving password for
# users managed by LDAP who already have an unusable password set.
if getattr(settings, 'AUTH_LDAP_SERVER_URI', None):
try:
if obj.pk and obj.profile.ldap_dn and not obj.has_usable_password():
new_password = None
except AttributeError:
pass
if (
getattr(settings, 'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_GITHUB_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_GITHUB_ORG_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_GITHUB_TEAM_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_SAML_ENABLED_IDPS', None)
) and obj.social_auth.all():
new_password = None
if (getattr(settings, 'RADIUS_SERVER', None) or getattr(settings, 'TACACSPLUS_HOST', None)) and obj.enterprise_auth.all():
new_password = None
if new_password:
# Get external password will return something like ldap or enterprise or None if the user isn't external. We only want to allow a password update for a None option
if new_password and not self.get_external_account(obj):
obj.set_password(new_password)
obj.save(update_fields=['password'])
@@ -1792,12 +1808,12 @@ class ConstructedInventorySerializer(InventorySerializer):
class Meta:
model = Inventory
fields = ('*', '-host_filter', 'source_vars', 'update_cache_timeout', 'limit', 'verbosity')
fields = ('*', '-host_filter') + CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS
read_only_fields = ('*', 'kind')
def pop_inv_src_data(self, data):
inv_src_data = {}
for field in ('source_vars', 'update_cache_timeout', 'limit', 'verbosity'):
for field in CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS:
if field in data:
# values always need to be removed, as they are not valid for Inventory model
value = data.pop(field)
@@ -1885,6 +1901,7 @@ class HostSerializer(BaseSerializerWithVariables):
)
if obj.inventory.kind == 'constructed':
res['original_host'] = self.reverse('api:host_detail', kwargs={'pk': obj.instance_id})
res['ansible_facts'] = self.reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.instance_id})
if obj.inventory:
res['inventory'] = self.reverse('api:inventory_detail', kwargs={'pk': obj.inventory.pk})
if obj.last_job:
@@ -1906,6 +1923,10 @@ class HostSerializer(BaseSerializerWithVariables):
group_list = [{'id': g.id, 'name': g.name} for g in obj.groups.all().order_by('id')[:5]]
group_cnt = obj.groups.count()
d.setdefault('groups', {'count': group_cnt, 'results': group_list})
if obj.inventory.kind == 'constructed':
summaries_qs = obj.constructed_host_summaries
else:
summaries_qs = obj.job_host_summaries
d.setdefault(
'recent_jobs',
[
@@ -1916,7 +1937,7 @@ class HostSerializer(BaseSerializerWithVariables):
'status': j.job.status,
'finished': j.job.finished,
}
for j in obj.job_host_summaries.select_related('job__job_template').order_by('-created').defer('job__extra_vars', 'job__artifacts')[:5]
for j in summaries_qs.select_related('job__job_template').order_by('-created').defer('job__extra_vars', 'job__artifacts')[:5]
],
)
return d
@@ -1959,7 +1980,7 @@ class HostSerializer(BaseSerializerWithVariables):
vars_dict = parse_yaml_or_json(variables)
vars_dict['ansible_ssh_port'] = port
attrs['variables'] = json.dumps(vars_dict)
if Group.objects.filter(name=name, inventory=inventory).exists():
if inventory and Group.objects.filter(name=name, inventory=inventory).exists():
raise serializers.ValidationError(_('A Group with that name already exists.'))
return super(HostSerializer, self).validate(attrs)
@@ -2051,6 +2072,130 @@ class GroupSerializer(BaseSerializerWithVariables):
return ret
class BulkHostSerializer(HostSerializer):
class Meta:
model = Host
fields = (
'name',
'enabled',
'instance_id',
'description',
'variables',
)
class BulkHostCreateSerializer(serializers.Serializer):
inventory = serializers.PrimaryKeyRelatedField(
queryset=Inventory.objects.all(), required=True, write_only=True, help_text=_('Primary Key ID of inventory to add hosts to.')
)
hosts = serializers.ListField(
child=BulkHostSerializer(),
allow_empty=False,
max_length=100000,
write_only=True,
help_text=_('List of hosts to be created, JSON. e.g. [{"name": "example.com"}, {"name": "127.0.0.1"}]'),
)
class Meta:
model = Inventory
fields = ('inventory', 'hosts')
read_only_fields = ()
def raise_if_host_counts_violated(self, attrs):
validation_info = get_licenser().validate()
org = attrs['inventory'].organization
if org:
org_active_count = Host.objects.org_active_count(org.id)
new_hosts = [h['name'] for h in attrs['hosts']]
org_net_new_host_count = len(new_hosts) - Host.objects.filter(inventory__organization=1, name__in=new_hosts).values('name').distinct().count()
if org.max_hosts > 0 and org_active_count + org_net_new_host_count > org.max_hosts:
raise PermissionDenied(
_(
"You have already reached the maximum number of %s hosts"
" allowed for your organization. Contact your System Administrator"
" for assistance." % org.max_hosts
)
)
# Don't check license if it is open license
if validation_info.get('license_type', 'UNLICENSED') == 'open':
return
sys_free_instances = validation_info.get('free_instances', 0)
system_net_new_host_count = Host.objects.exclude(name__in=new_hosts).count()
if system_net_new_host_count > sys_free_instances:
hard_error = validation_info.get('trial', False) is True or validation_info['instance_count'] == 10
if hard_error:
# Only raise permission error for trial, otherwise just log a warning as we do in other inventory import situations
raise PermissionDenied(_("Host count exceeds available instances."))
logger.warning(_("Number of hosts allowed by license has been exceeded."))
def validate(self, attrs):
request = self.context.get('request', None)
inv = attrs['inventory']
if inv.kind != '':
raise serializers.ValidationError(_('Hosts can only be created in manual inventories (not smart or constructed types).'))
if len(attrs['hosts']) > settings.BULK_HOST_MAX_CREATE:
raise serializers.ValidationError(_('Number of hosts exceeds system setting BULK_HOST_MAX_CREATE'))
if request and not request.user.is_superuser:
if request.user not in inv.admin_role:
raise serializers.ValidationError(_(f'Inventory with id {inv.id} not found or lack permissions to add hosts.'))
current_hostnames = set(inv.hosts.values_list('name', flat=True))
new_names = [host['name'] for host in attrs['hosts']]
duplicate_new_names = [n for n in new_names if n in current_hostnames or new_names.count(n) > 1]
if duplicate_new_names:
raise serializers.ValidationError(_(f'Hostnames must be unique in an inventory. Duplicates found: {duplicate_new_names}'))
self.raise_if_host_counts_violated(attrs)
_now = now()
for host in attrs['hosts']:
host['created'] = _now
host['modified'] = _now
host['inventory'] = inv
return attrs
def create(self, validated_data):
# This assumes total_hosts is up to date, and it can get out of date if the inventory computed fields have not been updated lately.
# If we wanted to side step this we could query Hosts.objects.filter(inventory...)
old_total_hosts = validated_data['inventory'].total_hosts
result = [Host(**attrs) for attrs in validated_data['hosts']]
try:
Host.objects.bulk_create(result)
except Exception as e:
raise serializers.ValidationError({"detail": _(f"cannot create host, host creation error {e}")})
new_total_hosts = old_total_hosts + len(result)
request = self.context.get('request', None)
changes = {'total_hosts': [old_total_hosts, new_total_hosts]}
activity_entry = ActivityStream.objects.create(
operation='update',
object1='inventory',
changes=json.dumps(changes),
actor=request.user,
)
activity_entry.inventory.add(validated_data['inventory'])
# This actually updates the cached "total_hosts" field on the inventory
update_inventory_computed_fields.delay(validated_data['inventory'].id)
return_keys = [k for k in BulkHostSerializer().fields.keys()] + ['id']
return_data = {}
host_data = []
for r in result:
item = {k: getattr(r, k) for k in return_keys}
if not settings.IS_TESTING_MODE:
# sqlite acts different with bulk_create -- it doesn't return the id of the objects
# to get it, you have to do an additional query, which is not useful for our tests
item['url'] = reverse('api:host_detail', kwargs={'pk': r.id})
item['inventory'] = reverse('api:inventory_detail', kwargs={'pk': validated_data['inventory'].id})
host_data.append(item)
return_data['url'] = reverse('api:inventory_detail', kwargs={'pk': validated_data['inventory'].id})
return_data['hosts'] = host_data
return return_data
class GroupTreeSerializer(GroupSerializer):
children = serializers.SerializerMethodField()
@@ -2106,6 +2251,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
'source',
'source_path',
'source_vars',
'scm_branch',
'credential',
'enabled_var',
'enabled_value',
@@ -2244,8 +2390,6 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
obj = super(InventorySourceSerializer, self).update(obj, validated_data)
if deprecated_fields:
self._update_deprecated_fields(deprecated_fields, obj)
if obj.source == 'constructed':
raise serializers.ValidationError({'error': _("Cannot edit source of type constructed.")})
return obj
# TODO: remove when old 'credential' fields are removed
@@ -2269,16 +2413,25 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
if get_field_from_model_or_attrs('source') == 'scm':
if self.instance and self.instance.source == 'constructed':
allowed_fields = CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS
for field in attrs:
if attrs[field] != getattr(self.instance, field) and field not in allowed_fields:
raise serializers.ValidationError({"error": _("Cannot change field '{}' on a constructed inventory source.").format(field)})
elif get_field_from_model_or_attrs('source') == 'scm':
if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None:
raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")})
elif (get_field_from_model_or_attrs('source') == 'constructed') and (self.instance and self.instance.source != 'constructed'):
raise serializers.ValidationError({"Error": _('constructed not a valid source for inventory')})
elif get_field_from_model_or_attrs('source') == 'constructed':
raise serializers.ValidationError({"error": _('constructed not a valid source for inventory')})
else:
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path']))
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path', 'scm_branch']))
if redundant_scm_fields:
raise serializers.ValidationError({"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))})
project = get_field_from_model_or_attrs('source_project')
if get_field_from_model_or_attrs('scm_branch') and not project.allow_override:
raise serializers.ValidationError({'scm_branch': _('Project does not allow overriding branch.')})
attrs = super(InventorySourceSerializer, self).validate(attrs)
# Check type consistency of source and cloud credential, if provided
@@ -4011,6 +4164,7 @@ class JobHostSummarySerializer(BaseSerializer):
'-description',
'job',
'host',
'constructed_host',
'host_name',
'changed',
'dark',
@@ -4094,7 +4248,7 @@ class JobEventSerializer(BaseSerializer):
# Show full stdout for playbook_on_* events.
if obj and obj.event.startswith('playbook_on'):
return data
# If the view logic says to not trunctate (request was to the detail view or a param was used)
# If the view logic says to not truncate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False):
return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
@@ -4125,7 +4279,7 @@ class ProjectUpdateEventSerializer(JobEventSerializer):
# raw SCM URLs in their stdout (which *could* contain passwords)
# attempt to detect and filter HTTP basic auth passwords in the stdout
# of these types of events
if obj.event_data.get('task_action') in ('git', 'svn'):
if obj.event_data.get('task_action') in ('git', 'svn', 'ansible.builtin.git', 'ansible.builtin.svn'):
try:
return json.loads(UriCleaner.remove_sensitive(json.dumps(obj.event_data)))
except Exception:
@@ -4169,7 +4323,7 @@ class AdHocCommandEventSerializer(BaseSerializer):
def to_representation(self, obj):
data = super(AdHocCommandEventSerializer, self).to_representation(obj)
# If the view logic says to not trunctate (request was to the detail view or a param was used)
# If the view logic says to not truncate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False):
return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
@@ -4516,6 +4670,271 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return accepted
class BulkJobNodeSerializer(WorkflowJobNodeSerializer):
# We don't do a PrimaryKeyRelatedField for unified_job_template and others, because that increases the number
# of database queries, rather we take them as integer and later convert them to objects in get_objectified_jobs
unified_job_template = serializers.IntegerField(
required=True, min_value=1, help_text=_('Primary key of the template for this job, can be a job template or inventory source.')
)
inventory = serializers.IntegerField(required=False, min_value=1)
execution_environment = serializers.IntegerField(required=False, min_value=1)
# many-to-many fields
credentials = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
labels = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
# TODO: Use instance group role added via PR 13584(once merged), for now everything related to instance group is commented
# instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
class Meta:
model = WorkflowJobNode
fields = ('*', 'credentials', 'labels') # m2m fields are not canonical for WJ nodes, TODO: add instance_groups once supported
def validate(self, attrs):
return super(LaunchConfigurationBaseSerializer, self).validate(attrs)
def get_validation_exclusions(self, obj=None):
ret = super().get_validation_exclusions(obj)
ret.extend(['unified_job_template', 'inventory', 'execution_environment'])
return ret
class BulkJobLaunchSerializer(serializers.Serializer):
name = serializers.CharField(default='Bulk Job Launch', max_length=512, write_only=True, required=False, allow_blank=True) # limited by max name of jobs
jobs = BulkJobNodeSerializer(
many=True,
allow_empty=False,
write_only=True,
max_length=100000,
help_text=_('List of jobs to be launched, JSON. e.g. [{"unified_job_template": 7}, {"unified_job_template": 10}]'),
)
description = serializers.CharField(write_only=True, required=False, allow_blank=False)
extra_vars = serializers.JSONField(write_only=True, required=False)
organization = serializers.PrimaryKeyRelatedField(
queryset=Organization.objects.all(),
required=False,
default=None,
allow_null=True,
write_only=True,
help_text=_('Inherit permissions from this organization. If not provided, a organization the user is a member of will be selected automatically.'),
)
inventory = serializers.PrimaryKeyRelatedField(queryset=Inventory.objects.all(), required=False, write_only=True)
limit = serializers.CharField(write_only=True, required=False, allow_blank=False)
scm_branch = serializers.CharField(write_only=True, required=False, allow_blank=False)
skip_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
job_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
class Meta:
model = WorkflowJob
fields = ('name', 'jobs', 'description', 'extra_vars', 'organization', 'inventory', 'limit', 'scm_branch', 'skip_tags', 'job_tags')
read_only_fields = ()
def validate(self, attrs):
request = self.context.get('request', None)
identifiers = set()
if len(attrs['jobs']) > settings.BULK_JOB_MAX_LAUNCH:
raise serializers.ValidationError(_('Number of requested jobs exceeds system setting BULK_JOB_MAX_LAUNCH'))
for node in attrs['jobs']:
if 'identifier' in node:
if node['identifier'] in identifiers:
raise serializers.ValidationError(_(f"Identifier {node['identifier']} not unique"))
identifiers.add(node['identifier'])
else:
node['identifier'] = str(uuid4())
requested_ujts = {j['unified_job_template'] for j in attrs['jobs']}
requested_use_inventories = {job['inventory'] for job in attrs['jobs'] if 'inventory' in job}
requested_use_execution_environments = {job['execution_environment'] for job in attrs['jobs'] if 'execution_environment' in job}
requested_use_credentials = set()
requested_use_labels = set()
# requested_use_instance_groups = set()
for job in attrs['jobs']:
for cred in job.get('credentials', []):
requested_use_credentials.add(cred)
for label in job.get('labels', []):
requested_use_labels.add(label)
# for instance_group in job.get('instance_groups', []):
# requested_use_instance_groups.add(instance_group)
key_to_obj_map = {
"unified_job_template": {obj.id: obj for obj in UnifiedJobTemplate.objects.filter(id__in=requested_ujts)},
"inventory": {obj.id: obj for obj in Inventory.objects.filter(id__in=requested_use_inventories)},
"credentials": {obj.id: obj for obj in Credential.objects.filter(id__in=requested_use_credentials)},
"labels": {obj.id: obj for obj in Label.objects.filter(id__in=requested_use_labels)},
# "instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"execution_environment": {obj.id: obj for obj in ExecutionEnvironment.objects.filter(id__in=requested_use_execution_environments)},
}
ujts = {}
for ujt in key_to_obj_map['unified_job_template'].values():
ujts.setdefault(type(ujt), [])
ujts[type(ujt)].append(ujt)
unallowed_types = set(ujts.keys()) - set([JobTemplate, Project, InventorySource, WorkflowJobTemplate])
if unallowed_types:
type_names = ' '.join([cls._meta.verbose_name.title() for cls in unallowed_types])
raise serializers.ValidationError(_("Template types {type_names} not allowed in bulk jobs").format(type_names=type_names))
for model, obj_list in ujts.items():
role_field = 'execute_role' if issubclass(model, (JobTemplate, WorkflowJobTemplate)) else 'update_role'
self.check_list_permission(model, set([obj.id for obj in obj_list]), role_field)
self.check_organization_permission(attrs, request)
if 'inventory' in attrs:
requested_use_inventories.add(attrs['inventory'].id)
self.check_list_permission(Inventory, requested_use_inventories, 'use_role')
self.check_list_permission(Credential, requested_use_credentials, 'use_role')
self.check_list_permission(Label, requested_use_labels)
# self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(ExecutionEnvironment, requested_use_execution_environments) # TODO: change if roles introduced
jobs_object = self.get_objectified_jobs(attrs, key_to_obj_map)
attrs['jobs'] = jobs_object
if 'extra_vars' in attrs:
extra_vars_dict = parse_yaml_or_json(attrs['extra_vars'])
attrs['extra_vars'] = json.dumps(extra_vars_dict)
attrs = super().validate(attrs)
return attrs
def check_list_permission(self, model, id_list, role_field=None):
if not id_list:
return
user = self.context['request'].user
if role_field is None: # implies "read" level permission is required
access_qs = user.get_queryset(model)
else:
access_qs = model.accessible_objects(user, role_field)
not_allowed = set(id_list) - set(access_qs.filter(id__in=id_list).values_list('id', flat=True))
if not_allowed:
raise serializers.ValidationError(
_("{model_name} {not_allowed} not found or you don't have permissions to access it").format(
model_name=model._meta.verbose_name_plural.title(), not_allowed=not_allowed
)
)
def create(self, validated_data):
request = self.context.get('request', None)
launch_user = request.user if request else None
job_node_data = validated_data.pop('jobs')
wfj_deferred_attr_names = ('skip_tags', 'limit', 'job_tags')
wfj_deferred_vals = {}
for item in wfj_deferred_attr_names:
wfj_deferred_vals[item] = validated_data.pop(item, None)
wfj = WorkflowJob.objects.create(**validated_data, is_bulk_job=True, launch_type='manual', created_by=launch_user)
for key, val in wfj_deferred_vals.items():
if val:
setattr(wfj, key, val)
nodes = []
node_m2m_objects = {}
node_m2m_object_types_to_through_model = {
'credentials': WorkflowJobNode.credentials.through,
'labels': WorkflowJobNode.labels.through,
# 'instance_groups': WorkflowJobNode.instance_groups.through,
}
node_deferred_attr_names = (
'limit',
'scm_branch',
'verbosity',
'forks',
'diff_mode',
'job_tags',
'job_type',
'skip_tags',
'job_slice_count',
'timeout',
)
node_deferred_attrs = {}
for node_attrs in job_node_data:
# we need to add any m2m objects after creation via the through model
node_m2m_objects[node_attrs['identifier']] = {}
node_deferred_attrs[node_attrs['identifier']] = {}
for item in node_m2m_object_types_to_through_model.keys():
if item in node_attrs:
node_m2m_objects[node_attrs['identifier']][item] = node_attrs.pop(item)
# Some attributes are not accepted by WorkflowJobNode __init__, we have to set them after
for item in node_deferred_attr_names:
if item in node_attrs:
node_deferred_attrs[node_attrs['identifier']][item] = node_attrs.pop(item)
# Create the node objects
node_obj = WorkflowJobNode(workflow_job=wfj, created=wfj.created, modified=wfj.modified, **node_attrs)
# we can set the deferred attrs now
for item, value in node_deferred_attrs[node_attrs['identifier']].items():
setattr(node_obj, item, value)
# the node is now ready to be bulk created
nodes.append(node_obj)
# we'll need this later when we do the m2m through model bulk create
node_m2m_objects[node_attrs['identifier']]['node'] = node_obj
WorkflowJobNode.objects.bulk_create(nodes)
# Deal with the m2m objects we have to create once the node exists
for field_name, through_model in node_m2m_object_types_to_through_model.items():
through_model_objects = []
for node_identifier in node_m2m_objects.keys():
if field_name in node_m2m_objects[node_identifier] and field_name == 'credentials':
for cred in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(credential=cred, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if field_name in node_m2m_objects[node_identifier] and field_name == 'labels':
for label in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(label=label, workflowjobnode=node_m2m_objects[node_identifier]['node']))
# if obj_type in node_m2m_objects[node_identifier] and obj_type == 'instance_groups':
# for instance_group in node_m2m_objects[node_identifier][obj_type]:
# through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if through_model_objects:
through_model.objects.bulk_create(through_model_objects)
wfj.save()
wfj.signal_start()
return WorkflowJobSerializer().to_representation(wfj)
def check_organization_permission(self, attrs, request):
# validate Organization
# - If the orgs is not set, set it to the org of the launching user
# - If the user is part of multiple orgs, throw a validation error saying user is part of multiple orgs, please provide one
if not request.user.is_superuser:
read_org_qs = Organization.accessible_objects(request.user, 'member_role')
if 'organization' not in attrs or attrs['organization'] == None or attrs['organization'] == '':
read_org_ct = read_org_qs.count()
if read_org_ct == 1:
attrs['organization'] = read_org_qs.first()
elif read_org_ct > 1:
raise serializers.ValidationError("User has permission to multiple Organizations, please set one of them in the request")
else:
raise serializers.ValidationError("User not part of any organization, please assign an organization to assign to the bulk job")
else:
allowed_orgs = set(read_org_qs.values_list('id', flat=True))
requested_org = attrs['organization']
if requested_org.id not in allowed_orgs:
raise ValidationError(_(f"Organization {requested_org.id} not found or you don't have permissions to access it"))
def get_objectified_jobs(self, attrs, key_to_obj_map):
objectified_jobs = []
# This loop is generalized so we should only have to add related items to the key_to_obj_map
for job in attrs['jobs']:
objectified_job = {}
for key, value in job.items():
if key in key_to_obj_map:
if isinstance(value, int):
objectified_job[key] = key_to_obj_map[key][value]
elif isinstance(value, list):
objectified_job[key] = [key_to_obj_map[key][item] for item in value]
else:
objectified_job[key] = value
objectified_jobs.append(objectified_job)
return objectified_jobs
class NotificationTemplateSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete', 'copy']
capabilities_prefetch = [{'copy': 'organization.admin'}]
@@ -4862,7 +5281,7 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
),
)
until = serializers.SerializerMethodField(
help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end an emptry string will be returned'),
help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end an empty string will be returned'),
)
class Meta:
@@ -5099,6 +5518,32 @@ class InstanceHealthCheckSerializer(BaseSerializer):
fields = read_only_fields
class HostMetricSerializer(BaseSerializer):
show_capabilities = ['delete']
class Meta:
model = HostMetric
fields = (
"id",
"hostname",
"url",
"first_automation",
"last_automation",
"last_deleted",
"automated_counter",
"deleted_counter",
"deleted",
"used_in_inventories",
)
class HostMetricSummaryMonthlySerializer(BaseSerializer):
class Meta:
model = HostMetricSummaryMonthly
read_only_fields = ("id", "date", "license_consumed", "license_capacity", "hosts_added", "hosts_deleted", "indirectly_managed_hosts")
fields = read_only_fields
class InstanceGroupSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete']
capacity = serializers.SerializerMethodField()
@@ -5184,6 +5629,8 @@ class InstanceGroupSerializer(BaseSerializer):
res = super(InstanceGroupSerializer, self).get_related(obj)
res['jobs'] = self.reverse('api:instance_group_unified_jobs_list', kwargs={'pk': obj.pk})
res['instances'] = self.reverse('api:instance_group_instance_list', kwargs={'pk': obj.pk})
res['access_list'] = self.reverse('api:instance_group_access_list', kwargs={'pk': obj.pk})
res['object_roles'] = self.reverse('api:instance_group_object_role_list', kwargs={'pk': obj.pk})
if obj.credential:
res['credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.credential_id})


@@ -1,4 +0,0 @@
Version 1 of the Ansible Tower REST API.
Make a GET request to this resource to obtain a list of all child resources
available via the API.


@@ -7,10 +7,12 @@ the following fields (some fields may not be visible to all users):
* `project_base_dir`: Path on the server where projects and playbooks are \
stored.
* `project_local_paths`: List of directories beneath `project_base_dir` to
use when creating/editing a project.
use when creating/editing a manual project.
* `time_zone`: The configured time zone for the server.
* `license_info`: Information about the current license.
* `version`: Version of Ansible Tower package installed.
* `custom_virtualenvs`: Deprecated venv locations from before migration to
execution environments. Export tooling is in `awx-manage` commands.
* `eula`: The current End-User License Agreement
{% endifmeth %}
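For illustration, a minimal client-side sketch of reading this resource with the Python `requests` library; the base URL and token are placeholders, and it assumes the endpoint is mounted at `/api/v2/config/`:

import requests

AWX_URL = "https://awx.example.com"   # placeholder controller URL
TOKEN = "<oauth2-token>"              # placeholder OAuth2 bearer token

response = requests.get(
    f"{AWX_URL}/api/v2/config/",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
config = response.json()
# fields such as license_info, project_base_dir and custom_virtualenvs may
# only be present for users with sufficient permissions
print(config["time_zone"], config["version"])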


@@ -0,0 +1,41 @@
# Bulk Host Create
This endpoint allows the client to create multiple hosts and associate them with an inventory. They may do this by providing the inventory ID and a list of JSON objects like those normally provided when creating hosts.
Example:
{
"inventory": 1,
"hosts": [
{"name": "example1.com", "variables": "ansible_connection: local"},
{"name": "example2.com"}
]
}
Return data:
{
"url": "/api/v2/inventories/3/hosts/",
"hosts": [
{
"name": "example1.com",
"enabled": true,
"instance_id": "",
"description": "",
"variables": "ansible_connection: local",
"id": 1255,
"url": "/api/v2/hosts/1255/",
"inventory": "/api/v2/inventories/3/"
},
{
"name": "example2.com",
"enabled": true,
"instance_id": "",
"description": "",
"variables": "",
"id": 1256,
"url": "/api/v2/hosts/1256/",
"inventory": "/api/v2/inventories/3/"
}
]
}
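A minimal sketch of calling this endpoint from Python with the requests library; the URL, token and IDs are placeholders, and error handling is omitted:

import requests

AWX_URL = "https://awx.example.com"   # placeholder controller URL
TOKEN = "<oauth2-token>"              # placeholder OAuth2 bearer token

payload = {
    "inventory": 1,
    "hosts": [
        {"name": "example1.com", "variables": "ansible_connection: local"},
        {"name": "example2.com"},
    ],
}

# one request creates every host and associates it with inventory 1
response = requests.post(
    f"{AWX_URL}/api/v2/bulk/host_create/",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
for host in response.json()["hosts"]:
    print(host["id"], host["url"])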


@@ -0,0 +1,13 @@
# Bulk Job Launch
This endpoint allows the client to launch multiple UnifiedJobTemplates at a time, alongside any parameters they would normally set at launch time.
Example:
{
"name": "my bulk job",
"jobs": [
{"unified_job_template": 7, "inventory": 2},
{"unified_job_template": 7, "credentials": [3]}
]
}
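A minimal sketch of launching a bulk job with Python requests; the URL, token and template IDs are placeholders. If the requesting user belongs to more than one organization, an `organization` field must also be supplied (see the serializer's organization check earlier in this changeset):

import requests

AWX_URL = "https://awx.example.com"   # placeholder controller URL
TOKEN = "<oauth2-token>"              # placeholder OAuth2 bearer token

payload = {
    "name": "my bulk job",
    # "organization": 1,  # required only when the user is a member of several organizations
    "jobs": [
        {"unified_job_template": 7, "inventory": 2},
        {"unified_job_template": 7, "credentials": [3]},
    ],
}

response = requests.post(
    f"{AWX_URL}/api/v2/bulk/job_launch/",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
# a successful launch returns 201 with the resulting workflow job representation
response.raise_for_status()
print(response.json())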


@@ -0,0 +1,3 @@
# Bulk Actions
This endpoint lists available bulk action APIs.


@@ -3,7 +3,7 @@ Make a GET request to this resource to retrieve aggregate statistics about inven
Including fetching the number of total hosts tracked by Tower over an amount of time and the current success or
failed status of hosts which have run jobs within an Inventory.
## Parmeters and Filtering
## Parameters and Filtering
The `period` of the data can be adjusted with:
@@ -24,7 +24,7 @@ Data about the number of hosts will be returned in the following format:
Each element contains an epoch timestamp represented in seconds and a numerical value indicating
the number of hosts that exist at a given moment
Data about failed and successfull hosts by inventory will be given as:
Data about failed and successful hosts by inventory will be given as:
{
"sources": [


@@ -2,7 +2,7 @@
Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing.
## Parmeters and Filtering
## Parameters and Filtering
The `period` of the data can be adjusted with:


@@ -1,11 +0,0 @@
# List Fact Scans for a Host Specific Host Scan
Make a GET request to this resource to retrieve system tracking data for a particular scan
You may filter by datetime:
`?datetime=2015-06-01`
and module
`?datetime=2015-06-01&module=ansible`


@@ -1,11 +0,0 @@
# List Fact Scans for a Host by Module and Date
Make a GET request to this resource to retrieve system tracking scans by module and date/time
You may filter scan runs using the `from` and `to` properties:
`?from=2015-06-01%2012:00:00&to=2015-06-03`
You may also filter by module
`?module=packages`


@@ -1 +0,0 @@
# List Red Hat Insights for a Host


@@ -0,0 +1,18 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:
Make a GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% endifmeth %}
{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:
Make a DELETE request to this resource to soft-delete this {{ model_verbose_name }}.
A soft deletion will mark the `deleted` field as true and exclude the host
metric from license calculations.
This may be undone if the same hostname is automated again later.
{% endifmeth %}
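As a rough sketch of how a client might use these endpoints (the list is served at `/api/v2/host_metrics/` and is restricted to system administrators and auditors); the URL, token and ID are placeholders:

import requests

AWX_URL = "https://awx.example.com"                 # placeholder controller URL
AUTH = {"Authorization": "Bearer <oauth2-token>"}   # placeholder OAuth2 token

# list metrics that have not been soft-deleted yet
response = requests.get(f"{AWX_URL}/api/v2/host_metrics/?deleted=false", headers=AUTH)
response.raise_for_status()
for metric in response.json()["results"]:
    print(metric["hostname"], metric["last_automation"])

# soft-delete one record; the API answers 204 No Content and the metric is
# excluded from license calculations until the host is automated again
requests.delete(f"{AWX_URL}/api/v2/host_metrics/1255/", headers=AUTH)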


@@ -18,7 +18,7 @@ inventory sources:
* `inventory_update`: ID of the inventory update job that was started.
(integer, read-only)
* `project_update`: ID of the project update job that was started if this inventory source is an SCM source.
(interger, read-only, optional)
(integer, read-only, optional)
Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories.


@@ -1,21 +0,0 @@
{% ifmeth GET %}
# Determine if a Job can be started
Make a GET request to this resource to determine if the job can be started and
whether any passwords are required to start the job. The response will include
the following fields:
* `can_start`: Flag indicating if this job can be started (boolean, read-only)
* `passwords_needed_to_start`: Password names required to start the job (array,
read-only)
{% endifmeth %}
{% ifmeth POST %}
# Start a Job
Make a POST request to this resource to start the job. If any passwords are
required, they must be passed via POST data.
If successful, the response status code will be 202. If any required passwords
are not provided, a 400 status code will be returned. If the job cannot be
started, a 405 status code will be returned.
{% endifmeth %}


@@ -2,6 +2,7 @@ receptor_user: awx
receptor_group: awx
receptor_verify: true
receptor_tls: true
receptor_mintls13: false
receptor_work_commands:
ansible-runner:
command: ansible-runner

awx/api/urls/analytics.py (new file)

@@ -0,0 +1,31 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
import awx.api.views.analytics as analytics
urls = [
re_path(r'^$', analytics.AnalyticsRootView.as_view(), name='analytics_root_view'),
re_path(r'^authorized/$', analytics.AnalyticsAuthorizedView.as_view(), name='analytics_authorized'),
re_path(r'^reports/$', analytics.AnalyticsReportsList.as_view(), name='analytics_reports_list'),
re_path(r'^report/(?P<slug>[\w-]+)/$', analytics.AnalyticsReportDetail.as_view(), name='analytics_report_detail'),
re_path(r'^report_options/$', analytics.AnalyticsReportOptionsList.as_view(), name='analytics_report_options_list'),
re_path(r'^adoption_rate/$', analytics.AnalyticsAdoptionRateList.as_view(), name='analytics_adoption_rate'),
re_path(r'^adoption_rate_options/$', analytics.AnalyticsAdoptionRateList.as_view(), name='analytics_adoption_rate_options'),
re_path(r'^event_explorer/$', analytics.AnalyticsEventExplorerList.as_view(), name='analytics_event_explorer'),
re_path(r'^event_explorer_options/$', analytics.AnalyticsEventExplorerList.as_view(), name='analytics_event_explorer_options'),
re_path(r'^host_explorer/$', analytics.AnalyticsHostExplorerList.as_view(), name='analytics_host_explorer'),
re_path(r'^host_explorer_options/$', analytics.AnalyticsHostExplorerList.as_view(), name='analytics_host_explorer_options'),
re_path(r'^job_explorer/$', analytics.AnalyticsJobExplorerList.as_view(), name='analytics_job_explorer'),
re_path(r'^job_explorer_options/$', analytics.AnalyticsJobExplorerList.as_view(), name='analytics_job_explorer_options'),
re_path(r'^probe_templates/$', analytics.AnalyticsProbeTemplatesList.as_view(), name='analytics_probe_templates_explorer'),
re_path(r'^probe_templates_options/$', analytics.AnalyticsProbeTemplatesList.as_view(), name='analytics_probe_templates_options'),
re_path(r'^probe_template_for_hosts/$', analytics.AnalyticsProbeTemplateForHostsList.as_view(), name='analytics_probe_template_for_hosts_explorer'),
re_path(r'^probe_template_for_hosts_options/$', analytics.AnalyticsProbeTemplateForHostsList.as_view(), name='analytics_probe_template_for_hosts_options'),
re_path(r'^roi_templates/$', analytics.AnalyticsRoiTemplatesList.as_view(), name='analytics_roi_templates_explorer'),
re_path(r'^roi_templates_options/$', analytics.AnalyticsRoiTemplatesList.as_view(), name='analytics_roi_templates_options'),
]
__all__ = ['urls']


@@ -0,0 +1,10 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import HostMetricList, HostMetricDetail
urls = [re_path(r'^$', HostMetricList.as_view(), name='host_metric_list'), re_path(r'^(?P<pk>[0-9]+)/$', HostMetricDetail.as_view(), name='host_metric_detail')]
__all__ = ['urls']


@@ -3,7 +3,14 @@
from django.urls import re_path
from awx.api.views import InstanceGroupList, InstanceGroupDetail, InstanceGroupUnifiedJobsList, InstanceGroupInstanceList
from awx.api.views import (
InstanceGroupList,
InstanceGroupDetail,
InstanceGroupUnifiedJobsList,
InstanceGroupInstanceList,
InstanceGroupAccessList,
InstanceGroupObjectRolesList,
)
urls = [
@@ -11,6 +18,8 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'),
re_path(r'^(?P<pk>[0-9]+)/jobs/$', InstanceGroupUnifiedJobsList.as_view(), name='instance_group_unified_jobs_list'),
re_path(r'^(?P<pk>[0-9]+)/instances/$', InstanceGroupInstanceList.as_view(), name='instance_group_instance_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', InstanceGroupAccessList.as_view(), name='instance_group_access_list'),
re_path(r'^(?P<pk>[0-9]+)/object_roles/$', InstanceGroupObjectRolesList.as_view(), name='instance_group_object_role_list'),
]
__all__ = ['urls']


@@ -30,10 +30,19 @@ from awx.api.views import (
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
# HostMetricSummaryMonthlyList, # It will be enabled in a future version of AWX
)
from awx.api.views.bulk import (
BulkView,
BulkHostCreateView,
BulkJobLaunchView,
)
from awx.api.views.mesh_visualizer import MeshVisualizer
from awx.api.views.metrics import MetricsView
from awx.api.views.analytics import AWX_ANALYTICS_API_PREFIX
from .organization import urls as organization_urls
from .user import urls as user_urls
@@ -43,6 +52,7 @@ from .inventory import urls as inventory_urls, constructed_inventory_urls
from .execution_environments import urls as execution_environment_urls
from .team import urls as team_urls
from .host import urls as host_urls
from .host_metric import urls as host_metric_urls
from .group import urls as group_urls
from .inventory_source import urls as inventory_source_urls
from .inventory_update import urls as inventory_update_urls
@@ -73,7 +83,7 @@ from .oauth2 import urls as oauth2_urls
from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -112,6 +122,9 @@ v2_urls = [
re_path(r'^inventories/', include(inventory_urls)),
re_path(r'^constructed_inventories/', include(constructed_inventory_urls)),
re_path(r'^hosts/', include(host_urls)),
re_path(r'^host_metrics/', include(host_metric_urls)),
# It will be enabled in a future version of AWX
# re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^groups/', include(group_urls)),
re_path(r'^inventory_sources/', include(inventory_source_urls)),
re_path(r'^inventory_updates/', include(inventory_update_urls)),
@@ -135,8 +148,12 @@ v2_urls = [
re_path(r'^unified_job_templates/$', UnifiedJobTemplateList.as_view(), name='unified_job_template_list'),
re_path(r'^unified_jobs/$', UnifiedJobList.as_view(), name='unified_job_list'),
re_path(r'^activity_stream/', include(activity_stream_urls)),
re_path(rf'^{AWX_ANALYTICS_API_PREFIX}/', include(analytics_urls)),
re_path(r'^workflow_approval_templates/', include(workflow_approval_template_urls)),
re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
]


@@ -17,7 +17,6 @@ from collections import OrderedDict
from urllib3.exceptions import ConnectTimeoutError
# Django
from django.conf import settings
from django.core.exceptions import FieldError, ObjectDoesNotExist
@@ -30,7 +29,7 @@ from django.utils.safestring import mark_safe
from django.utils.timezone import now
from django.views.decorators.csrf import csrf_exempt
from django.template.loader import render_to_string
from django.http import HttpResponse
from django.http import HttpResponse, HttpResponseRedirect
from django.contrib.contenttypes.models import ContentType
from django.utils.translation import gettext_lazy as _
@@ -152,7 +151,7 @@ def api_exception_handler(exc, context):
if 'awx.named_url_rewritten' in req.environ and not str(getattr(exc, 'status_code', 0)).startswith('2'):
# if the URL was rewritten, and it's not a 2xx level status code,
# revert the request.path to its original value to avoid leaking
# any context about the existance of resources
# any context about the existence of resources
req.path = req.environ['awx.named_url_rewritten']
if exc.status_code == 403:
exc = NotFound(detail=_('Not found.'))
@@ -172,7 +171,7 @@ class DashboardView(APIView):
user_inventory = get_user_queryset(request.user, models.Inventory)
inventory_with_failed_hosts = user_inventory.filter(hosts_with_active_failures__gt=0)
user_inventory_external = user_inventory.filter(has_inventory_sources=True)
# if there are *zero* inventories, this aggregrate query will be None, fall back to 0
# if there are *zero* inventories, this aggregate query will be None, fall back to 0
failed_inventory = user_inventory.aggregate(Sum('inventory_sources_with_failures'))['inventory_sources_with_failures__sum'] or 0
data['inventories'] = {
'url': reverse('api:inventory_list', request=request),
@@ -466,6 +465,23 @@ class InstanceGroupUnifiedJobsList(SubListAPIView):
relationship = "unifiedjob_set"
class InstanceGroupAccessList(ResourceAccessList):
model = models.User # needs to be User for AccessLists
parent_model = models.InstanceGroup
class InstanceGroupObjectRolesList(SubListAPIView):
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.InstanceGroup
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return models.Role.objects.filter(content_type=content_type, object_id=po.pk)
class InstanceGroupInstanceList(InstanceGroupMembershipMixin, SubListAttachDetachAPIView):
name = _("Instance Group's Instances")
model = models.Instance
@@ -1531,6 +1547,41 @@ class HostRelatedSearchMixin(object):
return ret
class HostMetricList(ListAPIView):
name = _("Host Metrics List")
model = models.HostMetric
serializer_class = serializers.HostMetricSerializer
permission_classes = (IsSystemAdminOrAuditor,)
search_fields = ('hostname', 'deleted')
def get_queryset(self):
return self.model.objects.all()
class HostMetricDetail(RetrieveDestroyAPIView):
name = _("Host Metric Detail")
model = models.HostMetric
serializer_class = serializers.HostMetricSerializer
permission_classes = (IsSystemAdminOrAuditor,)
def delete(self, request, *args, **kwargs):
self.get_object().soft_delete()
return Response(status=status.HTTP_204_NO_CONTENT)
# It will be enabled in a future version of AWX
# class HostMetricSummaryMonthlyList(ListAPIView):
# name = _("Host Metrics Summary Monthly")
# model = models.HostMetricSummaryMonthly
# serializer_class = serializers.HostMetricSummaryMonthlySerializer
# permission_classes = (IsSystemAdminOrAuditor,)
# search_fields = ('date',)
#
# def get_queryset(self):
# return self.model.objects.all()
class HostList(HostRelatedSearchMixin, ListCreateAPIView):
always_allow_superuser = False
model = models.Host
@@ -1568,6 +1619,14 @@ class HostAnsibleFactsDetail(RetrieveAPIView):
model = models.Host
serializer_class = serializers.AnsibleFactsSerializer
def get(self, request, *args, **kwargs):
obj = self.get_object()
if obj.inventory.kind == 'constructed':
# If this is a constructed inventory host, it is not the source of truth about facts
# redirect to the original input inventory host instead
return HttpResponseRedirect(reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.instance_id}, request=self.request))
return super().get(request, *args, **kwargs)
class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIView):
model = models.Host
@@ -1669,7 +1728,7 @@ class GroupList(ListCreateAPIView):
class EnforceParentRelationshipMixin(object):
"""
Useful when you have a self-refering ManyToManyRelationship.
Useful when you have a self-referring ManyToManyRelationship.
* Tower uses a shallow (2-deep only) url pattern. For example:
When an object hangs off of a parent object you would have the url of the
@@ -2417,7 +2476,7 @@ class JobTemplateSurveySpec(GenericAPIView):
status=status.HTTP_400_BAD_REQUEST,
)
# if it's a multiselect or multiple choice, it must have coices listed
# choices and defualts must come in as strings seperated by /n characters.
# choices and defaults must come in as strings separated by /n characters.
if qtype == 'multiselect' or qtype == 'multiplechoice':
if 'choices' in survey_item:
if isinstance(survey_item['choices'], str):
@@ -3080,7 +3139,9 @@ class WorkflowJobTemplateWorkflowNodesList(SubListCreateAPIView):
search_fields = ('unified_job_template__name', 'unified_job_template__description')
def get_queryset(self):
return super(WorkflowJobTemplateWorkflowNodesList, self).get_queryset().order_by('id')
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).order_by('id')
class WorkflowJobTemplateJobsList(SubListAPIView):
@@ -3174,7 +3235,9 @@ class WorkflowJobWorkflowNodesList(SubListAPIView):
search_fields = ('unified_job_template__name', 'unified_job_template__description')
def get_queryset(self):
return super(WorkflowJobWorkflowNodesList, self).get_queryset().order_by('id')
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).order_by('id')
class WorkflowJobCancel(GenericCancelView):
@@ -3432,7 +3495,7 @@ class JobCreateSchedule(RetrieveAPIView):
config = obj.launch_config
# Make up a name for the schedule, guarentee that it is unique
# Make up a name for the schedule, guarantee that it is unique
name = 'Auto-generated schedule from job {}'.format(obj.id)
existing_names = models.Schedule.objects.filter(name__startswith=name).values_list('name', flat=True)
if name in existing_names:
@@ -3623,7 +3686,7 @@ class JobJobEventsChildrenSummary(APIView):
# key is counter of meta events (i.e. verbose), value is uuid of the assigned parent
map_meta_counter_nested_uuid = {}
# collapsable tree view in the UI only makes sense for tree-like
# collapsible tree view in the UI only makes sense for tree-like
# hierarchy. If ansible is ran with a strategy like free or host_pinned, then
# events can be out of sequential order, and no longer follow a tree structure
# E1
@@ -4290,7 +4353,7 @@ class WorkflowApprovalTemplateJobsList(SubListAPIView):
parent_key = 'workflow_approval_template'
class WorkflowApprovalList(ListCreateAPIView):
class WorkflowApprovalList(ListAPIView):
model = models.WorkflowApproval
serializer_class = serializers.WorkflowApprovalListSerializer

awx/api/views/analytics.py (new file)

@@ -0,0 +1,296 @@
import requests
import logging
import urllib.parse as urlparse
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from django.utils import translation
from awx.api.generics import APIView, Response
from awx.api.permissions import AnalyticsPermission
from awx.api.versioning import reverse
from awx.main.utils import get_awx_version
from rest_framework import status
from collections import OrderedDict
AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
AWX_ANALYTICS_API_PREFIX = 'analytics'
ERROR_UPLOAD_NOT_ENABLED = "analytics-upload-not-enabled"
ERROR_MISSING_URL = "missing-url"
ERROR_MISSING_USER = "missing-user"
ERROR_MISSING_PASSWORD = "missing-password"
ERROR_NO_DATA_OR_ENTITLEMENT = "no-data-or-entitlement"
ERROR_NOT_FOUND = "not-found"
ERROR_UNAUTHORIZED = "unauthorized"
ERROR_UNKNOWN = "unknown"
ERROR_UNSUPPORTED_METHOD = "unsupported-method"
logger = logging.getLogger('awx.api.views.analytics')
class MissingSettings(Exception):
"""Settings are not correct Exception"""
pass
class GetNotAllowedMixin(object):
def get(self, request, format=None):
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
class AnalyticsRootView(APIView):
permission_classes = (AnalyticsPermission,)
name = _('Automation Analytics')
swagger_topic = 'Automation Analytics'
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized')
data['reports'] = reverse('api:analytics_reports_list')
data['report_options'] = reverse('api:analytics_report_options_list')
data['adoption_rate'] = reverse('api:analytics_adoption_rate')
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options')
data['event_explorer'] = reverse('api:analytics_event_explorer')
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options')
data['host_explorer'] = reverse('api:analytics_host_explorer')
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options')
data['job_explorer'] = reverse('api:analytics_job_explorer')
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options')
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer')
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options')
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer')
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options')
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer')
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options')
return Response(data)
class AnalyticsGenericView(APIView):
"""
Example:
headers = {
'Content-Type': 'application/json',
}
params = {
'limit': '20',
'offset': '0',
'sort_by': 'name:asc',
}
json_data = {
'limit': '20',
'offset': '0',
'sort_options': 'name',
'sort_order': 'asc',
'tags': [],
'slug': [],
'name': [],
'description': '',
}
response = requests.post(f'{AUTOMATION_ANALYTICS_API_URL}/reports/', params=params,
headers=headers, json=json_data)
return Response(response.json(), status=response.status_code)
"""
permission_classes = (AnalyticsPermission,)
@staticmethod
def _request_headers(request):
headers = {}
for header in ['Content-Type', 'Content-Length', 'Accept-Encoding', 'User-Agent', 'Accept']:
if request.headers.get(header, None):
headers[header] = request.headers.get(header)
headers['X-Rh-Analytics-Source'] = 'controller'
headers['X-Rh-Analytics-Source-Version'] = get_awx_version()
headers['Accept-Language'] = translation.get_language()
return headers
@staticmethod
def _get_analytics_path(request_path):
parts = request_path.split(f'{AWX_ANALYTICS_API_PREFIX}/')
path_specific = parts[-1]
return f"{AUTOMATION_ANALYTICS_API_URL_PATH}/{path_specific}"
def _get_analytics_url(self, request_path):
analytics_path = self._get_analytics_path(request_path)
url = getattr(settings, 'AUTOMATION_ANALYTICS_URL', None)
if not url:
raise MissingSettings(ERROR_MISSING_URL)
url_parts = urlparse.urlsplit(url)
analytics_url = urlparse.urlunsplit([url_parts.scheme, url_parts.netloc, analytics_path, url_parts.query, url_parts.fragment])
return analytics_url
@staticmethod
def _get_setting(setting_name, default, error_message):
setting = getattr(settings, setting_name, default)
if not setting:
raise MissingSettings(error_message)
return setting
@staticmethod
def _error_response(keyword, message=None, remote=True, remote_status_code=None, status_code=status.HTTP_403_FORBIDDEN):
text = {"error": {"remote": remote, "remote_status": remote_status_code, "keyword": keyword}}
if message:
text["error"]["message"] = message
return Response(text, status=status_code)
def _error_response_404(self, response):
try:
json_response = response.json()
# Subscription/entitlement problem or missing tenant data in AA db => HTTP 403
message = json_response.get('error', None)
if message:
return self._error_response(ERROR_NO_DATA_OR_ENTITLEMENT, message, remote=True, remote_status_code=response.status_code)
# Standard 404 problem => HTTP 404
message = json_response.get('detail', None) or response.text
except requests.exceptions.JSONDecodeError:
# Unexpected text => still HTTP 404
message = response.text
return self._error_response(ERROR_NOT_FOUND, message, remote=True, remote_status_code=status.HTTP_404_NOT_FOUND, status_code=status.HTTP_404_NOT_FOUND)
@staticmethod
def _update_response_links(json_response):
if not json_response.get('links', None):
return
for key, value in json_response['links'].items():
if value:
json_response['links'][key] = value.replace(AUTOMATION_ANALYTICS_API_URL_PATH, f"/api/v2/{AWX_ANALYTICS_API_PREFIX}")
def _forward_response(self, response):
try:
content_type = response.headers.get('content-type', '')
if content_type.find('application/json') != -1:
json_response = response.json()
self._update_response_links(json_response)
return Response(json_response, status=response.status_code)
except Exception as e:
logger.error(f"Analytics API: Response error: {e}")
return Response(response.content, status=response.status_code)
def _send_to_analytics(self, request, method):
try:
headers = self._request_headers(request)
self._get_setting('INSIGHTS_TRACKING_STATE', False, ERROR_UPLOAD_NOT_ENABLED)
url = self._get_analytics_url(request.path)
rh_user = self._get_setting('REDHAT_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('REDHAT_PASSWORD', None, ERROR_MISSING_PASSWORD)
if method not in ["GET", "POST", "OPTIONS"]:
return self._error_response(ERROR_UNSUPPORTED_METHOD, method, remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
else:
response = requests.request(
method,
url,
auth=(rh_user, rh_password),
verify=settings.INSIGHTS_CERT_PATH,
params=request.query_params,
headers=headers,
json=request.data,
timeout=(31, 31),
)
#
# Missing or wrong user/pass
#
if response.status_code == status.HTTP_401_UNAUTHORIZED:
text = (response.text or '').rstrip("\n")
return self._error_response(ERROR_UNAUTHORIZED, text, remote=True, remote_status_code=response.status_code)
#
# Not found, No entitlement or No data in Analytics
#
elif response.status_code == status.HTTP_404_NOT_FOUND:
return self._error_response_404(response)
#
# Success or not a 401/404 errors are just forwarded
#
else:
return self._forward_response(response)
except MissingSettings as e:
logger.warning(f"Analytics API: Setting missing: {e.args[0]}")
return self._error_response(e.args[0], remote=False)
except requests.exceptions.RequestException as e:
logger.error(f"Analytics API: Request error: {e}")
return self._error_response(ERROR_UNKNOWN, str(e), remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
except Exception as e:
logger.error(f"Analytics API: Error: {e}")
return self._error_response(ERROR_UNKNOWN, str(e), remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
class AnalyticsGenericListView(AnalyticsGenericView):
def get(self, request, format=None):
return self._send_to_analytics(request, method="GET")
def post(self, request, format=None):
return self._send_to_analytics(request, method="POST")
def options(self, request, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsGenericDetailView(AnalyticsGenericView):
def get(self, request, slug, format=None):
return self._send_to_analytics(request, method="GET")
def post(self, request, slug, format=None):
return self._send_to_analytics(request, method="POST")
def options(self, request, slug, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsAuthorizedView(AnalyticsGenericListView):
name = _("Authorized")
class AnalyticsReportsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Reports")
swagger_topic = "Automation Analytics"
class AnalyticsReportDetail(AnalyticsGenericDetailView):
name = _("Report")
class AnalyticsReportOptionsList(AnalyticsGenericListView):
name = _("Report Options")
class AnalyticsAdoptionRateList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Adoption Rate")
class AnalyticsEventExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Event Explorer")
class AnalyticsHostExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Host Explorer")
class AnalyticsJobExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Job Explorer")
class AnalyticsProbeTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Templates")
class AnalyticsProbeTemplateForHostsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Template For Hosts")
class AnalyticsRoiTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("ROI Templates")
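A sketch of how a client could exercise the proxy above, assuming the controller has Automation Analytics configured (`INSIGHTS_TRACKING_STATE`, `AUTOMATION_ANALYTICS_URL`, `REDHAT_USERNAME`, `REDHAT_PASSWORD`); the URL and token are placeholders. List views that mix in `GetNotAllowedMixin` reject GET, so report data is requested with POST:

import requests

AWX_URL = "https://awx.example.com"   # placeholder controller URL
TOKEN = "<oauth2-token>"              # placeholder OAuth2 bearer token

body = {
    "limit": "20",
    "offset": "0",
    "sort_options": "name",
    "sort_order": "asc",
    "tags": [],
    "slug": [],
    "name": [],
    "description": "",
}

response = requests.post(
    f"{AWX_URL}/api/v2/analytics/reports/",
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
# on misconfiguration the view answers 403 with {"error": {"keyword": ..., "remote": false}};
# otherwise the upstream Automation Analytics response is forwarded with its links rewritten
print(response.status_code, response.json())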

awx/api/views/bulk.py (new file)

@@ -0,0 +1,69 @@
from collections import OrderedDict
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer
from rest_framework.reverse import reverse
from rest_framework import status
from rest_framework.response import Response
from awx.main.models import UnifiedJob, Host
from awx.api.generics import (
GenericAPIView,
APIView,
)
from awx.api import (
serializers,
renderers,
)
class BulkView(APIView):
permission_classes = [IsAuthenticated]
renderer_classes = [
renderers.BrowsableAPIRenderer,
JSONRenderer,
]
allowed_methods = ['GET', 'OPTIONS']
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
data['host_create'] = reverse('api:bulk_host_create', request=request)
data['job_launch'] = reverse('api:bulk_job_launch', request=request)
return Response(data)
class BulkJobLaunchView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = UnifiedJob
serializer_class = serializers.BulkJobLaunchSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
data = OrderedDict()
data['detail'] = "Specify a list of unified job templates to launch alongside their launchtime parameters"
return Response(data, status=status.HTTP_200_OK)
def post(self, request):
bulkjob_serializer = serializers.BulkJobLaunchSerializer(data=request.data, context={'request': request})
if bulkjob_serializer.is_valid():
result = bulkjob_serializer.create(bulkjob_serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(bulkjob_serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class BulkHostCreateView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = Host
serializer_class = serializers.BulkHostCreateSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
return Response({"detail": "Bulk create hosts with this endpoint"}, status=status.HTTP_200_OK)
def post(self, request):
serializer = serializers.BulkHostCreateSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
result = serializer.create(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)


@@ -103,6 +103,9 @@ class ApiVersionRootView(APIView):
data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['host_metrics'] = reverse('api:host_metric_list', request=request)
# It will be enabled in a future version of AWX
# data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
@@ -122,6 +125,8 @@ class ApiVersionRootView(APIView):
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
data['analytics'] = reverse('api:analytics_root_view', request=request)
return Response(data)
@@ -272,6 +277,9 @@ class ApiV2ConfigView(APIView):
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
# Guarding against settings.UI_NEXT being set to a non-boolean value
ui_next_state = settings.UI_NEXT if settings.UI_NEXT in (True, False) else False
data = dict(
time_zone=settings.TIME_ZONE,
license_info=license_data,
@@ -280,6 +288,7 @@ class ApiV2ConfigView(APIView):
analytics_status=pendo_state,
analytics_collectors=all_collectors(),
become_methods=PRIVILEGE_ESCALATION_METHODS,
ui_next=ui_next_state,
)
# If LDAP is enabled, user_ldap_fields will return a list of field


@@ -21,7 +21,7 @@ logger = logging.getLogger('awx.conf.fields')
# Use DRF fields to convert/validate settings:
# - to_representation(obj) should convert a native Python object to a primitive
# serializable type. This primitive type will be what is presented in the API
# and stored in the JSON field in the datbase.
# and stored in the JSON field in the database.
# - to_internal_value(data) should convert the primitive type back into the
# appropriate Python type to be used in settings.


@@ -1,7 +1,11 @@
import inspect
from django.conf import settings
from django.utils.timezone import now
import logging
logger = logging.getLogger('awx.conf.migrations')
def fill_ldap_group_type_params(apps, schema_editor):
@@ -15,7 +19,7 @@ def fill_ldap_group_type_params(apps, schema_editor):
entry = qs[0]
group_type_params = entry.value
else:
entry = Setting(key='AUTH_LDAP_GROUP_TYPE_PARAMS', value=group_type_params, created=now(), modified=now())
return # for new installs we prefer to use the default value
init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:])
for k in list(group_type_params.keys()):
@@ -23,4 +27,5 @@ def fill_ldap_group_type_params(apps, schema_editor):
del group_type_params[k]
entry.value = group_type_params
logger.warning(f'Migration updating AUTH_LDAP_GROUP_TYPE_PARAMS with value {entry.value}')
entry.save()


@@ -5,11 +5,13 @@ import threading
import time
import os
from concurrent.futures import ThreadPoolExecutor
# Django
from django.conf import LazySettings
from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured
from django.core.exceptions import ImproperlyConfigured, SynchronousOnlyOperation
from django.db import transaction, connection
from django.db.utils import Error as DBError, ProgrammingError
from django.utils.functional import cached_property
@@ -157,7 +159,7 @@ class EncryptedCacheProxy(object):
obj_id = self.cache.get(Setting.get_cache_id_key(key), default=empty)
if obj_id is empty:
logger.info('Efficiency notice: Corresponding id not stored in cache %s', Setting.get_cache_id_key(key))
obj_id = getattr(self._get_setting_from_db(key), 'pk', None)
obj_id = getattr(_get_setting_from_db(self.registry, key), 'pk', None)
elif obj_id == SETTING_CACHE_NONE:
obj_id = None
return method(TransientSetting(pk=obj_id, value=value), 'value')
@@ -166,11 +168,6 @@ class EncryptedCacheProxy(object):
# a no-op; it just returns the provided value
return value
def _get_setting_from_db(self, key):
field = self.registry.get_setting_field(key)
if not field.read_only:
return Setting.objects.filter(key=key, user__isnull=True).order_by('pk').first()
def __getattr__(self, name):
return getattr(self.cache, name)
@@ -186,6 +183,22 @@ def get_settings_to_cache(registry):
return dict([(key, SETTING_CACHE_NOTSET) for key in get_writeable_settings(registry)])
# Will first attempt to get the setting from the database in synchronous mode.
# If called from an async context, it will attempt to get the setting from the database in a thread.
def _get_setting_from_db(registry, key):
def get_settings_from_db_sync(registry, key):
field = registry.get_setting_field(key)
if not field.read_only or key == 'INSTALL_UUID':
return Setting.objects.filter(key=key, user__isnull=True).order_by('pk').first()
try:
return get_settings_from_db_sync(registry, key)
except SynchronousOnlyOperation:
with ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(get_settings_from_db_sync, registry, key)
return future.result()
def get_cache_value(value):
"""Returns the proper special cache setting for a value
based on instance type.
@@ -345,7 +358,7 @@ class SettingsWrapper(UserSettingsHolder):
setting_id = None
# this value is read-only, however we *do* want to fetch its value from the database
if not field.read_only or name == 'INSTALL_UUID':
setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
setting = _get_setting_from_db(self.registry, name)
if setting:
if getattr(field, 'encrypted', False):
value = decrypt_field(setting, 'value')


@@ -94,9 +94,7 @@ def test_setting_singleton_retrieve_readonly(api_request, dummy_setting):
@pytest.mark.django_db
def test_setting_singleton_update(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch('awx.conf.views.clear_setting_cache'):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 3})
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert response.data['FOO_BAR'] == 3
@@ -112,7 +110,7 @@ def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, du
# sure that the _Forbidden validator doesn't get used for the
# fields. See also https://github.com/ansible/awx/issues/4099.
with dummy_setting('FOO_BAR', field_class=sso_fields.SAMLOrgAttrField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request(
'patch',
@@ -126,7 +124,7 @@ def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, du
@pytest.mark.django_db
def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=4, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 5})
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
@@ -136,7 +134,7 @@ def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy
@pytest.mark.django_db
def test_setting_singleton_update_dont_change_encrypted_mark(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.CharField, encrypted=True, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 'password'})
assert Setting.objects.get(key='FOO_BAR').value.startswith('$encrypted$')
@@ -155,16 +153,14 @@ def test_setting_singleton_update_runs_custom_validate(api_request, dummy_settin
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), dummy_validate(
'foobar', func_raising_exception
), mock.patch('awx.conf.views.handle_setting_changes'):
), mock.patch('awx.conf.views.clear_setting_cache'):
response = api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 23})
assert response.status_code == 400
@pytest.mark.django_db
def test_setting_singleton_delete(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch('awx.conf.views.clear_setting_cache'):
api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert not response.data['FOO_BAR']
@@ -173,7 +169,7 @@ def test_setting_singleton_delete(api_request, dummy_setting):
@pytest.mark.django_db
def test_setting_singleton_delete_no_read_only_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=23, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
'awx.conf.views.clear_setting_cache'
):
api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))


@@ -0,0 +1,25 @@
import pytest
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from awx.conf.models import Setting
from django.apps import apps
@pytest.mark.django_db
def test_fill_group_type_params_no_op():
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 0
@pytest.mark.django_db
def test_keep_old_setting_with_default_value():
Setting.objects.create(key='AUTH_LDAP_GROUP_TYPE', value={'name_attr': 'cn', 'member_attr': 'member'})
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 1
s = Setting.objects.first()
assert s.value == {'name_attr': 'cn', 'member_attr': 'member'}
# NOTE: would be good to test the removal of attributes by migration
# but this requires fighting with the validator and is not done here


@@ -26,10 +26,11 @@ from awx.api.generics import APIView, GenericAPIView, ListAPIView, RetrieveUpdat
from awx.api.permissions import IsSystemAdminOrAuditor
from awx.api.versioning import reverse
from awx.main.utils import camelcase_to_underscore
from awx.main.tasks.system import handle_setting_changes
from awx.main.tasks.system import clear_setting_cache
from awx.conf.models import Setting
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
from awx.conf import settings_registry
from awx.main.utils.external_logging import reconfigure_rsyslog
SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name'))
@@ -118,7 +119,10 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.save(update_fields=['value'])
settings_change_list.append(key)
if settings_change_list:
connection.on_commit(lambda: handle_setting_changes.delay(settings_change_list))
connection.on_commit(lambda: clear_setting_cache.delay(settings_change_list))
if any([setting.startswith('LOG_AGGREGATOR') for setting in settings_change_list]):
# call notify to rsyslog. no data is needed so payload is empty
reconfigure_rsyslog.delay()
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
@@ -133,7 +137,10 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.delete()
settings_change_list.append(setting.key)
if settings_change_list:
connection.on_commit(lambda: handle_setting_changes.delay(settings_change_list))
connection.on_commit(lambda: clear_setting_cache.delay(settings_change_list))
if any([setting.startswith('LOG_AGGREGATOR') for setting in settings_change_list]):
# call notify to rsyslog. no data is needed so payload is empty
reconfigure_rsyslog.delay()
# When TOWER_URL_BASE is deleted from the API, reset it to the hostname
# used to make the request as a default.
@@ -180,7 +187,7 @@ class SettingLoggingTest(GenericAPIView):
if not port:
return Response({'error': 'Port required for ' + protocol}, status=status.HTTP_400_BAD_REQUEST)
else:
# if http/https by this point, domain is reacheable
# if http/https by this point, domain is reachable
return Response(status=status.HTTP_202_ACCEPTED)
if protocol == 'udp':


@@ -1972,7 +1972,7 @@ msgid ""
"HTTP headers and meta keys to search to determine remote host name or IP. "
"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if "
"behind a reverse proxy. See the \"Proxy Support\" section of the "
"Adminstrator guide for more details."
"Administrator guide for more details."
msgstr ""
#: awx/main/conf.py:85
@@ -2457,7 +2457,7 @@ msgid ""
msgstr ""
#: awx/main/conf.py:631
msgid "Maximum disk persistance for external log aggregation (in GB)"
msgid "Maximum disk persistence for external log aggregation (in GB)"
msgstr ""
#: awx/main/conf.py:633
@@ -2548,7 +2548,7 @@ msgid "Enable"
msgstr ""
#: awx/main/constants.py:27
msgid "Doas"
msgid "Does"
msgstr ""
#: awx/main/constants.py:28
@@ -4801,7 +4801,7 @@ msgstr ""
#: awx/main/models/workflow.py:251
msgid ""
"An identifier coresponding to the workflow job template node that this node "
"An identifier corresponding to the workflow job template node that this node "
"was created from."
msgstr ""
@@ -5521,7 +5521,7 @@ msgstr ""
#: awx/sso/conf.py:606
msgid ""
"Extra arguments for Google OAuth2 login. You can restrict it to only allow a "
"single domain to authenticate, even if the user is logged in with multple "
"single domain to authenticate, even if the user is logged in with multiple "
"Google accounts. Refer to the documentation for more detail."
msgstr ""
@@ -5905,7 +5905,7 @@ msgstr ""
#: awx/sso/conf.py:1290
msgid ""
"Create a keypair to use as a service provider (SP) and include the "
"Create a key pair to use as a service provider (SP) and include the "
"certificate content here."
msgstr ""
@@ -5915,7 +5915,7 @@ msgstr ""
#: awx/sso/conf.py:1302
msgid ""
"Create a keypair to use as a service provider (SP) and include the private "
"Create a key pair to use as a service provider (SP) and include the private "
"key content here."
msgstr ""


@@ -1971,7 +1971,7 @@ msgid ""
"HTTP headers and meta keys to search to determine remote host name or IP. "
"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if "
"behind a reverse proxy. See the \"Proxy Support\" section of the "
"Adminstrator guide for more details."
"Administrator guide for more details."
msgstr "Los encabezados HTTP y las llaves de activación para buscar y determinar el nombre de host remoto o IP. Añada elementos adicionales a esta lista, como \"HTTP_X_FORWARDED_FOR\", si está detrás de un proxy inverso. Consulte la sección \"Soporte de proxy\" de la guía del adminstrador para obtener más información."
#: awx/main/conf.py:85
@@ -4804,7 +4804,7 @@ msgstr "Indica que un trabajo no se creará cuando es sea True. La semántica de
#: awx/main/models/workflow.py:251
msgid ""
"An identifier coresponding to the workflow job template node that this node "
"An identifier corresponding to the workflow job template node that this node "
"was created from."
msgstr "Un identificador que corresponde al nodo de plantilla de tarea del flujo de trabajo a partir del cual se creó este nodo."
@@ -5526,7 +5526,7 @@ msgstr "Argumentos adicionales para Google OAuth2"
#: awx/sso/conf.py:606
msgid ""
"Extra arguments for Google OAuth2 login. You can restrict it to only allow a "
"single domain to authenticate, even if the user is logged in with multple "
"single domain to authenticate, even if the user is logged in with multiple "
"Google accounts. Refer to the documentation for more detail."
msgstr "Argumentos adicionales para el inicio de sesión en Google OAuth2. Puede limitarlo para permitir la autenticación de un solo dominio, incluso si el usuario ha iniciado sesión con varias cuentas de Google. Consulte la documentación para obtener información detallada."
@@ -5910,7 +5910,7 @@ msgstr "Certificado público del proveedor de servicio SAML"
#: awx/sso/conf.py:1290
msgid ""
"Create a keypair to use as a service provider (SP) and include the "
"Create a key pair to use as a service provider (SP) and include the "
"certificate content here."
msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido del certificado aquí."
@@ -5920,7 +5920,7 @@ msgstr "Clave privada del proveedor de servicio SAML"
#: awx/sso/conf.py:1302
msgid ""
"Create a keypair to use as a service provider (SP) and include the private "
"Create a key pair to use as a service provider (SP) and include the private "
"key content here."
msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido de la clave privada aquí."


@@ -588,17 +588,39 @@ class InstanceAccess(BaseAccess):
class InstanceGroupAccess(BaseAccess):
"""
I can see Instance Groups when I am:
- a superuser (system administrator)
- at least read_role on the instance group
I can edit Instance Groups when I am:
- a superuser
- admin role on the Instance group
I can add/delete Instance Groups:
- a superuser (system administrator)
I can use Instance Groups when I have:
- use_role on the instance group
"""
model = InstanceGroup
prefetch_related = ('instances',)
def filtered_queryset(self):
return InstanceGroup.objects.filter(organization__in=Organization.accessible_pk_qs(self.user, 'admin_role')).distinct()
return self.model.accessible_objects(self.user, 'read_role')
@check_superuser
def can_use(self, obj):
return self.user in obj.use_role
def can_add(self, data):
return self.user.is_superuser
@check_superuser
def can_change(self, obj, data):
return self.user.is_superuser
return self.can_admin(obj)
@check_superuser
def can_admin(self, obj):
return self.user in obj.admin_role
def can_delete(self, obj):
if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]:
@@ -845,7 +867,7 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
return RoleAccess(self.user).can_attach(rel_role, sub_obj, 'members', *args, **kwargs)
if relationship == "instance_groups":
if self.user.is_superuser:
if self.user in obj.admin_role and self.user in sub_obj.use_role:
return True
return False
return super(OrganizationAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
@@ -934,7 +956,7 @@ class InventoryAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
if relationship == "instance_groups":
if self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role:
if self.user in sub_obj.use_role and self.user in obj.admin_role:
return True
return False
return super(InventoryAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
@@ -1671,11 +1693,12 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
return self.user.is_superuser or self.user in obj.admin_role
@check_superuser
# object here is the job template. sub_object here is what is being attached
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == "instance_groups":
if not obj.organization:
return False
return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role
return self.user in sub_obj.use_role and self.user in obj.admin_role
return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
@check_superuser
@@ -1852,8 +1875,6 @@ class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
def _related_filtered_queryset(self, cls):
if cls is Label:
return LabelAccess(self.user).filtered_queryset()
elif cls is InstanceGroup:
return InstanceGroupAccess(self.user).filtered_queryset()
else:
return cls._accessible_pk_qs(cls, self.user, 'use_role')
@@ -1865,6 +1886,7 @@ class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
@check_superuser
def can_add(self, data, template=None):
# WARNING: duplicated with BulkJobLaunchSerializer, check when changing permission levels
# This is a special case, we don't check related many-to-many elsewhere
# launch RBAC checks use this
if 'reference_obj' in data:
@@ -1997,7 +2019,16 @@ class WorkflowJobNodeAccess(BaseAccess):
)
def filtered_queryset(self):
return self.model.objects.filter(workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
return self.model.objects.filter(
Q(workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(workflow_job__organization__in=Organization.objects.filter(Q(admin_role__members=self.user)))
)
def can_read(self, obj):
"""Overriding this opens up detail view access for bulk jobs, where the workflow job has no associated workflow job template."""
if obj.workflow_job.is_bulk_job and obj.workflow_job.created_by_id == self.user.id:
return True
return super().can_read(obj)
@check_superuser
def can_add(self, data):
@@ -2123,7 +2154,16 @@ class WorkflowJobAccess(BaseAccess):
)
def filtered_queryset(self):
return WorkflowJob.objects.filter(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
return WorkflowJob.objects.filter(
Q(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(organization__in=Organization.objects.filter(Q(admin_role__members=self.user)), is_bulk_job=True)
)
def can_read(self, obj):
"""Overriding this opens up detail view access for bulk jobs, where the workflow job has no associated workflow job template."""
if obj.is_bulk_job and obj.created_by_id == self.user.id:
return True
return super().can_read(obj)
def can_add(self, data):
# Old add-start system for launching jobs is being deprecated, and


@@ -4,11 +4,11 @@ import logging
# AWX
from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
def send_subsystem_metrics():
Metrics().send_metrics()


@@ -65,7 +65,7 @@ class FixedSlidingWindow:
return sum(self.buckets.values()) or 0
class BroadcastWebsocketStatsManager:
class RelayWebsocketStatsManager:
def __init__(self, event_loop, local_hostname):
self._local_hostname = local_hostname
@@ -74,7 +74,7 @@ class BroadcastWebsocketStatsManager:
self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME
def new_remote_host_stats(self, remote_hostname):
self._stats[remote_hostname] = BroadcastWebsocketStats(self._local_hostname, remote_hostname)
self._stats[remote_hostname] = RelayWebsocketStats(self._local_hostname, remote_hostname)
return self._stats[remote_hostname]
def delete_remote_host_stats(self, remote_hostname):
@@ -107,7 +107,7 @@ class BroadcastWebsocketStatsManager:
return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))
class BroadcastWebsocketStats:
class RelayWebsocketStats:
def __init__(self, local_hostname, remote_hostname):
self._local_hostname = local_hostname
self._remote_hostname = remote_hostname

View File

@@ -6,7 +6,7 @@ import platform
import distro
from django.db import connection
from django.db.models import Count
from django.db.models import Count, Min
from django.conf import settings
from django.contrib.sessions.models import Session
from django.utils.timezone import now, timedelta
@@ -35,7 +35,7 @@ data _since_ the last report date - i.e., new data in the last 24 hours)
"""
def trivial_slicing(key, since, until, last_gather):
def trivial_slicing(key, since, until, last_gather, **kwargs):
if since is not None:
return [(since, until)]
@@ -48,7 +48,7 @@ def trivial_slicing(key, since, until, last_gather):
return [(last_entry, until)]
def four_hour_slicing(key, since, until, last_gather):
def four_hour_slicing(key, since, until, last_gather, **kwargs):
if since is not None:
last_entry = since
else:
@@ -69,6 +69,54 @@ def four_hour_slicing(key, since, until, last_gather):
start = end
def host_metric_slicing(key, since, until, last_gather, **kwargs):
"""
Slicing doesn't start 4 weeks back; instead, the whole table is sent monthly or on the first run
"""
from awx.main.models.inventory import HostMetric
if since is not None:
return [(since, until)]
from awx.conf.models import Setting
# Check if full sync should be done
full_sync_enabled = kwargs.get('full_sync_enabled', False)
last_entry = None
if not full_sync_enabled:
#
# If not, try incremental sync first
#
last_entries = Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_ENTRIES').first()
last_entries = json.loads((last_entries.value if last_entries is not None else '') or '{}', object_hook=datetime_hook)
last_entry = last_entries.get(key)
if not last_entry:
#
# If not done before, switch to full sync
#
full_sync_enabled = True
if full_sync_enabled:
#
# Find the lowest date for full sync
#
min_dates = HostMetric.objects.aggregate(min_last_automation=Min('last_automation'), min_last_deleted=Min('last_deleted'))
if min_dates['min_last_automation'] and min_dates['min_last_deleted']:
last_entry = min(min_dates['min_last_automation'], min_dates['min_last_deleted'])
elif min_dates['min_last_automation'] or min_dates['min_last_deleted']:
last_entry = min_dates['min_last_automation'] or min_dates['min_last_deleted']
if not last_entry:
# empty table
return []
start, end = last_entry, None
while start < until:
end = min(start + timedelta(days=30), until)
yield (start, end)
start = end
def _identify_lower(key, since, until, last_gather):
from awx.conf.models import Setting
@@ -83,7 +131,7 @@ def _identify_lower(key, since, until, last_gather):
return lower, last_entries
@register('config', '1.4', description=_('General platform configuration.'))
@register('config', '1.6', description=_('General platform configuration.'))
def config(since, **kwargs):
license_info = get_license()
install_type = 'traditional'
@@ -107,10 +155,13 @@ def config(since, **kwargs):
'subscription_name': license_info.get('subscription_name'),
'sku': license_info.get('sku'),
'support_level': license_info.get('support_level'),
'usage': license_info.get('usage'),
'product_name': license_info.get('product_name'),
'valid_key': license_info.get('valid_key'),
'satellite': license_info.get('satellite'),
'pool_id': license_info.get('pool_id'),
'subscription_id': license_info.get('subscription_id'),
'account_number': license_info.get('account_number'),
'current_instances': license_info.get('current_instances'),
'automated_instances': license_info.get('automated_instances'),
'automated_since': license_info.get('automated_since'),
@@ -119,6 +170,7 @@ def config(since, **kwargs):
'compliant': license_info.get('compliant'),
'date_warning': license_info.get('date_warning'),
'date_expired': license_info.get('date_expired'),
'subscription_usage_model': getattr(settings, 'SUBSCRIPTION_USAGE_MODEL', ''), # 1.5+
'free_instances': license_info.get('free_instances', 0),
'total_licensed_instances': license_info.get('instance_count', 0),
'license_expiry': license_info.get('time_remaining', 0),
@@ -233,11 +285,13 @@ def projects_by_scm_type(since, **kwargs):
return counts
@register('instance_info', '1.2', description=_('Cluster topology and capacity'))
@register('instance_info', '1.3', description=_('Cluster topology and capacity'))
def instance_info(since, include_hostnames=False, **kwargs):
info = {}
# Use same method that the TaskManager does to compute consumed capacity without querying all running jobs for each Instance
tm_models = TaskManagerModels.init_with_consumed_capacity(instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled'])
tm_models = TaskManagerModels.init_with_consumed_capacity(
instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled', 'node_type']
)
for tm_instance in tm_models.instances.instances_by_hostname.values():
instance = tm_instance.obj
instance_info = {
@@ -534,3 +588,25 @@ def workflow_job_template_node_table(since, full_path, **kwargs):
) always_nodes ON main_workflowjobtemplatenode.id = always_nodes.from_workflowjobtemplatenode_id
ORDER BY main_workflowjobtemplatenode.id ASC) TO STDOUT WITH CSV HEADER'''
return _copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path)
@register(
'host_metric_table', '1.0', format='csv', description=_('Host Metric data, incremental/full sync'), expensive=host_metric_slicing, full_sync_interval=30
)
def host_metric_table(since, full_path, until, **kwargs):
host_metric_query = '''COPY (SELECT main_hostmetric.id,
main_hostmetric.hostname,
main_hostmetric.first_automation,
main_hostmetric.last_automation,
main_hostmetric.last_deleted,
main_hostmetric.deleted,
main_hostmetric.automated_counter,
main_hostmetric.deleted_counter,
main_hostmetric.used_in_inventories
FROM main_hostmetric
WHERE (main_hostmetric.last_automation > '{}' AND main_hostmetric.last_automation <= '{}') OR
(main_hostmetric.last_deleted > '{}' AND main_hostmetric.last_deleted <= '{}')
ORDER BY main_hostmetric.id ASC) TO STDOUT WITH CSV HEADER'''.format(
since.isoformat(), until.isoformat(), since.isoformat(), until.isoformat()
)
return _copy_table(table='host_metric', query=host_metric_query, path=full_path)
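For illustration, the host_metric_slicing generator above walks the table in 30-day windows, and each yielded (start, end) pair becomes one invocation of host_metric_table. A minimal stand-alone sketch of that slicing loop, assuming timezone-aware datetimes:
# --- editor's illustrative sketch, not part of the diff ---
from datetime import datetime, timedelta, timezone

def thirty_day_slices(last_entry, until):
    # mirrors the tail of host_metric_slicing: fixed 30-day windows up to `until`
    start = last_entry
    while start < until:
        end = min(start + timedelta(days=30), until)
        yield (start, end)
        start = end

# e.g. a full sync reaching 75 days back produces three slices (30 + 30 + 15 days)
until = datetime.now(timezone.utc)
for start, end in thirty_day_slices(until - timedelta(days=75), until):
    print(start.date(), '->', end.date())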

View File

@@ -52,7 +52,7 @@ def all_collectors():
}
def register(key, version, description=None, format='json', expensive=None):
def register(key, version, description=None, format='json', expensive=None, full_sync_interval=None):
"""
A decorator used to register a function as a metric collector.
@@ -71,6 +71,7 @@ def register(key, version, description=None, format='json', expensive=None):
f.__awx_analytics_description__ = description
f.__awx_analytics_type__ = format
f.__awx_expensive__ = expensive
f.__awx_full_sync_interval__ = full_sync_interval
return f
return decorate
@@ -259,10 +260,19 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
# These slicer functions may return a generator. The `since` parameter is
# allowed to be None, and will fall back to LAST_ENTRIES[key] or to
# LAST_GATHER (truncated appropriately to match the 4-week limit).
#
# Or it can force full table sync if interval is given
kwargs = dict()
full_sync_enabled = False
if func.__awx_full_sync_interval__:
last_full_sync = last_entries.get(f"{key}_full")
full_sync_enabled = not last_full_sync or last_full_sync < now() - timedelta(days=func.__awx_full_sync_interval__)
kwargs['full_sync_enabled'] = full_sync_enabled
if func.__awx_expensive__:
slices = func.__awx_expensive__(key, since, until, last_gather)
slices = func.__awx_expensive__(key, since, until, last_gather, **kwargs)
else:
slices = collectors.trivial_slicing(key, since, until, last_gather)
slices = collectors.trivial_slicing(key, since, until, last_gather, **kwargs)
for start, end in slices:
files = func(start, full_path=gather_dir, until=end)
@@ -301,6 +311,12 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
succeeded = False
logger.exception("Could not generate metric {}".format(filename))
# update full sync timestamp if successfully shipped
if full_sync_enabled and collection_type != 'dry-run' and succeeded:
with disable_activity_stream():
last_entries[f"{key}_full"] = now()
settings.AUTOMATION_ANALYTICS_LAST_ENTRIES = json.dumps(last_entries, cls=DjangoJSONEncoder)
if collection_type != 'dry-run':
if succeeded:
for fpath in tarfiles:
@@ -359,9 +375,7 @@ def ship(path):
s.headers = get_awx_http_client_headers()
s.headers.pop('Content-Type')
with set_environ(**settings.AWX_TASK_ENV):
response = s.post(
url, files=files, verify="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31)
)
response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31))
# Accept 2XX status_codes
if response.status_code >= 300:
logger.error('Upload failed with status {}, {}'.format(response.status_code, response.text))
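For illustration, the full-sync decision added to gather() can be read in isolation: a collector registered with full_sync_interval=N gets full_sync_enabled=True when the stored "<key>_full" timestamp is missing or older than N days. A small runnable sketch of that check (function name is illustrative, not AWX API):
# --- editor's illustrative sketch, not part of the diff ---
from datetime import datetime, timedelta, timezone

def full_sync_due(last_full_sync, interval_days=30, now=None):
    # same comparison gather() makes against last_entries[f"{key}_full"]
    now = now or datetime.now(timezone.utc)
    return not last_full_sync or last_full_sync < now - timedelta(days=interval_days)

print(full_sync_due(None))                                             # True: never synced
print(full_sync_due(datetime.now(timezone.utc) - timedelta(days=40)))  # True: overdue
print(full_sync_due(datetime.now(timezone.utc) - timedelta(days=5)))   # False: synced recently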

View File

@@ -9,7 +9,7 @@ from django.apps import apps
from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing
root_key = 'awx_metrics'
root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
logger = logging.getLogger('awx.main.analytics')
@@ -264,13 +264,6 @@ class Metrics:
data[field] = self.METRICS[field].decode(self.conn)
return data
def store_metrics(self, data_json):
# called when receiving metrics from other instances
data = json.loads(data_json)
if self.instance_name != data['instance']:
logger.debug(f"{self.instance_name} received subsystem metrics from {data['instance']}")
self.conn.set(root_key + "_instance_" + data['instance'], data['metrics'])
def should_pipe_execute(self):
if self.metrics_have_changed is False:
return False
@@ -305,13 +298,15 @@ class Metrics:
try:
current_time = time.time()
if current_time - self.previous_send_metrics.decode(self.conn) > self.send_metrics_interval:
serialized_metrics = self.serialize_local_metrics()
payload = {
'instance': self.instance_name,
'metrics': self.serialize_local_metrics(),
'metrics': serialized_metrics,
}
# store a local copy as well
self.store_metrics(json.dumps(payload))
# store the serialized data locally as well, so that load_other_metrics will read it
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics)
emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time)
self.previous_send_metrics.store_value(self.conn)
finally:

View File

@@ -10,7 +10,7 @@ from rest_framework import serializers
# AWX
from awx.conf import fields, register, register_validate
from awx.main.models import ExecutionEnvironment
from awx.main.constants import SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS
logger = logging.getLogger('awx.main.conf')
@@ -282,6 +282,16 @@ register(
placeholder={'HTTP_PROXY': 'myproxy.local:8080'},
)
register(
'AWX_RUNNER_KEEPALIVE_SECONDS',
field_class=fields.IntegerField,
label=_('K8S Ansible Runner Keep-Alive Message Interval'),
help_text=_('Only applies to jobs running in a Container Group. If not 0, send a message every so-many seconds to keep connection open.'),
category=_('Jobs'),
category_slug='jobs',
placeholder=240, # intended to be under common 5 minute idle timeout
)
register(
'GALAXY_TASK_ENV',
field_class=fields.KeyValueField,
@@ -765,6 +775,62 @@ register(
help_text=_('Indicates whether the instance is part of a kubernetes-based deployment.'),
)
register(
'BULK_JOB_MAX_LAUNCH',
field_class=fields.IntegerField,
default=100,
label=_('Max jobs to allow bulk jobs to launch'),
help_text=_('Max jobs to allow bulk jobs to launch'),
category=_('Bulk Actions'),
category_slug='bulk',
)
register(
'BULK_HOST_MAX_CREATE',
field_class=fields.IntegerField,
default=100,
label=_('Max number of hosts to allow to be created in a single bulk action'),
help_text=_('Max number of hosts to allow to be created in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
)
register(
'UI_NEXT',
field_class=fields.BooleanField,
default=False,
label=_('Enable Preview of New User Interface'),
help_text=_('Enable preview of new user interface.'),
category=_('System'),
category_slug='system',
)
register(
'SUBSCRIPTION_USAGE_MODEL',
field_class=fields.ChoiceField,
choices=[
('', _('Default model for AWX - no subscription. Deletion of host_metrics will not be considered for purposes of managed host counting')),
(
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS,
_('Usage based on unique managed nodes in a large historical time frame and delete functionality for no longer used managed nodes'),
),
],
default='',
allow_blank=True,
label=_('Defines subscription usage model and shows Host Metrics'),
category=_('System'),
category_slug='system',
)
register(
'CLEANUP_HOST_METRICS_LAST_TS',
field_class=fields.DateTimeField,
label=_('Last cleanup date for HostMetrics'),
allow_null=True,
category=_('System'),
category_slug='system',
)
def logging_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'):
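For illustration, settings registered above are read back through django.conf.settings like any other AWX setting; a hypothetical helper showing how the new bulk limit might be consulted (validate_bulk_launch is not part of the diff):
# --- editor's illustrative sketch, not part of the diff ---
from django.conf import settings

def validate_bulk_launch(jobs):
    # BULK_JOB_MAX_LAUNCH defaults to 100 per the registration above
    limit = settings.BULK_JOB_MAX_LAUNCH
    if len(jobs) > limit:
        raise ValueError(f'Bulk launch limited to {limit} jobs')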

View File

@@ -38,6 +38,8 @@ STANDARD_INVENTORY_UPDATE_ENV = {
'ANSIBLE_INVENTORY_EXPORT': 'True',
# Redirecting output to stderr allows JSON parsing to still work with -vvv
'ANSIBLE_VERBOSE_TO_STDERR': 'True',
# if ansible-inventory --limit is used for an inventory import, unmatched should be a failure
'ANSIBLE_HOST_PATTERN_MISMATCH': 'error',
}
CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL
@@ -63,7 +65,7 @@ ENV_BLOCKLIST = frozenset(
'INVENTORY_HOSTVARS',
'AWX_HOST',
'PROJECT_REVISION',
'SUPERVISOR_WEB_CONFIG_PATH',
'SUPERVISOR_CONFIG_PATH',
)
)
@@ -106,3 +108,9 @@ JOB_VARIABLE_PREFIXES = [
ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE = (
'\u001b[31m \u001b[1m This can be caused if the version of ansible-runner in your execution environment is out of date.\u001b[0m'
)
# Values for setting SUBSCRIPTION_USAGE_MODEL
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS = 'unique_managed_hosts'
# Shared prefetch to use for creating a queryset for the purpose of writing or saving facts
HOST_FACTS_FIELDS = ('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id')

View File

@@ -3,6 +3,7 @@ import logging
import time
import hmac
import asyncio
import redis
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings
@@ -80,7 +81,7 @@ class WebsocketSecretAuthHelper:
WebsocketSecretAuthHelper.verify_secret(secret)
class BroadcastConsumer(AsyncJsonWebsocketConsumer):
class RelayConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
try:
WebsocketSecretAuthHelper.is_authorized(self.scope)
@@ -100,6 +101,21 @@ class BroadcastConsumer(AsyncJsonWebsocketConsumer):
async def internal_message(self, event):
await self.send(event['text'])
async def receive_json(self, data):
(group, message) = unwrap_broadcast_msg(data)
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)
async def consumer_subscribe(self, event):
await self.send_json(event)
async def consumer_unsubscribe(self, event):
await self.send_json(event)
class EventConsumer(AsyncJsonWebsocketConsumer):
async def connect(self):
@@ -128,6 +144,11 @@ class EventConsumer(AsyncJsonWebsocketConsumer):
self.channel_name,
)
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.unsubscribe", "groups": list(current_groups), "origin_channel": self.channel_name},
)
@database_sync_to_async
def user_can_see_object_id(self, user_access, oid):
# At this point user is a channels.auth.UserLazyObject object
@@ -176,9 +197,20 @@ class EventConsumer(AsyncJsonWebsocketConsumer):
self.channel_name,
)
if len(old_groups):
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.unsubscribe", "groups": list(old_groups), "origin_channel": self.channel_name},
)
new_groups_exclusive = new_groups - current_groups
for group_name in new_groups_exclusive:
await self.channel_layer.group_add(group_name, self.channel_name)
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.subscribe", "groups": list(new_groups), "origin_channel": self.channel_name},
)
self.scope['session']['groups'] = new_groups
await self.send_json({"groups_current": list(new_groups), "groups_left": list(old_groups), "groups_joined": list(new_groups_exclusive)})
@@ -200,9 +232,11 @@ def _dump_payload(payload):
return None
def emit_channel_notification(group, payload):
from awx.main.wsbroadcast import wrap_broadcast_msg # noqa
def unwrap_broadcast_msg(payload: dict):
return (payload['group'], payload['message'])
def emit_channel_notification(group, payload):
payload_dumped = _dump_payload(payload)
if payload_dumped is None:
return
@@ -212,16 +246,6 @@ def emit_channel_notification(group, payload):
run_sync(
channel_layer.group_send(
group,
{"type": "internal.message", "text": payload_dumped},
)
)
run_sync(
channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{
"type": "internal.message",
"text": wrap_broadcast_msg(group, payload_dumped),
},
{"type": "internal.message", "text": payload_dumped, "needs_relay": True},
)
)
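For illustration, unwrap_broadcast_msg above implies the shape of the relayed payload; a matching wrapper (the real one lives in awx.main.wsrelay/wsbroadcast and is only assumed here) would pair with it like so:
# --- editor's illustrative sketch, not part of the diff ---
def wrap_broadcast_msg(group, message):
    # assumed inverse of unwrap_broadcast_msg above
    return {'group': group, 'message': message}

def unwrap_broadcast_msg(payload):
    return (payload['group'], payload['message'])

group, message = unwrap_broadcast_msg(wrap_broadcast_msg('metrics', '{"instance": "node1"}'))
assert group == 'metrics'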

View File

@@ -54,6 +54,12 @@ aim_inputs = {
'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),
},
{'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},
{
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Default: Content. Ex: Username, Address, etc.'),
},
{
'id': 'reason',
'label': _('Reason'),
@@ -70,10 +76,11 @@ def aim_backend(**kwargs):
client_cert = kwargs.get('client_cert', None)
client_key = kwargs.get('client_key', None)
verify = kwargs['verify']
webservice_id = kwargs['webservice_id']
webservice_id = kwargs.get('webservice_id', '')
app_id = kwargs['app_id']
object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format']
object_property = kwargs.get('object_property', '')
reason = kwargs.get('reason', None)
if webservice_id == '':
webservice_id = 'AIMWebService'
@@ -98,7 +105,18 @@ def aim_backend(**kwargs):
allow_redirects=False,
)
raise_for_status(res)
return res.json()['Content']
# CCP returns the property name capitalized; username is camel-cased,
# so we need to handle that case
if object_property == '':
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property not in res:
raise KeyError('Property {} not found in object'.format(object_property))
else:
object_property = object_property.capitalize()
return res.json()[object_property]
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)
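For illustration, the property-name handling added above can be isolated; CCP returns capitalized keys and 'username' maps to the camel-cased 'UserName' (the KeyError branch for properties missing from the response is omitted in this sketch):
# --- editor's illustrative sketch, not part of the diff ---
def normalize_object_property(object_property):
    if object_property == '':
        return 'Content'
    if object_property.lower() == 'username':
        return 'UserName'
    return object_property.capitalize()

assert normalize_object_property('') == 'Content'
assert normalize_object_property('username') == 'UserName'
assert normalize_object_property('address') == 'Address'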

View File

@@ -35,8 +35,14 @@ dsv_inputs = {
'type': 'string',
'help_text': _('The secret path e.g. /test/secret1'),
},
{
'id': 'secret_field',
'label': _('Secret Field'),
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
],
'required': ['tenant', 'client_id', 'client_secret', 'path'],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'],
}
if settings.DEBUG:
@@ -52,5 +58,5 @@ if settings.DEBUG:
dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault',
dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path']),
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip
)
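For illustration, the one-line backend above expands to the following (relies on the module-level dsv_inputs and SecretsVault names, and assumes get_secret returns a dict with a 'data' mapping, as the lambda does):
# --- editor's illustrative sketch, not part of the diff ---
def dsv_backend(**kwargs):
    # only inputs declared under dsv_inputs['fields'] are passed to the client
    field_ids = [field['id'] for field in dsv_inputs['fields']]
    client_kwargs = {k: v for (k, v) in kwargs.items() if k in field_ids}
    secret = SecretsVault(**client_kwargs).get_secret(kwargs['path'])
    # the newly required secret_field picks a single value out of the secret
    return secret['data'][kwargs['secret_field']]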

View File

@@ -1,7 +1,7 @@
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.server import PasswordGrantAuthorizer, SecretServer, ServerSecret
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
tss_inputs = {
'fields': [
@@ -17,6 +17,12 @@ tss_inputs = {
'help_text': _('The (Application) user username'),
'type': 'string',
},
{
'id': 'domain',
'label': _('Domain'),
'help_text': _('The (Application) user domain'),
'type': 'string',
},
{
'id': 'password',
'label': _('Password'),
@@ -44,12 +50,18 @@ tss_inputs = {
def tss_backend(**kwargs):
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
if 'domain' in kwargs:
authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer)
secret_dict = secret_server.get_secret(kwargs['secret_id'])
secret = ServerSecret(**secret_dict)
return secret.fields[kwargs['secret_field']].value
if isinstance(secret.fields[kwargs['secret_field']].value, str) == False:
return secret.fields[kwargs['secret_field']].value.text
else:
return secret.fields[kwargs['secret_field']].value
tss_plugin = CredentialPlugin(

View File

@@ -63,7 +63,7 @@ class RecordedQueryLog(object):
if not os.path.isdir(self.dest):
os.makedirs(self.dest)
progname = ' '.join(sys.argv)
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsbroadcast'):
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsrelay'):
if match in progname:
progname = match
break

View File

@@ -1,12 +1,14 @@
import os
import psycopg2
import select
from contextlib import contextmanager
from awx.settings.application_name import get_application_name
from django.conf import settings
from django.db import connection as pg_connection
NOT_READY = ([], [], [])
@@ -14,6 +16,29 @@ def get_local_queuename():
return settings.CLUSTER_HOST_ID
def get_task_queuename():
if os.getenv('AWX_COMPONENT') != 'web':
return settings.CLUSTER_HOST_ID
from awx.main.models.ha import Instance
random_task_instance = (
Instance.objects.filter(
node_type__in=(Instance.Types.CONTROL, Instance.Types.HYBRID),
node_state=Instance.States.READY,
enabled=True,
)
.only('hostname')
.order_by('?')
.first()
)
if random_task_instance is None:
raise ValueError('No task instances are READY and Enabled.')
return random_task_instance.hostname
class PubSub(object):
def __init__(self, conn):
self.conn = conn
@@ -60,10 +85,11 @@ def pg_bus_conn(new_connection=False):
'''
if new_connection:
conf = settings.DATABASES['default']
conn = psycopg2.connect(
dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf.get("OPTIONS", {})
)
conf = settings.DATABASES['default'].copy()
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy()
# Modify the application name to distinguish from other connections the process might use
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener')
conn = psycopg2.connect(dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf['OPTIONS'])
# Django connection.cursor().connection doesn't have autocommit=True on by default
conn.set_session(autocommit=True)
else:

View File

@@ -6,7 +6,7 @@ from django.conf import settings
from django.db import connection
import redis
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
from . import pg_bus_conn
@@ -21,7 +21,7 @@ class Control(object):
if service not in self.services:
raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service
self.queuename = host or get_local_queuename()
self.queuename = host or get_task_queuename()
def status(self, *args, **kwargs):
r = redis.Redis.from_url(settings.BROKER_URL)

View File

@@ -10,6 +10,7 @@ from django_guid import set_guid
from django_guid.utils import generate_guid
from awx.main.dispatch.worker import TaskWorker
from awx.main.utils.db import set_connection_name
logger = logging.getLogger('awx.main.dispatch.periodic')
@@ -21,6 +22,9 @@ class Scheduler(Scheduler):
def run():
ppid = os.getppid()
logger.warning('periodic beat started')
set_connection_name('periodic') # set application_name to distinguish from other dispatcher processes
while True:
if os.getppid() != ppid:
# if the parent PID changes, this process has been orphaned

View File

@@ -70,7 +70,7 @@ def reap_waiting(instance=None, status='failed', job_explanation=None, grace_per
reap_job(j, status, job_explanation=job_explanation)
def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None):
def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None, ref_time=None):
"""
Reap all jobs in running for this instance.
"""
@@ -79,9 +79,11 @@ def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=No
else:
hostname = instance.hostname
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
jobs = UnifiedJob.objects.filter(
Q(status='running') & (Q(execution_node=hostname) | Q(controller_node=hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
)
base_Q = Q(status='running') & (Q(execution_node=hostname) | Q(controller_node=hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
if ref_time:
jobs = UnifiedJob.objects.filter(base_Q & Q(started__lte=ref_time))
else:
jobs = UnifiedJob.objects.filter(base_Q)
if excluded_uuids:
jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
for j in jobs:
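For illustration, a hypothetical caller using the new ref_time cutoff, so jobs started after a known-good timestamp are left untouched while older running jobs are reaped:
# --- editor's illustrative sketch, not part of the diff ---
from django.utils.timezone import now, timedelta

# uses the reap() defined above
reap(instance=None, status='failed', ref_time=now() - timedelta(minutes=5))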

View File

@@ -18,6 +18,7 @@ from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime
from awx.main.utils.db import set_connection_name
if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -219,6 +220,7 @@ class BaseWorker(object):
def work_loop(self, queue, finished, idx, *args):
ppid = os.getppid()
signal_handler = WorkerSignalHandler()
set_connection_name('worker') # set application_name to distinguish from other dispatcher processes
while not signal_handler.kill_now:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too

View File

@@ -26,8 +26,8 @@ class TaskWorker(BaseWorker):
`awx.main.dispatch.publish`.
"""
@classmethod
def resolve_callable(cls, task):
@staticmethod
def resolve_callable(task):
"""
Transform a dotted notation task into an imported, callable function, e.g.,
@@ -46,7 +46,8 @@ class TaskWorker(BaseWorker):
return _call
def run_callable(self, body):
@staticmethod
def run_callable(body):
"""
Given some AMQP message, import the correct Python code and run it.
"""

View File

@@ -954,6 +954,16 @@ class OrderedManyToManyDescriptor(ManyToManyDescriptor):
def get_queryset(self):
return super(OrderedManyRelatedManager, self).get_queryset().order_by('%s__position' % self.through._meta.model_name)
def add(self, *objects):
if len(objects) > 1:
raise RuntimeError('Ordered many-to-many fields do not support multiple objects')
return super().add(*objects)
def remove(self, *objects):
if len(objects) > 1:
raise RuntimeError('Ordered many-to-many fields do not support multiple objects')
return super().remove(*objects)
return OrderedManyRelatedManager
return add_custom_queryset_to_many_related_manager(
@@ -971,13 +981,12 @@ class OrderedManyToManyField(models.ManyToManyField):
by a special `position` column on the M2M table
"""
def _update_m2m_position(self, sender, **kwargs):
if kwargs.get('action') in ('post_add', 'post_remove'):
order_with_respect_to = None
for field in sender._meta.local_fields:
if isinstance(field, models.ForeignKey) and isinstance(kwargs['instance'], field.related_model):
order_with_respect_to = field.name
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: kwargs['instance'].pk})):
def _update_m2m_position(self, sender, instance, action, **kwargs):
if action in ('post_add', 'post_remove'):
descriptor = getattr(instance, self.name)
order_with_respect_to = descriptor.source_field_name
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: instance.pk})):
if ig.position != i:
ig.position = i
ig.save()

View File

@@ -0,0 +1,22 @@
from awx.main.models import HostMetric
from django.core.management.base import BaseCommand
from django.conf import settings
class Command(BaseCommand):
"""
Run soft-deleting of HostMetrics
"""
help = 'Run soft-deleting of HostMetrics'
def add_arguments(self, parser):
parser.add_argument('--months-ago', type=int, dest='months-ago', action='store', help='Threshold in months for soft-deleting')
def handle(self, *args, **options):
months_ago = options.get('months-ago') or None
if not months_ago:
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_THRESHOLD', 12)
HostMetric.cleanup_task(months_ago)
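For illustration, the new command can be driven programmatically as well as from the CLI (the command name is assumed from context, e.g. `awx-manage cleanup_host_metrics --months-ago 18`):
# --- editor's illustrative sketch, not part of the diff ---
from django.core.management import call_command

call_command('cleanup_host_metrics', '--months-ago', '18')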

View File

@@ -0,0 +1,143 @@
import time
from urllib.parse import urljoin
from argparse import ArgumentTypeError
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
from django.db.models import Q
from django.utils.timezone import now
from awx.main.models import Instance, UnifiedJob
class AWXInstance:
def __init__(self, **filter):
self.filter = filter
self.get_instance()
def get_instance(self):
filter = self.filter if self.filter is not None else dict(hostname=settings.CLUSTER_HOST_ID)
qs = Instance.objects.filter(**filter)
if not qs.exists():
raise ValueError(f"No AWX instance found with {filter} parameters")
self.instance = qs.first()
def disable(self):
if self.instance.enabled:
self.instance.enabled = False
self.instance.save()
return True
def enable(self):
if not self.instance.enabled:
self.instance.enabled = True
self.instance.save()
return True
def jobs(self):
return UnifiedJob.objects.filter(
Q(controller_node=self.instance.hostname) | Q(execution_node=self.instance.hostname), status__in=("running", "waiting")
)
def jobs_pretty(self):
jobs = []
for j in self.jobs():
job_started = j.started if j.started else now()
# similar calculation of `elapsed` as the corresponding serializer
# does
td = now() - job_started
elapsed = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / (10**6 * 1.0)
elapsed = float(elapsed)
details = dict(
name=j.name,
url=j.get_ui_url(),
elapsed=elapsed,
)
jobs.append(details)
jobs = sorted(jobs, reverse=True, key=lambda j: j["elapsed"])
return ", ".join([f"[\"{j['name']}\"]({j['url']})" for j in jobs])
def instance_pretty(self):
instance = (
self.instance.hostname,
urljoin(settings.TOWER_URL_BASE, f"/#/instances/{self.instance.pk}/details"),
)
return f"[\"{instance[0]}\"]({instance[1]})"
class Command(BaseCommand):
help = "Disable instance, optionally waiting for all its managed jobs to finish."
@staticmethod
def ge_1(arg):
if arg == "inf":
return float("inf")
int_arg = int(arg)
if int_arg < 1:
raise ArgumentTypeError(f"The value must be a positive number >= 1. Provided: \"{arg}\"")
return int_arg
def add_arguments(self, parser):
filter_group = parser.add_mutually_exclusive_group()
filter_group.add_argument(
"--hostname",
type=str,
default=settings.CLUSTER_HOST_ID,
help=f"{Instance.hostname.field.help_text} Defaults to the hostname of the machine where the Python interpreter is currently executing".strip(),
)
filter_group.add_argument("--id", type=self.ge_1, help=Instance.id.field.help_text)
parser.add_argument(
"--wait",
action="store_true",
help="Wait for jobs managed by the instance to finish. With default retry arguments waits ~1h",
)
parser.add_argument(
"--retry",
type=self.ge_1,
default=120,
help="Number of retries when waiting for jobs to finish. Default: 120. Also accepts \"inf\" to wait indefinitely",
)
parser.add_argument(
"--retry_sleep",
type=self.ge_1,
default=30,
help="Number of seconds to sleep before consequtive retries when waiting. Default: 30",
)
def handle(self, *args, **options):
try:
filter = dict(id=options["id"]) if options["id"] is not None else dict(hostname=options["hostname"])
instance = AWXInstance(**filter)
except ValueError as e:
raise CommandError(e)
if instance.disable():
self.stdout.write(self.style.SUCCESS(f"Instance {instance.instance_pretty()} has been disabled"))
else:
self.stdout.write(f"Instance {instance.instance_pretty()} has already been disabled")
if not options["wait"]:
return
rc = 1
while instance.jobs().count() > 0:
if rc < options["retry"]:
self.stdout.write(
f"{rc}/{options['retry']}: Waiting {options['retry_sleep']}s before the next attempt to see if the following instance' managed jobs have finished: {instance.jobs_pretty()}"
)
rc += 1
time.sleep(options["retry_sleep"])
else:
raise CommandError(
f"{rc}/{options['retry']}: No more retry attempts left, but the instance still has associated managed jobs: {instance.jobs_pretty()}"
)
else:
self.stdout.write(self.style.SUCCESS("Done waiting for instance' managed jobs to finish!"))

View File

@@ -1,5 +1,6 @@
from django.core.management.base import BaseCommand, CommandError
from awx.main.tasks.system import clear_setting_cache
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
@@ -31,5 +32,7 @@ class Command(BaseCommand):
else:
raise CommandError('Please pass --enable flag to allow local auth or --disable flag to disable local auth')
clear_setting_cache.delay(['DISABLE_LOCAL_AUTH'])
def handle(self, **options):
self._enable_disable_auth(options.get('enable'), options.get('disable'))

View File

@@ -1,53 +1,230 @@
from django.core.management.base import BaseCommand
import datetime
from django.core.serializers.json import DjangoJSONEncoder
from awx.main.models.inventory import HostMetric
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.main.analytics.collectors import config
import json
import sys
import tempfile
import tarfile
import csv
CSV_PREFERRED_ROW_COUNT = 500000
BATCHED_FETCH_COUNT = 10000
class Command(BaseCommand):
help = 'This is for offline licensing usage'
def host_metric_queryset(self, result, offset=0, limit=BATCHED_FETCH_COUNT):
list_of_queryset = list(
result.values(
'id',
'hostname',
'first_automation',
'last_automation',
'last_deleted',
'automated_counter',
'deleted_counter',
'deleted',
'used_in_inventories',
).order_by('first_automation')[offset : offset + limit]
)
return list_of_queryset
def host_metric_summary_monthly_queryset(self, result, offset=0, limit=BATCHED_FETCH_COUNT):
list_of_queryset = list(
result.values(
'id',
'date',
'license_consumed',
'license_capacity',
'hosts_added',
'hosts_deleted',
'indirectly_managed_hosts',
).order_by(
'date'
)[offset : offset + limit]
)
return list_of_queryset
def paginated_db_retrieval(self, type, filter_kwargs, rows_per_file):
offset = 0
list_of_queryset = []
while True:
if type == 'host_metric':
result = HostMetric.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_queryset(result, offset, rows_per_file)
elif type == 'host_metric_summary_monthly':
result = HostMetricSummaryMonthly.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_summary_monthly_queryset(result, offset, rows_per_file)
if not list_of_queryset:
break
else:
yield list_of_queryset
offset += len(list_of_queryset)
def controlled_db_retrieval(self, type, filter_kwargs, offset=0, fetch_count=BATCHED_FETCH_COUNT):
if type == 'host_metric':
result = HostMetric.objects.filter(**filter_kwargs)
return self.host_metric_queryset(result, offset, fetch_count)
elif type == 'host_metric_summary_monthly':
result = HostMetricSummaryMonthly.objects.filter(**filter_kwargs)
return self.host_metric_summary_monthly_queryset(result, offset, fetch_count)
def write_to_csv(self, csv_file, list_of_queryset, always_header, first_write=False, mode='a'):
with open(csv_file, mode, newline='') as output_file:
try:
keys = list_of_queryset[0].keys() if list_of_queryset else []
dict_writer = csv.DictWriter(output_file, keys)
if always_header or first_write:
dict_writer.writeheader()
dict_writer.writerows(list_of_queryset)
except Exception as e:
print(e)
def csv_for_tar(self, temp_dir, type, filter_kwargs, rows_per_file, always_header=True):
for index, list_of_queryset in enumerate(self.paginated_db_retrieval(type, filter_kwargs, rows_per_file)):
csv_file = f'{temp_dir}/{type}{index+1}.csv'
arcname_file = f'{type}{index+1}.csv'
first_write = True if index == 0 else False
self.write_to_csv(csv_file, list_of_queryset, always_header, first_write, 'w')
yield csv_file, arcname_file
def csv_for_tar_batched_fetch(self, temp_dir, type, filter_kwargs, rows_per_file, always_header=True):
csv_iteration = 1
offset = 0
rows_written_per_csv = 0
to_fetch = BATCHED_FETCH_COUNT
while True:
list_of_queryset = self.controlled_db_retrieval(type, filter_kwargs, offset, to_fetch)
if not list_of_queryset:
break
csv_file = f'{temp_dir}/{type}{csv_iteration}.csv'
arcname_file = f'{type}{csv_iteration}.csv'
self.write_to_csv(csv_file, list_of_queryset, always_header)
offset += to_fetch
rows_written_per_csv += to_fetch
always_header = False
remaining_rows_per_csv = rows_per_file - rows_written_per_csv
if not remaining_rows_per_csv:
yield csv_file, arcname_file
rows_written_per_csv = 0
always_header = True
to_fetch = BATCHED_FETCH_COUNT
csv_iteration += 1
elif remaining_rows_per_csv < BATCHED_FETCH_COUNT:
to_fetch = remaining_rows_per_csv
if rows_written_per_csv:
yield csv_file, arcname_file
def config_for_tar(self, options, temp_dir):
config_json = json.dumps(config(options.get('since')))
config_file = f'{temp_dir}/config.json'
arcname_file = 'config.json'
with open(config_file, 'w') as f:
f.write(config_json)
return config_file, arcname_file
def output_json(self, options, filter_kwargs):
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in self.csv_for_tar(temp_dir, options.get('json', 'host_metric'), filter_kwargs, BATCHED_FETCH_COUNT, True):
csv_file = csv_detail[0]
with open(csv_file) as f:
reader = csv.DictReader(f)
rows = list(reader)
json_result = json.dumps(rows, cls=DjangoJSONEncoder)
print(json_result)
def output_csv(self, options, filter_kwargs):
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in self.csv_for_tar(temp_dir, options.get('csv', 'host_metric'), filter_kwargs, BATCHED_FETCH_COUNT, False):
csv_file = csv_detail[0]
with open(csv_file) as f:
sys.stdout.write(f.read())
def output_tarball(self, options, filter_kwargs):
always_header = True
rows_per_file = options['rows_per_file'] or CSV_PREFERRED_ROW_COUNT
tar = tarfile.open("./host_metrics.tar.gz", "w:gz")
if rows_per_file <= BATCHED_FETCH_COUNT:
csv_function = self.csv_for_tar
else:
csv_function = self.csv_for_tar_batched_fetch
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in csv_function(temp_dir, 'host_metric', filter_kwargs, rows_per_file, always_header):
tar.add(csv_detail[0], arcname=csv_detail[1])
for csv_detail in csv_function(temp_dir, 'host_metric_summary_monthly', filter_kwargs, rows_per_file, always_header):
tar.add(csv_detail[0], arcname=csv_detail[1])
config_file, arcname_file = self.config_for_tar(options, temp_dir)
tar.add(config_file, arcname=arcname_file)
tar.close()
def add_arguments(self, parser):
parser.add_argument('--since', type=datetime.datetime.fromisoformat, help='Start Date in ISO format YYYY-MM-DD')
parser.add_argument('--until', type=datetime.datetime.fromisoformat, help='End Date in ISO format YYYY-MM-DD')
parser.add_argument('--json', action='store_true', help='Select output as JSON')
parser.add_argument('--json', type=str, const='host_metric', nargs='?', help='Select output as JSON for host_metric or host_metric_summary_monthly')
parser.add_argument('--csv', type=str, const='host_metric', nargs='?', help='Select output as CSV for host_metric or host_metric_summary_monthly')
parser.add_argument('--tarball', action='store_true', help=f'Package CSV files into a tar with up to {CSV_PREFERRED_ROW_COUNT} rows')
parser.add_argument('--rows_per_file', type=int, help=f'Split rows in chunks of {CSV_PREFERRED_ROW_COUNT}')
def handle(self, *args, **options):
since = options.get('since')
until = options.get('until')
if since is None and until is None:
print("No Arguments received")
return None
if since is not None and since.tzinfo is None:
since = since.replace(tzinfo=datetime.timezone.utc)
if until is not None and until.tzinfo is None:
until = until.replace(tzinfo=datetime.timezone.utc)
filter_kwargs = {}
if since is not None:
filter_kwargs['last_automation__gte'] = since
if until is not None:
filter_kwargs['last_automation__lte'] = until
result = HostMetric.objects.filter(**filter_kwargs)
filter_kwargs_host_metrics_summary = {}
if since is not None:
filter_kwargs_host_metrics_summary['date__gte'] = since
if options['rows_per_file'] and options.get('rows_per_file') > CSV_PREFERRED_ROW_COUNT:
print(f"rows_per_file exceeds the allowable limit of {CSV_PREFERRED_ROW_COUNT}.")
return
# if --json flag is set, output the result in json format
if options['json']:
list_of_queryset = list(result.values('hostname', 'first_automation', 'last_automation'))
json_result = json.dumps(list_of_queryset, cls=DjangoJSONEncoder)
print(json_result)
self.output_json(options, filter_kwargs)
elif options['csv']:
self.output_csv(options, filter_kwargs)
elif options['tarball']:
self.output_tarball(options, filter_kwargs)
# --json flag is not set, output in plain text
else:
print(f"Total Number of hosts automated: {len(result)}")
for item in result:
print(f"Printing up to {BATCHED_FETCH_COUNT} automated hosts:")
result = HostMetric.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_queryset(result, 0, BATCHED_FETCH_COUNT)
for item in list_of_queryset:
print(
"Hostname : {hostname} | first_automation : {first_automation} | last_automation : {last_automation}".format(
hostname=item.hostname, first_automation=item.first_automation, last_automation=item.last_automation
hostname=item['hostname'], first_automation=item['first_automation'], last_automation=item['last_automation']
)
)
return
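For illustration, programmatic equivalents of the reworked output modes (command name assumed from context):
# --- editor's illustrative sketch, not part of the diff ---
from django.core.management import call_command

call_command('host_metric', '--since', '2023-01-01', '--json', 'host_metric_summary_monthly')
call_command('host_metric', '--since', '2023-01-01', '--csv')  # --csv defaults to host_metric
call_command('host_metric', '--since', '2023-01-01', '--tarball', '--rows_per_file', '250000')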

View File

@@ -458,12 +458,19 @@ class Command(BaseCommand):
# TODO: We disable variable overwrite here in case user-defined inventory variables get
# mangled. But we still need to figure out a better way of processing multiple inventory
# update variables mixing with each other.
all_obj = self.inventory
db_variables = all_obj.variables_dict
db_variables.update(self.all_group.variables)
if db_variables != all_obj.variables_dict:
all_obj.variables = json.dumps(db_variables)
all_obj.save(update_fields=['variables'])
# issue for this: https://github.com/ansible/awx/issues/11623
if self.inventory.kind == 'constructed' and self.inventory_source.overwrite_vars:
# NOTE: we had to add an exception case to not merge variables
# to make constructed inventory coherent
db_variables = self.all_group.variables
else:
db_variables = self.inventory.variables_dict
db_variables.update(self.all_group.variables)
if db_variables != self.inventory.variables_dict:
self.inventory.variables = json.dumps(db_variables)
self.inventory.save(update_fields=['variables'])
logger.debug('Inventory variables updated from "all" group')
else:
logger.debug('Inventory variables unmodified')
@@ -522,16 +529,32 @@ class Command(BaseCommand):
def _update_db_host_from_mem_host(self, db_host, mem_host):
# Update host variables.
db_variables = db_host.variables_dict
if self.overwrite_vars:
db_variables = mem_host.variables
else:
db_variables.update(mem_host.variables)
mem_variables = mem_host.variables
update_fields = []
# Update host instance_id.
instance_id = self._get_instance_id(mem_variables)
if instance_id != db_host.instance_id:
old_instance_id = db_host.instance_id
db_host.instance_id = instance_id
update_fields.append('instance_id')
if self.inventory.kind == 'constructed':
# remove towervars so the constructed hosts do not have extra variables
for prefix in ('host', 'tower'):
for var in ('remote_{}_enabled', 'remote_{}_id'):
mem_variables.pop(var.format(prefix), None)
if self.overwrite_vars:
db_variables = mem_variables
else:
db_variables.update(mem_variables)
if db_variables != db_host.variables_dict:
db_host.variables = json.dumps(db_variables)
update_fields.append('variables')
# Update host enabled flag.
enabled = self._get_enabled(mem_host.variables)
enabled = self._get_enabled(mem_variables)
if enabled is not None and db_host.enabled != enabled:
db_host.enabled = enabled
update_fields.append('enabled')
@@ -540,12 +563,6 @@ class Command(BaseCommand):
old_name = db_host.name
db_host.name = mem_host.name
update_fields.append('name')
# Update host instance_id.
instance_id = self._get_instance_id(mem_host.variables)
if instance_id != db_host.instance_id:
old_instance_id = db_host.instance_id
db_host.instance_id = instance_id
update_fields.append('instance_id')
# Update host and display message(s) on what changed.
if update_fields:
db_host.save(update_fields=update_fields)
@@ -654,13 +671,19 @@ class Command(BaseCommand):
mem_host = self.all_group.all_hosts[mem_host_name]
import_vars = mem_host.variables
host_desc = import_vars.pop('_awx_description', 'imported')
host_attrs = dict(variables=json.dumps(import_vars), description=host_desc)
host_attrs = dict(description=host_desc)
enabled = self._get_enabled(mem_host.variables)
if enabled is not None:
host_attrs['enabled'] = enabled
if self.instance_id_var:
instance_id = self._get_instance_id(mem_host.variables)
host_attrs['instance_id'] = instance_id
if self.inventory.kind == 'constructed':
# remove towervars so the constructed hosts do not have extra variables
for prefix in ('host', 'tower'):
for var in ('remote_{}_enabled', 'remote_{}_id'):
import_vars.pop(var.format(prefix), None)
host_attrs['variables'] = json.dumps(import_vars)
try:
sanitize_jinja(mem_host_name)
except ValueError as e:
@@ -851,6 +874,7 @@ class Command(BaseCommand):
logger.info('Updating inventory %d: %s' % (inventory.pk, inventory.name))
# Create ad-hoc inventory source and inventory update objects
ee = get_default_execution_environment()
with ignore_inventory_computed_fields():
source = Command.get_source_absolute_path(raw_source)
@@ -860,14 +884,22 @@ class Command(BaseCommand):
source_path=os.path.abspath(source),
overwrite=bool(options.get('overwrite', False)),
overwrite_vars=bool(options.get('overwrite_vars', False)),
execution_environment=ee,
)
inventory_update = inventory_source.create_inventory_update(
_eager_fields=dict(status='running', job_args=json.dumps(sys.argv), job_env=dict(os.environ.items()), job_cwd=os.getcwd())
_eager_fields=dict(
status='running', job_args=json.dumps(sys.argv), job_env=dict(os.environ.items()), job_cwd=os.getcwd(), execution_environment=ee
)
)
data = AnsibleInventoryLoader(source=source, verbosity=verbosity).load()
try:
data = AnsibleInventoryLoader(source=source, verbosity=verbosity).load()
logger.debug('Finished loading from source: %s', source)
logger.debug('Finished loading from source: %s', source)
except SystemExit:
logger.debug("Error occurred while running ansible-inventory")
inventory_update.cancel()
sys.exit(1)
status, tb, exc = 'error', '', None
try:

View File

@@ -44,16 +44,18 @@ class Command(BaseCommand):
for x in ig.instances.all():
color = '\033[92m'
end_color = '\033[0m'
if x.capacity == 0 and x.node_type != 'hop':
color = '\033[91m'
if not x.enabled:
color = '\033[90m[DISABLED] '
if no_color:
color = ''
end_color = ''
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}\033[0m')
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}{end_color}')
print()

View File

@@ -0,0 +1,32 @@
import logging
import json
from django.core.management.base import BaseCommand
from awx.main.dispatch import pg_bus_conn
from awx.main.dispatch.worker.task import TaskWorker
logger = logging.getLogger('awx.main.cache_clear')
class Command(BaseCommand):
"""
Cache Clear
Runs as a management command and starts a daemon that listens for a pg_notify message to clear the cache.
"""
help = 'Launch the cache clear daemon'
def handle(self, *arg, **options):
try:
with pg_bus_conn(new_connection=True) as conn:
conn.listen("tower_settings_change")
for e in conn.events(yield_timeouts=True):
if e is not None:
body = json.loads(e.payload)
logger.info(f"Cache clear request received. Clearing now, payload: {e.payload}")
TaskWorker.run_callable(body)
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in cache clear main loop')
raise
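For illustration, the daemon above reacts to pg_notify on "tower_settings_change"; publishing the clear_setting_cache task, as the local-auth command earlier in this diff does, is assumed to be what lands on that channel and feeds run_callable here:
# --- editor's illustrative sketch, not part of the diff ---
from awx.main.tasks.system import clear_setting_cache

clear_setting_cache.delay(['DISABLE_LOCAL_AUTH'])  # each listener clears its cached value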

View File

@@ -8,7 +8,7 @@ from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
@@ -76,7 +76,7 @@ class Command(BaseCommand):
consumer = None
try:
queues = ['tower_broadcast_all', get_local_queuename()]
queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4))
consumer.run()
except KeyboardInterrupt:

View File

@@ -0,0 +1,67 @@
import json
import logging
import os
import time
from django.core.management.base import BaseCommand
from django.conf import settings
from awx.main.dispatch import pg_bus_conn
logger = logging.getLogger('awx.main.commands.run_heartbeet')
class Command(BaseCommand):
help = 'Launch the web server beacon (heartbeet)'
def print_banner(self):
heartbeet = r"""
********** **********
************* *************
*****************************
***********HEART***********
*************************
*******************
*************** _._
*********** /`._ `'. __
******* \ .\| \ _'` `)
*** (``_) \| ).'` /`- /
* `\ `;\_ `\\//`-'` /
\ `'.'.| / __/`
`'--v_|/`'`
__||-._
/'` `-`` `'\\
/ .'` )
\ BEET ' )
\. /
'. /'`
`) |
//
'(.
`\`.
``"""
print(heartbeet)
def construct_payload(self, action='online'):
payload = {
'hostname': settings.CLUSTER_HOST_ID,
'ip': os.environ.get('MY_POD_IP'),
'action': action,
}
return json.dumps(payload)
def do_hearbeat_loop(self):
with pg_bus_conn(new_connection=True) as conn:
while True:
logger.debug('Sending heartbeat')
conn.notify('web_heartbeet', self.construct_payload())
time.sleep(settings.BROADCAST_WEBSOCKET_BEACON_FROM_WEB_RATE_SECONDS)
# TODO: Send a message with action=offline if we notice a SIGTERM or SIGINT
# (wsrelay can use this to remove the node quicker)
def handle(self, *arg, **options):
self.print_banner()
# Note: We don't really try any reconnect logic to pg_notify here,
# just let supervisor restart if we fail.
self.do_hearbeat_loop()

View File

@@ -0,0 +1,41 @@
import logging
import json
from django.core.management.base import BaseCommand
from django.conf import settings
from django.core.cache import cache
from awx.main.dispatch import pg_bus_conn
from awx.main.dispatch.worker.task import TaskWorker
from awx.main.utils.external_logging import reconfigure_rsyslog
logger = logging.getLogger('awx.main.rsyslog_configurer')
class Command(BaseCommand):
"""
Rsyslog Configurer
Runs as a management command and starts the rsyslog configurer daemon. The daemon listens
for pg_notify and then calls reconfigure_rsyslog.
"""
help = 'Launch the rsyslog_configurer daemon'
def handle(self, *arg, **options):
try:
with pg_bus_conn(new_connection=True) as conn:
conn.listen("rsyslog_configurer")
# reconfigure rsyslog on start up
reconfigure_rsyslog()
for e in conn.events(yield_timeouts=True):
if e is not None:
logger.info("Change in logging settings found. Restarting rsyslogd")
# clear the cache of relevant settings then restart
setting_keys = [k for k in dir(settings) if k.startswith('LOG_AGGREGATOR')]
cache.delete_many(setting_keys)
settings._awx_conf_memoizedcache.clear()
body = json.loads(e.payload)
TaskWorker.run_callable(body)
except Exception:
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
logger.exception('Encountered unhandled error in rsyslog_configurer main loop')
raise

View File

@@ -13,13 +13,13 @@ from django.db import connection
from django.db.migrations.executor import MigrationExecutor
from awx.main.analytics.broadcast_websocket import (
BroadcastWebsocketStatsManager,
RelayWebsocketStatsManager,
safe_name,
)
from awx.main.wsbroadcast import BroadcastWebsocketManager
from awx.main.wsrelay import WebSocketRelayManager
logger = logging.getLogger('awx.main.wsbroadcast')
logger = logging.getLogger('awx.main.wsrelay')
class Command(BaseCommand):
@@ -98,8 +98,9 @@ class Command(BaseCommand):
try:
executor = MigrationExecutor(connection)
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
connection.close() # Because of async nature, main loop will use new connection, so close this
except Exception as exc:
logger.info(f'Error on startup of run_wsbroadcast (error: {exc}), retry in 10s...')
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), retry in 10s...')
time.sleep(10)
return
@@ -130,9 +131,9 @@ class Command(BaseCommand):
if options.get('status'):
try:
stats_all = BroadcastWebsocketStatsManager.get_stats_sync()
stats_all = RelayWebsocketStatsManager.get_stats_sync()
except redis.exceptions.ConnectionError as e:
print(f"Unable to get Broadcast Websocket Status. Failed to connect to redis {e}")
print(f"Unable to get Relay Websocket Status. Failed to connect to redis {e}")
return
data = {}
@@ -151,22 +152,19 @@ class Command(BaseCommand):
host_stats = Command.get_connection_status(hostnames, data)
lines = Command._format_lines(host_stats)
print(f'Broadcast websocket connection status from "{my_hostname}" to:')
print(f'Relay websocket connection status from "{my_hostname}" to:')
print('\n'.join(lines))
host_stats = Command.get_connection_stats(hostnames, data)
lines = Command._format_lines(host_stats)
print(f'\nBroadcast websocket connection stats from "{my_hostname}" to:')
print(f'\nRelay websocket connection stats from "{my_hostname}" to:')
print('\n'.join(lines))
return
try:
broadcast_websocket_mgr = BroadcastWebsocketManager()
task = broadcast_websocket_mgr.start()
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
websocket_relay_manager = WebSocketRelayManager()
asyncio.run(websocket_relay_manager.run())
except KeyboardInterrupt:
logger.debug('Terminating Websocket Broadcaster')
logger.info('Terminating Websocket Relayer')

View File

@@ -79,6 +79,11 @@ class HostManager(models.Manager):
return qs
class HostMetricActiveManager(models.Manager):
def get_queryset(self):
return super().get_queryset().filter(deleted=False)
def get_ig_ig_mapping(ig_instance_mapping, instance_ig_mapping):
# Create IG mapping by union of all groups their instances are members of
ig_ig_mapping = {}

View File

@@ -0,0 +1,17 @@
# Generated by Django 3.2.16 on 2023-01-05 15:39
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0174_ensure_org_ee_admin_roles'),
]
operations = [
migrations.AddField(
model_name='workflowjob',
name='is_bulk_job',
field=models.BooleanField(default=False),
),
]

View File

@@ -0,0 +1,32 @@
# Generated by Django 3.2.16 on 2023-03-03 20:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0175_workflowjob_is_bulk_job'),
]
operations = [
migrations.AddField(
model_name='inventorysource',
name='scm_branch',
field=models.CharField(
blank=True,
default='',
help_text='Inventory source SCM branch. Project default used if blank. Only allowed if project allow_override field is set to true.',
max_length=1024,
),
),
migrations.AddField(
model_name='inventoryupdate',
name='scm_branch',
field=models.CharField(
blank=True,
default='',
help_text='Inventory source SCM branch. Project default used if blank. Only allowed if project allow_override field is set to true.',
max_length=1024,
),
),
]

View File

@@ -0,0 +1,48 @@
# Generated by Django 3.2.16 on 2023-02-17 02:45
import awx.main.fields
from django.db import migrations
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0176_inventorysource_scm_branch'),
]
operations = [
migrations.AddField(
model_name='instancegroup',
name='admin_role',
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
parent_role=['singleton:system_administrator'],
related_name='+',
to='main.role',
),
preserve_default='True',
),
migrations.AddField(
model_name='instancegroup',
name='read_role',
field=awx.main.fields.ImplicitRoleField(
editable=False,
null='True',
on_delete=django.db.models.deletion.CASCADE,
parent_role=['singleton:system_auditor', 'use_role', 'admin_role'],
related_name='+',
to='main.role',
),
preserve_default='True',
),
migrations.AddField(
model_name='instancegroup',
name='use_role',
field=awx.main.fields.ImplicitRoleField(
editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['admin_role'], related_name='+', to='main.role'
),
preserve_default='True',
),
]


@@ -0,0 +1,18 @@
# Generated by Django 3.2.16 on 2023-02-17 02:45
from django.db import migrations
from awx.main.migrations import _rbac as rbac
from awx.main.migrations import _migration_utils as migration_utils
from awx.main.migrations import _OrgAdmin_to_use_ig as oamigrate
from awx.main.migrations import ActivityStreamDisabledMigration
class Migration(ActivityStreamDisabledMigration):
dependencies = [
('main', '0177_instance_group_role_addition'),
]
operations = [
migrations.RunPython(migration_utils.set_current_apps_for_migrations),
migrations.RunPython(rbac.create_roles),
migrations.RunPython(oamigrate.migrate_org_admin_to_use),
]


@@ -0,0 +1,18 @@
# Generated by Django 3.2.16 on 2023-03-16 15:16
from django.db import migrations
from awx.main.migrations._credentialtypes import migrate_credential_type
from awx.main.models import CredentialType
class Migration(migrations.Migration):
def update_cyberark_plugin_names(apps, schema_editor):
CredentialType.setup_tower_managed_defaults(apps)
migrate_credential_type(apps, 'aim')
migrate_credential_type(apps, 'conjur')
dependencies = [
('main', '0178_instance_group_admin_migration'),
]
operations = [migrations.RunPython(update_cyberark_plugin_names)]


@@ -0,0 +1,43 @@
# Generated by Django 3.2.16 on 2023-02-03 09:40
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0179_change_cyberark_plugin_names'),
]
operations = [
migrations.AlterField(model_name='hostmetric', name='hostname', field=models.CharField(max_length=512, primary_key=False, serialize=True, unique=True)),
migrations.AddField(
model_name='hostmetric',
name='last_deleted',
field=models.DateTimeField(db_index=True, null=True, help_text='When the host was last deleted'),
),
migrations.AddField(
model_name='hostmetric',
name='automated_counter',
field=models.BigIntegerField(default=0, help_text='How many times was the host automated'),
),
migrations.AddField(
model_name='hostmetric',
name='deleted_counter',
field=models.IntegerField(default=0, help_text='How many times was the host deleted'),
),
migrations.AddField(
model_name='hostmetric',
name='deleted',
field=models.BooleanField(
default=False, help_text='Boolean flag saying whether the host is deleted and therefore not counted into the subscription consumption'
),
),
migrations.AddField(
model_name='hostmetric',
name='used_in_inventories',
field=models.IntegerField(null=True, help_text='How many inventories contain this host'),
),
migrations.AddField(
model_name='hostmetric', name='id', field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')
),
]


@@ -0,0 +1,33 @@
# Generated by Django 3.2.16 on 2023-02-10 12:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0180_add_hostmetric_fields'),
]
operations = [
migrations.CreateModel(
name='HostMetricSummaryMonthly',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date', models.DateField(unique=True)),
('license_consumed', models.BigIntegerField(default=0, help_text='How many unique hosts are consumed from the license')),
('license_capacity', models.BigIntegerField(default=0, help_text="License capacity as max. number of unique hosts")),
(
'hosts_added',
models.IntegerField(default=0, help_text='How many hosts were added in the associated month, consuming more license capacity'),
),
(
'hosts_deleted',
models.IntegerField(default=0, help_text='How many hosts were deleted in the associated month, freeing the license capacity'),
),
(
'indirectly_managed_hosts',
models.IntegerField(default=0, help_text='Manually entered number of indirectly managed hosts for a certain month'),
),
],
),
]


@@ -1,21 +1,37 @@
# Generated by Django 3.2.16 on 2022-12-07 14:20
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0174_ensure_org_ee_admin_roles'),
('main', '0181_hostmetricsummarymonthly'),
]
operations = [
migrations.CreateModel(
name='InventoryConstructedInventoryMembership',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('position', models.PositiveIntegerField(db_index=True, default=None, null=True)),
(
'constructed_inventory',
models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.inventory', related_name='constructed_inventory_memberships'),
),
('input_inventory', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.inventory')),
],
),
migrations.AddField(
model_name='inventory',
name='input_inventories',
field=models.ManyToManyField(
field=awx.main.fields.OrderedManyToManyField(
blank=True,
through_fields=('constructed_inventory', 'input_inventory'),
help_text='Only valid for constructed inventories, this links to the inventories that will be used.',
related_name='destination_inventories',
through='main.InventoryConstructedInventoryMembership',
to='main.Inventory',
),
),
@@ -106,4 +122,17 @@ class Migration(migrations.Migration):
help_text='This field is deprecated and will be removed in a future release. Regex where only matching hosts will be imported.',
),
),
migrations.AddField(
model_name='jobhostsummary',
name='constructed_host',
field=models.ForeignKey(
default=None,
editable=False,
help_text='Only for jobs run against constructed inventories, this links to the host inside the constructed inventory.',
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='constructed_host_summaries',
to='main.host',
),
),
]
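
For context, a hedged sketch (not part of the diff) of how the new through model keeps input inventories in a defined order for a constructed inventory; org, inv_a and inv_b are hypothetical objects assumed to be in scope.

from awx.main.models import Inventory, InventoryConstructedInventoryMembership

constructed = Inventory.objects.create(name='constructed-demo', kind='constructed', organization=org)
for position, inv in enumerate([inv_a, inv_b]):
    InventoryConstructedInventoryMembership.objects.create(
        constructed_inventory=constructed, input_inventory=inv, position=position
    )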


@@ -0,0 +1,20 @@
import logging
from awx.main.models import Organization
logger = logging.getLogger('awx.main.migrations')
def migrate_org_admin_to_use(apps, schema_editor):
logger.info('Initiated migration from Org admin to use role')
roles_added = 0
for org in Organization.objects.prefetch_related('admin_role__members').iterator():
igs = list(org.instance_groups.all())
if not igs:
continue
for admin in org.admin_role.members.filter(is_superuser=False):
for ig in igs:
ig.use_role.members.add(admin)
roles_added += 1
if roles_added:
logger.info(f'Migration converted {roles_added} from organization admin to use role')


@@ -1,6 +1,9 @@
import logging
from awx.main.models import CredentialType
from django.db.models import Q
logger = logging.getLogger('awx.main.migrations')
DEPRECATED_CRED_KIND = {
'rax': {
@@ -76,3 +79,14 @@ def add_tower_verify_field(apps, schema_editor):
def remove_become_methods(apps, schema_editor):
# this is no longer necessary; schemas are defined in code
pass
def migrate_credential_type(apps, namespace):
ns_types = apps.get_model('main', 'CredentialType').objects.filter(namespace=namespace).order_by('created')
if ns_types.count() == 2:
original, renamed = ns_types.all()
logger.info(f'There are credential types to migrate in the "{namespace}" namespace: {original.name}')
apps.get_model('main', 'Credential').objects.filter(credential_type_id=original.id).update(credential_type_id=renamed.id)
logger.info(f'Removing old credential type: {renamed.name}')
original.delete()


@@ -29,6 +29,7 @@ def create_roles(apps, schema_editor):
'Project',
'Credential',
'JobTemplate',
'InstanceGroup',
]
]


@@ -16,7 +16,9 @@ from awx.main.models.inventory import ( # noqa
Group,
Host,
HostMetric,
HostMetricSummaryMonthly,
Inventory,
InventoryConstructedInventoryMembership,
InventorySource,
InventoryUpdate,
SmartInventoryMembership,


@@ -7,6 +7,7 @@ from collections import defaultdict
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from django.db import models, DatabaseError
from django.db.models.functions import Cast
from django.utils.dateparse import parse_datetime
from django.utils.text import Truncator
from django.utils.timezone import utc, now
@@ -536,25 +537,38 @@ class JobEvent(BasePlaybookEvent):
return
job = self.job
from awx.main.models import Host, JobHostSummary, HostMetric # circular import
from awx.main.models import Host, JobHostSummary # circular import
if self.job.inventory.kind == 'constructed':
all_hosts = Host.objects.filter(id__in=self.job.inventory.hosts.values_list(Cast('instance_id', output_field=models.IntegerField()))).only(
'id', 'name'
)
constructed_host_map = self.host_map
host_map = {host.name: host.id for host in all_hosts}
else:
all_hosts = Host.objects.filter(pk__in=self.host_map.values()).only('id', 'name')
constructed_host_map = {}
host_map = self.host_map
all_hosts = Host.objects.filter(pk__in=self.host_map.values()).only('id', 'name')
existing_host_ids = set(h.id for h in all_hosts)
summaries = dict()
updated_hosts_list = list()
for host in hostnames:
updated_hosts_list.append(host.lower())
host_id = self.host_map.get(host, None)
host_id = host_map.get(host)
if host_id not in existing_host_ids:
host_id = None
constructed_host_id = constructed_host_map.get(host)
host_stats = {}
for stat in ('changed', 'dark', 'failures', 'ignored', 'ok', 'processed', 'rescued', 'skipped'):
try:
host_stats[stat] = self.event_data.get(stat, {}).get(host, 0)
except AttributeError: # in case event_data[stat] isn't a dict.
pass
summary = JobHostSummary(created=now(), modified=now(), job_id=job.id, host_id=host_id, host_name=host, **host_stats)
summary = JobHostSummary(
created=now(), modified=now(), job_id=job.id, host_id=host_id, constructed_host_id=constructed_host_id, host_name=host, **host_stats
)
summary.failed = bool(summary.dark or summary.failures)
summaries[(host_id, host)] = summary
@@ -575,12 +589,26 @@ class JobEvent(BasePlaybookEvent):
Host.objects.bulk_update(list(updated_hosts), ['last_job_id', 'last_job_host_summary_id'], batch_size=100)
# bulk-create
current_time = now()
HostMetric.objects.bulk_create(
[HostMetric(hostname=hostname, last_automation=current_time) for hostname in updated_hosts_list], ignore_conflicts=True, batch_size=100
# Create/update Host Metrics
self._update_host_metrics(updated_hosts_list)
@staticmethod
def _update_host_metrics(updated_hosts_list):
from awx.main.models import HostMetric # circular import
# bulk-create
current_time = now()
HostMetric.objects.bulk_create(
[HostMetric(hostname=hostname, last_automation=current_time) for hostname in updated_hosts_list], ignore_conflicts=True, batch_size=100
)
# bulk-update
batch_start, batch_size = 0, 1000
while batch_start <= len(updated_hosts_list):
batched_host_list = updated_hosts_list[batch_start : (batch_start + batch_size)]
HostMetric.objects.filter(hostname__in=batched_host_list).update(
last_automation=current_time, automated_counter=models.F('automated_counter') + 1, deleted=False
)
HostMetric.objects.filter(hostname__in=updated_hosts_list).update(last_automation=current_time)
batch_start += batch_size
@property
def job_verbosity(self):


@@ -17,15 +17,20 @@ from django.db.models import Sum
import redis
from solo.models import SingletonModel
# AWX
from awx import __version__ as awx_application_version
from awx.api.versioning import reverse
from awx.main.fields import JSONBlob
from awx.main.fields import JSONBlob, ImplicitRoleField
from awx.main.managers import InstanceManager, UUID_DEFAULT
from awx.main.constants import JOB_FOLDER_PREFIX
from awx.main.models.base import BaseModel, HasEditsMixin, prevent_search
from awx.main.models.rbac import (
ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
ROLE_SINGLETON_SYSTEM_AUDITOR,
)
from awx.main.models.unified_jobs import UnifiedJob
from awx.main.utils.common import get_corrected_cpu, get_cpu_effective_capacity, get_corrected_memory, get_mem_effective_capacity
from awx.main.models.mixins import RelatedJobsMixin
from awx.main.models.mixins import RelatedJobsMixin, ResourceMixin
# ansible-runner
from ansible_runner.utils.capacity import get_cpu_count, get_mem_in_bytes
@@ -352,7 +357,7 @@ class Instance(HasPolicyEditsMixin, BaseModel):
self.save_health_data(awx_application_version, get_cpu_count(), get_mem_in_bytes(), update_last_seen=True, errors=errors)
class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin, ResourceMixin):
"""A model representing a Queue/Group of AWX Instances."""
name = models.CharField(max_length=250, unique=True)
@@ -379,6 +384,24 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
default='',
)
)
admin_role = ImplicitRoleField(
parent_role=[
'singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
]
)
use_role = ImplicitRoleField(
parent_role=[
'admin_role',
]
)
read_role = ImplicitRoleField(
parent_role=[
'singleton:' + ROLE_SINGLETON_SYSTEM_AUDITOR,
'use_role',
'admin_role',
]
)
max_concurrent_jobs = models.IntegerField(default=0, help_text=_("Maximum number of concurrent jobs to run on this group. Zero means no limit."))
max_forks = models.IntegerField(default=0, help_text=_("Max forks to execute on this group. Zero means no limit."))
policy_instance_percentage = models.IntegerField(default=0, help_text=_("Percentage of Instances to automatically assign to this group"))
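
For context, a hedged sketch (not part of the diff) of how the new instance group roles can be granted, mirroring the pattern the org-admin migration earlier in this changeset uses; the user and group names are hypothetical.

from django.contrib.auth.models import User
from awx.main.models import InstanceGroup

alice = User.objects.get(username='alice')
ig = InstanceGroup.objects.get(name='default')
ig.use_role.members.add(alice)     # alice may now assign work to this instance group
ig.admin_role.members.add(alice)   # admin_role implies use_role and read_role via the parents above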


@@ -9,6 +9,8 @@ import re
import copy
import os.path
from urllib.parse import urljoin
import dateutil.relativedelta
import yaml
# Django
@@ -17,6 +19,7 @@ from django.db import models, connection
from django.utils.translation import gettext_lazy as _
from django.db import transaction
from django.core.exceptions import ValidationError
from django.urls import resolve
from django.utils.timezone import now
from django.db.models import Q
@@ -32,7 +35,7 @@ from awx.main.fields import (
SmartFilterField,
OrderedManyToManyField,
)
from awx.main.managers import HostManager
from awx.main.managers import HostManager, HostMetricActiveManager
from awx.main.models.base import BaseModel, CommonModelNameNotUnique, VarsDictProperty, CLOUD_INVENTORY_SOURCES, prevent_search, accepts_json
from awx.main.models.events import InventoryUpdateEvent, UnpartitionedInventoryUpdateEvent
from awx.main.models.unified_jobs import UnifiedJob, UnifiedJobTemplate
@@ -53,11 +56,21 @@ from awx.main.utils.execution_environments import to_container_path, get_control
from awx.main.utils.licensing import server_product_name
__all__ = ['Inventory', 'Host', 'Group', 'InventorySource', 'InventoryUpdate', 'SmartInventoryMembership']
__all__ = ['Inventory', 'Host', 'Group', 'InventorySource', 'InventoryUpdate', 'SmartInventoryMembership', 'HostMetric', 'HostMetricSummaryMonthly']
logger = logging.getLogger('awx.main.models.inventory')
class InventoryConstructedInventoryMembership(models.Model):
constructed_inventory = models.ForeignKey('Inventory', on_delete=models.CASCADE, related_name='constructed_inventory_memberships')
input_inventory = models.ForeignKey('Inventory', on_delete=models.CASCADE)
position = models.PositiveIntegerField(
null=True,
default=None,
db_index=True,
)
class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
"""
an inventory source contains lists and hosts.
@@ -140,11 +153,13 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
default=None,
help_text=_('Filter that will be applied to the hosts of this inventory.'),
)
input_inventories = models.ManyToManyField(
input_inventories = OrderedManyToManyField(
'Inventory',
blank=True,
through_fields=('constructed_inventory', 'input_inventory'),
related_name='destination_inventories',
help_text=_('Only valid for constructed inventories, this links to the inventories that will be used.'),
through='InventoryConstructedInventoryMembership',
)
instance_groups = OrderedManyToManyField(
'InstanceGroup',
@@ -194,6 +209,14 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
)
def get_absolute_url(self, request=None):
if request is not None:
# circular import
from awx.api.urls.inventory import constructed_inventory_urls
route = resolve(request.path_info)
if any(route.url_name == url.name for url in constructed_inventory_urls):
return reverse('api:constructed_inventory_detail', kwargs={'pk': self.pk}, request=request)
return reverse('api:inventory_detail', kwargs={'pk': self.pk}, request=request)
variables_dict = VarsDictProperty('variables')
@@ -345,13 +368,12 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
for host in hosts:
data['_meta']['hostvars'][host.name] = host.variables_dict
if towervars:
tower_dict = dict(
remote_tower_enabled=str(host.enabled).lower(),
remote_tower_id=host.id,
remote_host_enabled=str(host.enabled).lower(),
remote_host_id=host.id,
)
data['_meta']['hostvars'][host.name].update(tower_dict)
for prefix in ('host', 'tower'):
tower_dict = {
f'remote_{prefix}_enabled': str(host.enabled).lower(),
f'remote_{prefix}_id': host.id,
}
data['_meta']['hostvars'][host.name].update(tower_dict)
return data
@@ -445,7 +467,9 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin, RelatedJobsMixin):
"""
if self.kind == 'constructed':
if not self.inventory_sources.exists():
self.inventory_sources.create(source='constructed', name=f'Auto-created source for: {self.name}'[:512], overwrite=True, update_on_launch=True)
self.inventory_sources.create(
source='constructed', name=f'Auto-created source for: {self.name}'[:512], overwrite=True, overwrite_vars=True, update_on_launch=True
)
def save(self, *args, **kwargs):
self._update_host_smart_inventory_memeberships()
@@ -837,9 +861,64 @@ class Group(CommonModelNameNotUnique, RelatedJobsMixin):
class HostMetric(models.Model):
hostname = models.CharField(primary_key=True, max_length=512)
hostname = models.CharField(unique=True, max_length=512)
first_automation = models.DateTimeField(auto_now_add=True, null=False, db_index=True, help_text=_('When the host was first automated against'))
last_automation = models.DateTimeField(db_index=True, help_text=_('When the host was last automated against'))
last_deleted = models.DateTimeField(null=True, db_index=True, help_text=_('When the host was last deleted'))
automated_counter = models.BigIntegerField(default=0, help_text=_('How many times was the host automated'))
deleted_counter = models.IntegerField(default=0, help_text=_('How many times was the host deleted'))
deleted = models.BooleanField(
default=False, help_text=_('Boolean flag saying whether the host is deleted and therefore not counted into the subscription consumption')
)
used_in_inventories = models.IntegerField(null=True, help_text=_('How many inventories contain this host'))
objects = models.Manager()
active_objects = HostMetricActiveManager()
def get_absolute_url(self, request=None):
return reverse('api:host_metric_detail', kwargs={'pk': self.pk}, request=request)
def soft_delete(self):
if not self.deleted:
self.deleted_counter = (self.deleted_counter or 0) + 1
self.last_deleted = now()
self.deleted = True
self.save(update_fields=['deleted', 'deleted_counter', 'last_deleted'])
def soft_restore(self):
if self.deleted:
self.deleted = False
self.save(update_fields=['deleted'])
@classmethod
def cleanup_task(cls, months_ago):
try:
months_ago = int(months_ago)
if months_ago <= 0:
raise ValueError()
last_automation_before = now() - dateutil.relativedelta.relativedelta(months=months_ago)
logger.info(f'Cleanup [HostMetric]: soft-deleting records last automated before {last_automation_before}')
HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=models.F('deleted_counter') + 1, last_deleted=now()
)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
except (TypeError, ValueError):
logger.error(f"Cleanup [HostMetric]: months_ago({months_ago}) has to be a positive integer value")
class HostMetricSummaryMonthly(models.Model):
"""
HostMetric summaries computed by scheduled task <TODO> monthly
"""
date = models.DateField(unique=True)
license_consumed = models.BigIntegerField(default=0, help_text=_("How many unique hosts are consumed from the license"))
license_capacity = models.BigIntegerField(default=0, help_text=_("License capacity as max. number of unique hosts"))
hosts_added = models.IntegerField(default=0, help_text=_("How many hosts were added in the associated month, consuming more license capacity"))
hosts_deleted = models.IntegerField(default=0, help_text=_("How many hosts were deleted in the associated month, freeing the license capacity"))
indirectly_managed_hosts = models.IntegerField(default=0, help_text=_("Manually entered number of indirectly managed hosts for a certain month"))
class InventorySourceOptions(BaseModel):
@@ -890,6 +969,12 @@ class InventorySourceOptions(BaseModel):
default='',
help_text=_('Inventory source variables in YAML or JSON format.'),
)
scm_branch = models.CharField(
max_length=1024,
default='',
blank=True,
help_text=_('Inventory source SCM branch. Project default used if blank. Only allowed if project allow_override field is set to true.'),
)
enabled_var = models.TextField(
blank=True,
default='',
@@ -1588,8 +1673,9 @@ class constructed(PluginFileInjector):
def build_env(self, *args, **kwargs):
env = super().build_env(*args, **kwargs)
# Enable all types of inventory plugins so we pick up the script files from source inventories
del env['ANSIBLE_INVENTORY_ENABLED']
# Enable script inventory plugin so we pick up the script files from source inventories
env['ANSIBLE_INVENTORY_ENABLED'] += ',script'
env['ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED'] = 'True'
return env
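
For context, a hedged sketch (not part of the diff) of the soft-delete lifecycle added to HostMetric earlier in this file's diff; the hostname is hypothetical.

from awx.main.models import HostMetric

metric = HostMetric.objects.get(hostname='db01.example.com')
metric.soft_delete()       # deleted=True, deleted_counter incremented, last_deleted stamped
metric.soft_restore()      # deleted=False, so the host counts toward subscription consumption again
HostMetric.cleanup_task(months_ago=12)   # bulk soft-delete of hosts not automated for a year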


@@ -2,12 +2,8 @@
# All Rights Reserved.
# Python
import codecs
import datetime
import logging
import os
import time
import json
from urllib.parse import urljoin
@@ -15,11 +11,9 @@ from urllib.parse import urljoin
from django.conf import settings
from django.core.exceptions import ValidationError
from django.db import models
from django.db.models.query import QuerySet
from django.db.models.functions import Cast
# from django.core.cache import cache
from django.utils.encoding import smart_str
from django.utils.timezone import now
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import FieldDoesNotExist
@@ -28,6 +22,7 @@ from rest_framework.exceptions import ParseError
# AWX
from awx.api.versioning import reverse
from awx.main.constants import HOST_FACTS_FIELDS
from awx.main.models.base import (
BaseModel,
CreatedModifiedModel,
@@ -44,7 +39,7 @@ from awx.main.models.notifications import (
NotificationTemplate,
JobNotificationMixin,
)
from awx.main.utils import parse_yaml_or_json, getattr_dne, NullablePromptPseudoField, polymorphic, log_excess_runtime
from awx.main.utils import parse_yaml_or_json, getattr_dne, NullablePromptPseudoField, polymorphic
from awx.main.fields import ImplicitRoleField, AskForField, JSONBlob, OrderedManyToManyField
from awx.main.models.mixins import (
ResourceMixin,
@@ -60,8 +55,6 @@ from awx.main.constants import JOB_VARIABLE_PREFIXES
logger = logging.getLogger('awx.main.models.jobs')
analytics_logger = logging.getLogger('awx.analytics.job_events')
system_tracking_logger = logging.getLogger('awx.analytics.system_tracking')
__all__ = ['JobTemplate', 'JobLaunchConfig', 'Job', 'JobHostSummary', 'SystemJobTemplate', 'SystemJob']
@@ -578,12 +571,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
default=None,
on_delete=models.SET_NULL,
)
hosts = models.ManyToManyField(
'Host',
related_name='jobs',
editable=False,
through='JobHostSummary',
)
hosts = models.ManyToManyField('Host', related_name='jobs', editable=False, through='JobHostSummary', through_fields=('job', 'host'))
artifacts = JSONBlob(
default=dict,
blank=True,
@@ -831,6 +819,9 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
for name in JOB_VARIABLE_PREFIXES:
r['{}_job_template_id'.format(name)] = self.job_template.pk
r['{}_job_template_name'.format(name)] = self.job_template.name
if self.execution_node:
for name in JOB_VARIABLE_PREFIXES:
r['{}_execution_node'.format(name)] = self.execution_node
return r
'''
@@ -845,109 +836,26 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
def get_notification_friendly_name(self):
return "Job"
def _get_inventory_hosts(self, only=('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id'), **filters):
"""Return value is an iterable for the relevant hosts for this job"""
if not self.inventory:
return []
host_queryset = self.inventory.hosts.only(*only)
if filters:
host_queryset = host_queryset.filter(**filters)
host_queryset = self.inventory.get_sliced_hosts(host_queryset, self.job_slice_number, self.job_slice_count)
if isinstance(host_queryset, QuerySet):
return host_queryset.iterator()
return host_queryset
def get_hosts_for_fact_cache(self):
"""
Builds the queryset to use for writing or finalizing the fact cache
these need to be the 'real' hosts associated with the job.
For constructed inventories, that means the original (input inventory) hosts
when slicing, that means only returning hosts in that slice
"""
Host = JobHostSummary._meta.get_field('host').related_model
if not self.inventory_id:
return Host.objects.none()
@log_excess_runtime(logger, debug_cutoff=0.01, msg='Job {job_id} host facts prepared for {written_ct} hosts, took {delta:.3f} s', add_log_data=True)
def start_job_fact_cache(self, destination, log_data, timeout=None):
self.log_lifecycle("start_job_fact_cache")
log_data['job_id'] = self.id
log_data['written_ct'] = 0
os.makedirs(destination, mode=0o700)
if timeout is None:
timeout = settings.ANSIBLE_FACT_CACHE_TIMEOUT
if timeout > 0:
# exclude hosts with fact data older than `settings.ANSIBLE_FACT_CACHE_TIMEOUT seconds`
timeout = now() - datetime.timedelta(seconds=timeout)
hosts = self._get_inventory_hosts(ansible_facts_modified__gte=timeout)
if self.inventory.kind == 'constructed':
id_field = Host._meta.get_field('id')
host_qs = Host.objects.filter(id__in=self.inventory.hosts.exclude(instance_id='').values_list(Cast('instance_id', output_field=id_field)))
else:
hosts = self._get_inventory_hosts()
host_qs = self.inventory.hosts
last_filepath_written = None
for host in hosts:
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
try:
with codecs.open(filepath, 'w', encoding='utf-8') as f:
os.chmod(f.name, 0o600)
json.dump(host.ansible_facts, f)
log_data['written_ct'] += 1
last_filepath_written = filepath
except IOError:
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
# make note of the time we wrote the last file so we can check if any file changed later
if last_filepath_written:
return os.path.getmtime(last_filepath_written)
return None
@log_excess_runtime(
logger,
debug_cutoff=0.01,
msg='Job {job_id} host facts: updated {updated_ct}, cleared {cleared_ct}, unchanged {unmodified_ct}, took {delta:.3f} s',
add_log_data=True,
)
def finish_job_fact_cache(self, destination, facts_write_time, log_data):
self.log_lifecycle("finish_job_fact_cache")
log_data['job_id'] = self.id
log_data['updated_ct'] = 0
log_data['unmodified_ct'] = 0
log_data['cleared_ct'] = 0
hosts_to_update = []
for host in self._get_inventory_hosts():
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
if os.path.exists(filepath):
# If the file changed since we wrote the last facts file, pre-playbook run...
modified = os.path.getmtime(filepath)
if (not facts_write_time) or modified > facts_write_time:
with codecs.open(filepath, 'r', encoding='utf-8') as f:
try:
ansible_facts = json.load(f)
except ValueError:
continue
host.ansible_facts = ansible_facts
host.ansible_facts_modified = now()
hosts_to_update.append(host)
system_tracking_logger.info(
'New fact for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)),
extra=dict(
inventory_id=host.inventory.id,
host_name=host.name,
ansible_facts=host.ansible_facts,
ansible_facts_modified=host.ansible_facts_modified.isoformat(),
job_id=self.id,
),
)
log_data['updated_ct'] += 1
else:
log_data['unmodified_ct'] += 1
else:
# if the file goes missing, ansible removed it (likely via clear_facts)
host.ansible_facts = {}
host.ansible_facts_modified = now()
hosts_to_update.append(host)
system_tracking_logger.info('Facts cleared for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)))
log_data['cleared_ct'] += 1
if len(hosts_to_update) > 100:
self.inventory.hosts.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
hosts_to_update = []
if hosts_to_update:
self.inventory.hosts.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
host_qs = host_qs.only(*HOST_FACTS_FIELDS)
host_qs = self.inventory.get_sliced_hosts(host_qs, self.job_slice_number, self.job_slice_count)
return host_qs
class LaunchTimeConfigBase(BaseModel):
@@ -1169,6 +1077,15 @@ class JobHostSummary(CreatedModifiedModel):
editable=False,
)
host = models.ForeignKey('Host', related_name='job_host_summaries', null=True, default=None, on_delete=models.SET_NULL, editable=False)
constructed_host = models.ForeignKey(
'Host',
related_name='constructed_host_summaries',
null=True,
default=None,
on_delete=models.SET_NULL,
editable=False,
help_text='Only for jobs run against constructed inventories, this links to the host inside the constructed inventory.',
)
host_name = models.CharField(
max_length=1024,


@@ -14,7 +14,7 @@ from oauth2_provider.models import AbstractApplication, AbstractAccessToken
from oauth2_provider.generators import generate_client_secret
from oauthlib import oauth2
from awx.main.utils import get_external_account
from awx.sso.common import get_external_account
from awx.main.fields import OAuth2ClientSecretField


@@ -32,7 +32,7 @@ from polymorphic.models import PolymorphicModel
# AWX
from awx.main.models.base import CommonModelNameNotUnique, PasswordFieldsModel, NotificationFieldsModel, prevent_search
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control as ControlDispatcher
from awx.main.registrar import activity_stream_registrar
from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin
@@ -1567,7 +1567,7 @@ class UnifiedJob(
return r
def get_queue_name(self):
return self.controller_node or self.execution_node or get_local_queuename()
return self.controller_node or self.execution_node or get_task_queuename()
@property
def is_container_group_task(self):


@@ -650,6 +650,7 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
help_text=_("If automatically created for a sliced job run, the job template " "the workflow job was created from."),
)
is_sliced_job = models.BooleanField(default=False)
is_bulk_job = models.BooleanField(default=False)
def _set_default_dependencies_processed(self):
self.dependencies_processed = True


@@ -28,7 +28,7 @@ class AWXProtocolTypeRouter(ProtocolTypeRouter):
websocket_urlpatterns = [
re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
re_path(r'websocket/broadcast/$', consumers.BroadcastConsumer.as_asgi()),
re_path(r'websocket/relay/$', consumers.RelayConsumer.as_asgi()),
]
application = AWXProtocolTypeRouter(


@@ -8,7 +8,7 @@ from django.conf import settings
from awx import MODE
from awx.main.scheduler import TaskManager, DependencyManager, WorkflowManager
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
logger = logging.getLogger('awx.main.scheduler')
@@ -20,16 +20,16 @@ def run_manager(manager, prefix):
manager().schedule()
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
def task_manager():
run_manager(TaskManager, "task")
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
def dependency_manager():
run_manager(DependencyManager, "dependency")
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
def workflow_manager():
run_manager(WorkflowManager, "workflow")


@@ -85,6 +85,8 @@ class RunnerCallback:
# which generate job events from two 'streams':
# ansible-inventory and the awx.main.commands.inventory_import
# logger
if event_data.get('event') == 'keepalive':
return
if event_data.get(self.event_data_key, None):
if self.event_data_key != 'job_id':
@@ -116,7 +118,7 @@ class RunnerCallback:
# so it *should* have a negligible performance impact
task = event_data.get('event_data', {}).get('task_action')
try:
if task in ('git', 'svn'):
if task in ('git', 'svn', 'ansible.builtin.git', 'ansible.builtin.svn'):
event_data_json = json.dumps(event_data)
event_data_json = UriCleaner.remove_sensitive(event_data_json)
event_data = json.loads(event_data_json)
@@ -219,7 +221,7 @@ class RunnerCallbackForProjectUpdate(RunnerCallback):
def event_handler(self, event_data):
super_return_value = super(RunnerCallbackForProjectUpdate, self).event_handler(event_data)
returned_data = event_data.get('event_data', {})
if returned_data.get('task_action', '') == 'set_fact':
if returned_data.get('task_action', '') in ('set_fact', 'ansible.builtin.set_fact'):
returned_facts = returned_data.get('res', {}).get('ansible_facts', {})
if 'scm_version' in returned_facts:
self.playbook_new_revision = returned_facts['scm_version']

awx/main/tasks/facts.py (new file, 117 lines)

@@ -0,0 +1,117 @@
import codecs
import datetime
import os
import json
import logging
# Django
from django.conf import settings
from django.db.models.query import QuerySet
from django.utils.encoding import smart_str
from django.utils.timezone import now
# AWX
from awx.main.utils.common import log_excess_runtime
from awx.main.models.inventory import Host
logger = logging.getLogger('awx.main.tasks.facts')
system_tracking_logger = logging.getLogger('awx.analytics.system_tracking')
@log_excess_runtime(logger, debug_cutoff=0.01, msg='Inventory {inventory_id} host facts prepared for {written_ct} hosts, took {delta:.3f} s', add_log_data=True)
def start_fact_cache(hosts, destination, log_data, timeout=None, inventory_id=None):
log_data['inventory_id'] = inventory_id
log_data['written_ct'] = 0
try:
os.makedirs(destination, mode=0o700)
except FileExistsError:
pass
if timeout is None:
timeout = settings.ANSIBLE_FACT_CACHE_TIMEOUT
if isinstance(hosts, QuerySet):
hosts = hosts.iterator()
last_filepath_written = None
for host in hosts:
if (not host.ansible_facts_modified) or (timeout and host.ansible_facts_modified < now() - datetime.timedelta(seconds=timeout)):
continue # facts are expired - do not write them
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
try:
with codecs.open(filepath, 'w', encoding='utf-8') as f:
os.chmod(f.name, 0o600)
json.dump(host.ansible_facts, f)
log_data['written_ct'] += 1
last_filepath_written = filepath
except IOError:
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
# make note of the time we wrote the last file so we can check if any file changed later
if last_filepath_written:
return os.path.getmtime(last_filepath_written)
return None
@log_excess_runtime(
logger,
debug_cutoff=0.01,
msg='Inventory {inventory_id} host facts: updated {updated_ct}, cleared {cleared_ct}, unchanged {unmodified_ct}, took {delta:.3f} s',
add_log_data=True,
)
def finish_fact_cache(hosts, destination, facts_write_time, log_data, job_id=None, inventory_id=None):
log_data['inventory_id'] = inventory_id
log_data['updated_ct'] = 0
log_data['unmodified_ct'] = 0
log_data['cleared_ct'] = 0
if isinstance(hosts, QuerySet):
hosts = hosts.iterator()
hosts_to_update = []
for host in hosts:
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
if os.path.exists(filepath):
# If the file changed since we wrote the last facts file, pre-playbook run...
modified = os.path.getmtime(filepath)
if (not facts_write_time) or modified > facts_write_time:
with codecs.open(filepath, 'r', encoding='utf-8') as f:
try:
ansible_facts = json.load(f)
except ValueError:
continue
host.ansible_facts = ansible_facts
host.ansible_facts_modified = now()
hosts_to_update.append(host)
system_tracking_logger.info(
'New fact for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)),
extra=dict(
inventory_id=host.inventory.id,
host_name=host.name,
ansible_facts=host.ansible_facts,
ansible_facts_modified=host.ansible_facts_modified.isoformat(),
job_id=job_id,
),
)
log_data['updated_ct'] += 1
else:
log_data['unmodified_ct'] += 1
else:
# if the file goes missing, ansible removed it (likely via clear_facts)
host.ansible_facts = {}
host.ansible_facts_modified = now()
hosts_to_update.append(host)
system_tracking_logger.info('Facts cleared for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)))
log_data['cleared_ct'] += 1
if len(hosts_to_update) > 100:
Host.objects.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
hosts_to_update = []
if hosts_to_update:
Host.objects.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
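
For context, a hedged sketch (not part of the diff) of how the two helpers above pair up around a playbook run, mirroring the RunJob calls later in this changeset; private_data_dir and job are assumed to be in scope.

import os
from awx.main.tasks.facts import start_fact_cache, finish_fact_cache

destination = os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache')
write_time = start_fact_cache(job.get_hosts_for_fact_cache(), destination, inventory_id=job.inventory_id)
# ... ansible-playbook runs with fact_cache_type='jsonfile' pointed at destination ...
finish_fact_cache(job.get_hosts_for_fact_cache(), destination, facts_write_time=write_time, job_id=job.id, inventory_id=job.inventory_id)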


@@ -29,7 +29,7 @@ from gitdb.exc import BadName as BadGitName
# AWX
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
from awx.main.dispatch import get_task_queuename
from awx.main.constants import (
PRIVILEGE_ESCALATION_METHODS,
STANDARD_INVENTORY_UPDATE_ENV,
@@ -37,6 +37,7 @@ from awx.main.constants import (
MAX_ISOLATED_PATH_COLON_DELIMITER,
CONTAINER_VOLUMES_MOUNT_TYPES,
ACTIVE_STATES,
HOST_FACTS_FIELDS,
)
from awx.main.models import (
Instance,
@@ -63,6 +64,7 @@ from awx.main.tasks.callback import (
)
from awx.main.tasks.signals import with_signal_handling, signal_callback
from awx.main.tasks.receptor import AWXReceptorJob
from awx.main.tasks.facts import start_fact_cache, finish_fact_cache
from awx.main.exceptions import AwxTaskError, PostRunError, ReceptorNodeNotFound
from awx.main.utils.ansible import read_ansible_config
from awx.main.utils.execution_environments import CONTAINER_ROOT, to_container_path
@@ -311,7 +313,7 @@ class BaseTask(object):
env['AWX_PRIVATE_DATA_DIR'] = private_data_dir
if self.instance.execution_environment is None:
raise RuntimeError('The project could not sync because there is no Execution Environment.')
raise RuntimeError(f'The {self.model.__name__} could not run because there is no Execution Environment.')
return env
@@ -455,6 +457,9 @@ class BaseTask(object):
instance.ansible_version = ansible_version_info
instance.save(update_fields=['ansible_version'])
def should_use_fact_cache(self):
return False
@with_path_cleanup
@with_signal_handling
def run(self, pk, **kwargs):
@@ -553,7 +558,8 @@ class BaseTask(object):
params['module'] = self.build_module_name(self.instance)
params['module_args'] = self.build_module_args(self.instance)
if getattr(self.instance, 'use_fact_cache', False):
# TODO: refactor into a better BaseTask method
if self.should_use_fact_cache():
# Enable Ansible fact cache.
params['fact_cache_type'] = 'jsonfile'
else:
@@ -764,7 +770,7 @@ class SourceControlMixin(BaseTask):
def sync_and_copy(self, project, private_data_dir, scm_branch=None):
self.acquire_lock(project, self.instance.id)
is_commit = False
try:
original_branch = None
failed_reason = project.get_reason_if_failed()
@@ -776,6 +782,7 @@ class SourceControlMixin(BaseTask):
if os.path.exists(project_path):
git_repo = git.Repo(project_path)
if git_repo.head.is_detached:
is_commit = True
original_branch = git_repo.head.commit
else:
original_branch = git_repo.active_branch
@@ -787,7 +794,11 @@ class SourceControlMixin(BaseTask):
# for git project syncs, non-default branches can be problems
# restore to branch the repo was on before this run
try:
original_branch.checkout()
if is_commit:
git_repo.head.set_commit(original_branch)
git_repo.head.reset(index=True, working_tree=True)
else:
original_branch.checkout()
except Exception:
# this could have failed due to dirty tree, but difficult to predict all cases
logger.exception(f'Failed to restore project repo to prior state after {self.instance.id}')
@@ -795,7 +806,7 @@ class SourceControlMixin(BaseTask):
self.release_lock(project)
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
class RunJob(SourceControlMixin, BaseTask):
"""
Run a job using ansible-playbook.
@@ -1003,6 +1014,9 @@ class RunJob(SourceControlMixin, BaseTask):
return args
def should_use_fact_cache(self):
return self.instance.use_fact_cache
def build_playbook_path_relative_to_cwd(self, job, private_data_dir):
return job.playbook
@@ -1068,8 +1082,11 @@ class RunJob(SourceControlMixin, BaseTask):
# Fetch "cached" fact data from prior runs and put on the disk
# where ansible expects to find it
if job.use_fact_cache:
self.facts_write_time = self.instance.start_job_fact_cache(os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'))
if self.should_use_fact_cache():
job.log_lifecycle("start_job_fact_cache")
self.facts_write_time = start_fact_cache(
job.get_hosts_for_fact_cache(), os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'), inventory_id=job.inventory_id
)
def build_project_dir(self, job, private_data_dir):
self.sync_and_copy(job.project, private_data_dir, scm_branch=job.scm_branch)
@@ -1083,10 +1100,14 @@ class RunJob(SourceControlMixin, BaseTask):
# actual `run()` call; this _usually_ means something failed in
# the pre_run_hook method
return
if job.use_fact_cache:
job.finish_job_fact_cache(
if self.should_use_fact_cache():
job.log_lifecycle("finish_job_fact_cache")
finish_fact_cache(
job.get_hosts_for_fact_cache(),
os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'),
self.facts_write_time,
facts_write_time=self.facts_write_time,
job_id=job.id,
inventory_id=job.inventory_id,
)
def final_run_hook(self, job, status, private_data_dir):
@@ -1100,7 +1121,7 @@ class RunJob(SourceControlMixin, BaseTask):
update_inventory_computed_fields.delay(inventory.id)
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
class RunProjectUpdate(BaseTask):
model = ProjectUpdate
event_model = ProjectUpdateEvent
@@ -1422,7 +1443,7 @@ class RunProjectUpdate(BaseTask):
return params
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
class RunInventoryUpdate(SourceControlMixin, BaseTask):
model = InventoryUpdate
event_model = InventoryUpdateEvent
@@ -1524,11 +1545,18 @@ class RunInventoryUpdate(SourceControlMixin, BaseTask):
# special case for constructed inventories, we pass source inventories from database
# these must come in order, and in order _before_ the constructed inventory itself
if inventory_update.inventory.kind == 'constructed':
inventory_update.log_lifecycle("start_job_fact_cache")
for input_inventory in inventory_update.inventory.input_inventories.all():
args.append('-i')
script_params = dict(hostvars=True, towervars=True)
source_inv_path = self.write_inventory_file(input_inventory, private_data_dir, f'hosts_{input_inventory.id}', script_params)
args.append(to_container_path(source_inv_path, private_data_dir))
# Include any facts from input inventories so they can be used in filters
start_fact_cache(
input_inventory.hosts.only(*HOST_FACTS_FIELDS),
os.path.join(private_data_dir, 'artifacts', str(inventory_update.id), 'fact_cache'),
inventory_id=input_inventory.id,
)
# Add arguments for the source inventory file/script/thing
rel_path = self.pseudo_build_inventory(inventory_update, private_data_dir)
@@ -1557,6 +1585,9 @@ class RunInventoryUpdate(SourceControlMixin, BaseTask):
return args
def should_use_fact_cache(self):
return bool(self.instance.source == 'constructed')
def build_inventory(self, inventory_update, private_data_dir):
return None # what runner expects in order to not deal with inventory
@@ -1598,7 +1629,7 @@ class RunInventoryUpdate(SourceControlMixin, BaseTask):
if inventory_update.source == 'scm':
if not source_project:
raise RuntimeError('Could not find project to run SCM inventory update from.')
self.sync_and_copy(source_project, private_data_dir)
self.sync_and_copy(source_project, private_data_dir, scm_branch=inventory_update.inventory_source.scm_branch)
else:
# If source is not SCM make an empty project directory, content is built inside inventory folder
super(RunInventoryUpdate, self).build_project_dir(inventory_update, private_data_dir)
@@ -1675,7 +1706,7 @@ class RunInventoryUpdate(SourceControlMixin, BaseTask):
raise PostRunError('Error occured while saving inventory data, see traceback or server logs', status='error', tb=traceback.format_exc())
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
class RunAdHocCommand(BaseTask):
"""
Run an ad hoc command using ansible.
@@ -1828,7 +1859,7 @@ class RunAdHocCommand(BaseTask):
return d
@task(queue=get_local_queuename)
@task(queue=get_task_queuename)
class RunSystemJob(BaseTask):
model = SystemJob
event_model = SystemJobEvent

Some files were not shown because too many files have changed in this diff.