Compare commits

...

95 Commits

Author SHA1 Message Date
John Westcott IV
ea9c52aca6 Merge pull request #13461 from john-westcott-iv/no_galaxy_if_published
Two changes to GitHub promote action
2023-01-23 16:02:03 -05:00
John Westcott IV
a7ebce1fef Update .github/workflows/promote.yml
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-01-23 15:43:44 -05:00
John Westcott IV
5de9cf748d Two changes to promote action
Perform a git reset --hard before attempting to release awxkit to PyPI.
We found that something new in the process was causing unexpected behavior if the git tree had any changes in it.
It would cause a devel version to be created and used as part of the upload, which PyPI was rejecting.

Collections cannot easily be deleted from Galaxy, so if we have to rerun a job because of a PyPI or Quay failure we don't want to try to upload the collection again.
2023-01-23 15:37:02 -05:00
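A minimal Python sketch of the idempotency check described above (the promote workflow itself uses curl, as the diff further down shows; the helper name and example version here are illustrative). Galaxy answers the download URL of an already-published artifact with a 302 redirect, so a HEAD request tells us whether the publish step can be skipped.

import requests

def galaxy_release_exists(namespace: str, version: str) -> bool:
    # Galaxy serves published artifacts via a 302 redirect on the download URL;
    # any other status is treated as "not yet published".
    url = f"https://galaxy.ansible.com/download/{namespace}-awx-{version}.tar.gz"
    resp = requests.head(url, allow_redirects=False, timeout=30)
    return resp.status_code == 302

if galaxy_release_exists("awx", "21.11.0"):  # illustrative version
    print("Galaxy release already done")
else:
    print("publishing collection ...")  # i.e. run ansible-galaxy collection publish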
Jake Jackson
ebea78943d Deprecate tower modules (#13210)
* first deprecation pass, need to confirm date or version

* remove doc block updates as not needed, update runtime and remove symlinks

* add line to readme as notable release

* update version before release
2023-01-23 13:44:26 -05:00
Seth Foster
1e33bc4020 Merge pull request #13338 from fosterseth/tag_awx_ee_on_release
tag awx-ee latest on awx release
2023-01-20 12:44:52 -05:00
Divided by Zer0
e9ad01e806 Handles workflow node schema inventory (#12721)
Verified by QE. Merging it.
2023-01-19 18:25:19 -03:00
Alan Rominger
8a4059d266 Workaround for events with NUL char, touch up error loop (#13398)
* Workaround for events with NUL char, touch up error loop

This fixes an error where some events would not save
  due to containing the 0x00 character, which errors in Postgres;
  this adds a line to replace it with empty text

Hitting that kind of event put us in an infinite error loop,
  so this makes a number of changes to prevent similar loops;
  the showcase example is a negative counter,
  which is not realistic in the real world but works for unit tests

These error loop fixes seek to establish the cases where we clear the buffer
Some logic is removed from the outer loop, with the idea that
ensure_connection will better distinguish database flake

* From review comments, delay NUL char sanitization to later

Use pop to make list operations more clear

* Fix incorrect use of pop
2023-01-19 13:36:23 -05:00
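A self-contained sketch of the sanitization step this commit describes (helper name is illustrative; per the review note above, the real code applies the replacement lazily, on the first retry of an individual save):

def sanitize_event_stdout(stdout: str) -> str:
    # PostgreSQL text columns reject the NUL (0x00) character; left in place,
    # the INSERT fails and the event would be retried forever.
    return stdout.replace("\x00", "")

assert sanitize_event_stdout("task output\x00tail") == "task outputtail"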
Seth Foster
01a7076267 Merge pull request #13433 from kwevers/bugfix/hashicorp-vault-retries
Retry HashiCorp Vault requests on HTTP 412
2023-01-18 16:00:40 -05:00
Seth Foster
32b6aec66b Merge pull request #13444 from codygula/devel
Update to include pip install command and PyPI link. related #13179
2023-01-18 15:51:28 -05:00
John Westcott IV
884ab424d5 Merge pull request #12832 from no-12/allow_metrics_for_anonymous_users
Allow metrics collection for anonymous users via settings
2023-01-18 09:46:35 -05:00
Cody Gula
7e55305c45 Update to include pip install command and PyPI link
Signed-off-by: Cody Gula <cgula7@gmail.com>
2023-01-17 19:04:57 -08:00
Sarah Akus
e9a1582b70 Merge pull request #13262 from AlexSCorey/12429-PrepopulateResources
Prepopulates job template form with related resource
2023-01-17 17:43:02 -05:00
Alex Corey
51ef1e808d Prepopulates job template form with related resource 2023-01-17 13:10:07 -05:00
Lila Yasin
11fbfc2063 added fix for preserve existing children issue. (#13374)
* added fix for preserve existing children issue.

* Modified line 131 to call the actual param name.

* Removed line 132 after updating.
2023-01-16 11:36:07 -03:00
Kristof Wevers
f6395c69dd Retry HashiCorp Vault requests on HTTP 412
HC Vault clusters use eventual consistency and might return an HTTP 412
if the secret ID hasn't yet replicated to the replica / standby nodes.
If this happens, the request should be retried.

related #13413

Signed-off-by: Kristof Wevers <kristof.wevers@infura.eu>
2023-01-16 13:29:33 +01:00
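A sketch of the retry loop, assuming a requests.Session and mirroring the bounded-retry shape of the diff further down (the helper name and the 5-try/1-second values are taken from that diff, not from any Vault client API):

import time
import requests

def vault_get_with_retries(sess: requests.Session, url: str, retries: int = 5, **kwargs):
    # Standby/replica nodes may answer HTTP 412 until the secret ID replicates,
    # so back off briefly and retry instead of failing the credential lookup.
    for _ in range(retries):
        response = sess.get(url, **kwargs)
        if response.status_code != 412:
            break
        time.sleep(1)
    response.raise_for_status()  # a final 412 still surfaces as an error
    return response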
kialam
ca07bc85cb Merge pull request #13367 from kialam/fix-13290-instance-404
Conditionally query /health_check endpoint for execution node only.
2023-01-12 13:20:35 -08:00
Seth Foster
b87dd6dc56 tag awx-ee latest with awx release 2023-01-11 17:21:51 -05:00
Seth Foster
f8d46d5e71 Merge pull request #13351 from jangel97/project_lokfile_timeout
add logging for the situation in which the project lock file is locked
2023-01-10 20:58:53 -05:00
Jose Angel Morena
ce0a456ecc add log message if unable to open lockfile
Signed-off-by: Jose Angel Morena <jmorenas@redhat.com>
2023-01-10 21:51:23 +01:00
Nico Ohnezat
5775ff1422 make help text of ALLOW_METRICS_FOR_ANONYMOUS_USERS more clear 2023-01-10 09:32:25 +01:00
Nico Ohnezat
82e8bcd2bb related #6753 allow metrics for anonymous users
Signed-off-by: Nico Ohnezat <nico@no-12.net>
2023-01-10 09:32:25 +01:00
John Westcott IV
d73cc501d5 Merge pull request #13342 from john-westcott-iv/reconcile_fix
Fixing bug in LDAP reconcile loop
2023-01-09 14:20:49 -05:00
John Westcott IV
7e40a4daed Refactoring code 2023-01-09 10:31:15 -05:00
John Westcott IV
47e824dd11 Fixing LDAP reconcile loop 2023-01-09 10:31:15 -05:00
Sarah Akus
4643b816fe Merge pull request #13075 from keithjgrant/13059-running-job-output-gap
Fix gap between API-loaded job events and WS-streamed events
2023-01-05 13:46:10 -05:00
Seth Foster
79d9329cfa Merge pull request #13403 from fosterseth/fix_console_colors
Fix console color logs
2023-01-05 13:34:13 -05:00
Seth Foster
6492c03965 Fix console color logs 2023-01-05 12:55:20 -05:00
Michael Abashian
98107301a5 Merge pull request #13194 from mabashian/13193-related-name-exact
Adds support for exact name searching against related fields to the ui
2023-01-05 10:20:39 -05:00
Keith J. Grant
4810099158 update test 2023-01-05 09:56:37 -05:00
Michael Abashian
1aca9929ab Adds support for exact name searching against related fields to the ui 2023-01-05 09:56:37 -05:00
Sarah Akus
2aa58bc17d Merge pull request #13372 from vidyanambiar/aap-7757
Fix for Save button not responding on Job Settings page
2023-01-04 13:39:55 -05:00
Shane McDonald
b99a434dee Merge pull request #13395 from shanemcd/pin-rsyslog
Pin rsyslog to avoid crash
2023-01-04 21:54:34 +08:00
Shane McDonald
6cee99a9f9 Pin rsyslog to prevent crash
With the latest version of rsyslog we had a test failing with:

AssertionError: Response data: {'error': "b'rsyslog internal message (3,-2455): could not transfer  the  specified  internal posix  capabilities settings to the kernel, capng_apply=-5\\n [v8.2102.0-107.el9 try https://www.rsyslog.com/e/2455 ]\\n'"}

Downgrading fixes it
2023-01-04 08:19:20 -05:00
Seth Foster
ee509aea56 Merge pull request #12961 from fosterseth/fix_results_traceback
Result_traceback should not include job stdout
2023-01-03 13:34:23 -05:00
Sarah Akus
b5452a48f8 Merge pull request #13196 from keithjgrant/13189-job-traceback
Fix job error traceback in job output
2023-01-03 11:59:58 -05:00
Vidya Nambiar
68e555824d Fix for Save button not responding on Job Settings page
Signed-off-by: Vidya Nambiar <vnambiar@redhat.com>
2022-12-22 11:23:03 -05:00
Seth Foster
0c980fa7d5 Merge pull request #13366 from fosterseth/bump_receptorctl_1.3.0
bump receptorctl version to 1.3.0
2022-12-21 16:27:25 -05:00
Shane McDonald
e34ce8c795 Merge pull request #13365 from dsavineau/downgrade_hiredis
Pin hiredis to 2.0.0
2022-12-21 15:23:15 -05:00
Kia Lam
58bad6cfa9 Conditionally query /health_check endpoint for execution node only. 2022-12-21 10:44:12 -08:00
Seth Foster
3543644e0e bump receptorctl version to 1.3.0 2022-12-21 13:36:11 -05:00
Seth Foster
36c0d07b30 Result_traceback should not include job stdout
If a job fails, we fetch the receptor work results and put that output
into result_traceback.

We should only do this if
1. Receptor unit has failed
2. Runner callback processed 0 events

Otherwise we risk putting too much data into this field.
2022-12-21 13:05:44 -05:00
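A sketch of the guarded tail-read, using the receptorctl calls that appear in the diff further down (the helper name is illustrative; the 1000-byte tail matches the commit's choice):

def tail_of_work_results(receptor_ctl, unit_id: str, stdout_size: int, tail_bytes: int = 1000) -> str:
    # Only called when the receptor unit failed and zero events were processed,
    # so stdout is the only clue about the failure; starting the read near the
    # end keeps a massive stdout stream out of result_traceback.
    startpos = max(stdout_size - tail_bytes, 0)
    resultsock, resultfile = receptor_ctl.get_work_results(
        unit_id, startpos=startpos, return_socket=True, return_sockfile=True
    )
    resultsock.setblocking(False)  # makes reads on the file wrapper non-blocking
    return b"".join(resultfile.readlines()).decode()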
Keith J. Grant
03b0281fde clean up follow mode quirks 2022-12-21 09:30:35 -08:00
Keith J. Grant
6f6f04a071 refresh events when first websocket event streams 2022-12-21 09:30:35 -08:00
Dimitri Savineau
239827a9cf Pin hiredis to 2.0.0
The hiredis 2.1.0 release doesn't provide a source distribution on PyPI, so
users can't build that Python package from source.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2022-12-21 11:57:41 -05:00
Alan Rominger
ac9871b36f Merge pull request #13361 from relrod/sanity
[collection] Run sanity tests outside of our container
2022-12-21 11:00:21 -05:00
Alan Rominger
f739908ccf Add comment about Ansible-core being installed by default
Co-authored-by: John R Barker <john@johnrbarker.com>
2022-12-21 09:57:00 -05:00
Alan Rominger
cf1ec07eab Changes to run sanity tests locally
Use a Makefile arg for the ansible-test sanity CLI args
  defaults to --docker
  in the future we probably need to customize python versions

Copy the rule exception for Ansible 2.15
  this helps people who are running from Ansible devel
2022-12-21 09:53:22 -05:00
Rick Elrod
d968b648de Run sanity tests outside of our container
Also just ignore one sanity test for the export module, instead of
ignoring all of them.

Also use latest ansible-test, and make it work on GHA (by using podman
instead of docker).

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-12-20 21:40:41 -06:00
Rick Elrod
5dd0eab806 Pin channels-redis to 4.3.1 to fix an async issue (#13348)
Refs django/channels_redis#332
Refs #13313

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-12-20 17:05:44 -06:00
Alan Rominger
41f3f381ec Merge pull request #13352 from AlanCoding/dont_pass_subtasks
Remove `subtasks` keyword arg that can exceed pg_notify max message length
2022-12-20 16:25:39 -05:00
Alan Rominger
ac8cff75ce Run collection sanity tests in CI (#13356)
* Run collection sanity tests in CI

This requires adding a Makefile install of ansible-core

Fake the version to make semver check happy

* Fixes from ansible-test sanity failures

* Exclude the export module due to awxkit requirement

* Fix broken ansible-test rule exceptions

remove Ansible 2.14 exclusions that make ansible-test ERROR, saying they are not needed
2022-12-20 16:06:25 -05:00
Alan Rominger
94b34b801c Avoid unbounded kwargs by fetching subtasks inside handle_work_error
Update tests to new handle_work_error call pattern

Handle blame correctly with multiple serial deps
  add new test case corresponding to this scenario
2022-12-19 16:02:51 -05:00
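The constraint behind this change is PostgreSQL's NOTIFY payload cap (under 8000 bytes by default), which an errback carrying a long fail-chain of subtasks could exceed. A self-contained sketch of the bounded-payload pattern (names are illustrative; the real message shape appears in the task-manager diff further down):

import json

PG_NOTIFY_MAX = 8000  # default PostgreSQL NOTIFY payload limit, in bytes

def error_callback_payload(task_type: str, task_id: int) -> str:
    # Only the fixed-size identity of the failed task goes over pg_notify;
    # the unbounded subtask list is re-fetched from the database inside
    # handle_work_error via instance.get_jobs_fail_chain().
    payload = json.dumps({"task_actual": {"type": task_type, "id": task_id}})
    assert len(payload.encode()) < PG_NOTIFY_MAX  # bounded, regardless of dependency count
    return payload

print(error_callback_payload("job", 12345))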
Jeff Bradberry
8f6849fc22 Include listener_port in the defaults for Instance.objects.register (#13328) 2022-12-19 14:16:05 -03:00
Sarah Akus
821b1701bf Merge pull request #13340 from gamuniz/change_wf_scmbranch_behavior
Change workflow create/edit to null scm_branch when not provided.
2022-12-19 10:52:21 -05:00
John Westcott IV
b7f2825909 Throw a warning if custom secret key was specified but not given (#13128)
* Throw a warning if custom secret key was specified but not given

* Fixing unit tests
2022-12-17 14:15:27 -03:00
Jeff Bradberry
e87e041a2a Break up and conditionally add the RBAC checks for ActivityStream (#13279)
This should vastly improve the queries executed when accessing any of
the activity stream endpoints as a normal user, in many cases.
2022-12-16 15:11:14 -03:00
Gabe Muniz
cc336e791c fix expected test result 2022-12-16 12:30:57 -05:00
Gabe Muniz
c2a3c3b285 The current behavior of workflow job templates is to pass in an empty string as scm_branch on all saves and edits. This becomes problematic when using job templates/workflows which allow prompt on launch for scm_branch, as it may override the scm_branch set for the individual workflow nodes to an empty string. That behavior limits the usefulness of prompting for scm_branch, as it can no longer be selected while creating workflows because it will be overwritten. 2022-12-16 12:30:57 -05:00
Jeff Bradberry
7b8dcc98e7 Merge pull request #13308 from jbradberry/rebuild-org-ee-admin-roles
Ensure that the Organization.execution_environment_admin_role always gets built
2022-12-16 11:29:20 -05:00
Satoe Imaishi
d5011492bf Merge pull request #13343 from simaishi/add_pkgconfig
Add back pkgconfig for offline build
2022-12-16 08:07:38 -05:00
Satoe Imaishi
e363ddf470 Add back pkgconfig for offline build 2022-12-15 20:49:28 -05:00
Shane McDonald
987709cdb3 Merge pull request #13344 from shanemcd/fix-tox
Remove unneeded pass_env in tox config
2022-12-15 20:02:31 -05:00
Shane McDonald
f04ac3c798 Remove unneeded pass_env in tox config
I don't recall us ever using Travis so I'm not sure why this is here.

https://tox.wiki/en/latest/changelog.html#v4-0-6-2022-12-10
2022-12-15 19:44:02 -05:00
Jake Jackson
71a6baccdb Fix lookup plugins sanity (#13238)
* fix pytz

* fix NameError

* fix tests and add sanity ignore files for import test until distutils replaced

* change static method to regular method and update test to instantiate class
2022-12-15 16:40:51 -05:00
Alan Rominger
d07076b686 Merge pull request #13330 from AlanCoding/ask_me_for_tags
Fill in rest of ask_tags handling for WFJT module
2022-12-15 10:59:17 -05:00
John Westcott IV
7129f3e8cd Updating python3-saml (#13263)
Moved to a forked version to get the latest lxml, allowing other packages to update
2022-12-15 12:15:09 -03:00
Julen Landa Alustiza
df61a5cea1 Merge pull request #13126 from infamousjoeg/cyberark-ccp-branding-webserviceid
CyberArk Central Credential Provider Lookup custom Web Service ID & update branding
2022-12-15 15:54:35 +01:00
Ilija Matoski
a4b950f79b Set AWS_SESSION_TOKEN in addition to AWS_SECURITY_TOKEN (#13297)
* Set AWS_SESSION_TOKEN in addition to AWS_SECURITY_TOKEN

* added AWS_SESSION_TOKEN to inventoryupdate-1 test
2022-12-15 10:09:40 -03:00
Sarah Akus
8be739d255 Merge pull request #13306 from vidyanambiar/aap-7507
Fixes 'Not Found' error on looking up credentials
2022-12-14 16:13:55 -05:00
John Westcott IV
ca54195099 Merge pull request #13324 from mannyci/devel
Fix typo in controller_api lookup plugin
2022-12-14 15:19:53 -05:00
Alex Corey
f0fcfdde39 Merge pull request #13257 from ansible/dependabot/npm_and_yarn/awx/ui/devel/luxon-3.1.1
Bump luxon from 3.0.3 to 3.1.1 in /awx/ui
2022-12-14 09:19:47 -05:00
Alex Corey
80b1ba4a35 Merge pull request #13259 from ansible/dependabot/npm_and_yarn/awx/ui/devel/patternfly/react-core-4.264.0
Bump @patternfly/react-core from 4.250.1 to 4.264.0 in /awx/ui
2022-12-14 09:13:32 -05:00
Alan Rominger
51f8e362dc Add tags prompt to integration test 2022-12-14 09:10:15 -05:00
Sarah Akus
737d6d8c8b Merge pull request #13329 from akus062381/add-new-triage-reply
add new triage reply
2022-12-13 16:45:16 -05:00
Alan Rominger
beaf6b6058 Fill in rest of ask_tags handling for WFJT module 2022-12-13 16:38:25 -05:00
akus062381
aad1fbcef8 add new triage reply 2022-12-13 16:17:42 -05:00
Rick Elrod
0b96d617ac Fix BROADCAST_WEBSOCKET_PORT for Kube dev (#13243)
- `settings/minikube.py` gets imported conditionally, when the
  environment variable `AWX_KUBE_DEVEL` is set. In this imported file,
  we set `BROADCAST_WEBSOCKET_PORT = 8013`, but 8013 is only used in the
  docker-compose dev environment. In Kubernetes environments, 8052 is
  used for everything. This is hardcoded in awx-operator's ConfigMap.

- Also rename `minikube.py` because it is used for every kind of
  development Kube environment, including Kind.

Signed-off-by: Rick Elrod <rick@elrod.me>
2022-12-13 15:07:15 -06:00
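A sketch of the conditional settings include described above; the path, variable, and port numbers follow the commit message, while the plain env-var check shown here is illustrative rather than AWX's exact import mechanism.

import os

if os.environ.get("AWX_KUBE_DEVEL"):
    # Kubernetes dev environments serve everything on 8052, matching the port
    # hardcoded in awx-operator's ConfigMap, so the broadcast port must agree.
    BROADCAST_WEBSOCKET_PORT = 8052
else:
    # 8013 is only used in the docker-compose dev environment.
    BROADCAST_WEBSOCKET_PORT = 8013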
Alan Rominger
fe768a159b Merge pull request #13295 from AlanCoding/raw_instance_data
Remove un-editable Instance fields from pre-filled edit data in API browser
2022-12-13 15:16:34 -05:00
Alan Rominger
c1ebea858b Merge pull request #13291 from AlanCoding/policy_want_a_cracker
Add missing disassociate trigger for policy task
2022-12-13 11:35:22 -05:00
Manas Maiti
7b2938f515 fix typo 2022-12-12 18:01:15 +01:00
Jeff Bradberry
e524d3df3e Replace the role fixup post_migrate handler with a data migration 2022-12-12 10:20:56 -05:00
Vidya Nambiar
cec2d2dfb9 minor rearrangement of imports
Signed-off-by: Vidya Nambiar <vnambiar@redhat.com>
2022-12-08 10:52:20 -05:00
Jeff Bradberry
15b7ad3570 Add a post_migrate signal handler to rebuild the Org roles
particularly, the execution_environment_admin_role.
2022-12-07 15:57:20 -05:00
Vidya Nambiar
36ff9cbc6d revert change to package.json
Signed-off-by: Vidya Nambiar <vnambiar@redhat.com>
2022-12-07 15:03:40 -05:00
Vidya Nambiar
ed74d80ecb Fixes 'Not Found' error on looking up credentials
remove redundant console logs

typo

Signed-off-by: Vidya Nambiar <vnambiar@redhat.com>
2022-12-07 15:00:28 -05:00
Alan Rominger
4a7f4d0ed4 Remove uneditable Instance fields from API browser 2022-12-06 15:20:04 -05:00
Alan Rominger
6e08c3567f Add missing disassociate trigger for policy task 2022-12-06 14:43:13 -05:00
dependabot[bot]
58734a33c4 Bump @patternfly/react-core from 4.250.1 to 4.264.0 in /awx/ui
Bumps [@patternfly/react-core](https://github.com/patternfly/patternfly-react) from 4.250.1 to 4.264.0.
- [Release notes](https://github.com/patternfly/patternfly-react/releases)
- [Commits](https://github.com/patternfly/patternfly-react/compare/@patternfly/react-core@4.250.1...@patternfly/react-core@4.264.0)

---
updated-dependencies:
- dependency-name: "@patternfly/react-core"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-06 15:33:23 +00:00
dependabot[bot]
2832f28014 Bump luxon from 3.0.3 to 3.1.1 in /awx/ui
Bumps [luxon](https://github.com/moment/luxon) from 3.0.3 to 3.1.1.
- [Release notes](https://github.com/moment/luxon/releases)
- [Changelog](https://github.com/moment/luxon/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moment/luxon/compare/3.0.3...3.1.1)

---
updated-dependencies:
- dependency-name: luxon
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-06 15:30:50 +00:00
Keith J. Grant
d34f6af830 fix traceback offset/counter # in UI 2022-11-15 13:35:14 -08:00
Joe Garcia
f9bb26ad33 Merge branch 'devel' into cyberark-ccp-branding-webserviceid 2022-11-10 20:50:02 -05:00
Joe Garcia
878035c13b Fixed webservice_id check to string 2022-10-26 12:45:59 -04:00
Joe Garcia
2cc971a43f default to AIMWebService if no val provided 2022-10-26 12:41:15 -04:00
Joe Garcia
9d77c54612 Remove references to AIM everywhere 2022-10-26 12:32:12 -04:00
Joe Garcia
ef651a3a21 Add Web Service ID & update branding 2022-10-26 11:54:09 -04:00
138 changed files with 1230 additions and 475 deletions

View File

@@ -53,6 +53,16 @@ https://github.com/ansible/awx/#get-involved \
Thank you once again for this and your interest in AWX!
### Red Hat Support Team
- Hi! \
\
It appears that you are using an RPM build for RHEL. Please reach out to the Red Hat support team and submit a ticket. \
\
Here is the link to do so: \
\
https://access.redhat.com/support \
\
Thank you for your submission and for supporting AWX!
## Common

View File

@@ -145,3 +145,22 @@ jobs:
env:
AWX_TEST_IMAGE: awx
AWX_TEST_VERSION: ci
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v2
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Run sanity tests
run: make test_collection_sanity
env:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1

View File

@@ -6,7 +6,7 @@ env:
on:
pull_request_target:
types: [labeled]
jobs:
jobs:
e2e-test:
if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
runs-on: ubuntu-latest
@@ -107,5 +107,3 @@ jobs:
with:
name: AWX-logs-${{ matrix.job }}
path: make-docker-compose-output.log

View File

@@ -38,9 +38,13 @@ jobs:
- name: Build collection and publish to galaxy
run: |
COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz
if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
echo "Galaxy release already done"; \
else \
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz; \
fi
- name: Set official pypi info
run: echo pypi_repo=pypi >> $GITHUB_ENV
@@ -52,6 +56,7 @@ jobs:
- name: Build awxkit and upload to pypi
run: |
git reset --hard
cd awxkit && python3 setup.py bdist_wheel
twine upload \
-r ${{ env.pypi_repo }} \
@@ -74,4 +79,6 @@ jobs:
docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:latest
docker push quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository }}:latest
docker pull ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker tag ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }} quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
docker push quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}

View File

@@ -84,6 +84,20 @@ jobs:
-e push=yes \
-e awx_official=yes
- name: Log in to GHCR
run: |
echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Log in to Quay
run: |
echo ${{ secrets.QUAY_TOKEN }} | docker login quay.io -u ${{ secrets.QUAY_USER }} --password-stdin
- name: tag awx-ee:latest with version input
run: |
docker pull quay.io/ansible/awx-ee:latest
docker tag quay.io/ansible/awx-ee:latest ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
docker push ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
- name: Build and stage awx-operator
working-directory: awx-operator
run: |
@@ -103,6 +117,7 @@ jobs:
env:
AWX_TEST_IMAGE: ${{ github.repository }}
AWX_TEST_VERSION: ${{ github.event.inputs.version }}
AWX_EE_TEST_IMAGE: ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
- name: Create draft release for AWX
working-directory: awx

View File

@@ -6,7 +6,20 @@ CHROMIUM_BIN=/tmp/chrome-linux/chrome
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py)
COLLECTION_VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
# ansible-test requires a semver compatible version, so we allow overrides to hack it
COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
# args for the ansible-test sanity command
COLLECTION_SANITY_ARGS ?= --docker
# collection unit testing directories
COLLECTION_TEST_DIRS ?= awx_collection/test/awx
# collection integration test directories (defaults to all)
COLLECTION_TEST_TARGET ?=
# args for collection install
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
COLLECTION_TEMPLATE_VERSION ?= false
# NOTE: This defaults the container image version to the branch that's active
COMPOSE_TAG ?= $(GIT_BRANCH)
@@ -288,19 +301,13 @@ test:
cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
COLLECTION_TEST_DIRS ?= awx_collection/test/awx
COLLECTION_TEST_TARGET ?=
COLLECTION_PACKAGE ?= awx
COLLECTION_NAMESPACE ?= awx
COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
COLLECTION_TEMPLATE_VERSION ?= false
test_collection:
rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \
fi && \
pip install ansible-core && \
if ! [ -x "$(shell command -v ansible-playbook)" ]; then pip install ansible-core; fi
ansible --version
py.test $(COLLECTION_TEST_DIRS) -v
# The python path needs to be modified so that the tests can find Ansible within the container
# First we will use anything explicitly set as PYTHONPATH
@@ -330,8 +337,13 @@ install_collection: build_collection
rm -rf $(COLLECTION_INSTALL)
ansible-galaxy collection install awx_collection_build/$(COLLECTION_NAMESPACE)-$(COLLECTION_PACKAGE)-$(COLLECTION_VERSION).tar.gz
test_collection_sanity: install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity
test_collection_sanity:
rm -rf awx_collection_build/
rm -rf $(COLLECTION_INSTALL)
if ! [ -x "$(shell command -v ansible-test)" ]; then pip install ansible-core; fi
ansible --version
COLLECTION_VERSION=1.0.0 make install_collection
cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
test_collection_integration: install_collection
cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)

View File

@@ -96,6 +96,15 @@ register(
category=_('Authentication'),
category_slug='authentication',
)
register(
'ALLOW_METRICS_FOR_ANONYMOUS_USERS',
field_class=fields.BooleanField,
default=False,
label=_('Allow anonymous users to poll metrics'),
help_text=_('If true, anonymous users are allowed to poll metrics.'),
category=_('Authentication'),
category_slug='authentication',
)
def authentication_validate(serializer, attrs):

View File

@@ -344,6 +344,13 @@ class InstanceDetail(RetrieveUpdateAPIView):
model = models.Instance
serializer_class = serializers.InstanceSerializer
def update_raw_data(self, data):
# these fields are only valid on creation of an instance, so they are unwanted on the detail view
data.pop('listener_port', None)
data.pop('node_type', None)
data.pop('hostname', None)
return super(InstanceDetail, self).update_raw_data(data)
def update(self, request, *args, **kwargs):
r = super(InstanceDetail, self).update(request, *args, **kwargs)
if status.is_success(r.status_code):

View File

@@ -5,9 +5,11 @@
import logging
# Django
from django.conf import settings
from django.utils.translation import gettext_lazy as _
# Django REST Framework
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from rest_framework.exceptions import PermissionDenied
@@ -31,9 +33,14 @@ class MetricsView(APIView):
renderer_classes = [renderers.PlainTextRenderer, renderers.PrometheusJSONRenderer, renderers.BrowsableAPIRenderer]
def initialize_request(self, request, *args, **kwargs):
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS:
self.permission_classes = (AllowAny,)
return super(APIView, self).initialize_request(request, *args, **kwargs)
def get(self, request):
'''Show Metrics Details'''
if request.user.is_superuser or request.user.is_system_auditor:
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS or request.user.is_superuser or request.user.is_system_auditor:
metrics_to_show = ''
if not request.query_params.get('subsystemonly', "0") == "1":
metrics_to_show += metrics().decode('UTF-8')

View File

@@ -16,7 +16,7 @@ from rest_framework import status
from awx.main.constants import ACTIVE_STATES
from awx.main.utils import get_object_or_400
from awx.main.models.ha import Instance, InstanceGroup
from awx.main.models.ha import Instance, InstanceGroup, schedule_policy_task
from awx.main.models.organization import Team
from awx.main.models.projects import Project
from awx.main.models.inventory import Inventory
@@ -107,6 +107,11 @@ class InstanceGroupMembershipMixin(object):
if inst_name in ig_obj.policy_instance_list:
ig_obj.policy_instance_list.pop(ig_obj.policy_instance_list.index(inst_name))
ig_obj.save(update_fields=['policy_instance_list'])
# sometimes removing an instance has a non-obvious consequence
# this is almost always true if policy_instance_percentage or _minimum is non-zero
# after removing a single instance, the other memberships need to be re-balanced
schedule_policy_task()
return response

View File

@@ -2697,46 +2697,66 @@ class ActivityStreamAccess(BaseAccess):
# 'job_template', 'job', 'project', 'project_update', 'workflow_job',
# 'inventory_source', 'workflow_job_template'
inventory_set = Inventory.accessible_objects(self.user, 'read_role')
credential_set = Credential.accessible_objects(self.user, 'read_role')
q = Q(user=self.user)
inventory_set = Inventory.accessible_pk_qs(self.user, 'read_role')
if inventory_set:
q |= (
Q(ad_hoc_command__inventory__in=inventory_set)
| Q(inventory__in=inventory_set)
| Q(host__inventory__in=inventory_set)
| Q(group__inventory__in=inventory_set)
| Q(inventory_source__inventory__in=inventory_set)
| Q(inventory_update__inventory_source__inventory__in=inventory_set)
)
credential_set = Credential.accessible_pk_qs(self.user, 'read_role')
if credential_set:
q |= Q(credential__in=credential_set)
auditing_orgs = (
(Organization.accessible_objects(self.user, 'admin_role') | Organization.accessible_objects(self.user, 'auditor_role'))
.distinct()
.values_list('id', flat=True)
)
project_set = Project.accessible_objects(self.user, 'read_role')
jt_set = JobTemplate.accessible_objects(self.user, 'read_role')
team_set = Team.accessible_objects(self.user, 'read_role')
wfjt_set = WorkflowJobTemplate.accessible_objects(self.user, 'read_role')
app_set = OAuth2ApplicationAccess(self.user).filtered_queryset()
token_set = OAuth2TokenAccess(self.user).filtered_queryset()
if auditing_orgs:
q |= (
Q(user__in=auditing_orgs.values('member_role__members'))
| Q(organization__in=auditing_orgs)
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
)
return qs.filter(
Q(ad_hoc_command__inventory__in=inventory_set)
| Q(o_auth2_application__in=app_set)
| Q(o_auth2_access_token__in=token_set)
| Q(user__in=auditing_orgs.values('member_role__members'))
| Q(user=self.user)
| Q(organization__in=auditing_orgs)
| Q(inventory__in=inventory_set)
| Q(host__inventory__in=inventory_set)
| Q(group__inventory__in=inventory_set)
| Q(inventory_source__inventory__in=inventory_set)
| Q(inventory_update__inventory_source__inventory__in=inventory_set)
| Q(credential__in=credential_set)
| Q(team__in=team_set)
| Q(project__in=project_set)
| Q(project_update__project__in=project_set)
| Q(job_template__in=jt_set)
| Q(job__job_template__in=jt_set)
| Q(workflow_job_template__in=wfjt_set)
| Q(workflow_job_template_node__workflow_job_template__in=wfjt_set)
| Q(workflow_job__workflow_job_template__in=wfjt_set)
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
).distinct()
project_set = Project.accessible_pk_qs(self.user, 'read_role')
if project_set:
q |= Q(project__in=project_set) | Q(project_update__project__in=project_set)
jt_set = JobTemplate.accessible_pk_qs(self.user, 'read_role')
if jt_set:
q |= Q(job_template__in=jt_set) | Q(job__job_template__in=jt_set)
wfjt_set = WorkflowJobTemplate.accessible_pk_qs(self.user, 'read_role')
if wfjt_set:
q |= (
Q(workflow_job_template__in=wfjt_set)
| Q(workflow_job_template_node__workflow_job_template__in=wfjt_set)
| Q(workflow_job__workflow_job_template__in=wfjt_set)
)
team_set = Team.accessible_pk_qs(self.user, 'read_role')
if team_set:
q |= Q(team__in=team_set)
app_set = OAuth2ApplicationAccess(self.user).filtered_queryset()
if app_set:
q |= Q(o_auth2_application__in=app_set)
token_set = OAuth2TokenAccess(self.user).filtered_queryset()
if token_set:
q |= Q(o_auth2_access_token__in=token_set)
return qs.filter(q).distinct()
def can_add(self, data):
return False

View File

@@ -9,10 +9,16 @@ aim_inputs = {
'fields': [
{
'id': 'url',
'label': _('CyberArk AIM URL'),
'label': _('CyberArk CCP URL'),
'type': 'string',
'format': 'url',
},
{
'id': 'webservice_id',
'label': _('Web Service ID'),
'type': 'string',
'help_text': _('The CCP Web Service ID. Leave blank to default to AIMWebService.'),
},
{
'id': 'app_id',
'label': _('Application ID'),
@@ -64,10 +70,13 @@ def aim_backend(**kwargs):
client_cert = kwargs.get('client_cert', None)
client_key = kwargs.get('client_key', None)
verify = kwargs['verify']
webservice_id = kwargs['webservice_id']
app_id = kwargs['app_id']
object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format']
reason = kwargs.get('reason', None)
if webservice_id == '':
webservice_id = 'AIMWebService'
query_params = {
'AppId': app_id,
@@ -78,7 +87,7 @@ def aim_backend(**kwargs):
query_params['reason'] = reason
request_qs = '?' + urlencode(query_params, quote_via=quote)
request_url = urljoin(url, '/'.join(['AIMWebService', 'api', 'Accounts']))
request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts']))
with CertFiles(client_cert, client_key) as cert:
res = requests.get(
@@ -92,4 +101,4 @@ def aim_backend(**kwargs):
return res.json()['Content']
aim_plugin = CredentialPlugin('CyberArk AIM Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)

View File

@@ -1,6 +1,7 @@
import copy
import os
import pathlib
import time
from urllib.parse import urljoin
from .plugin import CredentialPlugin, CertFiles, raise_for_status
@@ -247,7 +248,15 @@ def kv_backend(**kwargs):
request_url = urljoin(url, '/'.join(['v1'] + path_segments)).rstrip('/')
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
response = sess.get(request_url, **request_kwargs)
request_retries = 0
while request_retries < 5:
response = sess.get(request_url, **request_kwargs)
# https://developer.hashicorp.com/vault/docs/enterprise/consistency
if response.status_code == 412:
request_retries += 1
time.sleep(1)
else:
break
raise_for_status(response)
json = response.json()
@@ -289,8 +298,15 @@ def ssh_backend(**kwargs):
with CertFiles(cacert) as cert:
request_kwargs['verify'] = cert
resp = sess.post(request_url, **request_kwargs)
request_retries = 0
while request_retries < 5:
resp = sess.post(request_url, **request_kwargs)
# https://developer.hashicorp.com/vault/docs/enterprise/consistency
if resp.status_code == 412:
request_retries += 1
time.sleep(1)
else:
break
raise_for_status(resp)
return resp.json()['data']['signed_key']

View File

@@ -3,14 +3,12 @@ import logging
import os
import signal
import time
import traceback
import datetime
from django.conf import settings
from django.utils.functional import cached_property
from django.utils.timezone import now as tz_now
from django.db import DatabaseError, OperationalError, transaction, connection as django_connection
from django.db.utils import InterfaceError, InternalError
from django.db import transaction, connection as django_connection
from django_guid import set_guid
import psutil
@@ -64,6 +62,7 @@ class CallbackBrokerWorker(BaseWorker):
"""
MAX_RETRIES = 2
INDIVIDUAL_EVENT_RETRIES = 3
last_stats = time.time()
last_flush = time.time()
total = 0
@@ -164,38 +163,48 @@ class CallbackBrokerWorker(BaseWorker):
else: # only calculate the seconds if the created time already has been set
metrics_total_job_event_processing_seconds += e.modified - e.created
metrics_duration_to_save = time.perf_counter()
saved_events = []
try:
cls.objects.bulk_create(events)
metrics_bulk_events_saved += len(events)
saved_events = events
self.buff[cls] = []
except Exception as exc:
logger.warning(f'Error in events bulk_create, will try individually up to 5 errors, error {str(exc)}')
# If the database is flaking, let ensure_connection throw a general exception
# will be caught by the outer loop, which goes into a proper sleep and retry loop
django_connection.ensure_connection()
logger.warning(f'Error in events bulk_create, will try individually, error: {str(exc)}')
# if an exception occurs, we should re-attempt to save the
# events one-by-one, because something in the list is
# broken/stale
consecutive_errors = 0
events_saved = 0
metrics_events_batch_save_errors += 1
for e in events:
for e in events.copy():
try:
e.save()
events_saved += 1
consecutive_errors = 0
metrics_singular_events_saved += 1
events.remove(e)
saved_events.append(e) # Importantly, remove successfully saved events from the buffer
except Exception as exc_indv:
consecutive_errors += 1
logger.info(f'Database Error Saving individual Job Event, error {str(exc_indv)}')
if consecutive_errors >= 5:
raise
metrics_singular_events_saved += events_saved
if events_saved == 0:
raise
retry_count = getattr(e, '_retry_count', 0) + 1
e._retry_count = retry_count
# special sanitization logic for postgres treatment of NUL 0x00 char
if (retry_count == 1) and isinstance(exc_indv, ValueError) and ("\x00" in e.stdout):
e.stdout = e.stdout.replace("\x00", "")
if retry_count >= self.INDIVIDUAL_EVENT_RETRIES:
logger.error(f'Hit max retries ({retry_count}) saving individual Event error: {str(exc_indv)}\ndata:\n{e.__dict__}')
events.remove(e)
else:
logger.info(f'Database Error Saving individual Event uuid={e.uuid} try={retry_count}, error: {str(exc_indv)}')
metrics_duration_to_save = time.perf_counter() - metrics_duration_to_save
for e in events:
for e in saved_events:
if not getattr(e, '_skip_websocket_message', False):
metrics_events_broadcast += 1
emit_event_detail(e)
if getattr(e, '_notification_trigger_event', False):
job_stats_wrapup(getattr(e, e.JOB_REFERENCE), event=e)
self.buff = {}
self.last_flush = time.time()
# only update metrics if we saved events
if (metrics_bulk_events_saved + metrics_singular_events_saved) > 0:
@@ -267,20 +276,16 @@ class CallbackBrokerWorker(BaseWorker):
try:
self.flush(force=flush)
break
except (OperationalError, InterfaceError, InternalError) as exc:
except Exception as exc:
# Aside from bugs, exceptions here are assumed to be due to database flake
if retries >= self.MAX_RETRIES:
logger.exception('Worker could not re-establish database connectivity, giving up on one or more events.')
self.buff = {}
return
delay = 60 * retries
logger.warning(f'Database Error Flushing Job Events, retry #{retries + 1} in {delay} seconds: {str(exc)}')
django_connection.close()
time.sleep(delay)
retries += 1
except DatabaseError:
logger.exception('Database Error Flushing Job Events')
django_connection.close()
break
except Exception as exc:
tb = traceback.format_exc()
logger.error('Callback Task Processor Raised Exception: %r', exc)
logger.error('Detail: {}'.format(tb))
except Exception:
logger.exception(f'Callback Task Processor Raised Unexpected Exception processing event data:\n{body}')

View File

@@ -32,8 +32,14 @@ class Command(BaseCommand):
def handle(self, **options):
self.old_key = settings.SECRET_KEY
custom_key = os.environ.get("TOWER_SECRET_KEY")
if options.get("use_custom_key") and custom_key:
self.new_key = custom_key
if options.get("use_custom_key"):
if custom_key:
self.new_key = custom_key
else:
print("Use custom key was specified but the env var TOWER_SECRET_KEY was not available")
import sys
sys.exit(1)
else:
self.new_key = base64.encodebytes(os.urandom(33)).decode().rstrip()
self._notification_templates()

View File

@@ -158,7 +158,11 @@ class InstanceManager(models.Manager):
return (False, instance)
# Create new instance, and fill in default values
create_defaults = {'node_state': Instance.States.INSTALLED, 'capacity': 0}
create_defaults = {
'node_state': Instance.States.INSTALLED,
'capacity': 0,
'listener_port': 27199,
}
if defaults is not None:
create_defaults.update(defaults)
uuid_option = {}

View File

@@ -0,0 +1,18 @@
# Generated by Django 3.2.16 on 2022-12-07 21:11
from django.db import migrations
from awx.main.migrations import _rbac as rbac
from awx.main.migrations import _migration_utils as migration_utils
class Migration(migrations.Migration):
dependencies = [
('main', '0173_instancegroup_max_limits'),
]
operations = [
migrations.RunPython(migration_utils.set_current_apps_for_migrations),
migrations.RunPython(rbac.create_roles),
]

View File

@@ -15,6 +15,7 @@ def aws(cred, env, private_data_dir):
if cred.has_input('security_token'):
env['AWS_SECURITY_TOKEN'] = cred.get_input('security_token', default='')
env['AWS_SESSION_TOKEN'] = env['AWS_SECURITY_TOKEN']
def gce(cred, env, private_data_dir):

View File

@@ -507,7 +507,7 @@ class TaskManager(TaskBase):
return None
@timeit
def start_task(self, task, instance_group, dependent_tasks=None, instance=None):
def start_task(self, task, instance_group, instance=None):
# Just like for process_running_tasks, add the job to the dependency graph and
# ask the TaskManagerInstanceGroups object to update consumed capacity on all
# implicated instances and container groups.
@@ -524,14 +524,6 @@ class TaskManager(TaskBase):
ScheduleTaskManager().schedule()
from awx.main.tasks.system import handle_work_error, handle_work_success
dependent_tasks = dependent_tasks or []
task_actual = {
'type': get_type_for_model(type(task)),
'id': task.id,
}
dependencies = [{'type': get_type_for_model(type(t)), 'id': t.id} for t in dependent_tasks]
task.status = 'waiting'
(start_status, opts) = task.pre_start()
@@ -563,6 +555,7 @@ class TaskManager(TaskBase):
# apply_async does a NOTIFY to the channel dispatcher is listening to
# postgres will treat this as part of the transaction, which is what we want
if task.status != 'failed' and type(task) is not WorkflowJob:
task_actual = {'type': get_type_for_model(type(task)), 'id': task.id}
task_cls = task._get_task_class()
task_cls.apply_async(
[task.pk],
@@ -570,7 +563,7 @@ class TaskManager(TaskBase):
queue=task.get_queue_name(),
uuid=task.celery_task_id,
callbacks=[{'task': handle_work_success.name, 'kwargs': {'task_actual': task_actual}}],
errbacks=[{'task': handle_work_error.name, 'args': [task.celery_task_id], 'kwargs': {'subtasks': [task_actual] + dependencies}}],
errbacks=[{'task': handle_work_error.name, 'kwargs': {'task_actual': task_actual}}],
)
# In exception cases, like a job failing pre-start checks, we send the websocket status message
@@ -609,7 +602,7 @@ class TaskManager(TaskBase):
if isinstance(task, WorkflowJob):
# Previously we were tracking allow_simultaneous blocking both here and in DependencyGraph.
# Double check that using just the DependencyGraph works for Workflows and Sliced Jobs.
self.start_task(task, None, task.get_jobs_fail_chain(), None)
self.start_task(task, None, None)
continue
found_acceptable_queue = False
@@ -637,7 +630,7 @@ class TaskManager(TaskBase):
execution_instance = self.tm_models.instances[control_instance.hostname].obj
task.log_lifecycle("controller_node_chosen")
task.log_lifecycle("execution_node_chosen")
self.start_task(task, self.controlplane_ig, task.get_jobs_fail_chain(), execution_instance)
self.start_task(task, self.controlplane_ig, execution_instance)
found_acceptable_queue = True
continue
@@ -645,7 +638,7 @@ class TaskManager(TaskBase):
if not self.tm_models.instance_groups[instance_group.name].has_remaining_capacity(task):
continue
if instance_group.is_container_group:
self.start_task(task, instance_group, task.get_jobs_fail_chain(), None)
self.start_task(task, instance_group, None)
found_acceptable_queue = True
break
@@ -670,7 +663,7 @@ class TaskManager(TaskBase):
)
)
execution_instance = self.tm_models.instances[execution_instance.hostname].obj
self.start_task(task, instance_group, task.get_jobs_fail_chain(), execution_instance)
self.start_task(task, instance_group, execution_instance)
found_acceptable_queue = True
break
else:

View File

@@ -390,6 +390,7 @@ class BaseTask(object):
logger.error("I/O error({0}) while trying to open lock file [{1}]: {2}".format(e.errno, lock_path, e.strerror))
raise
emitted_lockfile_log = False
start_time = time.time()
while True:
try:
@@ -401,6 +402,9 @@ class BaseTask(object):
logger.error("I/O error({0}) while trying to aquire lock on file [{1}]: {2}".format(e.errno, lock_path, e.strerror))
raise
else:
if not emitted_lockfile_log:
logger.info(f"exception acquiring lock {lock_path}: {e}")
emitted_lockfile_log = True
time.sleep(1.0)
self.instance.refresh_from_db(fields=['cancel_flag'])
if self.instance.cancel_flag or signal_callback():

View File

@@ -411,9 +411,11 @@ class AWXReceptorJob:
unit_status = receptor_ctl.simple_command(f'work status {self.unit_id}')
detail = unit_status.get('Detail', None)
state_name = unit_status.get('StateName', None)
stdout_size = unit_status.get('StdoutSize', 0)
except Exception:
detail = ''
state_name = ''
stdout_size = 0
logger.exception(f'An error was encountered while getting status for work unit {self.unit_id}')
if 'exceeded quota' in detail:
@@ -424,9 +426,16 @@ class AWXReceptorJob:
return
try:
resultsock = receptor_ctl.get_work_results(self.unit_id, return_sockfile=True)
lines = resultsock.readlines()
receptor_output = b"".join(lines).decode()
receptor_output = ''
if state_name == 'Failed' and self.task.runner_callback.event_ct == 0:
# if receptor work unit failed and no events were emitted, work results may
# contain useful information about why the job failed. In case stdout is
# massive, only ask for last 1000 bytes
startpos = max(stdout_size - 1000, 0)
resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id, startpos=startpos, return_socket=True, return_sockfile=True)
resultsock.setblocking(False) # this makes resultfile reads non blocking
lines = resultfile.readlines()
receptor_output = b"".join(lines).decode()
if receptor_output:
self.task.runner_callback.delay_update(result_traceback=receptor_output)
elif detail:

View File

@@ -52,6 +52,7 @@ from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename, reaper
from awx.main.utils.common import (
get_type_for_model,
ignore_inventory_computed_fields,
ignore_inventory_group_removal,
ScheduleWorkflowManager,
@@ -720,45 +721,43 @@ def handle_work_success(task_actual):
@task(queue=get_local_queuename)
def handle_work_error(task_id, *args, **kwargs):
subtasks = kwargs.get('subtasks', None)
logger.debug('Executing error task id %s, subtasks: %s' % (task_id, str(subtasks)))
first_instance = None
first_instance_type = ''
if subtasks is not None:
for each_task in subtasks:
try:
instance = UnifiedJob.get_instance_by_type(each_task['type'], each_task['id'])
if not instance:
# Unknown task type
logger.warning("Unknown task type: {}".format(each_task['type']))
continue
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(each_task['type'], each_task['id']))
continue
def handle_work_error(task_actual):
try:
instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
except ObjectDoesNotExist:
logger.warning('Missing {} `{}` in error callback.'.format(task_actual['type'], task_actual['id']))
return
if not instance:
return
if first_instance is None:
first_instance = instance
first_instance_type = each_task['type']
subtasks = instance.get_jobs_fail_chain() # reverse of dependent_jobs mostly
logger.debug(f'Executing error task id {task_actual["id"]}, subtasks: {[subtask.id for subtask in subtasks]}')
if instance.celery_task_id != task_id and not instance.cancel_flag and not instance.status in ('successful', 'failed'):
instance.status = 'failed'
instance.failed = True
if not instance.job_explanation:
instance.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
first_instance_type,
first_instance.name,
first_instance.id,
)
instance.save()
instance.websocket_emit_status("failed")
deps_of_deps = {}
for subtask in subtasks:
if subtask.celery_task_id != instance.celery_task_id and not subtask.cancel_flag and not subtask.status in ('successful', 'failed'):
# If there are multiple in the dependency chain, A->B->C, and this was called for A, blame B for clarity
blame_job = deps_of_deps.get(subtask.id, instance)
subtask.status = 'failed'
subtask.failed = True
if not subtask.job_explanation:
subtask.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % (
get_type_for_model(type(blame_job)),
blame_job.name,
blame_job.id,
)
subtask.save()
subtask.websocket_emit_status("failed")
for sub_subtask in subtask.get_jobs_fail_chain():
deps_of_deps[sub_subtask.id] = subtask
# We only send 1 job complete message since all the job completion message
# handling does is trigger the scheduler. If we extend the functionality of
# what the job complete message handler does then we may want to send a
# completion event for each job here.
if first_instance:
schedule_manager_success_or_error(first_instance)
schedule_manager_success_or_error(instance)
@task(queue=get_local_queuename)

View File

@@ -3,5 +3,6 @@
"ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
"AWS_ACCESS_KEY_ID": "fooo",
"AWS_SECRET_ACCESS_KEY": "fooo",
"AWS_SECURITY_TOKEN": "fooo"
"AWS_SECURITY_TOKEN": "fooo",
"AWS_SESSION_TOKEN": "fooo"
}

View File

@@ -1,7 +1,15 @@
import pytest
import time
from unittest import mock
from uuid import uuid4
from django.test import TransactionTestCase
from awx.main.dispatch.worker.callback import job_stats_wrapup, CallbackBrokerWorker
from awx.main.dispatch.worker.callback import job_stats_wrapup
from awx.main.models.jobs import Job
from awx.main.models.inventory import InventoryUpdate, InventorySource
from awx.main.models.events import InventoryUpdateEvent
@pytest.mark.django_db
@@ -24,3 +32,108 @@ def test_wrapup_does_send_notifications(mocker):
job.refresh_from_db()
assert job.host_status_counts == {}
mock.assert_called_once_with('succeeded')
class FakeRedis:
def keys(self, *args, **kwargs):
return []
def set(self):
pass
def get(self):
return None
@classmethod
def from_url(cls, *args, **kwargs):
return cls()
def pipeline(self):
return self
class TestCallbackBrokerWorker(TransactionTestCase):
@pytest.fixture(autouse=True)
def turn_off_websockets(self):
with mock.patch('awx.main.dispatch.worker.callback.emit_event_detail', lambda *a, **kw: None):
yield
def get_worker(self):
with mock.patch('redis.Redis', new=FakeRedis): # turn off redis stuff
return CallbackBrokerWorker()
def event_create_kwargs(self):
inventory_update = InventoryUpdate.objects.create(source='file', inventory_source=InventorySource.objects.create(source='file'))
return dict(inventory_update=inventory_update, created=inventory_update.created)
def test_flush_with_valid_event(self):
worker = self.get_worker()
events = [InventoryUpdateEvent(uuid=str(uuid4()), **self.event_create_kwargs())]
worker.buff = {InventoryUpdateEvent: events}
worker.flush()
assert worker.buff.get(InventoryUpdateEvent, []) == []
assert InventoryUpdateEvent.objects.filter(uuid=events[0].uuid).count() == 1
def test_flush_with_invalid_event(self):
worker = self.get_worker()
kwargs = self.event_create_kwargs()
events = [
InventoryUpdateEvent(uuid=str(uuid4()), stdout='good1', **kwargs),
InventoryUpdateEvent(uuid=str(uuid4()), stdout='bad', counter=-2, **kwargs),
InventoryUpdateEvent(uuid=str(uuid4()), stdout='good2', **kwargs),
]
worker.buff = {InventoryUpdateEvent: events.copy()}
worker.flush()
assert InventoryUpdateEvent.objects.filter(uuid=events[0].uuid).count() == 1
assert InventoryUpdateEvent.objects.filter(uuid=events[1].uuid).count() == 0
assert InventoryUpdateEvent.objects.filter(uuid=events[2].uuid).count() == 1
assert worker.buff == {InventoryUpdateEvent: [events[1]]}
def test_duplicate_key_not_saved_twice(self):
worker = self.get_worker()
events = [InventoryUpdateEvent(uuid=str(uuid4()), **self.event_create_kwargs())]
worker.buff = {InventoryUpdateEvent: events.copy()}
worker.flush()
# put current saved event in buffer (error case)
worker.buff = {InventoryUpdateEvent: [InventoryUpdateEvent.objects.get(uuid=events[0].uuid)]}
worker.last_flush = time.time() - 2.0
# here, the bulk_create will fail with UNIQUE constraint violation, but individual saves should resolve it
worker.flush()
assert InventoryUpdateEvent.objects.filter(uuid=events[0].uuid).count() == 1
assert worker.buff.get(InventoryUpdateEvent, []) == []
def test_give_up_on_bad_event(self):
worker = self.get_worker()
events = [InventoryUpdateEvent(uuid=str(uuid4()), counter=-2, **self.event_create_kwargs())]
worker.buff = {InventoryUpdateEvent: events.copy()}
for i in range(5):
worker.last_flush = time.time() - 2.0
worker.flush()
# Could not save, should be logged, and buffer should be cleared
assert worker.buff.get(InventoryUpdateEvent, []) == []
assert InventoryUpdateEvent.objects.filter(uuid=events[0].uuid).count() == 0 # sanity
def test_postgres_invalid_NUL_char(self):
# In postgres, text fields reject NUL character, 0x00
# tests use sqlite3 which will not raise an error
# but we can still test that it is sanitized before saving
worker = self.get_worker()
kwargs = self.event_create_kwargs()
events = [InventoryUpdateEvent(uuid=str(uuid4()), stdout="\x00", **kwargs)]
assert "\x00" in events[0].stdout # sanity
worker.buff = {InventoryUpdateEvent: events.copy()}
with mock.patch.object(InventoryUpdateEvent.objects, 'bulk_create', side_effect=ValueError):
with mock.patch.object(events[0], 'save', side_effect=ValueError):
worker.flush()
assert "\x00" not in events[0].stdout
worker.last_flush = time.time() - 2.0
worker.flush()
event = InventoryUpdateEvent.objects.get(uuid=events[0].uuid)
assert "\x00" not in event.stdout

View File

@@ -171,13 +171,17 @@ class TestKeyRegeneration:
def test_use_custom_key_with_empty_tower_secret_key_env_var(self):
os.environ['TOWER_SECRET_KEY'] = ''
new_key = call_command('regenerate_secret_key', '--use-custom-key')
assert settings.SECRET_KEY != new_key
with pytest.raises(SystemExit) as e:
call_command('regenerate_secret_key', '--use-custom-key')
assert e.type == SystemExit
assert e.value.code == 1
def test_use_custom_key_with_no_tower_secret_key_env_var(self):
os.environ.pop('TOWER_SECRET_KEY', None)
new_key = call_command('regenerate_secret_key', '--use-custom-key')
assert settings.SECRET_KEY != new_key
with pytest.raises(SystemExit) as e:
call_command('regenerate_secret_key', '--use-custom-key')
assert e.type == SystemExit
assert e.value.code == 1
def test_with_tower_secret_key_env_var(self):
custom_key = 'MXSq9uqcwezBOChl/UfmbW1k4op+bC+FQtwPqgJ1u9XV'

View File

@@ -23,7 +23,7 @@ def test_multi_group_basic_job_launch(instance_factory, controlplane_instance_gr
mock_task_impact.return_value = 500
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, [], i1), mock.call(j2, ig2, [], i2)])
TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j2, ig2, i2)])
@pytest.mark.django_db
@@ -54,7 +54,7 @@ def test_multi_group_with_shared_dependency(instance_factory, controlplane_insta
DependencyManager().schedule()
TaskManager().schedule()
pu = p.project_updates.first()
TaskManager.start_task.assert_called_once_with(pu, controlplane_instance_group, [j1, j2], controlplane_instance_group.instances.all()[0])
TaskManager.start_task.assert_called_once_with(pu, controlplane_instance_group, controlplane_instance_group.instances.all()[0])
pu.finished = pu.created + timedelta(seconds=1)
pu.status = "successful"
pu.save()
@@ -62,8 +62,8 @@ def test_multi_group_with_shared_dependency(instance_factory, controlplane_insta
DependencyManager().schedule()
TaskManager().schedule()
TaskManager.start_task.assert_any_call(j1, ig1, [], i1)
TaskManager.start_task.assert_any_call(j2, ig2, [], i2)
TaskManager.start_task.assert_any_call(j1, ig1, i1)
TaskManager.start_task.assert_any_call(j2, ig2, i2)
assert TaskManager.start_task.call_count == 2
@@ -75,7 +75,7 @@ def test_workflow_job_no_instancegroup(workflow_job_template_factory, controlpla
wfj.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(wfj, None, [], None)
TaskManager.start_task.assert_called_once_with(wfj, None, None)
assert wfj.instance_group is None
@@ -150,7 +150,7 @@ def test_failover_group_run(instance_factory, controlplane_instance_group, mocke
mock_task_impact.return_value = 500
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
mock_job.assert_has_calls([mock.call(j1, ig1, [], i1), mock.call(j1_1, ig2, [], i2)])
mock_job.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j1_1, ig2, i2)])
assert mock_job.call_count == 2

View File

@@ -18,7 +18,7 @@ def test_single_job_scheduler_launch(hybrid_instance, controlplane_instance_grou
j = create_job(objects.job_template)
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@pytest.mark.django_db
@@ -240,12 +240,12 @@ def test_multi_jt_capacity_blocking(hybrid_instance, job_template_factory, mocke
mock_task_impact.return_value = 505
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
mock_job.assert_called_once_with(j1, controlplane_instance_group, [], instance)
mock_job.assert_called_once_with(j1, controlplane_instance_group, instance)
j1.status = "successful"
j1.save()
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
mock_job.assert_called_once_with(j2, controlplane_instance_group, [], instance)
mock_job.assert_called_once_with(j2, controlplane_instance_group, instance)
@pytest.mark.django_db
@@ -337,12 +337,12 @@ def test_single_job_dependencies_project_launch(controlplane_instance_group, job
pu = [x for x in p.project_updates.all()]
assert len(pu) == 1
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(pu[0], controlplane_instance_group, [j], instance)
TaskManager.start_task.assert_called_once_with(pu[0], controlplane_instance_group, instance)
pu[0].status = "successful"
pu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@pytest.mark.django_db
@@ -365,12 +365,12 @@ def test_single_job_dependencies_inventory_update_launch(controlplane_instance_g
iu = [x for x in ii.inventory_updates.all()]
assert len(iu) == 1
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(iu[0], controlplane_instance_group, [j], instance)
TaskManager.start_task.assert_called_once_with(iu[0], controlplane_instance_group, instance)
iu[0].status = "successful"
iu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@pytest.mark.django_db
@@ -412,7 +412,7 @@ def test_job_dependency_with_already_updated(controlplane_instance_group, job_te
mock_iu.assert_not_called()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@pytest.mark.django_db
@@ -442,9 +442,7 @@ def test_shared_dependencies_launch(controlplane_instance_group, job_template_fa
TaskManager().schedule()
pu = p.project_updates.first()
iu = ii.inventory_updates.first()
TaskManager.start_task.assert_has_calls(
[mock.call(iu, controlplane_instance_group, [j1, j2], instance), mock.call(pu, controlplane_instance_group, [j1, j2], instance)]
)
TaskManager.start_task.assert_has_calls([mock.call(iu, controlplane_instance_group, instance), mock.call(pu, controlplane_instance_group, instance)])
pu.status = "successful"
pu.finished = pu.created + timedelta(seconds=1)
pu.save()
@@ -453,9 +451,7 @@ def test_shared_dependencies_launch(controlplane_instance_group, job_template_fa
iu.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_has_calls(
[mock.call(j1, controlplane_instance_group, [], instance), mock.call(j2, controlplane_instance_group, [], instance)]
)
TaskManager.start_task.assert_has_calls([mock.call(j1, controlplane_instance_group, instance), mock.call(j2, controlplane_instance_group, instance)])
pu = [x for x in p.project_updates.all()]
iu = [x for x in ii.inventory_updates.all()]
assert len(pu) == 1
@@ -479,7 +475,7 @@ def test_job_not_blocking_project_update(controlplane_instance_group, job_templa
project_update.status = "pending"
project_update.save()
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(project_update, controlplane_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(project_update, controlplane_instance_group, instance)
@pytest.mark.django_db
@@ -503,7 +499,7 @@ def test_job_not_blocking_inventory_update(controlplane_instance_group, job_temp
DependencyManager().schedule()
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(inventory_update, controlplane_instance_group, [], instance)
TaskManager.start_task.assert_called_once_with(inventory_update, controlplane_instance_group, instance)
@pytest.mark.django_db

View File

@@ -5,8 +5,8 @@ import tempfile
import shutil
from awx.main.tasks.jobs import RunJob
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files
from awx.main.models import Instance, Job
from awx.main.tasks.system import execution_node_health_check, _cleanup_images_and_files, handle_work_error
from awx.main.models import Instance, Job, InventoryUpdate, ProjectUpdate
@pytest.fixture
@@ -74,3 +74,17 @@ def test_does_not_run_reaped_job(mocker, mock_me):
job.refresh_from_db()
assert job.status == 'failed'
mock_run.assert_not_called()
@pytest.mark.django_db
def test_handle_work_error_nested(project, inventory_source):
pu = ProjectUpdate.objects.create(status='failed', project=project, celery_task_id='1234')
iu = InventoryUpdate.objects.create(status='pending', inventory_source=inventory_source, source='scm')
job = Job.objects.create(status='pending')
iu.dependent_jobs.add(pu)
job.dependent_jobs.add(pu, iu)
handle_work_error({'type': 'project_update', 'id': pu.id})
iu.refresh_from_db()
job.refresh_from_db()
assert iu.job_explanation == f'Previous Task Failed: {{"job_type": "project_update", "job_name": "", "job_id": "{pu.id}"}}'
assert job.job_explanation == f'Previous Task Failed: {{"job_type": "inventory_update", "job_name": "", "job_id": "{iu.id}"}}'

View File

@@ -103,6 +103,10 @@ ColorHandler = logging.StreamHandler
if settings.COLOR_LOGS is True:
try:
from logutils.colorize import ColorizingStreamHandler
import colorama
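# Reset any prior colorama state, then re-init with wrapping disabled so raw ANSI color codes pass through to the stream untouched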
colorama.deinit()
colorama.init(wrap=False, convert=False, strip=False)
class ColorHandler(ColorizingStreamHandler):
def colorize(self, line, record):

View File

@@ -418,6 +418,9 @@ AUTH_BASIC_ENABLED = True
# when trying to access a UI page that requires authentication.
LOGIN_REDIRECT_OVERRIDE = ''
# Note: This setting may be overridden by database settings.
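# When enabled, unauthenticated requests to the metrics endpoint are allowed.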
ALLOW_METRICS_FOR_ANONYMOUS_USERS = False
DEVSERVER_DEFAULT_ADDR = '0.0.0.0'
DEVSERVER_DEFAULT_PORT = '8013'

View File

@@ -114,7 +114,7 @@ if 'sqlite3' not in DATABASES['default']['ENGINE']: # noqa
# this needs to stay at the bottom of this file
try:
if os.getenv('AWX_KUBE_DEVEL', False):
include(optional('minikube.py'), scope=locals())
include(optional('development_kube.py'), scope=locals())
else:
include(optional('local_*.py'), scope=locals())
except ImportError:

View File

@@ -1,4 +1,4 @@
BROADCAST_WEBSOCKET_SECRET = '🤖starscream🤖'
BROADCAST_WEBSOCKET_PORT = 8013
BROADCAST_WEBSOCKET_PORT = 8052
BROADCAST_WEBSOCKET_VERIFY_CERT = False
BROADCAST_WEBSOCKET_PROTOCOL = 'http'

View File

@@ -452,7 +452,10 @@ def on_populate_user(sender, **kwargs):
remove = bool(team_opts.get('remove', True))
state = _update_m2m_from_groups(ldap_user, users_opts, remove)
if state is not None:
desired_team_states[team_name] = {'member_role': state}
organization = team_opts['organization']
if organization not in desired_team_states:
desired_team_states[organization] = {}
desired_team_states[organization][team_name] = {'member_role': state}
# Check if user.profile is available, otherwise force user.save()
try:
@@ -473,16 +476,28 @@ def on_populate_user(sender, **kwargs):
def reconcile_users_org_team_mappings(user, desired_org_states, desired_team_states, source):
#
# Arguments:
# user - a user object
# desired_org_states: { '<org_name>': { '<role>': <boolean> or None } }
# desired_team_states: { '<org_name>': { '<team name>': { '<role>': <boolean> or None } } }
# source - a text label indicating the "authentication adapter" for debug messages
#
# This function will load the user's existing roles and then, based on the desired states, modify the user's roles
# True indicates the user needs to be a member of the role
# False indicates the user should not be a member of the role
# None means this function should not change the user's membership of a role
#
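# Example desired states (illustrative values only):
#   desired_org_states:  { 'My Org': { 'member_role': True, 'admin_role': None } }
#   desired_team_states: { 'My Org': { 'My Team': { 'member_role': False } } }
#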
from awx.main.models import Organization, Team
content_types = []
reconcile_items = []
if desired_org_states:
content_types.append(ContentType.objects.get_for_model(Organization))
reconcile_items.append(('organization', desired_org_states, Organization))
reconcile_items.append(('organization', desired_org_states))
if desired_team_states:
content_types.append(ContentType.objects.get_for_model(Team))
reconcile_items.append(('team', desired_team_states, Team))
reconcile_items.append(('team', desired_team_states))
if not content_types:
# If both desired states were empty we can simply return because there is nothing to reconcile
@@ -491,24 +506,39 @@ def reconcile_users_org_team_mappings(user, desired_org_states, desired_team_sta
# users_roles is a flat set of IDs
users_roles = set(user.roles.filter(content_type__in=content_types).values_list('pk', flat=True))
for object_type, desired_states, model in reconcile_items:
# Get all of the roles in the desired states for efficient DB extraction
for object_type, desired_states in reconcile_items:
roles = []
for sub_dict in desired_states.values():
for role_name in sub_dict:
if sub_dict[role_name] is None:
continue
if role_name not in roles:
roles.append(role_name)
# Get a set of named tuples for the org/team name plus all of the roles we got above
model_roles = model.objects.filter(name__in=desired_states.keys()).values_list('name', *roles, named=True)
if object_type == 'organization':
for sub_dict in desired_states.values():
for role_name in sub_dict:
if sub_dict[role_name] is None:
continue
if role_name not in roles:
roles.append(role_name)
model_roles = Organization.objects.filter(name__in=desired_states.keys()).values_list('name', *roles, named=True)
else:
team_names = []
for teams_dict in desired_states.values():
team_names.extend(teams_dict.keys())
for sub_dict in teams_dict.values():
for role_name in sub_dict:
if sub_dict[role_name] is None:
continue
if role_name not in roles:
roles.append(role_name)
model_roles = Team.objects.filter(name__in=team_names).values_list('name', 'organization__name', *roles, named=True)
for row in model_roles:
for role_name in roles:
desired_state = desired_states.get(row.name, {})
if desired_state[role_name] is None:
if object_type == 'organization':
desired_state = desired_states.get(row.name, {})
else:
desired_state = desired_states.get(row.organization__name, {}).get(row.name, {})
if desired_state.get(role_name, None) is None:
# The mapping was not defined for this [org/team]/role so we can just pass
pass
continue
# If somehow the auth adapter knows about an item's role but that role is not defined in the DB we are going to print a pretty error
# This is your classic safety net that we should never hit; but here you are reading this comment... good luck and Godspeed.

View File

@@ -8,7 +8,7 @@
"dependencies": {
"@lingui/react": "3.14.0",
"@patternfly/patternfly": "4.217.1",
"@patternfly/react-core": "^4.250.1",
"@patternfly/react-core": "^4.264.0",
"@patternfly/react-icons": "4.92.10",
"@patternfly/react-table": "4.108.0",
"ace-builds": "^1.10.1",
@@ -22,7 +22,7 @@
"has-ansi": "5.0.1",
"html-entities": "2.3.2",
"js-yaml": "4.1.0",
"luxon": "^3.0.3",
"luxon": "^3.1.1",
"prop-types": "^15.8.1",
"react": "17.0.2",
"react-ace": "^10.1.0",
@@ -3752,13 +3752,13 @@
"integrity": "sha512-uN7JgfQsyR16YHkuGRCTIcBcnyKIqKjGkB2SGk9x1XXH3yYGenL83kpAavX9Xtozqp17KppOlybJuzcKvZMrgw=="
},
"node_modules/@patternfly/react-core": {
"version": "4.250.1",
"resolved": "https://registry.npmjs.org/@patternfly/react-core/-/react-core-4.250.1.tgz",
"integrity": "sha512-vAOZPQdZzYXl/vkHnHMIt1eC3nrPDdsuuErPatkNPwmSvilXuXmWP5wxoJ36FbSNRRURkprFwx52zMmWS3iHJA==",
"version": "4.264.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-core/-/react-core-4.264.0.tgz",
"integrity": "sha512-tK0BMWxw8nhukev40HZ6q6d02pDnjX7oyA91vHa18aakJUKBWMaerqpG4NZVMoh0tPKX3aLNj+zyCwDALFAZZw==",
"dependencies": {
"@patternfly/react-icons": "^4.92.6",
"@patternfly/react-styles": "^4.91.6",
"@patternfly/react-tokens": "^4.93.6",
"@patternfly/react-icons": "^4.93.0",
"@patternfly/react-styles": "^4.92.0",
"@patternfly/react-tokens": "^4.94.0",
"focus-trap": "6.9.2",
"react-dropzone": "9.0.0",
"tippy.js": "5.1.2",
@@ -3769,6 +3769,15 @@
"react-dom": "^16.8 || ^17 || ^18"
}
},
"node_modules/@patternfly/react-core/node_modules/@patternfly/react-icons": {
"version": "4.93.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-icons/-/react-icons-4.93.0.tgz",
"integrity": "sha512-OH0vORVioL+HLWMEog8/3u8jsiMCeJ0pFpvRKRhy5Uk4CdAe40k1SOBvXJP6opr+O8TLbz0q3bm8Jsh/bPaCuQ==",
"peerDependencies": {
"react": "^16.8 || ^17 || ^18",
"react-dom": "^16.8 || ^17 || ^18"
}
},
"node_modules/@patternfly/react-core/node_modules/tslib": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.3.1.tgz",
@@ -3784,9 +3793,9 @@
}
},
"node_modules/@patternfly/react-styles": {
"version": "4.91.10",
"resolved": "https://registry.npmjs.org/@patternfly/react-styles/-/react-styles-4.91.10.tgz",
"integrity": "sha512-fAG4Vjp63ohiR92F4e/Gkw5q1DSSckHKqdnEF75KUpSSBORzYP0EKMpupSd6ItpQFJw3iWs3MJi3/KIAAfU1Jw=="
"version": "4.92.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-styles/-/react-styles-4.92.0.tgz",
"integrity": "sha512-B/f6iyu8UEN1+wRxdC4sLIhvJeyL8SqInDXZmwOIqK8uPJ8Lze7qrbVhkkVzbMF37/oDPVa6dZH8qZFq062LEA=="
},
"node_modules/@patternfly/react-table": {
"version": "4.108.0",
@@ -3811,9 +3820,9 @@
"integrity": "sha512-d6xOpEDfsi2CZVlPQzGeux8XMwLT9hssAsaPYExaQMuYskwb+x1x7J371tWlbBdWHroy99KnVB6qIkUbs5X3UQ=="
},
"node_modules/@patternfly/react-tokens": {
"version": "4.93.10",
"resolved": "https://registry.npmjs.org/@patternfly/react-tokens/-/react-tokens-4.93.10.tgz",
"integrity": "sha512-F+j1irDc9M6zvY6qNtDryhbpnHz3R8ymHRdGelNHQzPTIK88YSWEnT1c9iUI+uM/iuZol7sJmO5STtg2aPIDRQ=="
"version": "4.94.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-tokens/-/react-tokens-4.94.0.tgz",
"integrity": "sha512-fYXxUJZnzpn89K2zzHF0cSncZZVGKrohdb5f5T1wzxwU2NZPVGpvr88xhm+V2Y/fSrrTPwXcP3IIdtNOOtJdZw=="
},
"node_modules/@pmmmwh/react-refresh-webpack-plugin": {
"version": "0.5.4",
@@ -15468,9 +15477,9 @@
}
},
"node_modules/luxon": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/luxon/-/luxon-3.0.3.tgz",
"integrity": "sha512-+EfHWnF+UT7GgTnq5zXg3ldnTKL2zdv7QJgsU5bjjpbH17E3qi/puMhQyJVYuCq+FRkogvB5WB6iVvUr+E4a7w==",
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/luxon/-/luxon-3.1.1.tgz",
"integrity": "sha512-Ah6DloGmvseB/pX1cAmjbFvyU/pKuwQMQqz7d0yvuDlVYLTs2WeDHQMpC8tGjm1da+BriHROW/OEIT/KfYg6xw==",
"engines": {
"node": ">=12"
}
@@ -25094,19 +25103,25 @@
"integrity": "sha512-uN7JgfQsyR16YHkuGRCTIcBcnyKIqKjGkB2SGk9x1XXH3yYGenL83kpAavX9Xtozqp17KppOlybJuzcKvZMrgw=="
},
"@patternfly/react-core": {
"version": "4.250.1",
"resolved": "https://registry.npmjs.org/@patternfly/react-core/-/react-core-4.250.1.tgz",
"integrity": "sha512-vAOZPQdZzYXl/vkHnHMIt1eC3nrPDdsuuErPatkNPwmSvilXuXmWP5wxoJ36FbSNRRURkprFwx52zMmWS3iHJA==",
"version": "4.264.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-core/-/react-core-4.264.0.tgz",
"integrity": "sha512-tK0BMWxw8nhukev40HZ6q6d02pDnjX7oyA91vHa18aakJUKBWMaerqpG4NZVMoh0tPKX3aLNj+zyCwDALFAZZw==",
"requires": {
"@patternfly/react-icons": "^4.92.6",
"@patternfly/react-styles": "^4.91.6",
"@patternfly/react-tokens": "^4.93.6",
"@patternfly/react-icons": "^4.93.0",
"@patternfly/react-styles": "^4.92.0",
"@patternfly/react-tokens": "^4.94.0",
"focus-trap": "6.9.2",
"react-dropzone": "9.0.0",
"tippy.js": "5.1.2",
"tslib": "^2.0.0"
},
"dependencies": {
"@patternfly/react-icons": {
"version": "4.93.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-icons/-/react-icons-4.93.0.tgz",
"integrity": "sha512-OH0vORVioL+HLWMEog8/3u8jsiMCeJ0pFpvRKRhy5Uk4CdAe40k1SOBvXJP6opr+O8TLbz0q3bm8Jsh/bPaCuQ==",
"requires": {}
},
"tslib": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.3.1.tgz",
@@ -25121,9 +25136,9 @@
"requires": {}
},
"@patternfly/react-styles": {
"version": "4.91.10",
"resolved": "https://registry.npmjs.org/@patternfly/react-styles/-/react-styles-4.91.10.tgz",
"integrity": "sha512-fAG4Vjp63ohiR92F4e/Gkw5q1DSSckHKqdnEF75KUpSSBORzYP0EKMpupSd6ItpQFJw3iWs3MJi3/KIAAfU1Jw=="
"version": "4.92.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-styles/-/react-styles-4.92.0.tgz",
"integrity": "sha512-B/f6iyu8UEN1+wRxdC4sLIhvJeyL8SqInDXZmwOIqK8uPJ8Lze7qrbVhkkVzbMF37/oDPVa6dZH8qZFq062LEA=="
},
"@patternfly/react-table": {
"version": "4.108.0",
@@ -25146,9 +25161,9 @@
}
},
"@patternfly/react-tokens": {
"version": "4.93.10",
"resolved": "https://registry.npmjs.org/@patternfly/react-tokens/-/react-tokens-4.93.10.tgz",
"integrity": "sha512-F+j1irDc9M6zvY6qNtDryhbpnHz3R8ymHRdGelNHQzPTIK88YSWEnT1c9iUI+uM/iuZol7sJmO5STtg2aPIDRQ=="
"version": "4.94.0",
"resolved": "https://registry.npmjs.org/@patternfly/react-tokens/-/react-tokens-4.94.0.tgz",
"integrity": "sha512-fYXxUJZnzpn89K2zzHF0cSncZZVGKrohdb5f5T1wzxwU2NZPVGpvr88xhm+V2Y/fSrrTPwXcP3IIdtNOOtJdZw=="
},
"@pmmmwh/react-refresh-webpack-plugin": {
"version": "0.5.4",
@@ -34210,9 +34225,9 @@
}
},
"luxon": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/luxon/-/luxon-3.0.3.tgz",
"integrity": "sha512-+EfHWnF+UT7GgTnq5zXg3ldnTKL2zdv7QJgsU5bjjpbH17E3qi/puMhQyJVYuCq+FRkogvB5WB6iVvUr+E4a7w=="
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/luxon/-/luxon-3.1.1.tgz",
"integrity": "sha512-Ah6DloGmvseB/pX1cAmjbFvyU/pKuwQMQqz7d0yvuDlVYLTs2WeDHQMpC8tGjm1da+BriHROW/OEIT/KfYg6xw=="
},
"lz-string": {
"version": "1.4.4",

View File

@@ -8,7 +8,7 @@
"dependencies": {
"@lingui/react": "3.14.0",
"@patternfly/patternfly": "4.217.1",
"@patternfly/react-core": "^4.250.1",
"@patternfly/react-core": "^4.264.0",
"@patternfly/react-icons": "4.92.10",
"@patternfly/react-table": "4.108.0",
"ace-builds": "^1.10.1",
@@ -22,7 +22,7 @@
"has-ansi": "5.0.1",
"html-entities": "2.3.2",
"js-yaml": "4.1.0",
"luxon": "^3.0.3",
"luxon": "^3.1.1",
"prop-types": "^15.8.1",
"react": "17.0.2",
"react-ace": "^10.1.0",

View File

@@ -153,6 +153,10 @@ function CredentialsStep({
}))}
value={selectedType && selectedType.id}
onChange={(e, id) => {
// Reset query params when the category of credentials is changed
history.replace({
search: '',
});
setSelectedType(types.find((o) => o.id === parseInt(id, 10)));
}}
/>

View File

@@ -3,6 +3,7 @@ import { act } from 'react-dom/test-utils';
import { Formik } from 'formik';
import { CredentialsAPI, CredentialTypesAPI } from 'api';
import { mountWithContexts } from '../../../../testUtils/enzymeHelpers';
import { createMemoryHistory } from 'history';
import CredentialsStep from './CredentialsStep';
jest.mock('../../../api/models/CredentialTypes');
@@ -164,6 +165,41 @@ describe('CredentialsStep', () => {
});
});
test('should reset query params (credential.page) when selected credential type is changed', async () => {
let wrapper;
const history = createMemoryHistory({
initialEntries: ['?credential.page=2'],
});
await act(async () => {
wrapper = mountWithContexts(
<Formik>
<CredentialsStep allowCredentialsWithPasswords />
</Formik>,
{
context: { router: { history } },
}
);
});
wrapper.update();
expect(CredentialsAPI.read).toHaveBeenCalledWith({
credential_type: 1,
order_by: 'name',
page: 2,
page_size: 5,
});
await act(async () => {
wrapper.find('AnsibleSelect').invoke('onChange')({}, 3);
});
expect(CredentialsAPI.read).toHaveBeenCalledWith({
credential_type: 3,
order_by: 'name',
page: 1,
page_size: 5,
});
});
test("error should be shown when a credential that prompts for passwords is selected on a step that doesn't allow it", async () => {
let wrapper;
await act(async () => {

View File

@@ -173,6 +173,10 @@ function MultiCredentialsLookup({
}))}
value={selectedType && selectedType.id}
onChange={(e, id) => {
// Reset query params when the category of credentials is changed
history.replace({
search: '',
});
setSelectedType(
credentialTypes.find((o) => o.id === parseInt(id, 10))
);

View File

@@ -6,6 +6,7 @@ import {
mountWithContexts,
waitForElement,
} from '../../../testUtils/enzymeHelpers';
import { createMemoryHistory } from 'history';
import MultiCredentialsLookup from './MultiCredentialsLookup';
jest.mock('../../api');
@@ -228,6 +229,53 @@ describe('<Formik><MultiCredentialsLookup /></Formik>', () => {
]);
});
test('should reset query params (credentials.page) when selected credential type is changed', async () => {
const history = createMemoryHistory({
initialEntries: ['?credentials.page=2'],
});
await act(async () => {
wrapper = mountWithContexts(
<Formik>
<MultiCredentialsLookup
value={credentials}
tooltip="This is credentials look up"
onChange={() => {}}
onError={() => {}}
/>
</Formik>,
{
context: { router: { history } },
}
);
});
const searchButton = await waitForElement(
wrapper,
'Button[aria-label="Search"]'
);
await act(async () => {
searchButton.invoke('onClick')();
});
expect(CredentialsAPI.read).toHaveBeenCalledWith({
credential_type: 400,
order_by: 'name',
page: 2,
page_size: 5,
});
const select = await waitForElement(wrapper, 'AnsibleSelect');
await act(async () => {
select.invoke('onChange')({}, 500);
});
wrapper.update();
expect(CredentialsAPI.read).toHaveBeenCalledWith({
credential_type: 500,
order_by: 'name',
page: 1,
page_size: 5,
});
});
test('should only add 1 credential per credential type except vault(see below)', async () => {
const onChange = jest.fn();
await act(async () => {

View File

@@ -34,8 +34,14 @@ const QS_CONFIG = getQSConfig('template', {
order_by: 'name',
});
function RelatedTemplateList({ searchParams, projectName = null }) {
const { id: projectId } = useParams();
const resources = {
projects: 'project',
inventories: 'inventory',
credentials: 'credentials',
};
function RelatedTemplateList({ searchParams, resourceName = null }) {
const { id } = useParams();
const location = useLocation();
const { addToast, Toast, toastProps } = useToast();
@@ -129,12 +135,19 @@ function RelatedTemplateList({ searchParams, projectName = null }) {
actions && Object.prototype.hasOwnProperty.call(actions, 'POST');
let linkTo = '';
if (projectName) {
const qs = encodeQueryString({
project_id: projectId,
project_name: projectName,
});
if (resourceName) {
const queryString = {
resource_id: id,
resource_name: resourceName,
resource_type: resources[location.pathname.split('/')[1]],
resource_kind: null,
};
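// Credentials pass resourceName as a [name, kind] pair; other resources pass a plain name string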
if (Array.isArray(resourceName)) {
const [name, kind] = resourceName;
queryString.resource_name = name;
queryString.resource_kind = kind;
}
const qs = encodeQueryString(queryString);
linkTo = `/templates/job_template/add/?${qs}`;
} else {
linkTo = '/templates/job_template/add';

View File

@@ -0,0 +1 @@
/* eslint-disable import/prefer-default-export */

View File

@@ -420,7 +420,7 @@ describe('<AdvancedSearch />', () => {
const selectOptions = wrapper.find(
'Select[aria-label="Related search type"] SelectOption'
);
expect(selectOptions).toHaveLength(2);
expect(selectOptions).toHaveLength(3);
expect(
selectOptions.find('SelectOption[id="name-option-select"]').prop('value')
).toBe('name__icontains');

View File

@@ -31,6 +31,12 @@ function RelatedLookupTypeInput({
value="name__icontains"
description={t`Fuzzy search on name field.`}
/>
<SelectOption
id="name-exact-option-select"
key="name"
value="name"
description={t`Exact search on name field.`}
/>
<SelectOption
id="id-option-select"
key="id"

View File

@@ -22,6 +22,16 @@ import { CredentialsAPI } from 'api';
import CredentialDetail from './CredentialDetail';
import CredentialEdit from './CredentialEdit';
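// Credential kinds that can be attached to job templates; only these get a Job Templates tab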
const jobTemplateCredentialTypes = [
'machine',
'cloud',
'net',
'ssh',
'vault',
'kubernetes',
'cryptography',
];
function Credential({ setBreadcrumb }) {
const { pathname } = useLocation();
@@ -75,13 +85,14 @@ function Credential({ setBreadcrumb }) {
link: `/credentials/${id}/access`,
id: 1,
},
{
];
if (jobTemplateCredentialTypes.includes(credential?.kind)) {
tabsArray.push({
name: t`Job Templates`,
link: `/credentials/${id}/job_templates`,
id: 2,
},
];
});
}
let showCardHeader = true;
if (pathname.endsWith('edit') || pathname.endsWith('add')) {
@@ -133,6 +144,7 @@ function Credential({ setBreadcrumb }) {
<Route key="job_templates" path="/credentials/:id/job_templates">
<RelatedTemplateList
searchParams={{ credentials__id: credential.id }}
resourceName={[credential.name, credential.kind]}
/>
</Route>,
<Route key="not-found" path="*">

View File

@@ -6,7 +6,8 @@ import {
mountWithContexts,
waitForElement,
} from '../../../testUtils/enzymeHelpers';
import mockCredential from './shared/data.scmCredential.json';
import mockMachineCredential from './shared/data.machineCredential.json';
import mockSCMCredential from './shared/data.scmCredential.json';
import Credential from './Credential';
jest.mock('../../api');
@@ -21,13 +22,10 @@ jest.mock('react-router-dom', () => ({
describe('<Credential />', () => {
let wrapper;
beforeEach(() => {
test('initially renders user-based machine credential successfully', async () => {
CredentialsAPI.readDetail.mockResolvedValueOnce({
data: mockCredential,
data: mockMachineCredential,
});
});
test('initially renders user-based credential successfully', async () => {
await act(async () => {
wrapper = mountWithContexts(<Credential setBreadcrumb={() => {}} />);
});
@@ -36,6 +34,18 @@ describe('<Credential />', () => {
expect(wrapper.find('RoutedTabs li').length).toBe(4);
});
test('initially renders user-based SCM credential successfully', async () => {
CredentialsAPI.readDetail.mockResolvedValueOnce({
data: mockSCMCredential,
});
await act(async () => {
wrapper = mountWithContexts(<Credential setBreadcrumb={() => {}} />);
});
wrapper.update();
expect(wrapper.find('Credential').length).toBe(1);
expect(wrapper.find('RoutedTabs li').length).toBe(3);
});
test('should render expected tabs', async () => {
const expectedTabs = [
'Back to Credentials',

View File

@@ -465,7 +465,7 @@
},
"created": "2020-05-18T21:53:35.370730Z",
"modified": "2020-05-18T21:54:05.436400Z",
"name": "CyberArk AIM Central Credential Provider Lookup",
"name": "CyberArk Central Credential Provider Lookup",
"description": "",
"kind": "external",
"namespace": "aim",

View File

@@ -81,35 +81,30 @@ function InstanceDetails({ setBreadcrumb, instanceGroup }) {
const {
data: { results },
} = await InstanceGroupsAPI.readInstances(instanceGroup.id);
let instanceDetails;
const isAssociated = results.some(
({ id: instId }) => instId === parseInt(instanceId, 10)
);
if (isAssociated) {
const [{ data: details }, { data: healthCheckData }] =
await Promise.all([
InstancesAPI.readDetail(instanceId),
InstancesAPI.readHealthCheckDetail(instanceId),
]);
instanceDetails = details;
setHealthCheck(healthCheckData);
} else {
throw new Error(
`This instance is not associated with this instance group`
const { data: details } = await InstancesAPI.readDetail(instanceId);
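// Health check details are only available for execution nodes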
if (details.node_type === 'execution') {
const { data: healthCheckData } =
await InstancesAPI.readHealthCheckDetail(instanceId);
setHealthCheck(healthCheckData);
}
setBreadcrumb(instanceGroup, details);
setForks(
computeForks(
details.mem_capacity,
details.cpu_capacity,
details.capacity_adjustment
)
);
return { instance: details };
}
setBreadcrumb(instanceGroup, instanceDetails);
setForks(
computeForks(
instanceDetails.mem_capacity,
instanceDetails.cpu_capacity,
instanceDetails.capacity_adjustment
)
throw new Error(
`This instance is not associated with this instance group`
);
return { instance: instanceDetails };
}, [instanceId, setBreadcrumb, instanceGroup]),
{ instance: {}, isLoading: true }
);

View File

@@ -181,6 +181,7 @@ function Inventory({ setBreadcrumb }) {
>
<RelatedTemplateList
searchParams={{ inventory__id: inventory.id }}
resourceName={inventory.name}
/>
</Route>,
<Route path="*" key="not-found">

View File

@@ -41,7 +41,7 @@ function JobEvent({
if (lineNumber < 0) {
return null;
}
const canToggle = index === toggleLineIndex;
const canToggle = index === toggleLineIndex && !event.isTracebackOnly;
return (
<JobEventLine
onClick={isClickable ? onJobEventClick : undefined}
@@ -55,7 +55,7 @@ function JobEvent({
onToggle={onToggleCollapsed}
/>
<JobEventLineNumber>
{lineNumber}
{!event.isTracebackOnly ? lineNumber : ''}
<JobEventEllipsis isCollapsed={isCollapsed && canToggle} />
</JobEventLineNumber>
<JobEventLineText

View File

@@ -187,7 +187,9 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
useEffect(() => {
const pendingRequests = Object.values(eventByUuidRequests.current || {});
setHasContentLoading(true); // prevents "no content found" screen from flashing
setIsFollowModeEnabled(false);
if (location.search) {
setIsFollowModeEnabled(false);
}
Promise.allSettled(pendingRequests).then(() => {
setRemoteRowCount(0);
clearLoadedEvents();
@@ -251,6 +253,9 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
});
const updated = oldWsEvents.concat(newEvents);
jobSocketCounter.current = updated.length;
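// If websocket events arrive with a gap above the rows loaded so far, backfill the missing events from the API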
if (!oldWsEvents.length && min > remoteRowCount + 1) {
loadJobEvents(min);
}
return updated.sort((a, b) => a.counter - b.counter);
});
setCssMap((prevCssMap) => ({
@@ -358,7 +363,7 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
}
};
const loadJobEvents = async () => {
const loadJobEvents = async (firstWsCounter = null) => {
const [params, loadRange] = getEventRequestParams(job, 50, [1, 50]);
if (isMounted.current) {
@@ -371,6 +376,9 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
if (isFlatMode) {
params.not__stdout = '';
}
if (firstWsCounter) {
params.counter__lt = firstWsCounter;
}
const qsParams = parseQueryString(QS_CONFIG, location.search);
const eventPromise = getJobModel(job.type).readEvents(job.id, {
...params,
@@ -435,7 +443,7 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
if (getEvent(counter)) {
return true;
}
if (index > remoteRowCount && index < remoteRowCount + wsEvents.length) {
if (index >= remoteRowCount && index < remoteRowCount + wsEvents.length) {
return true;
}
return currentlyLoading.includes(counter);
@@ -462,7 +470,7 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
}
if (
!event &&
index > remoteRowCount &&
index >= remoteRowCount &&
index < remoteRowCount + wsEvents.length
) {
event = wsEvents[index - remoteRowCount];
@@ -629,10 +637,14 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
setIsFollowModeEnabled(false);
};
const scrollToEnd = () => {
const scrollToEnd = useCallback(() => {
scrollToRow(-1);
setTimeout(() => scrollToRow(-1), 100);
};
let timeout;
if (isFollowModeEnabled) {
timeout = setTimeout(() => scrollToRow(-1), 100);
}
return () => clearTimeout(timeout);
}, [isFollowModeEnabled]);
const handleScrollLast = () => {
scrollToEnd();

View File

@@ -29,8 +29,11 @@ export function prependTraceback(job, events) {
start_line: 0,
};
const firstIndex = events.findIndex((jobEvent) => jobEvent.counter === 1);
if (firstIndex && events[firstIndex]?.stdout) {
const stdoutLines = events[firstIndex].stdout.split('\r\n');
if (firstIndex > -1) {
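// Flag events whose only content is the prepended traceback so the UI can hide their line number and collapse toggle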
if (!events[firstIndex].stdout) {
events[firstIndex].isTracebackOnly = true;
}
const stdoutLines = events[firstIndex].stdout?.split('\r\n') || [];
stdoutLines[0] = tracebackEvent.stdout;
events[firstIndex].stdout = stdoutLines.join('\r\n');
} else {

View File

@@ -179,7 +179,7 @@ function Project({ setBreadcrumb }) {
searchParams={{
project__id: project.id,
}}
projectName={project.name}
resourceName={project.name}
/>
</Route>
{project?.scm_type && project.scm_type !== '' && (

View File

@@ -141,14 +141,14 @@ function JobsEdit() {
<FormColumnLayout>
<InputField
name="AWX_ISOLATION_BASE_PATH"
config={jobs.AWX_ISOLATION_BASE_PATH}
isRequired
config={jobs.AWX_ISOLATION_BASE_PATH ?? null}
isRequired={Boolean(options?.AWX_ISOLATION_BASE_PATH)}
/>
<InputField
name="SCHEDULE_MAX_JOBS"
config={jobs.SCHEDULE_MAX_JOBS}
type="number"
isRequired
config={jobs.SCHEDULE_MAX_JOBS ?? null}
type={options?.SCHEDULE_MAX_JOBS ? 'number' : undefined}
isRequired={Boolean(options?.SCHEDULE_MAX_JOBS)}
/>
<InputField
name="DEFAULT_JOB_TIMEOUT"

View File

@@ -122,4 +122,22 @@ describe('<JobsEdit />', () => {
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
expect(wrapper.find('ContentError').length).toBe(1);
});
test('Form input fields that are invisible (due to being set manually via a settings file) should not prevent submitting the form', async () => {
const mockOptions = Object.assign({}, mockAllOptions);
// If AWX_ISOLATION_BASE_PATH has been set in a settings file it will be absent in the PUT options
delete mockOptions['actions']['PUT']['AWX_ISOLATION_BASE_PATH'];
await act(async () => {
wrapper = mountWithContexts(
<SettingsProvider value={mockOptions.actions}>
<JobsEdit />
</SettingsProvider>
);
});
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
await act(async () => {
wrapper.find('Form').invoke('onSubmit')();
});
expect(SettingsAPI.updateAll).toHaveBeenCalledTimes(1);
});
});

View File

@@ -397,7 +397,10 @@ const InputField = ({ name, config, type = 'text', isRequired = false }) => {
};
InputField.propTypes = {
name: string.isRequired,
config: shape({}).isRequired,
config: shape({}),
};
InputField.defaultProps = {
config: null,
};
const TextAreaField = ({ name, config, isRequired = false }) => {

View File

@@ -9,29 +9,31 @@ function JobTemplateAdd() {
const [formSubmitError, setFormSubmitError] = useState(null);
const history = useHistory();
const projectParams = {
project_id: null,
project_name: null,
const resourceParams = {
resource_id: null,
resource_name: null,
resource_type: null,
resource_kind: null,
};
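// Manually parse the query string, keeping only the recognized resource_* keys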
history.location.search
.replace(/^\?/, '')
.split('&')
.map((s) => s.split('='))
.forEach(([key, val]) => {
if (!(key in projectParams)) {
if (!(key in resourceParams)) {
return;
}
projectParams[key] = decodeURIComponent(val);
resourceParams[key] = decodeURIComponent(val);
});
let projectValues = null;
let resourceValues = null;
if (
Object.values(projectParams).filter((item) => item !== null).length === 2
) {
projectValues = {
id: projectParams.project_id,
name: projectParams.project_name,
if (history.location.search.includes('resource_id') && history.location.search.includes('resource_name')) {
resourceValues = {
id: resourceParams.resource_id,
name: resourceParams.resource_name,
type: resourceParams.resource_type,
kind: resourceParams.resource_kind, // refers to credential kind
};
}
@@ -122,7 +124,7 @@ function JobTemplateAdd() {
handleCancel={handleCancel}
handleSubmit={handleSubmit}
submitError={formSubmitError}
projectValues={projectValues}
resourceValues={resourceValues}
isOverrideDisabledLookup
/>
</CardBody>

View File

@@ -274,9 +274,14 @@ describe('<JobTemplateAdd />', () => {
test('should parse and pre-fill project field from query params', async () => {
const history = createMemoryHistory({
initialEntries: [
'/templates/job_template/add/add?project_id=6&project_name=Demo%20Project',
'/templates/job_template/add?resource_id=6&resource_name=Demo%20Project&resource_type=project',
],
});
ProjectsAPI.read.mockResolvedValueOnce({
count: 1,
results: [{ name: 'foo', id: 1, allow_override: true, organization: 1 }],
});
ProjectsAPI.readOptions.mockResolvedValueOnce({});
let wrapper;
await act(async () => {
wrapper = mountWithContexts(<JobTemplateAdd />, {
@@ -284,8 +289,9 @@ describe('<JobTemplateAdd />', () => {
});
});
await waitForElement(wrapper, 'EmptyStateBody', (el) => el.length === 0);
expect(wrapper.find('input#project').prop('value')).toEqual('Demo Project');
expect(ProjectsAPI.readPlaybooks).toBeCalledWith('6');
expect(ProjectsAPI.readPlaybooks).toBeCalledWith(6);
});
test('should not call ProjectsAPI.readPlaybooks if there is no project', async () => {

View File

@@ -24,6 +24,7 @@ function WorkflowJobTemplateAdd() {
limit,
job_tags,
skip_tags,
scm_branch,
...templatePayload
} = values;
templatePayload.inventory = inventory?.id;
@@ -32,6 +33,7 @@ function WorkflowJobTemplateAdd() {
templatePayload.limit = limit === '' ? null : limit;
templatePayload.job_tags = job_tags === '' ? null : job_tags;
templatePayload.skip_tags = skip_tags === '' ? null : skip_tags;
templatePayload.scm_branch = scm_branch === '' ? null : scm_branch;
const organizationId =
organization?.id || inventory?.summary_fields?.organization.id;
try {

View File

@@ -119,7 +119,7 @@ describe('<WorkflowJobTemplateAdd/>', () => {
job_tags: null,
limit: null,
organization: undefined,
scm_branch: '',
scm_branch: null,
skip_tags: null,
webhook_credential: undefined,
webhook_service: '',

View File

@@ -30,6 +30,7 @@ function WorkflowJobTemplateEdit({ template }) {
limit,
job_tags,
skip_tags,
scm_branch,
...templatePayload
} = values;
templatePayload.inventory = inventory?.id || null;
@@ -38,6 +39,7 @@ function WorkflowJobTemplateEdit({ template }) {
templatePayload.limit = limit === '' ? null : limit;
templatePayload.job_tags = job_tags === '' ? null : job_tags;
templatePayload.skip_tags = skip_tags === '' ? null : skip_tags;
templatePayload.scm_branch = scm_branch === '' ? null : scm_branch;
const formOrgId =
organization?.id || inventory?.summary_fields?.organization.id || null;

View File

@@ -690,7 +690,7 @@ JobTemplateForm.defaultProps = {
};
const FormikApp = withFormik({
mapPropsToValues({ projectValues = {}, template = {} }) {
mapPropsToValues({ resourceValues = null, template = {} }) {
const {
summary_fields = {
labels: { results: [] },
@@ -698,7 +698,7 @@ const FormikApp = withFormik({
},
} = template;
return {
const initialValues = {
allow_callbacks: template.allow_callbacks || false,
allow_simultaneous: template.allow_simultaneous || false,
ask_credential_on_launch: template.ask_credential_on_launch || false,
@@ -739,7 +739,7 @@ const FormikApp = withFormik({
playbook: template.playbook || '',
prevent_instance_group_fallback:
template.prevent_instance_group_fallback || false,
project: summary_fields?.project || projectValues || null,
project: summary_fields?.project || null,
scm_branch: template.scm_branch || '',
skip_tags: template.skip_tags || '',
timeout: template.timeout || 0,
@@ -756,6 +756,24 @@ const FormikApp = withFormik({
execution_environment:
template.summary_fields?.execution_environment || null,
};
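// Pre-fill the matching form field when the user arrived from a related project, inventory, or credential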
if (resourceValues !== null) {
if (resourceValues.type === 'credentials') {
initialValues[resourceValues.type] = [
{
id: parseInt(resourceValues.id, 10),
name: resourceValues.name,
kind: resourceValues.kind,
},
];
} else {
initialValues[resourceValues.type] = {
id: parseInt(resourceValues.id, 10),
name: resourceValues.name,
};
}
}
return initialValues;
},
handleSubmit: async (values, { props, setErrors }) => {
try {

View File

@@ -46,90 +46,216 @@ action_groups:
plugin_routing:
inventory:
tower:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* plugins have been deprecated, use awx.awx.controller instead.
redirect: awx.awx.controller
lookup:
tower_api:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* plugins have been deprecated, use awx.awx.controller_api instead.
redirect: awx.awx.controller_api
tower_schedule_rrule:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* plugins have been deprecated, use awx.awx.schedule_rrule instead.
redirect: awx.awx.schedule_rrule
modules:
tower_ad_hoc_command_cancel:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.ad_hoc_command_cancel instead.
redirect: awx.awx.ad_hoc_command_cancel
tower_ad_hoc_command_wait:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.ad_hoc_command_wait instead.
redirect: awx.awx.ad_hoc_command_wait
tower_ad_hoc_command:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.ad_hoc_command instead.
redirect: awx.awx.ad_hoc_command
tower_application:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.application instead.
redirect: awx.awx.application
tower_meta:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.controller_meta instead.
redirect: awx.awx.controller_meta
tower_credential_input_source:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.credential_input_source instead.
redirect: awx.awx.credential_input_source
tower_credential_type:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.credential_type instead.
redirect: awx.awx.credential_type
tower_credential:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.credential instead.
redirect: awx.awx.credential
tower_execution_environment:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.execution_environment instead.
redirect: awx.awx.execution_environment
tower_export:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.export instead.
redirect: awx.awx.export
tower_group:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.group instead.
redirect: awx.awx.group
tower_host:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.host instead.
redirect: awx.awx.host
tower_import:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.import instead.
redirect: awx.awx.import
tower_instance_group:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.instance_group instead.
redirect: awx.awx.instance_group
tower_inventory_source_update:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.inventory_source_update instead.
redirect: awx.awx.inventory_source_update
tower_inventory_source:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.inventory_source instead.
redirect: awx.awx.inventory_source
tower_inventory:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.inventory instead.
redirect: awx.awx.inventory
tower_job_cancel:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.job_cancel instead.
redirect: awx.awx.job_cancel
tower_job_launch:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.job_launch instead.
redirect: awx.awx.job_launch
tower_job_list:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.job_list instead.
redirect: awx.awx.job_list
tower_job_template:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.job_template instead.
redirect: awx.awx.job_template
tower_job_wait:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.job_wait instead.
redirect: awx.awx.job_wait
tower_label:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.label instead.
redirect: awx.awx.label
tower_license:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.license instead.
redirect: awx.awx.license
tower_notification_template:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.notification_template instead.
redirect: awx.awx.notification_template
tower_notification:
redirect: awx.awx.notification_template
tower_organization:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.organization instead.
redirect: awx.awx.organization
tower_project_update:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.project_update instead.
redirect: awx.awx.project_update
tower_project:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.project instead.
redirect: awx.awx.project
tower_role:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.role instead.
redirect: awx.awx.role
tower_schedule:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.schedule instead.
redirect: awx.awx.schedule
tower_settings:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.settings instead.
redirect: awx.awx.settings
tower_team:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.team instead.
redirect: awx.awx.team
tower_token:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.token instead.
redirect: awx.awx.token
tower_user:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.user instead.
redirect: awx.awx.user
tower_workflow_approval:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_approval instead.
redirect: awx.awx.workflow_approval
tower_workflow_job_template_node:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_job_template_node instead.
redirect: awx.awx.workflow_job_template_node
tower_workflow_job_template:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_job_template instead.
redirect: awx.awx.workflow_job_template
tower_workflow_launch:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_launch instead.
redirect: awx.awx.workflow_launch
tower_workflow_node_wait:
deprecation:
removal_date: '2022-01-23'
warning_text: The tower_* modules have been deprecated, use awx.awx.workflow_node_wait instead.
redirect: awx.awx.workflow_node_wait

View File

@@ -7,7 +7,6 @@ __metaclass__ = type
DOCUMENTATION = '''
name: controller
plugin_type: inventory
author:
- Matthew Jones (@matburt)
- Yunfan Zhang (@YunfanZhang42)

View File

@@ -5,7 +5,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
lookup: controller_api
name: controller_api
author: John Westcott IV (@john-westcott-iv)
short_description: Search the API for objects
requirements:
@@ -74,7 +74,7 @@ EXAMPLES = """
- name: Load the UI settings specifying the connection info
set_fact:
controller_settings: "{{ lookup('awx.awx.controller_api', 'settings/ui' host='controller.example.com',
controller_settings: "{{ lookup('awx.awx.controller_api', 'settings/ui', host='controller.example.com',
username='admin', password=my_pass_var, verify_ssl=False) }}"
- name: Report the usernames of all users with admin privs

View File

@@ -5,7 +5,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
lookup: schedule_rrule
name: schedule_rrule
author: John Westcott IV (@john-westcott-iv)
short_description: Generate an rrule string which can be used for Schedules
requirements:
@@ -101,39 +101,39 @@ else:
class LookupModule(LookupBase):
frequencies = {
'none': rrule.DAILY,
'minute': rrule.MINUTELY,
'hour': rrule.HOURLY,
'day': rrule.DAILY,
'week': rrule.WEEKLY,
'month': rrule.MONTHLY,
}
weekdays = {
'monday': rrule.MO,
'tuesday': rrule.TU,
'wednesday': rrule.WE,
'thursday': rrule.TH,
'friday': rrule.FR,
'saturday': rrule.SA,
'sunday': rrule.SU,
}
set_positions = {
'first': 1,
'second': 2,
'third': 3,
'fourth': 4,
'last': -1,
}
# plugin constructor
def __init__(self, *args, **kwargs):
if LIBRARY_IMPORT_ERROR:
raise_from(AnsibleError('{0}'.format(LIBRARY_IMPORT_ERROR)), LIBRARY_IMPORT_ERROR)
super().__init__(*args, **kwargs)
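# Building these tables per-instance avoids touching rrule at class-definition time, so the plugin stays importable when dateutil is missing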
self.frequencies = {
'none': rrule.DAILY,
'minute': rrule.MINUTELY,
'hour': rrule.HOURLY,
'day': rrule.DAILY,
'week': rrule.WEEKLY,
'month': rrule.MONTHLY,
}
self.weekdays = {
'monday': rrule.MO,
'tuesday': rrule.TU,
'wednesday': rrule.WE,
'thursday': rrule.TH,
'friday': rrule.FR,
'saturday': rrule.SA,
'sunday': rrule.SU,
}
self.set_positions = {
'first': 1,
'second': 2,
'third': 3,
'fourth': 4,
'last': -1,
}
@staticmethod
def parse_date_time(date_string):
try:
@@ -149,14 +149,13 @@ class LookupModule(LookupBase):
return self.get_rrule(frequency, kwargs)
@staticmethod
def get_rrule(frequency, kwargs):
def get_rrule(self, frequency, kwargs):
if frequency not in LookupModule.frequencies:
if frequency not in self.frequencies:
raise AnsibleError('Frequency of {0} is invalid'.format(frequency))
rrule_kwargs = {
'freq': LookupModule.frequencies[frequency],
'freq': self.frequencies[frequency],
'interval': kwargs.get('every', 1),
}
@@ -187,9 +186,9 @@ class LookupModule(LookupBase):
days = []
for day in kwargs['on_days'].split(','):
day = day.strip()
if day not in LookupModule.weekdays:
raise AnsibleError('Parameter on_days must only contain values {0}'.format(', '.join(LookupModule.weekdays.keys())))
days.append(LookupModule.weekdays[day])
if day not in self.weekdays:
raise AnsibleError('Parameter on_days must only contain values {0}'.format(', '.join(self.weekdays.keys())))
days.append(self.weekdays[day])
rrule_kwargs['byweekday'] = days
@@ -214,13 +213,13 @@ class LookupModule(LookupBase):
except Exception as e:
raise_from(AnsibleError('on_the parameter must be two words separated by a space'), e)
if weekday not in LookupModule.weekdays:
if weekday not in self.weekdays:
raise AnsibleError('Weekday portion of on_the parameter is not valid')
if occurance not in LookupModule.set_positions:
if occurance not in self.set_positions:
raise AnsibleError('The first string of the on_the parameter is not valid')
rrule_kwargs['byweekday'] = LookupModule.weekdays[weekday]
rrule_kwargs['bysetpos'] = LookupModule.set_positions[occurance]
rrule_kwargs['byweekday'] = self.weekdays[weekday]
rrule_kwargs['bysetpos'] = self.set_positions[occurance]
my_rule = rrule.rrule(**rrule_kwargs)

View File

@@ -5,7 +5,7 @@ from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
lookup: schedule_rruleset
name: schedule_rruleset
author: John Westcott IV (@john-westcott-iv)
short_description: Generate an rruleset string
requirements:
@@ -31,7 +31,8 @@ DOCUMENTATION = """
rules:
description:
- Array of rules in the rruleset
type: array
type: list
elements: dict
required: True
suboptions:
frequency:
@@ -136,40 +137,44 @@ try:
import pytz
from dateutil import rrule
except ImportError as imp_exc:
raise_from(AnsibleError('{0}'.format(imp_exc)), imp_exc)
LIBRARY_IMPORT_ERROR = imp_exc
else:
LIBRARY_IMPORT_ERROR = None
class LookupModule(LookupBase):
frequencies = {
'none': rrule.DAILY,
'minute': rrule.MINUTELY,
'hour': rrule.HOURLY,
'day': rrule.DAILY,
'week': rrule.WEEKLY,
'month': rrule.MONTHLY,
}
weekdays = {
'monday': rrule.MO,
'tuesday': rrule.TU,
'wednesday': rrule.WE,
'thursday': rrule.TH,
'friday': rrule.FR,
'saturday': rrule.SA,
'sunday': rrule.SU,
}
set_positions = {
'first': 1,
'second': 2,
'third': 3,
'fourth': 4,
'last': -1,
}
# plugin constructor
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if LIBRARY_IMPORT_ERROR:
raise_from(AnsibleError('{0}'.format(LIBRARY_IMPORT_ERROR)), LIBRARY_IMPORT_ERROR)
super().__init__(*args, **kwargs)
self.frequencies = {
'none': rrule.DAILY,
'minute': rrule.MINUTELY,
'hour': rrule.HOURLY,
'day': rrule.DAILY,
'week': rrule.WEEKLY,
'month': rrule.MONTHLY,
}
self.weekdays = {
'monday': rrule.MO,
'tuesday': rrule.TU,
'wednesday': rrule.WE,
'thursday': rrule.TH,
'friday': rrule.FR,
'saturday': rrule.SA,
'sunday': rrule.SU,
}
self.set_positions = {
'first': 1,
'second': 2,
'third': 3,
'fourth': 4,
'last': -1,
}
@staticmethod
def parse_date_time(date_string):
@@ -188,14 +193,14 @@ class LookupModule(LookupBase):
# something: [1,2,3] - A list of ints
return_values = []
# If they give us a single int, let's make it a list of ints
if type(rule[field_name]) == int:
if isinstance(rule[field_name], int):
rule[field_name] = [rule[field_name]]
# If it's not a list, we need to split it into a list
if type(rule[field_name]) != list:
if not isinstance(rule[field_name], list):
rule[field_name] = rule[field_name].split(',')
for value in rule[field_name]:
# If they have a list of strs we want to strip the str in case it's space delimited
if type(value) == str:
if isinstance(value, str):
value = value.strip()
# If value happens to be an int (from a list of ints) we need to coerce it into a str for the re.match
if not re.match(r"^\d+$", str(value)) or int(value) < min_value or int(value) > max_value:
@@ -205,7 +210,7 @@ class LookupModule(LookupBase):
def process_list(self, field_name, rule, valid_list, rule_number):
return_values = []
if type(rule[field_name]) != list:
if not isinstance(rule[field_name], list):
rule[field_name] = rule[field_name].split(',')
for value in rule[field_name]:
value = value.strip()
@@ -260,11 +265,11 @@ class LookupModule(LookupBase):
frequency = rule.get('frequency', None)
if not frequency:
raise AnsibleError("Rule {0} is missing a frequency".format(rule_number))
if frequency not in LookupModule.frequencies:
if frequency not in self.frequencies:
raise AnsibleError('Frequency of rule {0} is invalid {1}'.format(rule_number, frequency))
rrule_kwargs = {
'freq': LookupModule.frequencies[frequency],
'freq': self.frequencies[frequency],
'interval': rule.get('interval', 1),
'dtstart': start_date,
}
@@ -287,7 +292,7 @@ class LookupModule(LookupBase):
)
if 'bysetpos' in rule:
rrule_kwargs['bysetpos'] = self.process_list('bysetpos', rule, LookupModule.set_positions, rule_number)
rrule_kwargs['bysetpos'] = self.process_list('bysetpos', rule, self.set_positions, rule_number)
if 'bymonth' in rule:
rrule_kwargs['bymonth'] = self.process_integer('bymonth', rule, 1, 12, rule_number)
@@ -302,7 +307,7 @@ class LookupModule(LookupBase):
rrule_kwargs['byweekno'] = self.process_integer('byweekno', rule, 1, 52, rule_number)
if 'byweekday' in rule:
rrule_kwargs['byweekday'] = self.process_list('byweekday', rule, LookupModule.weekdays, rule_number)
rrule_kwargs['byweekday'] = self.process_list('byweekday', rule, self.weekdays, rule_number)
if 'byhour' in rule:
rrule_kwargs['byhour'] = self.process_integer('byhour', rule, 0, 23, rule_number)

View File

@@ -4,6 +4,7 @@ __metaclass__ = type
from ansible.module_utils.basic import AnsibleModule, env_fallback
from ansible.module_utils.urls import Request, SSLValidationError, ConnectionError
from ansible.module_utils.parsing.convert_bool import boolean as strtobool
from ansible.module_utils.six import PY2
from ansible.module_utils.six import raise_from, string_types
from ansible.module_utils.six.moves import StringIO
@@ -11,14 +12,21 @@ from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible.module_utils.six.moves.http_cookiejar import CookieJar
from ansible.module_utils.six.moves.urllib.parse import urlparse, urlencode
from ansible.module_utils.six.moves.configparser import ConfigParser, NoOptionError
from distutils.version import LooseVersion as Version
from socket import getaddrinfo, IPPROTO_TCP
import time
import re
from json import loads, dumps
from os.path import isfile, expanduser, split, join, exists, isdir
from os import access, R_OK, getcwd
from distutils.util import strtobool
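# Prefer ansible-core's vendored LooseVersion; fall back to distutils for older ansible-core releases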
try:
from ansible.module_utils.compat.version import LooseVersion as Version
except ImportError:
try:
from distutils.version import LooseVersion as Version
except ImportError:
raise AssertionError('To use this plugin or module with ansible-core 2.11, you need to use Python < 3.12 with distutils.version present')
try:
import yaml

View File

@@ -55,7 +55,6 @@ options:
description:
- The arguments to pass to the module.
type: str
default: ""
forks:
description:
- The number of forks to use for this ad hoc execution.

View File

@@ -42,6 +42,7 @@ options:
         - Maximum time in seconds to wait for a job to finish.
         - Not specifying means the task will wait until the controller cancels the command.
       type: int
+      default: 0
 extends_documentation_fragment: awx.awx.auth
 '''

View File

@@ -52,7 +52,7 @@ options:
         - The credential type being created.
         - Can be a built-in credential type such as "Machine", or a custom credential type such as "My Credential Type"
         - Choices include Amazon Web Services, Ansible Galaxy/Automation Hub API Token, Centrify Vault Credential Provider Lookup,
-          Container Registry, CyberArk AIM Central Credential Provider Lookup, CyberArk Conjur Secrets Manager Lookup, Google Compute Engine,
+          Container Registry, CyberArk Central Credential Provider Lookup, CyberArk Conjur Secret Lookup, Google Compute Engine,
           GitHub Personal Access Token, GitLab Personal Access Token, GPG Public Key, HashiCorp Vault Secret Lookup, HashiCorp Vault Signed SSH,
           Insights, Machine, Microsoft Azure Key Vault, Microsoft Azure Resource Manager, Network, OpenShift or Kubernetes API
           Bearer Token, OpenStack, Red Hat Ansible Automation Platform, Red Hat Satellite 6, Red Hat Virtualization, Source Control,

View File

@@ -80,9 +80,9 @@ def main():
         name=dict(required=True),
         new_name=dict(),
         image=dict(required=True),
-        description=dict(default=''),
+        description=dict(),
         organization=dict(),
-        credential=dict(default=''),
+        credential=dict(),
         state=dict(choices=['present', 'absent'], default='present'),
         pull=dict(choices=['always', 'missing', 'never'], default='missing'),
     )

View File
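Dropping default='' from description and credential in the argument_spec above is behavioral, not cosmetic: with no default, an option the playbook omits arrives as None, and a module can then distinguish "leave this field alone" from "explicitly set it to empty". A hedged sketch with plain dicts (build_payload is an invented stand-in, not the module's real helper):

def build_payload(params):
    # Only send fields the user actually supplied.
    return {k: v for k, v in params.items() if v is not None}

# Old spec: default='' injected an empty string for untouched options.
old_style = {'name': 'my-ee', 'image': 'quay.io/org/ee:latest', 'credential': ''}
# New spec: an omitted option stays None.
new_style = {'name': 'my-ee', 'image': 'quay.io/org/ee:latest', 'credential': None}

print(build_payload(old_style))  # credential='' is sent and may blank the field
print(build_payload(new_style))  # credential is omitted, field left as-is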

@@ -86,6 +86,16 @@ options:
         - workflow names to export
       type: list
       elements: str
+    applications:
+      description:
+        - OAuth2 application names to export
+      type: list
+      elements: str
+    schedules:
+      description:
+        - schedule names to export
+      type: list
+      elements: str
 requirements:
   - "awxkit >= 9.3.0"
 notes:

View File

@@ -128,7 +128,7 @@ def main():
     description = module.params.get('description')
     state = module.params.pop('state')
     preserve_existing_hosts = module.params.get('preserve_existing_hosts')
-    preserve_existing_children = module.params.get('preserve_existing_groups')
+    preserve_existing_children = module.params.get('preserve_existing_children')
     variables = module.params.get('variables')
     # Attempt to look up the related items the user specified (these will fail the module if not found)

View File
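The fix above works because module.params behaves like a plain dict: calling .get() with a key that was never declared in the argument_spec quietly returns None instead of raising, so the preserve_existing_children option was read under the wrong name and silently ignored. A minimal reproduction:

# module.params stand-in; only the real option name is present.
params = {'name': 'web', 'preserve_existing_children': True}

wrong = params.get('preserve_existing_groups')    # None, no error raised
right = params.get('preserve_existing_children')  # True

print(wrong, right)  # None True
# A strict lookup, params['preserve_existing_groups'], would have
# raised KeyError and surfaced the typo immediately.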

@@ -266,6 +266,7 @@ options:
       description:
         - Maximum time in seconds to wait for a job to finish (server-side).
       type: int
+      default: 0
     job_slice_count:
       description:
         - The number of jobs to slice into at runtime. Will cause the Job Template to launch a workflow if value is greater than 1.
@@ -287,7 +288,6 @@ options:
       description:
         - Branch to use in job run. Project default used if blank. Only allowed if project allow_override field is set to true.
       type: str
-      default: ''
     labels:
       description:
         - The labels applied to this job template

View File

@@ -60,12 +60,10 @@ options:
       description:
         - The branch to use for the SCM resource.
       type: str
-      default: ''
     scm_refspec:
       description:
         - The refspec to use for the SCM resource.
       type: str
-      default: ''
     credential:
       description:
         - Name of the credential to use with this SCM resource.

View File

@@ -51,7 +51,6 @@ options:
         - Specify C(extra_vars) for the template.
       required: False
       type: dict
-      default: {}
     forks:
       description:
         - Forks applied as a prompt, assuming job template prompts for forks

View File

@@ -39,6 +39,7 @@ options:
         - Note This is a client side search, not an API side search
       required: False
       type: dict
+      default: {}
 extends_documentation_fragment: awx.awx.auth
 '''

View File

@@ -35,7 +35,6 @@ options:
         - Optional description of this access token.
       required: False
       type: str
-      default: ''
     application:
       description:
         - The application tied to this token.

View File

@@ -1 +0,0 @@
-ad_hoc_command.py

View File

@@ -1 +0,0 @@
-ad_hoc_command_cancel.py

View File

@@ -1 +0,0 @@
-ad_hoc_command_wait.py

View File

@@ -1 +0,0 @@
-application.py

View File

@@ -1 +0,0 @@
-controller_meta.py

View File

@@ -1 +0,0 @@
-credential.py

View File

@@ -1 +0,0 @@
-credential_input_source.py

View File

@@ -1 +0,0 @@
-credential_type.py

View File

@@ -1 +0,0 @@
-execution_environment.py

View File

@@ -1 +0,0 @@
-export.py

View File

@@ -1 +0,0 @@
-group.py

View File

@@ -1 +0,0 @@
-host.py

View File

@@ -1 +0,0 @@
-import.py

View File

@@ -1 +0,0 @@
-instance_group.py

View File

@@ -1 +0,0 @@
-inventory.py

View File

@@ -1 +0,0 @@
-inventory_source.py

View File

@@ -1 +0,0 @@
-inventory_source_update.py

View File

@@ -1 +0,0 @@
-job_cancel.py

View File

@@ -1 +0,0 @@
-job_launch.py

View File

@@ -1 +0,0 @@
-job_list.py

View File

@@ -1 +0,0 @@
-job_template.py

View File

@@ -1 +0,0 @@
-job_wait.py
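Each single-line deletion above removes a file whose entire content is another module's filename. That is how a deleted symlink renders in a diff: git stores a symbolic link as a blob containing only its target path, so dropping the alias modules shows up as one-line file removals. A small sketch of the mechanics (the temporary paths and tower_-prefixed alias name are illustrative only):

import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    real = os.path.join(d, 'ad_hoc_command.py')
    alias = os.path.join(d, 'tower_ad_hoc_command.py')  # hypothetical alias
    open(real, 'w').close()
    os.symlink('ad_hoc_command.py', alias)  # relative target, as in the diff
    # The content git records for the alias is just the target path:
    print(os.readlink(alias))  # -> ad_hoc_command.py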

Some files were not shown because too many files have changed in this diff.