Compare commits

..

28 Commits

Author SHA1 Message Date
Peter Braun
543d3f940b update licenses and embedded sources 2025-04-15 11:14:10 +02:00
Peter Braun
ee7edb9179 update sqlparse dependency 2025-04-14 23:21:16 +02:00
Alan Rominger
49240ca8e8 Fix environment-specific rough edges of logging setup (#15193) 2025-04-14 12:12:06 -04:00
Alan Rominger
5ff3d4b2fc Reduce log noise from next run being in past (#15670) 2025-04-14 09:04:16 -04:00
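The idea behind this fix can be sketched as follows (a hedged sketch with hypothetical names, not AWX's actual API): a next-run timestamp that is only slightly in the past — for example due to startup lag — is expected and should not be logged loudly, while a genuinely overdue one still deserves a warning.

```python
import datetime

# Grace period is an assumption for this sketch; the real value lives in
# the scheduler's configuration.
GRACE = datetime.timedelta(seconds=60)

def classify_next_run(next_run, now):
    """Return the log level to use for a schedule's next-run time."""
    if next_run >= now:
        return 'debug'    # normal: the run is in the future
    if now - next_run <= GRACE:
        return 'debug'    # slightly late: expected around startup
    return 'warning'      # genuinely overdue: worth flagging
```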
Fabio Alessandro Locati
3f96ea17d6 Links in README.md should use HTTPS (#15940) 2025-04-12 18:38:17 +00:00
TVo
f59ad4f39c Remove cgi deprecation exception from pytest.ini (#15939)
Remove deprecation exception from pytest.ini
2025-04-11 14:07:23 -07:00
Alan Rominger
c3ee0c2d8a Sensible log behavior when redis is unavailable (#15466)
* Sensible log behavior when redis is unavailable

* Consistent behavior with dispatcher and callback
2025-04-10 13:45:05 -07:00
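The shape of this fix (visible in the `send_metrics` diff further down) can be sketched like so — both creating and acquiring the lock happen inside a try/except, so an unreachable redis produces a warning rather than an unhandled exception. The connection object and exception class below are stand-ins for the real redis ones:

```python
import logging

logger = logging.getLogger(__name__)

class RedisConnectionError(Exception):
    """Stand-in for redis.exceptions.ConnectionError in this sketch."""

def send_metrics(conn, lock_name):
    # `conn` is any object with a .lock(name) method returning a lock
    # with .acquire(blocking=...)/.release(), mirroring redis-py.
    try:
        lock = conn.lock(lock_name)
        if not lock.acquire(blocking=False):
            return False  # another thread holds the lock
    except RedisConnectionError as exc:
        logger.warning(f'Connection error in send_metrics: {exc}')
        return False
    try:
        # ... send metrics here ...
        return True
    finally:
        lock.release()
```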
Alan Rominger
7a3010f0e6 Bring WFJT job access to parity with UnifiedJobAccess (#15344)
* Bring WFJT job access to parity with UnifiedJobAccess

* Run linters
2025-04-10 15:29:47 -04:00
Lila Yasin
05dc9bad1c AAP-39365 facts are unintentionally deleted when the inventory is modified during a job execution (#15910)
* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Added test_jobs.py to the model unit test folder in order to show the undesired behaviour with fact cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Solved undesired behaviour with fact_cache

Signed-off-by: onetti7 <davonebat@gmail.com>

* Solved bug with slices

Signed-off-by: onetti7 <davonebat@gmail.com>

* Remove unused imports

Remove now unused line of code which was commented out by the contributor

Revert "Remove now unused line of code which was commented out by the contributor"

This reverts commit f1a056a2356d56bc7256957a18503bd14dcfd8aa.

* Add back line that had been commented out as this line makes hosts specific to the particular slice when applicable

Revise private_data_dir fixture to see if it improves code coverage

Checked out awx/main/tests/unit/models/test_jobs.py in devel to see if it resolves git diff issue

* Fix formatting in awx/main/tests/unit/models/test_jobs.py

Rename for loop from host in hosts to host in hosts_cached and remove unneeded continue

Revise finish_fact_cache to utilize inventory rather than hosts

Remove local var hosts that was assigned but unused

Revert change in start_fact_cache hosts_cached back to hosts

Revise the way we are handling hosts_cached and joining the file

Revert "Revise the way we are handling hosts_cached and joining the file"

This reverts commit e6e3d2f09c1b79a9bce3647a72e3dad97fe0aed8.

Reapply "Revise the way we are handling hosts_cached and joining the file"

This reverts commit a42b7ae69133fee24d3a5f1b456d9c343d111df9.

Revert some of my changes to get back to a better working state

Rename for loop to host in hosts_cached and remove unneeded continue

Remove job.get_hosts_for_fact_cache() from post run hook, fix if statement after continue block, and revise how we are calling hosts in finish for loop

Add test_invalid_host_facts to test_jobs to increase code coverage

Update method signature to use hosts_cached and update other references to hosts in finish_fact_cache

Rename hosts iterator to hosts_cached to agree with naming elsewhere

Revise test_invalid_host_facts to get more code coverage

Revise test_invalid_host_facts to increase codecov

Revise test_pre_post_run_hook_facts_deleted_sliced to ensure we are hitting the assertionerror for code cov

Revise mock_inventory.hosts to hit assert failure

Add revision of hosts and facts to force failure to satisfy code cov

Fix failure in test_pre_post_run_hook_facts_deleted_sliced

Add back for loop to create failures and add assert to hit them

Remove hosts.iterator() from both start_fact_cache and finish_fact_cache

Remove unused import of Queryset to satisfy api-lint

Fix typo in docstring hasnot to has not

Move hosts_cached.append(host) to outer loop in start_fact_cache

Add class to help support cached hosts resolving host.name issue with hosts_cached

* Add live tests for ansible facts

Remove fixture needed for local work only maybe

Revert "Add class to help support cached hosts resolving host.name issue with hosts_cached"

This reverts commit 99d998cfb9960baafe887de80bd9b01af50513ec.

* Move hosts_cached.append(host) outside of try except

* Move hosts_cached.append(host) to the beginning of start_fact_cache

---------

Signed-off-by: onetti7 <davonebat@gmail.com>
Co-authored-by: onetti7 <davonebat@gmail.com>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2025-04-10 11:46:41 -04:00
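The core idea of this fact-cache fix can be sketched as follows (names and signatures here are illustrative, not AWX's actual methods): remember exactly which hosts had fact files written at job start, and only write facts back for those hosts at job end, so hosts added or removed from the inventory mid-job are left untouched.

```python
import json
import os
import tempfile

def start_fact_cache(hosts, facts_dir):
    """Write each host's facts to disk and record which hosts were cached."""
    hosts_cached = []
    for host in hosts:
        hosts_cached.append(host['name'])  # record the host up front
        with open(os.path.join(facts_dir, host['name']), 'w') as f:
            json.dump(host['facts'], f)
    return hosts_cached

def finish_fact_cache(hosts_by_name, hosts_cached, facts_dir):
    """Read facts back only for hosts cached at start, not the live inventory."""
    for name in hosts_cached:
        if name not in hosts_by_name:
            continue  # host deleted mid-job: skip, do not touch other hosts
        path = os.path.join(facts_dir, name)
        if os.path.exists(path):
            with open(path) as f:
                hosts_by_name[name]['facts'] = json.load(f)
```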
Alan Rominger
38f0f8d45f Remove pbr dependency (#15806)
* Remove pbr dependency

* Review comment, remove comment
2025-04-09 17:20:12 -04:00
Alan Rominger
d3ee9a1bfd AAP-27502 Try removing coreapi for deprecation warning (#15804)
Try removing coreapi for deprecation warning
2025-04-09 10:50:07 -07:00
TVo
438aa463d5 Remove L10N deprecation exception (#15925)
* Remove L10N deprecation exception

* Remove L10N from default settings file.
2025-04-09 06:22:01 -07:00
jessicamack
51f9160654 Fix CVE 2025-26699 (#15924)
fix CVE 2025-26699
2025-04-08 12:07:22 -04:00
Dave
ac3123a2ac Error reporting and handling in GH14575/GH12682 (#14802)
Bug Error reporting and handling in GH14575/GH12682

This targets a bug that tries to parse blank string as None for panelid
and dashboardid.

It also prints more errors reporting to the console to diagnose
reporting issues

Co-authored-by: Lila Yasin <lyasin@redhat.com>
2025-04-08 15:27:19 +00:00
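The parsing bug described above can be sketched like this (a hedged sketch; the function name is hypothetical): an id arriving as an empty or whitespace-only string should become None instead of crashing integer parsing.

```python
def parse_dashboard_id(raw):
    """Treat blank panel/dashboard ids as None rather than failing int()."""
    if raw is None or not str(raw).strip():
        return None
    return int(raw)
```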
Hao Liu
c4ee5127c5 Prevent automountServiceAccountToken in container group pod spec (#15586)
* Prevent job pod from mounting serviceaccount token

* Add serializer validation for cg pod_spec_override

Prevent automountServiceAccountToken from being set to true and provide an error message when it is
2025-04-03 12:58:16 -04:00
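The serializer check added by this commit can be sketched as below. This sketch accepts JSON only to stay dependency-free; the real validator (shown in the serializers diff later on) first tries `yaml.safe_load` and falls back to JSON:

```python
import json

def validate_pod_spec_override(value):
    """Reject any pod spec override that enables automountServiceAccountToken."""
    if not value:
        return value
    try:
        data = json.loads(value)
    except json.JSONDecodeError:
        raise ValueError('pod_spec_override must be valid yaml or json')
    if data.get('spec', {}).get('automountServiceAccountToken', False):
        raise ValueError('automountServiceAccountToken is not allowed for security reasons')
    return value
```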
Hao Liu
9ec7540c4b Common setup-python in github action (#15901) 2025-04-03 11:14:52 -04:00
Hao Liu
2389fc691e Common action for setup ssh agent in GHA (#15902) 2025-04-03 11:14:33 -04:00
Konstantin
567f5a2476 Credentials: accept empty description (#15857)
Accept empty description
2025-04-02 17:05:38 +00:00
Jan-Piet Mens
e837535396 Indicate tower_cli.cfg can also be in current directory (#15092)
Update readme to provide more details on how tower_cli.cfg is used.

Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2025-04-02 11:32:11 -04:00
Konstantin Kuminsky
1d57f1c355 Ability to remove credentials owned by a user 2025-04-02 16:48:27 +02:00
Konstantin
7676f14114 Accept empty description 2025-04-02 16:01:38 +02:00
Dirk Jülich
182e5cfaa4 AAP-37381 Apply password validators from settings.AUTH_PASSWORD_VALIDATORS correctly. (#15897)
* Move call to django_validate_password to the correct method where the user object is available.

* Added tests for the Django password validation functionality.
2025-04-01 12:03:11 +02:00
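The pattern behind this fix can be sketched in plain Python (the validator and exception class below are hypothetical stand-ins for Django's machinery): run validators that need the user object — such as a username-similarity check — at a point where that object exists, and raise failures keyed by field name so the message renders next to the password input.

```python
class FieldValidationError(Exception):
    """Carries a dict mapping field names to lists of error messages."""
    def __init__(self, errors):
        super().__init__(errors)
        self.errors = errors

def validate_password_with_user(password, user):
    # Illustrative check standing in for Django's
    # UserAttributeSimilarityValidator, which needs the user object.
    if user['username'].lower() in password.lower():
        raise FieldValidationError(
            {'password': ['The password is too similar to the username.']}
        )
```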
Fabio Alessandro Locati
99be91e939 Add notice of paused releases (#15900)
* Add notice of suspended releases

* Improve following suggestions
2025-03-27 19:14:21 +00:00
Chris Meyers
9ff163b919 Remove AsgiHandler deprecation exception
* Time has passed. Channels (4.2.0) no longer raises a deprecation
  warning for this case. It used to (4.1.0).
* We do NOT serve http requests over daphne, this is the default
  behavior of ProtocolTypeRouter() when the http param is NOT included
2025-03-27 11:40:40 -04:00
Chris Meyers
5d0d0404c7 Remove ProtocolTypeRouter deprecation exception
* Time has passed. Channels (4.2.0) no longer raises a deprecation
  warning for this case. It used to (4.1.0).
* All is good. No code changes needed for this. We do NOT service http
  requests over daphne, just websockets. We, correctly, do NOT supply
  the http key so daphne does NOT service http requests.
2025-03-27 11:17:56 -04:00
Chris Meyers
5d53821ce5 AAP-41580 indirect host count wildcard query (#15893)
* Support <collection_namespace>.<collection_name>.* indirect host query
  to match ANY module in the <collection_namespace>.<collection_name>
* Add tests for new wildcard indirect host count
* error checking of ansible event name
* error checking of ansible event query
2025-03-24 08:15:44 -04:00
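The wildcard semantics described above can be sketched with the stdlib (`fnmatch` stands in here for whatever matching the real implementation uses): a query like `ns.coll.*` matches any module in that collection, while a fully qualified name matches exactly.

```python
import fnmatch

def module_matches(event_module, query):
    """Match an Ansible event's module name against an indirect host query."""
    return fnmatch.fnmatch(event_module, query)
```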
Seth Foster
39cd09ce19 Remove django settings module env var (#15896)
Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2025-03-18 20:20:26 -04:00
Jake Jackson
cd0e27446a Update bug scrub docs (#15894)
* Update docs with a few more things

* update about use of PAT
* update around managing output from the script

* Fix spacing and empty line

* finish run on sentence
* update requirements with extra dep needed
2025-03-18 11:06:37 -04:00
60 changed files with 1183 additions and 476 deletions

View File

@@ -11,9 +11,7 @@ inputs:
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- uses: ./.github/actions/setup-python
- name: Set lower case owner name
shell: bash
@@ -26,26 +24,9 @@ runs:
run: |
echo "${{ inputs.github-token }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ inputs.private-github-key }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ inputs.private-github-key }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}
ssh-private-key: ${{ inputs.private-github-key }}
- name: Pre-pull latest devel image to warm cache
shell: bash

.github/actions/setup-python/action.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: 'Setup Python from Makefile'
description: 'Extract and set up Python version from Makefile'
inputs:
python-version:
description: 'Override Python version (optional)'
required: false
default: ''
working-directory:
description: 'Directory containing the Makefile'
required: false
default: '.'
runs:
using: composite
steps:
- name: Get python version from Makefile
shell: bash
run: |
if [ -n "${{ inputs.python-version }}" ]; then
echo "py_version=${{ inputs.python-version }}" >> $GITHUB_ENV
else
cd ${{ inputs.working-directory }}
echo "py_version=`make PYTHON_VERSION`" >> $GITHUB_ENV
fi
- name: Install python
uses: actions/setup-python@v5
with:
python-version: ${{ env.py_version }}
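The version-resolution logic of this composite action can be sketched in Python (a hedged sketch: the real action shells out to `make PYTHON_VERSION`, while this sketch just parses the Makefile assignment directly):

```python
import re

def python_version_from_makefile(text, override=''):
    """An explicit override wins; otherwise read PYTHON_VERSION from the Makefile."""
    if override:
        return override
    m = re.search(r'^PYTHON_VERSION\s*\??=\s*(\S+)', text, re.MULTILINE)
    if not m:
        raise ValueError('PYTHON_VERSION not found in Makefile')
    return m.group(1)
```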

View File

@@ -0,0 +1,29 @@
name: 'Setup SSH for GitHub'
description: 'Configure SSH for private repository access'
inputs:
ssh-private-key:
description: 'SSH private key for repository access'
required: false
default: ''
runs:
using: composite
steps:
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ inputs.ssh-private-key }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ inputs.ssh-private-key }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}
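The `SSH_PRIVATE_KEY<<EOF … EOF` lines the steps above write are GitHub Actions' heredoc syntax for multi-line step outputs — a plain `NAME=value` line only supports a single line, so a private key must be wrapped in delimiters. The format can be sketched as:

```python
def multiline_output(name, value, delimiter='EOF'):
    """Render a multi-line GITHUB_OUTPUT entry in heredoc form."""
    return f'{name}<<{delimiter}\n{value}\n{delimiter}\n'
```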

View File

@@ -130,7 +130,7 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v5
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
@@ -161,6 +161,10 @@ jobs:
show-progress: false
path: awx
- uses: ./awx/.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Checkout awx-operator
uses: actions/checkout@v4
with:
@@ -168,39 +172,14 @@ jobs:
repository: ansible/awx-operator
path: awx-operator
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- uses: ./awx/.github/actions/setup-python
with:
python-version: ${{ env.py_version }}
working-directory: awx
- name: Install playbook dependencies
run: |
python3 -m pip install docker
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ secrets.PRIVATE_GITHUB_KEY }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ secrets.PRIVATE_GITHUB_KEY }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}
- name: Build AWX image
working-directory: awx
run: |
@@ -299,7 +278,7 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v5
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'
@@ -375,7 +354,7 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v5
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'

View File

@@ -49,14 +49,10 @@ jobs:
run: |
echo "DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER,,}" >> $GITHUB_ENV
echo "COMPOSE_TAG=${GITHUB_REF##*/}" >> $GITHUB_ENV
echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
env:
OWNER: '${{ github.repository_owner }}'
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- uses: ./.github/actions/setup-python
- name: Log in to registry
run: |
@@ -73,25 +69,9 @@ jobs:
make ui
if: matrix.build-targets.image-name == 'awx'
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ secrets.PRIVATE_GITHUB_KEY }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ secrets.PRIVATE_GITHUB_KEY }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Build and push AWX devel images
run: |

View File

@@ -12,7 +12,7 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v5
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'

View File

@@ -34,9 +34,11 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v4
- uses: ./.github/actions/setup-python
- name: Install python requests
run: pip install requests
- name: Check if user is a member of Ansible org
uses: jannekem/run-python-script-action@v1
id: check_user

View File

@@ -33,7 +33,7 @@ jobs:
with:
show-progress: false
- uses: actions/setup-python@v5
- uses: ./.github/actions/setup-python
with:
python-version: '3.x'

View File

@@ -36,13 +36,7 @@ jobs:
with:
show-progress: false
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- uses: ./.github/actions/setup-python
- name: Install dependencies
run: |

View File

@@ -64,14 +64,9 @@ jobs:
repository: ansible/awx-logos
path: awx-logos
- name: Get python version from Makefile
working-directory: awx
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
- uses: ./awx/.github/actions/setup-python
with:
python-version: ${{ env.py_version }}
working-directory: awx
- name: Install playbook dependencies
run: |

View File

@@ -23,37 +23,15 @@ jobs:
with:
show-progress: false
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.py_version }}
- uses: ./.github/actions/setup-python
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Generate placeholder SSH private key if SSH auth for private repos is not needed
id: generate_key
shell: bash
run: |
if [[ -z "${{ secrets.PRIVATE_GITHUB_KEY }}" ]]; then
ssh-keygen -t ed25519 -C "github-actions" -N "" -f ~/.ssh/id_ed25519
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
cat ~/.ssh/id_ed25519 >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
else
echo "SSH_PRIVATE_KEY<<EOF" >> $GITHUB_OUTPUT
echo "${{ secrets.PRIVATE_GITHUB_KEY }}" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
fi
- name: Add private GitHub key to SSH agent
uses: webfactory/ssh-agent@v0.9.0
- uses: ./.github/actions/setup-ssh-agent
with:
ssh-private-key: ${{ steps.generate_key.outputs.SSH_PRIVATE_KEY }}
ssh-private-key: ${{ secrets.PRIVATE_GITHUB_KEY }}
- name: Pre-pull image to warm build cache
run: |

View File

@@ -3,6 +3,17 @@
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
> [!CAUTION]
> The last release of this repository was released on Jul 2, 2024.
> **Releases of this project are now paused during a large scale refactoring.**
> For more information, follow [the Forum](https://forum.ansible.com/) and - more specifically - see the various communications on the matter:
>
> * [Blog: Upcoming Changes to the AWX Project](https://www.ansible.com/blog/upcoming-changes-to-the-awx-project/)
> * [Streamlining AWX Releases](https://forum.ansible.com/t/streamlining-awx-releases/6894) (primary update)
> * [Refactoring AWX into a Pluggable, Service-Oriented Architecture](https://forum.ansible.com/t/refactoring-awx-into-a-pluggable-service-oriented-architecture/7404)
> * [Upcoming changes to AWX Operator installation methods](https://forum.ansible.com/t/upcoming-changes-to-awx-operator-installation-methods/7598)
> * [AWX UI and credential types transitioning to the new pluggable architecture](https://forum.ansible.com/t/awx-ui-and-credential-types-transitioning-to-the-new-pluggable-architecture/8027)
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is one of the upstream projects for [Red Hat Ansible Automation Platform](https://www.ansible.com/products/automation-platform).
To install AWX, please view the [Install guide](./INSTALL.md).

View File

@@ -6,6 +6,7 @@ import copy
import json
import logging
import re
import yaml
from collections import Counter, OrderedDict
from datetime import timedelta
from uuid import uuid4
@@ -626,15 +627,41 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
return exclusions
def validate(self, attrs):
"""
Apply serializer validation. Called by DRF.
Can be extended by subclasses. Or consider overwriting
`validate_with_obj` in subclasses, which provides access to the model
object and exception handling for field validation.
:param dict attrs: The names and values of the model form fields.
:raise rest_framework.exceptions.ValidationError: If the validation
fails.
The exception must contain a dict with the names of the form fields
which failed validation as keys, and a list of error messages as
values. This ensures that the error messages are rendered near the
relevant fields.
:return: The names and values from the model form fields, possibly
modified by the validations.
:rtype: dict
"""
attrs = super(BaseSerializer, self).validate(attrs)
# Create/update a model instance and run its full_clean() method to
# do any validation implemented on the model class.
exclusions = self.get_validation_exclusions(self.instance)
# Create a new model instance or take the existing one if it exists,
# and update its attributes with the respective field values from
# attrs.
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
try:
# Create/update a model instance and run its full_clean() method to
# do any validation implemented on the model class.
exclusions = self.get_validation_exclusions(self.instance)
obj = self.instance or self.Meta.model()
for k, v in attrs.items():
if k not in exclusions and k != 'canonical_address_port':
setattr(obj, k, v)
# Run serializer validators which need the model object for
# validation.
self.validate_with_obj(attrs, obj)
# Apply any validations implemented on the model class.
obj.full_clean(exclude=exclusions)
# full_clean may modify values on the instance; copy those changes
# back to attrs so they are saved.
@@ -663,6 +690,32 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
raise ValidationError(d)
return attrs
def validate_with_obj(self, attrs, obj):
"""
Overwrite this if you need the model instance for your validation.
:param dict attrs: The names and values of the model form fields.
:param obj: An instance of the class's meta model.
If the serializer runs on a newly created object, obj contains only
the attrs from its serializer. If the serializer runs because an
object has been edited, obj is the existing model instance with all
attributes and values available.
:raise django.core.exceptions.ValidationError: Raise this if your
validation fails.
To make the error appear at the respective form field, instantiate
the Exception with a dict containing the field name as key and the
error message as value.
Example: ``ValidationError({"password": "Not good enough!"})``
If the exception contains just a string, the message cannot be
related to a field and is rendered at the top of the model form.
:return: None
"""
return
def reverse(self, *args, **kwargs):
kwargs['request'] = self.context.get('request')
return reverse(*args, **kwargs)
@@ -682,12 +735,11 @@ class EmptySerializer(serializers.Serializer):
class UnifiedJobTemplateSerializer(BaseSerializer):
# As a base serializer, the capabilities prefetch is not used directly,
# instead they are derived from the Workflow Job Template Serializer and the Job Template Serializer, respectively.
priority = serializers.IntegerField(required=False, min_value=0, max_value=32000)
capabilities_prefetch = []
class Meta:
model = UnifiedJobTemplate
fields = ('*', 'last_job_run', 'last_job_failed', 'next_job_run', 'status', 'priority', 'execution_environment')
fields = ('*', 'last_job_run', 'last_job_failed', 'next_job_run', 'status', 'execution_environment')
def get_related(self, obj):
res = super(UnifiedJobTemplateSerializer, self).get_related(obj)
@@ -985,7 +1037,6 @@ class UserSerializer(BaseSerializer):
return ret
def validate_password(self, value):
django_validate_password(value)
if not self.instance and value in (None, ''):
raise serializers.ValidationError(_('Password required for new User.'))
@@ -1008,6 +1059,50 @@ class UserSerializer(BaseSerializer):
return value
def validate_with_obj(self, attrs, obj):
"""
Validate the password with the Django password validators
To enable the Django password validators, configure
`settings.AUTH_PASSWORD_VALIDATORS` as described in the [Django
docs](https://docs.djangoproject.com/en/5.1/topics/auth/passwords/#enabling-password-validation)
:param dict attrs: The User form field names and their values as a dict.
Example::
{
'username': 'TestUsername', 'first_name': 'FirstName',
'last_name': 'LastName', 'email': 'First.Last@my.org',
'is_superuser': False, 'is_system_auditor': False,
'password': 'secret123'
}
:param obj: The User model instance.
:raises django.core.exceptions.ValidationError: Raise this if at least
one Django password validator fails.
The exception contains a dict ``{"password": <error-message>}``
which indicates that the password field has failed validation, and
the reason for failure.
:return: None.
"""
# We must do this here instead of in `validate_password` because some
# django password validators need access to other model instance fields,
# e.g. ``username`` for the ``UserAttributeSimilarityValidator``.
password = attrs.get("password")
# Skip validation if no password has been entered. This may happen when
# an existing User is edited.
if password and password != '$encrypted$':
# Apply validators from settings.AUTH_PASSWORD_VALIDATORS. This may
# raise ValidationError.
#
# If the validation fails, re-raise the exception with adjusted
# content to make the error appear near the password field.
try:
django_validate_password(password, user=obj)
except DjangoValidationError as exc:
raise DjangoValidationError({"password": exc.messages})
def _update_password(self, obj, new_password):
if new_password and new_password != '$encrypted$':
obj.set_password(new_password)
@@ -2997,7 +3092,6 @@ class JobOptionsSerializer(LabelsListMixin, BaseSerializer):
'scm_branch',
'forks',
'limit',
'priority',
'verbosity',
'extra_vars',
'job_tags',
@@ -3120,7 +3214,6 @@ class JobTemplateMixin(object):
class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobOptionsSerializer):
show_capabilities = ['start', 'schedule', 'copy', 'edit', 'delete']
capabilities_prefetch = ['admin', 'execute', {'copy': ['project.use', 'inventory.use']}]
priority = serializers.IntegerField(required=False, min_value=0, max_value=32000)
status = serializers.ChoiceField(choices=JobTemplate.JOB_TEMPLATE_STATUS_CHOICES, read_only=True, required=False)
@@ -3128,7 +3221,6 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
model = JobTemplate
fields = (
'*',
'priority',
'host_config_key',
'ask_scm_branch_on_launch',
'ask_diff_mode_on_launch',
@@ -3256,7 +3348,6 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
'diff_mode',
'job_slice_number',
'job_slice_count',
'priority',
'webhook_service',
'webhook_credential',
'webhook_guid',
@@ -3707,7 +3798,6 @@ class WorkflowJobTemplateWithSpecSerializer(WorkflowJobTemplateSerializer):
class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
limit = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
priority = serializers.IntegerField(required=False, min_value=0, max_value=32000)
scm_branch = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
skip_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
@@ -3728,7 +3818,6 @@ class WorkflowJobSerializer(LabelsListMixin, UnifiedJobSerializer):
'-controller_node',
'inventory',
'limit',
'priority',
'scm_branch',
'webhook_service',
'webhook_credential',
@@ -3846,7 +3935,6 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
job_type = serializers.ChoiceField(allow_blank=True, allow_null=True, required=False, default=None, choices=NEW_JOB_TYPE_CHOICES)
job_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
limit = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
priority = serializers.IntegerField(required=False, min_value=0, max_value=32000)
skip_tags = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
diff_mode = serializers.BooleanField(required=False, allow_null=True, default=None)
verbosity = serializers.ChoiceField(allow_null=True, required=False, default=None, choices=VERBOSITY_CHOICES)
@@ -3865,7 +3953,6 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
'job_tags',
'skip_tags',
'limit',
'priority',
'skip_tags',
'diff_mode',
'verbosity',
@@ -4359,7 +4446,6 @@ class JobLaunchSerializer(BaseSerializer):
job_type = serializers.ChoiceField(required=False, choices=NEW_JOB_TYPE_CHOICES, write_only=True)
skip_tags = serializers.CharField(required=False, write_only=True, allow_blank=True)
limit = serializers.CharField(required=False, write_only=True, allow_blank=True)
priority = serializers.IntegerField(required=False, write_only=False, min_value=0, max_value=32000)
verbosity = serializers.ChoiceField(required=False, choices=VERBOSITY_CHOICES, write_only=True)
execution_environment = serializers.PrimaryKeyRelatedField(queryset=ExecutionEnvironment.objects.all(), required=False, write_only=True)
labels = serializers.PrimaryKeyRelatedField(many=True, queryset=Label.objects.all(), required=False, write_only=True)
@@ -4377,7 +4463,6 @@ class JobLaunchSerializer(BaseSerializer):
'inventory',
'scm_branch',
'limit',
'priority',
'job_tags',
'skip_tags',
'job_type',
@@ -4563,7 +4648,6 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
extra_vars = VerbatimField(required=False, write_only=True)
inventory = serializers.PrimaryKeyRelatedField(queryset=Inventory.objects.all(), required=False, write_only=True)
limit = serializers.CharField(required=False, write_only=True, allow_blank=True)
priority = serializers.IntegerField(required=False, write_only=False, min_value=0, max_value=32000)
scm_branch = serializers.CharField(required=False, write_only=True, allow_blank=True)
workflow_job_template_data = serializers.SerializerMethodField()
@@ -4703,14 +4787,13 @@ class BulkJobLaunchSerializer(serializers.Serializer):
)
inventory = serializers.PrimaryKeyRelatedField(queryset=Inventory.objects.all(), required=False, write_only=True)
limit = serializers.CharField(write_only=True, required=False, allow_blank=False)
# priority = serializers.IntegerField(write_only=True, required=False, min_value=0, max_value=32000)
scm_branch = serializers.CharField(write_only=True, required=False, allow_blank=False)
skip_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
job_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
class Meta:
model = WorkflowJob
fields = ('name', 'jobs', 'description', 'extra_vars', 'organization', 'inventory', 'limit', 'priority', 'scm_branch', 'skip_tags', 'job_tags')
fields = ('name', 'jobs', 'description', 'extra_vars', 'organization', 'inventory', 'limit', 'scm_branch', 'skip_tags', 'job_tags')
read_only_fields = ()
def validate(self, attrs):
@@ -5834,6 +5917,34 @@ class InstanceGroupSerializer(BaseSerializer):
raise serializers.ValidationError(_('Only Kubernetes credentials can be associated with an Instance Group'))
return value
def validate_pod_spec_override(self, value):
if not value:
return value
# value should be empty for non-container groups
if self.instance and not self.instance.is_container_group:
raise serializers.ValidationError(_('pod_spec_override is only valid for container groups'))
pod_spec_override_json = None
# detect whether the value is yaml or json; if yaml, convert to json
try:
# convert yaml to json
pod_spec_override_json = yaml.safe_load(value)
except yaml.YAMLError:
try:
pod_spec_override_json = json.loads(value)
except json.JSONDecodeError:
raise serializers.ValidationError(_('pod_spec_override must be valid yaml or json'))
# validate that the spec does not enable automountServiceAccountToken
spec = pod_spec_override_json.get('spec', {})
automount_service_account_token = spec.get('automountServiceAccountToken', False)
if automount_service_account_token:
raise serializers.ValidationError(_('automountServiceAccountToken is not allowed for security reasons'))
return value
def validate(self, attrs):
attrs = super(InstanceGroupSerializer, self).validate(attrs)

View File

@@ -2098,7 +2098,7 @@ class WorkflowJobAccess(BaseAccess):
def filtered_queryset(self):
return WorkflowJob.objects.filter(
Q(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(organization__in=Organization.objects.filter(Q(admin_role__members=self.user)), is_bulk_job=True)
| Q(organization__in=Organization.accessible_pk_qs(self.user, 'auditor_role'))
)
def can_read(self, obj):
@@ -2496,12 +2496,11 @@ class UnifiedJobAccess(BaseAccess):
def filtered_queryset(self):
inv_pk_qs = Inventory._accessible_pk_qs(Inventory, self.user, 'read_role')
org_auditor_qs = Organization.objects.filter(Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
qs = self.model.objects.filter(
Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs)
| Q(adhoccommand__inventory__id__in=inv_pk_qs)
| Q(organization__in=org_auditor_qs)
| Q(organization__in=Organization.accessible_pk_qs(self.user, 'auditor_role'))
)
return qs

View File
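The simplification above relies on the org admin role now implying the auditor permission, so a single `accessible_pk_qs(self.user, 'auditor_role')` clause covers both admins and auditors. A toy illustration of that implication (role names and permission codenames are stand-ins, not AWX's actual RBAC data):

```python
# Toy permission model: admin explicitly carries the audit permission,
# so one role check replaces the old admin-or-auditor double condition.
ROLE_PERMISSIONS = {
    'Organization Admin': {'change_organization', 'audit_organization'},
    'Organization Auditor': {'audit_organization'},
    'Organization Member': set(),
}

def can_audit(user_roles: set[str]) -> bool:
    """True if any of the user's org roles grants the audit permission."""
    return any('audit_organization' in ROLE_PERMISSIONS[r] for r in user_roles)
```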

@@ -9,6 +9,7 @@ from prometheus_client.core import GaugeMetricFamily, HistogramMetricFamily
from prometheus_client.registry import CollectorRegistry
from django.conf import settings
from django.http import HttpRequest
import redis.exceptions
from rest_framework.request import Request
from awx.main.consumers import emit_channel_notification
@@ -290,8 +291,12 @@ class Metrics(MetricsNamespace):
def send_metrics(self):
# more than one thread could be calling this at the same time, so we
# should acquire the redis lock before sending metrics
lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
if not lock.acquire(blocking=False):
try:
lock = self.conn.lock(root_key + '-' + self._namespace + '_lock')
if not lock.acquire(blocking=False):
return
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Connection error in send_metrics: {exc}')
return
try:
current_time = time.time()

View File

@@ -88,8 +88,10 @@ class Scheduler:
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self):
relative_time = time.time() - self.global_start
def get_and_mark_pending(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
@@ -98,8 +100,10 @@ class Scheduler:
job.mark_run(relative_time)
return to_run
def time_until_next_run(self):
relative_time = time.time() - self.global_start
def time_until_next_run(self, reftime=None):
if reftime is None:
reftime = time.time() # mostly for tests
relative_time = reftime - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
@@ -115,10 +119,11 @@ class Scheduler:
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
reftime = time.time()
now = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S UTC')
now = datetime.fromtimestamp(reftime).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = time.time() - self.global_start
relative_time = reftime - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)

View File
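Accepting an optional `reftime` makes the scheduler deterministic under test and lets one loop iteration benchmark every job against the same instant. A condensed sketch of the idea (simplified; not the full AWX scheduler):

```python
import time

class Job:
    def __init__(self, period):
        self.period = period
        self.next_run = period      # relative to scheduler start

class MiniScheduler:
    def __init__(self, jobs):
        # grace period after startup, as in the original
        self.global_start = time.time() + 2.0
        self.jobs = jobs

    def get_and_mark_pending(self, reftime=None):
        if reftime is None:
            reftime = time.time()   # production path; tests inject reftime
        relative_time = reftime - self.global_start
        to_run = []
        for job in self.jobs:
            if relative_time >= job.next_run:
                to_run.append(job)
                job.next_run = relative_time + job.period
        return to_run
```

Tests pass a fixed `reftime`, so due/not-due behavior no longer depends on wall-clock timing during the test run.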

@@ -15,6 +15,7 @@ from datetime import timedelta
from django import db
from django.conf import settings
import redis.exceptions
from ansible_base.lib.logging.runtime import log_excess_runtime
@@ -130,10 +131,13 @@ class AWXConsumerBase(object):
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
save_data = self.pool.debug()
try:
self.redis.set(f'awx_{self.name}_statistics', self.pool.debug())
self.redis.set(f'awx_{self.name}_statistics', save_data)
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Redis connection error saving {self.name} status data:\n{exc}\nmissed data:\n{save_data}')
except Exception:
logger.exception(f"encountered an error communicating with redis to store {self.name} statistics")
logger.exception(f"Unknown redis error saving {self.name} status data:\nmissed data:\n{save_data}")
self.last_stats = time.time()
def run(self, *args, **kwargs):
@@ -189,7 +193,10 @@ class AWXConsumerPG(AWXConsumerBase):
current_time = time.time()
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
try:
self.subsystem_metrics.pipe_execute()
except redis.exceptions.ConnectionError as exc:
logger.warning(f'Redis connection error saving dispatcher metrics, error:\n{exc}')
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
@@ -205,7 +212,11 @@ class AWXConsumerPG(AWXConsumerBase):
except Exception as exc:
logger.warning(f'Failed to save dispatcher statistics {exc}')
for job in self.scheduler.get_and_mark_pending():
# Everything benchmarks against the same reference time, so that skew
# introduced by the runtime of the actions themselves does not upset scheduling expectations
reftime = time.time()
for job in self.scheduler.get_and_mark_pending(reftime=reftime):
if 'control' in job.data:
try:
job.data['control']()
@@ -222,7 +233,7 @@ class AWXConsumerPG(AWXConsumerBase):
self.listen_start = time.time()
return self.scheduler.time_until_next_run()
return self.scheduler.time_until_next_run(reftime=reftime)
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)

View File

@@ -86,6 +86,7 @@ class CallbackBrokerWorker(BaseWorker):
return os.getpid()
def read(self, queue):
has_redis_error = False
try:
res = self.redis.blpop(self.queue_name, timeout=1)
if res is None:
@@ -95,14 +96,21 @@ class CallbackBrokerWorker(BaseWorker):
self.subsystem_metrics.inc('callback_receiver_events_popped_redis', 1)
self.subsystem_metrics.inc('callback_receiver_events_in_memory', 1)
return json.loads(res[1])
except redis.exceptions.ConnectionError as exc:
# Low-noise log, because this error is very common and many workers will write it
logger.error(f"redis connection error: {exc}")
has_redis_error = True
time.sleep(5)
except redis.exceptions.RedisError:
logger.exception("encountered an error communicating with redis")
has_redis_error = True
time.sleep(1)
except (json.JSONDecodeError, KeyError):
logger.exception("failed to decode JSON message from redis")
finally:
self.record_statistics()
self.record_read_metrics()
if not has_redis_error:
self.record_statistics()
self.record_read_metrics()
return {'event': 'FLUSH'}

View File
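The `has_redis_error` flag keeps the worker from touching redis again in the same iteration that just failed (statistics recording also goes through redis), while still emitting a FLUSH event. A stripped-down sketch of that read loop with a fake queue in place of redis-py (the retry sleeps are omitted to keep it testable):

```python
import json

class FakeConnectionError(Exception):
    pass

def read_event(pop, record_statistics):
    """Pop one event; on redis failure, skip stats recording and flush."""
    has_redis_error = False
    try:
        res = pop()
        if res is not None:
            return json.loads(res)
    except FakeConnectionError:
        has_redis_error = True   # low-noise path: many workers hit this
    finally:
        if not has_redis_error:
            record_statistics()
    return {'event': 'FLUSH'}
```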

@@ -1,10 +1,13 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
from django.conf import settings
from django.core.management.base import BaseCommand
from awx.main.analytics.subsystem_metrics import CallbackReceiverMetricsServer
import redis
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
import redis.exceptions
from awx.main.analytics.subsystem_metrics import CallbackReceiverMetricsServer
from awx.main.dispatch.control import Control
from awx.main.dispatch.worker import AWXConsumerRedis, CallbackBrokerWorker
@@ -27,7 +30,10 @@ class Command(BaseCommand):
return
consumer = None
CallbackReceiverMetricsServer().start()
try:
CallbackReceiverMetricsServer().start()
except redis.exceptions.ConnectionError as exc:
raise CommandError(f'Callback receiver could not connect to redis, error: {exc}')
try:
consumer = AWXConsumerRedis(

View File

@@ -3,8 +3,10 @@
import logging
import yaml
import redis
from django.conf import settings
from django.core.management.base import BaseCommand
from django.core.management.base import BaseCommand, CommandError
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control
@@ -63,7 +65,10 @@ class Command(BaseCommand):
consumer = None
DispatcherMetricsServer().start()
try:
DispatcherMetricsServer().start()
except redis.exceptions.ConnectionError as exc:
raise CommandError(f'Dispatcher could not connect to redis, error: {exc}')
try:
queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]

View File

@@ -1,27 +0,0 @@
# Generated by Django 4.2.16 on 2025-03-11 14:40
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0201_delete_token_cleanup_job'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='priority',
field=models.PositiveIntegerField(
default=0,
editable=False,
help_text='Relative priority to other jobs. The higher the number, the higher the priority. Jobs with equivalent priority are started based on available capacity and launch time.',
),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='priority',
field=models.PositiveIntegerField(default=0),
),
]

View File

@@ -298,7 +298,6 @@ class JobTemplate(
'organization',
'survey_passwords',
'labels',
'priority',
'credentials',
'job_slice_number',
'job_slice_count',
@@ -1176,7 +1175,7 @@ class SystemJobTemplate(UnifiedJobTemplate, SystemJobOptions):
@classmethod
def _get_unified_job_field_names(cls):
return ['name', 'description', 'organization', 'priority', 'job_type', 'extra_vars']
return ['name', 'description', 'organization', 'job_type', 'extra_vars']
def get_absolute_url(self, request=None):
return reverse('api:system_job_template_detail', kwargs={'pk': self.pk}, request=request)

View File

@@ -354,7 +354,7 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
@classmethod
def _get_unified_job_field_names(cls):
return set(f.name for f in ProjectOptions._meta.fields) | set(['name', 'description', 'priority', 'organization'])
return set(f.name for f in ProjectOptions._meta.fields) | set(['name', 'description', 'organization'])
def clean_organization(self):
if self.pk:

View File

@@ -118,11 +118,6 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEn
default=None,
editable=False,
)
priority = models.PositiveIntegerField(
null=False,
default=0,
editable=True,
)
current_job = models.ForeignKey(
'UnifiedJob',
null=True,
@@ -590,13 +585,6 @@ class UnifiedJob(
default=None,
editable=False,
)
priority = models.PositiveIntegerField(
default=0,
editable=False,
help_text=_(
"Relative priority to other jobs. The higher the number, the higher the priority. Jobs with equivalent priority are started based on available capacity and launch time."
),
)
emitted_events = models.PositiveIntegerField(
default=0,
editable=False,

View File

@@ -416,7 +416,7 @@ class WorkflowJobOptions(LaunchTimeConfigBase):
@classmethod
def _get_unified_job_field_names(cls):
r = set(f.name for f in WorkflowJobOptions._meta.fields) | set(
['name', 'description', 'organization', 'survey_passwords', 'labels', 'limit', 'scm_branch', 'priority', 'job_tags', 'skip_tags']
['name', 'description', 'organization', 'survey_passwords', 'labels', 'limit', 'scm_branch', 'job_tags', 'skip_tags']
)
r.remove('char_prompts') # needed due to copying launch config to launch config
return r

View File

@@ -53,8 +53,8 @@ class GrafanaBackend(AWXBaseEmailBackend, CustomNotificationBase):
):
super(GrafanaBackend, self).__init__(fail_silently=fail_silently)
self.grafana_key = grafana_key
self.dashboardId = int(dashboardId) if dashboardId is not None else None
self.panelId = int(panelId) if panelId is not None else None
self.dashboardId = int(dashboardId) if dashboardId is not None and dashboardId != "" else None
self.panelId = int(panelId) if panelId is not None and panelId != "" else None
self.annotation_tags = annotation_tags if annotation_tags is not None else []
self.grafana_no_verify_ssl = grafana_no_verify_ssl
self.isRegion = isRegion
@@ -97,6 +97,7 @@ class GrafanaBackend(AWXBaseEmailBackend, CustomNotificationBase):
r = requests.post(
"{}/api/annotations".format(m.recipients()[0]), json=grafana_data, headers=grafana_headers, verify=(not self.grafana_no_verify_ssl)
)
if r.status_code >= 400:
logger.error(smart_str(_("Error sending notification grafana: {}").format(r.status_code)))
if not self.fail_silently:

View File
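The fix guards the `int()` conversion against empty strings as well as `None`, since form-submitted values often arrive as `""`. The guard in isolation (hypothetical helper name):

```python
def parse_grafana_id(raw):
    """Convert a dashboard/panel id to int, treating None and "" as unset."""
    return int(raw) if raw is not None and raw != "" else None
```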

@@ -174,6 +174,9 @@ class PodManager(object):
)
pod_spec['spec']['containers'][0]['name'] = self.pod_name
# Prevent mounting of service account token in job pods in order to prevent job pods from accessing the k8s API via in cluster service account auth
pod_spec['spec']['automountServiceAccountToken'] = False
return pod_spec

View File
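Disabling `automountServiceAccountToken` in the generated pod spec keeps job pods from authenticating to the Kubernetes API via the in-cluster service account. The mutation itself is a small dict edit; a sketch under a hypothetical helper name:

```python
def harden_pod_spec(pod_spec: dict, pod_name: str) -> dict:
    """Name the job container and block in-cluster k8s API access."""
    pod_spec['spec']['containers'][0]['name'] = pod_name
    # prevent job pods from reaching the k8s API via in-cluster auth
    pod_spec['spec']['automountServiceAccountToken'] = False
    return pod_spec
```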

@@ -10,6 +10,8 @@ import time
import sys
import signal
import redis
# Django
from django.db import transaction
from django.utils.translation import gettext_lazy as _, gettext_noop
@@ -97,7 +99,7 @@ class TaskBase:
UnifiedJob.objects.filter(**filter_args)
.exclude(launch_type='sync')
.exclude(polymorphic_ctype_id=wf_approval_ctype_id)
.order_by('-priority', 'created')
.order_by('created')
.prefetch_related('dependent_jobs')
)
self.all_tasks = [t for t in qs]
@@ -120,6 +122,8 @@ class TaskBase:
self.subsystem_metrics.pipe_execute()
else:
logger.debug(f"skipping recording {self.prefix} metrics, last recorded {time_last_recorded} seconds ago")
except redis.exceptions.ConnectionError as exc:
logger.warning(f"Redis connection error saving metrics for {self.prefix}, error: {exc}")
except Exception:
logger.exception(f"Error saving metrics for {self.prefix}")
@@ -286,7 +290,7 @@ class WorkflowManager(TaskBase):
@timeit
def get_tasks(self, filter_args):
self.all_tasks = [wf for wf in WorkflowJob.objects.filter(**filter_args).order_by('-priority', 'created')]
self.all_tasks = [wf for wf in WorkflowJob.objects.filter(**filter_args)]
@timeit
def _schedule(self):
@@ -336,14 +340,12 @@ class DependencyManager(TaskBase):
return bool(((update.finished + timedelta(seconds=cache_timeout))) < tz_now())
def get_or_create_project_update(self, task):
project_id = task.project_id
priority = task.priority
def get_or_create_project_update(self, project_id):
project = self.all_projects.get(project_id, None)
if project is not None:
latest_project_update = project.project_updates.filter(job_type='check').order_by("-created").first()
if self.should_update_again(latest_project_update, project.scm_update_cache_timeout):
project_task = project.create_project_update(_eager_fields=dict(launch_type='dependency', priority=priority))
project_task = project.create_project_update(_eager_fields=dict(launch_type='dependency'))
project_task.signal_start()
return [project_task]
else:
@@ -351,7 +353,7 @@ class DependencyManager(TaskBase):
return []
def gen_dep_for_job(self, task):
dependencies = self.get_or_create_project_update(task)
dependencies = self.get_or_create_project_update(task.project_id)
try:
start_args = json.loads(decrypt_field(task, field_name="start_args"))
@@ -363,7 +365,7 @@ class DependencyManager(TaskBase):
continue
latest_inventory_update = inventory_source.inventory_updates.order_by("-created").first()
if self.should_update_again(latest_inventory_update, inventory_source.update_cache_timeout):
inventory_task = inventory_source.create_inventory_update(_eager_fields=dict(launch_type='dependency', priority=task.priority))
inventory_task = inventory_source.create_inventory_update(_eager_fields=dict(launch_type='dependency'))
inventory_task.signal_start()
dependencies.append(inventory_task)
else:

View File

@@ -6,7 +6,6 @@ import logging
# Django
from django.conf import settings
from django.db.models.query import QuerySet
from django.utils.encoding import smart_str
from django.utils.timezone import now
from django.db import OperationalError
@@ -26,6 +25,7 @@ system_tracking_logger = logging.getLogger('awx.analytics.system_tracking')
def start_fact_cache(hosts, destination, log_data, timeout=None, inventory_id=None):
log_data['inventory_id'] = inventory_id
log_data['written_ct'] = 0
hosts_cached = list()
try:
os.makedirs(destination, mode=0o700)
except FileExistsError:
@@ -34,17 +34,17 @@ def start_fact_cache(hosts, destination, log_data, timeout=None, inventory_id=No
if timeout is None:
timeout = settings.ANSIBLE_FACT_CACHE_TIMEOUT
if isinstance(hosts, QuerySet):
hosts = hosts.iterator()
last_filepath_written = None
for host in hosts:
if (not host.ansible_facts_modified) or (timeout and host.ansible_facts_modified < now() - datetime.timedelta(seconds=timeout)):
hosts_cached.append(host)
if not host.ansible_facts_modified or (timeout and host.ansible_facts_modified < now() - datetime.timedelta(seconds=timeout)):
continue # facts are expired - do not write them
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
try:
with codecs.open(filepath, 'w', encoding='utf-8') as f:
os.chmod(f.name, 0o600)
@@ -54,10 +54,11 @@ def start_fact_cache(hosts, destination, log_data, timeout=None, inventory_id=No
except IOError:
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
continue
# make note of the time we wrote the last file so we can check if any file changed later
if last_filepath_written:
return os.path.getmtime(last_filepath_written)
return None
return os.path.getmtime(last_filepath_written), hosts_cached
return None, hosts_cached
def raw_update_hosts(host_list):
@@ -88,17 +89,14 @@ def update_hosts(host_list, max_tries=5):
msg='Inventory {inventory_id} host facts: updated {updated_ct}, cleared {cleared_ct}, unchanged {unmodified_ct}, took {delta:.3f} s',
add_log_data=True,
)
def finish_fact_cache(hosts, destination, facts_write_time, log_data, job_id=None, inventory_id=None):
def finish_fact_cache(hosts_cached, destination, facts_write_time, log_data, job_id=None, inventory_id=None):
log_data['inventory_id'] = inventory_id
log_data['updated_ct'] = 0
log_data['unmodified_ct'] = 0
log_data['cleared_ct'] = 0
if isinstance(hosts, QuerySet):
hosts = hosts.iterator()
hosts_to_update = []
for host in hosts:
for host in hosts_cached:
filepath = os.sep.join(map(str, [destination, host.name]))
if not os.path.realpath(filepath).startswith(destination):
system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
@@ -130,6 +128,7 @@ def finish_fact_cache(hosts, destination, facts_write_time, log_data, job_id=Non
log_data['unmodified_ct'] += 1
else:
# if the file goes missing, ansible removed it (likely via clear_facts)
# but if the file goes missing for a host whose facts were never cached at start, we should not clear the facts
host.ansible_facts = {}
host.ansible_facts_modified = now()
hosts_to_update.append(host)

View File
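The core of the fix is having `start_fact_cache` return the snapshot of hosts it saw (`hosts_cached`), so `finish_fact_cache` reconciles that same set instead of re-querying the inventory, which may have changed during the job. A simplified sketch of the write side (tuples stand in for Host objects; shapes and names are illustrative):

```python
import json
import os
import time

def start_fact_cache(hosts, destination, timeout):
    """hosts: iterable of (name, facts, facts_modified_epoch) tuples.

    Returns (mtime_of_last_file_written, hosts_cached) so the finish step
    operates on the hosts captured at start, not a fresh inventory query.
    """
    os.makedirs(destination, mode=0o700, exist_ok=True)
    hosts_cached, last_filepath = [], None
    now = time.time()
    for name, facts, modified in hosts:
        hosts_cached.append(name)  # snapshot of hosts seen at start
        if modified is None or (timeout and modified < now - timeout):
            continue  # facts are expired - do not write them
        filepath = os.path.join(destination, name)
        if not os.path.realpath(filepath).startswith(os.path.realpath(destination)):
            continue  # path traversal guard, as in the original
        with open(filepath, 'w', encoding='utf-8') as f:
            json.dump(facts, f)
        last_filepath = filepath
    write_time = os.path.getmtime(last_filepath) if last_filepath else None
    return write_time, hosts_cached
```

Expired hosts stay in the snapshot but get no file, so a missing file at finish time can be distinguished from a host that joined the inventory mid-run.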

@@ -45,22 +45,35 @@ def build_indirect_host_data(job: Job, job_event_queries: dict[str, dict[str, st
facts_missing_logged = False
unhashable_facts_logged = False
job_event_queries_fqcn = {}
for query_k, query_v in job_event_queries.items():
if len(parts := query_k.split('.')) != 3:
logger.info(f"Skipping malformed query '{query_k}'. Expected to be of the form 'a.b.c'")
continue
if parts[2] != '*':
continue
job_event_queries_fqcn['.'.join(parts[0:2])] = query_v
for event in job.job_events.filter(event_data__isnull=False).iterator():
if 'res' not in event.event_data:
continue
if 'resolved_action' not in event.event_data or event.event_data['resolved_action'] not in job_event_queries.keys():
if not (resolved_action := event.event_data.get('resolved_action', None)):
continue
resolved_action = event.event_data['resolved_action']
if len(resolved_action_parts := resolved_action.split('.')) != 3:
logger.debug(f"Malformed invocation module name '{resolved_action}'. Expected to be of the form 'a.b.c'")
continue
# We expect a dict with a 'query' key for the resolved_action
if 'query' not in job_event_queries[resolved_action]:
resolved_action_fqcn = '.'.join(resolved_action_parts[0:2])
# Match module invocation to collection queries
# First match against fully qualified query names i.e. a.b.c
# Then try and match against wildcard queries i.e. a.b.*
if not (jq_str_for_event := job_event_queries.get(resolved_action, job_event_queries_fqcn.get(resolved_action_fqcn, {})).get('query')):
continue
# Recall from cache, or process the jq expression, and loop over the jq results
jq_str_for_event = job_event_queries[resolved_action]['query']
if jq_str_for_event not in compiled_jq_expressions:
compiled_jq_expressions[jq_str_for_event] = jq.compile(jq_str_for_event)
compiled_jq = compiled_jq_expressions[jq_str_for_event]

View File
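Matching now tries the fully qualified module name first and falls back to a collection-level wildcard (`ns.collection.*`), with an exact entry shadowing the wildcard even when its query is empty. The lookup order in isolation (hypothetical helper; data shapes mirror the diff):

```python
def query_for_action(resolved_action, job_event_queries):
    """Return the jq query string for a module invocation, or None.

    Exact 'ns.collection.module' keys win; 'ns.collection.*' is the fallback.
    """
    parts = resolved_action.split('.')
    if len(parts) != 3:
        return None  # malformed module name, e.g. 'a.b' or 'a.b.c.d'
    wildcard_queries = {}
    for key, value in job_event_queries.items():
        key_parts = key.split('.')
        if len(key_parts) == 3 and key_parts[2] == '*':
            wildcard_queries['.'.join(key_parts[:2])] = value
    fqcn = '.'.join(parts[:2])
    match = job_event_queries.get(resolved_action, wildcard_queries.get(fqcn, {}))
    return match.get('query')
```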

@@ -1091,7 +1091,7 @@ class RunJob(SourceControlMixin, BaseTask):
# where ansible expects to find it
if self.should_use_fact_cache():
job.log_lifecycle("start_job_fact_cache")
self.facts_write_time = start_fact_cache(
self.facts_write_time, self.hosts_with_facts_cached = start_fact_cache(
job.get_hosts_for_fact_cache(), os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'), inventory_id=job.inventory_id
)
@@ -1110,7 +1110,7 @@ class RunJob(SourceControlMixin, BaseTask):
if self.should_use_fact_cache() and self.runner_callback.artifacts_processed:
job.log_lifecycle("finish_job_fact_cache")
finish_fact_cache(
job.get_hosts_for_fact_cache(),
self.hosts_with_facts_cached,
os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'),
facts_write_time=self.facts_write_time,
job_id=job.id,

View File

@@ -0,0 +1,7 @@
---
- hosts: all
gather_facts: false
connection: local
tasks:
- meta: clear_facts

View File

@@ -0,0 +1,17 @@
---
- hosts: all
vars:
extra_value: ""
gather_facts: false
connection: local
tasks:
- name: set a custom fact
set_fact:
foo: "bar{{ extra_value }}"
bar:
a:
b:
- "c"
- "d"
cacheable: true

View File

@@ -0,0 +1,9 @@
---
- hosts: all
gather_facts: false
connection: local
vars:
msg: 'hello'
tasks:
- debug: var=msg

View File

@@ -56,6 +56,175 @@ def test_user_create(post, admin):
assert not response.data['is_system_auditor']
# Disable local password checks to ensure that any ValidationError originates from the Django validators.
@override_settings(
LOCAL_PASSWORD_MIN_LENGTH=1,
LOCAL_PASSWORD_MIN_DIGITS=0,
LOCAL_PASSWORD_MIN_UPPER=0,
LOCAL_PASSWORD_MIN_SPECIAL=0,
)
@pytest.mark.django_db
def test_user_create_with_django_password_validation_basic(post, admin):
"""Test if the Django password validators are applied correctly."""
with override_settings(
AUTH_PASSWORD_VALIDATORS=[
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
'OPTIONS': {
'min_length': 3,
},
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
],
):
# This user should fail the UserAttrSimilarity, MinLength and CommonPassword validators.
user_attrs = (
{
"password": "Password", # NOSONAR
"username": "Password",
"is_superuser": False,
},
)
print(f"Create user with invalid password {user_attrs=}")
response = post(reverse('api:user_list'), user_attrs, admin, middleware=SessionMiddleware(mock.Mock()))
assert response.status_code == 400
# This user should pass all Django validators.
user_attrs = {
"password": "r$TyKiOCb#ED", # NOSONAR
"username": "TestUser",
"is_superuser": False,
}
print(f"Create user with valid password {user_attrs=}")
response = post(reverse('api:user_list'), user_attrs, admin, middleware=SessionMiddleware(mock.Mock()))
assert response.status_code == 201
@pytest.mark.parametrize(
"user_attrs,validators,expected_status_code",
[
# Test password similarity with username.
(
{"password": "TestUser1", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator'},
],
400,
),
(
{"password": "abc", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator'},
],
201,
),
# Test password min length criterion.
(
{"password": "TooShort", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 'OPTIONS': {'min_length': 9}},
],
400,
),
(
{"password": "LongEnough", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 'OPTIONS': {'min_length': 9}},
],
201,
),
# Test password is too common criterion.
(
{"password": "Password", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator'},
],
400,
),
(
{"password": "aEArV$5Vkdw", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator'},
],
201,
),
# Test if password is only numeric.
(
{"password": "1234567890", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator'},
],
400,
),
(
{"password": "abc4567890", "username": "TestUser1", "is_superuser": False}, # NOSONAR
[
{'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator'},
],
201,
),
],
)
# Disable local password checks to ensure that any ValidationError originates from the Django validators.
@override_settings(
LOCAL_PASSWORD_MIN_LENGTH=1,
LOCAL_PASSWORD_MIN_DIGITS=0,
LOCAL_PASSWORD_MIN_UPPER=0,
LOCAL_PASSWORD_MIN_SPECIAL=0,
)
@pytest.mark.django_db
def test_user_create_with_django_password_validation_ext(post, delete, admin, user_attrs, validators, expected_status_code):
"""Test the functionality of the single Django password validators."""
#
default_parameters = {
# Default values for input parameters which are None.
"user_attrs": {
"password": "r$TyKiOCb#ED", # NOSONAR
"username": "DefaultUser",
"is_superuser": False,
},
"validators": [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
'OPTIONS': {
'min_length': 8,
},
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
],
}
user_attrs = user_attrs if user_attrs is not None else default_parameters["user_attrs"]
validators = validators if validators is not None else default_parameters["validators"]
with override_settings(AUTH_PASSWORD_VALIDATORS=validators):
response = post(reverse('api:user_list'), user_attrs, admin, middleware=SessionMiddleware(mock.Mock()))
assert response.status_code == expected_status_code
# Delete user if it was created successfully.
if response.status_code == 201:
response = delete(reverse('api:user_detail', kwargs={'pk': response.data['id']}), admin, middleware=SessionMiddleware(mock.Mock()))
assert response.status_code == 204
else:
# Catch the unexpected behavior that sometimes the user is written
# into the database before the validation fails. This actually can
# happen if UserSerializer.validate instantiates User(**attrs)!
username = user_attrs['username']
assert not User.objects.filter(username=username)
@pytest.mark.django_db
def test_fail_double_create_user(post, admin):
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin, middleware=SessionMiddleware(mock.Mock()))
@@ -82,6 +251,10 @@ def test_updating_own_password_refreshes_session(patch, admin):
Updating your own password should refresh the session id.
'''
with mock.patch('awx.api.serializers.update_session_auth_hash') as update_session_auth_hash:
# Attention: If the Django password validator `CommonPasswordValidator`
# is active, this test case will fail because this validator raises on
# password 'newpassword'. Consider changing the hard-coded password to
# something uncommon.
patch(reverse('api:user_detail', kwargs={'pk': admin.pk}), {'password': 'newpassword'}, admin, middleware=SessionMiddleware(mock.Mock()))
assert update_session_auth_hash.called

View File

@@ -106,6 +106,17 @@ def test_compat_role_naming(setup_managed_roles, job_template, rando, alice):
assert rd.created_by is None
@pytest.mark.django_db
def test_organization_admin_has_audit(setup_managed_roles):
"""This formalizes a behavior change from the old to the new RBAC system.
Previously, the auditor_role did not list admin_role as a parent;
this made various queries hard to deal with, requiring two conditions.
The new system should explicitly list the auditor permission in the org admin role"""
rd = RoleDefinition.objects.get(name='Organization Admin')
assert 'audit_organization' in rd.permissions.values_list('codename', flat=True)
@pytest.mark.django_db
def test_organization_level_permissions(organization, inventory, setup_managed_roles):
u1 = User.objects.create(username='alice')

View File

@@ -1,6 +1,7 @@
import pytest
from awx.main.access import (
UnifiedJobAccess,
WorkflowJobTemplateAccess,
WorkflowJobTemplateNodeAccess,
WorkflowJobAccess,
@@ -245,6 +246,30 @@ class TestWorkflowJobAccess:
inventory.use_role.members.add(rando)
assert WorkflowJobAccess(rando).can_start(workflow_job)
@pytest.mark.parametrize('org_role', ['admin_role', 'auditor_role'])
def test_workflow_job_org_audit_access(self, workflow_job_template, rando, org_role):
assert workflow_job_template.organization # sanity
workflow_job = workflow_job_template.create_unified_job()
assert workflow_job.organization # sanity
assert not UnifiedJobAccess(rando).can_read(workflow_job)
assert not WorkflowJobAccess(rando).can_read(workflow_job)
assert workflow_job not in WorkflowJobAccess(rando).filtered_queryset()
org = workflow_job.organization
role = getattr(org, org_role)
role.members.add(rando)
assert UnifiedJobAccess(rando).can_read(workflow_job)
assert WorkflowJobAccess(rando).can_read(workflow_job)
assert workflow_job in WorkflowJobAccess(rando).filtered_queryset()
# Organization-level permissions should persist after deleting the WFJT
workflow_job_template.delete()
assert UnifiedJobAccess(rando).can_read(workflow_job)
assert WorkflowJobAccess(rando).can_read(workflow_job)
assert workflow_job in WorkflowJobAccess(rando).filtered_queryset()
@pytest.mark.django_db
class TestWFJTCopyAccess:

View File

@@ -1,4 +1,5 @@
import yaml
from functools import reduce
from unittest import mock
import pytest
@@ -20,6 +21,46 @@ from awx.main.models.indirect_managed_node_audit import IndirectManagedNodeAudit
TEST_JQ = "{name: .name, canonical_facts: {host_name: .direct_host_name}, facts: {another_host_name: .direct_host_name}}"
class Query(dict):
def __init__(self, resolved_action: str, query_jq: dict):
self._resolved_action = resolved_action.split('.')
self._collection_ns, self._collection_name, self._module_name = self._resolved_action
super().__init__({self.resolve_key: {'query': query_jq}})
def get_fqcn(self):
return f'{self._collection_ns}.{self._collection_name}'
@property
def resolve_value(self):
return self[self.resolve_key]
@property
def resolve_key(self):
return f'{self.get_fqcn()}.{self._module_name}'
def resolve(self, module_name=None):
return {f'{self.get_fqcn()}.{module_name or self._module_name}': self.resolve_value}
def create_event_query(self, module_name=None):
if (module_name := module_name or self._module_name) == '*':
raise ValueError('Invalid module name *')
return self.create_event_queries([module_name])
def create_event_queries(self, module_names):
queries = {}
for name in module_names:
queries |= self.resolve(name)
return EventQuery.objects.create(
fqcn=self.get_fqcn(),
collection_version='1.0.1',
event_query=yaml.dump(queries, default_flow_style=False),
)
def create_registered_event(self, job, module_name):
job.job_events.create(event_data={'resolved_action': f'{self.get_fqcn()}.{module_name}', 'res': {'direct_host_name': 'foo_host', 'name': 'vm-foo'}})
@pytest.fixture
def bare_job(job_factory):
job = job_factory()
@@ -39,11 +80,6 @@ def job_with_counted_event(bare_job):
return bare_job
def create_event_query(fqcn='demo.query'):
module_name = f'{fqcn}.example'
return EventQuery.objects.create(fqcn=fqcn, collection_version='1.0.1', event_query=yaml.dump({module_name: {'query': TEST_JQ}}, default_flow_style=False))
def create_audit_record(name, job, organization, created=now()):
record = IndirectManagedNodeAudit.objects.create(name=name, job=job, organization=organization)
record.created = created
@@ -54,7 +90,7 @@ def create_audit_record(name, job, organization, created=now()):
@pytest.fixture
def event_query():
"This is ordinarily created by the artifacts callback"
return create_event_query()
return Query('demo.query.example', TEST_JQ).create_event_query()
@pytest.fixture
@@ -72,105 +108,211 @@ def new_audit_record(bare_job, organization):
@pytest.mark.django_db
def test_build_with_no_results(bare_job):
# never filled in events, should do nothing
assert build_indirect_host_data(bare_job, {}) == []
@pytest.mark.parametrize(
'queries,expected_matches',
(
pytest.param(
[],
0,
id='no_results',
),
pytest.param(
[Query('demo.query.example', TEST_JQ)],
1,
id='fully_qualified',
),
pytest.param(
[Query('demo.query.*', TEST_JQ)],
1,
id='wildcard',
),
pytest.param(
[
Query('demo.query.*', TEST_JQ),
Query('demo.query.example', TEST_JQ),
],
1,
id='wildcard_and_fully_qualified',
),
pytest.param(
[
Query('demo.query.*', TEST_JQ),
Query('demo.query.example', {}),
],
0,
id='wildcard_and_empty_fully_qualified',
),
pytest.param(
[
Query('demo.query.example', {}),
Query('demo.query.*', TEST_JQ),
],
0,
id='ordering_should_not_matter',
),
),
)
def test_build_indirect_host_data(job_with_counted_event, queries: list[Query], expected_matches: int):
data = build_indirect_host_data(job_with_counted_event, {k: v for d in queries for k, v in d.items()})
assert len(data) == expected_matches
@mock.patch('awx.main.tasks.host_indirect.logger.debug')
@pytest.mark.django_db
@pytest.mark.parametrize(
'task_name',
(
pytest.param(
'demo.query',
id='two_segment_name',
),
pytest.param(
'demo',
id='one_segment_name',
),
pytest.param(
'a.b.c.d',
id='four_segment_name',
),
),
)
def test_build_indirect_host_data_malformed_module_name(mock_logger_debug, bare_job, task_name: str):
create_registered_event(bare_job, task_name)
assert build_indirect_host_data(bare_job, Query('demo.query.example', TEST_JQ)) == []
mock_logger_debug.assert_called_once_with(f"Malformed invocation module name '{task_name}'. Expected to be of the form 'a.b.c'")
@mock.patch('awx.main.tasks.host_indirect.logger.info')
@pytest.mark.django_db
@pytest.mark.parametrize(
'query',
(
pytest.param(
'demo.query',
id='two_segment_query',
),
pytest.param(
'demo',
id='one_segment_query',
),
pytest.param(
'a.b.c.d',
id='four_segment_query',
),
),
)
def test_build_indirect_host_data_malformed_query(mock_logger_info, job_with_counted_event, query: str):
assert build_indirect_host_data(job_with_counted_event, {query: {'query': TEST_JQ}}) == []
mock_logger_info.assert_called_once_with(f"Skiping malformed query '{query}'. Expected to be of the form 'a.b.c'")
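The malformed-name tests above both expect names of the exact form `a.b.c` (namespace, collection, module). A minimal illustrative validator for that shape (not AWX's actual implementation, just a sketch of the rule the tests encode) could look like:

```python
def is_fully_qualified(name: str) -> bool:
    """True when `name` looks like '<namespace>.<collection>.<module>'."""
    parts = name.split('.')
    # exactly three non-empty dot-separated segments
    return len(parts) == 3 and all(parts)


assert is_fully_qualified('demo.query.example')
assert not is_fully_qualified('demo.query')  # two segments
assert not is_fully_qualified('demo')        # one segment
assert not is_fully_qualified('a.b.c.d')     # four segments
```

All three parametrized inputs (`demo.query`, `demo`, `a.b.c.d`) fail this check, which is why the tests expect an empty result plus a single log message.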
@pytest.mark.django_db
def test_collect_an_event(job_with_counted_event):
records = build_indirect_host_data(job_with_counted_event, {'demo.query.example': {'query': TEST_JQ}})
assert len(records) == 1
@pytest.mark.parametrize(
'query',
(
pytest.param(
Query('demo.query.example', TEST_JQ),
id='fully_qualified',
),
pytest.param(
Query('demo.query.*', TEST_JQ),
id='wildcard',
),
),
)
def test_fetch_job_event_query(bare_job, query: Query):
query.create_event_query(module_name='example')
assert fetch_job_event_query(bare_job) == query.resolve('example')
@pytest.mark.django_db
def test_fetch_job_event_query(bare_job, event_query):
assert fetch_job_event_query(bare_job) == {'demo.query.example': {'query': TEST_JQ}}
@pytest.mark.parametrize(
'queries',
(
[
Query('demo.query.example', TEST_JQ),
Query('demo2.query.example', TEST_JQ),
],
[
Query('demo.query.*', TEST_JQ),
Query('demo2.query.example', TEST_JQ),
],
),
)
def test_fetch_multiple_job_event_query(bare_job, queries: list[Query]):
for q in queries:
q.create_event_query(module_name='example')
assert fetch_job_event_query(bare_job) == reduce(lambda acc, q: acc | q.resolve('example'), queries, {})
@pytest.mark.django_db
def test_fetch_multiple_job_event_query(bare_job):
create_event_query(fqcn='demo.query')
create_event_query(fqcn='demo2.query')
assert fetch_job_event_query(bare_job) == {'demo.query.example': {'query': TEST_JQ}, 'demo2.query.example': {'query': TEST_JQ}}
@pytest.mark.parametrize(
('state',),
(
pytest.param(
[
(
Query('demo.query.example', TEST_JQ),
['example'],
),
],
id='fully_qualified',
),
pytest.param(
[
(
Query('demo.query.example', TEST_JQ),
['example'] * 3,
),
],
id='multiple_events_same_module_same_host',
),
pytest.param(
[
(
Query('demo.query.example', TEST_JQ),
['example'],
),
(
Query('demo2.query.example', TEST_JQ),
['example'],
),
],
id='multiple_modules',
),
pytest.param(
[
(
Query('demo.query.*', TEST_JQ),
['example', 'example2'],
),
],
id='multiple_modules_same_collection',
),
),
)
def test_save_indirect_host_entries(bare_job, state):
all_task_names = []
for entry in state:
query, module_names = entry
all_task_names.extend([f'{query.get_fqcn()}.{module_name}' for module_name in module_names])
query.create_event_queries(module_names)
[query.create_registered_event(bare_job, n) for n in module_names]
save_indirect_host_entries(bare_job.id)
bare_job.refresh_from_db()
@pytest.mark.django_db
def test_save_indirect_host_entries(job_with_counted_event, event_query):
assert job_with_counted_event.event_queries_processed is False
save_indirect_host_entries(job_with_counted_event.id)
job_with_counted_event.refresh_from_db()
assert job_with_counted_event.event_queries_processed is True
assert IndirectManagedNodeAudit.objects.filter(job=job_with_counted_event).count() == 1
host_audit = IndirectManagedNodeAudit.objects.filter(job=job_with_counted_event).first()
assert host_audit.count == 1
assert bare_job.event_queries_processed is True
assert IndirectManagedNodeAudit.objects.filter(job=bare_job).count() == 1
host_audit = IndirectManagedNodeAudit.objects.filter(job=bare_job).first()
assert host_audit.count == len(all_task_names)
assert host_audit.canonical_facts == {'host_name': 'foo_host'}
assert host_audit.facts == {'another_host_name': 'foo_host'}
assert host_audit.organization == job_with_counted_event.organization
assert host_audit.organization == bare_job.organization
assert host_audit.name == 'vm-foo'
@pytest.mark.django_db
def test_multiple_events_same_module_same_host(bare_job, event_query):
"This tests that the count field gives correct answers"
create_registered_event(bare_job)
create_registered_event(bare_job)
create_registered_event(bare_job)
save_indirect_host_entries(bare_job.id)
assert IndirectManagedNodeAudit.objects.filter(job=bare_job).count() == 1
host_audit = IndirectManagedNodeAudit.objects.filter(job=bare_job).first()
assert host_audit.count == 3
assert host_audit.events == ['demo.query.example']
@pytest.mark.django_db
def test_multiple_registered_modules(bare_job):
"This tests that the events will list multiple modules if more than 1 module from different collections is registered and used"
create_registered_event(bare_job, task_name='demo.query.example')
create_registered_event(bare_job, task_name='demo2.query.example')
# These take the place of using the event_query fixture
create_event_query(fqcn='demo.query')
create_event_query(fqcn='demo2.query')
save_indirect_host_entries(bare_job.id)
assert IndirectManagedNodeAudit.objects.filter(job=bare_job).count() == 1
host_audit = IndirectManagedNodeAudit.objects.filter(job=bare_job).first()
assert host_audit.count == 2
assert set(host_audit.events) == {'demo.query.example', 'demo2.query.example'}
@pytest.mark.django_db
def test_multiple_registered_modules_same_collection(bare_job):
"This tests that the events will list multiple modules if more than 1 module in same collection is registered and used"
create_registered_event(bare_job, task_name='demo.query.example')
create_registered_event(bare_job, task_name='demo.query.example2')
# Takes place of event_query fixture, doing manually here
EventQuery.objects.create(
fqcn='demo.query',
collection_version='1.0.1',
event_query=yaml.dump(
{
'demo.query.example': {'query': TEST_JQ},
'demo.query.example2': {'query': TEST_JQ},
},
default_flow_style=False,
),
)
save_indirect_host_entries(bare_job.id)
assert IndirectManagedNodeAudit.objects.filter(job=bare_job).count() == 1
host_audit = IndirectManagedNodeAudit.objects.filter(job=bare_job).first()
assert host_audit.count == 2
assert set(host_audit.events) == {'demo.query.example', 'demo.query.example2'}
assert set(host_audit.events) == set(all_task_names)
@pytest.mark.django_db


@@ -129,7 +129,7 @@ def podman_image_generator():
@pytest.fixture
def run_job_from_playbook(default_org, demo_inv, post, admin):
def _rf(test_name, playbook, local_path=None, scm_url=None):
def _rf(test_name, playbook, local_path=None, scm_url=None, jt_params=None):
project_name = f'{test_name} project'
jt_name = f'{test_name} JT: {playbook}'
@@ -166,9 +166,13 @@ def run_job_from_playbook(default_org, demo_inv, post, admin):
assert proj.get_project_path()
assert playbook in proj.playbooks
jt_data = {'name': jt_name, 'project': proj.id, 'playbook': playbook, 'inventory': demo_inv.id}
if jt_params:
jt_data.update(jt_params)
result = post(
reverse('api:job_template_list'),
{'name': jt_name, 'project': proj.id, 'playbook': playbook, 'inventory': demo_inv.id},
jt_data,
admin,
expect=201,
)


@@ -0,0 +1,64 @@
import pytest
from awx.main.tests.live.tests.conftest import wait_for_events
from awx.main.models import Job, Inventory
def assert_facts_populated(name):
job = Job.objects.filter(name__icontains=name).order_by('-created').first()
assert job is not None
wait_for_events(job)
inventory = job.inventory
assert inventory.hosts.count() > 0 # sanity
for host in inventory.hosts.all():
assert host.ansible_facts
@pytest.fixture
def general_facts_test(live_tmp_folder, run_job_from_playbook):
def _rf(slug, jt_params):
jt_params['use_fact_cache'] = True
standard_kwargs = dict(scm_url=f'file://{live_tmp_folder}/facts', jt_params=jt_params)
# GATHER FACTS
name = f'test_gather_ansible_facts_{slug}'
run_job_from_playbook(name, 'gather.yml', **standard_kwargs)
assert_facts_populated(name)
# KEEP FACTS
name = f'test_keep_ansible_facts_{slug}'
run_job_from_playbook(name, 'no_op.yml', **standard_kwargs)
assert_facts_populated(name)
# CLEAR FACTS
name = f'test_clear_ansible_facts_{slug}'
run_job_from_playbook(name, 'clear.yml', **standard_kwargs)
job = Job.objects.filter(name__icontains=name).order_by('-created').first()
assert job is not None
wait_for_events(job)
inventory = job.inventory
assert inventory.hosts.count() > 0 # sanity
for host in inventory.hosts.all():
assert not host.ansible_facts
return _rf
def test_basic_ansible_facts(general_facts_test):
general_facts_test('basic', {})
@pytest.fixture
def sliced_inventory():
inv, _ = Inventory.objects.get_or_create(name='inventory-to-slice')
if not inv.hosts.exists():
for i in range(10):
inv.hosts.create(name=f'sliced_host_{i}')
return inv
def test_slicing_with_facts(general_facts_test, sliced_inventory):
general_facts_test('sliced', {'job_slice_count': 3, 'inventory': sliced_inventory.id})
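The sliced test above exercises the fix for facts being cleared on hosts outside a job's slice. As an illustrative sketch only (the function name and round-robin modulo scheme here are assumptions for demonstration, not AWX's exact slicing code), restricting the fact cache to a slice's own hosts might look like:

```python
# Hypothetical sketch: assign hosts to 1-indexed slices round-robin by
# position, so each sliced job only touches the facts of its own hosts.
def hosts_for_slice(host_names, slice_number, slice_count):
    """Return the hosts belonging to 1-indexed slice `slice_number`."""
    if slice_count <= 1:
        return list(host_names)
    return [h for i, h in enumerate(host_names) if i % slice_count == slice_number - 1]


hosts = [f'sliced_host_{i}' for i in range(10)]
# Slice 1 of 3 sees only every third host starting at index 0 ...
assert hosts_for_slice(hosts, 1, 3) == ['sliced_host_0', 'sliced_host_3', 'sliced_host_6', 'sliced_host_9']
# ... and the three slices together cover the whole inventory exactly once.
assert sum(len(hosts_for_slice(hosts, n, 3)) for n in (1, 2, 3)) == 10
```

The bug being tested was essentially the absence of such a restriction: a clear-facts pass over the whole inventory instead of the slice's subset.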


@@ -34,7 +34,7 @@ def hosts(ref_time):
def test_start_job_fact_cache(hosts, tmpdir):
fact_cache = os.path.join(tmpdir, 'facts')
last_modified = start_fact_cache(hosts, fact_cache, timeout=0)
last_modified, _ = start_fact_cache(hosts, fact_cache, timeout=0)
for host in hosts:
filepath = os.path.join(fact_cache, host.name)
@@ -61,7 +61,7 @@ def test_fact_cache_with_invalid_path_traversal(tmpdir):
def test_start_job_fact_cache_past_timeout(hosts, tmpdir):
fact_cache = os.path.join(tmpdir, 'facts')
# the hosts fixture was modified 5s ago, which is more than 2s
last_modified = start_fact_cache(hosts, fact_cache, timeout=2)
last_modified, _ = start_fact_cache(hosts, fact_cache, timeout=2)
assert last_modified is None
for host in hosts:
@@ -71,7 +71,7 @@ def test_start_job_fact_cache_past_timeout(hosts, tmpdir):
def test_start_job_fact_cache_within_timeout(hosts, tmpdir):
fact_cache = os.path.join(tmpdir, 'facts')
# the hosts fixture was modified 5s ago, which is less than 7s
last_modified = start_fact_cache(hosts, fact_cache, timeout=7)
last_modified, _ = start_fact_cache(hosts, fact_cache, timeout=7)
assert last_modified
for host in hosts:
@@ -80,7 +80,7 @@ def test_start_job_fact_cache_within_timeout(hosts, tmpdir):
def test_finish_job_fact_cache_with_existing_data(hosts, mocker, tmpdir, ref_time):
fact_cache = os.path.join(tmpdir, 'facts')
last_modified = start_fact_cache(hosts, fact_cache, timeout=0)
last_modified, _ = start_fact_cache(hosts, fact_cache, timeout=0)
bulk_update = mocker.patch('django.db.models.query.QuerySet.bulk_update')
@@ -108,7 +108,7 @@ def test_finish_job_fact_cache_with_existing_data(hosts, mocker, tmpdir, ref_tim
def test_finish_job_fact_cache_with_bad_data(hosts, mocker, tmpdir):
fact_cache = os.path.join(tmpdir, 'facts')
last_modified = start_fact_cache(hosts, fact_cache, timeout=0)
last_modified, _ = start_fact_cache(hosts, fact_cache, timeout=0)
bulk_update = mocker.patch('django.db.models.query.QuerySet.bulk_update')
@@ -127,7 +127,7 @@ def test_finish_job_fact_cache_with_bad_data(hosts, mocker, tmpdir):
def test_finish_job_fact_cache_clear(hosts, mocker, ref_time, tmpdir):
fact_cache = os.path.join(tmpdir, 'facts')
last_modified = start_fact_cache(hosts, fact_cache, timeout=0)
last_modified, _ = start_fact_cache(hosts, fact_cache, timeout=0)
bulk_update = mocker.patch('django.db.models.query.QuerySet.bulk_update')


@@ -0,0 +1,162 @@
# -*- coding: utf-8 -*-
import os
import tempfile
import shutil
import pytest
from unittest import mock
from django.utils.timezone import now
from django.db.models.query import QuerySet
from awx.main.models import (
Inventory,
Host,
Job,
Organization,
Project,
)
from awx.main.tasks import jobs
@pytest.fixture
def private_data_dir():
private_data = tempfile.mkdtemp(prefix='awx_')
for subfolder in ('inventory', 'env'):
runner_subfolder = os.path.join(private_data, subfolder)
os.makedirs(runner_subfolder, exist_ok=True)
yield private_data
shutil.rmtree(private_data, ignore_errors=True)
@mock.patch('awx.main.tasks.facts.update_hosts')
@mock.patch('awx.main.tasks.facts.settings')
@mock.patch('awx.main.tasks.jobs.create_partition', return_value=True)
def test_pre_post_run_hook_facts(mock_create_partition, mock_facts_settings, update_hosts, private_data_dir, execution_environment):
# creates inventory_object with two hosts
inventory = Inventory(pk=1)
mock_inventory = mock.MagicMock(spec=Inventory, wraps=inventory)
mock_inventory._state = mock.MagicMock()
qs_hosts = QuerySet()
hosts = [
Host(id=1, name='host1', ansible_facts={"a": 1, "b": 2}, ansible_facts_modified=now(), inventory=mock_inventory),
Host(id=2, name='host2', ansible_facts={"a": 1, "b": 2}, ansible_facts_modified=now(), inventory=mock_inventory),
]
qs_hosts._result_cache = hosts
qs_hosts.only = mock.MagicMock(return_value=hosts)
mock_inventory.hosts = qs_hosts
assert mock_inventory.hosts.count() == 2
# creates job object with fact_cache enabled
org = Organization(pk=1)
proj = Project(pk=1, organization=org)
job = mock.MagicMock(spec=Job, use_fact_cache=True, project=proj, organization=org, job_slice_number=1, job_slice_count=1)
job.inventory = mock_inventory
job.execution_environment = execution_environment
job.get_hosts_for_fact_cache = Job.get_hosts_for_fact_cache.__get__(job) # to run original method
job.job_env.get = mock.MagicMock(return_value=private_data_dir)
# creates the task object with job object as instance
mock_facts_settings.ANSIBLE_FACT_CACHE_TIMEOUT = False # defines timeout to false
task = jobs.RunJob()
task.instance = job
task.update_model = mock.Mock(return_value=job)
task.model.objects.get = mock.Mock(return_value=job)
# run pre_run_hook
task.facts_write_time = task.pre_run_hook(job, private_data_dir)
# updates inventory with one more host
hosts.append(Host(id=3, name='host3', ansible_facts={"added": True}, ansible_facts_modified=now(), inventory=mock_inventory))
assert mock_inventory.hosts.count() == 3
# run post_run_hook
task.runner_callback.artifacts_processed = mock.MagicMock(return_value=True)
task.post_run_hook(job, "success")
assert mock_inventory.hosts[2].ansible_facts == {"added": True}
@mock.patch('awx.main.tasks.facts.update_hosts')
@mock.patch('awx.main.tasks.facts.settings')
@mock.patch('awx.main.tasks.jobs.create_partition', return_value=True)
def test_pre_post_run_hook_facts_deleted_sliced(mock_create_partition, mock_facts_settings, update_hosts, private_data_dir, execution_environment):
# creates inventory_object with two hosts
inventory = Inventory(pk=1)
mock_inventory = mock.MagicMock(spec=Inventory, wraps=inventory)
mock_inventory._state = mock.MagicMock()
qs_hosts = QuerySet()
hosts = [Host(id=num, name=f'host{num}', ansible_facts={"a": 1, "b": 2}, ansible_facts_modified=now(), inventory=mock_inventory) for num in range(999)]
qs_hosts._result_cache = hosts
qs_hosts.only = mock.MagicMock(return_value=hosts)
mock_inventory.hosts = qs_hosts
assert mock_inventory.hosts.count() == 999
# creates job object with fact_cache enabled
org = Organization(pk=1)
proj = Project(pk=1, organization=org)
job = mock.MagicMock(spec=Job, use_fact_cache=True, project=proj, organization=org, job_slice_number=1, job_slice_count=3)
job.inventory = mock_inventory
job.execution_environment = execution_environment
job.get_hosts_for_fact_cache = Job.get_hosts_for_fact_cache.__get__(job) # to run original method
job.job_env.get = mock.MagicMock(return_value=private_data_dir)
# creates the task object with job object as instance
mock_facts_settings.ANSIBLE_FACT_CACHE_TIMEOUT = False
task = jobs.RunJob()
task.instance = job
task.update_model = mock.Mock(return_value=job)
task.model.objects.get = mock.Mock(return_value=job)
# run pre_run_hook
task.facts_write_time = task.pre_run_hook(job, private_data_dir)
hosts.pop(1)
assert mock_inventory.hosts.count() == 998
# run post_run_hook
task.runner_callback.artifacts_processed = mock.MagicMock(return_value=True)
task.post_run_hook(job, "success")
for host in hosts:
assert host.ansible_facts == {"a": 1, "b": 2}
failures = []
for host in hosts:
try:
assert host.ansible_facts == {"a": 1, "b": 2, "unexpected_key": "bad"}
except AssertionError:
failures.append("Host named {} has facts {}".format(host.name, host.ansible_facts))
assert len(failures) > 0, f"Failures occurred for the following hosts: {failures}"
@mock.patch('awx.main.tasks.facts.update_hosts')
@mock.patch('awx.main.tasks.facts.settings')
def test_invalid_host_facts(mock_facts_settings, update_hosts, private_data_dir, execution_environment):
inventory = Inventory(pk=1)
mock_inventory = mock.MagicMock(spec=Inventory, wraps=inventory)
mock_inventory._state = mock.MagicMock()
hosts = [
Host(id=0, name='host0', ansible_facts={"a": 1, "b": 2}, ansible_facts_modified=now(), inventory=mock_inventory),
Host(id=1, name='host1', ansible_facts={"a": 1, "b": 2, "unexpected_key": "bad"}, ansible_facts_modified=now(), inventory=mock_inventory),
]
mock_inventory.hosts = hosts
failures = []
for host in mock_inventory.hosts:
assert "a" in host.ansible_facts
if "unexpected_key" in host.ansible_facts:
failures.append(host.name)
mock_facts_settings.SOME_SETTING = True
update_hosts(mock_inventory.hosts)
with pytest.raises(pytest.fail.Exception):
if failures:
pytest.fail(f" {len(failures)} facts cleared failures : {','.join(failures)}")


@@ -4,6 +4,7 @@
# Python
import base64
import logging
import logging.handlers
import sys
import traceback
import os
@@ -27,6 +28,9 @@ from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
__all__ = ['RSysLogHandler', 'SpecialInventoryHandler', 'ColorHandler']
class RSysLogHandler(logging.handlers.SysLogHandler):
append_nul = False
@@ -109,39 +113,35 @@ class SpecialInventoryHandler(logging.Handler):
if settings.COLOR_LOGS is True:
try:
from logutils.colorize import ColorizingStreamHandler
import colorama
from logutils.colorize import ColorizingStreamHandler
import colorama
colorama.deinit()
colorama.init(wrap=False, convert=False, strip=False)
colorama.deinit()
colorama.init(wrap=False, convert=False, strip=False)
class ColorHandler(ColorizingStreamHandler):
def colorize(self, line, record):
# comment out this method if you don't like the job_lifecycle
# logs rendered with cyan text
previous_level_map = self.level_map.copy()
if record.name == "awx.analytics.job_lifecycle":
self.level_map[logging.INFO] = (None, 'cyan', True)
msg = super(ColorHandler, self).colorize(line, record)
self.level_map = previous_level_map
return msg
class ColorHandler(ColorizingStreamHandler):
def colorize(self, line, record):
# comment out this method if you don't like the job_lifecycle
# logs rendered with cyan text
previous_level_map = self.level_map.copy()
if record.name == "awx.analytics.job_lifecycle":
self.level_map[logging.INFO] = (None, 'cyan', True)
msg = super(ColorHandler, self).colorize(line, record)
self.level_map = previous_level_map
return msg
def format(self, record):
message = logging.StreamHandler.format(self, record)
return '\n'.join([self.colorize(line, record) for line in message.splitlines()])
def format(self, record):
message = logging.StreamHandler.format(self, record)
return '\n'.join([self.colorize(line, record) for line in message.splitlines()])
level_map = {
logging.DEBUG: (None, 'green', True),
logging.INFO: (None, None, True),
logging.WARNING: (None, 'yellow', True),
logging.ERROR: (None, 'red', True),
logging.CRITICAL: (None, 'red', True),
}
level_map = {
logging.DEBUG: (None, 'green', True),
logging.INFO: (None, None, True),
logging.WARNING: (None, 'yellow', True),
logging.ERROR: (None, 'red', True),
logging.CRITICAL: (None, 'red', True),
}
except ImportError:
# logutils is only used for colored logs in the dev environment
pass
else:
ColorHandler = logging.StreamHandler


@@ -80,10 +80,6 @@ LANGUAGE_CODE = 'en-us'
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale
USE_L10N = True
USE_TZ = True
STATICFILES_DIRS = [


@@ -257,6 +257,8 @@ def main():
copy_lookup_data = lookup_data
if organization:
lookup_data['organization'] = org_id
if user:
lookup_data['organization'] = None
credential = module.get_one('credentials', name_or_id=name, check_exists=(state == 'exists'), **{'data': lookup_data})
@@ -290,8 +292,11 @@ def main():
if inputs:
credential_fields['inputs'] = inputs
if description:
credential_fields['description'] = description
if description is not None:
credential_fields['description'] = description
if organization:
credential_fields['organization'] = org_id
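The change in this hunk (and the matching one in the credential_type module below) replaces a truthiness check with an explicit `None` check. The distinction matters because an empty string is falsy in Python, so clearing a description by passing `''` was silently dropped. A distilled illustration of the two behaviors (the function names here are illustrative, not from the module):

```python
# Why the change from truthiness to an explicit None check matters:
# an empty string should clear the description, but '' is falsy in Python.
def build_fields_truthy(description):
    fields = {}
    if description:  # old behavior: '' is skipped
        fields['description'] = description
    return fields


def build_fields_none_check(description):
    fields = {}
    if description is not None:  # new behavior: '' is kept
        fields['description'] = description
    return fields


assert build_fields_truthy('') == {}                       # '' silently dropped
assert build_fields_none_check('') == {'description': ''}  # '' clears the field
assert build_fields_none_check(None) == {}                 # unset stays unset
```

With the `None` check, an explicit empty string reaches the API and clears the field, while an omitted parameter still leaves it untouched.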


@@ -116,8 +116,11 @@ def main():
}
if kind:
credential_type_params['kind'] = kind
if module.params.get('description'):
credential_type_params['description'] = module.params.get('description')
if module.params.get('description') is not None:
credential_type_params['description'] = module.params.get('description')
if module.params.get('inputs'):
credential_type_params['inputs'] = module.params.get('inputs')
if module.params.get('injectors'):


@@ -47,6 +47,7 @@ These can be specified via (from highest to lowest precedence):
- direct module parameters
- environment variables (most useful when running against localhost)
- a config file path specified by the `tower_config_file` parameter
- a config file at `./tower_cli.cfg`, i.e. in the current directory
- a config file at `~/.tower_cli.cfg`
- a config file at `/etc/tower/tower_cli.cfg`
@@ -60,6 +61,15 @@ username = foo
password = bar
```
or like this:
```
host: https://localhost:8043
verify_ssl: true
oauth_token: <token>
```
## Release and Upgrade Notes
Notable releases of the `{{ collection_namespace }}.{{ collection_package }}` collection:


@@ -1,11 +0,0 @@
Copyright 2022 Rick van Hattem
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -15,10 +15,6 @@ markers =
filterwarnings =
error
# NOTE: The following are introduced upgrading python 3.11 to python 3.12
# FIXME: Upgrade django-polymorphic https://github.com/jazzband/django-polymorphic/pull/541
once:Deprecated call to `pkg_resources.declare_namespace\('sphinxcontrib'\)`.\nImplementing implicit namespace packages \(as specified in PEP 420\) is preferred to `pkg_resources.declare_namespace`.:DeprecationWarning
# FIXME: Upgrade protobuf https://github.com/protocolbuffers/protobuf/issues/15077
once:Type google._upb._message.* uses PyType_Spec with a metaclass that has custom tp_new:DeprecationWarning
@@ -29,9 +25,6 @@ filterwarnings =
# FIXME: Set `USE_TZ` to `True`.
once:The default value of USE_TZ will change from False to True in Django 5.0. Set USE_TZ to False in your project settings if you want to keep the current default behavior.:django.utils.deprecation.RemovedInDjango50Warning:django.conf
# FIXME: Delete this entry once `USE_L10N` use is removed.
once:The USE_L10N setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example Django will display numbers and dates using the format of the current locale.:django.utils.deprecation.RemovedInDjango50Warning:django.conf
# FIXME: Delete this entry once `pyparsing` is updated.
once:module 'sre_constants' is deprecated:DeprecationWarning:_pytest.assertion.rewrite
@@ -41,9 +34,6 @@ filterwarnings =
# FIXME: Delete this entry once `zope` is updated.
once:Deprecated call to `pkg_resources.declare_namespace.'zope'.`.\nImplementing implicit namespace packages .as specified in PEP 420. is preferred to `pkg_resources.declare_namespace`. See https.//setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages:DeprecationWarning:
# FIXME: Delete this entry once `coreapi` is updated.
once:'cgi' is deprecated and slated for removal in Python 3.13:DeprecationWarning:_pytest.assertion.rewrite
# FIXME: Delete this entry once the use of `distutils` is exterminated from the repo.
once:The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives:DeprecationWarning:_pytest.assertion.rewrite
@@ -79,12 +69,6 @@ filterwarnings =
# FIXME: in `awx/main/analytics/collectors.py` and then delete the entry.
once:distro.linux_distribution.. is deprecated. It should only be used as a compatibility shim with Python's platform.linux_distribution... Please use distro.id.., distro.version.. and distro.name.. instead.:DeprecationWarning:awx.main.analytics.collectors
# FIXME: Figure this out, fix and then delete the entry.
once:\nUsing ProtocolTypeRouter without an explicit "http" key is deprecated.\nGiven that you have not passed the "http" you likely should use Django's\nget_asgi_application...:DeprecationWarning:awx.main.routing
# FIXME: Figure this out, fix and then delete the entry.
once:Channel's inbuilt http protocol AsgiHandler is deprecated. Use Django's get_asgi_application.. instead.:DeprecationWarning:channels.routing
# FIXME: Use `codecs.open()` via a context manager
# FIXME: in `awx/main/utils/ansible.py` to close hanging file descriptors
# FIXME: and then delete the entry.


@@ -14,7 +14,7 @@ cryptography<42.0.0 # investigation is needed for 42+ to work with OpenSSL v3.0
Cython
daphne
distro
django==4.2.16 # CVE-2024-24680
django==4.2.20 # CVE-2025-26699
django-cors-headers
django-crum
django-extensions
@@ -54,7 +54,7 @@ python-tss-sdk>=1.2.1
pyyaml>=6.0.2 # require packing fix for cython 3 or higher
pyzstd # otel collector log file compression library
receptorctl
sqlparse>=0.4.4 # Required by django https://github.com/ansible/awx/security/dependabot/96
sqlparse>=0.5.2
redis[hiredis]
requests
slack-sdk
@@ -69,6 +69,3 @@ setuptools_scm[toml] # see UPGRADE BLOCKERs, xmlsec build dep
setuptools-rust>=0.11.4 # cryptography build dep
pkgconfig>=1.5.1 # xmlsec build dep - needed for offline build
django-flags>=5.0.13
# Temporarily added to use ansible-runner from git branch, to be removed
# when ansible-runner moves from requirements_git.txt to here
pbr


@@ -1,13 +1,13 @@
adal==1.2.7
# via msrestazure
aiohappyeyeballs==2.4.4
aiohappyeyeballs==2.6.1
# via aiohttp
aiohttp==3.11.11
aiohttp==3.11.16
# via
# -r /awx_devel/requirements/requirements.in
# aiohttp-retry
# twilio
aiohttp-retry==2.8.3
aiohttp-retry==2.9.1
# via twilio
aiosignal==1.3.2
# via aiohttp
@@ -25,9 +25,9 @@ asgiref==3.8.1
# django
# django-ansible-base
# django-cors-headers
asn1==2.7.1
asn1==3.0.0
# via -r /awx_devel/requirements/requirements.in
attrs==24.3.0
attrs==25.3.0
# via
# aiohttp
# jsonschema
@@ -46,14 +46,14 @@ awx-plugins.interfaces @ git+https://github.com/ansible/awx_plugins.interfaces.g
# via
# -r /awx_devel/requirements/requirements_git.txt
# awx-plugins-core
azure-core==1.32.0
azure-core==1.33.0
# via
# azure-identity
# azure-keyvault-certificates
# azure-keyvault-keys
# azure-keyvault-secrets
# msrest
azure-identity==1.19.0
azure-identity==1.21.0
# via -r /awx_devel/requirements/requirements.in
azure-keyvault==4.2.0
# via -r /awx_devel/requirements/requirements.in
@@ -65,14 +65,14 @@ azure-keyvault-secrets==4.9.0
# via azure-keyvault
backports-tarfile==1.2.0
# via jaraco-context
boto3==1.35.96
boto3==1.37.34
# via -r /awx_devel/requirements/requirements.in
botocore==1.35.96
botocore==1.37.34
# via
# -r /awx_devel/requirements/requirements.in
# boto3
# s3transfer
cachetools==5.5.0
cachetools==5.5.2
# via google-auth
# git+https://github.com/ansible/system-certifi.git@devel # git requirements installed separately
# via
@@ -84,7 +84,7 @@ cffi==1.17.1
# via
# cryptography
# pynacl
channels==4.2.0
channels==4.2.2
# via
# -r /awx_devel/requirements/requirements.in
# channels-redis
@@ -109,11 +109,11 @@ cryptography==41.0.7
# pyjwt
# pyopenssl
# service-identity
cython==3.0.11
cython==3.0.12
# via -r /awx_devel/requirements/requirements.in
daphne==4.1.2
# via -r /awx_devel/requirements/requirements.in
deprecated==1.2.15
deprecated==1.2.18
# via
# opentelemetry-api
# opentelemetry-exporter-otlp-proto-grpc
@@ -122,7 +122,7 @@ deprecated==1.2.15
# pygithub
distro==1.9.0
# via -r /awx_devel/requirements/requirements.in
django==4.2.16
django==4.2.20
# via
# -r /awx_devel/requirements/requirements.in
# channels
@@ -138,19 +138,19 @@ django==4.2.16
# djangorestframework
# django-ansible-base @ git+https://github.com/ansible/django-ansible-base@devel # git requirements installed separately
# via -r /awx_devel/requirements/requirements_git.txt
django-cors-headers==4.6.0
django-cors-headers==4.7.0
# via -r /awx_devel/requirements/requirements.in
django-crum==0.7.9
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
django-extensions==3.2.3
django-extensions==4.1
# via -r /awx_devel/requirements/requirements.in
django-flags==5.0.13
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
django-guid==3.5.0
django-guid==3.5.1
# via -r /awx_devel/requirements/requirements.in
django-oauth-toolkit==1.7.1
# via -r /awx_devel/requirements/requirements.in
@@ -158,7 +158,7 @@ django-polymorphic==3.1.0
# via -r /awx_devel/requirements/requirements.in
django-solo==2.4.0
# via -r /awx_devel/requirements/requirements.in
djangorestframework==3.15.2
djangorestframework==3.16.0
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
@@ -167,10 +167,12 @@ djangorestframework-yaml==2.0.0
durationpy==0.9
# via kubernetes
dynaconf==3.2.10
# via -r /awx_devel/requirements/requirements.in
# via
# -r /awx_devel/requirements/requirements.in
# django-ansible-base
enum-compat==0.0.3
# via asn1
filelock==3.16.1
filelock==3.18.0
# via -r /awx_devel/requirements/requirements.in
frozenlist==1.5.0
# via
@@ -180,13 +182,13 @@ gitdb==4.0.12
# via gitpython
gitpython==3.1.44
# via -r /awx_devel/requirements/requirements.in
google-auth==2.37.0
google-auth==2.39.0
# via kubernetes
googleapis-common-protos==1.66.0
googleapis-common-protos==1.70.0
# via
# opentelemetry-exporter-otlp-proto-grpc
# opentelemetry-exporter-otlp-proto-http
grpcio==1.69.0
grpcio==1.71.0
# via
# -r /awx_devel/requirements/requirements.in
# opentelemetry-exporter-otlp-proto-grpc
@@ -202,7 +204,7 @@ idna==3.10
# requests
# twisted
# yarl
importlib-metadata==8.5.0
importlib-metadata==8.6.1
# via opentelemetry-api
importlib-resources==6.5.2
# via irc
@@ -235,7 +237,7 @@ jaraco-text==4.0.0
# via
# irc
# jaraco-collections
jinja2==3.1.5
jinja2==3.1.6
# via -r /awx_devel/requirements/requirements.in
jmespath==1.0.1
# via
@@ -243,7 +245,7 @@ jmespath==1.0.1
# botocore
jq==1.8.0
# via -r /awx_devel/requirements/requirements.in
json-log-formatter==1.1
json-log-formatter==1.1.1
# via -r /awx_devel/requirements/requirements.in
jsonschema==4.23.0
# via -r /awx_devel/requirements/requirements.in
@@ -251,27 +253,27 @@ jsonschema-specifications==2024.10.1
# via jsonschema
jwcrypto==1.5.6
# via django-oauth-toolkit
kubernetes==31.0.0
kubernetes==32.0.1
# via openshift
lockfile==0.12.2
# via python-daemon
markdown==3.7
markdown==3.8
# via -r /awx_devel/requirements/requirements.in
markupsafe==3.0.2
# via jinja2
maturin==1.8.1
maturin==1.8.3
# via -r /awx_devel/requirements/requirements.in
more-itertools==10.5.0
more-itertools==10.6.0
# via
# irc
# jaraco-functools
# jaraco-stream
# jaraco-text
msal==1.31.1
msal==1.32.0
# via
# azure-identity
# msal-extensions
msal-extensions==1.2.0
msal-extensions==1.3.1
# via azure-identity
msgpack==1.1.0
# via
@@ -281,7 +283,7 @@ msrest==0.7.1
# via msrestazure
msrestazure==0.6.4.post1
# via -r /awx_devel/requirements/requirements.in
multidict==6.1.0
multidict==6.4.3
# via
# aiohttp
# yarl
@@ -292,7 +294,7 @@ oauthlib==3.2.2
# requests-oauthlib
openshift==0.13.2
# via -r /awx_devel/requirements/requirements.in
opentelemetry-api==1.29.0
opentelemetry-api==1.32.0
# via
# -r /awx_devel/requirements/requirements.in
# opentelemetry-exporter-otlp-proto-grpc
@@ -301,31 +303,31 @@ opentelemetry-api==1.29.0
# opentelemetry-instrumentation-logging
# opentelemetry-sdk
# opentelemetry-semantic-conventions
opentelemetry-exporter-otlp==1.29.0
opentelemetry-exporter-otlp==1.32.0
# via -r /awx_devel/requirements/requirements.in
opentelemetry-exporter-otlp-proto-common==1.29.0
opentelemetry-exporter-otlp-proto-common==1.32.0
# via
# opentelemetry-exporter-otlp-proto-grpc
# opentelemetry-exporter-otlp-proto-http
opentelemetry-exporter-otlp-proto-grpc==1.29.0
opentelemetry-exporter-otlp-proto-grpc==1.32.0
# via opentelemetry-exporter-otlp
opentelemetry-exporter-otlp-proto-http==1.29.0
opentelemetry-exporter-otlp-proto-http==1.32.0
# via opentelemetry-exporter-otlp
opentelemetry-instrumentation==0.50b0
opentelemetry-instrumentation==0.53b0
# via opentelemetry-instrumentation-logging
opentelemetry-instrumentation-logging==0.50b0
opentelemetry-instrumentation-logging==0.53b0
# via -r /awx_devel/requirements/requirements.in
opentelemetry-proto==1.29.0
opentelemetry-proto==1.32.0
# via
# opentelemetry-exporter-otlp-proto-common
# opentelemetry-exporter-otlp-proto-grpc
# opentelemetry-exporter-otlp-proto-http
opentelemetry-sdk==1.29.0
opentelemetry-sdk==1.32.0
# via
# -r /awx_devel/requirements/requirements.in
# opentelemetry-exporter-otlp-proto-grpc
# opentelemetry-exporter-otlp-proto-http
opentelemetry-semantic-conventions==0.50b0
opentelemetry-semantic-conventions==0.53b0
# via
# opentelemetry-instrumentation
# opentelemetry-sdk
@@ -334,29 +336,25 @@ packaging==24.2
# ansible-runner
# opentelemetry-instrumentation
# setuptools-scm
pbr==6.1.0
# via -r /awx_devel/requirements/requirements.in
pexpect==4.7.0
# via
# -r /awx_devel/requirements/requirements.in
# ansible-runner
pkgconfig==1.5.5
# via -r /awx_devel/requirements/requirements.in
portalocker==2.10.1
# via msal-extensions
prometheus-client==0.21.1
# via -r /awx_devel/requirements/requirements.in
propcache==0.2.1
propcache==0.3.1
# via
# aiohttp
# yarl
protobuf==5.29.3
protobuf==5.29.4
# via
# googleapis-common-protos
# opentelemetry-proto
psutil==6.1.1
psutil==7.0.0
# via -r /awx_devel/requirements/requirements.in
psycopg==3.2.3
psycopg==3.2.6
# via -r /awx_devel/requirements/requirements.in
ptyprocess==0.7.0
# via pexpect
@@ -365,7 +363,7 @@ pyasn1==0.6.1
# pyasn1-modules
# rsa
# service-identity
pyasn1-modules==0.4.1
pyasn1-modules==0.4.2
# via
# google-auth
# service-identity
@@ -384,7 +382,7 @@ pyjwt[crypto]==2.10.1
# twilio
pynacl==1.5.0
# via pygithub
pyopenssl==24.3.0
pyopenssl==25.0.0
# via
# -r /awx_devel/requirements/requirements.in
# twisted
@@ -407,7 +405,7 @@ python-string-utils==1.0.0
# via openshift
python-tss-sdk==1.2.3
# via -r /awx_devel/requirements/requirements.in
pytz==2024.2
pytz==2025.2
# via irc
pyyaml==6.0.2
# via
@@ -418,13 +416,13 @@ pyyaml==6.0.2
# receptorctl
pyzstd==0.16.2
# via -r /awx_devel/requirements/requirements.in
receptorctl==1.5.2
receptorctl==1.5.4
# via -r /awx_devel/requirements/requirements.in
redis[hiredis]==5.2.1
# via
# -r /awx_devel/requirements/requirements.in
# channels-redis
referencing==0.35.1
referencing==0.36.2
# via
# jsonschema
# jsonschema-specifications
@@ -448,21 +446,21 @@ requests-oauthlib==2.0.0
# via
# kubernetes
# msrest
rpds-py==0.22.3
rpds-py==0.24.0
# via
# jsonschema
# referencing
rsa==4.9
# via google-auth
s3transfer==0.10.4
s3transfer==0.11.4
# via boto3
semantic-version==2.10.0
# via setuptools-rust
service-identity==24.2.0
# via twisted
setuptools-rust==1.10.2
setuptools-rust==1.11.1
# via -r /awx_devel/requirements/requirements.in
setuptools-scm[toml]==8.1.0
setuptools-scm[toml]==8.2.0
# via -r /awx_devel/requirements/requirements.in
six==1.17.0
# via
@@ -472,7 +470,7 @@ six==1.17.0
# openshift
# pygerduty
# python-dateutil
slack-sdk==3.34.0
slack-sdk==3.35.0
# via -r /awx_devel/requirements/requirements.in
smmap==5.0.2
# via gitdb
@@ -485,7 +483,7 @@ tempora==5.8.0
# via
# irc
# jaraco-logging
twilio==9.4.2
twilio==9.5.2
# via -r /awx_devel/requirements/requirements.in
twisted[tls]==24.11.0
# via
@@ -493,7 +491,7 @@ twisted[tls]==24.11.0
# daphne
txaio==23.1.1
# via autobahn
typing-extensions==4.12.2
typing-extensions==4.13.2
# via
# azure-core
# azure-identity
@@ -504,15 +502,17 @@ typing-extensions==4.12.2
# opentelemetry-sdk
# psycopg
# pygithub
# pyopenssl
# referencing
# twisted
urllib3==2.3.0
urllib3==2.4.0
# via
# botocore
# django-ansible-base
# kubernetes
# pygithub
# requests
uwsgi==2.0.28
uwsgi==2.0.29
# via -r /awx_devel/requirements/requirements.in
uwsgitop==0.12
# via -r /awx_devel/requirements/requirements.in
@@ -520,11 +520,11 @@ websocket-client==1.8.0
# via kubernetes
wheel==0.45.1
# via -r /awx_devel/requirements/requirements.in
wrapt==1.17.0
wrapt==1.17.2
# via
# deprecated
# opentelemetry-instrumentation
yarl==1.18.3
yarl==1.19.0
# via aiohttp
zipp==3.21.0
# via importlib-metadata


@@ -1,5 +1,4 @@
build
coreapi
django-debug-toolbar==3.2.4
django-test-migrations
drf-yasg<1.21.10 # introduces new DeprecationWarning that is turned into error


@@ -1,5 +1,4 @@
git+https://github.com/ansible/system-certifi.git@devel#egg=certifi
# Remove pbr from requirements.in when moving ansible-runner to requirements.in
git+https://github.com/ansible/ansible-runner.git@devel#egg=ansible-runner
awx-plugins-core @ git+https://github.com/ansible/awx-plugins.git@devel#egg=awx-plugins-core[credentials-github-app]
django-ansible-base @ git+https://github.com/ansible/django-ansible-base@devel#egg=django-ansible-base[rest-filters,jwt_consumer,resource-registry,rbac,feature-flags]


@@ -16,4 +16,11 @@ Get the usage.
```
python generate-sheet.py -h
```
## Adding a GitHub Personal Access Token
The script first looks for a GitHub personal access token so that its API calls are not rate limited; you can create one or use an existing one. The script reads the PAT from the environment variable `GITHUB_ACCESS_TOKEN`.
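The token lookup described above can be sketched as follows. This is a minimal illustration, not the script's actual code; the `Authorization` header format is an assumption based on GitHub's token-based API authentication.

```python
import os

# Prefer a token from the environment; unauthenticated GitHub API calls
# are subject to a much lower rate limit.
token = os.environ.get("GITHUB_ACCESS_TOKEN")

# Attach the token to outgoing requests only when one is available.
headers = {"Authorization": f"Bearer {token}"} if token else {}
```

If the variable is unset, the script simply proceeds unauthenticated and may hit rate limits sooner.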
## For internal spreadsheet usage
AWX engineers will need to import the data generated by the script into a spreadsheet manager. Make sure you do not replace the existing sheets; instead, create a new spreadsheet or add a new sheet inside the existing one upon import.


@@ -1,2 +1,3 @@
requests
pyexcel
pyexcel-ods3


@@ -42,7 +42,6 @@ services:
DJANGO_SUPERUSER_PASSWORD: {{ admin_password }}
UWSGI_MOUNT_PATH: {{ ingress_path }}
DJANGO_COLORS: "${DJANGO_COLORS:-}"
DJANGO_SETTINGS_MODULE: "awx.settings"
{% if loop.index == 1 %}
RUN_MIGRATIONS: 1
{% endif %}