Mirror of https://github.com/ansible/awx.git (synced 2026-02-05 03:24:50 -03:30)

Compare commits: devel ... dependabot (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 96602d6c47 | |

.github/CODE_OF_CONDUCT.md (2 changes, vendored)
@@ -1,3 +1,3 @@
# Community Code of Conduct

Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
Please see the official [Ansible Community Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
.github/ISSUE_TEMPLATE/bug_report.yml (2 changes, vendored)
@@ -13,7 +13,7 @@ body:
    attributes:
      label: Please confirm the following
      options:
        - label: I agree to follow this project's [code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
        - label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
          required: true
        - label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
          required: true
.github/ISSUE_TEMPLATE/config.yml (2 changes, vendored)
@@ -5,7 +5,7 @@ contact_links:
    url: https://github.com/ansible/awx#get-involved
    about: For general debugging or technical support please see the Get Involved section of our readme.
  - name: 📝 Ansible Code of Conduct
    url: https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
    url: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser
    about: AWX uses the Ansible Code of Conduct; ❤ Be nice to other members of the community. ☮ Behave.
  - name: 💼 For Enterprise
    url: https://www.ansible.com/products/engine?utm_medium=github&utm_source=issue_template_chooser
.github/ISSUE_TEMPLATE/feature_request.yml (2 changes, vendored)
@@ -13,7 +13,7 @@ body:
    attributes:
      label: Please confirm the following
      options:
        - label: I agree to follow this project's [code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html).
        - label: I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
          required: true
        - label: I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
          required: true
.github/triage_replies.md (6 changes, vendored)
@@ -70,10 +70,10 @@ Thank you for your submission and for supporting AWX!
- Hello, we'd love to help, but we need a little more information about the problem you're having. Screenshots, log outputs, or any reproducers would be very helpful.

### Code of Conduct
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html

### EE Contents / Community General
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://docs.ansible.com/projects/builder/en/stable/ \
- Hello. The awx-ee contains the collections and dependencies needed for supported AWX features to function. Anything beyond that (like the community.general package) will require you to build your own EE. For information on how to do that, see https://ansible-builder.readthedocs.io/en/stable/ \
\
The Ansible Community is looking at building an EE that corresponds to all of the collections inside the ansible package. That may help you if and when it happens; see https://github.com/ansible-community/community-topics/issues/31 for details.

@@ -88,7 +88,7 @@ The Ansible Community is looking at building an EE that corresponds to all of th
- Hello, we think your idea is good! Please consider contributing a PR for this following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md

### Receptor
- You can find the receptor docs here: https://docs.ansible.com/projects/receptor/en/latest/
- You can find the receptor docs here: https://receptor.readthedocs.io/en/latest/
- Hello, your issue seems related to receptor. Could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!

### Ansible Engine not AWX
.github/workflows/ci.yml (57 changes, vendored)
@@ -183,19 +183,14 @@ jobs:
          path: awx-operator

      - name: Setup python, referencing action at awx relative path
        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065
        uses: ./awx/.github/actions/setup-python
        with:
          python-version: '3.12'
          python-version: '3.13'

      - name: Install playbook dependencies
        run: |
          python -m pip install docker

      - name: Check Python version
        working-directory: awx
        run: |
          make print-PYTHON

      - name: Build AWX image
        working-directory: awx
        run: |
@@ -207,59 +202,27 @@ jobs:

      - name: Run test deployment with awx-operator
        working-directory: awx-operator
        id: awx_operator_test
        timeout-minutes: 60
        continue-on-error: true
        run: |
          set +e
          timeout 15m bash -elc '
            python -m pip install -r molecule/requirements.txt
            python -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
            $(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
            ansible-galaxy collection install -r molecule/requirements.yml
            sudo rm -f $(which kustomize)
            make kustomize
            KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
          '
          rc=$?
          if [ $rc -eq 124 ]; then
            echo "timed_out=true" >> "$GITHUB_OUTPUT"
          fi
          exit $rc
          python -m pip install -r molecule/requirements.txt
          python -m pip install PyYAML # for awx/tools/scripts/rewrite-awx-operator-requirements.py
          $(realpath ../awx/tools/scripts/rewrite-awx-operator-requirements.py) molecule/requirements.yml $(realpath ../awx)
          ansible-galaxy collection install -r molecule/requirements.yml
          sudo rm -f $(which kustomize)
          make kustomize
          KUSTOMIZE_PATH=$(readlink -f bin/kustomize) molecule -v test -s kind -- --skip-tags=replicas
        env:
          AWX_TEST_IMAGE: local/awx
          AWX_TEST_VERSION: ci
          AWX_EE_TEST_IMAGE: quay.io/ansible/awx-ee:latest
          STORE_DEBUG_OUTPUT: true

      - name: Collect awx-operator logs on timeout
        # Only run on timeout; normal failures should use molecule's built-in log collection.
        if: steps.awx_operator_test.outputs.timed_out == 'true'
        run: |
          mkdir -p "$DEBUG_OUTPUT_DIR"
          if command -v kind >/dev/null 2>&1; then
            for cluster in $(kind get clusters 2>/dev/null); do
              kind export logs "$DEBUG_OUTPUT_DIR/$cluster" --name "$cluster" || true
            done
          fi
          if command -v kubectl >/dev/null 2>&1; then
            kubectl get all -A -o wide > "$DEBUG_OUTPUT_DIR/kubectl-get-all.txt" || true
            kubectl get pods -A -o wide > "$DEBUG_OUTPUT_DIR/kubectl-get-pods.txt" || true
            kubectl describe pods -A > "$DEBUG_OUTPUT_DIR/kubectl-describe-pods.txt" || true
          fi
          docker ps -a > "$DEBUG_OUTPUT_DIR/docker-ps.txt" || true

      - name: Upload debug output
        if: always()
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: awx-operator-debug-output
          path: ${{ env.DEBUG_OUTPUT_DIR }}

      - name: Fail awx-operator check if test deployment failed
        if: steps.awx_operator_test.outcome != 'success'
        run: exit 1

  collection-sanity:
    name: awx_collection sanity
    runs-on: ubuntu-latest
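The "Run test deployment with awx-operator" step in the hunk above wraps the molecule run in `timeout 15m` and records a `timed_out` step output so the log-collection step can fire only on timeouts. A rough Python rendering of that exit-code-124 pattern follows; it is an illustration only, not the workflow's code, and `make kustomize` is a placeholder for whatever command is wrapped.

```python
import os
import subprocess

# Sketch of the timeout-detection pattern from the step above.
# Assumptions: the wrapped command is a placeholder; GITHUB_OUTPUT is set by the runner.
proc = subprocess.run(["timeout", "15m", "bash", "-elc", "make kustomize"])

# GNU coreutils `timeout` exits with status 124 when the time limit was hit.
if proc.returncode == 124 and "GITHUB_OUTPUT" in os.environ:
    with open(os.environ["GITHUB_OUTPUT"], "a", encoding="utf-8") as fh:
        fh.write("timed_out=true\n")  # read later via steps.<id>.outputs.timed_out

raise SystemExit(proc.returncode)
```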
@@ -20,4 +20,4 @@ jobs:
        run: |
          ansible localhost -c local, -m command -a "{{ ansible_python_interpreter + ' -m pip install boto3'}}"
          ansible localhost -c local -m aws_s3 \
            -a "bucket=awx-public-ci-files object=${{ github.event.repository.name }}/${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"
            -a "bucket=awx-public-ci-files object=${GITHUB_REF##*/}/schema.json mode=delobj permission=public-read"
.github/workflows/sonarcloud_pr.yml (24 changes, vendored)
@@ -152,27 +152,11 @@ jobs:
          echo "All changed files in PR:"
          echo "$files"

          # Filter out files that are excluded by .coveragerc to avoid coverage conflicts
          # This prevents SonarCloud from analyzing files that have no coverage data
          # Convert to comma-separated list for sonar.inclusions
          if [ -n "$files" ]; then
            # Filter out files matching .coveragerc omit patterns
            filtered_files=$(echo "$files" | grep -v "settings/.*_defaults\.py$" | grep -v "settings/defaults\.py$" | grep -v "main/migrations/")

            # Show which files were filtered out for transparency
            excluded_files=$(echo "$files" | grep -E "(settings/.*_defaults\.py$|settings/defaults\.py$|main/migrations/)" || true)
            if [ -n "$excluded_files" ]; then
              echo "├── Filtered out (coverage-excluded): $(echo "$excluded_files" | wc -l) file(s)"
              echo "$excluded_files" | sed 's/^/│ - /'
            fi

            if [ -n "$filtered_files" ]; then
              inclusions=$(echo "$filtered_files" | tr '\n' ',' | sed 's/,$//')
              echo "SONAR_INCLUSIONS=$inclusions" >> $GITHUB_ENV
              echo "└── Result: ✅ Will scan these files (excluding coverage-omitted files): $inclusions"
            else
              echo "└── Result: ✅ All changed files are excluded by coverage config, running full SonarCloud analysis"
              # Don't set SONAR_INCLUSIONS, let it scan everything per sonar-project.properties
            fi
            inclusions=$(echo "$files" | tr '\n' ',' | sed 's/,$//')
            echo "SONAR_INCLUSIONS=$inclusions" >> $GITHUB_ENV
            echo "└── Result: ✅ Will scan these files: $inclusions"
          else
            echo "└── Result: ✅ Running SonarCloud analysis"
          fi
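The script in the hunk above narrows SonarCloud's `sonar.inclusions` to the PR's changed files minus anything omitted by `.coveragerc`, so files without coverage data are never scanned. A minimal Python sketch of the same filtering rule, assuming the omit patterns are exactly the three shown in the shell script:

```python
import re

# Patterns mirror the .coveragerc omit entries referenced above (an assumption
# about that file's contents, copied from the grep expressions in the script).
OMIT_PATTERNS = (r"settings/.*_defaults\.py$", r"settings/defaults\.py$", r"main/migrations/")


def sonar_inclusions(changed_files):
    """Return the comma-separated sonar.inclusions value, or None to fall back to a full scan."""
    kept = [path for path in changed_files if not any(re.search(p, path) for p in OMIT_PATTERNS)]
    return ",".join(kept) if kept else None


# Example: the migration is dropped, so only the serializer file would be scanned.
print(sonar_inclusions(["awx/main/migrations/0001_initial.py", "awx/api/serializers.py"]))
```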
.github/workflows/upload_schema.yml (2 changes, vendored)
@@ -42,7 +42,7 @@ jobs:
        with:
          command: cp
          source: ${{ github.workspace }}/schema.json
          destination: s3://awx-public-ci-files/${{ github.event.repository.name }}/${{ github.ref_name }}/schema.json
          destination: s3://awx-public-ci-files/${{ github.ref_name }}/schema.json
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_KEY }}
          aws_region: us-east-1
@@ -7,7 +7,7 @@ build:
  os: ubuntu-22.04
  tools:
    python: >-
      3.12
      3.11
  commands:
    - pip install --user tox
    - python3 -m tox -e docs --notest -v
@@ -31,7 +31,7 @@ Have questions about this document or anything not covered here? Create a topic
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If collaborating with someone else on the same branch, consider using `--force-with-lease` instead of `--force`. This will prevent you from accidentally overwriting commits pushed by someone else. For more information, see [git push docs](https://git-scm.com/docs/git-push#git-push---force-with-leaseltrefnamegt).
- If submitting a large code change, it's a good idea to create a [forum topic tagged with 'awx'](https://forum.ansible.com/tag/awx), and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)

## Setting up your development environment
Makefile (49 changes)
@@ -1,6 +1,6 @@
-include awx/ui/Makefile

PYTHON := $(notdir $(shell for i in python3.12 python3.11 python3; do command -v $$i; done|sed 1q))
PYTHON := $(notdir $(shell for i in python3.11 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker compose
OFFICIAL ?= no

@@ -79,7 +79,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
# These should be upgraded in the AWX and Ansible venv before attempting
# to install the actual requirements
VENV_BOOTSTRAP ?= pip==25.3 setuptools==80.9.0 setuptools_scm[toml]==9.2.2 wheel==0.45.1 cython==3.1.3
VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==80.9.0 setuptools_scm[toml]==8.0.4 wheel==0.42.0 cython==3.1.3

NAME ?= awx

@@ -107,8 +107,6 @@ else
endif

.PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
    update_requirements upgrade_requirements update_requirements_dev \
    docker_update_requirements docker_upgrade_requirements docker_update_requirements_dev \
    develop refresh adduser migrate dbchange \
    receiver test test_unit test_coverage coverage_html \
    sdist \

@@ -148,7 +146,7 @@ clean-api:
    rm -rf build $(NAME)-$(VERSION) *.egg-info
    rm -rf .tox
    find . -type f -regex ".*\.py[co]$$" -delete
    find . -type d -name "__pycache__" -exec rm -rf {} +
    find . -type d -name "__pycache__" -delete
    rm -f awx/awx_test.sqlite3*
    rm -rf requirements/vendor
    rm -rf awx/projects

@@ -198,36 +196,6 @@ requirements_dev: requirements_awx requirements_awx_dev

requirements_test: requirements

## Update requirements files using pip-compile (run inside container)
update_requirements:
    cd requirements && ./updater.sh run

## Upgrade all requirements to latest versions (run inside container)
upgrade_requirements:
    cd requirements && ./updater.sh upgrade

## Update development requirements (run inside container)
update_requirements_dev:
    cd requirements && ./updater.sh dev

## Update requirements using docker-runner
docker_update_requirements:
    @echo "Running requirements updater..."
    AWX_DOCKER_CMD='make update_requirements' $(MAKE) docker-runner
    @echo "Requirements update complete!"

## Upgrade requirements using docker-runner
docker_upgrade_requirements:
    @echo "Running requirements upgrader..."
    AWX_DOCKER_CMD='make upgrade_requirements' $(MAKE) docker-runner
    @echo "Requirements upgrade complete!"

## Update dev requirements using docker-runner
docker_update_requirements_dev:
    @echo "Running dev requirements updater..."
    AWX_DOCKER_CMD='make update_requirements_dev' $(MAKE) docker-runner
    @echo "Dev requirements update complete!"

## "Install" awx package in development mode.
develop:
    @if [ "$(VIRTUAL_ENV)" ]; then \

@@ -289,7 +257,7 @@ dispatcher:
    @if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/awx/bin/activate; \
    fi; \
    $(PYTHON) manage.py dispatcherd
    $(PYTHON) manage.py run_dispatcher

## Run to start the zeromq callback receiver
receiver:

@@ -571,10 +539,9 @@ docker-compose-runtest: awx/projects docker-compose-sources
docker-compose-build-schema: awx/projects docker-compose-sources
    $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 make genschema

SCHEMA_DIFF_BASE_FOLDER ?= awx
SCHEMA_DIFF_BASE_BRANCH ?= devel
detect-schema-change: genschema
    curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_FOLDER)/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
    curl https://s3.amazonaws.com/awx-public-ci-files/$(SCHEMA_DIFF_BASE_BRANCH)/schema.json -o reference-schema.json
    # Ignore differences in whitespace with -b
    # diff exits with 1 when files differ - capture but don't fail
    -diff -u -b reference-schema.json schema.json

@@ -610,7 +577,7 @@ docker-compose-build: Dockerfile.dev
docker-compose-buildx: Dockerfile.dev
    - docker buildx create --name docker-compose-buildx
    docker buildx use docker-compose-buildx
    docker buildx build \
    - docker buildx build \
        --ssh default=$(SSH_AUTH_SOCK) \
        --push \
        --build-arg BUILDKIT_INLINE_CACHE=1 \

@@ -670,7 +637,7 @@ awx-kube-build: Dockerfile
awx-kube-buildx: Dockerfile
    - docker buildx create --name awx-kube-buildx
    docker buildx use awx-kube-buildx
    docker buildx build \
    - docker buildx build \
        --ssh default=$(SSH_AUTH_SOCK) \
        --push \
        --build-arg VERSION=$(VERSION) \

@@ -704,7 +671,7 @@ awx-kube-dev-build: Dockerfile.kube-dev
awx-kube-dev-buildx: Dockerfile.kube-dev
    - docker buildx create --name awx-kube-dev-buildx
    docker buildx use awx-kube-dev-buildx
    docker buildx build \
    - docker buildx build \
        --ssh default=$(SSH_AUTH_SOCK) \
        --push \
        --build-arg BUILDKIT_INLINE_CACHE=1 \
@@ -1,4 +1,4 @@
[](https://github.com/ansible/awx/actions/workflows/ci.yml) [](https://codecov.io/github/ansible/awx) [](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html) [](https://github.com/ansible/awx/blob/devel/LICENSE.md) [](https://forum.ansible.com/tag/awx)
[](https://github.com/ansible/awx/actions/workflows/ci.yml) [](https://codecov.io/github/ansible/awx) [](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [](https://github.com/ansible/awx/blob/devel/LICENSE.md) [](https://forum.ansible.com/tag/awx)
[](https://chat.ansible.im/#/welcome) [](https://forum.ansible.com)

<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />

@@ -18,7 +18,7 @@ AWX provides a web-based user interface, REST API, and task engine built on top

To install AWX, please view the [Install guide](./INSTALL.md).

To learn more about using AWX, view the [AWX docs site](https://docs.ansible.com/projects/awx/en/latest/).
To learn more about using AWX, view the [AWX docs site](https://ansible.readthedocs.io/projects/awx/en/latest/).

The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).

@@ -41,11 +41,11 @@ If you're experiencing a problem that you feel is a bug in AWX or have ideas for
Code of Conduct
---------------

We require all of our community members and contributors to adhere to the [Ansible code of conduct](https://docs.ansible.com/projects/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
We require all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)

Get Involved
------------

We welcome your feedback and ideas via the [Ansible Forum](https://forum.ansible.com/tag/awx).

For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://docs.ansible.com/projects/awx/en/latest/contributor/communication.html).
For a full list of all the ways to talk with the Ansible Community, see the [AWX Communication guide](https://ansible.readthedocs.io/projects/awx/en/latest/contributor/communication.html).
@@ -7,6 +7,7 @@ from rest_framework import serializers
|
||||
# AWX
|
||||
from awx.conf import fields, register, register_validate
|
||||
|
||||
|
||||
register(
|
||||
'SESSION_COOKIE_AGE',
|
||||
field_class=fields.IntegerField,
|
||||
|
||||
@@ -21,7 +21,7 @@ class NullFieldMixin(object):
|
||||
"""
|
||||
|
||||
def validate_empty_values(self, data):
|
||||
is_empty_value, data = super(NullFieldMixin, self).validate_empty_values(data)
|
||||
(is_empty_value, data) = super(NullFieldMixin, self).validate_empty_values(data)
|
||||
if is_empty_value and data is None:
|
||||
return (False, data)
|
||||
return (is_empty_value, data)
|
||||
|
||||
@@ -764,7 +764,7 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
|
||||
return Response(status=status.HTTP_204_NO_CONTENT)
|
||||
|
||||
def unattach(self, request, *args, **kwargs):
|
||||
sub_id, res = self.unattach_validate(request)
|
||||
(sub_id, res) = self.unattach_validate(request)
|
||||
if res:
|
||||
return res
|
||||
return self.unattach_by_id(request, sub_id)
|
||||
@@ -1023,9 +1023,6 @@ class GenericCancelView(RetrieveAPIView):
|
||||
# In subclass set model, serializer_class
|
||||
obj_permission_type = 'cancel'
|
||||
|
||||
def get(self, request, *args, **kwargs):
|
||||
return super(GenericCancelView, self).get(request, *args, **kwargs)
|
||||
|
||||
@transaction.non_atomic_requests
|
||||
def dispatch(self, *args, **kwargs):
|
||||
return super(GenericCancelView, self).dispatch(*args, **kwargs)
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import MetricsView
|
||||
|
||||
|
||||
urls = [re_path(r'^$', MetricsView.as_view(), name='metrics_view')]
|
||||
|
||||
__all__ = ['urls']
|
||||
|
||||
@@ -111,7 +111,7 @@ class UnifiedJobEventPagination(Pagination):
|
||||
def __init__(self, *args, **kwargs):
|
||||
self.use_limit_paginator = False
|
||||
self.limit_pagination = LimitPagination()
|
||||
super().__init__(*args, **kwargs)
|
||||
return super().__init__(*args, **kwargs)
|
||||
|
||||
def paginate_queryset(self, queryset, request, view=None):
|
||||
if 'limit' in request.query_params:
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
import warnings
|
||||
|
||||
from rest_framework.permissions import IsAuthenticated
|
||||
from drf_spectacular.openapi import AutoSchema
|
||||
from drf_spectacular.views import (
|
||||
SpectacularAPIView,
|
||||
@@ -9,50 +8,6 @@ from drf_spectacular.views import (
|
||||
)
|
||||
|
||||
|
||||
def filter_credential_type_schema(
|
||||
result,
|
||||
generator, # NOSONAR
|
||||
request, # NOSONAR
|
||||
public, # NOSONAR
|
||||
):
|
||||
"""
|
||||
Postprocessing hook to filter CredentialType kind enum values.
|
||||
|
||||
For CredentialTypeRequest and PatchedCredentialTypeRequest schemas (POST/PUT/PATCH),
|
||||
filter the 'kind' enum to only show 'cloud' and 'net' values.
|
||||
|
||||
This ensures the OpenAPI schema accurately reflects that only 'cloud' and 'net'
|
||||
credential types can be created or modified via the API, matching the validation
|
||||
in CredentialTypeSerializer.validate().
|
||||
|
||||
Args:
|
||||
result: The OpenAPI schema dict to be modified
|
||||
generator, request, public: Required by drf-spectacular interface (unused)
|
||||
|
||||
Returns:
|
||||
The modified OpenAPI schema dict
|
||||
"""
|
||||
schemas = result.get('components', {}).get('schemas', {})
|
||||
|
||||
# Filter CredentialTypeRequest (POST/PUT) - field is required
|
||||
if 'CredentialTypeRequest' in schemas:
|
||||
kind_prop = schemas['CredentialTypeRequest'].get('properties', {}).get('kind', {})
|
||||
if 'enum' in kind_prop:
|
||||
# Filter to only cloud and net (no None - field is required)
|
||||
kind_prop['enum'] = ['cloud', 'net']
|
||||
kind_prop['description'] = "* `cloud` - Cloud\\n* `net` - Network"
|
||||
|
||||
# Filter PatchedCredentialTypeRequest (PATCH) - field is optional
|
||||
if 'PatchedCredentialTypeRequest' in schemas:
|
||||
kind_prop = schemas['PatchedCredentialTypeRequest'].get('properties', {}).get('kind', {})
|
||||
if 'enum' in kind_prop:
|
||||
# Filter to only cloud and net (None allowed - field can be omitted in PATCH)
|
||||
kind_prop['enum'] = ['cloud', 'net', None]
|
||||
kind_prop['description'] = "* `cloud` - Cloud\\n* `net` - Network"
|
||||
|
||||
return result
|
||||
|
||||
|
||||
class CustomAutoSchema(AutoSchema):
|
||||
"""Custom AutoSchema to add swagger_topic to tags and handle deprecated endpoints."""
|
||||
|
||||
@@ -91,29 +46,11 @@ class CustomAutoSchema(AutoSchema):
|
||||
return getattr(self.view, 'deprecated', False)
|
||||
|
||||
|
||||
class AuthenticatedSpectacularAPIView(SpectacularAPIView):
|
||||
"""SpectacularAPIView that requires authentication."""
|
||||
|
||||
permission_classes = [IsAuthenticated]
|
||||
|
||||
|
||||
class AuthenticatedSpectacularSwaggerView(SpectacularSwaggerView):
|
||||
"""SpectacularSwaggerView that requires authentication."""
|
||||
|
||||
permission_classes = [IsAuthenticated]
|
||||
|
||||
|
||||
class AuthenticatedSpectacularRedocView(SpectacularRedocView):
|
||||
"""SpectacularRedocView that requires authentication."""
|
||||
|
||||
permission_classes = [IsAuthenticated]
|
||||
|
||||
|
||||
# Schema view (returns OpenAPI schema JSON/YAML)
|
||||
schema_view = AuthenticatedSpectacularAPIView.as_view()
|
||||
schema_view = SpectacularAPIView.as_view()
|
||||
|
||||
# Swagger UI view
|
||||
swagger_ui_view = AuthenticatedSpectacularSwaggerView.as_view(url_name='api:schema-json')
|
||||
swagger_ui_view = SpectacularSwaggerView.as_view(url_name='api:schema-json')
|
||||
|
||||
# ReDoc UI view
|
||||
redoc_view = AuthenticatedSpectacularRedocView.as_view(url_name='api:schema-json')
|
||||
redoc_view = SpectacularRedocView.as_view(url_name='api:schema-json')
|
||||
|
||||
@@ -963,13 +963,13 @@ class UnifiedJobSerializer(BaseSerializer):
|
||||
|
||||
class UnifiedJobListSerializer(UnifiedJobSerializer):
|
||||
class Meta:
|
||||
fields = ('*', '-job_args', '-job_cwd', '-job_env', '-result_traceback', '-event_processing_finished', '-artifacts')
|
||||
fields = ('*', '-job_args', '-job_cwd', '-job_env', '-result_traceback', '-event_processing_finished')
|
||||
|
||||
def get_field_names(self, declared_fields, info):
|
||||
field_names = super(UnifiedJobListSerializer, self).get_field_names(declared_fields, info)
|
||||
# Meta multiple inheritance and -field_name options don't seem to be
|
||||
# taking effect above, so remove the undesired fields here.
|
||||
return tuple(x for x in field_names if x not in ('job_args', 'job_cwd', 'job_env', 'result_traceback', 'event_processing_finished', 'artifacts'))
|
||||
return tuple(x for x in field_names if x not in ('job_args', 'job_cwd', 'job_env', 'result_traceback', 'event_processing_finished'))
|
||||
|
||||
def get_types(self):
|
||||
if type(self) is UnifiedJobListSerializer:
|
||||
@@ -1230,7 +1230,7 @@ class OrganizationSerializer(BaseSerializer, OpaQueryPathMixin):
|
||||
# to a team. This provides a hint to the ui so it can know to not
|
||||
# display these roles for team role selection.
|
||||
for key in ('admin_role', 'member_role'):
|
||||
if summary_dict and key in summary_dict.get('object_roles', {}):
|
||||
if key in summary_dict.get('object_roles', {}):
|
||||
summary_dict['object_roles'][key]['user_only'] = True
|
||||
|
||||
return summary_dict
|
||||
@@ -2165,13 +2165,13 @@ class BulkHostDeleteSerializer(serializers.Serializer):
|
||||
attrs['hosts_data'] = attrs['host_qs'].values()
|
||||
|
||||
if len(attrs['host_qs']) == 0:
|
||||
error_hosts = dict.fromkeys(attrs['hosts'], "Hosts do not exist or you lack permission to delete it")
|
||||
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in attrs['hosts']}
|
||||
raise serializers.ValidationError({'hosts': error_hosts})
|
||||
|
||||
if len(attrs['host_qs']) < len(attrs['hosts']):
|
||||
hosts_exists = [host['id'] for host in attrs['hosts_data']]
|
||||
failed_hosts = list(set(attrs['hosts']).difference(hosts_exists))
|
||||
error_hosts = dict.fromkeys(failed_hosts, "Hosts do not exist or you lack permission to delete it")
|
||||
error_hosts = {host: "Hosts do not exist or you lack permission to delete it" for host in failed_hosts}
|
||||
raise serializers.ValidationError({'hosts': error_hosts})
|
||||
|
||||
# Getting all inventories that the hosts can be in
|
||||
@@ -3527,7 +3527,7 @@ class JobRelaunchSerializer(BaseSerializer):
|
||||
choices=NEW_JOB_TYPE_CHOICES,
|
||||
write_only=True,
|
||||
)
|
||||
credential_passwords = VerbatimField(required=False, write_only=True)
|
||||
credential_passwords = VerbatimField(required=True, write_only=True)
|
||||
|
||||
class Meta:
|
||||
model = Job
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{% if content_only %}<div class="nocode ansi_fore ansi_back{% if dark %} ansi_dark{% endif %}">{% else %}
|
||||
<!DOCTYPE HTML>
|
||||
<html lang="en">
|
||||
<html>
|
||||
<head>
|
||||
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
|
||||
<title>{{ title }}</title>
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import ActivityStreamList, ActivityStreamDetail
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', ActivityStreamList.as_view(), name='activity_stream_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', ActivityStreamDetail.as_view(), name='activity_stream_detail'),
|
||||
|
||||
@@ -14,6 +14,7 @@ from awx.api.views import (
|
||||
AdHocCommandStdout,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', AdHocCommandList.as_view(), name='ad_hoc_command_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', AdHocCommandDetail.as_view(), name='ad_hoc_command_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import AdHocCommandEventDetail
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', AdHocCommandEventDetail.as_view(), name='ad_hoc_command_event_detail'),
|
||||
]
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
import awx.api.views.analytics as analytics
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', analytics.AnalyticsRootView.as_view(), name='analytics_root_view'),
|
||||
re_path(r'^authorized/$', analytics.AnalyticsAuthorizedView.as_view(), name='analytics_authorized'),
|
||||
|
||||
@@ -16,6 +16,7 @@ from awx.api.views import (
|
||||
CredentialExternalTest,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', CredentialList.as_view(), name='credential_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', CredentialActivityStreamList.as_view(), name='credential_activity_stream_list'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import CredentialInputSourceDetail, CredentialInputSourceList
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', CredentialInputSourceList.as_view(), name='credential_input_source_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', CredentialInputSourceDetail.as_view(), name='credential_input_source_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import CredentialTypeList, CredentialTypeDetail, CredentialTypeCredentialList, CredentialTypeActivityStreamList, CredentialTypeExternalTest
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', CredentialTypeList.as_view(), name='credential_type_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', CredentialTypeDetail.as_view(), name='credential_type_detail'),
|
||||
|
||||
@@ -8,6 +8,7 @@ from awx.api.views import (
|
||||
ExecutionEnvironmentActivityStreamList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', ExecutionEnvironmentList.as_view(), name='execution_environment_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', ExecutionEnvironmentDetail.as_view(), name='execution_environment_detail'),
|
||||
|
||||
@@ -18,6 +18,7 @@ from awx.api.views import (
|
||||
GroupAdHocCommandsList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', GroupList.as_view(), name='group_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', GroupDetail.as_view(), name='group_detail'),
|
||||
|
||||
@@ -18,6 +18,7 @@ from awx.api.views import (
|
||||
HostAdHocCommandEventsList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', HostList.as_view(), name='host_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', HostDetail.as_view(), name='host_detail'),
|
||||
|
||||
@@ -14,6 +14,7 @@ from awx.api.views import (
|
||||
)
|
||||
from awx.api.views.instance_install_bundle import InstanceInstallBundle
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', InstanceList.as_view(), name='instance_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', InstanceDetail.as_view(), name='instance_detail'),
|
||||
|
||||
@@ -12,6 +12,7 @@ from awx.api.views import (
|
||||
InstanceGroupObjectRolesList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', InstanceGroupList.as_view(), name='instance_group_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'),
|
||||
|
||||
@@ -29,6 +29,7 @@ from awx.api.views import (
|
||||
InventoryVariableData,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', InventoryList.as_view(), name='inventory_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', InventoryDetail.as_view(), name='inventory_detail'),
|
||||
|
||||
@@ -18,6 +18,7 @@ from awx.api.views import (
|
||||
InventorySourceNotificationTemplatesSuccessList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', InventorySourceList.as_view(), name='inventory_source_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', InventorySourceDetail.as_view(), name='inventory_source_detail'),
|
||||
|
||||
@@ -15,6 +15,7 @@ from awx.api.views import (
|
||||
InventoryUpdateCredentialsList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', InventoryUpdateList.as_view(), name='inventory_update_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', InventoryUpdateDetail.as_view(), name='inventory_update_detail'),
|
||||
|
||||
@@ -19,6 +19,7 @@ from awx.api.views import (
|
||||
JobHostSummaryDetail,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', JobList.as_view(), name='job_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', JobDetail.as_view(), name='job_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import JobHostSummaryDetail
|
||||
|
||||
|
||||
urls = [re_path(r'^(?P<pk>[0-9]+)/$', JobHostSummaryDetail.as_view(), name='job_host_summary_detail')]
|
||||
|
||||
__all__ = ['urls']
|
||||
|
||||
@@ -23,6 +23,7 @@ from awx.api.views import (
|
||||
JobTemplateCopy,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', JobTemplateList.as_view(), name='job_template_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', JobTemplateDetail.as_view(), name='job_template_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views.labels import LabelList, LabelDetail
|
||||
|
||||
|
||||
urls = [re_path(r'^$', LabelList.as_view(), name='label_list'), re_path(r'^(?P<pk>[0-9]+)/$', LabelDetail.as_view(), name='label_detail')]
|
||||
|
||||
__all__ = ['urls']
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import NotificationList, NotificationDetail
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', NotificationList.as_view(), name='notification_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', NotificationDetail.as_view(), name='notification_detail'),
|
||||
|
||||
@@ -11,6 +11,7 @@ from awx.api.views import (
|
||||
NotificationTemplateCopy,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', NotificationTemplateList.as_view(), name='notification_template_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', NotificationTemplateDetail.as_view(), name='notification_template_detail'),
|
||||
|
||||
@@ -27,6 +27,7 @@ from awx.api.views.organization import (
|
||||
)
|
||||
from awx.api.views import OrganizationCredentialList
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', OrganizationList.as_view(), name='organization_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', OrganizationDetail.as_view(), name='organization_detail'),
|
||||
|
||||
@@ -22,6 +22,7 @@ from awx.api.views import (
|
||||
ProjectCopy,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', ProjectList.as_view(), name='project_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', ProjectDetail.as_view(), name='project_detail'),
|
||||
|
||||
@@ -13,6 +13,7 @@ from awx.api.views import (
|
||||
ProjectUpdateEventsList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', ProjectUpdateList.as_view(), name='project_update_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', ProjectUpdateDetail.as_view(), name='project_update_detail'),
|
||||
|
||||
@@ -8,6 +8,7 @@ from awx.api.views import (
|
||||
ReceptorAddressDetail,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', ReceptorAddressesList.as_view(), name='receptor_addresses_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', ReceptorAddressDetail.as_view(), name='receptor_address_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import RoleList, RoleDetail, RoleUsersList, RoleTeamsList
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', RoleList.as_view(), name='role_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', RoleDetail.as_view(), name='role_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import ScheduleList, ScheduleDetail, ScheduleUnifiedJobsList, ScheduleCredentialsList, ScheduleLabelsList, ScheduleInstanceGroupList
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', ScheduleList.as_view(), name='schedule_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', ScheduleDetail.as_view(), name='schedule_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import SystemJobList, SystemJobDetail, SystemJobCancel, SystemJobNotificationsList, SystemJobEventsList
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', SystemJobList.as_view(), name='system_job_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', SystemJobDetail.as_view(), name='system_job_detail'),
|
||||
|
||||
@@ -14,6 +14,7 @@ from awx.api.views import (
|
||||
SystemJobTemplateNotificationTemplatesSuccessList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', SystemJobTemplateList.as_view(), name='system_job_template_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', SystemJobTemplateDetail.as_view(), name='system_job_template_detail'),
|
||||
|
||||
@@ -15,6 +15,7 @@ from awx.api.views import (
|
||||
TeamAccessList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', TeamList.as_view(), name='team_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', TeamDetail.as_view(), name='team_detail'),
|
||||
|
||||
@@ -148,12 +148,18 @@ v2_urls = [
|
||||
|
||||
app_name = 'api'
|
||||
|
||||
# Import schema views (needed for both development and testing)
|
||||
from awx.api.schema import schema_view, swagger_ui_view, redoc_view
|
||||
|
||||
urlpatterns = [
|
||||
re_path(r'^$', ApiRootView.as_view(), name='api_root_view'),
|
||||
re_path(r'^(?P<version>(v2))/', include(v2_urls)),
|
||||
re_path(r'^login/$', LoggedLoginView.as_view(template_name='rest_framework/login.html', extra_context={'inside_login_context': True}), name='login'),
|
||||
re_path(r'^logout/$', LoggedLogoutView.as_view(next_page='/api/', redirect_field_name='next'), name='logout'),
|
||||
# the docs/, schema-related endpoints used to be listed here but now exposed by DAB api_documentation app
|
||||
# Schema endpoints (available in all modes for API documentation and testing)
|
||||
re_path(r'^schema/$', schema_view, name='schema-json'),
|
||||
re_path(r'^swagger/$', swagger_ui_view, name='schema-swagger-ui'),
|
||||
re_path(r'^redoc/$', redoc_view, name='schema-redoc'),
|
||||
]
|
||||
|
||||
from awx.api.urls.debug import urls as debug_urls
|
||||
|
||||
@@ -2,6 +2,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views.webhooks import WebhookKeyView, GithubWebhookReceiver, GitlabWebhookReceiver, BitbucketDcWebhookReceiver
|
||||
|
||||
|
||||
urlpatterns = [
|
||||
re_path(r'^webhook_key/$', WebhookKeyView.as_view(), name='webhook_key'),
|
||||
re_path(r'^github/$', GithubWebhookReceiver.as_view(), name='webhook_receiver_github'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import WorkflowApprovalList, WorkflowApprovalDetail, WorkflowApprovalApprove, WorkflowApprovalDeny
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', WorkflowApprovalList.as_view(), name='workflow_approval_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowApprovalDetail.as_view(), name='workflow_approval_detail'),
|
||||
|
||||
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.api.views import WorkflowApprovalTemplateDetail, WorkflowApprovalTemplateJobsList
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowApprovalTemplateDetail.as_view(), name='workflow_approval_template_detail'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/approvals/$', WorkflowApprovalTemplateJobsList.as_view(), name='workflow_approval_template_jobs_list'),
|
||||
|
||||
@@ -14,6 +14,7 @@ from awx.api.views import (
|
||||
WorkflowJobActivityStreamList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', WorkflowJobList.as_view(), name='workflow_job_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobDetail.as_view(), name='workflow_job_detail'),
|
||||
|
||||
@@ -14,6 +14,7 @@ from awx.api.views import (
|
||||
WorkflowJobNodeInstanceGroupsList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', WorkflowJobNodeList.as_view(), name='workflow_job_node_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobNodeDetail.as_view(), name='workflow_job_node_detail'),
|
||||
|
||||
@@ -22,6 +22,7 @@ from awx.api.views import (
|
||||
WorkflowJobTemplateLabelList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', WorkflowJobTemplateList.as_view(), name='workflow_job_template_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobTemplateDetail.as_view(), name='workflow_job_template_detail'),
|
||||
|
||||
@@ -15,6 +15,7 @@ from awx.api.views import (
|
||||
WorkflowJobTemplateNodeInstanceGroupsList,
|
||||
)
|
||||
|
||||
|
||||
urls = [
|
||||
re_path(r'^$', WorkflowJobTemplateNodeList.as_view(), name='workflow_job_template_node_list'),
|
||||
re_path(r'^(?P<pk>[0-9]+)/$', WorkflowJobTemplateNodeDetail.as_view(), name='workflow_job_template_node_detail'),
|
||||
|
||||
File diff suppressed because it is too large
@@ -15,8 +15,6 @@ from rest_framework import status
|
||||
|
||||
from collections import OrderedDict
|
||||
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
|
||||
AWX_ANALYTICS_API_PREFIX = 'analytics'
|
||||
|
||||
@@ -40,8 +38,6 @@ class MissingSettings(Exception):
|
||||
|
||||
|
||||
class GetNotAllowedMixin(object):
|
||||
skip_ai_description = True
|
||||
|
||||
def get(self, request, format=None):
|
||||
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
|
||||
|
||||
@@ -50,9 +46,7 @@ class AnalyticsRootView(APIView):
|
||||
permission_classes = (AnalyticsPermission,)
|
||||
name = _('Automation Analytics')
|
||||
swagger_topic = 'Automation Analytics'
|
||||
resource_purpose = 'automation analytics endpoints'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "A list of additional API endpoints related to analytics"})
|
||||
def get(self, request, format=None):
|
||||
data = OrderedDict()
|
||||
data['authorized'] = reverse('api:analytics_authorized', request=request)
|
||||
@@ -105,8 +99,6 @@ class AnalyticsGenericView(APIView):
|
||||
return Response(response.json(), status=response.status_code)
|
||||
"""
|
||||
|
||||
resource_purpose = 'base view for analytics api proxy'
|
||||
|
||||
permission_classes = (AnalyticsPermission,)
|
||||
|
||||
@staticmethod
|
||||
@@ -265,91 +257,67 @@ class AnalyticsGenericView(APIView):
|
||||
|
||||
|
||||
class AnalyticsGenericListView(AnalyticsGenericView):
|
||||
resource_purpose = 'analytics api proxy list view'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get analytics data from Red Hat Insights"})
|
||||
def get(self, request, format=None):
|
||||
return self._send_to_analytics(request, method="GET")
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Post query to Red Hat Insights analytics"})
|
||||
def post(self, request, format=None):
|
||||
return self._send_to_analytics(request, method="POST")
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get analytics endpoint options"})
|
||||
def options(self, request, format=None):
|
||||
return self._send_to_analytics(request, method="OPTIONS")
|
||||
|
||||
|
||||
class AnalyticsGenericDetailView(AnalyticsGenericView):
|
||||
resource_purpose = 'analytics api proxy detail view'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get specific analytics resource from Red Hat Insights"})
|
||||
def get(self, request, slug, format=None):
|
||||
return self._send_to_analytics(request, method="GET")
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Post query for specific analytics resource to Red Hat Insights"})
|
||||
def post(self, request, slug, format=None):
|
||||
return self._send_to_analytics(request, method="POST")
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get options for specific analytics resource"})
|
||||
def options(self, request, slug, format=None):
|
||||
return self._send_to_analytics(request, method="OPTIONS")
|
||||
|
||||
|
||||
@extend_schema_if_available(
|
||||
extensions={'x-ai-description': 'Check if the user has access to Red Hat Insights'},
|
||||
)
|
||||
class AnalyticsAuthorizedView(AnalyticsGenericListView):
|
||||
name = _("Authorized")
|
||||
resource_purpose = 'red hat insights authorization status'
|
||||
|
||||
|
||||
class AnalyticsReportsList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Reports")
|
||||
swagger_topic = "Automation Analytics"
|
||||
resource_purpose = 'automation analytics reports'
|
||||
|
||||
|
||||
class AnalyticsReportDetail(AnalyticsGenericDetailView):
|
||||
name = _("Report")
|
||||
resource_purpose = 'automation analytics report detail'
|
||||
|
||||
|
||||
class AnalyticsReportOptionsList(AnalyticsGenericListView):
|
||||
name = _("Report Options")
|
||||
resource_purpose = 'automation analytics report options'
|
||||
|
||||
|
||||
class AnalyticsAdoptionRateList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Adoption Rate")
|
||||
resource_purpose = 'automation analytics adoption rate data'
|
||||
|
||||
|
||||
class AnalyticsEventExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Event Explorer")
|
||||
resource_purpose = 'automation analytics event explorer data'
|
||||
|
||||
|
||||
class AnalyticsHostExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Host Explorer")
|
||||
resource_purpose = 'automation analytics host explorer data'
|
||||
|
||||
|
||||
class AnalyticsJobExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Job Explorer")
|
||||
resource_purpose = 'automation analytics job explorer data'
|
||||
|
||||
|
||||
class AnalyticsProbeTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Probe Templates")
|
||||
resource_purpose = 'automation analytics probe templates'
|
||||
|
||||
|
||||
class AnalyticsProbeTemplateForHostsList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("Probe Template For Hosts")
|
||||
resource_purpose = 'automation analytics probe templates for hosts'
|
||||
|
||||
|
||||
class AnalyticsRoiTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
|
||||
name = _("ROI Templates")
|
||||
resource_purpose = 'automation analytics roi templates'
|
||||
|
||||
@@ -1,7 +1,5 @@
|
||||
from collections import OrderedDict
|
||||
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
from django.utils.translation import gettext_lazy as _
|
||||
|
||||
from rest_framework.permissions import IsAuthenticated
|
||||
@@ -32,7 +30,6 @@ class BulkView(APIView):
|
||||
]
|
||||
allowed_methods = ['GET', 'OPTIONS']
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Retrieves a list of available bulk actions"})
|
||||
def get(self, request, format=None):
|
||||
'''List top level resources'''
|
||||
data = OrderedDict()
|
||||
@@ -48,13 +45,11 @@ class BulkJobLaunchView(GenericAPIView):
|
||||
serializer_class = serializers.BulkJobLaunchSerializer
|
||||
allowed_methods = ['GET', 'POST', 'OPTIONS']
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk job launch endpoint"})
|
||||
def get(self, request):
|
||||
data = OrderedDict()
|
||||
data['detail'] = "Specify a list of unified job templates to launch alongside their launchtime parameters"
|
||||
return Response(data, status=status.HTTP_200_OK)
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Bulk launch job templates"})
|
||||
def post(self, request):
|
||||
bulkjob_serializer = serializers.BulkJobLaunchSerializer(data=request.data, context={'request': request})
|
||||
if bulkjob_serializer.is_valid():
|
||||
@@ -69,11 +64,9 @@ class BulkHostCreateView(GenericAPIView):
|
||||
serializer_class = serializers.BulkHostCreateSerializer
|
||||
allowed_methods = ['GET', 'POST', 'OPTIONS']
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk host create endpoint"})
|
||||
def get(self, request):
|
||||
return Response({"detail": "Bulk create hosts with this endpoint"}, status=status.HTTP_200_OK)
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Bulk create hosts"})
|
||||
def post(self, request):
|
||||
serializer = serializers.BulkHostCreateSerializer(data=request.data, context={'request': request})
|
||||
if serializer.is_valid():
|
||||
@@ -88,11 +81,9 @@ class BulkHostDeleteView(GenericAPIView):
|
||||
serializer_class = serializers.BulkHostDeleteSerializer
|
||||
allowed_methods = ['GET', 'POST', 'OPTIONS']
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get information about bulk host delete endpoint"})
|
||||
def get(self, request):
|
||||
return Response({"detail": "Bulk delete hosts with this endpoint"}, status=status.HTTP_200_OK)
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Bulk delete hosts"})
|
||||
def post(self, request):
|
||||
serializer = serializers.BulkHostDeleteSerializer(data=request.data, context={'request': request})
|
||||
if serializer.is_valid():
|
||||
|
||||
@@ -5,7 +5,6 @@ from django.conf import settings
|
||||
from rest_framework.permissions import AllowAny
|
||||
from rest_framework.response import Response
|
||||
from awx.api.generics import APIView
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
from awx.main.scheduler import TaskManager, DependencyManager, WorkflowManager
|
||||
|
||||
@@ -15,9 +14,7 @@ class TaskManagerDebugView(APIView):
|
||||
exclude_from_schema = True
|
||||
permission_classes = [AllowAny]
|
||||
prefix = 'Task'
|
||||
resource_purpose = 'debug task manager'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Trigger task manager scheduling"})
|
||||
def get(self, request):
|
||||
TaskManager().schedule()
|
||||
if not settings.AWX_DISABLE_TASK_MANAGERS:
|
||||
@@ -32,9 +29,7 @@ class DependencyManagerDebugView(APIView):
|
||||
exclude_from_schema = True
|
||||
permission_classes = [AllowAny]
|
||||
prefix = 'Dependency'
|
||||
resource_purpose = 'debug dependency manager'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Trigger dependency manager scheduling"})
|
||||
def get(self, request):
|
||||
DependencyManager().schedule()
|
||||
if not settings.AWX_DISABLE_TASK_MANAGERS:
|
||||
@@ -49,9 +44,7 @@ class WorkflowManagerDebugView(APIView):
|
||||
exclude_from_schema = True
|
||||
permission_classes = [AllowAny]
|
||||
prefix = 'Workflow'
|
||||
resource_purpose = 'debug workflow manager'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Trigger workflow manager scheduling"})
|
||||
def get(self, request):
|
||||
WorkflowManager().schedule()
|
||||
if not settings.AWX_DISABLE_TASK_MANAGERS:
|
||||
@@ -65,9 +58,7 @@ class DebugRootView(APIView):
|
||||
_ignore_model_permissions = True
|
||||
exclude_from_schema = True
|
||||
permission_classes = [AllowAny]
|
||||
resource_purpose = 'debug endpoints root'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "List available debug endpoints"})
|
||||
def get(self, request, format=None):
|
||||
'''List of available debug urls'''
|
||||
data = OrderedDict()
|
||||
|
||||
@@ -10,7 +10,6 @@ import time
|
||||
import re
|
||||
|
||||
import asn1
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
from awx.api import serializers
|
||||
from awx.api.generics import GenericAPIView, Response
|
||||
from awx.api.permissions import IsSystemAdmin
|
||||
@@ -50,9 +49,7 @@ class InstanceInstallBundle(GenericAPIView):
|
||||
model = models.Instance
|
||||
serializer_class = serializers.InstanceSerializer
|
||||
permission_classes = (IsSystemAdmin,)
|
||||
resource_purpose = 'install bundle'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Generate and download install bundle for an instance"})
|
||||
def get(self, request, *args, **kwargs):
|
||||
instance_obj = self.get_object()
|
||||
|
||||
@@ -198,8 +195,8 @@ def generate_receptor_tls(instance_obj):
|
||||
.issuer_name(ca_cert.issuer)
|
||||
.public_key(csr.public_key())
|
||||
.serial_number(x509.random_serial_number())
|
||||
.not_valid_before(datetime.datetime.now(datetime.UTC))
|
||||
.not_valid_after(datetime.datetime.now(datetime.UTC) + datetime.timedelta(days=3650))
|
||||
.not_valid_before(datetime.datetime.utcnow())
|
||||
.not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
|
||||
.add_extension(
|
||||
csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).value,
|
||||
critical=csr.extensions.get_extension_for_class(x509.SubjectAlternativeName).critical,
|
||||
|
||||
@@ -19,8 +19,6 @@ from rest_framework import serializers
|
||||
# AWX
|
||||
from awx.main.models import ActivityStream, Inventory, JobTemplate, Role, User, InstanceGroup, InventoryUpdateEvent, InventoryUpdate
|
||||
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
from awx.api.generics import (
|
||||
ListCreateAPIView,
|
||||
RetrieveUpdateDestroyAPIView,
|
||||
@@ -45,6 +43,7 @@ from awx.api.views.mixin import RelatedJobsPreventDeleteMixin
|
||||
|
||||
from awx.api.pagination import UnifiedJobEventPagination
|
||||
|
||||
|
||||
logger = logging.getLogger('awx.api.views.organization')
|
||||
|
||||
|
||||
@@ -56,7 +55,6 @@ class InventoryUpdateEventsList(SubListAPIView):
|
||||
name = _('Inventory Update Events List')
|
||||
search_fields = ('stdout',)
|
||||
pagination_class = UnifiedJobEventPagination
|
||||
resource_purpose = 'events of an inventory update'
|
||||
|
||||
def get_queryset(self):
|
||||
iu = self.get_parent_object()
|
||||
@@ -71,17 +69,11 @@ class InventoryUpdateEventsList(SubListAPIView):
|
||||
class InventoryList(ListCreateAPIView):
|
||||
model = Inventory
|
||||
serializer_class = InventorySerializer
|
||||
resource_purpose = 'inventories'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "A list of inventories."})
|
||||
def get(self, request, *args, **kwargs):
|
||||
return super().get(request, *args, **kwargs)
|
||||
|
||||
|
||||
class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
|
||||
model = Inventory
|
||||
serializer_class = InventorySerializer
|
||||
resource_purpose = 'inventory detail'
|
||||
|
||||
def update(self, request, *args, **kwargs):
|
||||
obj = self.get_object()
|
||||
@@ -108,39 +100,33 @@ class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIVie
|
||||
|
||||
class ConstructedInventoryDetail(InventoryDetail):
|
||||
serializer_class = ConstructedInventorySerializer
|
||||
resource_purpose = 'constructed inventory detail'
|
||||
|
||||
|
||||
class ConstructedInventoryList(InventoryList):
|
||||
serializer_class = ConstructedInventorySerializer
|
||||
resource_purpose = 'constructed inventories'
|
||||
|
||||
def get_queryset(self):
|
||||
r = super().get_queryset()
|
||||
return r.filter(kind='constructed')
|
||||
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get or create input inventory inventory"})
|
||||
class InventoryInputInventoriesList(SubListAttachDetachAPIView):
|
||||
model = Inventory
|
||||
serializer_class = InventorySerializer
|
||||
parent_model = Inventory
|
||||
relationship = 'input_inventories'
|
||||
resource_purpose = 'input inventories of a constructed inventory'
|
||||
|
||||
def is_valid_relation(self, parent, sub, created=False):
|
||||
if sub.kind == 'constructed':
|
||||
raise serializers.ValidationError({'error': 'You cannot add a constructed inventory to another constructed inventory.'})
|
||||
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get activity stream for an inventory"})
|
||||
class InventoryActivityStreamList(SubListAPIView):
|
||||
model = ActivityStream
|
||||
serializer_class = ActivityStreamSerializer
|
||||
parent_model = Inventory
|
||||
relationship = 'activitystream_set'
|
||||
search_fields = ('changes',)
|
||||
resource_purpose = 'activity stream for an inventory'
|
||||
|
||||
def get_queryset(self):
|
||||
parent = self.get_parent_object()
|
||||
@@ -154,13 +140,11 @@ class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
|
||||
serializer_class = InstanceGroupSerializer
|
||||
parent_model = Inventory
|
||||
relationship = 'instance_groups'
|
||||
resource_purpose = 'instance groups of an inventory'
|
||||
|
||||
|
||||
class InventoryAccessList(ResourceAccessList):
|
||||
model = User # needs to be User for AccessList's
|
||||
parent_model = Inventory
|
||||
resource_purpose = 'users who can access the inventory'
|
||||
|
||||
|
||||
class InventoryObjectRolesList(SubListAPIView):
|
||||
@@ -169,7 +153,6 @@ class InventoryObjectRolesList(SubListAPIView):
|
||||
parent_model = Inventory
|
||||
search_fields = ('role_field', 'content_type__model')
|
||||
deprecated = True
|
||||
resource_purpose = 'roles of an inventory'
|
||||
|
||||
def get_queryset(self):
|
||||
po = self.get_parent_object()
|
||||
@@ -182,7 +165,6 @@ class InventoryJobTemplateList(SubListAPIView):
|
||||
serializer_class = JobTemplateSerializer
|
||||
parent_model = Inventory
|
||||
relationship = 'jobtemplates'
|
||||
resource_purpose = 'job templates using an inventory'
|
||||
|
||||
def get_queryset(self):
|
||||
parent = self.get_parent_object()
|
||||
@@ -193,10 +175,8 @@ class InventoryJobTemplateList(SubListAPIView):
|
||||
|
||||
class InventoryLabelList(LabelSubListCreateAttachDetachView):
|
||||
parent_model = Inventory
|
||||
resource_purpose = 'labels of an inventory'
|
||||
|
||||
|
||||
class InventoryCopy(CopyAPIView):
|
||||
model = Inventory
|
||||
copy_return_serializer_class = InventorySerializer
|
||||
resource_purpose = 'copy of an inventory'
|
||||
|
||||
@@ -2,7 +2,6 @@
|
||||
from awx.api.generics import SubListCreateAttachDetachAPIView, RetrieveUpdateAPIView, ListCreateAPIView
|
||||
from awx.main.models import Label
|
||||
from awx.api.serializers import LabelSerializer
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
# Django
|
||||
from django.utils.translation import gettext_lazy as _
|
||||
@@ -25,10 +24,9 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
|
||||
model = Label
|
||||
serializer_class = LabelSerializer
|
||||
relationship = 'labels'
|
||||
resource_purpose = 'labels of a resource'
|
||||
|
||||
def unattach(self, request, *args, **kwargs):
|
||||
sub_id, res = super().unattach_validate(request)
|
||||
(sub_id, res) = super().unattach_validate(request)
|
||||
if res:
|
||||
return res
|
||||
|
||||
@@ -41,7 +39,6 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
|
||||
|
||||
return res
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Create or attach a label to a resource"})
|
||||
def post(self, request, *args, **kwargs):
|
||||
# If a label already exists in the database, attach it instead of erroring out
|
||||
# that it already exists
|
||||
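The comment above describes attaching an existing label rather than failing on a duplicate. A hypothetical sketch of that attach-or-create pattern outside the view; the helper name and lookup fields are assumptions:

from awx.main.models import Label

def attach_or_create_label(parent, name, organization):
    # Hypothetical helper (not the view's actual code): reuse an existing
    # label with the same name/organization instead of raising a uniqueness
    # error, then attach it through the 'labels' relationship used above.
    label, created = Label.objects.get_or_create(name=name, organization=organization)
    parent.labels.add(label)
    return label, created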
@@ -64,11 +61,9 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
|
||||
class LabelDetail(RetrieveUpdateAPIView):
|
||||
model = Label
|
||||
serializer_class = LabelSerializer
|
||||
resource_purpose = 'label detail'
|
||||
|
||||
|
||||
class LabelList(ListCreateAPIView):
|
||||
name = _("Labels")
|
||||
model = Label
|
||||
serializer_class = LabelSerializer
|
||||
resource_purpose = 'labels'
|
||||
|
||||
@@ -2,7 +2,6 @@
|
||||
# All Rights Reserved.
|
||||
|
||||
from django.utils.translation import gettext_lazy as _
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
from awx.api.generics import APIView, Response
|
||||
from awx.api.permissions import IsSystemAdminOrAuditor
|
||||
@@ -14,9 +13,7 @@ class MeshVisualizer(APIView):
|
||||
name = _("Mesh Visualizer")
|
||||
permission_classes = (IsSystemAdminOrAuditor,)
|
||||
swagger_topic = "System Configuration"
|
||||
resource_purpose = 'mesh network topology visualization data'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get mesh network topology visualization data"})
|
||||
def get(self, request, format=None):
|
||||
data = {
|
||||
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
|
||||
|
||||
@@ -7,13 +7,13 @@ import logging
|
||||
# Django
|
||||
from django.conf import settings
|
||||
from django.utils.translation import gettext_lazy as _
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
# Django REST Framework
|
||||
from rest_framework.permissions import AllowAny
|
||||
from rest_framework.response import Response
|
||||
from rest_framework.exceptions import PermissionDenied
|
||||
|
||||
|
||||
# AWX
|
||||
# from awx.main.analytics import collectors
|
||||
import awx.main.analytics.subsystem_metrics as s_metrics
|
||||
@@ -22,13 +22,13 @@ from awx.api import renderers
|
||||
|
||||
from awx.api.generics import APIView
|
||||
|
||||
|
||||
logger = logging.getLogger('awx.analytics')
|
||||
|
||||
|
||||
class MetricsView(APIView):
|
||||
name = _('Metrics')
|
||||
swagger_topic = 'Metrics'
|
||||
resource_purpose = 'prometheus metrics data'
|
||||
|
||||
renderer_classes = [renderers.PlainTextRenderer, renderers.PrometheusJSONRenderer, renderers.BrowsableAPIRenderer]
|
||||
|
||||
@@ -37,7 +37,6 @@ class MetricsView(APIView):
|
||||
self.permission_classes = (AllowAny,)
|
||||
return super(APIView, self).initialize_request(request, *args, **kwargs)
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get Prometheus metrics data"})
|
||||
def get(self, request):
|
||||
'''Show Metrics Details'''
|
||||
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS or request.user.is_superuser or request.user.is_system_auditor:
|
||||
|
||||
@@ -60,13 +60,11 @@ logger = logging.getLogger('awx.api.views.organization')
|
||||
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
|
||||
model = Organization
|
||||
serializer_class = OrganizationSerializer
|
||||
resource_purpose = 'organizations'
|
||||
|
||||
|
||||
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
|
||||
model = Organization
|
||||
serializer_class = OrganizationSerializer
|
||||
resource_purpose = 'organization detail'
|
||||
|
||||
def get_serializer_context(self, *args, **kwargs):
|
||||
full_context = super(OrganizationDetail, self).get_serializer_context(*args, **kwargs)
|
||||
@@ -104,7 +102,6 @@ class OrganizationInventoriesList(SubListAPIView):
|
||||
serializer_class = InventorySerializer
|
||||
parent_model = Organization
|
||||
relationship = 'inventories'
|
||||
resource_purpose = 'inventories of an organization'
|
||||
|
||||
|
||||
class OrganizationUsersList(BaseUsersList):
|
||||
@@ -113,7 +110,6 @@ class OrganizationUsersList(BaseUsersList):
|
||||
parent_model = Organization
|
||||
relationship = 'member_role.members'
|
||||
ordering = ('username',)
|
||||
resource_purpose = 'users of an organization'
|
||||
|
||||
|
||||
class OrganizationAdminsList(BaseUsersList):
|
||||
@@ -122,7 +118,6 @@ class OrganizationAdminsList(BaseUsersList):
|
||||
parent_model = Organization
|
||||
relationship = 'admin_role.members'
|
||||
ordering = ('username',)
|
||||
resource_purpose = 'administrators of an organization'
|
||||
|
||||
|
||||
class OrganizationProjectsList(SubListCreateAPIView):
|
||||
@@ -130,7 +125,6 @@ class OrganizationProjectsList(SubListCreateAPIView):
|
||||
serializer_class = ProjectSerializer
|
||||
parent_model = Organization
|
||||
parent_key = 'organization'
|
||||
resource_purpose = 'projects of an organization'
|
||||
|
||||
|
||||
class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
|
||||
@@ -140,7 +134,6 @@ class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
|
||||
relationship = 'executionenvironments'
|
||||
parent_key = 'organization'
|
||||
swagger_topic = "Execution Environments"
|
||||
resource_purpose = 'execution environments of an organization'
|
||||
|
||||
|
||||
class OrganizationJobTemplatesList(SubListCreateAPIView):
|
||||
@@ -148,7 +141,6 @@ class OrganizationJobTemplatesList(SubListCreateAPIView):
|
||||
serializer_class = JobTemplateSerializer
|
||||
parent_model = Organization
|
||||
parent_key = 'organization'
|
||||
resource_purpose = 'job templates of an organization'
|
||||
|
||||
|
||||
class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
|
||||
@@ -156,7 +148,6 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
|
||||
serializer_class = WorkflowJobTemplateSerializer
|
||||
parent_model = Organization
|
||||
parent_key = 'organization'
|
||||
resource_purpose = 'workflow job templates of an organization'
|
||||
|
||||
|
||||
class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
|
||||
@@ -165,7 +156,6 @@ class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
|
||||
parent_model = Organization
|
||||
relationship = 'teams'
|
||||
parent_key = 'organization'
|
||||
resource_purpose = 'teams of an organization'
|
||||
|
||||
|
||||
class OrganizationActivityStreamList(SubListAPIView):
|
||||
@@ -174,7 +164,6 @@ class OrganizationActivityStreamList(SubListAPIView):
|
||||
parent_model = Organization
|
||||
relationship = 'activitystream_set'
|
||||
search_fields = ('changes',)
|
||||
resource_purpose = 'activity stream for an organization'
|
||||
|
||||
|
||||
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
|
||||
@@ -183,34 +172,28 @@ class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
|
||||
parent_model = Organization
|
||||
relationship = 'notification_templates'
|
||||
parent_key = 'organization'
|
||||
resource_purpose = 'notification templates of an organization'
|
||||
|
||||
|
||||
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
|
||||
model = NotificationTemplate
|
||||
serializer_class = NotificationTemplateSerializer
|
||||
parent_model = Organization
|
||||
resource_purpose = 'base view for notification templates of an organization'
|
||||
|
||||
|
||||
class OrganizationNotificationTemplatesStartedList(OrganizationNotificationTemplatesAnyList):
|
||||
relationship = 'notification_templates_started'
|
||||
resource_purpose = 'notification templates for job started events of an organization'
|
||||
|
||||
|
||||
class OrganizationNotificationTemplatesErrorList(OrganizationNotificationTemplatesAnyList):
|
||||
relationship = 'notification_templates_error'
|
||||
resource_purpose = 'notification templates for job error events of an organization'
|
||||
|
||||
|
||||
class OrganizationNotificationTemplatesSuccessList(OrganizationNotificationTemplatesAnyList):
|
||||
relationship = 'notification_templates_success'
|
||||
resource_purpose = 'notification templates for job success events of an organization'
|
||||
|
||||
|
||||
class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemplatesAnyList):
|
||||
relationship = 'notification_templates_approvals'
|
||||
resource_purpose = 'notification templates for workflow approval events of an organization'
|
||||
|
||||
|
||||
class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, SubListAttachDetachAPIView):
|
||||
@@ -219,7 +202,6 @@ class OrganizationInstanceGroupsList(OrganizationInstanceGroupMembershipMixin, S
|
||||
parent_model = Organization
|
||||
relationship = 'instance_groups'
|
||||
filter_read_permission = False
|
||||
resource_purpose = 'instance groups of an organization'
|
||||
|
||||
|
||||
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
|
||||
@@ -228,7 +210,6 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
|
||||
parent_model = Organization
|
||||
relationship = 'galaxy_credentials'
|
||||
filter_read_permission = False
|
||||
resource_purpose = 'galaxy credentials of an organization'
|
||||
|
||||
def is_valid_relation(self, parent, sub, created=False):
|
||||
if sub.kind != 'galaxy_api_token':
|
||||
@@ -238,7 +219,6 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
|
||||
class OrganizationAccessList(ResourceAccessList):
|
||||
model = User # needs to be User for AccessList's
|
||||
parent_model = Organization
|
||||
resource_purpose = 'users who can access the organization'
|
||||
|
||||
|
||||
class OrganizationObjectRolesList(SubListAPIView):
|
||||
@@ -247,7 +227,6 @@ class OrganizationObjectRolesList(SubListAPIView):
|
||||
parent_model = Organization
|
||||
search_fields = ('role_field', 'content_type__model')
|
||||
deprecated = True
|
||||
resource_purpose = 'roles of an organization'
|
||||
|
||||
def get_queryset(self):
|
||||
po = self.get_parent_object()
|
||||
|
||||
@@ -23,8 +23,6 @@ from rest_framework import status
|
||||
|
||||
import requests
|
||||
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
from awx import MODE
|
||||
from awx.api.generics import APIView
|
||||
from awx.conf.registry import settings_registry
|
||||
@@ -48,10 +46,8 @@ class ApiRootView(APIView):
|
||||
name = _('REST API')
|
||||
versioning_class = URLPathVersioning
|
||||
swagger_topic = 'Versioning'
|
||||
resource_purpose = 'api root and version information'
|
||||
|
||||
@method_decorator(ensure_csrf_cookie)
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "List supported API versions"})
|
||||
def get(self, request, format=None):
|
||||
'''List supported API versions'''
|
||||
v2 = reverse('api:api_v2_root_view', request=request, kwargs={'version': 'v2'})
|
||||
@@ -63,16 +59,14 @@ class ApiRootView(APIView):
|
||||
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
|
||||
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
|
||||
if MODE == 'development':
|
||||
data['docs'] = drf_reverse('api:schema-swagger-ui')
|
||||
data['swagger'] = drf_reverse('api:schema-swagger-ui')
|
||||
return Response(data)
|
||||
|
||||
|
||||
class ApiVersionRootView(APIView):
|
||||
permission_classes = (AllowAny,)
|
||||
swagger_topic = 'Versioning'
|
||||
resource_purpose = 'api top-level resources'
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "List top-level API resources"})
|
||||
def get(self, request, format=None):
|
||||
'''List top level resources'''
|
||||
data = OrderedDict()
|
||||
@@ -132,7 +126,6 @@ class ApiVersionRootView(APIView):
|
||||
|
||||
class ApiV2RootView(ApiVersionRootView):
|
||||
name = _('Version 2')
|
||||
resource_purpose = 'api v2 root'
|
||||
|
||||
|
||||
class ApiV2PingView(APIView):
|
||||
@@ -144,11 +137,7 @@ class ApiV2PingView(APIView):
|
||||
authentication_classes = ()
|
||||
name = _('Ping')
|
||||
swagger_topic = 'System Configuration'
|
||||
resource_purpose = 'basic instance information'
|
||||
|
||||
@extend_schema_if_available(
|
||||
extensions={'x-ai-description': 'Return basic information about this instance'},
|
||||
)
|
||||
def get(self, request, format=None):
|
||||
"""Return some basic information about this instance
|
||||
|
||||
@@ -183,16 +172,12 @@ class ApiV2SubscriptionView(APIView):
|
||||
permission_classes = (IsAuthenticated,)
|
||||
name = _('Subscriptions')
|
||||
swagger_topic = 'System Configuration'
|
||||
resource_purpose = 'aap subscription validation'
|
||||
|
||||
def check_permissions(self, request):
|
||||
super(ApiV2SubscriptionView, self).check_permissions(request)
|
||||
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
|
||||
self.permission_denied(request) # Raises PermissionDenied exception.
|
||||
|
||||
@extend_schema_if_available(
|
||||
extensions={'x-ai-description': 'List valid AAP subscriptions'},
|
||||
)
|
||||
def post(self, request):
|
||||
data = request.data.copy()
|
||||
|
||||
@@ -259,16 +244,12 @@ class ApiV2AttachView(APIView):
|
||||
permission_classes = (IsAuthenticated,)
|
||||
name = _('Attach Subscription')
|
||||
swagger_topic = 'System Configuration'
|
||||
resource_purpose = 'subscription attachment'
|
||||
|
||||
def check_permissions(self, request):
|
||||
super(ApiV2AttachView, self).check_permissions(request)
|
||||
if not request.user.is_superuser and request.method.lower() not in {'options', 'head'}:
|
||||
self.permission_denied(request) # Raises PermissionDenied exception.
|
||||
|
||||
@extend_schema_if_available(
|
||||
extensions={'x-ai-description': 'Attach a subscription'},
|
||||
)
|
||||
def post(self, request):
|
||||
data = request.data.copy()
|
||||
subscription_id = data.get('subscription_id', None)
|
||||
@@ -318,16 +299,12 @@ class ApiV2ConfigView(APIView):
|
||||
permission_classes = (IsAuthenticated,)
|
||||
name = _('Configuration')
|
||||
swagger_topic = 'System Configuration'
|
||||
resource_purpose = 'system configuration and license management'
|
||||
|
||||
def check_permissions(self, request):
|
||||
super(ApiV2ConfigView, self).check_permissions(request)
|
||||
if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}:
|
||||
self.permission_denied(request) # Raises PermissionDenied exception.
|
||||
|
||||
@extend_schema_if_available(
|
||||
extensions={'x-ai-description': 'Return various configuration settings'},
|
||||
)
|
||||
def get(self, request, format=None):
|
||||
'''Return various sitewide configuration settings'''
|
||||
|
||||
@@ -366,7 +343,6 @@ class ApiV2ConfigView(APIView):
|
||||
|
||||
return Response(data)
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Add or update a subscription manifest license"})
|
||||
def post(self, request):
|
||||
if not isinstance(request.data, dict):
|
||||
return Response({"error": _("Invalid subscription data")}, status=status.HTTP_400_BAD_REQUEST)
|
||||
@@ -412,9 +388,6 @@ class ApiV2ConfigView(APIView):
|
||||
logger.warning(smart_str(u"Invalid subscription submitted."), extra=dict(actor=request.user.username))
|
||||
return Response({"error": _("Invalid subscription")}, status=status.HTTP_400_BAD_REQUEST)
|
||||
|
||||
@extend_schema_if_available(
|
||||
extensions={'x-ai-description': 'Remove the current subscription'},
|
||||
)
|
||||
def delete(self, request):
|
||||
try:
|
||||
settings.LICENSE = {}
|
||||
|
||||
@@ -11,7 +11,6 @@ from rest_framework import status
|
||||
from rest_framework.exceptions import PermissionDenied
|
||||
from rest_framework.permissions import AllowAny
|
||||
from rest_framework.response import Response
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
from awx.api import serializers
|
||||
from awx.api.generics import APIView, GenericAPIView
|
||||
@@ -25,7 +24,6 @@ logger = logging.getLogger('awx.api.views.webhooks')
|
||||
class WebhookKeyView(GenericAPIView):
|
||||
serializer_class = serializers.EmptySerializer
|
||||
permission_classes = (WebhookKeyPermission,)
|
||||
resource_purpose = 'webhook key management'
|
||||
|
||||
def get_queryset(self):
|
||||
qs_models = {'job_templates': JobTemplate, 'workflow_job_templates': WorkflowJobTemplate}
|
||||
@@ -33,13 +31,11 @@ class WebhookKeyView(GenericAPIView):
|
||||
|
||||
return super().get_queryset()
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Get the webhook key for a template"})
|
||||
def get(self, request, *args, **kwargs):
|
||||
obj = self.get_object()
|
||||
|
||||
return Response({'webhook_key': obj.webhook_key})
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Rotate the webhook key for a template"})
|
||||
def post(self, request, *args, **kwargs):
|
||||
obj = self.get_object()
|
||||
obj.rotate_webhook_key()
|
||||
@@ -56,7 +52,6 @@ class WebhookReceiverBase(APIView):
|
||||
authentication_classes = ()
|
||||
|
||||
ref_keys = {}
|
||||
resource_purpose = 'webhook receiver for triggering jobs'
|
||||
|
||||
def get_queryset(self):
|
||||
qs_models = {'job_templates': JobTemplate, 'workflow_job_templates': WorkflowJobTemplate}
|
||||
@@ -132,8 +127,7 @@ class WebhookReceiverBase(APIView):
|
||||
raise PermissionDenied
|
||||
|
||||
@csrf_exempt
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Receive a webhook event and trigger a job"})
|
||||
def post(self, request, *args, **kwargs_in):
|
||||
def post(self, request, *args, **kwargs):
|
||||
# Ensure that the full contents of the request are captured for multiple uses.
|
||||
request.body
|
||||
|
||||
@@ -181,7 +175,6 @@ class WebhookReceiverBase(APIView):
|
||||
|
||||
class GithubWebhookReceiver(WebhookReceiverBase):
|
||||
service = 'github'
|
||||
resource_purpose = 'github webhook receiver'
|
||||
|
||||
ref_keys = {
|
||||
'pull_request': 'pull_request.head.sha',
|
||||
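The ref_keys mapping above points each event type at a dotted path inside the webhook payload. A rough sketch of resolving such a path; the dig() helper is hypothetical, not the receiver's actual traversal code:

from functools import reduce

# Example payload shaped like a GitHub pull_request event (trimmed).
payload = {'pull_request': {'head': {'sha': 'abc123'}}}

def dig(data, dotted_path):
    # Walk a dotted path; numeric segments index into lists, which matters
    # for keys like 'changes.0.toHash' used by the Bitbucket receiver below.
    return reduce(lambda d, k: d[int(k)] if isinstance(d, list) else d[k], dotted_path.split('.'), data)

print(dig(payload, 'pull_request.head.sha'))  # abc123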
@@ -219,7 +212,6 @@ class GithubWebhookReceiver(WebhookReceiverBase):
|
||||
|
||||
class GitlabWebhookReceiver(WebhookReceiverBase):
|
||||
service = 'gitlab'
|
||||
resource_purpose = 'gitlab webhook receiver'
|
||||
|
||||
ref_keys = {'Push Hook': 'checkout_sha', 'Tag Push Hook': 'checkout_sha', 'Merge Request Hook': 'object_attributes.last_commit.id'}
|
||||
|
||||
@@ -258,7 +250,6 @@ class GitlabWebhookReceiver(WebhookReceiverBase):
|
||||
|
||||
class BitbucketDcWebhookReceiver(WebhookReceiverBase):
|
||||
service = 'bitbucket_dc'
|
||||
resource_purpose = 'bitbucket data center webhook receiver'
|
||||
|
||||
ref_keys = {
|
||||
'repo:refs_changed': 'changes.0.toHash',
|
||||
|
||||
@@ -6,7 +6,7 @@ import urllib.parse as urlparse
|
||||
from collections import OrderedDict
|
||||
|
||||
# Django
|
||||
from django.core.validators import URLValidator, DomainNameValidator, _lazy_re_compile
|
||||
from django.core.validators import URLValidator, _lazy_re_compile
|
||||
from django.utils.translation import gettext_lazy as _
|
||||
|
||||
# Django REST Framework
|
||||
@@ -160,11 +160,10 @@ class StringListIsolatedPathField(StringListField):
|
||||
class URLField(CharField):
|
||||
# these lines set up a custom regex that allows numbers in the
|
||||
# top-level domain
|
||||
|
||||
tld_re = (
|
||||
r'\.' # dot
|
||||
r'(?!-)' # can't start with a dash
|
||||
r'(?:[a-z' + DomainNameValidator.ul + r'0-9' + '-]{2,63}' # domain label, this line was changed from the original URLValidator
|
||||
r'(?:[a-z' + URLValidator.ul + r'0-9' + '-]{2,63}' # domain label, this line was changed from the original URLValidator
|
||||
r'|xn--[a-z0-9]{1,59})' # or punycode label
|
||||
r'(?<!-)' # can't end with a dash
|
||||
r'\.?' # may have a trailing dot
|
||||
|
||||
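The custom tld_re fragment above differs from Django's stock URLValidator only by allowing digits in the top-level label. A standalone sketch using URLValidator.ul (the same unicode-letters range) and a made-up hostname:

import re
from django.core.validators import URLValidator

# Standalone rebuild of the fragment above; the '$' anchor is added only for
# this end-of-string check.
tld_re = (
    r'\.'                                            # dot
    r'(?!-)'                                         # can't start with a dash
    r'(?:[a-z' + URLValidator.ul + r'0-9-]{2,63}'    # label, digits allowed
    r'|xn--[a-z0-9]{1,59})'                          # or punycode label
    r'(?<!-)'                                        # can't end with a dash
    r'\.?$'                                          # may have a trailing dot
)

# 'tower.example42' is a made-up hostname; the digit in the top-level label
# is what the stock URLValidator pattern would not accept.
print(bool(re.search(tld_re, 'tower.example42', re.IGNORECASE)))  # True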
@@ -5,6 +5,7 @@ from django.urls import re_path
|
||||
|
||||
from awx.conf.views import SettingCategoryList, SettingSingletonDetail, SettingLoggingTest
|
||||
|
||||
|
||||
urlpatterns = [
|
||||
re_path(r'^$', SettingCategoryList.as_view(), name='setting_category_list'),
|
||||
re_path(r'^(?P<category_slug>[a-z0-9-]+)/$', SettingSingletonDetail.as_view(), name='setting_singleton_detail'),
|
||||
|
||||
@@ -31,7 +31,7 @@ from awx.conf.models import Setting
|
||||
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
|
||||
from awx.conf import settings_registry
|
||||
from awx.main.utils.external_logging import reconfigure_rsyslog
|
||||
from ansible_base.lib.utils.schema import extend_schema_if_available
|
||||
|
||||
|
||||
SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name'))
|
||||
|
||||
@@ -42,10 +42,6 @@ class SettingCategoryList(ListAPIView):
|
||||
filter_backends = []
|
||||
name = _('Setting Categories')
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "A list of additional API endpoints related to settings."})
|
||||
def get(self, request, *args, **kwargs):
|
||||
return super().get(request, *args, **kwargs)
|
||||
|
||||
def get_queryset(self):
|
||||
setting_categories = []
|
||||
categories = settings_registry.get_registered_categories()
|
||||
@@ -67,10 +63,6 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
|
||||
filter_backends = []
|
||||
name = _('Setting Detail')
|
||||
|
||||
@extend_schema_if_available(extensions={"x-ai-description": "Update system settings."})
|
||||
def patch(self, request, *args, **kwargs):
|
||||
return super().patch(request, *args, **kwargs)
|
||||
|
||||
def get_queryset(self):
|
||||
self.category_slug = self.kwargs.get('category_slug', 'all')
|
||||
all_category_slugs = list(settings_registry.get_registered_categories().keys())
|
||||
|
||||
@@ -1,17 +1,15 @@
|
||||
# Python
|
||||
import logging
|
||||
|
||||
# Dispatcherd
|
||||
from dispatcherd.publish import task
|
||||
|
||||
# AWX
|
||||
from awx.main.analytics.subsystem_metrics import DispatcherMetrics, CallbackReceiverMetrics
|
||||
from awx.main.dispatch.publish import task as task_awx
|
||||
from awx.main.dispatch import get_task_queuename
|
||||
|
||||
logger = logging.getLogger('awx.main.scheduler')
|
||||
|
||||
|
||||
@task(queue=get_task_queuename, timeout=300, on_duplicate='discard')
|
||||
@task_awx(queue=get_task_queuename)
|
||||
def send_subsystem_metrics():
|
||||
DispatcherMetrics().send_metrics()
|
||||
CallbackReceiverMetrics().send_metrics()
|
||||
|
||||
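The decorator swap above moves this task onto the dispatcherd publish API. A sketch of declaring another task the same way; the timeout and on_duplicate semantics are assumed from their names, and the function body is a placeholder:

from dispatcherd.publish import task
from awx.main.dispatch import get_task_queuename

# Placeholder task mirroring the decorator usage above; timeout is assumed to
# cap runtime and on_duplicate='discard' to drop duplicate submissions.
@task(queue=get_task_queuename, timeout=60, on_duplicate='discard')
def example_periodic_probe():
    pass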
@@ -1,6 +1,8 @@
|
||||
import datetime
|
||||
import asyncio
|
||||
import logging
|
||||
import redis
|
||||
import redis.asyncio
|
||||
import re
|
||||
|
||||
from prometheus_client import (
|
||||
@@ -13,7 +15,7 @@ from prometheus_client import (
|
||||
)
|
||||
|
||||
from django.conf import settings
|
||||
from awx.main.utils.redis import get_redis_client, get_redis_client_async
|
||||
|
||||
|
||||
BROADCAST_WEBSOCKET_REDIS_KEY_NAME = 'broadcast_websocket_stats'
|
||||
|
||||
@@ -64,8 +66,6 @@ class FixedSlidingWindow:
|
||||
|
||||
|
||||
class RelayWebsocketStatsManager:
|
||||
_redis_client = None # Cached Redis client for get_stats_sync()
|
||||
|
||||
def __init__(self, local_hostname):
|
||||
self._local_hostname = local_hostname
|
||||
self._stats = dict()
|
||||
@@ -80,7 +80,7 @@ class RelayWebsocketStatsManager:
|
||||
|
||||
async def run_loop(self):
|
||||
try:
|
||||
redis_conn = get_redis_client_async()
|
||||
redis_conn = await redis.asyncio.Redis.from_url(settings.BROKER_URL)
|
||||
while True:
|
||||
stats_data_str = ''.join(stat.serialize() for stat in self._stats.values())
|
||||
await redis_conn.set(self._redis_key, stats_data_str)
|
||||
@@ -103,10 +103,8 @@ class RelayWebsocketStatsManager:
|
||||
"""
|
||||
Stringified version of all the stats
|
||||
"""
|
||||
# Reuse cached Redis client to avoid creating new connection pools on every call
|
||||
if cls._redis_client is None:
|
||||
cls._redis_client = get_redis_client()
|
||||
stats_str = cls._redis_client.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
|
||||
redis_conn = redis.Redis.from_url(settings.BROKER_URL)
|
||||
stats_str = redis_conn.get(BROADCAST_WEBSOCKET_REDIS_KEY_NAME) or b''
|
||||
return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))
|
||||
|
||||
|
||||
|
||||
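The get_stats_sync() change above reuses one class-level Redis client instead of building a connection pool per call. A minimal sketch of that caching pattern, using the get_redis_client() helper imported in this diff:

from awx.main.utils.redis import get_redis_client

class StatsReader:
    # One client shared across calls; redis-py clients own a connection
    # pool, so caching the client avoids rebuilding a pool on every read.
    _redis_client = None

    @classmethod
    def get_stats_sync(cls, key):
        if cls._redis_client is None:
            cls._redis_client = get_redis_client()
        return cls._redis_client.get(key) or b''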
@@ -487,7 +487,9 @@ def unified_jobs_table(since, full_path, until, **kwargs):
|
||||
OR (main_unifiedjob.finished > '{0}' AND main_unifiedjob.finished <= '{1}'))
|
||||
AND main_unifiedjob.launch_type != 'sync'
|
||||
ORDER BY main_unifiedjob.id ASC) TO STDOUT WITH CSV HEADER
|
||||
'''.format(since.isoformat(), until.isoformat())
|
||||
'''.format(
|
||||
since.isoformat(), until.isoformat()
|
||||
)
|
||||
return _copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
|
||||
|
||||
|
||||
@@ -548,7 +550,9 @@ def workflow_job_node_table(since, full_path, until, **kwargs):
|
||||
) always_nodes ON main_workflowjobnode.id = always_nodes.from_workflowjobnode_id
|
||||
WHERE (main_workflowjobnode.modified > '{}' AND main_workflowjobnode.modified <= '{}')
|
||||
ORDER BY main_workflowjobnode.id ASC) TO STDOUT WITH CSV HEADER
|
||||
'''.format(since.isoformat(), until.isoformat())
|
||||
'''.format(
|
||||
since.isoformat(), until.isoformat()
|
||||
)
|
||||
return _copy_table(table='workflow_job_node', query=workflow_job_node_query, path=full_path)
|
||||
|
||||
|
||||
|
||||
@@ -1,41 +0,0 @@
|
||||
import http.client
|
||||
import socket
|
||||
import urllib.error
|
||||
import urllib.request
|
||||
import logging
|
||||
|
||||
from django.conf import settings
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def get_dispatcherd_metrics(request):
|
||||
metrics_cfg = settings.METRICS_SUBSYSTEM_CONFIG.get('server', {}).get(settings.METRICS_SERVICE_DISPATCHER, {})
|
||||
host = metrics_cfg.get('host', 'localhost')
|
||||
port = metrics_cfg.get('port', 8015)
|
||||
metrics_filter = []
|
||||
if request is not None and hasattr(request, "query_params"):
|
||||
try:
|
||||
nodes_filter = request.query_params.getlist("node")
|
||||
except Exception:
|
||||
nodes_filter = []
|
||||
if nodes_filter and settings.CLUSTER_HOST_ID not in nodes_filter:
|
||||
return ''
|
||||
try:
|
||||
metrics_filter = request.query_params.getlist("metric")
|
||||
except Exception:
|
||||
metrics_filter = []
|
||||
if metrics_filter:
|
||||
# Right now we have no way of filtering the dispatcherd metrics
|
||||
# so just avoid getting in the way if another metric is filtered for
|
||||
return ''
|
||||
url = f"http://{host}:{port}/metrics"
|
||||
try:
|
||||
with urllib.request.urlopen(url, timeout=1.0) as response:
|
||||
payload = response.read()
|
||||
if not payload:
|
||||
return ''
|
||||
return payload.decode('utf-8')
|
||||
except (urllib.error.URLError, UnicodeError, socket.timeout, TimeoutError, http.client.HTTPException) as exc:
|
||||
logger.debug(f"Failed to collect dispatcherd metrics from {url}: {exc}")
|
||||
return ''
|
||||
@@ -14,8 +14,6 @@ from rest_framework.request import Request
|
||||
|
||||
from awx.main.consumers import emit_channel_notification
|
||||
from awx.main.utils import is_testing
|
||||
from awx.main.utils.redis import get_redis_client
|
||||
from .dispatcherd_metrics import get_dispatcherd_metrics
|
||||
|
||||
root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
|
||||
logger = logging.getLogger('awx.main.analytics')
|
||||
@@ -200,8 +198,8 @@ class Metrics(MetricsNamespace):
|
||||
def __init__(self, namespace, auto_pipe_execute=False, instance_name=None, metrics_have_changed=True, **kwargs):
|
||||
MetricsNamespace.__init__(self, namespace)
|
||||
|
||||
self.conn = get_redis_client()
|
||||
self.pipe = self.conn.pipeline()
|
||||
self.pipe = redis.Redis.from_url(settings.BROKER_URL).pipeline()
|
||||
self.conn = redis.Redis.from_url(settings.BROKER_URL)
|
||||
self.last_pipe_execute = time.time()
|
||||
# track if metrics have been modified since last saved to redis
|
||||
# start with True so that we get an initial save to redis
|
||||
@@ -399,6 +397,11 @@ class DispatcherMetrics(Metrics):
|
||||
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
|
||||
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
|
||||
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
|
||||
# dispatcher subsystem metrics
|
||||
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
|
||||
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
|
||||
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
|
||||
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
|
||||
]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
@@ -426,12 +429,8 @@ class CallbackReceiverMetrics(Metrics):
|
||||
|
||||
def metrics(request):
|
||||
output_text = ''
|
||||
output_text += DispatcherMetrics().generate_metrics(request)
|
||||
output_text += CallbackReceiverMetrics().generate_metrics(request)
|
||||
|
||||
dispatcherd_metrics = get_dispatcherd_metrics(request)
|
||||
if dispatcherd_metrics:
|
||||
output_text += dispatcherd_metrics
|
||||
for m in [DispatcherMetrics(), CallbackReceiverMetrics()]:
|
||||
output_text += m.generate_metrics(request)
|
||||
return output_text
|
||||
|
||||
|
||||
@@ -481,6 +480,13 @@ class CallbackReceiverMetricsServer(MetricsServer):
|
||||
super().__init__(settings.METRICS_SERVICE_CALLBACK_RECEIVER, registry)
|
||||
|
||||
|
||||
class DispatcherMetricsServer(MetricsServer):
|
||||
def __init__(self):
|
||||
registry = CollectorRegistry(auto_describe=True)
|
||||
registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False)))
|
||||
super().__init__(settings.METRICS_SERVICE_DISPATCHER, registry)
|
||||
|
||||
|
||||
class WebsocketsMetricsServer(MetricsServer):
|
||||
def __init__(self):
|
||||
registry = CollectorRegistry(auto_describe=True)
|
||||
|
||||
@@ -82,7 +82,7 @@ class MainConfig(AppConfig):
|
||||
def configure_dispatcherd(self):
|
||||
"""This implements the default configuration for dispatcherd
|
||||
|
||||
If running the tasking service like awx-manage dispatcherd,
|
||||
If running the tasking service like awx-manage run_dispatcher,
|
||||
some additional config will be applied on top of this.
|
||||
This configuration provides the minimum needed for code to submit
|
||||
tasks via pg_notify so that those tasks can be run.
|
||||
|
||||
@@ -3,6 +3,7 @@ import logging
|
||||
import time
|
||||
import hmac
|
||||
import asyncio
|
||||
import redis
|
||||
|
||||
from django.core.serializers.json import DjangoJSONEncoder
|
||||
from django.conf import settings
|
||||
@@ -13,8 +14,6 @@ from channels.generic.websocket import AsyncJsonWebsocketConsumer
|
||||
from channels.layers import get_channel_layer
|
||||
from channels.db import database_sync_to_async
|
||||
|
||||
from awx.main.utils.redis import get_redis_client_async
|
||||
|
||||
logger = logging.getLogger('awx.main.consumers')
|
||||
XRF_KEY = '_auth_user_xrf'
|
||||
|
||||
@@ -41,10 +40,10 @@ class WebsocketSecretAuthHelper:
|
||||
@classmethod
|
||||
def verify_secret(cls, s, nonce_tolerance=300):
|
||||
try:
|
||||
prefix, payload = s.split(' ')
|
||||
(prefix, payload) = s.split(' ')
|
||||
if prefix != 'HMAC-SHA256':
|
||||
raise ValueError('Unsupported encryption algorithm')
|
||||
nonce_parsed, secret_parsed = payload.split(':')
|
||||
(nonce_parsed, secret_parsed) = payload.split(':')
|
||||
except Exception:
|
||||
raise ValueError("Failed to parse secret")
|
||||
|
||||
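verify_secret() above expects a value of the form 'HMAC-SHA256 &lt;nonce&gt;:&lt;digest&gt;'. A rough round-trip sketch; the secret, the nonce-as-timestamp format, and the exact signed payload are assumptions, not AWX's real construction:

import hashlib
import hmac
import time

shared_secret = b'not-the-real-broadcast-secret'  # assumption: any shared key

# Hypothetical sender side: a timestamp nonce signed with the shared secret,
# serialized as "HMAC-SHA256 <nonce>:<digest>".
nonce = str(int(time.time()))
digest = hmac.new(shared_secret, msg=nonce.encode(), digestmod=hashlib.sha256).hexdigest()
wire_value = f'HMAC-SHA256 {nonce}:{digest}'

# Receiver side, mirroring the parsing in verify_secret() above.
prefix, payload = wire_value.split(' ')
assert prefix == 'HMAC-SHA256'
nonce_parsed, secret_parsed = payload.split(':')
assert abs(time.time() - float(nonce_parsed)) < 300  # nonce_tolerance check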
@@ -95,9 +94,6 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
|
||||
await self.channel_layer.group_add(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
|
||||
logger.info(f"client '{self.channel_name}' joined the broadcast group.")
|
||||
|
||||
# Initialize Redis client once for reuse across all message handling
|
||||
self._redis_conn = get_redis_client_async()
|
||||
|
||||
async def disconnect(self, code):
|
||||
logger.info(f"client '{self.channel_name}' disconnected from the broadcast group.")
|
||||
await self.channel_layer.group_discard(settings.BROADCAST_WEBSOCKET_GROUP_NAME, self.channel_name)
|
||||
@@ -106,12 +102,11 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
|
||||
await self.send(event['text'])
|
||||
|
||||
async def receive_json(self, data):
|
||||
group, message = unwrap_broadcast_msg(data)
|
||||
(group, message) = unwrap_broadcast_msg(data)
|
||||
if group == "metrics":
|
||||
message = json.loads(message['text'])
|
||||
await self._redis_conn.set(
|
||||
settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics']
|
||||
)
|
||||
conn = redis.Redis.from_url(settings.BROKER_URL)
|
||||
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "-" + message['metrics_namespace'] + "_instance_" + message['instance'], message['metrics'])
|
||||
else:
|
||||
await self.channel_layer.group_send(group, message)
|
||||
|
||||
|
||||
@@ -77,13 +77,14 @@ class PubSub(object):
|
||||
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
|
||||
yield n
|
||||
|
||||
def events(self):
|
||||
def events(self, yield_timeouts=False):
|
||||
if not self.conn.autocommit:
|
||||
raise RuntimeError('Listening for events can only be done in autocommit mode')
|
||||
|
||||
while True:
|
||||
if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
|
||||
yield None
|
||||
if yield_timeouts:
|
||||
yield None
|
||||
else:
|
||||
notification_generator = self.current_notifies(self.conn)
|
||||
for notification in notification_generator:
|
||||
|
||||
@@ -2,7 +2,7 @@ from django.conf import settings
|
||||
|
||||
from ansible_base.lib.utils.db import get_pg_notify_params
|
||||
from awx.main.dispatch import get_task_queuename
|
||||
from awx.main.utils.common import get_auto_max_workers
|
||||
from awx.main.dispatch.pool import get_auto_max_workers
|
||||
|
||||
|
||||
def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False) -> dict:
|
||||
@@ -11,35 +11,28 @@ def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False
|
||||
Parameters:
|
||||
for_service: if True, include dynamic options needed for running the dispatcher service
|
||||
this will require database access, so you should delay evaluation until after app setup
|
||||
mock_publish: if True, use mock values that don't require database access
|
||||
this is used during tests to avoid database queries during app initialization
|
||||
"""
|
||||
# When mock_publish=True (e.g., during tests), use a default value to avoid
|
||||
# database access in get_auto_max_workers() which queries settings.IS_K8S
|
||||
if mock_publish:
|
||||
max_workers = 20 # Reasonable default for tests
|
||||
else:
|
||||
max_workers = get_auto_max_workers()
|
||||
|
||||
config = {
|
||||
"version": 2,
|
||||
"service": {
|
||||
"pool_kwargs": {
|
||||
"min_workers": settings.JOB_EVENT_WORKERS,
|
||||
"max_workers": max_workers,
|
||||
"max_workers": get_auto_max_workers(),
|
||||
},
|
||||
"main_kwargs": {"node_id": settings.CLUSTER_HOST_ID},
|
||||
"process_manager_cls": "ForkServerManager",
|
||||
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.prefork']},
|
||||
"process_manager_kwargs": {"preload_modules": ['awx.main.dispatch.hazmat']},
|
||||
},
|
||||
"brokers": {},
|
||||
"publish": {},
|
||||
"brokers": {
|
||||
"socket": {"socket_path": settings.DISPATCHERD_DEBUGGING_SOCKFILE},
|
||||
},
|
||||
"publish": {"default_control_broker": "socket"},
|
||||
"worker": {"worker_cls": "awx.main.dispatch.worker.dispatcherd.AWXTaskWorker"},
|
||||
}
|
||||
|
||||
if mock_publish:
|
||||
config["brokers"]["dispatcherd.testing.brokers.noop"] = {}
|
||||
config["publish"]["default_broker"] = "dispatcherd.testing.brokers.noop"
|
||||
config["brokers"]["noop"] = {}
|
||||
config["publish"]["default_broker"] = "noop"
|
||||
else:
|
||||
config["brokers"]["pg_notify"] = {
|
||||
"config": get_pg_notify_params(),
|
||||
@@ -56,11 +49,5 @@ def get_dispatcherd_config(for_service: bool = False, mock_publish: bool = False
|
||||
}
|
||||
|
||||
config["brokers"]["pg_notify"]["channels"] = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
|
||||
metrics_cfg = settings.METRICS_SUBSYSTEM_CONFIG.get('server', {}).get(settings.METRICS_SERVICE_DISPATCHER)
|
||||
if metrics_cfg:
|
||||
config["service"]["metrics_kwargs"] = {
|
||||
"host": metrics_cfg.get("host", "localhost"),
|
||||
"port": metrics_cfg.get("port", 8015),
|
||||
}
|
||||
|
||||
return config
|
||||
|
||||
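Since get_dispatcherd_config() above returns a plain dict, it can be dumped for inspection. A sketch assuming Django settings are already set up (for example inside awx-manage shell); the module path is assumed from this diff:

import json
from awx.main.dispatch.config import get_dispatcherd_config  # module path assumed

# mock_publish=True takes the no-database branch shown above, so this is safe
# to run without a dispatcher service; expect a no-op publish broker and the
# JOB_EVENT_WORKERS-based pool sizing in the output.
config = get_dispatcherd_config(mock_publish=True)
print(json.dumps(config, indent=2, default=str))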
78
awx/main/dispatch/control.py
Normal file
78
awx/main/dispatch/control.py
Normal file
@@ -0,0 +1,78 @@
|
||||
import logging
|
||||
import uuid
|
||||
import json
|
||||
|
||||
from django.conf import settings
|
||||
from django.db import connection
|
||||
import redis
|
||||
|
||||
from awx.main.dispatch import get_task_queuename
|
||||
|
||||
from . import pg_bus_conn
|
||||
|
||||
logger = logging.getLogger('awx.main.dispatch')
|
||||
|
||||
|
||||
class Control(object):
|
||||
services = ('dispatcher', 'callback_receiver')
|
||||
result = None
|
||||
|
||||
def __init__(self, service, host=None):
|
||||
if service not in self.services:
|
||||
raise RuntimeError('{} must be in {}'.format(service, self.services))
|
||||
self.service = service
|
||||
self.queuename = host or get_task_queuename()
|
||||
|
||||
def status(self, *args, **kwargs):
|
||||
r = redis.Redis.from_url(settings.BROKER_URL)
|
||||
if self.service == 'dispatcher':
|
||||
stats = r.get(f'awx_{self.service}_statistics') or b''
|
||||
return stats.decode('utf-8')
|
||||
else:
|
||||
workers = []
|
||||
for key in r.keys('awx_callback_receiver_statistics_*'):
|
||||
workers.append(r.get(key).decode('utf-8'))
|
||||
return '\n'.join(workers)
|
||||
|
||||
def running(self, *args, **kwargs):
|
||||
return self.control_with_reply('running', *args, **kwargs)
|
||||
|
||||
def cancel(self, task_ids, with_reply=True):
|
||||
if with_reply:
|
||||
return self.control_with_reply('cancel', extra_data={'task_ids': task_ids})
|
||||
else:
|
||||
self.control({'control': 'cancel', 'task_ids': task_ids, 'reply_to': None}, extra_data={'task_ids': task_ids})
|
||||
|
||||
def schedule(self, *args, **kwargs):
|
||||
return self.control_with_reply('schedule', *args, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def generate_reply_queue_name(cls):
|
||||
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
|
||||
|
||||
def control_with_reply(self, command, timeout=5, extra_data=None):
|
||||
logger.warning('checking {} {} for {}'.format(self.service, command, self.queuename))
|
||||
reply_queue = Control.generate_reply_queue_name()
|
||||
self.result = None
|
||||
|
||||
if not connection.get_autocommit():
|
||||
raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')
|
||||
|
||||
with pg_bus_conn(select_timeout=timeout) as conn:
|
||||
conn.listen(reply_queue)
|
||||
send_data = {'control': command, 'reply_to': reply_queue}
|
||||
if extra_data:
|
||||
send_data.update(extra_data)
|
||||
conn.notify(self.queuename, json.dumps(send_data))
|
||||
|
||||
for reply in conn.events(yield_timeouts=True):
|
||||
if reply is None:
|
||||
logger.error(f'{self.service} did not reply within {timeout}s')
|
||||
raise RuntimeError(f"{self.service} did not reply within {timeout}s")
|
||||
break
|
||||
|
||||
return json.loads(reply.payload)
|
||||
|
||||
def control(self, msg, **kwargs):
|
||||
with pg_bus_conn() as conn:
|
||||
conn.notify(self.queuename, json.dumps(msg))
|
||||
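The new Control helper above publishes control messages over pg_notify and waits on a temporary reply channel. A usage sketch, assuming a dispatcher service is listening on this node's queue:

from awx.main.dispatch.control import Control

# Cached statistics are read straight from Redis; no service round-trip.
print(Control('dispatcher').status())

# control_with_reply() round-trips through pg_notify and raises RuntimeError
# if nothing answers within the timeout.
print(Control('dispatcher').running(timeout=5))

# Fire-and-forget cancel: no reply channel is set up.
Control('dispatcher').cancel(task_ids=['<task-uuid>'], with_reply=False)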
@@ -10,6 +10,7 @@ from awx import prepare_env
|
||||
|
||||
from dispatcherd.utils import resolve_callable
|
||||
|
||||
|
||||
prepare_env()
|
||||
|
||||
django.setup() # noqa
|
||||
@@ -17,8 +18,9 @@ django.setup() # noqa
|
||||
|
||||
from django.conf import settings
|
||||
|
||||
|
||||
# Preload all periodic tasks so their imports will be in shared memory
|
||||
for name, options in settings.DISPATCHER_SCHEDULE.items():
|
||||
for name, options in settings.CELERYBEAT_SCHEDULE.items():
|
||||
resolve_callable(options['task'])
|
||||
|
||||
|
||||
@@ -29,5 +31,6 @@ from awx.main.scheduler.kubernetes import PodManager # noqa
|
||||
from django.core.cache import cache as django_cache
|
||||
from django.db import connection
|
||||
|
||||
|
||||
connection.close()
|
||||
django_cache.close()
|
||||
147
awx/main/dispatch/periodic.py
Normal file
147
awx/main/dispatch/periodic.py
Normal file
@@ -0,0 +1,147 @@
|
||||
import logging
|
||||
import time
|
||||
import yaml
|
||||
from datetime import datetime
|
||||
|
||||
|
||||
logger = logging.getLogger('awx.main.dispatch.periodic')
|
||||
|
||||
|
||||
class ScheduledTask:
|
||||
"""
|
||||
Class representing schedules, very loosely modeled after the python schedule library's Job
|
||||
the idea of this class is to:
|
||||
- only deal in relative times (time since the scheduler global start)
|
||||
- only deal in integer math for target runtimes, but float for current relative time
|
||||
|
||||
Missed schedule policy:
|
||||
Invariant target times are maintained, meaning that if interval=10s offset=0
|
||||
and it runs at t=7s, then it calls for next run in 3s.
|
||||
However, if a complete interval has passed, that is counted as a missed run,
|
||||
and missed runs are abandoned (no catch-up runs).
|
||||
"""
|
||||
|
||||
def __init__(self, name: str, data: dict):
|
||||
# parameters needed for schedule computation
|
||||
self.interval = int(data['schedule'].total_seconds())
|
||||
self.offset = 0 # offset relative to start time this schedule begins
|
||||
self.index = 0 # number of periods of the schedule that has passed
|
||||
|
||||
# parameters that do not affect scheduling logic
|
||||
self.last_run = None # time of last run, only used for debug
|
||||
self.completed_runs = 0 # number of times the schedule is known to have run
|
||||
self.name = name
|
||||
self.data = data # used by caller to know what to run
|
||||
|
||||
@property
|
||||
def next_run(self):
|
||||
"Time until the next run with t=0 being the global_start of the scheduler class"
|
||||
return (self.index + 1) * self.interval + self.offset
|
||||
|
||||
def due_to_run(self, relative_time):
|
||||
return bool(self.next_run <= relative_time)
|
||||
|
||||
def expected_runs(self, relative_time):
|
||||
return int((relative_time - self.offset) / self.interval)
|
||||
|
||||
def mark_run(self, relative_time):
|
||||
self.last_run = relative_time
|
||||
self.completed_runs += 1
|
||||
new_index = self.expected_runs(relative_time)
|
||||
if new_index > self.index + 1:
|
||||
logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
|
||||
self.index = new_index
|
||||
|
||||
def missed_runs(self, relative_time):
|
||||
"Number of times job was supposed to ran but failed to, only used for debug"
|
||||
missed_ct = self.expected_runs(relative_time) - self.completed_runs
|
||||
# if this is currently due to run do not count that as a missed run
|
||||
if missed_ct and self.due_to_run(relative_time):
|
||||
missed_ct -= 1
|
||||
return missed_ct
|
||||
|
||||
|
||||
class Scheduler:
|
||||
def __init__(self, schedule):
|
||||
"""
|
||||
Expects schedule in the form of a dictionary like
|
||||
{
|
||||
'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
|
||||
}
|
||||
Only the schedule's nearest-second value is used for scheduling,
|
||||
the rest of the data is for use by the caller to know what to run.
|
||||
"""
|
||||
self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
|
||||
min_interval = min(job.interval for job in self.jobs)
|
||||
num_jobs = len(self.jobs)
|
||||
|
||||
# this is intentionally opinionated against spammy schedules
|
||||
# a core goal is to spread out the scheduled tasks (for worker management)
|
||||
# and high-frequency schedules just do not work with that
|
||||
if num_jobs > min_interval:
|
||||
raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
|
||||
|
||||
# evenly space out jobs over the base interval
|
||||
for i, job in enumerate(self.jobs):
|
||||
job.offset = (i * min_interval) // num_jobs
|
||||
|
||||
# internally times are all referenced relative to startup time, add grace period
|
||||
self.global_start = time.time() + 2.0
|
||||
|
||||
def get_and_mark_pending(self, reftime=None):
|
||||
if reftime is None:
|
||||
reftime = time.time() # mostly for tests
|
||||
relative_time = reftime - self.global_start
|
||||
to_run = []
|
||||
for job in self.jobs:
|
||||
if job.due_to_run(relative_time):
|
||||
to_run.append(job)
|
||||
logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
|
||||
job.mark_run(relative_time)
|
||||
return to_run
|
||||
|
||||
def time_until_next_run(self, reftime=None):
|
||||
if reftime is None:
|
||||
reftime = time.time() # mostly for tests
|
||||
relative_time = reftime - self.global_start
|
||||
next_job = min(self.jobs, key=lambda j: j.next_run)
|
||||
delta = next_job.next_run - relative_time
|
||||
if delta <= 0.1:
|
||||
# careful not to give 0 or negative values to the select timeout, which has unclear interpretation
|
||||
logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
|
||||
return 0.1
|
||||
elif delta > 20.0:
|
||||
logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
|
||||
return 20.0
|
||||
logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
|
||||
return delta
|
||||
|
||||
def debug(self, *args, **kwargs):
|
||||
data = dict()
|
||||
data['title'] = 'Scheduler status'
|
||||
reftime = time.time()
|
||||
|
||||
now = datetime.fromtimestamp(reftime).strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
relative_time = reftime - self.global_start
|
||||
data['started_time'] = start_time
|
||||
data['current_time'] = now
|
||||
data['current_time_relative'] = round(relative_time, 3)
|
||||
data['total_schedules'] = len(self.jobs)
|
||||
|
||||
data['schedule_list'] = dict(
|
||||
[
|
||||
(
|
||||
job.name,
|
||||
dict(
|
||||
last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
|
||||
next_run_in_seconds=round(job.next_run - relative_time, 3),
|
||||
offset_in_seconds=job.offset,
|
||||
completed_runs=job.completed_runs,
|
||||
missed_runs=job.missed_runs(relative_time),
|
||||
),
|
||||
)
|
||||
for job in sorted(self.jobs, key=lambda job: job.interval)
|
||||
]
|
||||
)
|
||||
return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)
|
||||
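To make the relative-time scheduling in periodic.py concrete, here is a small driver sketch; the schedule names and task paths are placeholders:

import time
from datetime import timedelta
from awx.main.dispatch.periodic import Scheduler

scheduler = Scheduler(
    {
        'cleanup_probe': {'schedule': timedelta(seconds=6), 'task': 'example.tasks.cleanup'},
        'metrics_probe': {'schedule': timedelta(seconds=4), 'task': 'example.tasks.metrics'},
    }
)

# A real service would block on its select timeout instead of sleeping.
for _ in range(3):
    time.sleep(scheduler.time_until_next_run())
    for job in scheduler.get_and_mark_pending():
        print('due:', job.name, job.data['task'])

print(scheduler.debug())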
@@ -1,54 +1,583 @@
|
||||
import logging
|
||||
import os
|
||||
import random
|
||||
import signal
|
||||
import sys
|
||||
import time
|
||||
import traceback
|
||||
from datetime import datetime
|
||||
from uuid import uuid4
|
||||
import json
|
||||
|
||||
import collections
|
||||
from multiprocessing import Process
|
||||
from multiprocessing import Queue as MPQueue
|
||||
from queue import Full as QueueFull, Empty as QueueEmpty
|
||||
|
||||
from django.conf import settings
|
||||
from django.db import connection as django_connection
|
||||
from django.db import connection as django_connection, connections
|
||||
from django.core.cache import cache as django_cache
|
||||
from django.utils.timezone import now as tz_now
|
||||
from django_guid import set_guid
|
||||
from jinja2 import Template
|
||||
import psutil
|
||||
|
||||
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
|
||||
from ansible_base.lib.logging.runtime import log_excess_runtime
|
||||
|
||||
from awx.main.models import UnifiedJob
|
||||
from awx.main.dispatch import reaper
|
||||
from awx.main.utils.common import get_mem_effective_capacity, get_corrected_memory, get_corrected_cpu, get_cpu_effective_capacity
|
||||
|
||||
# ansible-runner
|
||||
from ansible_runner.utils.capacity import get_mem_in_bytes, get_cpu_count
|
||||
|
||||
if 'run_callback_receiver' in sys.argv:
|
||||
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
|
||||
else:
|
||||
logger = logging.getLogger('awx.main.dispatch')
|
||||
|
||||
|
||||
RETIRED_SENTINEL_TASK = "[retired]"
|
||||
|
||||
|
||||
class NoOpResultQueue(object):
|
||||
def put(self, item):
|
||||
pass
|
||||
|
||||
|
||||
class PoolWorker(object):
|
||||
"""
|
||||
A simple wrapper around a multiprocessing.Process that tracks a worker child process.
|
||||
Used to track a worker child process and its pending and finished messages.
|
||||
|
||||
The worker process runs the provided target function.
|
||||
This class makes use of two distinct multiprocessing.Queues to track state:
|
||||
|
||||
- self.queue: this is a queue which represents pending messages that should
|
||||
be handled by this worker process; as new AMQP messages come
|
||||
in, a pool will put() them into this queue; the child
|
||||
process that is forked will get() from this queue and handle
|
||||
received messages in an endless loop
|
||||
- self.finished: this is a queue which the worker process uses to signal
|
||||
that it has finished processing a message
|
||||
|
||||
When a message is put() onto this worker, it is tracked in
|
||||
self.managed_tasks.
|
||||
|
||||
Periodically, the worker will call .calculate_managed_tasks(), which will
|
||||
cause messages in self.finished to be removed from self.managed_tasks.
|
||||
|
||||
In this way, self.managed_tasks represents a view of the messages assigned
|
||||
to a specific process. The message at [0] is the least-recently inserted
|
||||
message, and it represents what the worker is running _right now_
|
||||
(self.current_task).
|
||||
|
||||
A worker is "busy" when it has at least one message in self.managed_tasks.
|
||||
It is "idle" when self.managed_tasks is empty.
|
||||
"""
|
||||
|
||||
def __init__(self, target, args):
|
||||
self.process = Process(target=target, args=args)
|
||||
track_managed_tasks = False
|
||||
|
||||
def __init__(self, queue_size, target, args, **kwargs):
|
||||
self.messages_sent = 0
|
||||
self.messages_finished = 0
|
||||
self.managed_tasks = collections.OrderedDict()
|
||||
self.finished = MPQueue(queue_size) if self.track_managed_tasks else NoOpResultQueue()
|
||||
self.queue = MPQueue(queue_size)
|
||||
self.process = Process(target=target, args=(self.queue, self.finished) + args)
|
||||
self.process.daemon = True
|
||||
self.creation_time = time.monotonic()
|
||||
self.retiring = False
|
||||
|
||||
def start(self):
|
||||
self.process.start()
|
||||
|
||||
def put(self, body):
|
||||
if self.retiring:
|
||||
uuid = body.get('uuid', 'N/A') if isinstance(body, dict) else 'N/A'
|
||||
logger.info(f"Worker pid:{self.pid} is retiring. Refusing new task {uuid}.")
|
||||
raise QueueFull("Worker is retiring and not accepting new tasks") # AutoscalePool.write handles QueueFull
|
||||
uuid = '?'
|
||||
if isinstance(body, dict):
|
||||
if not body.get('uuid'):
|
||||
body['uuid'] = str(uuid4())
|
||||
uuid = body['uuid']
|
||||
if self.track_managed_tasks:
|
||||
self.managed_tasks[uuid] = body
|
||||
self.queue.put(body, block=True, timeout=5)
|
||||
self.messages_sent += 1
|
||||
self.calculate_managed_tasks()
|
||||
|
||||
def quit(self):
|
||||
"""
|
||||
Send a special control message to the worker that tells it to exit
|
||||
gracefully.
|
||||
"""
|
||||
self.queue.put('QUIT')
|
||||
|
||||
@property
|
||||
def age(self):
|
||||
"""Returns the current age of the worker in seconds."""
|
||||
return time.monotonic() - self.creation_time
|
||||
|
||||
@property
|
||||
def pid(self):
|
||||
return self.process.pid
|
||||
|
||||
@property
|
||||
def qsize(self):
|
||||
return self.queue.qsize()
|
||||
|
||||
@property
|
||||
def alive(self):
|
||||
return self.process.is_alive()
|
||||
|
||||
@property
|
||||
def mb(self):
|
||||
if self.alive:
|
||||
return '{:0.3f}'.format(psutil.Process(self.pid).memory_info().rss / 1024.0 / 1024.0)
|
||||
return '0'
|
||||
|
||||
@property
|
||||
def exitcode(self):
|
||||
return str(self.process.exitcode)
|
||||
|
||||
def calculate_managed_tasks(self):
|
||||
if not self.track_managed_tasks:
|
||||
return
|
||||
# look to see if any tasks were finished
|
||||
finished = []
|
||||
for _ in range(self.finished.qsize()):
|
||||
try:
|
||||
finished.append(self.finished.get(block=False))
|
||||
except QueueEmpty:
|
||||
break # qsize is not always _totally_ up to date
|
||||
|
||||
# if any tasks were finished, remove them from the managed tasks for
|
||||
# this worker
|
||||
for uuid in finished:
|
||||
try:
|
||||
del self.managed_tasks[uuid]
|
||||
self.messages_finished += 1
|
||||
except KeyError:
|
||||
# ansible _sometimes_ appears to send events w/ duplicate UUIDs;
|
||||
# UUIDs for ansible events are *not* actually globally unique
|
||||
# when this occurs, it's _fine_ to ignore this KeyError because
|
||||
# the purpose of self.managed_tasks is to just track internal
|
||||
# state of which events are *currently* being processed.
|
||||
logger.warning('Event UUID {} appears to have been duplicated.'.format(uuid))
|
||||
if self.retiring:
|
||||
self.managed_tasks[RETIRED_SENTINEL_TASK] = {'task': RETIRED_SENTINEL_TASK}
|
||||
|
||||
@property
|
||||
def current_task(self):
|
||||
if not self.track_managed_tasks:
|
||||
return None
|
||||
self.calculate_managed_tasks()
|
||||
# the task at [0] is the one that's running right now (or is about to
|
||||
# be running)
|
||||
if len(self.managed_tasks):
|
||||
return self.managed_tasks[list(self.managed_tasks.keys())[0]]
|
||||
|
||||
return None
|
||||
|
||||
@property
|
||||
def orphaned_tasks(self):
|
||||
if not self.track_managed_tasks:
|
||||
return []
|
||||
orphaned = []
|
||||
if not self.alive:
|
||||
# if this process had a running task that never finished,
|
||||
# requeue its error callbacks
|
||||
current_task = self.current_task
|
||||
if isinstance(current_task, dict):
|
||||
orphaned.extend(current_task.get('errbacks', []))
|
||||
|
||||
# if this process has any pending messages requeue them
|
||||
for _ in range(self.qsize):
|
||||
try:
|
||||
message = self.queue.get(block=False)
|
||||
if message != 'QUIT':
|
||||
orphaned.append(message)
|
||||
except QueueEmpty:
|
||||
break # qsize is not always _totally_ up to date
|
||||
if len(orphaned):
|
||||
logger.error('requeuing {} messages from gone worker pid:{}'.format(len(orphaned), self.pid))
|
||||
return orphaned
|
||||
|
||||
@property
|
||||
def busy(self):
|
||||
self.calculate_managed_tasks()
|
||||
return len(self.managed_tasks) > 0
|
||||
|
||||
@property
|
||||
def idle(self):
|
||||
return not self.busy
|
||||
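# The docstring above describes a pending/finished two-queue handshake between the pool
# and each forked worker. A minimal standalone sketch of that pattern follows; it is an
# illustration only (not part of this changeset) and every name in it is hypothetical.
from multiprocessing import Process, Queue

def echo_worker(pending, finished):
    # endless loop: pull messages from the pending queue, handle them, then report the
    # message uuid on the finished queue so the parent can drop it from its bookkeeping
    # (the role played by self.managed_tasks in PoolWorker above)
    while True:
        body = pending.get()
        if body == 'QUIT':
            break
        print('handled', body['uuid'])
        finished.put(body['uuid'])

def run_two_queue_demo():
    pending, finished = Queue(5), Queue(5)
    proc = Process(target=echo_worker, args=(pending, finished), daemon=True)
    proc.start()
    pending.put({'uuid': 'abc123'})               # pool.write() / worker.put() analogue
    print('finished:', finished.get(timeout=5))   # calculate_managed_tasks() analogue
    pending.put('QUIT')                           # worker.quit() analogue
    proc.join()

if __name__ == '__main__':
    run_two_queue_demo()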
|
||||
|
||||
class StatefulPoolWorker(PoolWorker):
|
||||
track_managed_tasks = True
|
||||
|
||||
|
||||
class WorkerPool(object):
|
||||
"""
|
||||
Creates a pool of forked PoolWorkers.
|
||||
|
||||
Each worker process runs the provided target function in an isolated process.
|
||||
The pool manages spawning, tracking, and stopping worker processes.
|
||||
As WorkerPool.write(...) is called (generally, by a kombu consumer
|
||||
implementation when it receives an AMQP message), messages are passed to
|
||||
one of the multiprocessing Queues where some work can be done on them.
|
||||
|
||||
Example:
|
||||
pool = WorkerPool(workers_num=4) # spawn four worker processes
|
||||
class MessagePrinter(awx.main.dispatch.worker.BaseWorker):
|
||||
|
||||
def perform_work(self, body):
|
||||
print(body)
|
||||
|
||||
pool = WorkerPool(min_workers=4) # spawn four worker processes
|
||||
pool.init_workers(MessagePrinter().work_loop)
|
||||
pool.write(
|
||||
0, # preferred worker 0
|
||||
'Hello, World!'
|
||||
)
|
||||
"""
|
||||
|
||||
def __init__(self, workers_num=None):
|
||||
self.workers_num = workers_num or settings.JOB_EVENT_WORKERS
|
||||
pool_cls = PoolWorker
|
||||
debug_meta = ''
|
||||
|
||||
def init_workers(self, target):
|
||||
for idx in range(self.workers_num):
|
||||
# It's important to close these because we're _about_ to fork, and we
|
||||
# don't want the forked processes to inherit the open sockets
|
||||
# for the DB and cache connections (that way lies race conditions)
|
||||
django_connection.close()
|
||||
django_cache.close()
|
||||
worker = PoolWorker(target, (idx,))
|
||||
def __init__(self, min_workers=None, queue_size=None):
|
||||
self.name = settings.CLUSTER_HOST_ID
|
||||
self.pid = os.getpid()
|
||||
self.min_workers = min_workers or settings.JOB_EVENT_WORKERS
|
||||
self.queue_size = queue_size or settings.JOB_EVENT_MAX_QUEUE_SIZE
|
||||
self.workers = []
|
||||
|
||||
def __len__(self):
|
||||
return len(self.workers)
|
||||
|
||||
def init_workers(self, target, *target_args):
|
||||
self.target = target
|
||||
self.target_args = target_args
|
||||
for idx in range(self.min_workers):
|
||||
self.up()
|
||||
|
||||
def up(self):
|
||||
idx = len(self.workers)
|
||||
# It's important to close these because we're _about_ to fork, and we
|
||||
# don't want the forked processes to inherit the open sockets
|
||||
# for the DB and cache connections (that way lies race conditions)
|
||||
django_connection.close()
|
||||
django_cache.close()
|
||||
worker = self.pool_cls(self.queue_size, self.target, (idx,) + self.target_args)
|
||||
self.workers.append(worker)
|
||||
try:
|
||||
worker.start()
|
||||
except Exception:
|
||||
logger.exception('could not fork')
|
||||
else:
|
||||
logger.debug('scaling up worker pid:{}'.format(worker.pid))
|
||||
return idx, worker
|
||||
|
||||
def debug(self, *args, **kwargs):
|
||||
tmpl = Template(
|
||||
'Recorded at: {{ dt }} \n'
|
||||
'{{ pool.name }}[pid:{{ pool.pid }}] workers total={{ workers|length }} {{ meta }} \n'
|
||||
'{% for w in workers %}'
|
||||
'. worker[pid:{{ w.pid }}]{% if not w.alive %} GONE exit={{ w.exitcode }}{% endif %}'
|
||||
' sent={{ w.messages_sent }}'
|
||||
' age={{ "%.0f"|format(w.age) }}s'
|
||||
' retiring={{ w.retiring }}'
|
||||
'{% if w.messages_finished %} finished={{ w.messages_finished }}{% endif %}'
|
||||
' qsize={{ w.managed_tasks|length }}'
|
||||
' rss={{ w.mb }}MB'
|
||||
'{% for task in w.managed_tasks.values() %}'
|
||||
'\n - {% if loop.index0 == 0 %}running {% if "age" in task %}for: {{ "%.1f" % task["age"] }}s {% endif %}{% else %}queued {% endif %}'
|
||||
'{{ task["uuid"] }} '
|
||||
'{% if "task" in task %}'
|
||||
'{{ task["task"].rsplit(".", 1)[-1] }}'
|
||||
# don't print kwargs, they often contain launch-time secrets
|
||||
'(*{{ task.get("args", []) }})'
|
||||
'{% endif %}'
|
||||
'{% endfor %}'
|
||||
'{% if not w.managed_tasks|length %}'
|
||||
' [IDLE]'
|
||||
'{% endif %}'
|
||||
'\n'
|
||||
'{% endfor %}'
|
||||
)
|
||||
now = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S UTC')
|
||||
return tmpl.render(pool=self, workers=self.workers, meta=self.debug_meta, dt=now)
|
||||
|
||||
def write(self, preferred_queue, body):
|
||||
queue_order = sorted(range(len(self.workers)), key=lambda x: -1 if x == preferred_queue else x)
|
||||
write_attempt_order = []
|
||||
for queue_actual in queue_order:
|
||||
try:
|
||||
worker.start()
|
||||
self.workers[queue_actual].put(body)
|
||||
return queue_actual
|
||||
except QueueFull:
|
||||
pass
|
||||
except Exception:
|
||||
logger.exception('could not fork')
|
||||
tb = traceback.format_exc()
|
||||
logger.warning("could not write to queue %s" % preferred_queue)
|
||||
logger.warning("detail: {}".format(tb))
|
||||
write_attempt_order.append(preferred_queue)
|
||||
logger.error("could not write payload to any queue, attempted order: {}".format(write_attempt_order))
|
||||
return None
|
||||
|
||||
def stop(self, signum):
|
||||
try:
|
||||
for worker in self.workers:
|
||||
os.kill(worker.pid, signum)
|
||||
except Exception:
|
||||
logger.exception('could not kill {}'.format(worker.pid))
|
||||
|
||||
|
||||
def get_auto_max_workers():
|
||||
"""Method we normally rely on to get max_workers
|
||||
|
||||
Uses almost the same logic as Instance.local_health_check
|
||||
The important thing is to be MORE than Instance.capacity
|
||||
so that the task-manager does not over-schedule this node
|
||||
|
||||
Ideally we would just use the capacity from the database plus reserve workers,
|
||||
but this poses some bootstrap problems where OCP task containers
|
||||
register themselves after startup
|
||||
"""
|
||||
# Get memory from ansible-runner
|
||||
total_memory_gb = get_mem_in_bytes()
|
||||
|
||||
# This may replace memory calculation with a user override
|
||||
corrected_memory = get_corrected_memory(total_memory_gb)
|
||||
|
||||
# Get same number as max forks based on memory, this function takes memory as bytes
|
||||
mem_capacity = get_mem_effective_capacity(corrected_memory, is_control_node=True)
|
||||
|
||||
# Follow same process for CPU capacity constraint
|
||||
cpu_count = get_cpu_count()
|
||||
corrected_cpu = get_corrected_cpu(cpu_count)
|
||||
cpu_capacity = get_cpu_effective_capacity(corrected_cpu, is_control_node=True)
|
||||
|
||||
# Here is what is different from health checks,
|
||||
auto_max = max(mem_capacity, cpu_capacity)
|
||||
|
||||
# add a magic number of extra workers so that
# a few are always available to run the heartbeat
|
||||
auto_max += 7
|
||||
|
||||
return auto_max
|
||||
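# Illustrative sketch (not part of this changeset): the sizing rule above boils down to
# max(mem_capacity, cpu_capacity) + 7, so hypothetical effective capacities of 7 forks
# (memory-bound) and 16 forks (CPU-bound) would allow 23 dispatcher pool workers.
def _example_auto_max(mem_capacity, cpu_capacity):
    # mirrors get_auto_max_workers() once the corrected capacities are already known
    return max(mem_capacity, cpu_capacity) + 7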
|
||||
|
||||
class AutoscalePool(WorkerPool):
|
||||
"""
|
||||
An extended pool implementation that automatically scales workers up and
|
||||
down based on demand
|
||||
"""
|
||||
|
||||
pool_cls = StatefulPoolWorker
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
self.max_workers = kwargs.pop('max_workers', None)
|
||||
self.max_worker_lifetime_seconds = kwargs.pop(
|
||||
'max_worker_lifetime_seconds', getattr(settings, 'WORKER_MAX_LIFETIME_SECONDS', 14400)
|
||||
) # Default to 4 hours
|
||||
super(AutoscalePool, self).__init__(*args, **kwargs)
|
||||
|
||||
if self.max_workers is None:
|
||||
self.max_workers = get_auto_max_workers()
|
||||
|
||||
# max workers can't be less than min_workers
|
||||
self.max_workers = max(self.min_workers, self.max_workers)
|
||||
|
||||
# the task manager enforces settings.TASK_MANAGER_TIMEOUT on its own
|
||||
# but if the task takes longer than the time defined here, we will force it to stop here
|
||||
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
|
||||
|
||||
# initialize some things for subsystem metrics periodic gathering
|
||||
# the AutoscalePool class does not save these to redis directly, but reports via produce_subsystem_metrics
|
||||
self.scale_up_ct = 0
|
||||
self.worker_count_max = 0
|
||||
|
||||
# last time we wrote current tasks, to avoid too much log spam
|
||||
self.last_task_list_log = time.monotonic()
|
||||
|
||||
def produce_subsystem_metrics(self, metrics_object):
|
||||
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
|
||||
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
|
||||
metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
|
||||
self.worker_count_max = len(self.workers)
|
||||
|
||||
@property
|
||||
def should_grow(self):
|
||||
if len(self.workers) < self.min_workers:
|
||||
# If we don't have at least min_workers, add more
|
||||
return True
|
||||
# If every worker is busy doing something, add more
|
||||
return all([w.busy for w in self.workers])
|
||||
|
||||
@property
|
||||
def full(self):
|
||||
return len(self.workers) == self.max_workers
|
||||
|
||||
@property
|
||||
def debug_meta(self):
|
||||
return 'min={} max={}'.format(self.min_workers, self.max_workers)
|
||||
|
||||
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
|
||||
def cleanup(self):
|
||||
"""
|
||||
Perform some internal accounting and cleanup. This is run on
|
||||
every cluster node heartbeat:
|
||||
|
||||
1. Discover worker processes that exited, and recover messages they
|
||||
were handling.
|
||||
2. Clean up unnecessary, idle workers.
|
||||
|
||||
IMPORTANT: this function is one of the few places in the dispatcher
|
||||
(aside from setting lookups) where we talk to the database. As such,
|
||||
if there's an outage, this method _can_ throw various
|
||||
django.db.utils.Error exceptions. Act accordingly.
|
||||
"""
|
||||
orphaned = []
|
||||
for w in self.workers[::]:
|
||||
is_retirement_age = self.max_worker_lifetime_seconds is not None and w.age > self.max_worker_lifetime_seconds
|
||||
if not w.alive:
|
||||
# the worker process has exited
|
||||
# 1. take the task it was running and enqueue the error
|
||||
# callbacks
|
||||
# 2. take any pending tasks delivered to its queue and
|
||||
# send them to another worker
|
||||
logger.error('worker pid:{} is gone (exit={})'.format(w.pid, w.exitcode))
|
||||
if w.current_task:
|
||||
if w.current_task == {'task': RETIRED_SENTINEL_TASK}:
|
||||
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
|
||||
self.workers.remove(w)
|
||||
continue
|
||||
if w.current_task != 'QUIT':
|
||||
try:
|
||||
for j in UnifiedJob.objects.filter(celery_task_id=w.current_task['uuid']):
|
||||
reaper.reap_job(j, 'failed')
|
||||
except Exception:
|
||||
logger.exception('failed to reap job UUID {}'.format(w.current_task['uuid']))
|
||||
else:
|
||||
logger.warning(f'Worker was told to quit but has not, pid={w.pid}')
|
||||
orphaned.extend(w.orphaned_tasks)
|
||||
self.workers.remove(w)
|
||||
|
||||
elif w.idle and len(self.workers) > self.min_workers:
|
||||
# the process has an empty queue (it's idle) and we have
|
||||
# more processes in the pool than we need (> min)
|
||||
# send this process a message so it will exit gracefully
|
||||
# at the next opportunity
|
||||
logger.debug('scaling down worker pid:{}'.format(w.pid))
|
||||
w.quit()
|
||||
self.workers.remove(w)
|
||||
|
||||
elif w.idle and is_retirement_age:
|
||||
logger.debug('scaling down worker pid:{} due to worker age: {}'.format(w.pid, w.age))
|
||||
w.quit()
|
||||
self.workers.remove(w)
|
||||
|
||||
elif is_retirement_age and not w.retiring and not w.idle:
|
||||
logger.info(
|
||||
f"Worker pid:{w.pid} (age: {w.age:.0f}s) exceeded max lifetime ({self.max_worker_lifetime_seconds:.0f}s). "
|
||||
"Signaling for graceful retirement."
|
||||
)
|
||||
# Send QUIT signal; worker will finish current task then exit.
|
||||
w.quit()
|
||||
# mark as retiring to reject any future tasks that might be assigned in meantime
|
||||
w.retiring = True
|
||||
|
||||
if w.alive:
|
||||
# if we discover a task manager invocation that's been running
|
||||
# too long, reap it (because otherwise it'll just hold the postgres
|
||||
# advisory lock forever); the goal of this code is to discover
|
||||
# deadlocks or other serious issues in the task manager that cause
|
||||
# the task manager to never do more work
|
||||
current_task = w.current_task
|
||||
if current_task and isinstance(current_task, dict):
|
||||
endings = ('tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager')
|
||||
current_task_name = current_task.get('task', '')
|
||||
if current_task_name.endswith(endings):
|
||||
if 'started' not in current_task:
|
||||
w.managed_tasks[current_task['uuid']]['started'] = time.time()
|
||||
age = time.time() - current_task['started']
|
||||
w.managed_tasks[current_task['uuid']]['age'] = age
|
||||
if age > self.task_manager_timeout:
|
||||
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGUSR1 to {w.pid}')
|
||||
os.kill(w.pid, signal.SIGUSR1)
|
||||
|
||||
for m in orphaned:
|
||||
# if all the workers are dead, spawn at least one
|
||||
if not len(self.workers):
|
||||
self.up()
|
||||
idx = random.choice(range(len(self.workers)))
|
||||
self.write(idx, m)
|
||||
|
||||
def add_bind_kwargs(self, body):
|
||||
bind_kwargs = body.pop('bind_kwargs', [])
|
||||
body.setdefault('kwargs', {})
|
||||
if 'dispatch_time' in bind_kwargs:
|
||||
body['kwargs']['dispatch_time'] = tz_now().isoformat()
|
||||
if 'worker_tasks' in bind_kwargs:
|
||||
worker_tasks = {}
|
||||
for worker in self.workers:
|
||||
worker.calculate_managed_tasks()
|
||||
worker_tasks[worker.pid] = list(worker.managed_tasks.keys())
|
||||
body['kwargs']['worker_tasks'] = worker_tasks
|
||||
|
||||
def up(self):
|
||||
if self.full:
|
||||
# if we can't spawn more workers, just toss this message into a
|
||||
# random worker's backlog
|
||||
idx = random.choice(range(len(self.workers)))
|
||||
return idx, self.workers[idx]
|
||||
else:
|
||||
self.scale_up_ct += 1
|
||||
ret = super(AutoscalePool, self).up()
|
||||
new_worker_ct = len(self.workers)
|
||||
if new_worker_ct > self.worker_count_max:
|
||||
self.worker_count_max = new_worker_ct
|
||||
return ret
|
||||
|
||||
@staticmethod
|
||||
def fast_task_serialization(current_task):
|
||||
try:
|
||||
return str(current_task.get('task')) + ' - ' + str(sorted(current_task.get('args', []))) + ' - ' + str(sorted(current_task.get('kwargs', {})))
|
||||
except Exception:
|
||||
# just make sure this does not make things worse
|
||||
return str(current_task)
|
||||
|
||||
def write(self, preferred_queue, body):
|
||||
if 'guid' in body:
|
||||
set_guid(body['guid'])
|
||||
try:
|
||||
if isinstance(body, dict) and body.get('bind_kwargs'):
|
||||
self.add_bind_kwargs(body)
|
||||
if self.should_grow:
|
||||
self.up()
|
||||
# we don't care about "preferred queue" round robin distribution, just
|
||||
# find the first non-busy worker and claim it
|
||||
workers = self.workers[:]
|
||||
random.shuffle(workers)
|
||||
for w in workers:
|
||||
if not w.busy:
|
||||
w.put(body)
|
||||
break
|
||||
else:
|
||||
logger.debug('scaling up worker pid:{}'.format(worker.process.pid))
|
||||
task_name = 'unknown'
|
||||
if isinstance(body, dict):
|
||||
task_name = body.get('task')
|
||||
logger.warning(f'Workers maxed, queuing {task_name}, load: {sum(len(w.managed_tasks) for w in self.workers)} / {len(self.workers)}')
|
||||
# Once every 10 seconds write out task list for debugging
|
||||
if time.monotonic() - self.last_task_list_log >= 10.0:
|
||||
task_counts = {}
|
||||
for worker in self.workers:
|
||||
task_slug = self.fast_task_serialization(worker.current_task)
|
||||
task_counts.setdefault(task_slug, 0)
|
||||
task_counts[task_slug] += 1
|
||||
logger.info(f'Running tasks by count:\n{json.dumps(task_counts, indent=2)}')
|
||||
self.last_task_list_log = time.monotonic()
|
||||
return super(AutoscalePool, self).write(preferred_queue, body)
|
||||
except Exception:
|
||||
for conn in connections.all():
|
||||
# If the database connection has a hiccup, re-establish a new
|
||||
# connection
|
||||
conn.close_if_unusable_or_obsolete()
|
||||
logger.exception('failed to write inbound message')
|
||||
|
||||
146 awx/main/dispatch/publish.py Normal file
@@ -0,0 +1,146 @@
|
||||
import inspect
|
||||
import logging
|
||||
import json
|
||||
import time
|
||||
from uuid import uuid4
|
||||
|
||||
from dispatcherd.publish import submit_task
|
||||
from dispatcherd.utils import resolve_callable
|
||||
|
||||
from django_guid import get_guid
|
||||
from django.conf import settings
|
||||
|
||||
from . import pg_bus_conn
|
||||
|
||||
logger = logging.getLogger('awx.main.dispatch')
|
||||
|
||||
|
||||
def serialize_task(f):
|
||||
return '.'.join([f.__module__, f.__name__])
|
||||
|
||||
|
||||
class task:
|
||||
"""
|
||||
Used to decorate a function or class so that it can be run asynchronously
|
||||
via the task dispatcher. Tasks can be simple functions:
|
||||
|
||||
@task()
|
||||
def add(a, b):
|
||||
return a + b
|
||||
|
||||
...or classes that define a `run` method:
|
||||
|
||||
@task()
|
||||
class Adder:
|
||||
def run(self, a, b):
|
||||
return a + b
|
||||
|
||||
# Tasks can be run synchronously...
|
||||
assert add(1, 1) == 2
|
||||
assert Adder().run(1, 1) == 2
|
||||
|
||||
# ...or published to a queue:
|
||||
add.apply_async([1, 1])
|
||||
Adder.apply_async([1, 1])
|
||||
|
||||
# Tasks can also define a specific target queue or use the special fan-out queue tower_broadcast:
|
||||
|
||||
@task(queue='slow-tasks')
|
||||
def snooze():
|
||||
time.sleep(10)
|
||||
|
||||
@task(queue='tower_broadcast')
|
||||
def announce():
|
||||
print("Run this everywhere!")
|
||||
|
||||
# The special parameter bind_kwargs tells the main dispatcher process to add certain kwargs
|
||||
|
||||
@task(bind_kwargs=['dispatch_time'])
|
||||
def print_time(dispatch_time=None):
|
||||
print(f"Time I was dispatched: {dispatch_time}")
|
||||
"""
|
||||
|
||||
def __init__(self, queue=None, bind_kwargs=None):
|
||||
self.queue = queue
|
||||
self.bind_kwargs = bind_kwargs
|
||||
|
||||
def __call__(self, fn=None):
|
||||
queue = self.queue
|
||||
bind_kwargs = self.bind_kwargs
|
||||
|
||||
class PublisherMixin(object):
|
||||
queue = None
|
||||
|
||||
@classmethod
|
||||
def delay(cls, *args, **kwargs):
|
||||
return cls.apply_async(args, kwargs)
|
||||
|
||||
@classmethod
|
||||
def get_async_body(cls, args=None, kwargs=None, uuid=None, **kw):
|
||||
"""
|
||||
Get the python dict to become JSON data in the pg_notify message
|
||||
This same message gets passed over the dispatcher IPC queue to workers
|
||||
If a task is submitted to a multiprocessing pool, skipping pg_notify, this might be used directly
|
||||
"""
|
||||
task_id = uuid or str(uuid4())
|
||||
args = args or []
|
||||
kwargs = kwargs or {}
|
||||
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
|
||||
guid = get_guid()
|
||||
if guid:
|
||||
obj['guid'] = guid
|
||||
if bind_kwargs:
|
||||
obj['bind_kwargs'] = bind_kwargs
|
||||
obj.update(**kw)
|
||||
return obj
|
||||
|
||||
@classmethod
|
||||
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
|
||||
try:
|
||||
from flags.state import flag_enabled
|
||||
|
||||
if flag_enabled('FEATURE_DISPATCHERD_ENABLED'):
|
||||
# At this point we have the import string, and submit_task wants the method, so back to that
|
||||
actual_task = resolve_callable(cls.name)
|
||||
return submit_task(actual_task, args=args, kwargs=kwargs, queue=queue, uuid=uuid, **kw)
|
||||
except Exception:
|
||||
logger.exception(f"[DISPATCHER] Failed to check for alternative dispatcherd implementation for {cls.name}")
|
||||
# Continue with original implementation if anything fails
|
||||
pass
|
||||
|
||||
# Original implementation follows
|
||||
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
|
||||
if not queue:
|
||||
msg = f'{cls.name}: Queue value required and may not be None'
|
||||
logger.error(msg)
|
||||
raise ValueError(msg)
|
||||
obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
|
||||
if callable(queue):
|
||||
queue = queue()
|
||||
if not settings.DISPATCHER_MOCK_PUBLISH:
|
||||
with pg_bus_conn() as conn:
|
||||
conn.notify(queue, json.dumps(obj))
|
||||
return (obj, queue)
|
||||
|
||||
# If the object we're wrapping *is* a class (e.g., RunJob), return
|
||||
# a *new* class that inherits from the wrapped class *and* BaseTask
|
||||
# In this way, the new class returned by our decorator is the class
|
||||
# being decorated *plus* PublisherMixin so cls.apply_async() and
|
||||
# cls.delay() work
|
||||
bases = []
|
||||
ns = {'name': serialize_task(fn), 'queue': queue}
|
||||
if inspect.isclass(fn):
|
||||
bases = list(fn.__bases__)
|
||||
ns.update(fn.__dict__)
|
||||
cls = type(fn.__name__, tuple(bases + [PublisherMixin]), ns)
|
||||
if inspect.isclass(fn):
|
||||
return cls
|
||||
|
||||
# if the object being decorated is *not* a class (it's a Python
|
||||
# function), make fn.apply_async and fn.delay proxy through to the
|
||||
# PublisherMixin we dynamically created above
|
||||
setattr(fn, 'name', cls.name)
|
||||
setattr(fn, 'apply_async', cls.apply_async)
|
||||
setattr(fn, 'delay', cls.delay)
|
||||
setattr(fn, 'get_async_body', cls.get_async_body)
|
||||
return fn
|
||||
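# Illustrative sketch (not part of this changeset): the payload that apply_async() above
# publishes with pg_notify is just the dict assembled by get_async_body(). For a
# hypothetical broadcast task the JSON body would look roughly like this (uuid and
# time_pub differ on every call):
example_body = {
    'uuid': '6f0c3d8a-0000-0000-0000-000000000000',  # uuid4() unless one was supplied
    'args': [],
    'kwargs': {},
    'task': 'awx.main.tasks.system.announce',        # hypothetical dotted task path
    'time_pub': 1700000000.0,                        # wall-clock time at publish
}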
@@ -1,6 +1,9 @@
|
||||
from datetime import timedelta
|
||||
import logging
|
||||
|
||||
from django.db.models import Q
|
||||
from django.conf import settings
|
||||
from django.utils.timezone import now as tz_now
|
||||
from django.contrib.contenttypes.models import ContentType
|
||||
|
||||
from awx.main.models import Instance, UnifiedJob, WorkflowJob
|
||||
@@ -47,6 +50,26 @@ def reap_job(j, status, job_explanation=None):
|
||||
logger.error(f'{j.log_format} is no longer {status_before}; reaping')
|
||||
|
||||
|
||||
def reap_waiting(instance=None, status='failed', job_explanation=None, grace_period=None, excluded_uuids=None, ref_time=None):
|
||||
"""
|
||||
Reap all jobs in waiting for this instance.
|
||||
"""
|
||||
if grace_period is None:
|
||||
grace_period = settings.JOB_WAITING_GRACE_PERIOD + settings.TASK_MANAGER_TIMEOUT
|
||||
|
||||
if instance is None:
|
||||
hostname = Instance.objects.my_hostname()
|
||||
else:
|
||||
hostname = instance.hostname
|
||||
if ref_time is None:
|
||||
ref_time = tz_now()
|
||||
jobs = UnifiedJob.objects.filter(status='waiting', modified__lte=ref_time - timedelta(seconds=grace_period), controller_node=hostname)
|
||||
if excluded_uuids:
|
||||
jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
|
||||
for j in jobs:
|
||||
reap_job(j, status, job_explanation=job_explanation)
|
||||
|
||||
|
||||
def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None, ref_time=None):
|
||||
"""
|
||||
Reap all jobs in running for this instance.
|
||||
|
||||
@@ -1,2 +1,3 @@
|
||||
from .base import AWXConsumerRedis # noqa
|
||||
from .base import AWXConsumerRedis, AWXConsumerPG, BaseWorker # noqa
|
||||
from .callback import CallbackBrokerWorker # noqa
|
||||
from .task import TaskWorker # noqa
|
||||
|
||||
@@ -4,39 +4,341 @@
|
||||
import os
|
||||
import logging
|
||||
import signal
|
||||
import sys
|
||||
import redis
|
||||
import json
|
||||
import psycopg
|
||||
import time
|
||||
from uuid import UUID
|
||||
from queue import Empty as QueueEmpty
|
||||
from datetime import timedelta
|
||||
|
||||
from django import db
|
||||
from django.conf import settings
|
||||
import redis.exceptions
|
||||
|
||||
from ansible_base.lib.logging.runtime import log_excess_runtime
|
||||
|
||||
from awx.main.utils.redis import get_redis_client
|
||||
from awx.main.dispatch.pool import WorkerPool
|
||||
from awx.main.dispatch.periodic import Scheduler
|
||||
from awx.main.dispatch import pg_bus_conn
|
||||
from awx.main.utils.db import set_connection_name
|
||||
import awx.main.analytics.subsystem_metrics as s_metrics
|
||||
|
||||
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
|
||||
if 'run_callback_receiver' in sys.argv:
|
||||
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
|
||||
else:
|
||||
logger = logging.getLogger('awx.main.dispatch')
|
||||
|
||||
|
||||
def signame(sig):
|
||||
return dict((k, v) for v, k in signal.__dict__.items() if v.startswith('SIG') and not v.startswith('SIG_'))[sig]
|
||||
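# Illustrative example (not part of this changeset): signame() simply inverts the signal
# module's name table, so signame(signal.SIGTERM) returns 'SIGTERM', keeping the
# "received ..., stopping" log lines below human readable.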
|
||||
|
||||
class AWXConsumerRedis(object):
|
||||
class WorkerSignalHandler:
|
||||
def __init__(self):
|
||||
self.kill_now = False
|
||||
signal.signal(signal.SIGTERM, signal.SIG_DFL)
|
||||
signal.signal(signal.SIGINT, self.exit_gracefully)
|
||||
|
||||
def exit_gracefully(self, *args, **kwargs):
|
||||
self.kill_now = True
|
||||
|
||||
|
||||
class AWXConsumerBase(object):
|
||||
last_stats = time.time()
|
||||
|
||||
def __init__(self, name, worker, queues=[], pool=None):
|
||||
self.should_stop = False
|
||||
|
||||
def __init__(self, name, worker):
|
||||
self.name = name
|
||||
self.pool = WorkerPool()
|
||||
self.pool.init_workers(worker.work_loop)
|
||||
self.redis = get_redis_client()
|
||||
self.total_messages = 0
|
||||
self.queues = queues
|
||||
self.worker = worker
|
||||
self.pool = pool
|
||||
if pool is None:
|
||||
self.pool = WorkerPool()
|
||||
self.pool.init_workers(self.worker.work_loop)
|
||||
self.redis = redis.Redis.from_url(settings.BROKER_URL)
|
||||
|
||||
def run(self):
|
||||
@property
|
||||
def listening_on(self):
|
||||
return f'listening on {self.queues}'
|
||||
|
||||
def control(self, body):
|
||||
logger.warning(f'Received control signal:\n{body}')
|
||||
control = body.get('control')
|
||||
if control in ('status', 'schedule', 'running', 'cancel'):
|
||||
reply_queue = body['reply_to']
|
||||
if control == 'status':
|
||||
msg = '\n'.join([self.listening_on, self.pool.debug()])
|
||||
if control == 'schedule':
|
||||
msg = self.scheduler.debug()
|
||||
elif control == 'running':
|
||||
msg = []
|
||||
for worker in self.pool.workers:
|
||||
worker.calculate_managed_tasks()
|
||||
msg.extend(worker.managed_tasks.keys())
|
||||
elif control == 'cancel':
|
||||
msg = []
|
||||
task_ids = set(body['task_ids'])
|
||||
for worker in self.pool.workers:
|
||||
task = worker.current_task
|
||||
if task and task['uuid'] in task_ids:
|
||||
logger.warn(f'Sending SIGTERM to task id={task["uuid"]}, task={task.get("task")}, args={task.get("args")}')
|
||||
os.kill(worker.pid, signal.SIGTERM)
|
||||
msg.append(task['uuid'])
|
||||
if task_ids and not msg:
|
||||
logger.info(f'Could not locate running tasks to cancel with ids={task_ids}')
|
||||
|
||||
if reply_queue is not None:
|
||||
with pg_bus_conn() as conn:
|
||||
conn.notify(reply_queue, json.dumps(msg))
|
||||
elif control == 'reload':
|
||||
for worker in self.pool.workers:
|
||||
worker.quit()
|
||||
else:
|
||||
logger.error('unrecognized control message: {}'.format(control))
|
||||
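# Illustrative note (not part of this changeset): a control message arrives on the same
# pg_notify queue as ordinary task messages and is distinguished by its 'control' key,
# for example {'control': 'running', 'reply_to': 'some_reply_queue'} where the reply
# queue name is hypothetical. The dispatcher answers by NOTIFY-ing that reply queue with
# a JSON list of the task uuids currently held by its pool workers.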
|
||||
def dispatch_task(self, body):
|
||||
"""This will place the given body into a worker queue to run method decorated as a task"""
|
||||
if isinstance(body, dict):
|
||||
body['time_ack'] = time.time()
|
||||
|
||||
if len(self.pool):
|
||||
if "uuid" in body and body['uuid']:
|
||||
try:
|
||||
queue = UUID(body['uuid']).int % len(self.pool)
|
||||
except Exception:
|
||||
queue = self.total_messages % len(self.pool)
|
||||
else:
|
||||
queue = self.total_messages % len(self.pool)
|
||||
else:
|
||||
queue = 0
|
||||
self.pool.write(queue, body)
|
||||
self.total_messages += 1
|
||||
|
||||
def process_task(self, body):
|
||||
"""Routes the task details in body as either a control task or a task-task"""
|
||||
if 'control' in body:
|
||||
try:
|
||||
return self.control(body)
|
||||
except Exception:
|
||||
logger.exception(f"Exception handling control message: {body}")
|
||||
return
|
||||
self.dispatch_task(body)
|
||||
|
||||
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
|
||||
def record_statistics(self):
|
||||
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
|
||||
save_data = self.pool.debug()
|
||||
try:
|
||||
self.redis.set(f'awx_{self.name}_statistics', save_data)
|
||||
except redis.exceptions.ConnectionError as exc:
|
||||
logger.warning(f'Redis connection error saving {self.name} status data:\n{exc}\nmissed data:\n{save_data}')
|
||||
except Exception:
|
||||
logger.exception(f"Unknown redis error saving {self.name} status data:\nmissed data:\n{save_data}")
|
||||
self.last_stats = time.time()
|
||||
|
||||
def run(self, *args, **kwargs):
|
||||
signal.signal(signal.SIGINT, self.stop)
|
||||
signal.signal(signal.SIGTERM, self.stop)
|
||||
|
||||
# Child should implement other things here
|
||||
|
||||
def stop(self, signum, frame):
|
||||
self.should_stop = True
|
||||
logger.warning('received {}, stopping'.format(signame(signum)))
|
||||
self.worker.on_stop()
|
||||
raise SystemExit()
|
||||
|
||||
|
||||
class AWXConsumerRedis(AWXConsumerBase):
|
||||
def run(self, *args, **kwargs):
|
||||
super(AWXConsumerRedis, self).run(*args, **kwargs)
|
||||
self.worker.on_start()
|
||||
logger.info(f'Callback receiver started with pid={os.getpid()}')
|
||||
db.connection.close() # logs use database, so close connection
|
||||
|
||||
while True:
|
||||
time.sleep(60)
|
||||
|
||||
def stop(self, signum, frame):
|
||||
logger.warning('received {}, stopping'.format(signame(signum)))
|
||||
raise SystemExit()
|
||||
|
||||
class AWXConsumerPG(AWXConsumerBase):
|
||||
def __init__(self, *args, schedule=None, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
self.pg_max_wait = getattr(settings, 'DISPATCHER_DB_DOWNTOWN_TOLLERANCE', settings.DISPATCHER_DB_DOWNTIME_TOLERANCE)
|
||||
# if no successful loops have run since startup, then we should fail right away
|
||||
self.pg_is_down = True # set so that we fail if we get database errors on startup
|
||||
init_time = time.time()
|
||||
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period
|
||||
self.last_cleanup = init_time
|
||||
self.subsystem_metrics = s_metrics.DispatcherMetrics(auto_pipe_execute=False)
|
||||
self.last_metrics_gather = init_time
|
||||
self.listen_cumulative_time = 0.0
|
||||
if schedule:
|
||||
schedule = schedule.copy()
|
||||
else:
|
||||
schedule = {}
|
||||
# add control tasks to be run on regular schedules
|
||||
# NOTE: if we run out of database connections, it is important to still run cleanup
|
||||
# so that we scale down workers and free up connections
|
||||
schedule['pool_cleanup'] = {'control': self.pool.cleanup, 'schedule': timedelta(seconds=60)}
|
||||
# record subsystem metrics for the dispatcher
|
||||
schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
|
||||
self.scheduler = Scheduler(schedule)
|
||||
|
||||
@log_excess_runtime(logger, debug_cutoff=0.05, cutoff=0.2)
|
||||
def record_metrics(self):
|
||||
current_time = time.time()
|
||||
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
|
||||
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
|
||||
try:
|
||||
self.subsystem_metrics.pipe_execute()
|
||||
except redis.exceptions.ConnectionError as exc:
|
||||
logger.warning(f'Redis connection error saving dispatcher metrics, error:\n{exc}')
|
||||
self.listen_cumulative_time = 0.0
|
||||
self.last_metrics_gather = current_time
|
||||
|
||||
def run_periodic_tasks(self):
|
||||
"""
|
||||
Run general periodic logic, and return maximum time in seconds before
|
||||
the next requested run
|
||||
This may be called more often than that when events are consumed
|
||||
so this should be very efficient in that
|
||||
"""
|
||||
try:
|
||||
self.record_statistics() # maintains time buffer in method
|
||||
except Exception as exc:
|
||||
logger.warning(f'Failed to save dispatcher statistics {exc}')
|
||||
|
||||
# Everything benchmarks to the same original time, so that skews due to
|
||||
# runtime of the actions, themselves, do not mess up scheduling expectations
|
||||
reftime = time.time()
|
||||
|
||||
for job in self.scheduler.get_and_mark_pending(reftime=reftime):
|
||||
if 'control' in job.data:
|
||||
try:
|
||||
job.data['control']()
|
||||
except Exception:
|
||||
logger.exception(f'Error running control task {job.data}')
|
||||
elif 'task' in job.data:
|
||||
body = self.worker.resolve_callable(job.data['task']).get_async_body()
|
||||
# bypasses pg_notify for scheduled tasks
|
||||
self.dispatch_task(body)
|
||||
|
||||
if self.pg_is_down:
|
||||
logger.info('Dispatcher listener connection established')
|
||||
self.pg_is_down = False
|
||||
|
||||
self.listen_start = time.time()
|
||||
|
||||
return self.scheduler.time_until_next_run(reftime=reftime)
|
||||
|
||||
def run(self, *args, **kwargs):
|
||||
super(AWXConsumerPG, self).run(*args, **kwargs)
|
||||
|
||||
logger.info(f"Running {self.name}, workers min={self.pool.min_workers} max={self.pool.max_workers}, listening to queues {self.queues}")
|
||||
init = False
|
||||
|
||||
while True:
|
||||
try:
|
||||
with pg_bus_conn(new_connection=True) as conn:
|
||||
for queue in self.queues:
|
||||
conn.listen(queue)
|
||||
if init is False:
|
||||
self.worker.on_start()
|
||||
init = True
|
||||
# run_periodic_tasks runs scheduled actions and gives the time until the next scheduled action
|
||||
# this is saved to the conn (PubSub) object in order to modify read timeout in-loop
|
||||
conn.select_timeout = self.run_periodic_tasks()
|
||||
# this is the main operational loop for awx-manage run_dispatcher
|
||||
for e in conn.events(yield_timeouts=True):
|
||||
self.listen_cumulative_time += time.time() - self.listen_start # for metrics
|
||||
if e is not None:
|
||||
self.process_task(json.loads(e.payload))
|
||||
conn.select_timeout = self.run_periodic_tasks()
|
||||
if self.should_stop:
|
||||
return
|
||||
except psycopg.InterfaceError:
|
||||
logger.warning("Stale Postgres message bus connection, reconnecting")
|
||||
continue
|
||||
except (db.DatabaseError, psycopg.OperationalError):
|
||||
# If we have attained steady state operation, tolerate short-term database hiccups
|
||||
if not self.pg_is_down:
|
||||
logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s")
|
||||
self.pg_down_time = time.time()
|
||||
self.pg_is_down = True
|
||||
current_downtime = time.time() - self.pg_down_time
|
||||
if current_downtime > self.pg_max_wait:
|
||||
logger.exception(f"Postgres event consumer has not recovered in {current_downtime} s, exiting")
|
||||
# Sending QUIT to multiprocess queue to signal workers to exit
|
||||
for worker in self.pool.workers:
|
||||
try:
|
||||
worker.quit()
|
||||
except Exception:
|
||||
logger.exception(f"Error sending QUIT to worker {worker}")
|
||||
raise
|
||||
# Wait for a second before next attempt, but still listen for any shutdown signals
|
||||
for i in range(10):
|
||||
if self.should_stop:
|
||||
return
|
||||
time.sleep(0.1)
|
||||
for conn in db.connections.all():
|
||||
conn.close_if_unusable_or_obsolete()
|
||||
except Exception:
|
||||
# Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
|
||||
logger.exception('Encountered unhandled error in dispatcher main loop')
|
||||
# Sending QUIT to multiprocess queue to signal workers to exit
|
||||
for worker in self.pool.workers:
|
||||
try:
|
||||
worker.quit()
|
||||
except Exception:
|
||||
logger.exception(f"Error sending QUIT to worker {worker}")
|
||||
raise
|
||||
|
||||
|
||||
class BaseWorker(object):
|
||||
def read(self, queue):
|
||||
return queue.get(block=True, timeout=1)
|
||||
|
||||
def work_loop(self, queue, finished, idx, *args):
|
||||
ppid = os.getppid()
|
||||
signal_handler = WorkerSignalHandler()
|
||||
set_connection_name('worker') # set application_name to distinguish from other dispatcher processes
|
||||
while not signal_handler.kill_now:
|
||||
# if the parent PID changes, this process has been orphaned
|
||||
# via e.g., segfault or sigkill, we should exit too
|
||||
if os.getppid() != ppid:
|
||||
break
|
||||
try:
|
||||
body = self.read(queue)
|
||||
if body == 'QUIT':
|
||||
break
|
||||
except QueueEmpty:
|
||||
continue
|
||||
except Exception:
|
||||
logger.exception("Exception on worker {}, reconnecting: ".format(idx))
|
||||
continue
|
||||
try:
|
||||
for conn in db.connections.all():
|
||||
# If the database connection has a hiccup during the prior message, close it
|
||||
# so we can establish a new connection
|
||||
conn.close_if_unusable_or_obsolete()
|
||||
self.perform_work(body, *args)
|
||||
except Exception:
|
||||
logger.exception(f'Unhandled exception in perform_work in worker pid={os.getpid()}')
|
||||
finally:
|
||||
if 'uuid' in body:
|
||||
uuid = body['uuid']
|
||||
finished.put(uuid)
|
||||
logger.debug('worker exiting gracefully pid:{}'.format(os.getpid()))
|
||||
|
||||
def perform_work(self, body):
|
||||
raise NotImplementedError()
|
||||
|
||||
def on_start(self):
|
||||
pass
|
||||
|
||||
def on_stop(self):
|
||||
pass
|
||||
|
||||
@@ -4,12 +4,10 @@ import os
|
||||
import signal
|
||||
import time
|
||||
import datetime
|
||||
from queue import Empty as QueueEmpty
|
||||
|
||||
from django.conf import settings
|
||||
from django.utils.functional import cached_property
|
||||
from django.utils.timezone import now as tz_now
|
||||
from django import db
|
||||
from django.db import transaction, connection as django_connection
|
||||
from django_guid import set_guid
|
||||
|
||||
@@ -17,8 +15,6 @@ import psutil
|
||||
|
||||
import redis
|
||||
|
||||
from awx.main.utils.redis import get_redis_client
|
||||
from awx.main.utils.db import set_connection_name
|
||||
from awx.main.consumers import emit_channel_notification
|
||||
from awx.main.models import JobEvent, AdHocCommandEvent, ProjectUpdateEvent, InventoryUpdateEvent, SystemJobEvent, UnifiedJob
|
||||
from awx.main.constants import ACTIVE_STATES
|
||||
@@ -26,6 +22,7 @@ from awx.main.models.events import emit_event_detail
|
||||
from awx.main.utils.profiling import AWXProfiler
|
||||
from awx.main.tasks.system import events_processed_hook
|
||||
import awx.main.analytics.subsystem_metrics as s_metrics
|
||||
from .base import BaseWorker
|
||||
|
||||
logger = logging.getLogger('awx.main.commands.run_callback_receiver')
|
||||
|
||||
@@ -56,17 +53,7 @@ def job_stats_wrapup(job_identifier, event=None):
|
||||
logger.exception('Worker failed to save stats or emit notifications: Job {}'.format(job_identifier))
|
||||
|
||||
|
||||
class WorkerSignalHandler:
|
||||
def __init__(self):
|
||||
self.kill_now = False
|
||||
signal.signal(signal.SIGTERM, signal.SIG_DFL)
|
||||
signal.signal(signal.SIGINT, self.exit_gracefully)
|
||||
|
||||
def exit_gracefully(self, *args, **kwargs):
|
||||
self.kill_now = True
|
||||
|
||||
|
||||
class CallbackBrokerWorker:
|
||||
class CallbackBrokerWorker(BaseWorker):
|
||||
"""
|
||||
A worker implementation that deserializes callback event data and persists
|
||||
it into the database.
|
||||
@@ -85,7 +72,7 @@ class CallbackBrokerWorker:
|
||||
|
||||
def __init__(self):
|
||||
self.buff = {}
|
||||
self.redis = get_redis_client()
|
||||
self.redis = redis.Redis.from_url(settings.BROKER_URL)
|
||||
self.subsystem_metrics = s_metrics.CallbackReceiverMetrics(auto_pipe_execute=False)
|
||||
self.queue_pop = 0
|
||||
self.queue_name = settings.CALLBACK_QUEUE
|
||||
@@ -98,7 +85,7 @@ class CallbackBrokerWorker:
|
||||
"""This needs to be obtained after forking, or else it will give the parent process"""
|
||||
return os.getpid()
|
||||
|
||||
def read(self):
|
||||
def read(self, queue):
|
||||
has_redis_error = False
|
||||
try:
|
||||
res = self.redis.blpop(self.queue_name, timeout=1)
|
||||
@@ -161,37 +148,10 @@ class CallbackBrokerWorker:
|
||||
filepath = self.prof.stop()
|
||||
logger.error(f'profiling is disabled, wrote {filepath}')
|
||||
|
||||
def work_loop(self, idx, *args):
|
||||
def work_loop(self, *args, **kw):
|
||||
if settings.AWX_CALLBACK_PROFILE:
|
||||
signal.signal(signal.SIGUSR1, self.toggle_profiling)
|
||||
|
||||
ppid = os.getppid()
|
||||
signal_handler = WorkerSignalHandler()
|
||||
set_connection_name('worker') # set application_name to distinguish from other dispatcher processes
|
||||
while not signal_handler.kill_now:
|
||||
# if the parent PID changes, this process has been orphaned
|
||||
# via e.g., segfault or sigkill, we should exit too
|
||||
if os.getppid() != ppid:
|
||||
break
|
||||
try:
|
||||
body = self.read() # this is only for the callback, only reading from redis.
|
||||
if body == 'QUIT':
|
||||
break
|
||||
except QueueEmpty:
|
||||
continue
|
||||
except Exception:
|
||||
logger.exception("Exception on worker {}, reconnecting: ".format(idx))
|
||||
continue
|
||||
try:
|
||||
for conn in db.connections.all():
|
||||
# If the database connection has a hiccup during the prior message, close it
|
||||
# so we can establish a new connection
|
||||
conn.close_if_unusable_or_obsolete()
|
||||
self.perform_work(body, *args)
|
||||
except Exception:
|
||||
logger.exception(f'Unhandled exception in perform_work in worker pid={os.getpid()}')
|
||||
|
||||
logger.debug('worker exiting gracefully pid:{}'.format(os.getpid()))
|
||||
return super(CallbackBrokerWorker, self).work_loop(*args, **kw)
|
||||
|
||||
def flush(self, force=False):
|
||||
now = tz_now()
|
||||
|
||||
@@ -1,49 +1,144 @@
|
||||
import inspect
|
||||
import logging
|
||||
import importlib
|
||||
import sys
|
||||
import traceback
|
||||
import time
|
||||
|
||||
from kubernetes.config import kube_config
|
||||
|
||||
from django.conf import settings
|
||||
from django_guid import set_guid
|
||||
|
||||
from awx.main.tasks.system import dispatch_startup, inform_cluster_of_shutdown
|
||||
|
||||
from .base import BaseWorker
|
||||
|
||||
logger = logging.getLogger('awx.main.dispatch')
|
||||
|
||||
|
||||
def resolve_callable(task):
|
||||
class TaskWorker(BaseWorker):
|
||||
"""
|
||||
Transform a dotted notation task into an imported, callable function, e.g.,
|
||||
awx.main.tasks.system.delete_inventory
|
||||
awx.main.tasks.jobs.RunProjectUpdate
|
||||
"""
|
||||
if not task.startswith('awx.'):
|
||||
raise ValueError('{} is not a valid awx task'.format(task))
|
||||
module, target = task.rsplit('.', 1)
|
||||
module = importlib.import_module(module)
|
||||
_call = None
|
||||
if hasattr(module, target):
|
||||
_call = getattr(module, target, None)
|
||||
if not (hasattr(_call, 'apply_async') and hasattr(_call, 'delay')):
|
||||
raise ValueError('{} is not decorated with @task()'.format(task))
|
||||
return _call
|
||||
A worker implementation that deserializes task messages and runs native
|
||||
Python code.
|
||||
|
||||
The code that *builds* these types of messages is found in
|
||||
`awx.main.dispatch.publish`.
|
||||
"""
|
||||
|
||||
def run_callable(body):
|
||||
"""
|
||||
Given some AMQP message, import the correct Python code and run it.
|
||||
"""
|
||||
task = body['task']
|
||||
uuid = body.get('uuid', '<unknown>')
|
||||
args = body.get('args', [])
|
||||
kwargs = body.get('kwargs', {})
|
||||
if 'guid' in body:
|
||||
set_guid(body.pop('guid'))
|
||||
_call = resolve_callable(task)
|
||||
log_extra = ''
|
||||
logger_method = logger.debug
|
||||
if 'time_pub' in body:
|
||||
time_publish = time.time() - body['time_pub']
|
||||
if time_publish > 5.0:
|
||||
# If the task took a very long time to process, add this information to the log
|
||||
log_extra = f' took {time_publish:.4f} to send message'
|
||||
logger_method = logger.info
|
||||
# don't print kwargs, they often contain launch-time secrets
|
||||
logger_method(f'task {uuid} starting {task}(*{args}){log_extra}')
|
||||
return _call(*args, **kwargs)
|
||||
@staticmethod
|
||||
def resolve_callable(task):
|
||||
"""
|
||||
Transform a dotted notation task into an imported, callable function, e.g.,
|
||||
|
||||
awx.main.tasks.system.delete_inventory
|
||||
awx.main.tasks.jobs.RunProjectUpdate
|
||||
"""
|
||||
if not task.startswith('awx.'):
|
||||
raise ValueError('{} is not a valid awx task'.format(task))
|
||||
module, target = task.rsplit('.', 1)
|
||||
module = importlib.import_module(module)
|
||||
_call = None
|
||||
if hasattr(module, target):
|
||||
_call = getattr(module, target, None)
|
||||
if not (hasattr(_call, 'apply_async') and hasattr(_call, 'delay')):
|
||||
raise ValueError('{} is not decorated with @task()'.format(task))
|
||||
|
||||
return _call
|
||||
|
||||
@staticmethod
|
||||
def run_callable(body):
|
||||
"""
|
||||
Given some AMQP message, import the correct Python code and run it.
|
||||
"""
|
||||
task = body['task']
|
||||
uuid = body.get('uuid', '<unknown>')
|
||||
args = body.get('args', [])
|
||||
kwargs = body.get('kwargs', {})
|
||||
if 'guid' in body:
|
||||
set_guid(body.pop('guid'))
|
||||
_call = TaskWorker.resolve_callable(task)
|
||||
if inspect.isclass(_call):
|
||||
# the callable is a class, e.g., RunJob; instantiate and
|
||||
# return its `run()` method
|
||||
_call = _call().run
|
||||
|
||||
log_extra = ''
|
||||
logger_method = logger.debug
|
||||
if ('time_ack' in body) and ('time_pub' in body):
|
||||
time_publish = body['time_ack'] - body['time_pub']
|
||||
time_waiting = time.time() - body['time_ack']
|
||||
if time_waiting > 5.0 or time_publish > 5.0:
|
||||
# If the task took a very long time to process, add this information to the log
|
||||
log_extra = f' took {time_publish:.4f} to ack, {time_waiting:.4f} in local dispatcher'
|
||||
logger_method = logger.info
|
||||
# don't print kwargs, they often contain launch-time secrets
|
||||
logger_method(f'task {uuid} starting {task}(*{args}){log_extra}')
|
||||
|
||||
return _call(*args, **kwargs)
|
||||
|
||||
def perform_work(self, body):
|
||||
"""
|
||||
Import and run code for a task e.g.,
|
||||
|
||||
body = {
|
||||
'args': [8],
|
||||
'callbacks': [{
|
||||
'args': [],
|
||||
'kwargs': {}
|
||||
'task': u'awx.main.tasks.system.handle_work_success'
|
||||
}],
|
||||
'errbacks': [{
|
||||
'args': [],
|
||||
'kwargs': {},
|
||||
'task': 'awx.main.tasks.system.handle_work_error'
|
||||
}],
|
||||
'kwargs': {},
|
||||
'task': u'awx.main.tasks.jobs.RunProjectUpdate'
|
||||
}
|
||||
"""
|
||||
settings.__clean_on_fork__()
|
||||
result = None
|
||||
try:
|
||||
result = self.run_callable(body)
|
||||
except Exception as exc:
|
||||
result = exc
|
||||
|
||||
try:
|
||||
if getattr(exc, 'is_awx_task_error', False):
|
||||
# Error caused by user / tracked in job output
|
||||
logger.warning("{}".format(exc))
|
||||
else:
|
||||
task = body['task']
|
||||
args = body.get('args', [])
|
||||
kwargs = body.get('kwargs', {})
|
||||
logger.exception('Worker failed to run task {}(*{}, **{})'.format(task, args, kwargs))
|
||||
except Exception:
|
||||
# It's fairly critical that this code _not_ raise exceptions on logging
|
||||
# If you configure external logging in a way that _it_ fails, there's
|
||||
# not a lot we can do here; sys.stderr.write is a final hail mary
|
||||
_, _, tb = sys.exc_info()
|
||||
traceback.print_tb(tb)
|
||||
|
||||
for callback in body.get('errbacks', []) or []:
|
||||
callback['uuid'] = body['uuid']
|
||||
self.perform_work(callback)
|
||||
finally:
|
||||
# It's frustrating that we have to do this, but the python k8s
|
||||
# client leaves behind cacert files in /tmp, so we must clean up
|
||||
# the tmpdir per-dispatcher process every time a new task comes in
|
||||
try:
|
||||
kube_config._cleanup_temp_files()
|
||||
except Exception:
|
||||
logger.exception('failed to cleanup k8s client tmp files')
|
||||
|
||||
for callback in body.get('callbacks', []) or []:
|
||||
callback['uuid'] = body['uuid']
|
||||
self.perform_work(callback)
|
||||
return result
|
||||
|
||||
def on_start(self):
|
||||
dispatch_startup()
|
||||
|
||||
def on_stop(self):
|
||||
inform_cluster_of_shutdown()
|
||||
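# Illustrative sketch (not part of this changeset): a hand-built body flows through the
# worker as perform_work() -> run_callable() -> resolve_callable(), roughly:
#
#     body = {'uuid': str(uuid4()), 'args': [], 'kwargs': {},
#             'task': 'awx.main.tasks.system.purge_old_stdout_files'}  # hypothetical task path
#     TaskWorker().perform_work(body)  # imports the dotted path and invokes the callable
#
# resolve_callable() refuses anything that is not an awx.* dotted path or is not decorated
# with @task(), so arbitrary strings arriving from the message bus cannot be executed.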
|
||||
@@ -40,6 +40,7 @@ from awx.main.validators import validate_ssh_private_key
|
||||
from awx.main.constants import ENV_BLOCKLIST
|
||||
from awx.main import utils
|
||||
|
||||
|
||||
__all__ = [
|
||||
'JSONBlob',
|
||||
'AutoOneToOneField',
|
||||
|
||||
@@ -23,7 +23,8 @@ class Command(BaseCommand):
|
||||
|
||||
print('## ' + JobTemplate.objects.get(pk=jt).name + f' (last {history} runs)\n')
|
||||
with connection.cursor() as cursor:
|
||||
cursor.execute(f'''
|
||||
cursor.execute(
|
||||
f'''
|
||||
SELECT
|
||||
b.id, b.job_id, b.host_name, b.created - a.created delta,
|
||||
b.task task,
|
||||
@@ -43,7 +44,8 @@ class Command(BaseCommand):
|
||||
LIMIT {history}
|
||||
)
|
||||
ORDER BY delta DESC;
|
||||
''')
|
||||
'''
|
||||
)
|
||||
slowest_events = cursor.fetchall()
|
||||
|
||||
def format_td(x):
|
||||
|
||||
@@ -5,6 +5,7 @@
|
||||
import datetime
|
||||
import logging
|
||||
|
||||
|
||||
# Django
|
||||
from django.core.management.base import BaseCommand
|
||||
from django.utils.timezone import now
|
||||
|
||||
@@ -4,8 +4,10 @@
|
||||
# Python
|
||||
import datetime
|
||||
import logging
|
||||
import pytz
|
||||
import re
|
||||
|
||||
|
||||
# Django
|
||||
from django.apps import apps
|
||||
from django.core.management.base import BaseCommand, CommandError
|
||||
@@ -41,7 +43,7 @@ def partition_name_dt(part_name):
|
||||
if not m:
|
||||
return m
|
||||
dt_str = f"{m.group(3)}_{m.group(4)}"
|
||||
dt = datetime.datetime.strptime(dt_str, '%Y%m%d_%H').replace(tzinfo=datetime.timezone.utc)
|
||||
dt = datetime.datetime.strptime(dt_str, '%Y%m%d_%H').replace(tzinfo=pytz.UTC)
|
||||
return dt
|
||||
|
||||
|
||||
|
||||
@@ -1,88 +0,0 @@
import argparse
import inspect
import logging
import os
import sys

import yaml

from django.core.management.base import BaseCommand, CommandError
from django.db import connection

from dispatcherd.cli import (
    CONTROL_ARG_SCHEMAS,
    DEFAULT_CONFIG_FILE,
    _base_cli_parent,
    _control_common_parent,
    _register_control_arguments,
    _build_command_data_from_args,
)
from dispatcherd.config import setup as dispatcher_setup
from dispatcherd.factories import get_control_from_settings
from dispatcherd.service import control_tasks

from awx.main.dispatch.config import get_dispatcherd_config
from awx.main.management.commands.dispatcherd import ensure_no_dispatcherd_env_config

logger = logging.getLogger(__name__)


class Command(BaseCommand):
    help = 'Dispatcher control operations'

    def add_arguments(self, parser):
        parser.description = 'Run dispatcherd control commands using awx-manage.'
        base_parent = _base_cli_parent()
        control_parent = _control_common_parent()
        parser._add_container_actions(base_parent)
        parser._add_container_actions(control_parent)

        subparsers = parser.add_subparsers(dest='command', metavar='command')
        subparsers.required = True
        shared_parents = [base_parent, control_parent]
        for command in control_tasks.__all__:
            func = getattr(control_tasks, command, None)
            doc = inspect.getdoc(func) or ''
            summary = doc.splitlines()[0] if doc else None
            command_parser = subparsers.add_parser(
                command,
                help=summary,
                description=doc,
                parents=shared_parents,
            )
            _register_control_arguments(command_parser, CONTROL_ARG_SCHEMAS.get(command))

    def handle(self, *args, **options):
        command = options.pop('command', None)
        if not command:
            raise CommandError('No dispatcher control command specified')

        for django_opt in ('verbosity', 'traceback', 'no_color', 'force_color', 'skip_checks'):
            options.pop(django_opt, None)

        log_level = options.pop('log_level', 'DEBUG')
        config_path = os.path.abspath(options.pop('config', DEFAULT_CONFIG_FILE))
        expected_replies = options.pop('expected_replies', 1)

        logging.basicConfig(level=getattr(logging, log_level), stream=sys.stdout)
        logger.debug(f"Configured standard out logging at {log_level} level")

        default_config = os.path.abspath(DEFAULT_CONFIG_FILE)
        ensure_no_dispatcherd_env_config()
        if config_path != default_config:
            raise CommandError('The config path CLI option is not allowed for the awx-manage command')
        if connection.vendor == 'sqlite':
            raise CommandError('dispatcherctl is not supported with sqlite3; use a PostgreSQL database')
        else:
            logger.info('Using config generated from awx.main.dispatch.config.get_dispatcherd_config')
            dispatcher_setup(get_dispatcherd_config())

        schema_namespace = argparse.Namespace(**options)
        data = _build_command_data_from_args(schema_namespace, command)

        ctl = get_control_from_settings()
        returned = ctl.control_with_reply(command, data=data, expected_replies=expected_replies)
        self.stdout.write(yaml.dump(returned, default_flow_style=False))
        if len(returned) < expected_replies:
            logger.error(f'Obtained only {len(returned)} of {expected_replies}, exiting with non-zero code')
            raise CommandError('dispatcherctl returned fewer replies than expected')

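For context on the removed dispatcherctl command above: handle() ultimately funnels the parsed CLI options into dispatcherd's control_with_reply call. A minimal sketch of that same pattern used directly from Python, under stated assumptions: dispatcher_setup(get_dispatcherd_config()) has already run, and 'running' is one of the control task names exported by dispatcherd.service.control_tasks.__all__ (both are assumptions, not taken from this diff):

from dispatcherd.factories import get_control_from_settings

# Requires dispatcher_setup(...) to have been called first, mirroring
# the dispatcher_setup(get_dispatcherd_config()) call in handle().
ctl = get_control_from_settings()

# 'running' is an assumed control task name; in the command, data is built by
# _build_command_data_from_args -- an empty dict stands in for it here.
replies = ctl.control_with_reply('running', data={}, expected_replies=1)
for reply in replies:
    print(reply)
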
@@ -1,85 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
import copy
import hashlib
import json
import logging
import logging.config
import os

from django.conf import settings
from django.core.cache import cache as django_cache
from django.core.management.base import BaseCommand, CommandError
from django.db import connection

from dispatcherd.config import setup as dispatcher_setup

from awx.main.dispatch.config import get_dispatcherd_config

logger = logging.getLogger('awx.main.dispatch')


from dispatcherd import run_service


def _json_default(value):
    if isinstance(value, set):
        return sorted(value)
    if isinstance(value, tuple):
        return list(value)
    return str(value)


def _hash_config(config):
    serialized = json.dumps(config, sort_keys=True, separators=(',', ':'), default=_json_default)
    return hashlib.sha256(serialized.encode('utf-8')).hexdigest()


def ensure_no_dispatcherd_env_config():
    if os.getenv('DISPATCHERD_CONFIG_FILE'):
        raise CommandError('DISPATCHERD_CONFIG_FILE is set but awx-manage dispatcherd uses dynamic config from code')


class Command(BaseCommand):
    help = (
        'Run the background task service, this is the supported entrypoint since the introduction of dispatcherd as a library. '
        'This replaces the prior awx-manage run_dispatcher service, and control actions are at awx-manage dispatcherctl.'
    )

    def add_arguments(self, parser):
        return

    def handle(self, *arg, **options):
        ensure_no_dispatcherd_env_config()

        self.configure_dispatcher_logging()
        config = get_dispatcherd_config(for_service=True)
        config_hash = _hash_config(config)
        logger.info(
            'Using dispatcherd config generated from awx.main.dispatch.config.get_dispatcherd_config (sha256=%s)',
            config_hash,
        )

        # Close the connection, because the pg_notify broker will create new async connection
        connection.close()
        django_cache.close()
        dispatcher_setup(config)

        run_service()

    def configure_dispatcher_logging(self):
        # Apply special log rule for the parent process
        special_logging = copy.deepcopy(settings.LOGGING)
        changed_handlers = []
        for handler_name, handler_config in special_logging.get('handlers', {}).items():
            filters = handler_config.get('filters', [])
            if 'dynamic_level_filter' in filters:
                handler_config['filters'] = [flt for flt in filters if flt != 'dynamic_level_filter']
                changed_handlers.append(handler_name)
        logger.info(f'Dispatcherd main process replaced log level filter for handlers: {changed_handlers}')

        # Apply the custom logging level here, before the asyncio code starts
        special_logging.setdefault('loggers', {}).setdefault('dispatcherd', {})
        special_logging['loggers']['dispatcherd']['level'] = settings.LOG_AGGREGATOR_LEVEL

        logging.config.dictConfig(special_logging)

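The _hash_config helper in the removed file exists so that logically equal configs always hash to the same sha256, even when they contain sets or dicts whose keys arrive in a different order. A small standalone illustration of that property (the 'brokers'/'workers' keys are invented for the example, not taken from the real AWX dispatcherd config):

import hashlib
import json


def _json_default(value):
    # Same fallbacks as the command: sort sets for determinism, stringify anything else.
    if isinstance(value, set):
        return sorted(value)
    if isinstance(value, tuple):
        return list(value)
    return str(value)


def _hash_config(config):
    serialized = json.dumps(config, sort_keys=True, separators=(',', ':'), default=_json_default)
    return hashlib.sha256(serialized.encode('utf-8')).hexdigest()


config_a = {'workers': {3, 1, 2}, 'brokers': ['pg_notify']}
config_b = {'brokers': ['pg_notify'], 'workers': {2, 1, 3}}
assert _hash_config(config_a) == _hash_config(config_b)  # equal configs, equal hash
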
@@ -1,9 +1,9 @@
import datetime
import logging

from awx.main import analytics
from dateutil import parser
from django.core.management.base import BaseCommand
from django.utils import timezone


class Command(BaseCommand):
@@ -38,10 +38,10 @@ class Command(BaseCommand):

        since = parser.parse(opt_since) if opt_since else None
        if since and since.tzinfo is None:
            since = since.replace(tzinfo=datetime.timezone.utc)
            since = since.replace(tzinfo=timezone.utc)
        until = parser.parse(opt_until) if opt_until else None
        if until and until.tzinfo is None:
            until = until.replace(tzinfo=datetime.timezone.utc)
            until = until.replace(tzinfo=timezone.utc)

        if opt_ship and opt_dry_run:
            self.logger.error('Both --ship and --dry-run cannot be processed at the same time.')

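The hunk above only swaps which UTC constant is attached when --since/--until values parse as naive datetimes; already-aware values are left untouched. A quick standalone sanity sketch of that attach-UTC-only-if-naive logic (the sample date strings are illustrative, not from AWX):

import datetime

from dateutil import parser

for raw in ('2024-01-01', '2024-01-01T00:00:00+02:00'):
    value = parser.parse(raw)
    if value.tzinfo is None:
        # Naive input: treat it as UTC, as the command does.
        value = value.replace(tzinfo=datetime.timezone.utc)
    print(raw, '->', value.isoformat())
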
Some files were not shown because too many files have changed in this diff.