Compare commits


1 Commit

Author: Luiz Costa | SHA1: 9e7486b024 | Message: WIP Makefile | Date: 2022-11-16 16:04:12 -03:00
984 changed files with 223516 additions and 24864 deletions


@@ -53,16 +53,6 @@ https://github.com/ansible/awx/#get-involved \
 Thank you once again for this and your interest in AWX!
-### Red Hat Support Team
- Hi! \
-\
-It appears that you are using an RPM build for RHEL. Please reach out to the Red Hat support team and submit a ticket. \
-\
-Here is the link to do so: \
-\
-https://access.redhat.com/support \
-\
-Thank you for your submission and for supporting AWX!
 ## Common
@@ -106,13 +96,6 @@ The Ansible Community is looking at building an EE that corresponds to all of th
 ### Oracle AWX
 We'd be happy to help if you can reproduce this with AWX since we do not have Oracle's Linux Automation Manager. If you need help with this specific version of Oracles Linux Automation Manager you will need to contact your Oracle for support.
-### Community Resolved
-Hi,
-We are happy to see that it appears a fix has been provided for your issue, so we will go ahead and close this ticket. Please feel free to reopen if any other problems arise.
-<name of community member who helped> thanks so much for taking the time to write a thoughtful and helpful response to this issue!
 ### AWX Release
 Subject: Announcing AWX Xa.Ya.za and AWX-Operator Xb.Yb.zb


@@ -1,10 +1,7 @@
--- ---
name: CI name: CI
env: env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting BRANCH: ${{ github.base_ref || 'devel' }}
CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEV_DOCKER_TAG_BASE: ghcr.io/${{ github.repository_owner }}
COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
on: on:
pull_request: pull_request:
jobs: jobs:
@@ -20,33 +17,85 @@ jobs:
tests: tests:
- name: api-test - name: api-test
command: /start_tests.sh command: /start_tests.sh
label: Run API Tests
- name: api-lint - name: api-lint
command: /var/lib/awx/venv/awx/bin/tox -e linters command: /var/lib/awx/venv/awx/bin/tox -e linters
label: Run API Linters
- name: api-swagger - name: api-swagger
command: /start_tests.sh swagger command: /start_tests.sh swagger
label: Generate API Reference
- name: awx-collection - name: awx-collection
command: /start_tests.sh test_collection_all command: /start_tests.sh test_collection_all
label: Run Collection Tests
- name: api-schema - name: api-schema
label: Check API Schema
command: /start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }} command: /start_tests.sh detect-schema-change SCHEMA_DIFF_BASE_BRANCH=${{ github.event.pull_request.base.ref }}
- name: ui-lint - name: ui-lint
label: Run UI Linters
command: make ui-lint command: make ui-lint
- name: ui-test-screens - name: ui-test-screens
label: Run UI Screens Tests
command: make ui-test-screens command: make ui-test-screens
- name: ui-test-general - name: ui-test-general
label: Run UI General Tests
command: make ui-test-general command: make ui-test-general
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
- name: Run check ${{ matrix.tests.name }} - name: Get python version from Makefile
run: AWX_DOCKER_CMD='${{ matrix.tests.command }}' make github_ci_runner run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: ${{ matrix.texts.label }}
run: |
docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
--workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} ${{ matrix.tests.command }}
dev-env: dev-env:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Log in to registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Pre-pull image to warm build cache
run: |
docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
- name: Build image
run: |
DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
- name: Run smoke test - name: Run smoke test
run: make github_ci_setup && ansible-playbook tools/docker-compose/ansible/smoke-test.yml -v run: |
export DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }}
export COMPOSE_TAG=${{ env.BRANCH }}
ansible-playbook tools/docker-compose/ansible/smoke-test.yml -e repo_dir=$(pwd) -v
awx-operator: awx-operator:
runs-on: ubuntu-latest runs-on: ubuntu-latest
@@ -95,22 +144,3 @@ jobs:
env: env:
AWX_TEST_IMAGE: awx AWX_TEST_IMAGE: awx
AWX_TEST_VERSION: ci AWX_TEST_VERSION: ci
collection-sanity:
name: awx_collection sanity
runs-on: ubuntu-latest
strategy:
fail-fast: false
steps:
- uses: actions/checkout@v2
# The containers that GitHub Actions use have Ansible installed, so upgrade to make sure we have the latest version.
- name: Upgrade ansible-core
run: python3 -m pip install --upgrade ansible-core
- name: Run sanity tests
run: make test_collection_sanity
env:
# needed due to cgroupsv2. This is fixed, but a stable release
# with the fix has not been made yet.
ANSIBLE_TEST_PREFER_PODMAN: 1


@@ -1,13 +1,10 @@
 ---
 name: Build/Push Development Images
-env:
-LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
 on:
 push:
 branches:
 - devel
 - release_*
-- feature_*
 jobs:
 push:
 if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
@@ -21,12 +18,6 @@ jobs:
 - name: Get python version from Makefile
 run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
-- name: Set lower case owner name
-run: |
-echo "OWNER_LC=${OWNER,,}" >>${GITHUB_ENV}
-env:
-OWNER: '${{ github.repository_owner }}'
 - name: Install python ${{ env.py_version }}
 uses: actions/setup-python@v2
 with:
@@ -38,18 +29,15 @@ jobs:
 - name: Pre-pull image to warm build cache
 run: |
-docker pull ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/} || :
-docker pull ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/} || :
-docker pull ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/} || :
+docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
+docker pull ghcr.io/${{ github.repository_owner }}/awx_kube_devel:${GITHUB_REF##*/} || :
 - name: Build images
 run: |
-DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
-DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
-DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
+DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
+DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
 - name: Push image
 run: |
-docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
-docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
-docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
+docker push ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/}
+docker push ghcr.io/${{ github.repository_owner }}/awx_kube_devel:${GITHUB_REF##*/}


@@ -1,12 +1,9 @@
 ---
 name: E2E Tests
-env:
-LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
 on:
 pull_request_target:
 types: [labeled]
 jobs:
 e2e-test:
 if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
 runs-on: ubuntu-latest
@@ -107,3 +104,5 @@ jobs:
 with:
 name: AWX-logs-${{ matrix.job }}
 path: make-docker-compose-output.log


@@ -1,7 +1,5 @@
 ---
 name: Feature branch deletion cleanup
-env:
-LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
 on:
 delete:
 branches:


@@ -17,9 +17,9 @@ jobs:
 env:
 PR_BODY: ${{ github.event.pull_request.body }}
 run: |
-echo "$PR_BODY" | grep "Bug, Docs Fix or other nominal change" > Z
-echo "$PR_BODY" | grep "New or Enhanced Feature" > Y
-echo "$PR_BODY" | grep "Breaking Change" > X
+echo $PR_BODY | grep "Bug, Docs Fix or other nominal change" > Z
+echo $PR_BODY | grep "New or Enhanced Feature" > Y
+echo $PR_BODY | grep "Breaking Change" > X
 exit 0
 # We exit 0 and set the shell to prevent the returns from the greps from failing this step
 # See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference


@@ -1,16 +1,11 @@
 ---
 name: Promote Release
-env:
-LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
 on:
 release:
 types: [published]
 jobs:
 promote:
-if: endsWith(github.repository, '/awx')
 runs-on: ubuntu-latest
 steps:
 - name: Checkout awx
@@ -39,13 +34,9 @@ jobs:
 - name: Build collection and publish to galaxy
 run: |
 COLLECTION_TEMPLATE_VERSION=true COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
-if [ "$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz | tail -1)" == "302" ] ; then \
-echo "Galaxy release already done"; \
-else \
-ansible-galaxy collection publish \
---token=${{ secrets.GALAXY_TOKEN }} \
-awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz; \
-fi
+ansible-galaxy collection publish \
+--token=${{ secrets.GALAXY_TOKEN }} \
+awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz
 - name: Set official pypi info
 run: echo pypi_repo=pypi >> $GITHUB_ENV
@@ -57,7 +48,6 @@ jobs:
 - name: Build awxkit and upload to pypi
 run: |
-git reset --hard
 cd awxkit && python3 setup.py bdist_wheel
 twine upload \
 -r ${{ env.pypi_repo }} \
@@ -80,6 +70,4 @@ jobs:
 docker tag ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }} quay.io/${{ github.repository }}:latest
 docker push quay.io/${{ github.repository }}:${{ github.event.release.tag_name }}
 docker push quay.io/${{ github.repository }}:latest
-docker pull ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
-docker tag ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }} quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}
-docker push quay.io/${{ github.repository_owner }}/awx-ee:${{ github.event.release.tag_name }}


@@ -1,9 +1,5 @@
 ---
 name: Stage Release
-env:
-LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
 on:
 workflow_dispatch:
 inputs:
@@ -21,7 +17,6 @@ on:
 jobs:
 stage:
-if: endsWith(github.repository, '/awx')
 runs-on: ubuntu-latest
 permissions:
 packages: write
@@ -85,20 +80,6 @@ jobs:
 -e push=yes \
 -e awx_official=yes
-- name: Log in to GHCR
-run: |
-echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
-- name: Log in to Quay
-run: |
-echo ${{ secrets.QUAY_TOKEN }} | docker login quay.io -u ${{ secrets.QUAY_USER }} --password-stdin
-- name: tag awx-ee:latest with version input
-run: |
-docker pull quay.io/ansible/awx-ee:latest
-docker tag quay.io/ansible/awx-ee:latest ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
-docker push ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
 - name: Build and stage awx-operator
 working-directory: awx-operator
 run: |
@@ -118,7 +99,6 @@ jobs:
 env:
 AWX_TEST_IMAGE: ${{ github.repository }}
 AWX_TEST_VERSION: ${{ github.event.inputs.version }}
-AWX_EE_TEST_IMAGE: ghcr.io/${{ github.repository_owner }}/awx-ee:${{ github.event.inputs.version }}
 - name: Create draft release for AWX
 working-directory: awx


@@ -1,9 +1,5 @@
 ---
 name: Upload API Schema
-env:
-LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
 on:
 push:
 branches:


@@ -12,7 +12,7 @@ recursive-include awx/plugins *.ps1
 recursive-include requirements *.txt
 recursive-include requirements *.yml
 recursive-include config *
-recursive-include licenses *
+recursive-include docs/licenses *
 recursive-exclude awx devonly.py*
 recursive-exclude awx/api/tests *
 recursive-exclude awx/main/tests *

Makefile

@@ -1,5 +1,4 @@
 PYTHON ?= python3.9
-DOCKER_COMPOSE ?= docker-compose
 OFFICIAL ?= no
 NODE ?= node
 NPM_BIN ?= npm
@@ -7,20 +6,7 @@ CHROMIUM_BIN=/tmp/chrome-linux/chrome
 GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
 MANAGEMENT_COMMAND ?= awx-manage
 VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py)
-# ansible-test requires semver compatable version, so we allow overrides to hack it
-COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
-# args for the ansible-test sanity command
-COLLECTION_SANITY_ARGS ?= --docker
-# collection unit testing directories
-COLLECTION_TEST_DIRS ?= awx_collection/test/awx
-# collection integration test directories (defaults to all)
-COLLECTION_TEST_TARGET ?=
-# args for collection install
-COLLECTION_PACKAGE ?= awx
-COLLECTION_NAMESPACE ?= awx
-COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
-COLLECTION_TEMPLATE_VERSION ?= false
+COLLECTION_VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
 # NOTE: This defaults the container image version to the branch that's active
 COMPOSE_TAG ?= $(GIT_BRANCH)
@@ -48,7 +34,7 @@ RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
 SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
 # These should be upgraded in the AWX and Ansible venv before attempting
 # to install the actual requirements
-VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==7.0.5 wheel==0.38.4
+VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==58.2.0 setuptools_scm[toml]==6.4.2 wheel==0.36.2
 NAME ?= awx
@@ -66,48 +52,7 @@ I18N_FLAG_FILE = .i18n_built
 sdist \
 ui-release ui-devel \
 VERSION PYTHON_VERSION docker-compose-sources \
-.git/hooks/pre-commit github_ci_setup github_ci_runner
+.git/hooks/pre-commit
clean-tmp:
rm -rf tmp/
clean-venv:
rm -rf venv/
clean-dist:
rm -rf dist
clean-schema:
rm -rf swagger.json
rm -rf schema.json
rm -rf reference-schema.json
clean-languages:
rm -f $(I18N_FLAG_FILE)
find ./awx/locale/ -type f -regex ".*\.mo$" -delete
## Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
rm -rf awx/public
rm -rf awx/lib/site-packages
rm -rf awx/job_status
rm -rf awx/job_output
rm -rf reports
rm -rf tmp
rm -rf $(I18N_FLAG_FILE)
mkdir tmp
clean-api:
rm -rf build $(NAME)-$(VERSION) *.egg-info
rm -rf .tox
find . -type f -regex ".*\.py[co]$$" -delete
find . -type d -name "__pycache__" -delete
rm -f awx/awx_test.sqlite3*
rm -rf requirements/vendor
rm -rf awx/projects
clean-awxkit:
rm -rf awxkit/*.egg-info awxkit/.tox awxkit/build/*
## convenience target to assert environment variables are defined ## convenience target to assert environment variables are defined
guard-%: guard-%:
@@ -204,7 +149,19 @@ uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \ @if [ "$(VENV_BASE)" ]; then \
. $(VENV_BASE)/awx/bin/activate; \ . $(VENV_BASE)/awx/bin/activate; \
fi; \ fi; \
uwsgi /etc/tower/uwsgi.ini uwsgi -b 32768 \
--socket 127.0.0.1:8050 \
--module=awx.wsgi:application \
--home=/var/lib/awx/venv/awx \
--chdir=/awx_devel/ \
--vacuum \
--processes=5 \
--harakiri=120 --master \
--no-orphans \
--max-requests=1000 \
--stats /tmp/stats.socket \
--lazy-apps \
--logformat "%(addr) %(method) %(uri) - %(proto) %(status)"
awx-autoreload: awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)" @/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
@@ -290,28 +247,19 @@ test:
 cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
 awx-manage check_migrations --dry-run --check -n 'missing_migration_file'
-## Login to Github container image registry, pull image, then build image.
-github_ci_setup:
-# GITHUB_ACTOR is automatic github actions env var
-# CI_GITHUB_TOKEN is defined in .github files
-echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
-docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
-make docker-compose-build
-## Runs AWX_DOCKER_CMD inside a new docker container.
-docker-runner:
-docker run -u $(shell id -u) --rm -v $(shell pwd):/awx_devel/:Z --workdir=/awx_devel $(DEVEL_IMAGE_NAME) $(AWX_DOCKER_CMD)
-## Builds image and runs AWX_DOCKER_CMD in it, mainly for .github checks.
-github_ci_runner: github_ci_setup docker-runner
+COLLECTION_TEST_DIRS ?= awx_collection/test/awx
+COLLECTION_TEST_TARGET ?=
+COLLECTION_PACKAGE ?= awx
+COLLECTION_NAMESPACE ?= awx
+COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)
+COLLECTION_TEMPLATE_VERSION ?= false
 test_collection:
 rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
 if [ "$(VENV_BASE)" ]; then \
 . $(VENV_BASE)/awx/bin/activate; \
 fi && \
-if ! [ -x "$(shell command -v ansible-playbook)" ]; then pip install ansible-core; fi
+pip install ansible-core && \
+ansible --version
 py.test $(COLLECTION_TEST_DIRS) -v
 # The python path needs to be modified so that the tests can find Ansible within the container
 # First we will use anything expility set as PYTHONPATH
@@ -341,13 +289,8 @@ install_collection: build_collection
 rm -rf $(COLLECTION_INSTALL)
 ansible-galaxy collection install awx_collection_build/$(COLLECTION_NAMESPACE)-$(COLLECTION_PACKAGE)-$(COLLECTION_VERSION).tar.gz
-test_collection_sanity:
-rm -rf awx_collection_build/
-rm -rf $(COLLECTION_INSTALL)
-if ! [ -x "$(shell command -v ansible-test)" ]; then pip install ansible-core; fi
-ansible --version
-COLLECTION_VERSION=1.0.0 make install_collection
-cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
+test_collection_sanity: install_collection
+cd $(COLLECTION_INSTALL) && ansible-test sanity
 test_collection_integration: install_collection
 cd $(COLLECTION_INSTALL) && ansible-test integration $(COLLECTION_TEST_TARGET)
@@ -388,15 +331,6 @@ bulk_data:
UI_BUILD_FLAG_FILE = awx/ui/.ui-built UI_BUILD_FLAG_FILE = awx/ui/.ui-built
clean-ui:
rm -rf node_modules
rm -rf awx/ui/node_modules
rm -rf awx/ui/build
rm -rf awx/ui/src/locales/_build
rm -rf $(UI_BUILD_FLAG_FILE)
# the collectstatic command doesn't like it if this dir doesn't exist.
mkdir -p awx/ui/build/static
awx/ui/node_modules: awx/ui/node_modules:
NODE_OPTIONS=--max-old-space-size=6144 $(NPM_BIN) --prefix awx/ui --loglevel warn --force ci NODE_OPTIONS=--max-old-space-size=6144 $(NPM_BIN) --prefix awx/ui --loglevel warn --force ci
@@ -405,20 +339,18 @@ $(UI_BUILD_FLAG_FILE):
$(PYTHON) tools/scripts/compilemessages.py $(PYTHON) tools/scripts/compilemessages.py
$(NPM_BIN) --prefix awx/ui --loglevel warn run compile-strings $(NPM_BIN) --prefix awx/ui --loglevel warn run compile-strings
$(NPM_BIN) --prefix awx/ui --loglevel warn run build $(NPM_BIN) --prefix awx/ui --loglevel warn run build
mkdir -p /var/lib/awx/public/static/css
mkdir -p /var/lib/awx/public/static/js
mkdir -p /var/lib/awx/public/static/media
cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css
cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js
cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media
touch $@ touch $@
ui-release: $(UI_BUILD_FLAG_FILE) ui-release: $(UI_BUILD_FLAG_FILE)
ui-devel: awx/ui/node_modules ui-devel: awx/ui/node_modules
@$(MAKE) -B $(UI_BUILD_FLAG_FILE) @$(MAKE) -B $(UI_BUILD_FLAG_FILE)
@if [ -d "/var/lib/awx" ] ; then \
mkdir -p /var/lib/awx/public/static/css; \
mkdir -p /var/lib/awx/public/static/js; \
mkdir -p /var/lib/awx/public/static/media; \
cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css; \
cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js; \
cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media; \
fi
ui-devel-instrumented: awx/ui/node_modules ui-devel-instrumented: awx/ui/node_modules
$(NPM_BIN) --prefix awx/ui --loglevel warn run start-instrumented $(NPM_BIN) --prefix awx/ui --loglevel warn run start-instrumented
@@ -500,20 +432,20 @@ docker-compose-sources: .git/hooks/pre-commit
 docker-compose: awx/projects docker-compose-sources
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
 docker-compose-credential-plugins: awx/projects docker-compose-sources
 echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
 docker-compose-test: awx/projects docker-compose-sources
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
 docker-compose-runtest: awx/projects docker-compose-sources
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
 docker-compose-build-swagger: awx/projects docker-compose-sources
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
 SCHEMA_DIFF_BASE_BRANCH ?= devel
 detect-schema-change: genschema
@@ -521,15 +453,6 @@ detect-schema-change: genschema
# Ignore differences in whitespace with -b # Ignore differences in whitespace with -b
diff -u -b reference-schema.json schema.json diff -u -b reference-schema.json schema.json
docker-compose-clean: awx/projects
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml rm -sf
docker-compose-container-group-clean:
@if [ -f "tools/docker-compose-minikube/_sources/minikube" ]; then \
tools/docker-compose-minikube/_sources/minikube delete; \
fi
rm -rf tools/docker-compose-minikube/_sources/
## Base development image build ## Base development image build
docker-compose-build: docker-compose-build:
ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True -e receptor_image=$(RECEPTOR_IMAGE) ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True -e receptor_image=$(RECEPTOR_IMAGE)
@@ -537,33 +460,18 @@ docker-compose-build:
--build-arg BUILDKIT_INLINE_CACHE=1 \ --build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) . --cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
docker-clean:
-$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
-$(foreach image_id,$(shell docker images --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
 docker-refresh: docker-clean docker-compose
 ## Docker Development Environment with Elastic Stack Connected
 docker-compose-elk: awx/projects docker-compose-sources
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
 docker-compose-cluster-elk: awx/projects docker-compose-sources
-$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
 docker-compose-container-group:
 MINIKUBE_CONTAINER_GROUP=true make docker-compose
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1
docker stop tools_elasticsearch_1
docker rm tools_logstash_1
docker rm tools_elasticsearch_1
docker rm tools_kibana_1
psql-container: psql-container:
docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres' docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
@@ -573,7 +481,6 @@ VERSION:
PYTHON_VERSION: PYTHON_VERSION:
@echo "$(PYTHON)" | sed 's:python::' @echo "$(PYTHON)" | sed 's:python::'
.PHONY: Dockerfile
Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2 Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
ansible-playbook tools/ansible/dockerfile.yml -e receptor_image=$(RECEPTOR_IMAGE) ansible-playbook tools/ansible/dockerfile.yml -e receptor_image=$(RECEPTOR_IMAGE)
@@ -610,16 +517,95 @@ pot: $(UI_BUILD_FLAG_FILE)
 po: $(UI_BUILD_FLAG_FILE)
 $(NPM_BIN) --prefix awx/ui --loglevel warn run extract-strings -- --clean
+LANG = "en_us"
 ## generate API django .pot .po
 messages:
 @if [ "$(VENV_BASE)" ]; then \
 . $(VENV_BASE)/awx/bin/activate; \
 fi; \
-$(PYTHON) manage.py makemessages -l en_us --keep-pot
+$(PYTHON) manage.py makemessages -l $(LANG) --keep-pot
 print-%:
 @echo $($*)
# Cleaning
# --------------------------------------
## Remove temporary build files, compiled Python files.
clean: clean-ui clean-api clean-awxkit clean-dist
rm -rf awx/public
rm -rf awx/lib/site-packages
rm -rf awx/job_status
rm -rf awx/job_output
rm -rf reports
rm -rf tmp
rm -rf $(I18N_FLAG_FILE)
mkdir tmp
clean-elk:
docker stop tools_kibana_1
docker stop tools_logstash_1
docker stop tools_elasticsearch_1
docker rm tools_logstash_1
docker rm tools_elasticsearch_1
docker rm tools_kibana_1
clean-ui:
rm -rf node_modules
rm -rf awx/ui/node_modules
rm -rf awx/ui/build
rm -rf awx/ui/src/locales/_build
rm -rf $(UI_BUILD_FLAG_FILE)
# the collectstatic command doesn't like it if this dir doesn't exist.
mkdir -p awx/ui/build/static
clean-tmp:
rm -rf tmp/
clean-venv:
rm -rf venv/
clean-dist:
rm -rf dist
clean-schema:
rm -rf swagger.json
rm -rf schema.json
rm -rf reference-schema.json
clean-languages:
rm -f $(I18N_FLAG_FILE)
find ./awx/locale/ -type f -regex ".*\.mo$" -delete
clean-api:
rm -rf build $(NAME)-$(VERSION) *.egg-info
rm -rf .tox
find . -type f -regex ".*\.py[co]$$" -delete
find . -type d -name "__pycache__" -delete
rm -f awx/awx_test.sqlite3*
rm -rf requirements/vendor
rm -rf awx/projects
clean-awxkit:
rm -rf awxkit/*.egg-info awxkit/.tox awxkit/build/*
docker-compose-clean: awx/projects
docker-compose -f tools/docker-compose/_sources/docker-compose.yml rm -sf
docker-compose-container-group-clean:
@if [ -f "tools/docker-compose-minikube/_sources/minikube" ]; then \
tools/docker-compose-minikube/_sources/minikube delete; \
fi
rm -rf tools/docker-compose-minikube/_sources/
docker-clean:
$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
if [ "$(shell docker images | grep awx_devel)" ]; then \
docker images | grep awx_devel | awk '{print $$3}' | xargs docker rmi --force; \
fi
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
# HELP related targets # HELP related targets
# -------------------------------------- # --------------------------------------


@@ -67,6 +67,7 @@ else:
 from django.db import connection
 if HAS_DJANGO is True:
+# See upgrade blocker note in requirements/README.md
 try:
 names_digest('foo', 'bar', 'baz', length=8)
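The context above probes Django's names_digest() once at import time so that a fallback can be installed where the default MD5-based digest is unavailable (for example on FIPS-enabled systems). A hedged sketch of that probe-and-patch pattern follows; it is illustrative only, and the sha256 fallback and the module attribute assignment are assumptions rather than code from this repository.

import django.db.backends.utils as db_backend_utils
from django.db.backends.utils import names_digest
from hashlib import sha256

try:
    # Probe once: raises on interpreters where the MD5 construction is blocked.
    names_digest('foo', 'bar', 'baz', length=8)
except ValueError:
    def names_digest(*args, length):
        # Assumed fallback: build index/constraint names from a permitted hash instead.
        digest = sha256()
        for arg in args:
            digest.update(arg.encode())
        return digest.hexdigest()[:length]

    # Expose the replacement to Django's schema editor code paths.
    db_backend_utils.names_digest = names_digest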


@@ -1,4 +1,5 @@
# Django
+from django.conf import settings
 from django.utils.translation import gettext_lazy as _
 # Django REST Framework
@@ -8,7 +9,6 @@ from rest_framework import serializers
 from awx.conf import fields, register, register_validate
 from awx.api.fields import OAuth2ProviderField
 from oauth2_provider.settings import oauth2_settings
-from awx.sso.common import is_remote_auth_enabled
 register(
@@ -96,20 +96,22 @@ register(
 category=_('Authentication'),
 category_slug='authentication',
 )
-register(
-'ALLOW_METRICS_FOR_ANONYMOUS_USERS',
-field_class=fields.BooleanField,
-default=False,
-label=_('Allow anonymous users to poll metrics'),
-help_text=_('If true, anonymous users are allowed to poll metrics.'),
-category=_('Authentication'),
-category_slug='authentication',
-)
 def authentication_validate(serializer, attrs):
-if attrs.get('DISABLE_LOCAL_AUTH', False) and not is_remote_auth_enabled():
-raise serializers.ValidationError(_("There are no remote authentication systems configured."))
+remote_auth_settings = [
+'AUTH_LDAP_SERVER_URI',
+'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY',
+'SOCIAL_AUTH_GITHUB_KEY',
+'SOCIAL_AUTH_GITHUB_ORG_KEY',
+'SOCIAL_AUTH_GITHUB_TEAM_KEY',
+'SOCIAL_AUTH_SAML_ENABLED_IDPS',
+'RADIUS_SERVER',
+'TACACSPLUS_HOST',
+]
+if attrs.get('DISABLE_LOCAL_AUTH', False):
+if not any(getattr(settings, s, None) for s in remote_auth_settings):
+raise serializers.ValidationError(_("There are no remote authentication systems configured."))
 return attrs
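Both versions of authentication_validate above enforce the same rule: DISABLE_LOCAL_AUTH can only be switched on when at least one remote authentication backend is configured. One side delegates to is_remote_auth_enabled from awx.sso.common, the other inlines the settings check. A minimal sketch of what such a helper would contain, inferred from the inlined list above (the actual implementation behind the import is not shown in this diff):

from django.conf import settings

# Settings whose presence indicates a configured remote authentication backend,
# taken from the inlined list in the hunk above.
REMOTE_AUTH_SETTINGS = [
    'AUTH_LDAP_SERVER_URI',
    'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY',
    'SOCIAL_AUTH_GITHUB_KEY',
    'SOCIAL_AUTH_GITHUB_ORG_KEY',
    'SOCIAL_AUTH_GITHUB_TEAM_KEY',
    'SOCIAL_AUTH_SAML_ENABLED_IDPS',
    'RADIUS_SERVER',
    'TACACSPLUS_HOST',
]


def is_remote_auth_enabled():
    # True as soon as any remote authentication setting has a non-empty value.
    return any(getattr(settings, name, None) for name in REMOTE_AUTH_SETTINGS)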


@@ -80,6 +80,7 @@ class VerbatimField(serializers.Field):
 class OAuth2ProviderField(fields.DictField):
 default_error_messages = {'invalid_key_names': _('Invalid key names: {invalid_key_names}')}
 valid_key_names = {'ACCESS_TOKEN_EXPIRE_SECONDS', 'AUTHORIZATION_CODE_EXPIRE_SECONDS', 'REFRESH_TOKEN_EXPIRE_SECONDS'}
 child = fields.IntegerField(min_value=1)
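OAuth2ProviderField validates a dictionary of OAuth2 provider options: only the three key names listed in valid_key_names are accepted, and every value must be a positive integer because the child field is IntegerField(min_value=1). A hypothetical value that would pass this validation (the numbers are arbitrary examples, not defaults taken from this diff):

provider_options = {
    'ACCESS_TOKEN_EXPIRE_SECONDS': 31536000,   # access tokens valid for one year
    'AUTHORIZATION_CODE_EXPIRE_SECONDS': 600,  # authorization codes valid for ten minutes
    'REFRESH_TOKEN_EXPIRE_SECONDS': 2628000,   # refresh tokens valid for about a month
}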


@@ -155,11 +155,12 @@ class FieldLookupBackend(BaseFilterBackend):
 'search',
 )
-# A list of fields that we know can be filtered on without the possibility
+# A list of fields that we know can be filtered on without the possiblity
 # of introducing duplicates
 NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
 def get_fields_from_lookup(self, model, lookup):
 if '__' in lookup and lookup.rsplit('__', 1)[-1] in self.SUPPORTED_LOOKUPS:
 path, suffix = lookup.rsplit('__', 1)
 else:
@@ -268,7 +269,7 @@ class FieldLookupBackend(BaseFilterBackend):
 continue
 # HACK: make `created` available via API for the Django User ORM model
-# so it keep compatibility with other objects which exposes the `created` attr.
+# so it keep compatiblity with other objects which exposes the `created` attr.
 if queryset.model._meta.object_name == 'User' and key.startswith('created'):
 key = key.replace('created', 'date_joined')


@@ -28,7 +28,7 @@ from rest_framework import generics
 from rest_framework.response import Response
 from rest_framework import status
 from rest_framework import views
-from rest_framework.permissions import IsAuthenticated
+from rest_framework.permissions import AllowAny
 from rest_framework.renderers import StaticHTMLRenderer
 from rest_framework.negotiation import DefaultContentNegotiation
@@ -135,6 +135,7 @@ def get_default_schema():
 class APIView(views.APIView):
 schema = get_default_schema()
 versioning_class = URLPathVersioning
@@ -674,7 +675,7 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
 location = None
 created = True
-# Retrieve the sub object (whether created or by ID).
+# Retrive the sub object (whether created or by ID).
 sub = get_object_or_400(self.model, pk=sub_id)
 # Verify we have permission to attach.
@@ -799,6 +800,7 @@ class RetrieveUpdateDestroyAPIView(RetrieveUpdateAPIView, DestroyAPIView):
 class ResourceAccessList(ParentMixin, ListAPIView):
 serializer_class = ResourceAccessListElementSerializer
 ordering = ('username',)
@@ -821,8 +823,9 @@ def trigger_delayed_deep_copy(*args, **kwargs):
 class CopyAPIView(GenericAPIView):
 serializer_class = CopySerializer
-permission_classes = (IsAuthenticated,)
+permission_classes = (AllowAny,)
 copy_return_serializer_class = None
 new_in_330 = True
 new_in_api_v2 = True


@@ -128,7 +128,7 @@ class Metadata(metadata.SimpleMetadata):
 # Special handling of notification configuration where the required properties
 # are conditional on the type selected.
 if field.field_name == 'notification_configuration':
-for notification_type_name, notification_tr_name, notification_type_class in NotificationTemplate.NOTIFICATION_TYPES:
+for (notification_type_name, notification_tr_name, notification_type_class) in NotificationTemplate.NOTIFICATION_TYPES:
 field_info[notification_type_name] = notification_type_class.init_parameters
 # Special handling of notification messages where the required properties
@@ -138,7 +138,7 @@ class Metadata(metadata.SimpleMetadata):
 except (AttributeError, KeyError):
 view_model = None
 if view_model == NotificationTemplate and field.field_name == 'messages':
-for notification_type_name, notification_tr_name, notification_type_class in NotificationTemplate.NOTIFICATION_TYPES:
+for (notification_type_name, notification_tr_name, notification_type_class) in NotificationTemplate.NOTIFICATION_TYPES:
 field_info[notification_type_name] = notification_type_class.default_messages
 # Update type of fields returned...


@@ -24,6 +24,7 @@ class DisabledPaginator(DjangoPaginator):
 class Pagination(pagination.PageNumberPagination):
 page_size_query_param = 'page_size'
 max_page_size = settings.MAX_PAGE_SIZE
 count_disabled = False


@@ -22,6 +22,7 @@ class SurrogateEncoder(encoders.JSONEncoder):
 class DefaultJSONRenderer(renderers.JSONRenderer):
 encoder_class = SurrogateEncoder
@@ -60,7 +61,7 @@ class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
 delattr(renderer_context['view'], '_request')
 def get_raw_data_form(self, data, view, method, request):
-# Set a flag on the view to indicate to the view/serializer that we're
+# Set a flag on the view to indiciate to the view/serializer that we're
 # creating a raw data form for the browsable API. Store the original
 # request method to determine how to populate the raw data form.
 if request.method in {'OPTIONS', 'DELETE'}:
@@ -94,6 +95,7 @@ class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
 class PlainTextRenderer(renderers.BaseRenderer):
 media_type = 'text/plain'
 format = 'txt'
@@ -104,15 +106,18 @@ class PlainTextRenderer(renderers.BaseRenderer):
 class DownloadTextRenderer(PlainTextRenderer):
 format = "txt_download"
 class AnsiTextRenderer(PlainTextRenderer):
 media_type = 'text/plain'
 format = 'ansi'
 class AnsiDownloadRenderer(PlainTextRenderer):
 format = "ansi_download"


@@ -8,7 +8,6 @@ import logging
 import re
 from collections import OrderedDict
 from datetime import timedelta
-from uuid import uuid4
 # OAuth2
 from oauthlib import oauth2
@@ -109,15 +108,13 @@ from awx.main.utils import (
 extract_ansible_vars,
 encrypt_dict,
 prefetch_page_capabilities,
-get_external_account,
 truncate_stdout,
-get_licenser,
 )
 from awx.main.utils.filters import SmartFilter
 from awx.main.utils.named_url_graph import reset_counters
-from awx.main.scheduler.task_manager_models import TaskManagerModels
+from awx.main.scheduler.task_manager_models import TaskManagerInstanceGroups, TaskManagerInstances
 from awx.main.redact import UriCleaner, REPLACE_STR
-from awx.main.signals import update_inventory_computed_fields
 from awx.main.validators import vars_validate_or_raise
@@ -127,8 +124,6 @@ from awx.api.fields import BooleanNullField, CharNullField, ChoiceNullField, Ver
 # AWX Utils
 from awx.api.validators import HostnameRegexValidator
-from awx.sso.common import get_external_account
 logger = logging.getLogger('awx.api.serializers')
 # Fields that should be summarized regardless of object type.
@@ -160,7 +155,7 @@ SUMMARIZABLE_FK_FIELDS = {
 'default_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
 'execution_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
 'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type', 'allow_override'),
-'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type', 'allow_override'),
+'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
 'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed'),
 'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'kubernetes', 'credential_type_id'),
 'signature_validation_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'credential_type_id'),
@@ -205,6 +200,7 @@ def reverse_gfk(content_object, request):
 class CopySerializer(serializers.Serializer):
 name = serializers.CharField()
 def validate(self, attrs):
@@ -436,6 +432,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
 continue
 summary_fields[fk] = OrderedDict()
 for field in related_fields:
 fval = getattr(fkval, field, None)
 if fval is None and field == 'type':
@@ -541,7 +538,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
 #
 # This logic is to force rendering choice's on an uneditable field.
 # Note: Consider expanding this rendering for more than just choices fields
-# Note: This logic works in conjunction with
+# Note: This logic works in conjuction with
 if hasattr(model_field, 'choices') and model_field.choices:
 was_editable = model_field.editable
 model_field.editable = True
@@ -933,6 +930,7 @@ class UnifiedJobListSerializer(UnifiedJobSerializer):
 class UnifiedJobStdoutSerializer(UnifiedJobSerializer):
 result_stdout = serializers.SerializerMethodField()
 class Meta:
@@ -946,6 +944,7 @@ class UnifiedJobStdoutSerializer(UnifiedJobSerializer):
 class UserSerializer(BaseSerializer):
 password = serializers.CharField(required=False, default='', write_only=True, help_text=_('Write-only field used to change the password.'))
 ldap_dn = serializers.CharField(source='profile.ldap_dn', read_only=True)
 external_account = serializers.SerializerMethodField(help_text=_('Set if the account is managed by an external service'))
@@ -992,8 +991,23 @@ class UserSerializer(BaseSerializer):
 def _update_password(self, obj, new_password):
 # For now we're not raising an error, just not saving password for
 # users managed by LDAP who already have an unusable password set.
-# Get external password will return something like ldap or enterprise or None if the user isn't external. We only want to allow a password update for a None option
-if new_password and not self.get_external_account(obj):
+if getattr(settings, 'AUTH_LDAP_SERVER_URI', None):
+try:
+if obj.pk and obj.profile.ldap_dn and not obj.has_usable_password():
+new_password = None
+except AttributeError:
+pass
+if (
+getattr(settings, 'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY', None)
+or getattr(settings, 'SOCIAL_AUTH_GITHUB_KEY', None)
+or getattr(settings, 'SOCIAL_AUTH_GITHUB_ORG_KEY', None)
+or getattr(settings, 'SOCIAL_AUTH_GITHUB_TEAM_KEY', None)
+or getattr(settings, 'SOCIAL_AUTH_SAML_ENABLED_IDPS', None)
+) and obj.social_auth.all():
+new_password = None
+if (getattr(settings, 'RADIUS_SERVER', None) or getattr(settings, 'TACACSPLUS_HOST', None)) and obj.enterprise_auth.all():
+new_password = None
+if new_password:
 obj.set_password(new_password)
 obj.save(update_fields=['password'])
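The change above swaps a single call to get_external_account (removed in this file's import hunks) for explicit per-backend checks. A rough sketch of the helper the call-site version relies on, inferred from the expanded logic; the return values and attribute names are assumptions, not code taken from this diff:

from django.conf import settings


def get_external_account(user):
    # Return which external service manages the account, or None for local users.
    try:
        if getattr(settings, 'AUTH_LDAP_SERVER_URI', None) and user.pk and user.profile.ldap_dn and not user.has_usable_password():
            return 'ldap'
    except AttributeError:
        pass
    social_keys = ('SOCIAL_AUTH_GOOGLE_OAUTH2_KEY', 'SOCIAL_AUTH_GITHUB_KEY', 'SOCIAL_AUTH_SAML_ENABLED_IDPS')  # abbreviated
    if any(getattr(settings, key, None) for key in social_keys) and user.social_auth.all():
        return 'social'
    if (getattr(settings, 'RADIUS_SERVER', None) or getattr(settings, 'TACACSPLUS_HOST', None)) and user.enterprise_auth.all():
        return 'enterprise'
    return None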
@@ -1090,6 +1104,7 @@ class UserActivityStreamSerializer(UserSerializer):
 class BaseOAuth2TokenSerializer(BaseSerializer):
 refresh_token = serializers.SerializerMethodField()
 token = serializers.SerializerMethodField()
 ALLOWED_SCOPES = ['read', 'write']
@@ -1207,6 +1222,7 @@ class UserPersonalTokenSerializer(BaseOAuth2TokenSerializer):
 class OAuth2ApplicationSerializer(BaseSerializer):
 show_capabilities = ['edit', 'delete']
 class Meta:
@@ -1441,6 +1457,7 @@ class ExecutionEnvironmentSerializer(BaseSerializer):
 class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
 status = serializers.ChoiceField(choices=Project.PROJECT_STATUS_CHOICES, read_only=True)
 last_update_failed = serializers.BooleanField(read_only=True)
 last_updated = serializers.DateTimeField(read_only=True)
@@ -1531,6 +1548,7 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
 class ProjectPlaybooksSerializer(ProjectSerializer):
 playbooks = serializers.SerializerMethodField(help_text=_('Array of playbooks available within this project.'))
 class Meta:
@@ -1548,6 +1566,7 @@ class ProjectPlaybooksSerializer(ProjectSerializer):
 class ProjectInventoriesSerializer(ProjectSerializer):
 inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, ' 'not comprehensive.'))
 class Meta:
@@ -1562,6 +1581,7 @@ class ProjectInventoriesSerializer(ProjectSerializer):
 class ProjectUpdateViewSerializer(ProjectSerializer):
 can_update = serializers.BooleanField(read_only=True)
 class Meta:
@@ -1591,6 +1611,7 @@ class ProjectUpdateSerializer(UnifiedJobSerializer, ProjectOptionsSerializer):
 class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
 playbook_counts = serializers.SerializerMethodField(help_text=_('A count of all plays and tasks for the job run.'))
 class Meta:
@@ -1613,6 +1634,7 @@ class ProjectUpdateListSerializer(ProjectUpdateSerializer, UnifiedJobListSeriali
 class ProjectUpdateCancelSerializer(ProjectUpdateSerializer):
 can_cancel = serializers.BooleanField(read_only=True)
 class Meta:
@@ -1857,7 +1879,7 @@ class HostSerializer(BaseSerializerWithVariables):
 vars_dict = parse_yaml_or_json(variables)
 vars_dict['ansible_ssh_port'] = port
 attrs['variables'] = json.dumps(vars_dict)
-if inventory and Group.objects.filter(name=name, inventory=inventory).exists():
+if Group.objects.filter(name=name, inventory=inventory).exists():
 raise serializers.ValidationError(_('A Group with that name already exists.'))
 return super(HostSerializer, self).validate(attrs)
@@ -1949,131 +1971,8 @@ class GroupSerializer(BaseSerializerWithVariables):
 return ret
class BulkHostSerializer(HostSerializer):
class Meta:
model = Host
fields = (
'name',
'enabled',
'instance_id',
'description',
'variables',
)
class BulkHostCreateSerializer(serializers.Serializer):
inventory = serializers.PrimaryKeyRelatedField(
queryset=Inventory.objects.all(), required=True, write_only=True, help_text=_('Primary Key ID of inventory to add hosts to.')
)
hosts = serializers.ListField(
child=BulkHostSerializer(),
allow_empty=False,
max_length=100000,
write_only=True,
help_text=_('List of hosts to be created, JSON. e.g. [{"name": "example.com"}, {"name": "127.0.0.1"}]'),
)
class Meta:
model = Inventory
fields = ('inventory', 'hosts')
read_only_fields = ()
def raise_if_host_counts_violated(self, attrs):
validation_info = get_licenser().validate()
org = attrs['inventory'].organization
if org:
org_active_count = Host.objects.org_active_count(org.id)
new_hosts = [h['name'] for h in attrs['hosts']]
org_net_new_host_count = len(new_hosts) - Host.objects.filter(inventory__organization=1, name__in=new_hosts).values('name').distinct().count()
if org.max_hosts > 0 and org_active_count + org_net_new_host_count > org.max_hosts:
raise PermissionDenied(
_(
"You have already reached the maximum number of %s hosts"
" allowed for your organization. Contact your System Administrator"
" for assistance." % org.max_hosts
)
)
# Don't check license if it is open license
if validation_info.get('license_type', 'UNLICENSED') == 'open':
return
sys_free_instances = validation_info.get('free_instances', 0)
system_net_new_host_count = Host.objects.exclude(name__in=new_hosts).count()
if system_net_new_host_count > sys_free_instances:
hard_error = validation_info.get('trial', False) is True or validation_info['instance_count'] == 10
if hard_error:
# Only raise permission error for trial, otherwise just log a warning as we do in other inventory import situations
raise PermissionDenied(_("Host count exceeds available instances."))
logger.warning(_("Number of hosts allowed by license has been exceeded."))
def validate(self, attrs):
request = self.context.get('request', None)
inv = attrs['inventory']
if inv.kind != '':
raise serializers.ValidationError(_('Hosts can only be created in manual inventories (not smart or constructed types).'))
if len(attrs['hosts']) > settings.BULK_HOST_MAX_CREATE:
raise serializers.ValidationError(_('Number of hosts exceeds system setting BULK_HOST_MAX_CREATE'))
if request and not request.user.is_superuser:
if request.user not in inv.admin_role:
raise serializers.ValidationError(_(f'Inventory with id {inv.id} not found or lack permissions to add hosts.'))
current_hostnames = set(inv.hosts.values_list('name', flat=True))
new_names = [host['name'] for host in attrs['hosts']]
duplicate_new_names = [n for n in new_names if n in current_hostnames or new_names.count(n) > 1]
if duplicate_new_names:
raise serializers.ValidationError(_(f'Hostnames must be unique in an inventory. Duplicates found: {duplicate_new_names}'))
self.raise_if_host_counts_violated(attrs)
_now = now()
for host in attrs['hosts']:
host['created'] = _now
host['modified'] = _now
host['inventory'] = inv
return attrs
def create(self, validated_data):
# This assumes total_hosts is up to date, and it can get out of date if the inventory computed fields have not been updated lately.
# If we wanted to side step this we could query Hosts.objects.filter(inventory...)
old_total_hosts = validated_data['inventory'].total_hosts
result = [Host(**attrs) for attrs in validated_data['hosts']]
try:
Host.objects.bulk_create(result)
except Exception as e:
raise serializers.ValidationError({"detail": _(f"cannot create host, host creation error {e}")})
new_total_hosts = old_total_hosts + len(result)
request = self.context.get('request', None)
changes = {'total_hosts': [old_total_hosts, new_total_hosts]}
activity_entry = ActivityStream.objects.create(
operation='update',
object1='inventory',
changes=json.dumps(changes),
actor=request.user,
)
activity_entry.inventory.add(validated_data['inventory'])
# This actually updates the cached "total_hosts" field on the inventory
update_inventory_computed_fields.delay(validated_data['inventory'].id)
return_keys = [k for k in BulkHostSerializer().fields.keys()] + ['id']
return_data = {}
host_data = []
for r in result:
item = {k: getattr(r, k) for k in return_keys}
if not settings.IS_TESTING_MODE:
# sqlite acts different with bulk_create -- it doesn't return the id of the objects
# to get it, you have to do an additional query, which is not useful for our tests
item['url'] = reverse('api:host_detail', kwargs={'pk': r.id})
item['inventory'] = reverse('api:inventory_detail', kwargs={'pk': validated_data['inventory'].id})
host_data.append(item)
return_data['url'] = reverse('api:inventory_detail', kwargs={'pk': validated_data['inventory'].id})
return_data['hosts'] = host_data
return return_data
class GroupTreeSerializer(GroupSerializer): class GroupTreeSerializer(GroupSerializer):
children = serializers.SerializerMethodField() children = serializers.SerializerMethodField()
class Meta: class Meta:
@@ -2128,7 +2027,6 @@ class InventorySourceOptionsSerializer(BaseSerializer):
'source', 'source',
'source_path', 'source_path',
'source_vars', 'source_vars',
'scm_branch',
'credential', 'credential',
'enabled_var', 'enabled_var',
'enabled_value', 'enabled_value',
@@ -2172,6 +2070,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOptionsSerializer): class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOptionsSerializer):
status = serializers.ChoiceField(choices=InventorySource.INVENTORY_SOURCE_STATUS_CHOICES, read_only=True) status = serializers.ChoiceField(choices=InventorySource.INVENTORY_SOURCE_STATUS_CHOICES, read_only=True)
last_update_failed = serializers.BooleanField(read_only=True) last_update_failed = serializers.BooleanField(read_only=True)
last_updated = serializers.DateTimeField(read_only=True) last_updated = serializers.DateTimeField(read_only=True)
@@ -2293,14 +2192,10 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None: if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None:
raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")}) raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")})
else: else:
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path', 'scm_branch'])) redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path']))
if redundant_scm_fields: if redundant_scm_fields:
raise serializers.ValidationError({"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))}) raise serializers.ValidationError({"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))})
project = get_field_from_model_or_attrs('source_project')
if get_field_from_model_or_attrs('scm_branch') and not project.allow_override:
raise serializers.ValidationError({'scm_branch': _('Project does not allow overriding branch.')})
attrs = super(InventorySourceSerializer, self).validate(attrs) attrs = super(InventorySourceSerializer, self).validate(attrs)
# Check type consistency of source and cloud credential, if provided # Check type consistency of source and cloud credential, if provided
@@ -2320,6 +2215,7 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
class InventorySourceUpdateSerializer(InventorySourceSerializer): class InventorySourceUpdateSerializer(InventorySourceSerializer):
can_update = serializers.BooleanField(read_only=True) can_update = serializers.BooleanField(read_only=True)
class Meta: class Meta:
@@ -2336,6 +2232,7 @@ class InventorySourceUpdateSerializer(InventorySourceSerializer):
class InventoryUpdateSerializer(UnifiedJobSerializer, InventorySourceOptionsSerializer): class InventoryUpdateSerializer(UnifiedJobSerializer, InventorySourceOptionsSerializer):
custom_virtualenv = serializers.ReadOnlyField() custom_virtualenv = serializers.ReadOnlyField()
class Meta: class Meta:
@@ -2376,6 +2273,7 @@ class InventoryUpdateSerializer(UnifiedJobSerializer, InventorySourceOptionsSeri
class InventoryUpdateDetailSerializer(InventoryUpdateSerializer): class InventoryUpdateDetailSerializer(InventoryUpdateSerializer):
source_project = serializers.SerializerMethodField(help_text=_('The project used for this job.'), method_name='get_source_project_id') source_project = serializers.SerializerMethodField(help_text=_('The project used for this job.'), method_name='get_source_project_id')
class Meta: class Meta:
@@ -2426,6 +2324,7 @@ class InventoryUpdateListSerializer(InventoryUpdateSerializer, UnifiedJobListSer
class InventoryUpdateCancelSerializer(InventoryUpdateSerializer): class InventoryUpdateCancelSerializer(InventoryUpdateSerializer):
can_cancel = serializers.BooleanField(read_only=True) can_cancel = serializers.BooleanField(read_only=True)
class Meta: class Meta:
@@ -2783,6 +2682,7 @@ class CredentialSerializer(BaseSerializer):
class CredentialSerializerCreate(CredentialSerializer): class CredentialSerializerCreate(CredentialSerializer):
user = serializers.PrimaryKeyRelatedField( user = serializers.PrimaryKeyRelatedField(
queryset=User.objects.all(), queryset=User.objects.all(),
required=False, required=False,
@@ -3137,6 +3037,7 @@ class JobTemplateWithSpecSerializer(JobTemplateSerializer):
class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer): class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
passwords_needed_to_start = serializers.ReadOnlyField() passwords_needed_to_start = serializers.ReadOnlyField()
artifacts = serializers.SerializerMethodField() artifacts = serializers.SerializerMethodField()
@@ -3219,6 +3120,7 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
class JobDetailSerializer(JobSerializer): class JobDetailSerializer(JobSerializer):
playbook_counts = serializers.SerializerMethodField(help_text=_('A count of all plays and tasks for the job run.')) playbook_counts = serializers.SerializerMethodField(help_text=_('A count of all plays and tasks for the job run.'))
custom_virtualenv = serializers.ReadOnlyField() custom_virtualenv = serializers.ReadOnlyField()
@@ -3236,6 +3138,7 @@ class JobDetailSerializer(JobSerializer):
class JobCancelSerializer(BaseSerializer): class JobCancelSerializer(BaseSerializer):
can_cancel = serializers.BooleanField(read_only=True) can_cancel = serializers.BooleanField(read_only=True)
class Meta: class Meta:
@@ -3244,6 +3147,7 @@ class JobCancelSerializer(BaseSerializer):
class JobRelaunchSerializer(BaseSerializer): class JobRelaunchSerializer(BaseSerializer):
passwords_needed_to_start = serializers.SerializerMethodField() passwords_needed_to_start = serializers.SerializerMethodField()
retry_counts = serializers.SerializerMethodField() retry_counts = serializers.SerializerMethodField()
hosts = serializers.ChoiceField( hosts = serializers.ChoiceField(
@@ -3303,6 +3207,7 @@ class JobRelaunchSerializer(BaseSerializer):
class JobCreateScheduleSerializer(LabelsListMixin, BaseSerializer): class JobCreateScheduleSerializer(LabelsListMixin, BaseSerializer):
can_schedule = serializers.SerializerMethodField() can_schedule = serializers.SerializerMethodField()
prompts = serializers.SerializerMethodField() prompts = serializers.SerializerMethodField()
@@ -3428,6 +3333,7 @@ class AdHocCommandDetailSerializer(AdHocCommandSerializer):
class AdHocCommandCancelSerializer(AdHocCommandSerializer): class AdHocCommandCancelSerializer(AdHocCommandSerializer):
can_cancel = serializers.BooleanField(read_only=True) can_cancel = serializers.BooleanField(read_only=True)
class Meta: class Meta:
@@ -3466,6 +3372,7 @@ class SystemJobTemplateSerializer(UnifiedJobTemplateSerializer):
class SystemJobSerializer(UnifiedJobSerializer): class SystemJobSerializer(UnifiedJobSerializer):
result_stdout = serializers.SerializerMethodField() result_stdout = serializers.SerializerMethodField()
class Meta: class Meta:
@@ -3492,6 +3399,7 @@ class SystemJobSerializer(UnifiedJobSerializer):
class SystemJobCancelSerializer(SystemJobSerializer): class SystemJobCancelSerializer(SystemJobSerializer):
can_cancel = serializers.BooleanField(read_only=True) can_cancel = serializers.BooleanField(read_only=True)
class Meta: class Meta:
@@ -3656,6 +3564,7 @@ class WorkflowJobListSerializer(WorkflowJobSerializer, UnifiedJobListSerializer)
class WorkflowJobCancelSerializer(WorkflowJobSerializer): class WorkflowJobCancelSerializer(WorkflowJobSerializer):
can_cancel = serializers.BooleanField(read_only=True) can_cancel = serializers.BooleanField(read_only=True)
class Meta: class Meta:
@@ -3669,6 +3578,7 @@ class WorkflowApprovalViewSerializer(UnifiedJobSerializer):
class WorkflowApprovalSerializer(UnifiedJobSerializer): class WorkflowApprovalSerializer(UnifiedJobSerializer):
can_approve_or_deny = serializers.SerializerMethodField() can_approve_or_deny = serializers.SerializerMethodField()
approval_expiration = serializers.SerializerMethodField() approval_expiration = serializers.SerializerMethodField()
timed_out = serializers.ReadOnlyField() timed_out = serializers.ReadOnlyField()
@@ -4063,6 +3973,7 @@ class JobHostSummarySerializer(BaseSerializer):
class JobEventSerializer(BaseSerializer): class JobEventSerializer(BaseSerializer):
event_display = serializers.CharField(source='get_event_display2', read_only=True) event_display = serializers.CharField(source='get_event_display2', read_only=True)
event_level = serializers.IntegerField(read_only=True) event_level = serializers.IntegerField(read_only=True)
@@ -4116,7 +4027,7 @@ class JobEventSerializer(BaseSerializer):
# Show full stdout for playbook_on_* events. # Show full stdout for playbook_on_* events.
if obj and obj.event.startswith('playbook_on'): if obj and obj.event.startswith('playbook_on'):
return data return data
# If the view logic says to not truncate (request was to the detail view or a param was used) # If the view logic says to not trunctate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False): if self.context.get('no_truncate', False):
return data return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
@@ -4147,7 +4058,7 @@ class ProjectUpdateEventSerializer(JobEventSerializer):
# raw SCM URLs in their stdout (which *could* contain passwords) # raw SCM URLs in their stdout (which *could* contain passwords)
# attempt to detect and filter HTTP basic auth passwords in the stdout # attempt to detect and filter HTTP basic auth passwords in the stdout
# of these types of events # of these types of events
if obj.event_data.get('task_action') in ('git', 'svn', 'ansible.builtin.git', 'ansible.builtin.svn'): if obj.event_data.get('task_action') in ('git', 'svn'):
try: try:
return json.loads(UriCleaner.remove_sensitive(json.dumps(obj.event_data))) return json.loads(UriCleaner.remove_sensitive(json.dumps(obj.event_data)))
except Exception: except Exception:
@@ -4158,6 +4069,7 @@ class ProjectUpdateEventSerializer(JobEventSerializer):
class AdHocCommandEventSerializer(BaseSerializer): class AdHocCommandEventSerializer(BaseSerializer):
event_display = serializers.CharField(source='get_event_display', read_only=True) event_display = serializers.CharField(source='get_event_display', read_only=True)
class Meta: class Meta:
@@ -4191,7 +4103,7 @@ class AdHocCommandEventSerializer(BaseSerializer):
def to_representation(self, obj): def to_representation(self, obj):
data = super(AdHocCommandEventSerializer, self).to_representation(obj) data = super(AdHocCommandEventSerializer, self).to_representation(obj)
# If the view logic says to not truncate (request was to the detail view or a param was used) # If the view logic says to not trunctate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False): if self.context.get('no_truncate', False):
return data return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
@@ -4439,6 +4351,7 @@ class JobLaunchSerializer(BaseSerializer):
class WorkflowJobLaunchSerializer(BaseSerializer): class WorkflowJobLaunchSerializer(BaseSerializer):
can_start_without_user_input = serializers.BooleanField(read_only=True) can_start_without_user_input = serializers.BooleanField(read_only=True)
defaults = serializers.SerializerMethodField() defaults = serializers.SerializerMethodField()
variables_needed_to_start = serializers.ReadOnlyField() variables_needed_to_start = serializers.ReadOnlyField()
@@ -4495,6 +4408,7 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return False return False
def get_defaults(self, obj): def get_defaults(self, obj):
defaults_dict = {} defaults_dict = {}
for field_name in WorkflowJobTemplate.get_ask_mapping().keys(): for field_name in WorkflowJobTemplate.get_ask_mapping().keys():
if field_name == 'inventory': if field_name == 'inventory':
@@ -4511,6 +4425,7 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return dict(name=obj.name, id=obj.id, description=obj.description) return dict(name=obj.name, id=obj.id, description=obj.description)
def validate(self, attrs): def validate(self, attrs):
template = self.instance template = self.instance
accepted, rejected, errors = template._accept_or_ignore_job_kwargs(**attrs) accepted, rejected, errors = template._accept_or_ignore_job_kwargs(**attrs)
@@ -4538,271 +4453,6 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return accepted return accepted
class BulkJobNodeSerializer(WorkflowJobNodeSerializer):
# We don't do a PrimaryKeyRelatedField for unified_job_template and others, because that increases the number
# of database queries, rather we take them as integer and later convert them to objects in get_objectified_jobs
unified_job_template = serializers.IntegerField(
required=True, min_value=1, help_text=_('Primary key of the template for this job, can be a job template or inventory source.')
)
inventory = serializers.IntegerField(required=False, min_value=1)
execution_environment = serializers.IntegerField(required=False, min_value=1)
# many-to-many fields
credentials = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
labels = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
# TODO: Use instance group role added via PR 13584(once merged), for now everything related to instance group is commented
# instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
class Meta:
model = WorkflowJobNode
fields = ('*', 'credentials', 'labels') # m2m fields are not canonical for WJ nodes, TODO: add instance_groups once supported
def validate(self, attrs):
return super(LaunchConfigurationBaseSerializer, self).validate(attrs)
def get_validation_exclusions(self, obj=None):
ret = super().get_validation_exclusions(obj)
ret.extend(['unified_job_template', 'inventory', 'execution_environment'])
return ret
class BulkJobLaunchSerializer(serializers.Serializer):
name = serializers.CharField(default='Bulk Job Launch', max_length=512, write_only=True, required=False, allow_blank=True) # limited by max name of jobs
jobs = BulkJobNodeSerializer(
many=True,
allow_empty=False,
write_only=True,
max_length=100000,
help_text=_('List of jobs to be launched, JSON. e.g. [{"unified_job_template": 7}, {"unified_job_template": 10}]'),
)
description = serializers.CharField(write_only=True, required=False, allow_blank=False)
extra_vars = serializers.JSONField(write_only=True, required=False)
organization = serializers.PrimaryKeyRelatedField(
queryset=Organization.objects.all(),
required=False,
default=None,
allow_null=True,
write_only=True,
help_text=_('Inherit permissions from this organization. If not provided, a organization the user is a member of will be selected automatically.'),
)
inventory = serializers.PrimaryKeyRelatedField(queryset=Inventory.objects.all(), required=False, write_only=True)
limit = serializers.CharField(write_only=True, required=False, allow_blank=False)
scm_branch = serializers.CharField(write_only=True, required=False, allow_blank=False)
skip_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
job_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
class Meta:
model = WorkflowJob
fields = ('name', 'jobs', 'description', 'extra_vars', 'organization', 'inventory', 'limit', 'scm_branch', 'skip_tags', 'job_tags')
read_only_fields = ()
def validate(self, attrs):
request = self.context.get('request', None)
identifiers = set()
if len(attrs['jobs']) > settings.BULK_JOB_MAX_LAUNCH:
raise serializers.ValidationError(_('Number of requested jobs exceeds system setting BULK_JOB_MAX_LAUNCH'))
for node in attrs['jobs']:
if 'identifier' in node:
if node['identifier'] in identifiers:
raise serializers.ValidationError(_(f"Identifier {node['identifier']} not unique"))
identifiers.add(node['identifier'])
else:
node['identifier'] = str(uuid4())
requested_ujts = {j['unified_job_template'] for j in attrs['jobs']}
requested_use_inventories = {job['inventory'] for job in attrs['jobs'] if 'inventory' in job}
requested_use_execution_environments = {job['execution_environment'] for job in attrs['jobs'] if 'execution_environment' in job}
requested_use_credentials = set()
requested_use_labels = set()
# requested_use_instance_groups = set()
for job in attrs['jobs']:
for cred in job.get('credentials', []):
requested_use_credentials.add(cred)
for label in job.get('labels', []):
requested_use_labels.add(label)
# for instance_group in job.get('instance_groups', []):
# requested_use_instance_groups.add(instance_group)
key_to_obj_map = {
"unified_job_template": {obj.id: obj for obj in UnifiedJobTemplate.objects.filter(id__in=requested_ujts)},
"inventory": {obj.id: obj for obj in Inventory.objects.filter(id__in=requested_use_inventories)},
"credentials": {obj.id: obj for obj in Credential.objects.filter(id__in=requested_use_credentials)},
"labels": {obj.id: obj for obj in Label.objects.filter(id__in=requested_use_labels)},
# "instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"execution_environment": {obj.id: obj for obj in ExecutionEnvironment.objects.filter(id__in=requested_use_execution_environments)},
}
ujts = {}
for ujt in key_to_obj_map['unified_job_template'].values():
ujts.setdefault(type(ujt), [])
ujts[type(ujt)].append(ujt)
unallowed_types = set(ujts.keys()) - set([JobTemplate, Project, InventorySource, WorkflowJobTemplate])
if unallowed_types:
type_names = ' '.join([cls._meta.verbose_name.title() for cls in unallowed_types])
raise serializers.ValidationError(_("Template types {type_names} not allowed in bulk jobs").format(type_names=type_names))
for model, obj_list in ujts.items():
role_field = 'execute_role' if issubclass(model, (JobTemplate, WorkflowJobTemplate)) else 'update_role'
self.check_list_permission(model, set([obj.id for obj in obj_list]), role_field)
self.check_organization_permission(attrs, request)
if 'inventory' in attrs:
requested_use_inventories.add(attrs['inventory'].id)
self.check_list_permission(Inventory, requested_use_inventories, 'use_role')
self.check_list_permission(Credential, requested_use_credentials, 'use_role')
self.check_list_permission(Label, requested_use_labels)
# self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(ExecutionEnvironment, requested_use_execution_environments) # TODO: change if roles introduced
jobs_object = self.get_objectified_jobs(attrs, key_to_obj_map)
attrs['jobs'] = jobs_object
if 'extra_vars' in attrs:
extra_vars_dict = parse_yaml_or_json(attrs['extra_vars'])
attrs['extra_vars'] = json.dumps(extra_vars_dict)
attrs = super().validate(attrs)
return attrs
def check_list_permission(self, model, id_list, role_field=None):
if not id_list:
return
user = self.context['request'].user
if role_field is None: # implies "read" level permission is required
access_qs = user.get_queryset(model)
else:
access_qs = model.accessible_objects(user, role_field)
not_allowed = set(id_list) - set(access_qs.filter(id__in=id_list).values_list('id', flat=True))
if not_allowed:
raise serializers.ValidationError(
_("{model_name} {not_allowed} not found or you don't have permissions to access it").format(
model_name=model._meta.verbose_name_plural.title(), not_allowed=not_allowed
)
)
def create(self, validated_data):
request = self.context.get('request', None)
launch_user = request.user if request else None
job_node_data = validated_data.pop('jobs')
wfj_deferred_attr_names = ('skip_tags', 'limit', 'job_tags')
wfj_deferred_vals = {}
for item in wfj_deferred_attr_names:
wfj_deferred_vals[item] = validated_data.pop(item, None)
wfj = WorkflowJob.objects.create(**validated_data, is_bulk_job=True, launch_type='manual', created_by=launch_user)
for key, val in wfj_deferred_vals.items():
if val:
setattr(wfj, key, val)
nodes = []
node_m2m_objects = {}
node_m2m_object_types_to_through_model = {
'credentials': WorkflowJobNode.credentials.through,
'labels': WorkflowJobNode.labels.through,
# 'instance_groups': WorkflowJobNode.instance_groups.through,
}
node_deferred_attr_names = (
'limit',
'scm_branch',
'verbosity',
'forks',
'diff_mode',
'job_tags',
'job_type',
'skip_tags',
'job_slice_count',
'timeout',
)
node_deferred_attrs = {}
for node_attrs in job_node_data:
# we need to add any m2m objects after creation via the through model
node_m2m_objects[node_attrs['identifier']] = {}
node_deferred_attrs[node_attrs['identifier']] = {}
for item in node_m2m_object_types_to_through_model.keys():
if item in node_attrs:
node_m2m_objects[node_attrs['identifier']][item] = node_attrs.pop(item)
# Some attributes are not accepted by WorkflowJobNode __init__, we have to set them after
for item in node_deferred_attr_names:
if item in node_attrs:
node_deferred_attrs[node_attrs['identifier']][item] = node_attrs.pop(item)
# Create the node objects
node_obj = WorkflowJobNode(workflow_job=wfj, created=wfj.created, modified=wfj.modified, **node_attrs)
# we can set the deferred attrs now
for item, value in node_deferred_attrs[node_attrs['identifier']].items():
setattr(node_obj, item, value)
# the node is now ready to be bulk created
nodes.append(node_obj)
# we'll need this later when we do the m2m through model bulk create
node_m2m_objects[node_attrs['identifier']]['node'] = node_obj
WorkflowJobNode.objects.bulk_create(nodes)
# Deal with the m2m objects we have to create once the node exists
for field_name, through_model in node_m2m_object_types_to_through_model.items():
through_model_objects = []
for node_identifier in node_m2m_objects.keys():
if field_name in node_m2m_objects[node_identifier] and field_name == 'credentials':
for cred in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(credential=cred, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if field_name in node_m2m_objects[node_identifier] and field_name == 'labels':
for label in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(label=label, workflowjobnode=node_m2m_objects[node_identifier]['node']))
# if obj_type in node_m2m_objects[node_identifier] and obj_type == 'instance_groups':
# for instance_group in node_m2m_objects[node_identifier][obj_type]:
# through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if through_model_objects:
through_model.objects.bulk_create(through_model_objects)
wfj.save()
wfj.signal_start()
return WorkflowJobSerializer().to_representation(wfj)
def check_organization_permission(self, attrs, request):
# validate Organization
# - If the orgs is not set, set it to the org of the launching user
# - If the user is part of multiple orgs, throw a validation error saying user is part of multiple orgs, please provide one
if not request.user.is_superuser:
read_org_qs = Organization.accessible_objects(request.user, 'member_role')
if 'organization' not in attrs or attrs['organization'] == None or attrs['organization'] == '':
read_org_ct = read_org_qs.count()
if read_org_ct == 1:
attrs['organization'] = read_org_qs.first()
elif read_org_ct > 1:
raise serializers.ValidationError("User has permission to multiple Organizations, please set one of them in the request")
else:
raise serializers.ValidationError("User not part of any organization, please assign an organization to assign to the bulk job")
else:
allowed_orgs = set(read_org_qs.values_list('id', flat=True))
requested_org = attrs['organization']
if requested_org.id not in allowed_orgs:
raise ValidationError(_(f"Organization {requested_org.id} not found or you don't have permissions to access it"))
def get_objectified_jobs(self, attrs, key_to_obj_map):
objectified_jobs = []
# This loop is generalized so we should only have to add related items to the key_to_obj_map
for job in attrs['jobs']:
objectified_job = {}
for key, value in job.items():
if key in key_to_obj_map:
if isinstance(value, int):
objectified_job[key] = key_to_obj_map[key][value]
elif isinstance(value, list):
objectified_job[key] = [key_to_obj_map[key][item] for item in value]
else:
objectified_job[key] = value
objectified_jobs.append(objectified_job)
return objectified_jobs
class NotificationTemplateSerializer(BaseSerializer): class NotificationTemplateSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete', 'copy'] show_capabilities = ['edit', 'delete', 'copy']
capabilities_prefetch = [{'copy': 'organization.admin'}] capabilities_prefetch = [{'copy': 'organization.admin'}]
@@ -5016,6 +4666,7 @@ class NotificationTemplateSerializer(BaseSerializer):
class NotificationSerializer(BaseSerializer): class NotificationSerializer(BaseSerializer):
body = serializers.SerializerMethodField(help_text=_('Notification body')) body = serializers.SerializerMethodField(help_text=_('Notification body'))
class Meta: class Meta:
@@ -5149,7 +4800,7 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
), ),
) )
until = serializers.SerializerMethodField( until = serializers.SerializerMethodField(
help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end an empty string will be returned'), help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end an emptry string will be returned'),
) )
class Meta: class Meta:
@@ -5387,11 +5038,14 @@ class InstanceHealthCheckSerializer(BaseSerializer):
class InstanceGroupSerializer(BaseSerializer): class InstanceGroupSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete'] show_capabilities = ['edit', 'delete']
capacity = serializers.SerializerMethodField()
consumed_capacity = serializers.SerializerMethodField() consumed_capacity = serializers.SerializerMethodField()
percent_capacity_remaining = serializers.SerializerMethodField() percent_capacity_remaining = serializers.SerializerMethodField()
jobs_running = serializers.SerializerMethodField() jobs_running = serializers.IntegerField(
help_text=_('Count of jobs in the running or waiting state that ' 'are targeted for this instance group'), read_only=True
)
jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance group'), read_only=True) jobs_total = serializers.IntegerField(help_text=_('Count of all jobs that target this instance group'), read_only=True)
instances = serializers.SerializerMethodField() instances = serializers.SerializerMethodField()
is_container_group = serializers.BooleanField( is_container_group = serializers.BooleanField(
@@ -5417,22 +5071,6 @@ class InstanceGroupSerializer(BaseSerializer):
label=_('Policy Instance Minimum'), label=_('Policy Instance Minimum'),
help_text=_("Static minimum number of Instances that will be automatically assign to " "this group when new instances come online."), help_text=_("Static minimum number of Instances that will be automatically assign to " "this group when new instances come online."),
) )
max_concurrent_jobs = serializers.IntegerField(
default=0,
min_value=0,
required=False,
initial=0,
label=_('Max Concurrent Jobs'),
help_text=_("Maximum number of concurrent jobs to run on a group. When set to zero, no maximum is enforced."),
)
max_forks = serializers.IntegerField(
default=0,
min_value=0,
required=False,
initial=0,
label=_('Max Forks'),
help_text=_("Maximum number of forks to execute concurrently on a group. When set to zero, no maximum is enforced."),
)
policy_instance_list = serializers.ListField( policy_instance_list = serializers.ListField(
child=serializers.CharField(), child=serializers.CharField(),
required=False, required=False,
@@ -5454,8 +5092,6 @@ class InstanceGroupSerializer(BaseSerializer):
"consumed_capacity", "consumed_capacity",
"percent_capacity_remaining", "percent_capacity_remaining",
"jobs_running", "jobs_running",
"max_concurrent_jobs",
"max_forks",
"jobs_total", "jobs_total",
"instances", "instances",
"is_container_group", "is_container_group",
@@ -5471,8 +5107,6 @@ class InstanceGroupSerializer(BaseSerializer):
res = super(InstanceGroupSerializer, self).get_related(obj) res = super(InstanceGroupSerializer, self).get_related(obj)
res['jobs'] = self.reverse('api:instance_group_unified_jobs_list', kwargs={'pk': obj.pk}) res['jobs'] = self.reverse('api:instance_group_unified_jobs_list', kwargs={'pk': obj.pk})
res['instances'] = self.reverse('api:instance_group_instance_list', kwargs={'pk': obj.pk}) res['instances'] = self.reverse('api:instance_group_instance_list', kwargs={'pk': obj.pk})
res['access_list'] = self.reverse('api:instance_group_access_list', kwargs={'pk': obj.pk})
res['object_roles'] = self.reverse('api:instance_group_object_role_list', kwargs={'pk': obj.pk})
if obj.credential: if obj.credential:
res['credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.credential_id}) res['credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.credential_id})
@@ -5539,42 +5173,32 @@ class InstanceGroupSerializer(BaseSerializer):
# Store capacity values (globally computed) in the context # Store capacity values (globally computed) in the context
if 'task_manager_igs' not in self.context: if 'task_manager_igs' not in self.context:
instance_groups_queryset = None instance_groups_queryset = None
jobs_qs = UnifiedJob.objects.filter(status__in=('running', 'waiting'))
if self.parent: # Is ListView: if self.parent: # Is ListView:
instance_groups_queryset = self.parent.instance instance_groups_queryset = self.parent.instance
tm_models = TaskManagerModels.init_with_consumed_capacity( instances = TaskManagerInstances(jobs_qs)
instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled'], instance_groups = TaskManagerInstanceGroups(instances_by_hostname=instances, instance_groups_queryset=instance_groups_queryset)
instance_groups_queryset=instance_groups_queryset,
)
self.context['task_manager_igs'] = tm_models.instance_groups self.context['task_manager_igs'] = instance_groups
return self.context['task_manager_igs'] return self.context['task_manager_igs']
def get_consumed_capacity(self, obj): def get_consumed_capacity(self, obj):
ig_mgr = self.get_ig_mgr() ig_mgr = self.get_ig_mgr()
return ig_mgr.get_consumed_capacity(obj.name) return ig_mgr.get_consumed_capacity(obj.name)
def get_capacity(self, obj):
ig_mgr = self.get_ig_mgr()
return ig_mgr.get_capacity(obj.name)
def get_percent_capacity_remaining(self, obj): def get_percent_capacity_remaining(self, obj):
capacity = self.get_capacity(obj) if not obj.capacity:
if not capacity:
return 0.0 return 0.0
consumed_capacity = self.get_consumed_capacity(obj) ig_mgr = self.get_ig_mgr()
return float("{0:.2f}".format(((float(capacity) - float(consumed_capacity)) / (float(capacity))) * 100)) return float("{0:.2f}".format((float(ig_mgr.get_remaining_capacity(obj.name)) / (float(obj.capacity))) * 100))
def get_instances(self, obj): def get_instances(self, obj):
ig_mgr = self.get_ig_mgr() return obj.instances.count()
return len(ig_mgr.get_instances(obj.name))
def get_jobs_running(self, obj):
ig_mgr = self.get_ig_mgr()
return ig_mgr.get_jobs_running(obj.name)
class ActivityStreamSerializer(BaseSerializer): class ActivityStreamSerializer(BaseSerializer):
changes = serializers.SerializerMethodField() changes = serializers.SerializerMethodField()
object_association = serializers.SerializerMethodField(help_text=_("When present, shows the field name of the role or relationship that changed.")) object_association = serializers.SerializerMethodField(help_text=_("When present, shows the field name of the role or relationship that changed."))
object_type = serializers.SerializerMethodField(help_text=_("When present, shows the model on which the role or relationship was defined.")) object_type = serializers.SerializerMethodField(help_text=_("When present, shows the model on which the role or relationship was defined."))

View File

@@ -7,12 +7,10 @@ the following fields (some fields may not be visible to all users):
* `project_base_dir`: Path on the server where projects and playbooks are \ * `project_base_dir`: Path on the server where projects and playbooks are \
stored. stored.
* `project_local_paths`: List of directories beneath `project_base_dir` to * `project_local_paths`: List of directories beneath `project_base_dir` to
use when creating/editing a manual project. use when creating/editing a project.
* `time_zone`: The configured time zone for the server. * `time_zone`: The configured time zone for the server.
* `license_info`: Information about the current license. * `license_info`: Information about the current license.
* `version`: Version of Ansible Tower package installed. * `version`: Version of Ansible Tower package installed.
* `custom_virtualenvs`: Deprecated venv locations from before migration to
execution environments. Export tooling is in `awx-manage` commands.
* `eula`: The current End-User License Agreement * `eula`: The current End-User License Agreement
{% endifmeth %} {% endifmeth %}

View File

@@ -0,0 +1,4 @@
Version 1 of the Ansible Tower REST API.
Make a GET request to this resource to obtain a list of all child resources
available via the API.

View File

@@ -1,41 +0,0 @@
# Bulk Host Create
This endpoint allows the client to create multiple hosts and associate them with an inventory. They may do this by providing the inventory ID and a list of json that would normally be provided to create hosts.
Example:
{
"inventory": 1,
"hosts": [
{"name": "example1.com", "variables": "ansible_connection: local"},
{"name": "example2.com"}
]
}
Return data:
{
"url": "/api/v2/inventories/3/hosts/",
"hosts": [
{
"name": "example1.com",
"enabled": true,
"instance_id": "",
"description": "",
"variables": "ansible_connection: local",
"id": 1255,
"url": "/api/v2/hosts/1255/",
"inventory": "/api/v2/inventories/3/"
},
{
"name": "example2.com",
"enabled": true,
"instance_id": "",
"description": "",
"variables": "",
"id": 1256,
"url": "/api/v2/hosts/1256/",
"inventory": "/api/v2/inventories/3/"
}
]
}
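For reference, a minimal client-side sketch of posting the request body above to this endpoint; the base URL and token are placeholder assumptions, and the path comes from the bulk_host_create route removed elsewhere in this diff.
import requests

AWX_URL = "https://awx.example.com"  # placeholder controller URL (assumption)
TOKEN = "REPLACE_ME"                 # placeholder OAuth2 token (assumption)

payload = {
    "inventory": 1,
    "hosts": [
        {"name": "example1.com", "variables": "ansible_connection: local"},
        {"name": "example2.com"},
    ],
}

# POST the documented payload; a 201 response carries return data shaped like the example above.
resp = requests.post(
    f"{AWX_URL}/api/v2/bulk/host_create/",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["url"])  # e.g. "/api/v2/inventories/1/hosts/"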

View File

@@ -1,13 +0,0 @@
# Bulk Job Launch
This endpoint allows the client to launch multiple UnifiedJobTemplates at a time, along side any launch time parameters that they would normally set at launch time.
Example:
{
"name": "my bulk job",
"jobs": [
{"unified_job_template": 7, "inventory": 2},
{"unified_job_template": 7, "credentials": [3]}
]
}
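Likewise, a hedged sketch of launching the documented example through the bulk job launch route (also removed in this diff); the base URL and token are placeholder assumptions.
import requests

AWX_URL = "https://awx.example.com"  # placeholder controller URL (assumption)
TOKEN = "REPLACE_ME"                 # placeholder OAuth2 token (assumption)

payload = {
    "name": "my bulk job",
    "jobs": [
        {"unified_job_template": 7, "inventory": 2},
        {"unified_job_template": 7, "credentials": [3]},
    ],
}

# A successful launch returns 201 with a workflow-job representation, since the
# serializer wraps the requested jobs in a WorkflowJob before signaling start.
resp = requests.post(
    f"{AWX_URL}/api/v2/bulk/job_launch/",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code, resp.json().get("id"))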

View File

@@ -1,3 +0,0 @@
# Bulk Actions
This endpoint lists available bulk action APIs.
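Per the BulkView removed further down in this diff, a GET on this endpoint returns a small mapping of the two sub-endpoints, roughly {"host_create": ".../bulk/host_create/", "job_launch": ".../bulk/job_launch/"}.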

View File

@@ -3,7 +3,7 @@ Make a GET request to this resource to retrieve aggregate statistics about inven
Including fetching the number of total hosts tracked by Tower over an amount of time and the current success or Including fetching the number of total hosts tracked by Tower over an amount of time and the current success or
failed status of hosts which have run jobs within an Inventory. failed status of hosts which have run jobs within an Inventory.
## Parameters and Filtering ## Parmeters and Filtering
The `period` of the data can be adjusted with: The `period` of the data can be adjusted with:
@@ -24,7 +24,7 @@ Data about the number of hosts will be returned in the following format:
Each element contains an epoch timestamp represented in seconds and a numerical value indicating Each element contains an epoch timestamp represented in seconds and a numerical value indicating
the number of hosts that exist at a given moment the number of hosts that exist at a given moment
Data about failed and successful hosts by inventory will be given as: Data about failed and successfull hosts by inventory will be given as:
{ {
"sources": [ "sources": [

View File

@@ -2,7 +2,7 @@
Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing. Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing.
## Parameters and Filtering ## Parmeters and Filtering
The `period` of the data can be adjusted with: The `period` of the data can be adjusted with:

View File

@@ -0,0 +1,11 @@
# List Fact Scans for a Host Specific Host Scan
Make a GET request to this resource to retrieve system tracking data for a particular scan
You may filter by datetime:
`?datetime=2015-06-01`
and module
`?datetime=2015-06-01&module=ansible`
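A small sketch of applying both filters from a client, letting the HTTP library handle URL encoding; the fact-scan URL and token below are placeholder assumptions.
import requests

FACT_SCAN_URL = "https://awx.example.com/api/v2/hosts/1/fact_versions/"  # placeholder path (assumption)

# Equivalent to appending ?datetime=2015-06-01&module=ansible as described above.
resp = requests.get(
    FACT_SCAN_URL,
    params={"datetime": "2015-06-01", "module": "ansible"},
    headers={"Authorization": "Bearer REPLACE_ME"},  # placeholder token (assumption)
)
print(resp.json())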

View File

@@ -0,0 +1,11 @@
# List Fact Scans for a Host by Module and Date
Make a GET request to this resource to retrieve system tracking scans by module and date/time
You may filter scan runs using the `from` and `to` properties:
`?from=2015-06-01%2012:00:00&to=2015-06-03`
You may also filter by module
`?module=packages`

View File

@@ -0,0 +1 @@
# List Red Hat Insights for a Host

View File

@@ -18,7 +18,7 @@ inventory sources:
* `inventory_update`: ID of the inventory update job that was started. * `inventory_update`: ID of the inventory update job that was started.
(integer, read-only) (integer, read-only)
* `project_update`: ID of the project update job that was started if this inventory source is an SCM source. * `project_update`: ID of the project update job that was started if this inventory source is an SCM source.
(integer, read-only, optional) (interger, read-only, optional)
Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories. Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories.

View File

@@ -0,0 +1,21 @@
{% ifmeth GET %}
# Determine if a Job can be started
Make a GET request to this resource to determine if the job can be started and
whether any passwords are required to start the job. The response will include
the following fields:
* `can_start`: Flag indicating if this job can be started (boolean, read-only)
* `passwords_needed_to_start`: Password names required to start the job (array,
read-only)
{% endifmeth %}
{% ifmeth POST %}
# Start a Job
Make a POST request to this resource to start the job. If any passwords are
required, they must be passed via POST data.
If successful, the response status code will be 202. If any required passwords
are not provided, a 400 status code will be returned. If the job cannot be
started, a 405 status code will be returned.
{% endifmeth %}
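A hedged sketch of the two-step flow this page describes: check can_start and passwords_needed_to_start with a GET, then POST any required passwords; the job start URL, token, and password values are placeholder assumptions.
import requests

JOB_START_URL = "https://awx.example.com/api/v2/jobs/42/start/"  # placeholder start URL (assumption)
HEADERS = {"Authorization": "Bearer REPLACE_ME"}                 # placeholder token (assumption)

info = requests.get(JOB_START_URL, headers=HEADERS).json()
if info.get("can_start"):
    # Supply each password named in passwords_needed_to_start via POST data.
    passwords = {name: "REPLACE_ME" for name in info.get("passwords_needed_to_start", [])}
    resp = requests.post(JOB_START_URL, data=passwords, headers=HEADERS)
    # 202 = started, 400 = required passwords missing, 405 = job cannot be started.
    print(resp.status_code)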

View File

@@ -7,7 +7,7 @@ receptor_work_commands:
command: ansible-runner command: ansible-runner
params: worker params: worker
allowruntimeparams: true allowruntimeparams: true
verifysignature: true verifysignature: {{ sign_work }}
custom_worksign_public_keyfile: receptor/work-public-key.pem custom_worksign_public_keyfile: receptor/work-public-key.pem
custom_tls_certfile: receptor/tls/receptor.crt custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key custom_tls_keyfile: receptor/tls/receptor.key

View File

@@ -3,14 +3,7 @@
from django.urls import re_path from django.urls import re_path
from awx.api.views import ( from awx.api.views import InstanceGroupList, InstanceGroupDetail, InstanceGroupUnifiedJobsList, InstanceGroupInstanceList
InstanceGroupList,
InstanceGroupDetail,
InstanceGroupUnifiedJobsList,
InstanceGroupInstanceList,
InstanceGroupAccessList,
InstanceGroupObjectRolesList,
)
urls = [ urls = [
@@ -18,8 +11,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'), re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'),
re_path(r'^(?P<pk>[0-9]+)/jobs/$', InstanceGroupUnifiedJobsList.as_view(), name='instance_group_unified_jobs_list'), re_path(r'^(?P<pk>[0-9]+)/jobs/$', InstanceGroupUnifiedJobsList.as_view(), name='instance_group_unified_jobs_list'),
re_path(r'^(?P<pk>[0-9]+)/instances/$', InstanceGroupInstanceList.as_view(), name='instance_group_instance_list'), re_path(r'^(?P<pk>[0-9]+)/instances/$', InstanceGroupInstanceList.as_view(), name='instance_group_instance_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', InstanceGroupAccessList.as_view(), name='instance_group_access_list'),
re_path(r'^(?P<pk>[0-9]+)/object_roles/$', InstanceGroupObjectRolesList.as_view(), name='instance_group_object_role_list'),
] ]
__all__ = ['urls'] __all__ = ['urls']

View File

@@ -31,13 +31,6 @@ from awx.api.views import (
ApplicationOAuth2TokenList, ApplicationOAuth2TokenList,
OAuth2ApplicationDetail, OAuth2ApplicationDetail,
) )
from awx.api.views.bulk import (
BulkView,
BulkHostCreateView,
BulkJobLaunchView,
)
from awx.api.views.mesh_visualizer import MeshVisualizer from awx.api.views.mesh_visualizer import MeshVisualizer
from awx.api.views.metrics import MetricsView from awx.api.views.metrics import MetricsView
@@ -143,9 +136,6 @@ v2_urls = [
re_path(r'^activity_stream/', include(activity_stream_urls)), re_path(r'^activity_stream/', include(activity_stream_urls)),
re_path(r'^workflow_approval_templates/', include(workflow_approval_template_urls)), re_path(r'^workflow_approval_templates/', include(workflow_approval_template_urls)),
re_path(r'^workflow_approvals/', include(workflow_approval_urls)), re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
] ]

View File

@@ -33,6 +33,7 @@ class HostnameRegexValidator(RegexValidator):
return f"regex={self.regex}, message={self.message}, code={self.code}, inverse_match={self.inverse_match}, flags={self.flags}" return f"regex={self.regex}, message={self.message}, code={self.code}, inverse_match={self.inverse_match}, flags={self.flags}"
def __validate(self, value): def __validate(self, value):
if ' ' in value: if ' ' in value:
return False, ValidationError("whitespaces in hostnames are illegal") return False, ValidationError("whitespaces in hostnames are illegal")

File diff suppressed because it is too large

View File

@@ -1,69 +0,0 @@
from collections import OrderedDict
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer
from rest_framework.reverse import reverse
from rest_framework import status
from rest_framework.response import Response
from awx.main.models import UnifiedJob, Host
from awx.api.generics import (
GenericAPIView,
APIView,
)
from awx.api import (
serializers,
renderers,
)
class BulkView(APIView):
permission_classes = [IsAuthenticated]
renderer_classes = [
renderers.BrowsableAPIRenderer,
JSONRenderer,
]
allowed_methods = ['GET', 'OPTIONS']
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
data['host_create'] = reverse('api:bulk_host_create', request=request)
data['job_launch'] = reverse('api:bulk_job_launch', request=request)
return Response(data)
class BulkJobLaunchView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = UnifiedJob
serializer_class = serializers.BulkJobLaunchSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
data = OrderedDict()
data['detail'] = "Specify a list of unified job templates to launch alongside their launchtime parameters"
return Response(data, status=status.HTTP_200_OK)
def post(self, request):
bulkjob_serializer = serializers.BulkJobLaunchSerializer(data=request.data, context={'request': request})
if bulkjob_serializer.is_valid():
result = bulkjob_serializer.create(bulkjob_serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(bulkjob_serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class BulkHostCreateView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = Host
serializer_class = serializers.BulkHostCreateSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
return Response({"detail": "Bulk create hosts with this endpoint"}, status=status.HTTP_200_OK)
def post(self, request):
serializer = serializers.BulkHostCreateSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
result = serializer.create(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

View File

@@ -25,7 +25,6 @@ from rest_framework import status
# Red Hat has an OID namespace (RHANANA). Receptor has its own designation under that. # Red Hat has an OID namespace (RHANANA). Receptor has its own designation under that.
RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1" RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
# generate install bundle for the instance # generate install bundle for the instance
# install bundle directory structure # install bundle directory structure
# ├── install_receptor.yml (playbook) # ├── install_receptor.yml (playbook)
@@ -41,6 +40,7 @@ RECEPTOR_OID = "1.3.6.1.4.1.2312.19.1"
# │ └── work-public-key.pem # │ └── work-public-key.pem
# └── requirements.yml # └── requirements.yml
class InstanceInstallBundle(GenericAPIView): class InstanceInstallBundle(GenericAPIView):
name = _('Install Bundle') name = _('Install Bundle')
model = models.Instance model = models.Instance
serializer_class = serializers.InstanceSerializer serializer_class = serializers.InstanceSerializer

View File

@@ -46,6 +46,7 @@ logger = logging.getLogger('awx.api.views.organization')
class InventoryUpdateEventsList(SubListAPIView): class InventoryUpdateEventsList(SubListAPIView):
model = InventoryUpdateEvent model = InventoryUpdateEvent
serializer_class = InventoryUpdateEventSerializer serializer_class = InventoryUpdateEventSerializer
parent_model = InventoryUpdate parent_model = InventoryUpdate
@@ -65,11 +66,13 @@ class InventoryUpdateEventsList(SubListAPIView):
class InventoryList(ListCreateAPIView): class InventoryList(ListCreateAPIView):
model = Inventory model = Inventory
serializer_class = InventorySerializer serializer_class = InventorySerializer
class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView): class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Inventory model = Inventory
serializer_class = InventorySerializer serializer_class = InventorySerializer
@@ -95,6 +98,7 @@ class InventoryDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIVie
class InventoryActivityStreamList(SubListAPIView): class InventoryActivityStreamList(SubListAPIView):
model = ActivityStream model = ActivityStream
serializer_class = ActivityStreamSerializer serializer_class = ActivityStreamSerializer
parent_model = Inventory parent_model = Inventory
@@ -109,6 +113,7 @@ class InventoryActivityStreamList(SubListAPIView):
class InventoryInstanceGroupsList(SubListAttachDetachAPIView): class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup model = InstanceGroup
serializer_class = InstanceGroupSerializer serializer_class = InstanceGroupSerializer
parent_model = Inventory parent_model = Inventory
@@ -116,11 +121,13 @@ class InventoryInstanceGroupsList(SubListAttachDetachAPIView):
class InventoryAccessList(ResourceAccessList): class InventoryAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists's model = User # needs to be User for AccessLists's
parent_model = Inventory parent_model = Inventory
class InventoryObjectRolesList(SubListAPIView): class InventoryObjectRolesList(SubListAPIView):
model = Role model = Role
serializer_class = RoleSerializer serializer_class = RoleSerializer
parent_model = Inventory parent_model = Inventory
@@ -133,6 +140,7 @@ class InventoryObjectRolesList(SubListAPIView):
class InventoryJobTemplateList(SubListAPIView): class InventoryJobTemplateList(SubListAPIView):
model = JobTemplate model = JobTemplate
serializer_class = JobTemplateSerializer serializer_class = JobTemplateSerializer
parent_model = Inventory parent_model = Inventory
@@ -146,9 +154,11 @@ class InventoryJobTemplateList(SubListAPIView):
class InventoryLabelList(LabelSubListCreateAttachDetachView): class InventoryLabelList(LabelSubListCreateAttachDetachView):
parent_model = Inventory parent_model = Inventory
class InventoryCopy(CopyAPIView): class InventoryCopy(CopyAPIView):
model = Inventory model = Inventory
copy_return_serializer_class = InventorySerializer copy_return_serializer_class = InventorySerializer

View File

@@ -59,11 +59,13 @@ class LabelSubListCreateAttachDetachView(SubListCreateAttachDetachAPIView):
class LabelDetail(RetrieveUpdateAPIView): class LabelDetail(RetrieveUpdateAPIView):
model = Label model = Label
serializer_class = LabelSerializer serializer_class = LabelSerializer
class LabelList(ListCreateAPIView): class LabelList(ListCreateAPIView):
name = _("Labels") name = _("Labels")
model = Label model = Label
serializer_class = LabelSerializer serializer_class = LabelSerializer

View File

@@ -10,11 +10,13 @@ from awx.main.models import InstanceLink, Instance
class MeshVisualizer(APIView): class MeshVisualizer(APIView):
name = _("Mesh Visualizer") name = _("Mesh Visualizer")
permission_classes = (IsSystemAdminOrAuditor,) permission_classes = (IsSystemAdminOrAuditor,)
swagger_topic = "System Configuration" swagger_topic = "System Configuration"
def get(self, request, format=None): def get(self, request, format=None):
data = { data = {
'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data, 'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data, 'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data,

View File

@@ -5,11 +5,9 @@
import logging import logging
# Django # Django
from django.conf import settings
from django.utils.translation import gettext_lazy as _ from django.utils.translation import gettext_lazy as _
# Django REST Framework # Django REST Framework
from rest_framework.permissions import AllowAny
from rest_framework.response import Response from rest_framework.response import Response
from rest_framework.exceptions import PermissionDenied from rest_framework.exceptions import PermissionDenied
@@ -27,19 +25,15 @@ logger = logging.getLogger('awx.analytics')
class MetricsView(APIView): class MetricsView(APIView):
name = _('Metrics') name = _('Metrics')
swagger_topic = 'Metrics' swagger_topic = 'Metrics'
renderer_classes = [renderers.PlainTextRenderer, renderers.PrometheusJSONRenderer, renderers.BrowsableAPIRenderer] renderer_classes = [renderers.PlainTextRenderer, renderers.PrometheusJSONRenderer, renderers.BrowsableAPIRenderer]
def initialize_request(self, request, *args, **kwargs):
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS:
self.permission_classes = (AllowAny,)
return super(APIView, self).initialize_request(request, *args, **kwargs)
def get(self, request): def get(self, request):
'''Show Metrics Details''' '''Show Metrics Details'''
if settings.ALLOW_METRICS_FOR_ANONYMOUS_USERS or request.user.is_superuser or request.user.is_system_auditor: if request.user.is_superuser or request.user.is_system_auditor:
metrics_to_show = '' metrics_to_show = ''
if not request.query_params.get('subsystemonly', "0") == "1": if not request.query_params.get('subsystemonly', "0") == "1":
metrics_to_show += metrics().decode('UTF-8') metrics_to_show += metrics().decode('UTF-8')

View File

@@ -16,7 +16,7 @@ from rest_framework import status
from awx.main.constants import ACTIVE_STATES from awx.main.constants import ACTIVE_STATES
from awx.main.utils import get_object_or_400 from awx.main.utils import get_object_or_400
from awx.main.models.ha import Instance, InstanceGroup, schedule_policy_task from awx.main.models.ha import Instance, InstanceGroup
from awx.main.models.organization import Team from awx.main.models.organization import Team
from awx.main.models.projects import Project from awx.main.models.projects import Project
from awx.main.models.inventory import Inventory from awx.main.models.inventory import Inventory
@@ -107,11 +107,6 @@ class InstanceGroupMembershipMixin(object):
if inst_name in ig_obj.policy_instance_list: if inst_name in ig_obj.policy_instance_list:
ig_obj.policy_instance_list.pop(ig_obj.policy_instance_list.index(inst_name)) ig_obj.policy_instance_list.pop(ig_obj.policy_instance_list.index(inst_name))
ig_obj.save(update_fields=['policy_instance_list']) ig_obj.save(update_fields=['policy_instance_list'])
# sometimes removing an instance has a non-obvious consequence
# this is almost always true if policy_instance_percentage or _minimum is non-zero
# after removing a single instance, the other memberships need to be re-balanced
schedule_policy_task()
return response return response

View File

@@ -58,6 +58,7 @@ logger = logging.getLogger('awx.api.views.organization')
class OrganizationList(OrganizationCountsMixin, ListCreateAPIView): class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization model = Organization
serializer_class = OrganizationSerializer serializer_class = OrganizationSerializer
@@ -69,6 +70,7 @@ class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView): class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization model = Organization
serializer_class = OrganizationSerializer serializer_class = OrganizationSerializer
@@ -104,6 +106,7 @@ class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPI
class OrganizationInventoriesList(SubListAPIView): class OrganizationInventoriesList(SubListAPIView):
model = Inventory model = Inventory
serializer_class = InventorySerializer serializer_class = InventorySerializer
parent_model = Organization parent_model = Organization
@@ -111,6 +114,7 @@ class OrganizationInventoriesList(SubListAPIView):
class OrganizationUsersList(BaseUsersList): class OrganizationUsersList(BaseUsersList):
model = User model = User
serializer_class = UserSerializer serializer_class = UserSerializer
parent_model = Organization parent_model = Organization
@@ -119,6 +123,7 @@ class OrganizationUsersList(BaseUsersList):
class OrganizationAdminsList(BaseUsersList): class OrganizationAdminsList(BaseUsersList):
model = User model = User
serializer_class = UserSerializer serializer_class = UserSerializer
parent_model = Organization parent_model = Organization
@@ -127,6 +132,7 @@ class OrganizationAdminsList(BaseUsersList):
class OrganizationProjectsList(SubListCreateAPIView): class OrganizationProjectsList(SubListCreateAPIView):
model = Project model = Project
serializer_class = ProjectSerializer serializer_class = ProjectSerializer
parent_model = Organization parent_model = Organization
@@ -134,6 +140,7 @@ class OrganizationProjectsList(SubListCreateAPIView):
class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView): class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
model = ExecutionEnvironment model = ExecutionEnvironment
serializer_class = ExecutionEnvironmentSerializer serializer_class = ExecutionEnvironmentSerializer
parent_model = Organization parent_model = Organization
@@ -143,6 +150,7 @@ class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
class OrganizationJobTemplatesList(SubListCreateAPIView): class OrganizationJobTemplatesList(SubListCreateAPIView):
model = JobTemplate model = JobTemplate
serializer_class = JobTemplateSerializer serializer_class = JobTemplateSerializer
parent_model = Organization parent_model = Organization
@@ -150,6 +158,7 @@ class OrganizationJobTemplatesList(SubListCreateAPIView):
class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView): class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
model = WorkflowJobTemplate model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateSerializer serializer_class = WorkflowJobTemplateSerializer
parent_model = Organization parent_model = Organization
@@ -157,6 +166,7 @@ class OrganizationWorkflowJobTemplatesList(SubListCreateAPIView):
class OrganizationTeamsList(SubListCreateAttachDetachAPIView): class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
model = Team model = Team
serializer_class = TeamSerializer serializer_class = TeamSerializer
parent_model = Organization parent_model = Organization
@@ -165,6 +175,7 @@ class OrganizationTeamsList(SubListCreateAttachDetachAPIView):
class OrganizationActivityStreamList(SubListAPIView): class OrganizationActivityStreamList(SubListAPIView):
model = ActivityStream model = ActivityStream
serializer_class = ActivityStreamSerializer serializer_class = ActivityStreamSerializer
parent_model = Organization parent_model = Organization
@@ -173,6 +184,7 @@ class OrganizationActivityStreamList(SubListAPIView):
class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView): class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate model = NotificationTemplate
serializer_class = NotificationTemplateSerializer serializer_class = NotificationTemplateSerializer
parent_model = Organization parent_model = Organization
@@ -181,28 +193,34 @@ class OrganizationNotificationTemplatesList(SubListCreateAttachDetachAPIView):
class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView): class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView):
model = NotificationTemplate model = NotificationTemplate
serializer_class = NotificationTemplateSerializer serializer_class = NotificationTemplateSerializer
parent_model = Organization parent_model = Organization
class OrganizationNotificationTemplatesStartedList(OrganizationNotificationTemplatesAnyList): class OrganizationNotificationTemplatesStartedList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_started' relationship = 'notification_templates_started'
class OrganizationNotificationTemplatesErrorList(OrganizationNotificationTemplatesAnyList): class OrganizationNotificationTemplatesErrorList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_error' relationship = 'notification_templates_error'
class OrganizationNotificationTemplatesSuccessList(OrganizationNotificationTemplatesAnyList): class OrganizationNotificationTemplatesSuccessList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_success' relationship = 'notification_templates_success'
class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemplatesAnyList): class OrganizationNotificationTemplatesApprovalList(OrganizationNotificationTemplatesAnyList):
relationship = 'notification_templates_approvals' relationship = 'notification_templates_approvals'
class OrganizationInstanceGroupsList(SubListAttachDetachAPIView): class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
model = InstanceGroup model = InstanceGroup
serializer_class = InstanceGroupSerializer serializer_class = InstanceGroupSerializer
parent_model = Organization parent_model = Organization
@@ -210,6 +228,7 @@ class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView): class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
model = Credential model = Credential
serializer_class = CredentialSerializer serializer_class = CredentialSerializer
parent_model = Organization parent_model = Organization
@@ -221,11 +240,13 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
class OrganizationAccessList(ResourceAccessList): class OrganizationAccessList(ResourceAccessList):
model = User # needs to be User for AccessLists's model = User # needs to be User for AccessLists's
parent_model = Organization parent_model = Organization
class OrganizationObjectRolesList(SubListAPIView): class OrganizationObjectRolesList(SubListAPIView):
model = Role model = Role
serializer_class = RoleSerializer serializer_class = RoleSerializer
parent_model = Organization parent_model = Organization

View File

@@ -36,6 +36,7 @@ logger = logging.getLogger('awx.api.views.root')
class ApiRootView(APIView): class ApiRootView(APIView):
permission_classes = (AllowAny,) permission_classes = (AllowAny,)
name = _('REST API') name = _('REST API')
versioning_class = None versioning_class = None
@@ -58,6 +59,7 @@ class ApiRootView(APIView):
class ApiOAuthAuthorizationRootView(APIView): class ApiOAuthAuthorizationRootView(APIView):
permission_classes = (AllowAny,) permission_classes = (AllowAny,)
name = _("API OAuth 2 Authorization Root") name = _("API OAuth 2 Authorization Root")
versioning_class = None versioning_class = None
@@ -72,6 +74,7 @@ class ApiOAuthAuthorizationRootView(APIView):
class ApiVersionRootView(APIView): class ApiVersionRootView(APIView):
permission_classes = (AllowAny,) permission_classes = (AllowAny,)
swagger_topic = 'Versioning' swagger_topic = 'Versioning'
@@ -121,7 +124,6 @@ class ApiVersionRootView(APIView):
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request) data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request) data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request) data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
return Response(data) return Response(data)
@@ -170,6 +172,7 @@ class ApiV2PingView(APIView):
class ApiV2SubscriptionView(APIView): class ApiV2SubscriptionView(APIView):
permission_classes = (IsAuthenticated,) permission_classes = (IsAuthenticated,)
name = _('Subscriptions') name = _('Subscriptions')
swagger_topic = 'System Configuration' swagger_topic = 'System Configuration'
@@ -209,6 +212,7 @@ class ApiV2SubscriptionView(APIView):
class ApiV2AttachView(APIView): class ApiV2AttachView(APIView):
permission_classes = (IsAuthenticated,) permission_classes = (IsAuthenticated,)
name = _('Attach Subscription') name = _('Attach Subscription')
swagger_topic = 'System Configuration' swagger_topic = 'System Configuration'
@@ -226,6 +230,7 @@ class ApiV2AttachView(APIView):
user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None) user = getattr(settings, 'SUBSCRIPTIONS_USERNAME', None)
pw = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None) pw = getattr(settings, 'SUBSCRIPTIONS_PASSWORD', None)
if pool_id and user and pw: if pool_id and user and pw:
data = request.data.copy() data = request.data.copy()
try: try:
with set_environ(**settings.AWX_TASK_ENV): with set_environ(**settings.AWX_TASK_ENV):
@@ -253,6 +258,7 @@ class ApiV2AttachView(APIView):
class ApiV2ConfigView(APIView): class ApiV2ConfigView(APIView):
permission_classes = (IsAuthenticated,) permission_classes = (IsAuthenticated,)
name = _('Configuration') name = _('Configuration')
swagger_topic = 'System Configuration' swagger_topic = 'System Configuration'

View File

@@ -8,6 +8,7 @@ from django.utils.translation import gettext_lazy as _
class ConfConfig(AppConfig): class ConfConfig(AppConfig):
name = 'awx.conf' name = 'awx.conf'
verbose_name = _('Configuration') verbose_name = _('Configuration')
@@ -15,6 +16,7 @@ class ConfConfig(AppConfig):
self.module.autodiscover() self.module.autodiscover()
if not set(sys.argv) & {'migrate', 'check_migrations'}: if not set(sys.argv) & {'migrate', 'check_migrations'}:
from .settings import SettingsWrapper from .settings import SettingsWrapper
SettingsWrapper.initialize() SettingsWrapper.initialize()

View File

@@ -21,7 +21,7 @@ logger = logging.getLogger('awx.conf.fields')
# Use DRF fields to convert/validate settings: # Use DRF fields to convert/validate settings:
# - to_representation(obj) should convert a native Python object to a primitive # - to_representation(obj) should convert a native Python object to a primitive
# serializable type. This primitive type will be what is presented in the API # serializable type. This primitive type will be what is presented in the API
# and stored in the JSON field in the database. # and stored in the JSON field in the datbase.
# - to_internal_value(data) should convert the primitive type back into the # - to_internal_value(data) should convert the primitive type back into the
# appropriate Python type to be used in settings. # appropriate Python type to be used in settings.
@@ -47,6 +47,7 @@ class IntegerField(IntegerField):
class StringListField(ListField): class StringListField(ListField):
child = CharField() child = CharField()
def to_representation(self, value): def to_representation(self, value):
@@ -56,6 +57,7 @@ class StringListField(ListField):
class StringListBooleanField(ListField): class StringListBooleanField(ListField):
default_error_messages = {'type_error': _('Expected None, True, False, a string or list of strings but got {input_type} instead.')} default_error_messages = {'type_error': _('Expected None, True, False, a string or list of strings but got {input_type} instead.')}
child = CharField() child = CharField()
@@ -94,6 +96,7 @@ class StringListBooleanField(ListField):
class StringListPathField(StringListField): class StringListPathField(StringListField):
default_error_messages = {'type_error': _('Expected list of strings but got {input_type} instead.'), 'path_error': _('{path} is not a valid path choice.')} default_error_messages = {'type_error': _('Expected list of strings but got {input_type} instead.'), 'path_error': _('{path} is not a valid path choice.')}
def to_internal_value(self, paths): def to_internal_value(self, paths):
@@ -123,6 +126,7 @@ class StringListIsolatedPathField(StringListField):
} }
def to_internal_value(self, paths): def to_internal_value(self, paths):
if isinstance(paths, (list, tuple)): if isinstance(paths, (list, tuple)):
for p in paths: for p in paths:
if not isinstance(p, str): if not isinstance(p, str):
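The comment at the top of this file's hunk spells out the contract for these settings fields: to_representation() turns the native Python value into a JSON-serializable primitive, and to_internal_value() converts it back. A minimal illustrative field that follows that contract (not one of AWX's fields) could look like:

```python
from rest_framework import serializers


class StringSetField(serializers.ListField):
    """Stores a Python set of strings as a JSON-friendly, sorted list."""

    child = serializers.CharField()

    def to_representation(self, value):
        # native Python object (a set) -> primitive stored in the JSON column
        return sorted(value)

    def to_internal_value(self, data):
        # primitive from the API/database -> native Python object used in settings
        return set(super().to_internal_value(data))
```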

View File

@@ -8,6 +8,7 @@ import awx.main.fields
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [migrations.swappable_dependency(settings.AUTH_USER_MODEL)] dependencies = [migrations.swappable_dependency(settings.AUTH_USER_MODEL)]
operations = [ operations = [

View File

@@ -48,6 +48,7 @@ def revert_tower_settings(apps, schema_editor):
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0001_initial'), ('main', '0004_squashed_v310_release')] dependencies = [('conf', '0001_initial'), ('main', '0004_squashed_v310_release')]
run_before = [('main', '0005_squashed_v310_v313_updates')] run_before = [('main', '0005_squashed_v310_v313_updates')]

View File

@@ -7,6 +7,7 @@ import awx.main.fields
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0002_v310_copy_tower_settings')] dependencies = [('conf', '0002_v310_copy_tower_settings')]
operations = [migrations.AlterField(model_name='setting', name='value', field=awx.main.fields.JSONBlob(null=True))] operations = [migrations.AlterField(model_name='setting', name='value', field=awx.main.fields.JSONBlob(null=True))]

View File

@@ -5,6 +5,7 @@ from django.db import migrations
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0003_v310_JSONField_changes')] dependencies = [('conf', '0003_v310_JSONField_changes')]
operations = [ operations = [

View File

@@ -15,6 +15,7 @@ def reverse_copy_session_settings(apps, schema_editor):
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0004_v320_reencrypt')] dependencies = [('conf', '0004_v320_reencrypt')]
operations = [migrations.RunPython(copy_session_settings, reverse_copy_session_settings)] operations = [migrations.RunPython(copy_session_settings, reverse_copy_session_settings)]

View File

@@ -8,6 +8,7 @@ from django.db import migrations
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0005_v330_rename_two_session_settings')] dependencies = [('conf', '0005_v330_rename_two_session_settings')]
operations = [migrations.RunPython(fill_ldap_group_type_params)] operations = [migrations.RunPython(fill_ldap_group_type_params)]

View File

@@ -9,6 +9,7 @@ def copy_allowed_ips(apps, schema_editor):
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0006_v331_ldap_group_type')] dependencies = [('conf', '0006_v331_ldap_group_type')]
operations = [migrations.RunPython(copy_allowed_ips)] operations = [migrations.RunPython(copy_allowed_ips)]

View File

@@ -14,6 +14,7 @@ def _noop(apps, schema_editor):
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0007_v380_rename_more_settings')] dependencies = [('conf', '0007_v380_rename_more_settings')]
operations = [migrations.RunPython(clear_old_license, _noop), migrations.RunPython(prefill_rh_credentials, _noop)] operations = [migrations.RunPython(clear_old_license, _noop), migrations.RunPython(prefill_rh_credentials, _noop)]

View File

@@ -10,6 +10,7 @@ def rename_proot_settings(apps, schema_editor):
class Migration(migrations.Migration): class Migration(migrations.Migration):
dependencies = [('conf', '0008_subscriptions')] dependencies = [('conf', '0008_subscriptions')]
operations = [migrations.RunPython(rename_proot_settings)] operations = [migrations.RunPython(rename_proot_settings)]

View File

@@ -1,11 +1,7 @@
import inspect import inspect
from django.conf import settings from django.conf import settings
from django.utils.timezone import now
import logging
logger = logging.getLogger('awx.conf.migrations')
def fill_ldap_group_type_params(apps, schema_editor): def fill_ldap_group_type_params(apps, schema_editor):
@@ -19,7 +15,7 @@ def fill_ldap_group_type_params(apps, schema_editor):
entry = qs[0] entry = qs[0]
group_type_params = entry.value group_type_params = entry.value
else: else:
return # for new installs we prefer to use the default value entry = Setting(key='AUTH_LDAP_GROUP_TYPE_PARAMS', value=group_type_params, created=now(), modified=now())
init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:]) init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:])
for k in list(group_type_params.keys()): for k in list(group_type_params.keys()):
@@ -27,5 +23,4 @@ def fill_ldap_group_type_params(apps, schema_editor):
del group_type_params[k] del group_type_params[k]
entry.value = group_type_params entry.value = group_type_params
logger.warning(f'Migration updating AUTH_LDAP_GROUP_TYPE_PARAMS with value {entry.value}')
entry.save() entry.save()
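The migration above trims AUTH_LDAP_GROUP_TYPE_PARAMS down to the keyword arguments the configured group-type class actually accepts, using inspect.getfullargspec(). A standalone sketch of that filtering step (the class and parameter names are made up for illustration):

```python
import inspect


class GroupType:
    """Stand-in for the configured django-auth-ldap group type class."""

    def __init__(self, name_attr="cn", member_attr="member"):
        self.name_attr = name_attr
        self.member_attr = member_attr


params = {"name_attr": "cn", "member_attr": "member", "stale_option": True}
allowed = set(inspect.getfullargspec(GroupType.__init__).args[1:])  # drop 'self'
params = {k: v for k, v in params.items() if k in allowed}
print(params)  # -> {'name_attr': 'cn', 'member_attr': 'member'}
```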

View File

@@ -10,6 +10,7 @@ __all__ = ['rename_setting']
def rename_setting(apps, schema_editor, old_key, new_key): def rename_setting(apps, schema_editor, old_key, new_key):
old_setting = None old_setting = None
Setting = apps.get_model('conf', 'Setting') Setting = apps.get_model('conf', 'Setting')
if Setting.objects.filter(key=new_key).exists() or hasattr(settings, new_key): if Setting.objects.filter(key=new_key).exists() or hasattr(settings, new_key):

View File

@@ -17,6 +17,7 @@ __all__ = ['Setting']
class Setting(CreatedModifiedModel): class Setting(CreatedModifiedModel):
key = models.CharField(max_length=255) key = models.CharField(max_length=255)
value = JSONBlob(null=True) value = JSONBlob(null=True)
user = prevent_search(models.ForeignKey('auth.User', related_name='settings', default=None, null=True, editable=False, on_delete=models.CASCADE)) user = prevent_search(models.ForeignKey('auth.User', related_name='settings', default=None, null=True, editable=False, on_delete=models.CASCADE))

View File

@@ -104,6 +104,7 @@ def filter_sensitive(registry, key, value):
class TransientSetting(object): class TransientSetting(object):
__slots__ = ('pk', 'value') __slots__ = ('pk', 'value')
def __init__(self, pk, value): def __init__(self, pk, value):

View File

@@ -1,25 +0,0 @@
import pytest
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from awx.conf.models import Setting
from django.apps import apps
@pytest.mark.django_db
def test_fill_group_type_params_no_op():
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 0
@pytest.mark.django_db
def test_keep_old_setting_with_default_value():
Setting.objects.create(key='AUTH_LDAP_GROUP_TYPE', value={'name_attr': 'cn', 'member_attr': 'member'})
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 1
s = Setting.objects.first()
assert s.value == {'name_attr': 'cn', 'member_attr': 'member'}
# NOTE: would be good to test the removal of attributes by migration
# but this requires fighting with the validator and is not done here

View File

@@ -5,6 +5,7 @@ from awx.conf.fields import StringListBooleanField, StringListPathField, ListTup
class TestStringListBooleanField: class TestStringListBooleanField:
FIELD_VALUES = [ FIELD_VALUES = [
("hello", "hello"), ("hello", "hello"),
(("a", "b"), ["a", "b"]), (("a", "b"), ["a", "b"]),
@@ -52,6 +53,7 @@ class TestStringListBooleanField:
class TestListTuplesField: class TestListTuplesField:
FIELD_VALUES = [([('a', 'b'), ('abc', '123')], [("a", "b"), ("abc", "123")])] FIELD_VALUES = [([('a', 'b'), ('abc', '123')], [("a", "b"), ("abc", "123")])]
FIELD_VALUES_INVALID = [("abc", type("abc")), ([('a', 'b', 'c'), ('abc', '123', '456')], type(('a',))), (['a', 'b'], type('a')), (123, type(123))] FIELD_VALUES_INVALID = [("abc", type("abc")), ([('a', 'b', 'c'), ('abc', '123', '456')], type(('a',))), (['a', 'b'], type('a')), (123, type(123))]
@@ -71,6 +73,7 @@ class TestListTuplesField:
class TestStringListPathField: class TestStringListPathField:
FIELD_VALUES = [ FIELD_VALUES = [
((".", "..", "/"), [".", "..", "/"]), ((".", "..", "/"), [".", "..", "/"]),
(("/home",), ["/home"]), (("/home",), ["/home"]),

View File

@@ -36,6 +36,7 @@ SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'nam
class SettingCategoryList(ListAPIView): class SettingCategoryList(ListAPIView):
model = Setting # Not exactly, but needed for the view. model = Setting # Not exactly, but needed for the view.
serializer_class = SettingCategorySerializer serializer_class = SettingCategorySerializer
filter_backends = [] filter_backends = []
@@ -57,6 +58,7 @@ class SettingCategoryList(ListAPIView):
class SettingSingletonDetail(RetrieveUpdateDestroyAPIView): class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
model = Setting # Not exactly, but needed for the view. model = Setting # Not exactly, but needed for the view.
serializer_class = SettingSingletonSerializer serializer_class = SettingSingletonSerializer
filter_backends = [] filter_backends = []
@@ -144,6 +146,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
class SettingLoggingTest(GenericAPIView): class SettingLoggingTest(GenericAPIView):
name = _('Logging Connectivity Test') name = _('Logging Connectivity Test')
model = Setting model = Setting
serializer_class = SettingSingletonSerializer serializer_class = SettingSingletonSerializer
@@ -180,7 +183,7 @@ class SettingLoggingTest(GenericAPIView):
if not port: if not port:
return Response({'error': 'Port required for ' + protocol}, status=status.HTTP_400_BAD_REQUEST) return Response({'error': 'Port required for ' + protocol}, status=status.HTTP_400_BAD_REQUEST)
else: else:
# if http/https by this point, domain is reachable # if http/https by this point, domain is reacheable
return Response(status=status.HTTP_202_ACCEPTED) return Response(status=status.HTTP_202_ACCEPTED)
if protocol == 'udp': if protocol == 'udp':

View File

@@ -1972,7 +1972,7 @@ msgid ""
"HTTP headers and meta keys to search to determine remote host name or IP. " "HTTP headers and meta keys to search to determine remote host name or IP. "
"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if " "Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if "
"behind a reverse proxy. See the \"Proxy Support\" section of the " "behind a reverse proxy. See the \"Proxy Support\" section of the "
"Administrator guide for more details." "Adminstrator guide for more details."
msgstr "" msgstr ""
#: awx/main/conf.py:85 #: awx/main/conf.py:85
@@ -2457,7 +2457,7 @@ msgid ""
msgstr "" msgstr ""
#: awx/main/conf.py:631 #: awx/main/conf.py:631
msgid "Maximum disk persistence for external log aggregation (in GB)" msgid "Maximum disk persistance for external log aggregation (in GB)"
msgstr "" msgstr ""
#: awx/main/conf.py:633 #: awx/main/conf.py:633
@@ -2548,7 +2548,7 @@ msgid "Enable"
msgstr "" msgstr ""
#: awx/main/constants.py:27 #: awx/main/constants.py:27
msgid "Does" msgid "Doas"
msgstr "" msgstr ""
#: awx/main/constants.py:28 #: awx/main/constants.py:28
@@ -4801,7 +4801,7 @@ msgstr ""
#: awx/main/models/workflow.py:251 #: awx/main/models/workflow.py:251
msgid "" msgid ""
"An identifier corresponding to the workflow job template node that this node " "An identifier coresponding to the workflow job template node that this node "
"was created from." "was created from."
msgstr "" msgstr ""
@@ -5521,7 +5521,7 @@ msgstr ""
#: awx/sso/conf.py:606 #: awx/sso/conf.py:606
msgid "" msgid ""
"Extra arguments for Google OAuth2 login. You can restrict it to only allow a " "Extra arguments for Google OAuth2 login. You can restrict it to only allow a "
"single domain to authenticate, even if the user is logged in with multiple " "single domain to authenticate, even if the user is logged in with multple "
"Google accounts. Refer to the documentation for more detail." "Google accounts. Refer to the documentation for more detail."
msgstr "" msgstr ""
@@ -5905,7 +5905,7 @@ msgstr ""
#: awx/sso/conf.py:1290 #: awx/sso/conf.py:1290
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the " "Create a keypair to use as a service provider (SP) and include the "
"certificate content here." "certificate content here."
msgstr "" msgstr ""
@@ -5915,7 +5915,7 @@ msgstr ""
#: awx/sso/conf.py:1302 #: awx/sso/conf.py:1302
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the private " "Create a keypair to use as a service provider (SP) and include the private "
"key content here." "key content here."
msgstr "" msgstr ""

View File

@@ -1971,7 +1971,7 @@ msgid ""
"HTTP headers and meta keys to search to determine remote host name or IP. " "HTTP headers and meta keys to search to determine remote host name or IP. "
"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if " "Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if "
"behind a reverse proxy. See the \"Proxy Support\" section of the " "behind a reverse proxy. See the \"Proxy Support\" section of the "
"Administrator guide for more details." "Adminstrator guide for more details."
msgstr "Los encabezados HTTP y las llaves de activación para buscar y determinar el nombre de host remoto o IP. Añada elementos adicionales a esta lista, como \"HTTP_X_FORWARDED_FOR\", si está detrás de un proxy inverso. Consulte la sección \"Soporte de proxy\" de la guía del adminstrador para obtener más información." msgstr "Los encabezados HTTP y las llaves de activación para buscar y determinar el nombre de host remoto o IP. Añada elementos adicionales a esta lista, como \"HTTP_X_FORWARDED_FOR\", si está detrás de un proxy inverso. Consulte la sección \"Soporte de proxy\" de la guía del adminstrador para obtener más información."
#: awx/main/conf.py:85 #: awx/main/conf.py:85
@@ -4804,7 +4804,7 @@ msgstr "Indica que un trabajo no se creará cuando es sea True. La semántica de
#: awx/main/models/workflow.py:251 #: awx/main/models/workflow.py:251
msgid "" msgid ""
"An identifier corresponding to the workflow job template node that this node " "An identifier coresponding to the workflow job template node that this node "
"was created from." "was created from."
msgstr "Un identificador que corresponde al nodo de plantilla de tarea del flujo de trabajo a partir del cual se creó este nodo." msgstr "Un identificador que corresponde al nodo de plantilla de tarea del flujo de trabajo a partir del cual se creó este nodo."
@@ -5526,7 +5526,7 @@ msgstr "Argumentos adicionales para Google OAuth2"
#: awx/sso/conf.py:606 #: awx/sso/conf.py:606
msgid "" msgid ""
"Extra arguments for Google OAuth2 login. You can restrict it to only allow a " "Extra arguments for Google OAuth2 login. You can restrict it to only allow a "
"single domain to authenticate, even if the user is logged in with multiple " "single domain to authenticate, even if the user is logged in with multple "
"Google accounts. Refer to the documentation for more detail." "Google accounts. Refer to the documentation for more detail."
msgstr "Argumentos adicionales para el inicio de sesión en Google OAuth2. Puede limitarlo para permitir la autenticación de un solo dominio, incluso si el usuario ha iniciado sesión con varias cuentas de Google. Consulte la documentación para obtener información detallada." msgstr "Argumentos adicionales para el inicio de sesión en Google OAuth2. Puede limitarlo para permitir la autenticación de un solo dominio, incluso si el usuario ha iniciado sesión con varias cuentas de Google. Consulte la documentación para obtener información detallada."
@@ -5910,7 +5910,7 @@ msgstr "Certificado público del proveedor de servicio SAML"
#: awx/sso/conf.py:1290 #: awx/sso/conf.py:1290
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the " "Create a keypair to use as a service provider (SP) and include the "
"certificate content here." "certificate content here."
msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido del certificado aquí." msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido del certificado aquí."
@@ -5920,7 +5920,7 @@ msgstr "Clave privada del proveedor de servicio SAML"
#: awx/sso/conf.py:1302 #: awx/sso/conf.py:1302
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the private " "Create a keypair to use as a service provider (SP) and include the private "
"key content here." "key content here."
msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido de la clave privada aquí." msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido de la clave privada aquí."
@@ -6237,5 +6237,4 @@ msgstr "%s se está actualizando."
#: awx/ui/urls.py:24 #: awx/ui/urls.py:24
msgid "This page will refresh when complete." msgid "This page will refresh when complete."
msgstr "Esta página se actualizará cuando se complete." msgstr "Esta página se actualizará cuando se complete."

View File

@@ -721,7 +721,7 @@ msgstr "DTSTART valide obligatoire dans rrule. La valeur doit commencer par : DT
#: awx/api/serializers.py:4657 #: awx/api/serializers.py:4657
msgid "" msgid ""
"DTSTART cannot be a naive datetime. Specify ;TZINFO= or YYYYMMDDTHHMMSSZZ." "DTSTART cannot be a naive datetime. Specify ;TZINFO= or YYYYMMDDTHHMMSSZZ."
msgstr "DTSTART ne peut correspondre à une date-heure naïve. Spécifier ;TZINFO= ou YYYYMMDDTHHMMSSZZ." msgstr "DTSTART ne peut correspondre à une DateHeure naïve. Spécifier ;TZINFO= ou YYYYMMDDTHHMMSSZZ."
#: awx/api/serializers.py:4659 #: awx/api/serializers.py:4659
msgid "Multiple DTSTART is not supported." msgid "Multiple DTSTART is not supported."
@@ -6239,5 +6239,4 @@ msgstr "%s est en cours de mise à niveau."
#: awx/ui/urls.py:24 #: awx/ui/urls.py:24
msgid "This page will refresh when complete." msgid "This page will refresh when complete."
msgstr "Cette page sera rafraîchie une fois terminée." msgstr "Cette page sera rafraîchie une fois terminée."

View File

@@ -6237,5 +6237,4 @@ msgstr "Er wordt momenteel een upgrade van%s geïnstalleerd."
#: awx/ui/urls.py:24 #: awx/ui/urls.py:24
msgid "This page will refresh when complete." msgid "This page will refresh when complete."
msgstr "Deze pagina wordt vernieuwd als hij klaar is." msgstr "Deze pagina wordt vernieuwd als hij klaar is."

12 file diffs suppressed because they are too large

View File

@@ -561,6 +561,7 @@ class NotificationAttachMixin(BaseAccess):
class InstanceAccess(BaseAccess): class InstanceAccess(BaseAccess):
model = Instance model = Instance
prefetch_related = ('rampart_groups',) prefetch_related = ('rampart_groups',)
@@ -578,6 +579,7 @@ class InstanceAccess(BaseAccess):
return super(InstanceAccess, self).can_unattach(obj, sub_obj, relationship, relationship, data=data) return super(InstanceAccess, self).can_unattach(obj, sub_obj, relationship, relationship, data=data)
def can_add(self, data): def can_add(self, data):
return self.user.is_superuser return self.user.is_superuser
def can_change(self, obj, data): def can_change(self, obj, data):
@@ -588,39 +590,18 @@ class InstanceAccess(BaseAccess):
class InstanceGroupAccess(BaseAccess): class InstanceGroupAccess(BaseAccess):
"""
I can see Instance Groups when I am:
- a superuser(system administrator)
- at least read_role on the instance group
I can edit Instance Groups when I am:
- a superuser
- admin role on the Instance group
I can add/delete Instance Groups:
- a superuser(system administrator)
I can use Instance Groups when I have:
- use_role on the instance group
"""
model = InstanceGroup model = InstanceGroup
prefetch_related = ('instances',) prefetch_related = ('instances',)
def filtered_queryset(self): def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role') return InstanceGroup.objects.filter(organization__in=Organization.accessible_pk_qs(self.user, 'admin_role')).distinct()
@check_superuser
def can_use(self, obj):
return self.user in obj.use_role
def can_add(self, data): def can_add(self, data):
return self.user.is_superuser return self.user.is_superuser
@check_superuser
def can_change(self, obj, data): def can_change(self, obj, data):
return self.can_admin(obj) return self.user.is_superuser
@check_superuser
def can_admin(self, obj):
return self.user in obj.admin_role
def can_delete(self, obj): def can_delete(self, obj):
if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]: if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]:
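Several of the rewritten access checks above lean on the @check_superuser decorator to short-circuit for superusers. An illustrative sketch of that idea (not necessarily AWX's exact implementation):

```python
import functools


def check_superuser(method):
    """Let superusers pass any access check; otherwise defer to the real check."""

    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if self.user.is_superuser:
            return True
        return method(self, *args, **kwargs)

    return wrapper
```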
@@ -867,7 +848,7 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
return RoleAccess(self.user).can_attach(rel_role, sub_obj, 'members', *args, **kwargs) return RoleAccess(self.user).can_attach(rel_role, sub_obj, 'members', *args, **kwargs)
if relationship == "instance_groups": if relationship == "instance_groups":
if self.user in obj.admin_role and self.user in sub_obj.use_role: if self.user.is_superuser:
return True return True
return False return False
return super(OrganizationAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs) return super(OrganizationAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
@@ -956,7 +937,7 @@ class InventoryAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs): def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
if relationship == "instance_groups": if relationship == "instance_groups":
if self.user in sub_obj.use_role and self.user in obj.admin_role: if self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role:
return True return True
return False return False
return super(InventoryAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs) return super(InventoryAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
@@ -1049,9 +1030,7 @@ class GroupAccess(BaseAccess):
return Group.objects.filter(inventory__in=Inventory.accessible_pk_qs(self.user, 'read_role')) return Group.objects.filter(inventory__in=Inventory.accessible_pk_qs(self.user, 'read_role'))
def can_add(self, data): def can_add(self, data):
if not data: # So the browseable API will work if not data or 'inventory' not in data:
return Inventory.accessible_objects(self.user, 'admin_role').exists()
if 'inventory' not in data:
return False return False
# Checks for admin or change permission on inventory. # Checks for admin or change permission on inventory.
return self.check_related('inventory', Inventory, data) return self.check_related('inventory', Inventory, data)
@@ -1693,12 +1672,11 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
return self.user.is_superuser or self.user in obj.admin_role return self.user.is_superuser or self.user in obj.admin_role
@check_superuser @check_superuser
# object here is the job template. sub_object here is what is being attached
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False): def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == "instance_groups": if relationship == "instance_groups":
if not obj.organization: if not obj.organization:
return False return False
return self.user in sub_obj.use_role and self.user in obj.admin_role return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role
return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check) return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
@check_superuser @check_superuser
@@ -1875,6 +1853,8 @@ class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
def _related_filtered_queryset(self, cls): def _related_filtered_queryset(self, cls):
if cls is Label: if cls is Label:
return LabelAccess(self.user).filtered_queryset() return LabelAccess(self.user).filtered_queryset()
elif cls is InstanceGroup:
return InstanceGroupAccess(self.user).filtered_queryset()
else: else:
return cls._accessible_pk_qs(cls, self.user, 'use_role') return cls._accessible_pk_qs(cls, self.user, 'use_role')
@@ -1886,7 +1866,6 @@ class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
@check_superuser @check_superuser
def can_add(self, data, template=None): def can_add(self, data, template=None):
# WARNING: duplicated with BulkJobLaunchSerializer, check when changing permission levels
# This is a special case, we don't check related many-to-many elsewhere # This is a special case, we don't check related many-to-many elsewhere
# launch RBAC checks use this # launch RBAC checks use this
if 'reference_obj' in data: if 'reference_obj' in data:
@@ -2019,16 +1998,7 @@ class WorkflowJobNodeAccess(BaseAccess):
) )
def filtered_queryset(self): def filtered_queryset(self):
return self.model.objects.filter( return self.model.objects.filter(workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
Q(workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(workflow_job__organization__in=Organization.objects.filter(Q(admin_role__members=self.user)))
)
def can_read(self, obj):
"""Overriding this opens up detail view access for bulk jobs, where the workflow job has no associated workflow job template."""
if obj.workflow_job.is_bulk_job and obj.workflow_job.created_by_id == self.user.id:
return True
return super().can_read(obj)
@check_superuser @check_superuser
def can_add(self, data): def can_add(self, data):
@@ -2154,16 +2124,7 @@ class WorkflowJobAccess(BaseAccess):
) )
def filtered_queryset(self): def filtered_queryset(self):
return WorkflowJob.objects.filter( return WorkflowJob.objects.filter(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
Q(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(organization__in=Organization.objects.filter(Q(admin_role__members=self.user)), is_bulk_job=True)
)
def can_read(self, obj):
"""Overriding this opens up detail view access for bulk jobs, where the workflow job has no associated workflow job template."""
if obj.is_bulk_job and obj.created_by_id == self.user.id:
return True
return super().can_read(obj)
def can_add(self, data): def can_add(self, data):
# Old add-start system for launching jobs is being depreciated, and # Old add-start system for launching jobs is being depreciated, and
@@ -2391,6 +2352,7 @@ class JobEventAccess(BaseAccess):
class UnpartitionedJobEventAccess(JobEventAccess): class UnpartitionedJobEventAccess(JobEventAccess):
model = UnpartitionedJobEvent model = UnpartitionedJobEvent
@@ -2735,66 +2697,46 @@ class ActivityStreamAccess(BaseAccess):
# 'job_template', 'job', 'project', 'project_update', 'workflow_job', # 'job_template', 'job', 'project', 'project_update', 'workflow_job',
# 'inventory_source', 'workflow_job_template' # 'inventory_source', 'workflow_job_template'
q = Q(user=self.user) inventory_set = Inventory.accessible_objects(self.user, 'read_role')
inventory_set = Inventory.accessible_pk_qs(self.user, 'read_role') credential_set = Credential.accessible_objects(self.user, 'read_role')
if inventory_set:
q |= (
Q(ad_hoc_command__inventory__in=inventory_set)
| Q(inventory__in=inventory_set)
| Q(host__inventory__in=inventory_set)
| Q(group__inventory__in=inventory_set)
| Q(inventory_source__inventory__in=inventory_set)
| Q(inventory_update__inventory_source__inventory__in=inventory_set)
)
credential_set = Credential.accessible_pk_qs(self.user, 'read_role')
if credential_set:
q |= Q(credential__in=credential_set)
auditing_orgs = ( auditing_orgs = (
(Organization.accessible_objects(self.user, 'admin_role') | Organization.accessible_objects(self.user, 'auditor_role')) (Organization.accessible_objects(self.user, 'admin_role') | Organization.accessible_objects(self.user, 'auditor_role'))
.distinct() .distinct()
.values_list('id', flat=True) .values_list('id', flat=True)
) )
if auditing_orgs: project_set = Project.accessible_objects(self.user, 'read_role')
q |= ( jt_set = JobTemplate.accessible_objects(self.user, 'read_role')
Q(user__in=auditing_orgs.values('member_role__members')) team_set = Team.accessible_objects(self.user, 'read_role')
| Q(organization__in=auditing_orgs) wfjt_set = WorkflowJobTemplate.accessible_objects(self.user, 'read_role')
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
)
project_set = Project.accessible_pk_qs(self.user, 'read_role')
if project_set:
q |= Q(project__in=project_set) | Q(project_update__project__in=project_set)
jt_set = JobTemplate.accessible_pk_qs(self.user, 'read_role')
if jt_set:
q |= Q(job_template__in=jt_set) | Q(job__job_template__in=jt_set)
wfjt_set = WorkflowJobTemplate.accessible_pk_qs(self.user, 'read_role')
if wfjt_set:
q |= (
Q(workflow_job_template__in=wfjt_set)
| Q(workflow_job_template_node__workflow_job_template__in=wfjt_set)
| Q(workflow_job__workflow_job_template__in=wfjt_set)
)
team_set = Team.accessible_pk_qs(self.user, 'read_role')
if team_set:
q |= Q(team__in=team_set)
app_set = OAuth2ApplicationAccess(self.user).filtered_queryset() app_set = OAuth2ApplicationAccess(self.user).filtered_queryset()
if app_set:
q |= Q(o_auth2_application__in=app_set)
token_set = OAuth2TokenAccess(self.user).filtered_queryset() token_set = OAuth2TokenAccess(self.user).filtered_queryset()
if token_set:
q |= Q(o_auth2_access_token__in=token_set)
return qs.filter(q).distinct() return qs.filter(
Q(ad_hoc_command__inventory__in=inventory_set)
| Q(o_auth2_application__in=app_set)
| Q(o_auth2_access_token__in=token_set)
| Q(user__in=auditing_orgs.values('member_role__members'))
| Q(user=self.user)
| Q(organization__in=auditing_orgs)
| Q(inventory__in=inventory_set)
| Q(host__inventory__in=inventory_set)
| Q(group__inventory__in=inventory_set)
| Q(inventory_source__inventory__in=inventory_set)
| Q(inventory_update__inventory_source__inventory__in=inventory_set)
| Q(credential__in=credential_set)
| Q(team__in=team_set)
| Q(project__in=project_set)
| Q(project_update__project__in=project_set)
| Q(job_template__in=jt_set)
| Q(job__job_template__in=jt_set)
| Q(workflow_job_template__in=wfjt_set)
| Q(workflow_job_template_node__workflow_job_template__in=wfjt_set)
| Q(workflow_job__workflow_job_template__in=wfjt_set)
| Q(notification_template__organization__in=auditing_orgs)
| Q(notification__notification_template__organization__in=auditing_orgs)
| Q(label__organization__in=auditing_orgs)
| Q(role__in=Role.objects.filter(ancestors__in=self.user.roles.all()) if auditing_orgs else [])
).distinct()
def can_add(self, data): def can_add(self, data):
return False return False
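The newer filtered_queryset() above builds its filter incrementally, OR-ing Q objects in only when the corresponding queryset is non-empty, instead of one monolithic filter() call. A small sketch of that pattern with made-up models and fields:

```python
from django.db.models import Q


def visible_activity_filter(user, inventory_ids, credential_ids):
    """Compose a Q object piece by piece, skipping empty sub-querysets."""
    q = Q(user=user)
    if inventory_ids:
        q |= Q(inventory__in=inventory_ids) | Q(host__inventory__in=inventory_ids)
    if credential_ids:
        q |= Q(credential__in=credential_ids)
    return q  # caller applies e.g. ActivityStream.objects.filter(q).distinct()
```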

View File

@@ -1,8 +1,8 @@
import datetime import datetime
import asyncio import asyncio
import logging import logging
import aioredis
import redis import redis
import redis.asyncio
import re import re
from prometheus_client import ( from prometheus_client import (
@@ -82,7 +82,7 @@ class BroadcastWebsocketStatsManager:
async def run_loop(self): async def run_loop(self):
try: try:
redis_conn = await redis.asyncio.Redis.from_url(settings.BROKER_URL) redis_conn = await aioredis.create_redis_pool(settings.BROKER_URL)
while True: while True:
stats_data_str = ''.join(stat.serialize() for stat in self._stats.values()) stats_data_str = ''.join(stat.serialize() for stat in self._stats.values())
await redis_conn.set(self._redis_key, stats_data_str) await redis_conn.set(self._redis_key, stats_data_str)
@@ -122,8 +122,8 @@ class BroadcastWebsocketStats:
'Number of messages received, to be forwarded, by the broadcast websocket system', 'Number of messages received, to be forwarded, by the broadcast websocket system',
registry=self._registry, registry=self._registry,
) )
self._messages_received_current_conn = Gauge( self._messages_received = Gauge(
f'awx_{self.remote_name}_messages_received_currrent_conn', f'awx_{self.remote_name}_messages_received',
'Number forwarded messages received by the broadcast websocket system, for the duration of the current connection', 'Number forwarded messages received by the broadcast websocket system, for the duration of the current connection',
registry=self._registry, registry=self._registry,
) )
@@ -144,13 +144,13 @@ class BroadcastWebsocketStats:
def record_message_received(self): def record_message_received(self):
self._internal_messages_received_per_minute.record() self._internal_messages_received_per_minute.record()
self._messages_received_current_conn.inc() self._messages_received.inc()
self._messages_received_total.inc() self._messages_received_total.inc()
def record_connection_established(self): def record_connection_established(self):
self._connection.state('connected') self._connection.state('connected')
self._connection_start.set_to_current_time() self._connection_start.set_to_current_time()
self._messages_received_current_conn.set(0) self._messages_received.set(0)
def record_connection_lost(self): def record_connection_lost(self):
self._connection.state('disconnected') self._connection.state('disconnected')
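This hunk swaps the unmaintained aioredis package for redis-py's built-in asyncio client. A minimal sketch of the replacement API (URL and key are illustrative):

```python
import asyncio

import redis.asyncio


async def publish_stats(payload: str) -> None:
    # redis.asyncio.Redis.from_url() returns a client synchronously; commands are awaited
    conn = redis.asyncio.Redis.from_url("redis://localhost:6379/0")
    try:
        await conn.set("broadcast_websocket_stats", payload)
    finally:
        await conn.close()


asyncio.run(publish_stats("example payload"))
```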

View File

@@ -16,7 +16,7 @@ from awx.conf.license import get_license
from awx.main.utils import get_awx_version, camelcase_to_underscore, datetime_hook from awx.main.utils import get_awx_version, camelcase_to_underscore, datetime_hook
from awx.main import models from awx.main import models
from awx.main.analytics import register from awx.main.analytics import register
from awx.main.scheduler.task_manager_models import TaskManagerModels from awx.main.scheduler.task_manager_models import TaskManagerInstances
""" """
This module is used to define metrics collected by awx.main.analytics.gather() This module is used to define metrics collected by awx.main.analytics.gather()
@@ -233,14 +233,15 @@ def projects_by_scm_type(since, **kwargs):
return counts return counts
@register('instance_info', '1.3', description=_('Cluster topology and capacity')) @register('instance_info', '1.2', description=_('Cluster topology and capacity'))
def instance_info(since, include_hostnames=False, **kwargs): def instance_info(since, include_hostnames=False, **kwargs):
info = {} info = {}
# Use same method that the TaskManager does to compute consumed capacity without querying all running jobs for each Instance # Use same method that the TaskManager does to compute consumed capacity without querying all running jobs for each Instance
tm_models = TaskManagerModels.init_with_consumed_capacity( active_tasks = models.UnifiedJob.objects.filter(status__in=['running', 'waiting']).only('task_impact', 'controller_node', 'execution_node')
instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled', 'node_type'] tm_instances = TaskManagerInstances(
active_tasks, instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled', 'node_type']
) )
for tm_instance in tm_models.instances.instances_by_hostname.values(): for tm_instance in tm_instances.instances_by_hostname.values():
instance = tm_instance.obj instance = tm_instance.obj
instance_info = { instance_info = {
'uuid': instance.uuid, 'uuid': instance.uuid,

View File

@@ -3,5 +3,6 @@ from django.utils.translation import gettext_lazy as _
class MainConfig(AppConfig): class MainConfig(AppConfig):
name = 'awx.main' name = 'awx.main'
verbose_name = _('Main') verbose_name = _('Main')

View File

@@ -282,16 +282,6 @@ register(
placeholder={'HTTP_PROXY': 'myproxy.local:8080'}, placeholder={'HTTP_PROXY': 'myproxy.local:8080'},
) )
register(
'AWX_RUNNER_KEEPALIVE_SECONDS',
field_class=fields.IntegerField,
label=_('K8S Ansible Runner Keep-Alive Message Interval'),
help_text=_('Only applies to jobs running in a Container Group. If not 0, send a message every so-many seconds to keep connection open.'),
category=_('Jobs'),
category_slug='jobs',
placeholder=240, # intended to be under common 5 minute idle timeout
)
register( register(
'GALAXY_TASK_ENV', 'GALAXY_TASK_ENV',
field_class=fields.KeyValueField, field_class=fields.KeyValueField,
@@ -579,7 +569,7 @@ register(
register( register(
'LOG_AGGREGATOR_LOGGERS', 'LOG_AGGREGATOR_LOGGERS',
field_class=fields.StringListField, field_class=fields.StringListField,
default=['awx', 'activity_stream', 'job_events', 'system_tracking', 'broadcast_websocket'], default=['awx', 'activity_stream', 'job_events', 'system_tracking'],
label=_('Loggers Sending Data to Log Aggregator Form'), label=_('Loggers Sending Data to Log Aggregator Form'),
help_text=_( help_text=_(
'List of loggers that will send HTTP logs to the collector, these can ' 'List of loggers that will send HTTP logs to the collector, these can '
@@ -587,8 +577,7 @@ register(
'awx - service logs\n' 'awx - service logs\n'
'activity_stream - activity stream records\n' 'activity_stream - activity stream records\n'
'job_events - callback data from Ansible job events\n' 'job_events - callback data from Ansible job events\n'
'system_tracking - facts gathered from scan jobs\n' 'system_tracking - facts gathered from scan jobs.'
'broadcast_websocket - errors pertaining to websockets broadcast metrics\n'
), ),
category=_('Logging'), category=_('Logging'),
category_slug='logging', category_slug='logging',
@@ -775,26 +764,6 @@ register(
help_text=_('Indicates whether the instance is part of a kubernetes-based deployment.'), help_text=_('Indicates whether the instance is part of a kubernetes-based deployment.'),
) )
register(
'BULK_JOB_MAX_LAUNCH',
field_class=fields.IntegerField,
default=100,
label=_('Max jobs to allow bulk jobs to launch'),
help_text=_('Max jobs to allow bulk jobs to launch'),
category=_('Bulk Actions'),
category_slug='bulk',
)
register(
'BULK_HOST_MAX_CREATE',
field_class=fields.IntegerField,
default=100,
label=_('Max number of hosts to allow to be created in a single bulk action'),
help_text=_('Max number of hosts to allow to be created in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
)
def logging_validate(serializer, attrs): def logging_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'): if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'):
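The registrations above all follow the same awx.conf register() pattern: a key, a DRF-style field class, defaults, and label/category metadata. A hedged sketch with a made-up setting (assuming register and fields are importable from awx.conf as in this module):

```python
from django.utils.translation import gettext_lazy as _

from awx.conf import fields, register  # assumed import, mirroring awx/main/conf.py

register(
    'EXAMPLE_RETRY_LIMIT',  # hypothetical setting, for illustration only
    field_class=fields.IntegerField,
    default=3,
    label=_('Example retry limit'),
    help_text=_('How many times the (hypothetical) example task is retried.'),
    category=_('Jobs'),
    category_slug='jobs',
)
```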

View File

@@ -9,16 +9,10 @@ aim_inputs = {
'fields': [ 'fields': [
{ {
'id': 'url', 'id': 'url',
'label': _('CyberArk CCP URL'), 'label': _('CyberArk AIM URL'),
'type': 'string', 'type': 'string',
'format': 'url', 'format': 'url',
}, },
{
'id': 'webservice_id',
'label': _('Web Service ID'),
'type': 'string',
'help_text': _('The CCP Web Service ID. Leave blank to default to AIMWebService.'),
},
{ {
'id': 'app_id', 'id': 'app_id',
'label': _('Application ID'), 'label': _('Application ID'),
@@ -70,13 +64,10 @@ def aim_backend(**kwargs):
client_cert = kwargs.get('client_cert', None) client_cert = kwargs.get('client_cert', None)
client_key = kwargs.get('client_key', None) client_key = kwargs.get('client_key', None)
verify = kwargs['verify'] verify = kwargs['verify']
webservice_id = kwargs.get('webservice_id', '')
app_id = kwargs['app_id'] app_id = kwargs['app_id']
object_query = kwargs['object_query'] object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format'] object_query_format = kwargs['object_query_format']
reason = kwargs.get('reason', None) reason = kwargs.get('reason', None)
if webservice_id == '':
webservice_id = 'AIMWebService'
query_params = { query_params = {
'AppId': app_id, 'AppId': app_id,
@@ -87,7 +78,7 @@ def aim_backend(**kwargs):
query_params['reason'] = reason query_params['reason'] = reason
request_qs = '?' + urlencode(query_params, quote_via=quote) request_qs = '?' + urlencode(query_params, quote_via=quote)
request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts'])) request_url = urljoin(url, '/'.join(['AIMWebService', 'api', 'Accounts']))
with CertFiles(client_cert, client_key) as cert: with CertFiles(client_cert, client_key) as cert:
res = requests.get( res = requests.get(
@@ -101,4 +92,4 @@ def aim_backend(**kwargs):
return res.json()['Content'] return res.json()['Content']
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend) aim_plugin = CredentialPlugin('CyberArk AIM Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)
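The newer CyberArk CCP backend above lets the web service ID default to AIMWebService and assembles the lookup URL with urljoin() and urlencode(). A standalone sketch of that URL construction (host, AppID and query are illustrative):

```python
from urllib.parse import quote, urlencode, urljoin

url = 'https://ccp.example.com'
webservice_id = '' or 'AIMWebService'  # blank input falls back to the default
query_params = {'AppId': 'MyApp', 'Query': 'Safe=Test;Object=db-password'}
request_qs = '?' + urlencode(query_params, quote_via=quote)
request_url = urljoin(url, '/'.join([webservice_id, 'api', 'Accounts'])) + request_qs
print(request_url)  # https://ccp.example.com/AIMWebService/api/Accounts?AppId=MyApp&Query=...
```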

View File

@@ -68,11 +68,7 @@ def conjur_backend(**kwargs):
with CertFiles(cacert) as cert: with CertFiles(cacert) as cert:
# https://www.conjur.org/api.html#authentication-authenticate-post # https://www.conjur.org/api.html#authentication-authenticate-post
auth_kwargs['verify'] = cert auth_kwargs['verify'] = cert
try: resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)
resp.raise_for_status()
except requests.exceptions.HTTPError:
resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
raise_for_status(resp) raise_for_status(resp)
token = resp.content.decode('utf-8') token = resp.content.decode('utf-8')
@@ -82,20 +78,14 @@ def conjur_backend(**kwargs):
} }
# https://www.conjur.org/api.html#secrets-retrieve-a-secret-get # https://www.conjur.org/api.html#secrets-retrieve-a-secret-get
path = urljoin(url, '/'.join(['secrets', account, 'variable', secret_path])) path = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))
path_conjurcloud = urljoin(url, '/'.join(['api', 'secrets', account, 'variable', secret_path]))
if version: if version:
ver = "version={}".format(version) ver = "version={}".format(version)
path = '?'.join([path, ver]) path = '?'.join([path, ver])
path_conjurcloud = '?'.join([path_conjurcloud, ver])
with CertFiles(cacert) as cert: with CertFiles(cacert) as cert:
lookup_kwargs['verify'] = cert lookup_kwargs['verify'] = cert
try: resp = requests.get(path, timeout=30, **lookup_kwargs)
resp = requests.get(path, timeout=30, **lookup_kwargs)
resp.raise_for_status()
except requests.exceptions.HTTPError:
resp = requests.get(path_conjurcloud, timeout=30, **lookup_kwargs)
raise_for_status(resp) raise_for_status(resp)
return resp.text return resp.text
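The newer Conjur backend above first tries the self-hosted authentication path and falls back to the Conjur Cloud path (prefixed with api/) on an HTTP error. A minimal sketch of that fallback (URL, account and credentials are illustrative):

```python
from urllib.parse import urljoin

import requests

url = 'https://conjur.example.com'
account, username = 'myaccount', 'admin'
auth_kwargs = {'data': 'example-api-key', 'timeout': 30}

try:
    # self-hosted Conjur path
    resp = requests.post(urljoin(url, '/'.join(['authn', account, username, 'authenticate'])), **auth_kwargs)
    resp.raise_for_status()
except requests.exceptions.HTTPError:
    # Conjur Cloud path
    resp = requests.post(urljoin(url, '/'.join(['api', 'authn', account, username, 'authenticate'])), **auth_kwargs)
    resp.raise_for_status()

token = resp.content.decode('utf-8')
```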

View File

@@ -1,7 +1,6 @@
 import copy
 import os
 import pathlib
-import time
 from urllib.parse import urljoin
 from .plugin import CredentialPlugin, CertFiles, raise_for_status
@@ -248,15 +247,7 @@ def kv_backend(**kwargs):
     request_url = urljoin(url, '/'.join(['v1'] + path_segments)).rstrip('/')
     with CertFiles(cacert) as cert:
         request_kwargs['verify'] = cert
-        request_retries = 0
-        while request_retries < 5:
-            response = sess.get(request_url, **request_kwargs)
-            # https://developer.hashicorp.com/vault/docs/enterprise/consistency
-            if response.status_code == 412:
-                request_retries += 1
-                time.sleep(1)
-            else:
-                break
+        response = sess.get(request_url, **request_kwargs)
     raise_for_status(response)
     json = response.json()
@@ -298,15 +289,8 @@ def ssh_backend(**kwargs):
     with CertFiles(cacert) as cert:
         request_kwargs['verify'] = cert
-        request_retries = 0
-        while request_retries < 5:
-            resp = sess.post(request_url, **request_kwargs)
-            # https://developer.hashicorp.com/vault/docs/enterprise/consistency
-            if resp.status_code == 412:
-                request_retries += 1
-                time.sleep(1)
-            else:
-                break
+        resp = sess.post(request_url, **request_kwargs)
     raise_for_status(resp)
     return resp.json()['data']['signed_key']
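The removed retry loops above come from the newer plugin code: Vault Enterprise nodes may answer 412 until the requested index is consistent, so the request is retried a few times before the status is checked. A small sketch of that loop, assuming sess is a prepared requests.Session:

import time
import requests

def request_with_consistency_retry(sess, method, request_url, retries=5, **request_kwargs):
    response = None
    for _ in range(retries):
        response = sess.request(method, request_url, **request_kwargs)
        # https://developer.hashicorp.com/vault/docs/enterprise/consistency
        if response.status_code != 412:
            break
        time.sleep(1)
    response.raise_for_status()
    return response

# e.g. request_with_consistency_retry(requests.Session(), 'GET', 'https://vault.example.com/v1/secret/data/demo')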

View File

@@ -49,10 +49,7 @@ def tss_backend(**kwargs):
     secret_dict = secret_server.get_secret(kwargs['secret_id'])
     secret = ServerSecret(**secret_dict)
-    if isinstance(secret.fields[kwargs['secret_field']].value, str) == False:
-        return secret.fields[kwargs['secret_field']].value.text
-    else:
-        return secret.fields[kwargs['secret_field']].value
+    return secret.fields[kwargs['secret_field']].value
 tss_plugin = CredentialPlugin(
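The removed branch above (newer code) accounts for Thycotic secret fields whose value comes back as a file-like object rather than a string; non-string values are read through their .text attribute. A tiny sketch of that normalization, with the field value as a stand-in object:

def normalize_tss_field_value(value):
    # file-backed secret fields expose their content on .text; plain fields are already str
    if not isinstance(value, str):
        return value.text
    return value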

View File

@@ -14,6 +14,7 @@ logger = logging.getLogger('awx.main.dispatch')
 class Control(object):
     services = ('dispatcher', 'callback_receiver')
     result = None

View File

@@ -192,6 +192,7 @@ class PoolWorker(object):
 class StatefulPoolWorker(PoolWorker):
     track_managed_tasks = True

View File

@@ -66,6 +66,7 @@ class task:
         bind_kwargs = self.bind_kwargs
 class PublisherMixin(object):
     queue = None
     @classmethod

View File

@@ -40,6 +40,7 @@ class WorkerSignalHandler:
 class AWXConsumerBase(object):
     last_stats = time.time()
     def __init__(self, name, worker, queues=[], pool=None):

View File

@@ -3,12 +3,14 @@ import logging
 import os
 import signal
 import time
+import traceback
 import datetime
 from django.conf import settings
 from django.utils.functional import cached_property
 from django.utils.timezone import now as tz_now
-from django.db import transaction, connection as django_connection
+from django.db import DatabaseError, OperationalError, transaction, connection as django_connection
+from django.db.utils import InterfaceError, InternalError
 from django_guid import set_guid
 import psutil
@@ -62,7 +64,6 @@ class CallbackBrokerWorker(BaseWorker):
""" """
MAX_RETRIES = 2 MAX_RETRIES = 2
INDIVIDUAL_EVENT_RETRIES = 3
last_stats = time.time() last_stats = time.time()
last_flush = time.time() last_flush = time.time()
total = 0 total = 0
@@ -154,8 +155,6 @@ class CallbackBrokerWorker(BaseWorker):
         metrics_events_missing_created = 0
         metrics_total_job_event_processing_seconds = datetime.timedelta(seconds=0)
         for cls, events in self.buff.items():
-            if not events:
-                continue
             logger.debug(f'{cls.__name__}.objects.bulk_create({len(events)})')
             for e in events:
                 e.modified = now  # this can be set before created because now is set above on line 149
@@ -165,48 +164,38 @@ class CallbackBrokerWorker(BaseWorker):
                 else:  # only calculate the seconds if the created time already has been set
                     metrics_total_job_event_processing_seconds += e.modified - e.created
             metrics_duration_to_save = time.perf_counter()
-            saved_events = []
             try:
                 cls.objects.bulk_create(events)
                 metrics_bulk_events_saved += len(events)
-                saved_events = events
-                self.buff[cls] = []
             except Exception as exc:
-                # If the database is flaking, let ensure_connection throw a general exception
-                # will be caught by the outer loop, which goes into a proper sleep and retry loop
-                django_connection.ensure_connection()
-                logger.warning(f'Error in events bulk_create, will try indiviually, error: {str(exc)}')
+                logger.warning(f'Error in events bulk_create, will try indiviually up to 5 errors, error {str(exc)}')
                 # if an exception occurs, we should re-attempt to save the
                 # events one-by-one, because something in the list is
                 # broken/stale
+                consecutive_errors = 0
+                events_saved = 0
                 metrics_events_batch_save_errors += 1
-                for e in events.copy():
+                for e in events:
                     try:
                         e.save()
-                        metrics_singular_events_saved += 1
-                        events.remove(e)
-                        saved_events.append(e)  # Importantly, remove successfully saved events from the buffer
+                        events_saved += 1
+                        consecutive_errors = 0
                     except Exception as exc_indv:
-                        retry_count = getattr(e, '_retry_count', 0) + 1
-                        e._retry_count = retry_count
-                        # special sanitization logic for postgres treatment of NUL 0x00 char
-                        if (retry_count == 1) and isinstance(exc_indv, ValueError) and ("\x00" in e.stdout):
-                            e.stdout = e.stdout.replace("\x00", "")
-                        if retry_count >= self.INDIVIDUAL_EVENT_RETRIES:
-                            logger.error(f'Hit max retries ({retry_count}) saving individual Event error: {str(exc_indv)}\ndata:\n{e.__dict__}')
-                            events.remove(e)
-                        else:
-                            logger.info(f'Database Error Saving individual Event uuid={e.uuid} try={retry_count}, error: {str(exc_indv)}')
+                        consecutive_errors += 1
+                        logger.info(f'Database Error Saving individual Job Event, error {str(exc_indv)}')
+                        if consecutive_errors >= 5:
+                            raise
+                metrics_singular_events_saved += events_saved
+                if events_saved == 0:
+                    raise
             metrics_duration_to_save = time.perf_counter() - metrics_duration_to_save
-            for e in saved_events:
+            for e in events:
                 if not getattr(e, '_skip_websocket_message', False):
                     metrics_events_broadcast += 1
                     emit_event_detail(e)
                 if getattr(e, '_notification_trigger_event', False):
                     job_stats_wrapup(getattr(e, e.JOB_REFERENCE), event=e)
+        self.buff = {}
         self.last_flush = time.time()
         # only update metrics if we saved events
         if (metrics_bulk_events_saved + metrics_singular_events_saved) > 0:
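For context, the removed ('-') side of this hunk is the newer per-event retry logic: each event keeps its own retry counter, the postgres-rejected NUL byte is stripped on the first failure, and events that keep failing are dropped so the rest of the buffer can still be saved. A standalone sketch of that bookkeeping, with the event objects standing in for the real Django models:

import logging

INDIVIDUAL_EVENT_RETRIES = 3
logger = logging.getLogger(__name__)

def save_events_individually(events):
    # events: list of ORM-like objects exposing .save() and .stdout
    for e in events.copy():
        try:
            e.save()
            events.remove(e)
        except Exception as exc:
            retry_count = getattr(e, '_retry_count', 0) + 1
            e._retry_count = retry_count
            # postgres refuses NUL (0x00) characters in text columns
            if retry_count == 1 and isinstance(exc, ValueError) and '\x00' in getattr(e, 'stdout', ''):
                e.stdout = e.stdout.replace('\x00', '')
            if retry_count >= INDIVIDUAL_EVENT_RETRIES:
                logger.error('Hit max retries (%s) saving individual event: %s', retry_count, exc)
                events.remove(e)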
@@ -278,16 +267,20 @@ class CallbackBrokerWorker(BaseWorker):
             try:
                 self.flush(force=flush)
                 break
-            except Exception as exc:
-                # Aside form bugs, exceptions here are assumed to be due to database flake
+            except (OperationalError, InterfaceError, InternalError) as exc:
                 if retries >= self.MAX_RETRIES:
                     logger.exception('Worker could not re-establish database connectivity, giving up on one or more events.')
-                    self.buff = {}
                     return
                 delay = 60 * retries
                 logger.warning(f'Database Error Flushing Job Events, retry #{retries + 1} in {delay} seconds: {str(exc)}')
                 django_connection.close()
                 time.sleep(delay)
                 retries += 1
-        except Exception:
-            logger.exception(f'Callback Task Processor Raised Unexpected Exception processing event data:\n{body}')
+            except DatabaseError:
+                logger.exception('Database Error Flushing Job Events')
+                django_connection.close()
+                break
+        except Exception as exc:
+            tb = traceback.format_exc()
+            logger.error('Callback Task Processor Raised Exception: %r', exc)
+            logger.error('Detail: {}'.format(tb))
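Both sides of this last hunk wrap flush() in the same backoff loop: close the database connection, sleep for a growing delay, and give up after MAX_RETRIES attempts. A minimal sketch of that loop; django_connection stands in for Django's connection handle:

import time
import logging

logger = logging.getLogger(__name__)
MAX_RETRIES = 2

def flush_with_retries(flush, django_connection):
    retries = 0
    while True:
        try:
            flush()
            return True
        except Exception as exc:
            if retries >= MAX_RETRIES:
                logger.exception('Could not re-establish database connectivity, giving up on one or more events.')
                return False
            delay = 60 * retries
            logger.warning('Database error flushing job events, retry #%s in %s seconds: %s', retries + 1, delay, exc)
            django_connection.close()
            time.sleep(delay)
            retries += 1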

View File

@@ -232,6 +232,7 @@ class ImplicitRoleField(models.ForeignKey):
             field_names = [field_names]
         for field_name in field_names:
             if field_name.startswith('singleton:'):
                 continue
@@ -243,6 +244,7 @@ class ImplicitRoleField(models.ForeignKey):
             field = getattr(cls, field_name, None)
             if field and type(field) is ReverseManyToOneDescriptor or type(field) is ManyToManyDescriptor:
                 if '.' in field_attr:
                     raise Exception('Referencing deep roles through ManyToMany fields is unsupported.')
@@ -627,6 +629,7 @@ class CredentialInputField(JSONSchemaField):
         # `ssh_key_unlock` requirements are very specific and can't be
         # represented without complicated JSON schema
         if model_instance.credential_type.managed is True and 'ssh_key_unlock' in defined_fields:
             # in order to properly test the necessity of `ssh_key_unlock`, we
             # need to know the real value of `ssh_key_data`; for a payload like:
             # {
@@ -788,8 +791,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
             'type': 'object',
             'patternProperties': {
                 # http://docs.ansible.com/ansible/playbooks_variables.html#what-makes-a-valid-variable-name
-                # plus, add ability to template
-                r'^[a-zA-Z_\{\}]+[a-zA-Z0-9_\{\}]*$': {"anyOf": [{'type': 'string'}, {'type': 'array'}, {'$ref': '#/properties/extra_vars'}]}
+                '^[a-zA-Z_]+[a-zA-Z0-9_]*$': {'type': 'string'},
             },
             'additionalProperties': False,
         },
@@ -856,44 +858,27 @@ class CredentialTypeInjectorField(JSONSchemaField):
             template_name = template_name.split('.')[1]
             setattr(valid_namespace['tower'].filename, template_name, 'EXAMPLE_FILENAME')
-        def validate_template_string(type_, key, tmpl):
-            try:
-                sandbox.ImmutableSandboxedEnvironment(undefined=StrictUndefined).from_string(tmpl).render(valid_namespace)
-            except UndefinedError as e:
-                raise django_exceptions.ValidationError(
-                    _('{sub_key} uses an undefined field ({error_msg})').format(sub_key=key, error_msg=e),
-                    code='invalid',
-                    params={'value': value},
-                )
-            except SecurityError as e:
-                raise django_exceptions.ValidationError(_('Encountered unsafe code execution: {}').format(e))
-            except TemplateSyntaxError as e:
-                raise django_exceptions.ValidationError(
-                    _('Syntax error rendering template for {sub_key} inside of {type} ({error_msg})').format(sub_key=key, type=type_, error_msg=e),
-                    code='invalid',
-                    params={'value': value},
-                )
-        def validate_extra_vars(key, node):
-            if isinstance(node, dict):
-                for k, v in node.items():
-                    validate_template_string("extra_vars", 'a key' if key is None else key, k)
-                    validate_extra_vars(k if key is None else "{key}.{k}".format(key=key, k=k), v)
-            elif isinstance(node, list):
-                for i, x in enumerate(node):
-                    validate_extra_vars("{key}[{i}]".format(key=key, i=i), x)
-            else:
-                validate_template_string("extra_vars", key, node)
         for type_, injector in value.items():
             if type_ == 'env':
                 for key in injector.keys():
                     self.validate_env_var_allowed(key)
-            if type_ == 'extra_vars':
-                validate_extra_vars(None, injector)
-            else:
-                for key, tmpl in injector.items():
-                    validate_template_string(type_, key, tmpl)
+            for key, tmpl in injector.items():
+                try:
+                    sandbox.ImmutableSandboxedEnvironment(undefined=StrictUndefined).from_string(tmpl).render(valid_namespace)
+                except UndefinedError as e:
+                    raise django_exceptions.ValidationError(
+                        _('{sub_key} uses an undefined field ({error_msg})').format(sub_key=key, error_msg=e),
+                        code='invalid',
+                        params={'value': value},
+                    )
+                except SecurityError as e:
+                    raise django_exceptions.ValidationError(_('Encountered unsafe code execution: {}').format(e))
+                except TemplateSyntaxError as e:
+                    raise django_exceptions.ValidationError(
+                        _('Syntax error rendering template for {sub_key} inside of {type} ({error_msg})').format(sub_key=key, type=type_, error_msg=e),
+                        code='invalid',
+                        params={'value': value},
+                    )
 class AskForField(models.BooleanField):
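The removed helpers above (newer code) walk nested extra_vars so that every key and every leaf string is rendered in a sandboxed Jinja2 environment against the namespace of allowed inputs. A self-contained sketch of that recursion, using a toy namespace in place of the real valid_namespace and plain exceptions instead of Django's ValidationError:

from jinja2 import StrictUndefined
from jinja2.sandbox import ImmutableSandboxedEnvironment

def validate_template_string(key, tmpl, namespace):
    # raises UndefinedError / SecurityError / TemplateSyntaxError on bad templates
    ImmutableSandboxedEnvironment(undefined=StrictUndefined).from_string(tmpl).render(namespace)

def validate_extra_vars(key, node, namespace):
    if isinstance(node, dict):
        for k, v in node.items():
            validate_template_string('a key' if key is None else key, k, namespace)
            validate_extra_vars(k if key is None else f'{key}.{k}', v, namespace)
    elif isinstance(node, list):
        for i, x in enumerate(node):
            validate_extra_vars(f'{key}[{i}]', x, namespace)
    else:
        validate_template_string(key, node, namespace)

# e.g. validate_extra_vars(None, {'greeting': 'Hello {{ username }}'}, {'username': 'admin'})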

View File

@@ -9,6 +9,7 @@ class Command(BaseCommand):
"""Checks connection to the database, and prints out connection info if not connected""" """Checks connection to the database, and prints out connection info if not connected"""
def handle(self, *args, **options): def handle(self, *args, **options):
with connection.cursor() as cursor: with connection.cursor() as cursor:
cursor.execute("SELECT version()") cursor.execute("SELECT version()")
version = str(cursor.fetchone()[0]) version = str(cursor.fetchone()[0])

View File

@@ -82,6 +82,7 @@ class DeleteMeta:
         part_drop = {}
         for pk, status, created in self.jobs_qs:
             part_key = partition_table_name(self.job_class, created)
             if status in ['pending', 'waiting', 'running']:
                 part_drop[part_key] = False

View File

@@ -17,6 +17,7 @@ class Command(BaseCommand):
     def handle(self, *args, **options):
         if not options['user']:
             raise CommandError('Username not supplied. Usage: awx-manage create_oauth2_token --user=username.')
         try:
             user = User.objects.get(username=options['user'])

View File

@@ -1,143 +0,0 @@
import time
from urllib.parse import urljoin
from argparse import ArgumentTypeError
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
from django.db.models import Q
from django.utils.timezone import now
from awx.main.models import Instance, UnifiedJob
class AWXInstance:
def __init__(self, **filter):
self.filter = filter
self.get_instance()
def get_instance(self):
filter = self.filter if self.filter is not None else dict(hostname=settings.CLUSTER_HOST_ID)
qs = Instance.objects.filter(**filter)
if not qs.exists():
raise ValueError(f"No AWX instance found with {filter} parameters")
self.instance = qs.first()
def disable(self):
if self.instance.enabled:
self.instance.enabled = False
self.instance.save()
return True
def enable(self):
if not self.instance.enabled:
self.instance.enabled = True
self.instance.save()
return True
def jobs(self):
return UnifiedJob.objects.filter(
Q(controller_node=self.instance.hostname) | Q(execution_node=self.instance.hostname), status__in=("running", "waiting")
)
def jobs_pretty(self):
jobs = []
for j in self.jobs():
job_started = j.started if j.started else now()
# similar calculation of `elapsed` as the corresponding serializer
# does
td = now() - job_started
elapsed = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / (10**6 * 1.0)
elapsed = float(elapsed)
details = dict(
name=j.name,
url=j.get_ui_url(),
elapsed=elapsed,
)
jobs.append(details)
jobs = sorted(jobs, reverse=True, key=lambda j: j["elapsed"])
return ", ".join([f"[\"{j['name']}\"]({j['url']})" for j in jobs])
def instance_pretty(self):
instance = (
self.instance.hostname,
urljoin(settings.TOWER_URL_BASE, f"/#/instances/{self.instance.pk}/details"),
)
return f"[\"{instance[0]}\"]({instance[1]})"
class Command(BaseCommand):
help = "Disable instance, optionally waiting for all its managed jobs to finish."
@staticmethod
def ge_1(arg):
if arg == "inf":
return float("inf")
int_arg = int(arg)
if int_arg < 1:
raise ArgumentTypeError(f"The value must be a positive number >= 1. Provided: \"{arg}\"")
return int_arg
def add_arguments(self, parser):
filter_group = parser.add_mutually_exclusive_group()
filter_group.add_argument(
"--hostname",
type=str,
default=settings.CLUSTER_HOST_ID,
help=f"{Instance.hostname.field.help_text} Defaults to the hostname of the machine where the Python interpreter is currently executing".strip(),
)
filter_group.add_argument("--id", type=self.ge_1, help=Instance.id.field.help_text)
parser.add_argument(
"--wait",
action="store_true",
help="Wait for jobs managed by the instance to finish. With default retry arguments waits ~1h",
)
parser.add_argument(
"--retry",
type=self.ge_1,
default=120,
help="Number of retries when waiting for jobs to finish. Default: 120. Also accepts \"inf\" to wait indefinitely",
)
parser.add_argument(
"--retry_sleep",
type=self.ge_1,
default=30,
help="Number of seconds to sleep before consequtive retries when waiting. Default: 30",
)
def handle(self, *args, **options):
try:
filter = dict(id=options["id"]) if options["id"] is not None else dict(hostname=options["hostname"])
instance = AWXInstance(**filter)
except ValueError as e:
raise CommandError(e)
if instance.disable():
self.stdout.write(self.style.SUCCESS(f"Instance {instance.instance_pretty()} has been disabled"))
else:
self.stdout.write(f"Instance {instance.instance_pretty()} has already been disabled")
if not options["wait"]:
return
rc = 1
while instance.jobs().count() > 0:
if rc < options["retry"]:
self.stdout.write(
f"{rc}/{options['retry']}: Waiting {options['retry_sleep']}s before the next attempt to see if the following instance' managed jobs have finished: {instance.jobs_pretty()}"
)
rc += 1
time.sleep(options["retry_sleep"])
else:
raise CommandError(
f"{rc}/{options['retry']}: No more retry attempts left, but the instance still has associated managed jobs: {instance.jobs_pretty()}"
)
else:
self.stdout.write(self.style.SUCCESS("Done waiting for instance' managed jobs to finish!"))

View File

@@ -1,35 +0,0 @@
from django.core.management.base import BaseCommand, CommandError
from django.conf import settings
class Command(BaseCommand):
"""enable or disable authentication system"""
def add_arguments(self, parser):
"""
This adds the --enable --disable functionalities to the command using mutally_exclusive to avoid situations in which users pass both flags
"""
group = parser.add_mutually_exclusive_group()
group.add_argument('--enable', dest='enable', action='store_true', help='Pass --enable to enable local authentication')
group.add_argument('--disable', dest='disable', action='store_true', help='Pass --disable to disable local authentication')
def _enable_disable_auth(self, enable, disable):
"""
this method allows the disabling or enabling of local authenication based on the argument passed into the parser
if no arguments throw a command error, if --enable set the DISABLE_LOCAL_AUTH to False
if --disable it's set to True. Realizing that the flag is counterintuitive to what is expected.
"""
if enable:
settings.DISABLE_LOCAL_AUTH = False
print("Setting has changed to {} allowing local authentication".format(settings.DISABLE_LOCAL_AUTH))
elif disable:
settings.DISABLE_LOCAL_AUTH = True
print("Setting has changed to {} disallowing local authentication".format(settings.DISABLE_LOCAL_AUTH))
else:
raise CommandError('Please pass --enable flag to allow local auth or --disable flag to disable local auth')
def handle(self, **options):
self._enable_disable_auth(options.get('enable'), options.get('disable'))

Some files were not shown because too many files have changed in this diff.