Compare commits

..

49 Commits

Author SHA1 Message Date
Alex Corey
0d88cee6bf Creates constructed inventory host lists by reusing, and renaming smart inventory host list components. 2023-02-22 10:01:31 -05:00
Hao Liu
103b4567fe Merge pull request #13600 from gamuniz/fix_validation_bug_constructed
[constructed-inventory]Fix validation issue constructed
2023-02-20 10:33:21 -05:00
Gabe Muniz
8ba2b1b50c [constructed-inventory]Fix validation issue constructed 2023-02-20 10:20:23 -05:00
Marliana Lara
dc049af0eb Merge pull request #13578 from marshmalien/constructed-inv-search-option
[constructed-inventory] Add "constructed" option to search columns
2023-02-16 13:05:22 -05:00
Marliana Lara
8ea8558605 Add constructed inv option to search columns 2023-02-15 17:21:23 -05:00
Hao Liu
3335afcd3a Merge branch 'devel' into feature_constructed-inventory 2023-02-15 10:05:02 -05:00
Gabriel Muniz
1f2a5cf7e4 Merge pull request #13555 from gamuniz/deny_adding_constructedinventory
[constructed-inventory] remove ability to add constructed inventories to input inventories
2023-02-15 10:03:58 -05:00
Gabe Muniz
7f2933c43c moved fixture to conftest.py and reverted some test changes 2023-02-14 17:54:54 -05:00
Gabe Muniz
d55af032f7 refactored to use is_valid_relation instead of post 2023-02-14 00:08:21 -05:00
Gabe Muniz
3ff65db2e6 block updates to constructed source type 2023-02-12 13:13:41 -05:00
Gabe Muniz
e25c767a47 added tests for group/host/source creation 2023-02-11 10:57:47 -05:00
Gabriel Muniz
0866bfc549 Merge pull request #13552 from gamuniz/prevent_constructed_sourceinventory
[constructed-inventory] Prevent constructed sourceinventory creation
2023-02-10 15:32:53 -05:00
Gabe Muniz
2d9f2d36a1 remove ability to add constructed inventories to constructed inventories 2023-02-10 14:56:25 -05:00
Gabe Muniz
c60ba5cec9 remove the ability to add constructed inventory sources 2023-02-10 10:18:24 -05:00
Gabriel Muniz
c98f86a355 Merge pull request #13547 from gamuniz/disable_host_creation_constructed
[constructed-inventory] Disable host creation constructed
2023-02-09 15:18:23 -05:00
Gabe Muniz
3f0d28dd7f disable deleting constructed inventory hosts 2023-02-09 13:10:29 -05:00
Gabe Muniz
49e5d76062 prevent constructed inventory host,group,inventory_source creation 2023-02-09 12:56:33 -05:00
Hao Liu
ecd788312e Merge pull request #13537 from shanemcd/input-inventories
Update UI to use new input_inventories endpoint
2023-02-08 11:09:14 -05:00
Shane McDonald
e1e27a028c Update UI to use new input_inventories endpoint 2023-02-08 10:55:36 -05:00
Hao Liu
0961ca06c9 Merge pull request #13514 from marshmalien/constructed-inv-routes
[constructed-inventory] Add details view and skeleton routes
2023-02-06 16:50:41 -05:00
Marliana Lara
a3d7c02802 Add constructed inventory CRUD and subtab routes
* Add constructed inventory API model
 * Add constructed inventory detail view
 * Add util to switch inventory url based on "kind"
2023-02-06 14:55:55 -05:00
Hao Liu
280ceae267 Merge pull request #13525 from TheRealHaoLiu/inventory-module-update
[constructed-inventory] add 'constructed' kind to inventory module for awx-collection
2023-02-06 14:40:12 -05:00
Hao Liu
ea719e053e ignore lack of constructed_inventory module 2023-02-06 14:28:52 -05:00
Hao Liu
f275c2a9c5 add constructed kind to inventory module
- add kind 'constructed' to inventory module
- add 'input_inventories' field to inventory module
2023-02-06 14:28:52 -05:00
Hao Liu
3242dbcbe6 Merge branch 'devel' into feature_constructed-inventory 2023-02-06 10:09:47 -05:00
Gabriel Muniz
341f8e385c Merge pull request #13520 from gamuniz/constructed_inventory_host_linkage
[constructed-inventory] Constructed inventory host linkage
2023-02-03 12:09:43 -05:00
Gabe Muniz
659853dcea missed a file on linting 2023-02-03 10:18:38 -05:00
Gabe Muniz
80c15e286f refactored original hosts and renamed source_inventories to input_inventories 2023-02-03 10:04:58 -05:00
Gabe Muniz
c22d8f1d7e adding a related field to point back to the original host 2023-02-02 21:51:37 -05:00
Gabe Muniz
27a97017dd initial start with notes from alan 2023-02-02 17:09:59 -05:00
Hao Liu
c72dca3ea5 Merge branch 'devel' into feature_constructed-inventory 2023-02-02 13:49:54 -05:00
Gabriel Muniz
ddb3cde872 Merge pull request #13501 from gamuniz/readonly_kind_constructed
force kind to readonly field and set kind to constructed in create
2023-02-02 13:49:18 -05:00
Gabe Muniz
c2ec8396cd force kind to readonly field and set kind to constructed in create 2023-02-01 14:22:00 -05:00
Hao Liu
de115ed1c8 Merge branch 'devel' into feature_constructed-inventory 2023-01-31 16:42:11 -05:00
Alan Rominger
87918bd275 [constructed-inventory] Add constructed inventory docs and do minor field updates (#13487)
* Add constructed inventory docs and do minor field updates

Add verbosity field to the constructed views

automatically set update_on_launch for the auto-created constructed inventory source
2023-01-30 15:12:40 -05:00
Alan Rominger
7598e117d4 [constructed-inventory] Use control plane EE for constructed inventory and hack temporary image (#13474)
* Use control plane EE for constructed inventory and hack temporary image

* Update page registry to work with new endpoints
2023-01-27 13:06:55 -05:00
Gabriel Muniz
700055801a Merge pull request #13470 from gamuniz/collection_limit_addition
[constructed-inventory] adding limit to inventory_source collection module
2023-01-26 09:46:54 -05:00
Gabe Muniz
1c6a48ffb6 adding limit to inventory_source collection module 2023-01-25 10:13:45 -05:00
Alan Rominger
fab83715e9 Merge pull request #13463 from AlanCoding/constructed_view
[constructed-inventory] Add views and serializers for special constructed inventory endpoints
2023-01-25 10:04:56 -05:00
Alan Rominger
0ebe57cbf4 Start on new constructed inventory API view
Make the GET function work at most basic level

Basic functionality of updating working

Add functional test for the GET and PATCH views

Add constructed inventory list view for direct creation

Add limit field to constructed inventory serializer
2023-01-24 11:35:29 -05:00
Hao Liu
d4840b240b move limit field from InventorySourceSerializer to InventorySourceOptionsSerializer (#13464)
InventorySourceOptionsSerializer is the parent for both InventorySourceSerializer and InventoryUpdateSerializer

The limit option need to be exposed to both inventory_source and inventory_update
2023-01-24 11:24:50 -05:00
Gabriel Muniz
8538d37702 Merge pull request #13456 from gamuniz/add_limit_to_serializer
added 'limit' to InventorySourceSerializer
2023-01-20 16:21:58 -05:00
Gabe Muniz
5550086b3b added 'limit' to InventorySourceSerializer 2023-01-20 12:41:35 -05:00
Gabriel Muniz
980bfc4b6f Merge pull request #13449 from gamuniz/inventory_source_limit
[constructed-inventory] pushing limit to inventory sources
2023-01-19 15:01:33 -05:00
Gabe Muniz
6351e8bbc9 added new migration for deprecation of host_filter 2023-01-19 13:01:45 -05:00
Gabe Muniz
325e566a3d pushing limit to inventory sources 2023-01-19 12:40:14 -05:00
Alan Rominger
d7f87ed27c Merge pull request #13303 from AlanCoding/smart_inventory_v2
[constructed-inventory] Constructed inventory as alternative to smart inventory
2023-01-18 22:18:58 -05:00
Alan Rominger
a5baee1b3a Remove extra unwanted assignment from line
Co-authored-by: Rick Elrod <rick@elrod.me>
2023-01-18 10:38:12 -05:00
Alan Rominger
dd8c9f87a9 Model and task changes for constructed inventory
Add in required setting about empty groups
2023-01-18 10:26:56 -05:00
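Taken together, the commits above give constructed inventories their own API surface: a "constructed" inventory kind, an "input_inventories" relation, and limit/verbosity fields on the auto-generated source. A minimal sketch of how a client could exercise that surface follows; the exact endpoint paths, ids and payload fields are assumptions inferred from the commit messages, not something this compare confirms.

import requests

AWX_URL = "https://awx.example.org"   # assumed host
AUTH = ("admin", "password")          # assumed credentials

# Assumed list endpoint, based on "Add constructed inventory list view for direct creation".
resp = requests.post(
    f"{AWX_URL}/api/v2/constructed_inventories/",
    auth=AUTH,
    json={
        "name": "filtered-hosts",
        "organization": 1,   # assumed organization id
        "limit": "groupA",   # 'limit' field added to the serializer in these commits
        "verbosity": 1,      # 'verbosity' field mentioned in the docs commit
    },
)
resp.raise_for_status()
inventory_id = resp.json()["id"]

# Attach the inventories to draw hosts from via the renamed 'input_inventories'
# relation ("Update UI to use new input_inventories endpoint").
requests.post(
    f"{AWX_URL}/api/v2/inventories/{inventory_id}/input_inventories/",
    auth=AUTH,
    json={"id": 42},         # assumed id of an existing input inventory
).raise_for_status()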
614 changed files with 18913 additions and 36507 deletions

@@ -19,8 +19,6 @@ body:
  required: true
  - label: I understand that AWX is open source software provided for free and that I might not receive a timely response.
  required: true
- - label: I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
- required: true
  - type: textarea
  id: summary
@@ -44,7 +42,6 @@ body:
  label: Select the relevant components
  options:
  - label: UI
- - label: UI (tech preview)
  - label: API
  - label: Docs
  - label: Collection

.github/dependabot.yml (vendored, new file, 19 added lines)

@@ -0,0 +1,19 @@
+ version: 2
+ updates:
+ - package-ecosystem: "npm"
+ directory: "/awx/ui"
+ schedule:
+ interval: "monthly"
+ open-pull-requests-limit: 5
+ allow:
+ - dependency-type: "production"
+ reviewers:
+ - "AlexSCorey"
+ - "keithjgrant"
+ - "kialam"
+ - "mabashian"
+ - "marshmalien"
+ labels:
+ - "component:ui"
+ - "dependencies"
+ target-branch: "devel"

@@ -6,8 +6,6 @@ needs_triage:
  - "Feature Summary"
  "component:ui":
  - "\\[X\\] UI"
- "component:ui_next":
- - "\\[X\\] UI \\(tech preview\\)"
  "component:api":
  - "\\[X\\] API"
  "component:docs":

@@ -3,7 +3,7 @@ name: CI
  env:
  LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
  CI_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- DEV_DOCKER_OWNER: ${{ github.repository_owner }}
+ DEV_DOCKER_TAG_BASE: ghcr.io/${{ github.repository_owner }}
  COMPOSE_TAG: ${{ github.base_ref || 'devel' }}
  on:
  pull_request:

@@ -7,7 +7,6 @@ on:
  branches:
  - devel
  - release_*
- - feature_*
  jobs:
  push:
  if: endsWith(github.repository, '/awx') || startsWith(github.ref, 'refs/heads/release_')
@@ -21,12 +20,6 @@ jobs:
  - name: Get python version from Makefile
  run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- - name: Set lower case owner name
- run: |
- echo "OWNER_LC=${OWNER,,}" >>${GITHUB_ENV}
- env:
- OWNER: '${{ github.repository_owner }}'
  - name: Install python ${{ env.py_version }}
  uses: actions/setup-python@v2
  with:
@@ -38,18 +31,15 @@
  - name: Pre-pull image to warm build cache
  run: |
- docker pull ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/} || :
- docker pull ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/} || :
- docker pull ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/} || :
+ docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/} || :
+ docker pull ghcr.io/${{ github.repository_owner }}/awx_kube_devel:${GITHUB_REF##*/} || :
  - name: Build images
  run: |
- DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
- DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
- DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
+ DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make docker-compose-build
+ DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
  - name: Push image
  run: |
- docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
- docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
- docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
+ docker push ghcr.io/${{ github.repository_owner }}/awx_devel:${GITHUB_REF##*/}
+ docker push ghcr.io/${{ github.repository_owner }}/awx_kube_devel:${GITHUB_REF##*/}

@@ -6,10 +6,6 @@ on:
  - opened
  - reopened
- permissions:
- contents: write # to fetch code
- issues: write # to label issues
  jobs:
  triage:
  runs-on: ubuntu-latest
@@ -17,7 +13,7 @@ jobs:
  steps:
  - name: Label Issue
- uses: github/issue-labeler@v3.1
+ uses: github/issue-labeler@v2.4.1
  with:
  repo-token: "${{ secrets.GITHUB_TOKEN }}"
  not-before: 2021-12-07T07:00:00Z

@@ -7,10 +7,6 @@ on:
  - reopened
  - synchronize
- permissions:
- contents: write # to determine modified files (actions/labeler)
- pull-requests: write # to add labels to PRs (actions/labeler)
  jobs:
  triage:
  runs-on: ubuntu-latest

@@ -8,12 +8,8 @@ on:
  release:
  types: [published]
- permissions:
- contents: read # to fetch code (actions/checkout)
  jobs:
  promote:
- if: endsWith(github.repository, '/awx')
  runs-on: ubuntu-latest
  steps:
  - name: Checkout awx

@@ -21,7 +21,6 @@ on:
  jobs:
  stage:
- if: endsWith(github.repository, '/awx')
  runs-on: ubuntu-latest
  permissions:
  packages: write

.gitignore (vendored, 6 changed lines)

@@ -157,11 +157,7 @@ use_dev_supervisor.txt
  *.unison.tmp
  *.#
  /awx/ui/.ui-built
+ /Dockerfile
  /_build/
  /_build_kube_dev/
- /Dockerfile
- /Dockerfile.dev
  /Dockerfile.kube-dev
- awx/ui_next/src
- awx/ui_next/build

@@ -31,7 +31,7 @@ If your issue isn't considered high priority, then please be patient as it may t
  `state:needs_info` The issue needs more information. This could be more debug output, more specifics out the system such as version information. Any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.
- `state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familiar with an area of the code base the issue is for.
+ `state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familar with an area of the code base the issue is for.
  `state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.

@@ -6,7 +6,6 @@ recursive-include awx/templates *.html
  recursive-include awx/api/templates *.md *.html *.yml
  recursive-include awx/ui/build *.html
  recursive-include awx/ui/build *
- recursive-include awx/ui_next/build *
  recursive-include awx/playbooks *.yml
  recursive-include awx/lib/site-packages *
  recursive-include awx/plugins *.ps1

Makefile (201 changed lines)

@@ -1,14 +1,11 @@
- -include awx/ui_next/Makefile
- PYTHON := $(notdir $(shell for i in python3.9 python3; do command -v $$i; done|sed 1q))
- DOCKER_COMPOSE ?= docker-compose
+ PYTHON ?= python3.9
  OFFICIAL ?= no
  NODE ?= node
  NPM_BIN ?= npm
  CHROMIUM_BIN=/tmp/chrome-linux/chrome
  GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
  MANAGEMENT_COMMAND ?= awx-manage
- VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py)
+ VERSION := $(shell $(PYTHON) tools/scripts/scm_version.py)
  # ansible-test requires semver compatable version, so we allow overrides to hack it
  COLLECTION_VERSION ?= $(shell $(PYTHON) tools/scripts/scm_version.py | cut -d . -f 1-3)
@@ -27,8 +24,6 @@ COLLECTION_TEMPLATE_VERSION ?= false
  # NOTE: This defaults the container image version to the branch that's active
  COMPOSE_TAG ?= $(GIT_BRANCH)
  MAIN_NODE_TYPE ?= hybrid
- # If set to true docker-compose will also start a pgbouncer instance and use it
- PGBOUNCER ?= false
  # If set to true docker-compose will also start a keycloak instance
  KEYCLOAK ?= false
  # If set to true docker-compose will also start an ldap instance
@@ -39,24 +34,17 @@ SPLUNK ?= false
  PROMETHEUS ?= false
  # If set to true docker-compose will also start a grafana instance
  GRAFANA ?= false
- # If set to true docker-compose will also start a hashicorp vault instance
- VAULT ?= false
- # If set to true docker-compose will also start a tacacs+ instance
- TACACS ?= false
  VENV_BASE ?= /var/lib/awx/venv
- DEV_DOCKER_OWNER ?= ansible
- # Docker will only accept lowercase, so github names like Paul need to be paul
- DEV_DOCKER_OWNER_LOWER = $(shell echo $(DEV_DOCKER_OWNER) | tr A-Z a-z)
- DEV_DOCKER_TAG_BASE ?= ghcr.io/$(DEV_DOCKER_OWNER_LOWER)
+ DEV_DOCKER_TAG_BASE ?= ghcr.io/ansible
  DEVEL_IMAGE_NAME ?= $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
  RECEPTOR_IMAGE ?= quay.io/ansible/receptor:devel
  # Python packages to install only from source (not from binary wheels)
  # Comma separated list
- SRC_ONLY_PKGS ?= cffi,pycparser,psycopg,twilio
+ SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
  # These should be upgraded in the AWX and Ansible venv before attempting
  # to install the actual requirements
  VENV_BOOTSTRAP ?= pip==21.2.4 setuptools==65.6.3 setuptools_scm[toml]==7.0.5 wheel==0.38.4
@@ -95,7 +83,7 @@ clean-schema:
  clean-languages:
  rm -f $(I18N_FLAG_FILE)
- find ./awx/locale/ -type f -regex '.*\.mo$$' -delete
+ find ./awx/locale/ -type f -regex ".*\.mo$" -delete
  ## Remove temporary build files, compiled Python files.
  clean: clean-ui clean-api clean-awxkit clean-dist
@@ -215,7 +203,19 @@ uwsgi: collectstatic
  @if [ "$(VENV_BASE)" ]; then \
  . $(VENV_BASE)/awx/bin/activate; \
  fi; \
- uwsgi /etc/tower/uwsgi.ini
+ uwsgi -b 32768 \
+ --socket 127.0.0.1:8050 \
+ --module=awx.wsgi:application \
+ --home=/var/lib/awx/venv/awx \
+ --chdir=/awx_devel/ \
+ --vacuum \
+ --processes=5 \
+ --harakiri=120 --master \
+ --no-orphans \
+ --max-requests=1000 \
+ --stats /tmp/stats.socket \
+ --lazy-apps \
+ --logformat "%(addr) %(method) %(uri) - %(proto) %(status)"
  awx-autoreload:
  @/awx_devel/tools/docker-compose/awx-autoreload /awx_devel/awx "$(DEV_RELOAD_COMMAND)"
@@ -226,6 +226,12 @@ daphne:
  fi; \
  daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer
+ wsbroadcast:
+ @if [ "$(VENV_BASE)" ]; then \
+ . $(VENV_BASE)/awx/bin/activate; \
+ fi; \
+ $(PYTHON) manage.py run_wsbroadcast
  ## Run to start the background task dispatcher for development.
  dispatcher:
  @if [ "$(VENV_BASE)" ]; then \
@@ -233,6 +239,7 @@ dispatcher:
  fi; \
  $(PYTHON) manage.py run_dispatcher
  ## Run to start the zeromq callback receiver
  receiver:
  @if [ "$(VENV_BASE)" ]; then \
@@ -249,34 +256,6 @@ jupyter:
  fi; \
  $(MANAGEMENT_COMMAND) shell_plus --notebook
- ## Start the rsyslog configurer process in background in development environment.
- run-rsyslog-configurer:
- @if [ "$(VENV_BASE)" ]; then \
- . $(VENV_BASE)/awx/bin/activate; \
- fi; \
- $(PYTHON) manage.py run_rsyslog_configurer
- ## Start cache_clear process in background in development environment.
- run-cache-clear:
- @if [ "$(VENV_BASE)" ]; then \
- . $(VENV_BASE)/awx/bin/activate; \
- fi; \
- $(PYTHON) manage.py run_cache_clear
- ## Start the wsrelay process in background in development environment.
- run-wsrelay:
- @if [ "$(VENV_BASE)" ]; then \
- . $(VENV_BASE)/awx/bin/activate; \
- fi; \
- $(PYTHON) manage.py run_wsrelay
- ## Start the heartbeat process in background in development environment.
- run-ws-heartbeat:
- @if [ "$(VENV_BASE)" ]; then \
- . $(VENV_BASE)/awx/bin/activate; \
- fi; \
- $(PYTHON) manage.py run_ws_heartbeat
  reports:
  mkdir -p $@
@@ -303,13 +282,13 @@ swagger: reports
  check: black
  api-lint:
- BLACK_ARGS="--check" $(MAKE) black
+ BLACK_ARGS="--check" make black
  flake8 awx
  yamllint -s .
- ## Run egg_info_dev to generate awx.egg-info for development.
  awx-link:
  [ -d "/awx_devel/awx.egg-info" ] || $(PYTHON) /awx_devel/tools/scripts/egg_info_dev
+ cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/$(PYTHON)/site-packages/awx.egg-link
  TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
  PYTEST_ARGS ?= -n auto
@@ -328,7 +307,7 @@ github_ci_setup:
  # CI_GITHUB_TOKEN is defined in .github files
  echo $(CI_GITHUB_TOKEN) | docker login ghcr.io -u $(GITHUB_ACTOR) --password-stdin
  docker pull $(DEVEL_IMAGE_NAME) || : # Pre-pull image to warm build cache
- $(MAKE) docker-compose-build
+ make docker-compose-build
  ## Runs AWX_DOCKER_CMD inside a new docker container.
  docker-runner:
@@ -378,7 +357,7 @@ test_collection_sanity:
  rm -rf $(COLLECTION_INSTALL)
  if ! [ -x "$(shell command -v ansible-test)" ]; then pip install ansible-core; fi
  ansible --version
- COLLECTION_VERSION=1.0.0 $(MAKE) install_collection
+ COLLECTION_VERSION=1.0.0 make install_collection
  cd $(COLLECTION_INSTALL) && ansible-test sanity $(COLLECTION_SANITY_ARGS)
  test_collection_integration: install_collection
@@ -443,14 +422,12 @@ ui-release: $(UI_BUILD_FLAG_FILE)
  ui-devel: awx/ui/node_modules
  @$(MAKE) -B $(UI_BUILD_FLAG_FILE)
- @if [ -d "/var/lib/awx" ] ; then \
- mkdir -p /var/lib/awx/public/static/css; \
- mkdir -p /var/lib/awx/public/static/js; \
- mkdir -p /var/lib/awx/public/static/media; \
- cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css; \
- cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js; \
- cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media; \
- fi
+ mkdir -p /var/lib/awx/public/static/css
+ mkdir -p /var/lib/awx/public/static/js
+ mkdir -p /var/lib/awx/public/static/media
+ cp -r awx/ui/build/static/css/* /var/lib/awx/public/static/css
+ cp -r awx/ui/build/static/js/* /var/lib/awx/public/static/js
+ cp -r awx/ui/build/static/media/* /var/lib/awx/public/static/media
  ui-devel-instrumented: awx/ui/node_modules
  $(NPM_BIN) --prefix awx/ui --loglevel warn run start-instrumented
@@ -477,12 +454,11 @@ ui-test-general:
  $(NPM_BIN) run --prefix awx/ui pretest
  $(NPM_BIN) run --prefix awx/ui/ test-general --runInBand
- # NOTE: The make target ui-next is imported from awx/ui_next/Makefile
  HEADLESS ?= no
  ifeq ($(HEADLESS), yes)
  dist/$(SDIST_TAR_FILE):
  else
- dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE) ui-next
+ dist/$(SDIST_TAR_FILE): $(UI_BUILD_FLAG_FILE)
  endif
  $(PYTHON) -m build -s
  ln -sf $(SDIST_TAR_FILE) dist/awx.tar.gz
@@ -524,34 +500,29 @@ docker-compose-sources: .git/hooks/pre-commit
  -e control_plane_node_count=$(CONTROL_PLANE_NODE_COUNT) \
  -e execution_node_count=$(EXECUTION_NODE_COUNT) \
  -e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
- -e enable_pgbouncer=$(PGBOUNCER) \
  -e enable_keycloak=$(KEYCLOAK) \
  -e enable_ldap=$(LDAP) \
  -e enable_splunk=$(SPLUNK) \
  -e enable_prometheus=$(PROMETHEUS) \
- -e enable_grafana=$(GRAFANA) \
- -e enable_vault=$(VAULT) \
- -e enable_tacacs=$(TACACS) \
- $(EXTRA_SOURCES_ANSIBLE_OPTS)
+ -e enable_grafana=$(GRAFANA) $(EXTRA_SOURCES_ANSIBLE_OPTS)
  docker-compose: awx/projects docker-compose-sources
- ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
- ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
- -e enable_vault=$(VAULT);
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
  docker-compose-credential-plugins: awx/projects docker-compose-sources
  echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx_1 --remove-orphans
  docker-compose-test: awx/projects docker-compose-sources
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
  docker-compose-runtest: awx/projects docker-compose-sources
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
  docker-compose-build-swagger: awx/projects docker-compose-sources
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
  SCHEMA_DIFF_BASE_BRANCH ?= devel
  detect-schema-change: genschema
@@ -560,7 +531,7 @@ detect-schema-change: genschema
  diff -u -b reference-schema.json schema.json
  docker-compose-clean: awx/projects
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml rm -sf
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml rm -sf
  docker-compose-container-group-clean:
  @if [ -f "tools/docker-compose-minikube/_sources/minikube" ]; then \
@@ -568,40 +539,33 @@ docker-compose-container-group-clean:
  fi
  rm -rf tools/docker-compose-minikube/_sources/
- .PHONY: Dockerfile.dev
- ## Generate Dockerfile.dev for awx_devel image
- Dockerfile.dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
- ansible-playbook tools/ansible/dockerfile.yml \
- -e dockerfile_name=Dockerfile.dev \
- -e build_dev=True \
- -e receptor_image=$(RECEPTOR_IMAGE)
- ## Build awx_devel image for docker compose development environment
- docker-compose-build: Dockerfile.dev
- DOCKER_BUILDKIT=1 docker build \
- -f Dockerfile.dev \
- -t $(DEVEL_IMAGE_NAME) \
- --build-arg BUILDKIT_INLINE_CACHE=1 \
- --cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
+ ## Base development image build
+ docker-compose-build:
+ ansible-playbook tools/ansible/dockerfile.yml -e build_dev=True -e receptor_image=$(RECEPTOR_IMAGE)
+ DOCKER_BUILDKIT=1 docker build -t $(DEVEL_IMAGE_NAME) \
+ --build-arg BUILDKIT_INLINE_CACHE=1 \
+ --cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
  docker-clean:
- -$(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
- -$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
+ $(foreach container_id,$(shell docker ps -f name=tools_awx -aq && docker ps -f name=tools_receptor -aq),docker stop $(container_id); docker rm -f $(container_id);)
+ if [ "$(shell docker images | grep awx_devel)" ]; then \
+ docker images | grep awx_devel | awk '{print $$3}' | xargs docker rmi --force; \
+ fi
  docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
- docker volume rm -f tools_awx_db tools_vault_1 tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
+ docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
  docker-refresh: docker-clean docker-compose
  ## Docker Development Environment with Elastic Stack Connected
  docker-compose-elk: awx/projects docker-compose-sources
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
  docker-compose-cluster-elk: awx/projects docker-compose-sources
- $(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+ docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
  docker-compose-container-group:
- MINIKUBE_CONTAINER_GROUP=true $(MAKE) docker-compose
+ MINIKUBE_CONTAINER_GROUP=true make docker-compose
  clean-elk:
  docker stop tools_kibana_1
@@ -618,36 +582,11 @@ VERSION:
  @echo "awx: $(VERSION)"
  PYTHON_VERSION:
- @echo "$(subst python,,$(PYTHON))"
+ @echo "$(PYTHON)" | sed 's:python::'
- .PHONY: version-for-buildyml
- version-for-buildyml:
- @echo $(firstword $(subst +, ,$(VERSION)))
- # version-for-buildyml prints a special version string for build.yml,
- # chopping off the sha after the '+' sign.
- # tools/ansible/build.yml was doing this: make print-VERSION | cut -d + -f -1
- # This does the same thing in native make without
- # the pipe or the extra processes, and now the pb does `make version-for-buildyml`
- # Example:
- # 22.1.1.dev38+g523c0d9781 becomes 22.1.1.dev38
- .PHONY: Dockerfile
- ## Generate Dockerfile for awx image
  Dockerfile: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
- ansible-playbook tools/ansible/dockerfile.yml \
- -e receptor_image=$(RECEPTOR_IMAGE) \
- -e headless=$(HEADLESS)
- ## Build awx image for deployment on Kubernetes environment.
- awx-kube-build: Dockerfile
- DOCKER_BUILDKIT=1 docker build -f Dockerfile \
- --build-arg VERSION=$(VERSION) \
- --build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
- --build-arg HEADLESS=$(HEADLESS) \
- -t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
- .PHONY: Dockerfile.kube-dev
- ## Generate Docker.kube-dev for awx_kube_devel image
+ ansible-playbook tools/ansible/dockerfile.yml -e receptor_image=$(RECEPTOR_IMAGE)
  Dockerfile.kube-dev: tools/ansible/roles/dockerfile/templates/Dockerfile.j2
  ansible-playbook tools/ansible/dockerfile.yml \
  -e dockerfile_name=Dockerfile.kube-dev \
@@ -662,6 +601,13 @@ awx-kube-dev-build: Dockerfile.kube-dev
  --cache-from=$(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) \
  -t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
+ ## Build awx image for deployment on Kubernetes environment.
+ awx-kube-build: Dockerfile
+ DOCKER_BUILDKIT=1 docker build -f Dockerfile \
+ --build-arg VERSION=$(VERSION) \
+ --build-arg SETUPTOOLS_SCM_PRETEND_VERSION=$(VERSION) \
+ --build-arg HEADLESS=$(HEADLESS) \
+ -t $(DEV_DOCKER_TAG_BASE)/awx:$(COMPOSE_TAG) .
  # Translation TASKS
  # --------------------------------------
@@ -669,12 +615,10 @@ awx-kube-dev-build: Dockerfile.kube-dev
  ## generate UI .pot file, an empty template of strings yet to be translated
  pot: $(UI_BUILD_FLAG_FILE)
  $(NPM_BIN) --prefix awx/ui --loglevel warn run extract-template --clean
- $(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-template --clean
  ## generate UI .po files for each locale (will update translated strings for `en`)
  po: $(UI_BUILD_FLAG_FILE)
  $(NPM_BIN) --prefix awx/ui --loglevel warn run extract-strings -- --clean
- $(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings -- --clean
  ## generate API django .pot .po
  messages:
@@ -683,7 +627,6 @@ messages:
  fi; \
  $(PYTHON) manage.py makemessages -l en_us --keep-pot
- .PHONY: print-%
  print-%:
  @echo $($*)
@@ -695,12 +638,12 @@ HELP_FILTER=.PHONY
  ## Display help targets
  help:
  @printf "Available targets:\n"
- @$(MAKE) -s help/generate | grep -vE "\w($(HELP_FILTER))"
+ @make -s help/generate | grep -vE "\w($(HELP_FILTER))"
  ## Display help for all targets
  help/all:
  @printf "Available targets:\n"
- @$(MAKE) -s help/generate
+ @make -s help/generate
  ## Generate help output from MAKEFILE_LIST
  help/generate:
@@ -721,7 +664,3 @@ help/generate:
  } \
  { lastLine = $$0 }' $(MAKEFILE_LIST) | sort -u
  @printf "\n"
- ## Display help for ui-next targets
- help/ui-next:
- @$(MAKE) -s help MAKEFILE_LIST="awx/ui_next/Makefile"

@@ -1,4 +1,5 @@
  # Django
+ from django.conf import settings
  from django.utils.translation import gettext_lazy as _
  # Django REST Framework
@@ -8,7 +9,6 @@ from rest_framework import serializers
  from awx.conf import fields, register, register_validate
  from awx.api.fields import OAuth2ProviderField
  from oauth2_provider.settings import oauth2_settings
- from awx.sso.common import is_remote_auth_enabled
  register(
@@ -108,8 +108,19 @@
  def authentication_validate(serializer, attrs):
- if attrs.get('DISABLE_LOCAL_AUTH', False) and not is_remote_auth_enabled():
- raise serializers.ValidationError(_("There are no remote authentication systems configured."))
+ remote_auth_settings = [
+ 'AUTH_LDAP_SERVER_URI',
+ 'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY',
+ 'SOCIAL_AUTH_GITHUB_KEY',
+ 'SOCIAL_AUTH_GITHUB_ORG_KEY',
+ 'SOCIAL_AUTH_GITHUB_TEAM_KEY',
+ 'SOCIAL_AUTH_SAML_ENABLED_IDPS',
+ 'RADIUS_SERVER',
+ 'TACACSPLUS_HOST',
+ ]
+ if attrs.get('DISABLE_LOCAL_AUTH', False):
+ if not any(getattr(settings, s, None) for s in remote_auth_settings):
+ raise serializers.ValidationError(_("There are no remote authentication systems configured."))
  return attrs

@@ -155,7 +155,7 @@ class FieldLookupBackend(BaseFilterBackend):
  'search',
  )
- # A list of fields that we know can be filtered on without the possibility
+ # A list of fields that we know can be filtered on without the possiblity
  # of introducing duplicates
  NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField, TextField)
@@ -268,7 +268,7 @@
  continue
  # HACK: make `created` available via API for the Django User ORM model
- # so it keep compatibility with other objects which exposes the `created` attr.
+ # so it keep compatiblity with other objects which exposes the `created` attr.
  if queryset.model._meta.object_name == 'User' and key.startswith('created'):
  key = key.replace('created', 'date_joined')
@@ -347,7 +347,7 @@
  args.append(Q(**{k: v}))
  for role_name in role_filters:
  if not hasattr(queryset.model, 'accessible_pk_qs'):
- raise ParseError(_('Cannot apply role_level filter to this list because its model does not use roles for access control.'))
+ raise ParseError(_('Cannot apply role_level filter to this list because its model ' 'does not use roles for access control.'))
  args.append(Q(pk__in=queryset.model.accessible_pk_qs(request.user, role_name)))
  if or_filters:
  q = Q()

@@ -5,11 +5,13 @@
  import inspect
  import logging
  import time
+ import uuid
  # Django
  from django.conf import settings
  from django.contrib.auth import views as auth_views
  from django.contrib.contenttypes.models import ContentType
+ from django.core.cache import cache
  from django.core.exceptions import FieldDoesNotExist
  from django.db import connection, transaction
  from django.db.models.fields.related import OneToOneRel
@@ -26,14 +28,14 @@ from rest_framework import generics
  from rest_framework.response import Response
  from rest_framework import status
  from rest_framework import views
- from rest_framework.permissions import IsAuthenticated
+ from rest_framework.permissions import AllowAny
  from rest_framework.renderers import StaticHTMLRenderer
  from rest_framework.negotiation import DefaultContentNegotiation
  # AWX
  from awx.api.filters import FieldLookupBackend
  from awx.main.models import UnifiedJob, UnifiedJobTemplate, User, Role, Credential, WorkflowJobTemplateNode, WorkflowApprovalTemplate
- from awx.main.access import optimize_queryset
+ from awx.main.access import access_registry
  from awx.main.utils import camelcase_to_underscore, get_search_fields, getattrd, get_object_or_400, decrypt_field, get_awx_version
  from awx.main.utils.db import get_all_field_names
  from awx.main.utils.licensing import server_product_name
@@ -169,7 +171,7 @@ class APIView(views.APIView):
  self.__init_request_error__ = exc
  except UnsupportedMediaType as exc:
  exc.detail = _(
- 'You did not use correct Content-Type in your HTTP request. If you are using our REST API, the Content-Type must be application/json'
+ 'You did not use correct Content-Type in your HTTP request. ' 'If you are using our REST API, the Content-Type must be application/json'
  )
  self.__init_request_error__ = exc
  return drf_request
@@ -362,7 +364,12 @@ class GenericAPIView(generics.GenericAPIView, APIView):
  return self.queryset._clone()
  elif self.model is not None:
  qs = self.model._default_manager
- qs = optimize_queryset(qs)
+ if self.model in access_registry:
+ access_class = access_registry[self.model]
+ if access_class.select_related:
+ qs = qs.select_related(*access_class.select_related)
+ if access_class.prefetch_related:
+ qs = qs.prefetch_related(*access_class.prefetch_related)
  return qs
  else:
  return super(GenericAPIView, self).get_queryset()
@@ -505,9 +512,6 @@ class SubListAPIView(ParentMixin, ListAPIView):
  # And optionally (user must have given access permission on parent object
  # to view sublist):
  # parent_access = 'read'
- # filter_read_permission sets whether or not to override the default intersection behavior
- # implemented here
- filter_read_permission = True
  def get_description_context(self):
  d = super(SubListAPIView, self).get_description_context()
@@ -522,16 +526,12 @@
  def get_queryset(self):
  parent = self.get_parent_object()
  self.check_parent_access(parent)
- if not self.filter_read_permission:
- return optimize_queryset(self.get_sublist_queryset(parent))
- qs = self.request.user.get_queryset(self.model)
- if hasattr(self, 'parent_key'):
- # This is vastly preferable for ReverseForeignKey relationships
- return qs.filter(**{self.parent_key: parent})
- return qs.distinct() & self.get_sublist_queryset(parent).distinct()
+ qs = self.request.user.get_queryset(self.model).distinct()
+ sublist_qs = self.get_sublist_queryset(parent)
+ return qs & sublist_qs
  def get_sublist_queryset(self, parent):
- return getattrd(parent, self.relationship)
+ return getattrd(parent, self.relationship).distinct()
  class DestroyAPIView(generics.DestroyAPIView):
@@ -580,6 +580,15 @@ class SubListCreateAPIView(SubListAPIView, ListCreateAPIView):
  d.update({'parent_key': getattr(self, 'parent_key', None)})
  return d
+ def get_queryset(self):
+ if hasattr(self, 'parent_key'):
+ # Prefer this filtering because ForeignKey allows us more assumptions
+ parent = self.get_parent_object()
+ self.check_parent_access(parent)
+ qs = self.request.user.get_queryset(self.model)
+ return qs.filter(**{self.parent_key: parent})
+ return super(SubListCreateAPIView, self).get_queryset()
  def create(self, request, *args, **kwargs):
  # If the object ID was not specified, it probably doesn't exist in the
  # DB yet. We want to see if we can create it. The URL may choose to
@@ -665,7 +674,7 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
  location = None
  created = True
- # Retrieve the sub object (whether created or by ID).
+ # Retrive the sub object (whether created or by ID).
  sub = get_object_or_400(self.model, pk=sub_id)
  # Verify we have permission to attach.
@@ -813,7 +822,7 @@ def trigger_delayed_deep_copy(*args, **kwargs):
  class CopyAPIView(GenericAPIView):
  serializer_class = CopySerializer
- permission_classes = (IsAuthenticated,)
+ permission_classes = (AllowAny,)
  copy_return_serializer_class = None
  new_in_330 = True
  new_in_api_v2 = True
@@ -958,11 +967,16 @@ class CopyAPIView(GenericAPIView):
  if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
  new_obj.admin_role.members.add(request.user)
  if sub_objs:
+ # store the copied object dict into cache, because it's
+ # often too large for postgres' notification bus
+ # (which has a default maximum message size of 8k)
+ key = 'deep-copy-{}'.format(str(uuid.uuid4()))
+ cache.set(key, sub_objs, timeout=3600)
  permission_check_func = None
  if hasattr(type(self), 'deep_copy_permission_check_func'):
  permission_check_func = (type(self).__module__, type(self).__name__, 'deep_copy_permission_check_func')
  trigger_delayed_deep_copy(
- self.model.__module__, self.model.__name__, obj.pk, new_obj.pk, request.user.pk, permission_check_func=permission_check_func
+ self.model.__module__, self.model.__name__, obj.pk, new_obj.pk, request.user.pk, key, permission_check_func=permission_check_func
  )
  serializer = self._get_copy_return_serializer(new_obj)
  headers = {'Location': new_obj.get_absolute_url(request=request)}
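The last hunk above stashes the copied sub-objects in the Django cache and passes only the key to the delayed deep-copy task, because the full dict is often larger than postgres' 8k notification-bus limit. A minimal sketch of the consuming side, which is not part of this compare and is only assumed here:

from django.core.cache import cache

def fetch_deep_copy_payload(key):
    # CopyAPIView stored the payload with cache.set(key, sub_objs, timeout=3600),
    # so a missing entry simply means the one-hour window has expired.
    sub_objs = cache.get(key)
    if sub_objs is not None:
        cache.delete(key)  # one-shot payload; avoid leaving large blobs in the cache
    return sub_objs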

@@ -71,7 +71,7 @@ class Metadata(metadata.SimpleMetadata):
  'url': _('URL for this {}.'),
  'related': _('Data structure with URLs of related resources.'),
  'summary_fields': _(
- 'Data structure with name/description for related resources. The output for some objects may be limited for performance reasons.'
+ 'Data structure with name/description for related resources. ' 'The output for some objects may be limited for performance reasons.'
  ),
  'created': _('Timestamp when this {} was created.'),
  'modified': _('Timestamp when this {} was last modified.'),

@@ -25,7 +25,6 @@ __all__ = [
  'UserPermission',
  'IsSystemAdminOrAuditor',
  'WorkflowApprovalPermission',
- 'AnalyticsPermission',
  ]
@@ -251,16 +250,3 @@ class IsSystemAdminOrAuditor(permissions.BasePermission):
  class WebhookKeyPermission(permissions.BasePermission):
  def has_object_permission(self, request, view, obj):
  return request.user.can_access(view.model, 'admin', obj, request.data)
- class AnalyticsPermission(permissions.BasePermission):
- """
- Allows GET/POST/OPTIONS to system admins and system auditors.
- """
- def has_permission(self, request, view):
- if not (request.user and request.user.is_authenticated):
- return False
- if request.method in ["GET", "POST", "OPTIONS"]:
- return request.user.is_superuser or request.user.is_system_auditor
- return request.user.is_superuser

@@ -60,7 +60,7 @@ class BrowsableAPIRenderer(renderers.BrowsableAPIRenderer):
  delattr(renderer_context['view'], '_request')
  def get_raw_data_form(self, data, view, method, request):
- # Set a flag on the view to indicate to the view/serializer that we're
+ # Set a flag on the view to indiciate to the view/serializer that we're
  # creating a raw data form for the browsable API. Store the original
  # request method to determine how to populate the raw data form.
  if request.method in {'OPTIONS', 'DELETE'}:

@@ -8,7 +8,6 @@ import logging
  import re
  from collections import OrderedDict
  from datetime import timedelta
- from uuid import uuid4
  # OAuth2
  from oauthlib import oauth2
@@ -56,8 +55,6 @@ from awx.main.models import (
  ExecutionEnvironment,
  Group,
  Host,
- HostMetric,
- HostMetricSummaryMonthly,
  Instance,
  InstanceGroup,
  InstanceLink,
@@ -111,15 +108,13 @@ from awx.main.utils import (
  extract_ansible_vars,
  encrypt_dict,
  prefetch_page_capabilities,
- get_external_account,
  truncate_stdout,
- get_licenser,
  )
  from awx.main.utils.filters import SmartFilter
  from awx.main.utils.named_url_graph import reset_counters
  from awx.main.scheduler.task_manager_models import TaskManagerModels
  from awx.main.redact import UriCleaner, REPLACE_STR
- from awx.main.signals import update_inventory_computed_fields
  from awx.main.validators import vars_validate_or_raise
@@ -129,8 +124,6 @@ from awx.api.fields import BooleanNullField, CharNullField, ChoiceNullField, Ver
  # AWX Utils
  from awx.api.validators import HostnameRegexValidator
+ from awx.sso.common import get_external_account
  logger = logging.getLogger('awx.api.serializers')
  # Fields that should be summarized regardless of object type.
@@ -158,12 +151,11 @@ SUMMARIZABLE_FK_FIELDS = {
  'kind',
  ),
  'host': DEFAULT_SUMMARY_FIELDS,
- 'constructed_host': DEFAULT_SUMMARY_FIELDS,
  'group': DEFAULT_SUMMARY_FIELDS,
  'default_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
  'execution_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
  'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type', 'allow_override'),
- 'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type', 'allow_override'),
+ 'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
  'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed'),
  'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'kubernetes', 'credential_type_id'),
  'signature_validation_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'credential_type_id'),
@@ -192,11 +184,6 @@ SUMMARIZABLE_FK_FIELDS = {
  }
- # These fields can be edited on a constructed inventory's generated source (possibly by using the constructed
- # inventory's special API endpoint, but also by using the inventory sources endpoint).
- CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS = ('source_vars', 'update_cache_timeout', 'limit', 'verbosity')
  def reverse_gfk(content_object, request):
  """
  Computes a reverse for a GenericForeignKey field.
@@ -220,7 +207,7 @@ class CopySerializer(serializers.Serializer):
  view = self.context.get('view', None)
  obj = view.get_object()
  if name == obj.name:
- raise serializers.ValidationError(_('The original object is already named {}, a copy from it cannot have the same name.'.format(name)))
+ raise serializers.ValidationError(_('The original object is already named {}, a copy from' ' it cannot have the same name.'.format(name)))
  return attrs
@@ -549,7 +536,7 @@ class BaseSerializer(serializers.ModelSerializer, metaclass=BaseSerializerMetacl
  #
  # This logic is to force rendering choice's on an uneditable field.
  # Note: Consider expanding this rendering for more than just choices fields
- # Note: This logic works in conjunction with
+ # Note: This logic works in conjuction with
  if hasattr(model_field, 'choices') and model_field.choices:
  was_editable = model_field.editable
  model_field.editable = True
@@ -760,7 +747,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
class UnifiedJobSerializer(BaseSerializer): class UnifiedJobSerializer(BaseSerializer):
show_capabilities = ['start', 'delete'] show_capabilities = ['start', 'delete']
event_processing_finished = serializers.BooleanField( event_processing_finished = serializers.BooleanField(
help_text=_('Indicates whether all of the events generated by this unified job have been saved to the database.'), read_only=True help_text=_('Indicates whether all of the events generated by this ' 'unified job have been saved to the database.'), read_only=True
) )
class Meta: class Meta:
@@ -954,7 +941,7 @@ class UnifiedJobStdoutSerializer(UnifiedJobSerializer):
class UserSerializer(BaseSerializer): class UserSerializer(BaseSerializer):
password = serializers.CharField(required=False, default='', help_text=_('Field used to change the password.')) password = serializers.CharField(required=False, default='', write_only=True, help_text=_('Write-only field used to change the password.'))
ldap_dn = serializers.CharField(source='profile.ldap_dn', read_only=True) ldap_dn = serializers.CharField(source='profile.ldap_dn', read_only=True)
external_account = serializers.SerializerMethodField(help_text=_('Set if the account is managed by an external service')) external_account = serializers.SerializerMethodField(help_text=_('Set if the account is managed by an external service'))
is_system_auditor = serializers.BooleanField(default=False) is_system_auditor = serializers.BooleanField(default=False)
@@ -981,12 +968,7 @@ class UserSerializer(BaseSerializer):
def to_representation(self, obj): def to_representation(self, obj):
ret = super(UserSerializer, self).to_representation(obj) ret = super(UserSerializer, self).to_representation(obj)
if self.get_external_account(obj): ret.pop('password', None)
# If this is an external account it shouldn't have a password field
ret.pop('password', None)
else:
# If its an internal account lets assume there is a password and return $encrypted$ to the user
ret['password'] = '$encrypted$'
if obj and type(self) is UserSerializer: if obj and type(self) is UserSerializer:
ret['auth'] = obj.social_auth.values('provider', 'uid') ret['auth'] = obj.social_auth.values('provider', 'uid')
return ret return ret
@@ -1000,31 +982,28 @@ class UserSerializer(BaseSerializer):
django_validate_password(value) django_validate_password(value)
if not self.instance and value in (None, ''): if not self.instance and value in (None, ''):
raise serializers.ValidationError(_('Password required for new User.')) raise serializers.ValidationError(_('Password required for new User.'))
# Check if a password is too long
password_max_length = User._meta.get_field('password').max_length
if len(value) > password_max_length:
raise serializers.ValidationError(_('Password max length is {}'.format(password_max_length)))
if getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH', 0) and len(value) < getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH'):
raise serializers.ValidationError(_('Password must be at least {} characters long.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_LENGTH'))))
if getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS', 0) and sum(c.isdigit() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS'):
raise serializers.ValidationError(_('Password must contain at least {} digits.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_DIGITS'))))
if getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER', 0) and sum(c.isupper() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER'):
raise serializers.ValidationError(
_('Password must contain at least {} uppercase characters.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_UPPER')))
)
if getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL', 0) and sum(not c.isalnum() for c in value) < getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL'):
raise serializers.ValidationError(
_('Password must contain at least {} special characters.'.format(getattr(settings, 'LOCAL_PASSWORD_MIN_SPECIAL')))
)
return value return value
def _update_password(self, obj, new_password): def _update_password(self, obj, new_password):
# For now we're not raising an error, just not saving password for # For now we're not raising an error, just not saving password for
# users managed by LDAP who already have an unusable password set. # users managed by LDAP who already have an unusable password set.
# Get external password will return something like ldap or enterprise or None if the user isn't external. We only want to allow a password update for a None option if getattr(settings, 'AUTH_LDAP_SERVER_URI', None):
if new_password and new_password != '$encrypted$' and not self.get_external_account(obj): try:
if obj.pk and obj.profile.ldap_dn and not obj.has_usable_password():
new_password = None
except AttributeError:
pass
if (
getattr(settings, 'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_GITHUB_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_GITHUB_ORG_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_GITHUB_TEAM_KEY', None)
or getattr(settings, 'SOCIAL_AUTH_SAML_ENABLED_IDPS', None)
) and obj.social_auth.all():
new_password = None
if (getattr(settings, 'RADIUS_SERVER', None) or getattr(settings, 'TACACSPLUS_HOST', None)) and obj.enterprise_auth.all():
new_password = None
if new_password:
obj.set_password(new_password) obj.set_password(new_password)
obj.save(update_fields=['password']) obj.save(update_fields=['password'])
@@ -1579,7 +1558,7 @@ class ProjectPlaybooksSerializer(ProjectSerializer):
class ProjectInventoriesSerializer(ProjectSerializer): class ProjectInventoriesSerializer(ProjectSerializer):
inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, not comprehensive.')) inventory_files = serializers.ReadOnlyField(help_text=_('Array of inventory files and directories available within this project, ' 'not comprehensive.'))
class Meta: class Meta:
model = Project model = Project
@@ -1629,8 +1608,8 @@ class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
fields = ('*', 'host_status_counts', 'playbook_counts') fields = ('*', 'host_status_counts', 'playbook_counts')
def get_playbook_counts(self, obj): def get_playbook_counts(self, obj):
task_count = obj.get_event_queryset().filter(event='playbook_on_task_start').count() task_count = obj.project_update_events.filter(event='playbook_on_task_start').count()
play_count = obj.get_event_queryset().filter(event='playbook_on_play_start').count() play_count = obj.project_update_events.filter(event='playbook_on_play_start').count()
data = {'play_count': play_count, 'task_count': task_count} data = {'play_count': play_count, 'task_count': task_count}
@@ -1813,12 +1792,12 @@ class ConstructedInventorySerializer(InventorySerializer):
class Meta: class Meta:
model = Inventory model = Inventory
fields = ('*', '-host_filter') + CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS fields = ('*', '-host_filter', 'source_vars', 'update_cache_timeout', 'limit', 'verbosity')
read_only_fields = ('*', 'kind') read_only_fields = ('*', 'kind')
def pop_inv_src_data(self, data): def pop_inv_src_data(self, data):
inv_src_data = {} inv_src_data = {}
for field in CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS: for field in ('source_vars', 'update_cache_timeout', 'limit', 'verbosity'):
if field in data: if field in data:
# values always need to be removed, as they are not valid for Inventory model # values always need to be removed, as they are not valid for Inventory model
value = data.pop(field) value = data.pop(field)
@@ -1906,7 +1885,6 @@ class HostSerializer(BaseSerializerWithVariables):
) )
if obj.inventory.kind == 'constructed': if obj.inventory.kind == 'constructed':
res['original_host'] = self.reverse('api:host_detail', kwargs={'pk': obj.instance_id}) res['original_host'] = self.reverse('api:host_detail', kwargs={'pk': obj.instance_id})
res['ansible_facts'] = self.reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.instance_id})
if obj.inventory: if obj.inventory:
res['inventory'] = self.reverse('api:inventory_detail', kwargs={'pk': obj.inventory.pk}) res['inventory'] = self.reverse('api:inventory_detail', kwargs={'pk': obj.inventory.pk})
if obj.last_job: if obj.last_job:
@@ -1928,10 +1906,6 @@ class HostSerializer(BaseSerializerWithVariables):
group_list = [{'id': g.id, 'name': g.name} for g in obj.groups.all().order_by('id')[:5]] group_list = [{'id': g.id, 'name': g.name} for g in obj.groups.all().order_by('id')[:5]]
group_cnt = obj.groups.count() group_cnt = obj.groups.count()
d.setdefault('groups', {'count': group_cnt, 'results': group_list}) d.setdefault('groups', {'count': group_cnt, 'results': group_list})
if obj.inventory.kind == 'constructed':
summaries_qs = obj.constructed_host_summaries
else:
summaries_qs = obj.job_host_summaries
d.setdefault( d.setdefault(
'recent_jobs', 'recent_jobs',
[ [
@@ -1942,7 +1916,7 @@ class HostSerializer(BaseSerializerWithVariables):
'status': j.job.status, 'status': j.job.status,
'finished': j.job.finished, 'finished': j.job.finished,
} }
for j in summaries_qs.select_related('job__job_template').order_by('-created').defer('job__extra_vars', 'job__artifacts')[:5] for j in obj.job_host_summaries.select_related('job__job_template').order_by('-created').defer('job__extra_vars', 'job__artifacts')[:5]
], ],
) )
return d return d
@@ -1985,7 +1959,7 @@ class HostSerializer(BaseSerializerWithVariables):
vars_dict = parse_yaml_or_json(variables) vars_dict = parse_yaml_or_json(variables)
vars_dict['ansible_ssh_port'] = port vars_dict['ansible_ssh_port'] = port
attrs['variables'] = json.dumps(vars_dict) attrs['variables'] = json.dumps(vars_dict)
if inventory and Group.objects.filter(name=name, inventory=inventory).exists(): if Group.objects.filter(name=name, inventory=inventory).exists():
raise serializers.ValidationError(_('A Group with that name already exists.')) raise serializers.ValidationError(_('A Group with that name already exists.'))
return super(HostSerializer, self).validate(attrs) return super(HostSerializer, self).validate(attrs)
@@ -2077,130 +2051,6 @@ class GroupSerializer(BaseSerializerWithVariables):
return ret return ret
class BulkHostSerializer(HostSerializer):
class Meta:
model = Host
fields = (
'name',
'enabled',
'instance_id',
'description',
'variables',
)
class BulkHostCreateSerializer(serializers.Serializer):
inventory = serializers.PrimaryKeyRelatedField(
queryset=Inventory.objects.all(), required=True, write_only=True, help_text=_('Primary Key ID of inventory to add hosts to.')
)
hosts = serializers.ListField(
child=BulkHostSerializer(),
allow_empty=False,
max_length=100000,
write_only=True,
help_text=_('List of hosts to be created, JSON. e.g. [{"name": "example.com"}, {"name": "127.0.0.1"}]'),
)
class Meta:
model = Inventory
fields = ('inventory', 'hosts')
read_only_fields = ()
def raise_if_host_counts_violated(self, attrs):
validation_info = get_licenser().validate()
org = attrs['inventory'].organization
if org:
org_active_count = Host.objects.org_active_count(org.id)
new_hosts = [h['name'] for h in attrs['hosts']]
org_net_new_host_count = len(new_hosts) - Host.objects.filter(inventory__organization=1, name__in=new_hosts).values('name').distinct().count()
if org.max_hosts > 0 and org_active_count + org_net_new_host_count > org.max_hosts:
raise PermissionDenied(
_(
"You have already reached the maximum number of %s hosts"
" allowed for your organization. Contact your System Administrator"
" for assistance." % org.max_hosts
)
)
# Don't check license if it is open license
if validation_info.get('license_type', 'UNLICENSED') == 'open':
return
sys_free_instances = validation_info.get('free_instances', 0)
system_net_new_host_count = Host.objects.exclude(name__in=new_hosts).count()
if system_net_new_host_count > sys_free_instances:
hard_error = validation_info.get('trial', False) is True or validation_info['instance_count'] == 10
if hard_error:
# Only raise permission error for trial, otherwise just log a warning as we do in other inventory import situations
raise PermissionDenied(_("Host count exceeds available instances."))
logger.warning(_("Number of hosts allowed by license has been exceeded."))
def validate(self, attrs):
request = self.context.get('request', None)
inv = attrs['inventory']
if inv.kind != '':
raise serializers.ValidationError(_('Hosts can only be created in manual inventories (not smart or constructed types).'))
if len(attrs['hosts']) > settings.BULK_HOST_MAX_CREATE:
raise serializers.ValidationError(_('Number of hosts exceeds system setting BULK_HOST_MAX_CREATE'))
if request and not request.user.is_superuser:
if request.user not in inv.admin_role:
raise serializers.ValidationError(_(f'Inventory with id {inv.id} not found or lack permissions to add hosts.'))
current_hostnames = set(inv.hosts.values_list('name', flat=True))
new_names = [host['name'] for host in attrs['hosts']]
duplicate_new_names = [n for n in new_names if n in current_hostnames or new_names.count(n) > 1]
if duplicate_new_names:
raise serializers.ValidationError(_(f'Hostnames must be unique in an inventory. Duplicates found: {duplicate_new_names}'))
self.raise_if_host_counts_violated(attrs)
_now = now()
for host in attrs['hosts']:
host['created'] = _now
host['modified'] = _now
host['inventory'] = inv
return attrs
def create(self, validated_data):
# This assumes total_hosts is up to date, and it can get out of date if the inventory computed fields have not been updated lately.
# If we wanted to side step this we could query Hosts.objects.filter(inventory...)
old_total_hosts = validated_data['inventory'].total_hosts
result = [Host(**attrs) for attrs in validated_data['hosts']]
try:
Host.objects.bulk_create(result)
except Exception as e:
raise serializers.ValidationError({"detail": _(f"cannot create host, host creation error {e}")})
new_total_hosts = old_total_hosts + len(result)
request = self.context.get('request', None)
changes = {'total_hosts': [old_total_hosts, new_total_hosts]}
activity_entry = ActivityStream.objects.create(
operation='update',
object1='inventory',
changes=json.dumps(changes),
actor=request.user,
)
activity_entry.inventory.add(validated_data['inventory'])
# This actually updates the cached "total_hosts" field on the inventory
update_inventory_computed_fields.delay(validated_data['inventory'].id)
return_keys = [k for k in BulkHostSerializer().fields.keys()] + ['id']
return_data = {}
host_data = []
for r in result:
item = {k: getattr(r, k) for k in return_keys}
if settings.DATABASES and ('sqlite3' not in settings.DATABASES.get('default', {}).get('ENGINE')):
# sqlite acts different with bulk_create -- it doesn't return the id of the objects
# to get it, you have to do an additional query, which is not useful for our tests
item['url'] = reverse('api:host_detail', kwargs={'pk': r.id})
item['inventory'] = reverse('api:inventory_detail', kwargs={'pk': validated_data['inventory'].id})
host_data.append(item)
return_data['url'] = reverse('api:inventory_detail', kwargs={'pk': validated_data['inventory'].id})
return_data['hosts'] = host_data
return return_data
class GroupTreeSerializer(GroupSerializer): class GroupTreeSerializer(GroupSerializer):
children = serializers.SerializerMethodField() children = serializers.SerializerMethodField()
@@ -2256,7 +2106,6 @@ class InventorySourceOptionsSerializer(BaseSerializer):
'source', 'source',
'source_path', 'source_path',
'source_vars', 'source_vars',
'scm_branch',
'credential', 'credential',
'enabled_var', 'enabled_var',
'enabled_value', 'enabled_value',
@@ -2395,6 +2244,8 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
obj = super(InventorySourceSerializer, self).update(obj, validated_data) obj = super(InventorySourceSerializer, self).update(obj, validated_data)
if deprecated_fields: if deprecated_fields:
self._update_deprecated_fields(deprecated_fields, obj) self._update_deprecated_fields(deprecated_fields, obj)
if obj.source == 'constructed':
raise serializers.ValidationError({'error': _("Cannot edit source of type constructed.")})
return obj return obj
# TODO: remove when old 'credential' fields are removed # TODO: remove when old 'credential' fields are removed
@@ -2418,25 +2269,16 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
def get_field_from_model_or_attrs(fd): def get_field_from_model_or_attrs(fd):
return attrs.get(fd, self.instance and getattr(self.instance, fd) or None) return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
if self.instance and self.instance.source == 'constructed': if get_field_from_model_or_attrs('source') == 'scm':
allowed_fields = CONSTRUCTED_INVENTORY_SOURCE_EDITABLE_FIELDS
for field in attrs:
if attrs[field] != getattr(self.instance, field) and field not in allowed_fields:
raise serializers.ValidationError({"error": _("Cannot change field '{}' on a constructed inventory source.").format(field)})
elif get_field_from_model_or_attrs('source') == 'scm':
if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None: if ('source' in attrs or 'source_project' in attrs) and get_field_from_model_or_attrs('source_project') is None:
raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")}) raise serializers.ValidationError({"source_project": _("Project required for scm type sources.")})
elif get_field_from_model_or_attrs('source') == 'constructed': elif (get_field_from_model_or_attrs('source') == 'constructed') and (self.instance and self.instance.source != 'constructed'):
raise serializers.ValidationError({"error": _('constructed not a valid source for inventory')}) raise serializers.ValidationError({"Error": _('constructed not a valid source for inventory')})
else: else:
redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path', 'scm_branch'])) redundant_scm_fields = list(filter(lambda x: attrs.get(x, None), ['source_project', 'source_path']))
if redundant_scm_fields: if redundant_scm_fields:
raise serializers.ValidationError({"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))}) raise serializers.ValidationError({"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))})
project = get_field_from_model_or_attrs('source_project')
if get_field_from_model_or_attrs('scm_branch') and not project.allow_override:
raise serializers.ValidationError({'scm_branch': _('Project does not allow overriding branch.')})
attrs = super(InventorySourceSerializer, self).validate(attrs) attrs = super(InventorySourceSerializer, self).validate(attrs)
# Check type consistency of source and cloud credential, if provided # Check type consistency of source and cloud credential, if provided
@@ -2905,7 +2747,7 @@ class CredentialSerializer(BaseSerializer):
): ):
if getattr(self.instance, related_objects).count() > 0: if getattr(self.instance, related_objects).count() > 0:
raise ValidationError( raise ValidationError(
_('You cannot change the credential type of the credential, as it may break the functionality of the resources using it.') _('You cannot change the credential type of the credential, as it may break the functionality' ' of the resources using it.')
) )
return credential_type return credential_type
@@ -2925,7 +2767,7 @@ class CredentialSerializerCreate(CredentialSerializer):
default=None, default=None,
write_only=True, write_only=True,
allow_null=True, allow_null=True,
help_text=_('Write-only field used to add user to owner role. If provided, do not give either team or organization. Only valid for creation.'), help_text=_('Write-only field used to add user to owner role. If provided, ' 'do not give either team or organization. Only valid for creation.'),
) )
team = serializers.PrimaryKeyRelatedField( team = serializers.PrimaryKeyRelatedField(
queryset=Team.objects.all(), queryset=Team.objects.all(),
@@ -2933,14 +2775,14 @@ class CredentialSerializerCreate(CredentialSerializer):
default=None, default=None,
write_only=True, write_only=True,
allow_null=True, allow_null=True,
help_text=_('Write-only field used to add team to owner role. If provided, do not give either user or organization. Only valid for creation.'), help_text=_('Write-only field used to add team to owner role. If provided, ' 'do not give either user or organization. Only valid for creation.'),
) )
organization = serializers.PrimaryKeyRelatedField( organization = serializers.PrimaryKeyRelatedField(
queryset=Organization.objects.all(), queryset=Organization.objects.all(),
required=False, required=False,
default=None, default=None,
allow_null=True, allow_null=True,
help_text=_('Inherit permissions from organization roles. If provided on creation, do not give either user or team.'), help_text=_('Inherit permissions from organization roles. If provided on creation, ' 'do not give either user or team.'),
) )
class Meta: class Meta:
@@ -2962,7 +2804,7 @@ class CredentialSerializerCreate(CredentialSerializer):
if len(owner_fields) > 1: if len(owner_fields) > 1:
received = ", ".join(sorted(owner_fields)) received = ", ".join(sorted(owner_fields))
raise serializers.ValidationError( raise serializers.ValidationError(
{"detail": _("Only one of 'user', 'team', or 'organization' should be provided, received {} fields.".format(received))} {"detail": _("Only one of 'user', 'team', or 'organization' should be provided, " "received {} fields.".format(received))}
) )
if attrs.get('team'): if attrs.get('team'):
@@ -3622,7 +3464,7 @@ class SystemJobSerializer(UnifiedJobSerializer):
try: try:
return obj.result_stdout return obj.result_stdout
except StdoutMaxBytesExceeded as e: except StdoutMaxBytesExceeded as e:
return _("Standard Output too large to display ({text_size} bytes), only download supported for sizes over {supported_size} bytes.").format( return _("Standard Output too large to display ({text_size} bytes), " "only download supported for sizes over {supported_size} bytes.").format(
text_size=e.total, supported_size=e.supported text_size=e.total, supported_size=e.supported
) )
@@ -4169,7 +4011,6 @@ class JobHostSummarySerializer(BaseSerializer):
'-description', '-description',
'job', 'job',
'host', 'host',
'constructed_host',
'host_name', 'host_name',
'changed', 'changed',
'dark', 'dark',
@@ -4253,7 +4094,7 @@ class JobEventSerializer(BaseSerializer):
# Show full stdout for playbook_on_* events. # Show full stdout for playbook_on_* events.
if obj and obj.event.startswith('playbook_on'): if obj and obj.event.startswith('playbook_on'):
return data return data
# If the view logic says to not truncate (request was to the detail view or a param was used) # If the view logic says to not trunctate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False): if self.context.get('no_truncate', False):
return data return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
@@ -4284,7 +4125,7 @@ class ProjectUpdateEventSerializer(JobEventSerializer):
# raw SCM URLs in their stdout (which *could* contain passwords) # raw SCM URLs in their stdout (which *could* contain passwords)
# attempt to detect and filter HTTP basic auth passwords in the stdout # attempt to detect and filter HTTP basic auth passwords in the stdout
# of these types of events # of these types of events
if obj.event_data.get('task_action') in ('git', 'svn', 'ansible.builtin.git', 'ansible.builtin.svn'): if obj.event_data.get('task_action') in ('git', 'svn'):
try: try:
return json.loads(UriCleaner.remove_sensitive(json.dumps(obj.event_data))) return json.loads(UriCleaner.remove_sensitive(json.dumps(obj.event_data)))
except Exception: except Exception:
@@ -4328,7 +4169,7 @@ class AdHocCommandEventSerializer(BaseSerializer):
def to_representation(self, obj): def to_representation(self, obj):
data = super(AdHocCommandEventSerializer, self).to_representation(obj) data = super(AdHocCommandEventSerializer, self).to_representation(obj)
# If the view logic says to not truncate (request was to the detail view or a param was used) # If the view logic says to not trunctate (request was to the detail view or a param was used)
if self.context.get('no_truncate', False): if self.context.get('no_truncate', False):
return data return data
max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY max_bytes = settings.EVENT_STDOUT_MAX_BYTES_DISPLAY
@@ -4536,7 +4377,7 @@ class JobLaunchSerializer(BaseSerializer):
if cred.unique_hash() in provided_mapping.keys(): if cred.unique_hash() in provided_mapping.keys():
continue # User replaced credential with new of same type continue # User replaced credential with new of same type
errors.setdefault('credentials', []).append( errors.setdefault('credentials', []).append(
_('Removing {} credential at launch time without replacement is not supported. Provided list lacked credential(s): {}.').format( _('Removing {} credential at launch time without replacement is not supported. ' 'Provided list lacked credential(s): {}.').format(
cred.unique_hash(display=True), ', '.join([str(c) for c in removed_creds]) cred.unique_hash(display=True), ', '.join([str(c) for c in removed_creds])
) )
) )
@@ -4675,270 +4516,6 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
return accepted return accepted
class BulkJobNodeSerializer(WorkflowJobNodeSerializer):
# We don't do a PrimaryKeyRelatedField for unified_job_template and others, because that increases the number
# of database queries, rather we take them as integer and later convert them to objects in get_objectified_jobs
unified_job_template = serializers.IntegerField(
required=True, min_value=1, help_text=_('Primary key of the template for this job, can be a job template or inventory source.')
)
inventory = serializers.IntegerField(required=False, min_value=1)
execution_environment = serializers.IntegerField(required=False, min_value=1)
# many-to-many fields
credentials = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
labels = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
instance_groups = serializers.ListField(child=serializers.IntegerField(min_value=1), required=False)
class Meta:
model = WorkflowJobNode
fields = ('*', 'credentials', 'labels', 'instance_groups') # m2m fields are not canonical for WJ nodes
def validate(self, attrs):
return super(LaunchConfigurationBaseSerializer, self).validate(attrs)
def get_validation_exclusions(self, obj=None):
ret = super().get_validation_exclusions(obj)
ret.extend(['unified_job_template', 'inventory', 'execution_environment'])
return ret
class BulkJobLaunchSerializer(serializers.Serializer):
name = serializers.CharField(default='Bulk Job Launch', max_length=512, write_only=True, required=False, allow_blank=True) # limited by max name of jobs
jobs = BulkJobNodeSerializer(
many=True,
allow_empty=False,
write_only=True,
max_length=100000,
help_text=_('List of jobs to be launched, JSON. e.g. [{"unified_job_template": 7}, {"unified_job_template": 10}]'),
)
description = serializers.CharField(write_only=True, required=False, allow_blank=False)
extra_vars = serializers.JSONField(write_only=True, required=False)
organization = serializers.PrimaryKeyRelatedField(
queryset=Organization.objects.all(),
required=False,
default=None,
allow_null=True,
write_only=True,
help_text=_('Inherit permissions from this organization. If not provided, a organization the user is a member of will be selected automatically.'),
)
inventory = serializers.PrimaryKeyRelatedField(queryset=Inventory.objects.all(), required=False, write_only=True)
limit = serializers.CharField(write_only=True, required=False, allow_blank=False)
scm_branch = serializers.CharField(write_only=True, required=False, allow_blank=False)
skip_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
job_tags = serializers.CharField(write_only=True, required=False, allow_blank=False)
class Meta:
model = WorkflowJob
fields = ('name', 'jobs', 'description', 'extra_vars', 'organization', 'inventory', 'limit', 'scm_branch', 'skip_tags', 'job_tags')
read_only_fields = ()
def validate(self, attrs):
request = self.context.get('request', None)
identifiers = set()
if len(attrs['jobs']) > settings.BULK_JOB_MAX_LAUNCH:
raise serializers.ValidationError(_('Number of requested jobs exceeds system setting BULK_JOB_MAX_LAUNCH'))
for node in attrs['jobs']:
if 'identifier' in node:
if node['identifier'] in identifiers:
raise serializers.ValidationError(_(f"Identifier {node['identifier']} not unique"))
identifiers.add(node['identifier'])
else:
node['identifier'] = str(uuid4())
requested_ujts = {j['unified_job_template'] for j in attrs['jobs']}
requested_use_inventories = {job['inventory'] for job in attrs['jobs'] if 'inventory' in job}
requested_use_execution_environments = {job['execution_environment'] for job in attrs['jobs'] if 'execution_environment' in job}
requested_use_credentials = set()
requested_use_labels = set()
requested_use_instance_groups = set()
for job in attrs['jobs']:
for cred in job.get('credentials', []):
requested_use_credentials.add(cred)
for label in job.get('labels', []):
requested_use_labels.add(label)
for instance_group in job.get('instance_groups', []):
requested_use_instance_groups.add(instance_group)
key_to_obj_map = {
"unified_job_template": {obj.id: obj for obj in UnifiedJobTemplate.objects.filter(id__in=requested_ujts)},
"inventory": {obj.id: obj for obj in Inventory.objects.filter(id__in=requested_use_inventories)},
"credentials": {obj.id: obj for obj in Credential.objects.filter(id__in=requested_use_credentials)},
"labels": {obj.id: obj for obj in Label.objects.filter(id__in=requested_use_labels)},
"instance_groups": {obj.id: obj for obj in InstanceGroup.objects.filter(id__in=requested_use_instance_groups)},
"execution_environment": {obj.id: obj for obj in ExecutionEnvironment.objects.filter(id__in=requested_use_execution_environments)},
}
ujts = {}
for ujt in key_to_obj_map['unified_job_template'].values():
ujts.setdefault(type(ujt), [])
ujts[type(ujt)].append(ujt)
unallowed_types = set(ujts.keys()) - set([JobTemplate, Project, InventorySource, WorkflowJobTemplate])
if unallowed_types:
type_names = ' '.join([cls._meta.verbose_name.title() for cls in unallowed_types])
raise serializers.ValidationError(_("Template types {type_names} not allowed in bulk jobs").format(type_names=type_names))
for model, obj_list in ujts.items():
role_field = 'execute_role' if issubclass(model, (JobTemplate, WorkflowJobTemplate)) else 'update_role'
self.check_list_permission(model, set([obj.id for obj in obj_list]), role_field)
self.check_organization_permission(attrs, request)
if 'inventory' in attrs:
requested_use_inventories.add(attrs['inventory'].id)
self.check_list_permission(Inventory, requested_use_inventories, 'use_role')
self.check_list_permission(Credential, requested_use_credentials, 'use_role')
self.check_list_permission(Label, requested_use_labels)
self.check_list_permission(InstanceGroup, requested_use_instance_groups) # TODO: change to use_role for conflict
self.check_list_permission(ExecutionEnvironment, requested_use_execution_environments) # TODO: change if roles introduced
jobs_object = self.get_objectified_jobs(attrs, key_to_obj_map)
attrs['jobs'] = jobs_object
if 'extra_vars' in attrs:
extra_vars_dict = parse_yaml_or_json(attrs['extra_vars'])
attrs['extra_vars'] = json.dumps(extra_vars_dict)
attrs = super().validate(attrs)
return attrs
def check_list_permission(self, model, id_list, role_field=None):
if not id_list:
return
user = self.context['request'].user
if role_field is None: # implies "read" level permission is required
access_qs = user.get_queryset(model)
else:
access_qs = model.accessible_objects(user, role_field)
not_allowed = set(id_list) - set(access_qs.filter(id__in=id_list).values_list('id', flat=True))
if not_allowed:
raise serializers.ValidationError(
_("{model_name} {not_allowed} not found or you don't have permissions to access it").format(
model_name=model._meta.verbose_name_plural.title(), not_allowed=not_allowed
)
)
def create(self, validated_data):
request = self.context.get('request', None)
launch_user = request.user if request else None
job_node_data = validated_data.pop('jobs')
wfj_deferred_attr_names = ('skip_tags', 'limit', 'job_tags')
wfj_deferred_vals = {}
for item in wfj_deferred_attr_names:
wfj_deferred_vals[item] = validated_data.pop(item, None)
wfj = WorkflowJob.objects.create(**validated_data, is_bulk_job=True, launch_type='manual', created_by=launch_user)
for key, val in wfj_deferred_vals.items():
if val:
setattr(wfj, key, val)
nodes = []
node_m2m_objects = {}
node_m2m_object_types_to_through_model = {
'credentials': WorkflowJobNode.credentials.through,
'labels': WorkflowJobNode.labels.through,
'instance_groups': WorkflowJobNode.instance_groups.through,
}
node_deferred_attr_names = (
'limit',
'scm_branch',
'verbosity',
'forks',
'diff_mode',
'job_tags',
'job_type',
'skip_tags',
'job_slice_count',
'timeout',
)
node_deferred_attrs = {}
for node_attrs in job_node_data:
# we need to add any m2m objects after creation via the through model
node_m2m_objects[node_attrs['identifier']] = {}
node_deferred_attrs[node_attrs['identifier']] = {}
for item in node_m2m_object_types_to_through_model.keys():
if item in node_attrs:
node_m2m_objects[node_attrs['identifier']][item] = node_attrs.pop(item)
# Some attributes are not accepted by WorkflowJobNode __init__, we have to set them after
for item in node_deferred_attr_names:
if item in node_attrs:
node_deferred_attrs[node_attrs['identifier']][item] = node_attrs.pop(item)
# Create the node objects
node_obj = WorkflowJobNode(workflow_job=wfj, created=wfj.created, modified=wfj.modified, **node_attrs)
# we can set the deferred attrs now
for item, value in node_deferred_attrs[node_attrs['identifier']].items():
setattr(node_obj, item, value)
# the node is now ready to be bulk created
nodes.append(node_obj)
# we'll need this later when we do the m2m through model bulk create
node_m2m_objects[node_attrs['identifier']]['node'] = node_obj
WorkflowJobNode.objects.bulk_create(nodes)
# Deal with the m2m objects we have to create once the node exists
for field_name, through_model in node_m2m_object_types_to_through_model.items():
through_model_objects = []
for node_identifier in node_m2m_objects.keys():
if field_name in node_m2m_objects[node_identifier] and field_name == 'credentials':
for cred in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(credential=cred, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if field_name in node_m2m_objects[node_identifier] and field_name == 'labels':
for label in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(label=label, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if field_name in node_m2m_objects[node_identifier] and field_name == 'instance_groups':
for instance_group in node_m2m_objects[node_identifier][field_name]:
through_model_objects.append(through_model(instancegroup=instance_group, workflowjobnode=node_m2m_objects[node_identifier]['node']))
if through_model_objects:
through_model.objects.bulk_create(through_model_objects)
wfj.save()
wfj.signal_start()
return WorkflowJobSerializer().to_representation(wfj)
def check_organization_permission(self, attrs, request):
# validate Organization
# - If the orgs is not set, set it to the org of the launching user
# - If the user is part of multiple orgs, throw a validation error saying user is part of multiple orgs, please provide one
if not request.user.is_superuser:
read_org_qs = Organization.accessible_objects(request.user, 'member_role')
if 'organization' not in attrs or attrs['organization'] == None or attrs['organization'] == '':
read_org_ct = read_org_qs.count()
if read_org_ct == 1:
attrs['organization'] = read_org_qs.first()
elif read_org_ct > 1:
raise serializers.ValidationError("User has permission to multiple Organizations, please set one of them in the request")
else:
raise serializers.ValidationError("User not part of any organization, please assign an organization to assign to the bulk job")
else:
allowed_orgs = set(read_org_qs.values_list('id', flat=True))
requested_org = attrs['organization']
if requested_org.id not in allowed_orgs:
raise ValidationError(_(f"Organization {requested_org.id} not found or you don't have permissions to access it"))
def get_objectified_jobs(self, attrs, key_to_obj_map):
objectified_jobs = []
# This loop is generalized so we should only have to add related items to the key_to_obj_map
for job in attrs['jobs']:
objectified_job = {}
for key, value in job.items():
if key in key_to_obj_map:
if isinstance(value, int):
objectified_job[key] = key_to_obj_map[key][value]
elif isinstance(value, list):
objectified_job[key] = [key_to_obj_map[key][item] for item in value]
else:
objectified_job[key] = value
objectified_jobs.append(objectified_job)
return objectified_jobs
class NotificationTemplateSerializer(BaseSerializer): class NotificationTemplateSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete', 'copy'] show_capabilities = ['edit', 'delete', 'copy']
capabilities_prefetch = [{'copy': 'organization.admin'}] capabilities_prefetch = [{'copy': 'organization.admin'}]
@@ -5018,7 +4595,7 @@ class NotificationTemplateSerializer(BaseSerializer):
for subevent in event_messages: for subevent in event_messages:
if subevent not in ('running', 'approved', 'timed_out', 'denied'): if subevent not in ('running', 'approved', 'timed_out', 'denied'):
error_list.append( error_list.append(
_("Workflow Approval event '{}' invalid, must be one of 'running', 'approved', 'timed_out', or 'denied'").format(subevent) _("Workflow Approval event '{}' invalid, must be one of " "'running', 'approved', 'timed_out', or 'denied'").format(subevent)
) )
continue continue
subevent_messages = event_messages[subevent] subevent_messages = event_messages[subevent]
@@ -5285,7 +4862,7 @@ class ScheduleSerializer(LaunchConfigurationBaseSerializer, SchedulePreviewSeria
), ),
) )
until = serializers.SerializerMethodField( until = serializers.SerializerMethodField(
help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end an empty string will be returned'), help_text=_('The date this schedule will end. This field is computed from the RRULE. If the schedule does not end an emptry string will be returned'),
) )
class Meta: class Meta:
@@ -5435,7 +5012,7 @@ class InstanceSerializer(BaseSerializer):
res = super(InstanceSerializer, self).get_related(obj) res = super(InstanceSerializer, self).get_related(obj)
res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk}) res['jobs'] = self.reverse('api:instance_unified_jobs_list', kwargs={'pk': obj.pk})
res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk}) res['instance_groups'] = self.reverse('api:instance_instance_groups_list', kwargs={'pk': obj.pk})
if obj.node_type in [Instance.Types.EXECUTION, Instance.Types.HOP]: if settings.IS_K8S and obj.node_type in (Instance.Types.EXECUTION,):
res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk}) res['install_bundle'] = self.reverse('api:instance_install_bundle', kwargs={'pk': obj.pk})
res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk}) res['peers'] = self.reverse('api:instance_peers_list', kwargs={"pk": obj.pk})
if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor: if self.context['request'].user.is_superuser or self.context['request'].user.is_system_auditor:
@@ -5522,32 +5099,6 @@ class InstanceHealthCheckSerializer(BaseSerializer):
fields = read_only_fields fields = read_only_fields
class HostMetricSerializer(BaseSerializer):
show_capabilities = ['delete']
class Meta:
model = HostMetric
fields = (
"id",
"hostname",
"url",
"first_automation",
"last_automation",
"last_deleted",
"automated_counter",
"deleted_counter",
"deleted",
"used_in_inventories",
)
class HostMetricSummaryMonthlySerializer(BaseSerializer):
class Meta:
model = HostMetricSummaryMonthly
read_only_fields = ("id", "date", "license_consumed", "license_capacity", "hosts_added", "hosts_deleted", "indirectly_managed_hosts")
fields = read_only_fields
class InstanceGroupSerializer(BaseSerializer): class InstanceGroupSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete'] show_capabilities = ['edit', 'delete']
capacity = serializers.SerializerMethodField() capacity = serializers.SerializerMethodField()
@@ -5558,7 +5109,7 @@ class InstanceGroupSerializer(BaseSerializer):
instances = serializers.SerializerMethodField() instances = serializers.SerializerMethodField()
is_container_group = serializers.BooleanField( is_container_group = serializers.BooleanField(
required=False, required=False,
help_text=_('Indicates whether instances in this group are containerized.Containerized groups have a designated Openshift or Kubernetes cluster.'), help_text=_('Indicates whether instances in this group are containerized.' 'Containerized groups have a designated Openshift or Kubernetes cluster.'),
) )
# NOTE: help_text is duplicated from field definitions, no obvious way of # NOTE: help_text is duplicated from field definitions, no obvious way of
# both defining field details here and also getting the field's help_text # both defining field details here and also getting the field's help_text
@@ -5569,7 +5120,7 @@ class InstanceGroupSerializer(BaseSerializer):
required=False, required=False,
initial=0, initial=0,
label=_('Policy Instance Percentage'), label=_('Policy Instance Percentage'),
help_text=_("Minimum percentage of all instances that will be automatically assigned to this group when new instances come online."), help_text=_("Minimum percentage of all instances that will be automatically assigned to " "this group when new instances come online."),
) )
policy_instance_minimum = serializers.IntegerField( policy_instance_minimum = serializers.IntegerField(
default=0, default=0,
@@ -5577,7 +5128,7 @@ class InstanceGroupSerializer(BaseSerializer):
required=False, required=False,
initial=0, initial=0,
label=_('Policy Instance Minimum'), label=_('Policy Instance Minimum'),
help_text=_("Static minimum number of Instances that will be automatically assign to this group when new instances come online."), help_text=_("Static minimum number of Instances that will be automatically assign to " "this group when new instances come online."),
) )
max_concurrent_jobs = serializers.IntegerField( max_concurrent_jobs = serializers.IntegerField(
default=0, default=0,
@@ -5633,8 +5184,6 @@ class InstanceGroupSerializer(BaseSerializer):
res = super(InstanceGroupSerializer, self).get_related(obj) res = super(InstanceGroupSerializer, self).get_related(obj)
res['jobs'] = self.reverse('api:instance_group_unified_jobs_list', kwargs={'pk': obj.pk}) res['jobs'] = self.reverse('api:instance_group_unified_jobs_list', kwargs={'pk': obj.pk})
res['instances'] = self.reverse('api:instance_group_instance_list', kwargs={'pk': obj.pk}) res['instances'] = self.reverse('api:instance_group_instance_list', kwargs={'pk': obj.pk})
res['access_list'] = self.reverse('api:instance_group_access_list', kwargs={'pk': obj.pk})
res['object_roles'] = self.reverse('api:instance_group_object_role_list', kwargs={'pk': obj.pk})
if obj.credential: if obj.credential:
res['credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.credential_id}) res['credential'] = self.reverse('api:credential_detail', kwargs={'pk': obj.credential_id})


@@ -1,10 +1,16 @@
import json
import warnings import warnings
from rest_framework.permissions import AllowAny from coreapi.document import Object, Link
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from drf_yasg.views import get_schema_view from rest_framework import exceptions
from drf_yasg import openapi from rest_framework.permissions import AllowAny
from rest_framework.renderers import CoreJSONRenderer
from rest_framework.response import Response
from rest_framework.schemas import SchemaGenerator, AutoSchema as DRFAuthSchema
from rest_framework.views import APIView
from rest_framework_swagger import renderers
class SuperUserSchemaGenerator(SchemaGenerator): class SuperUserSchemaGenerator(SchemaGenerator):
@@ -49,15 +55,43 @@ class AutoSchema(DRFAuthSchema):
return description return description
schema_view = get_schema_view( class SwaggerSchemaView(APIView):
openapi.Info( _ignore_model_permissions = True
title="Snippets API", exclude_from_schema = True
default_version='v1', permission_classes = [AllowAny]
description="Test description", renderer_classes = [CoreJSONRenderer, renderers.OpenAPIRenderer, renderers.SwaggerUIRenderer]
terms_of_service="https://www.google.com/policies/terms/",
contact=openapi.Contact(email="contact@snippets.local"), def get(self, request):
license=openapi.License(name="BSD License"), generator = SuperUserSchemaGenerator(title='Ansible Automation Platform controller API', patterns=None, urlconf=None)
), schema = generator.get_schema(request=request)
public=True, # python core-api doesn't support the deprecation yet, so track it
permission_classes=[AllowAny], # ourselves and return it in a response header
) _deprecated = []
# By default, DRF OpenAPI serialization places all endpoints in
# a single node based on their root path (/api). Instead, we want to
# group them by topic/tag so that they're categorized in the rendered
# output
document = schema._data.pop('api')
for path, node in document.items():
if isinstance(node, Object):
for action in node.values():
topic = getattr(action, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if isinstance(action, Object):
for link in action.links.values():
if link.deprecated:
_deprecated.append(link.url)
elif isinstance(node, Link):
topic = getattr(node, 'topic', None)
if topic:
schema._data.setdefault(topic, Object())
schema._data[topic]._data[path] = node
if not schema:
raise exceptions.ValidationError('The schema generator did not return a schema Document')
return Response(schema, headers={'X-Deprecated-Paths': json.dumps(_deprecated)})


@@ -7,12 +7,10 @@ the following fields (some fields may not be visible to all users):
* `project_base_dir`: Path on the server where projects and playbooks are \ * `project_base_dir`: Path on the server where projects and playbooks are \
stored. stored.
* `project_local_paths`: List of directories beneath `project_base_dir` to * `project_local_paths`: List of directories beneath `project_base_dir` to
use when creating/editing a manual project. use when creating/editing a project.
* `time_zone`: The configured time zone for the server. * `time_zone`: The configured time zone for the server.
* `license_info`: Information about the current license. * `license_info`: Information about the current license.
* `version`: Version of Ansible Tower package installed. * `version`: Version of Ansible Tower package installed.
* `custom_virtualenvs`: Deprecated venv locations from before migration to
execution environments. Export tooling is in `awx-manage` commands.
* `eula`: The current End-User License Agreement * `eula`: The current End-User License Agreement
{% endifmeth %} {% endifmeth %}
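For reference, a minimal `requests`-based sketch of reading the configuration fields listed above. The `/api/v2/config/` path, the controller URL, and the bearer token are assumptions not shown in this fragment; adjust them to your deployment.

# Sketch only: fetch the configuration resource and print a few of the fields
# described above. BASE_URL and TOKEN are hypothetical placeholders.
import requests

BASE_URL = "https://awx.example.com"   # assumed controller URL
TOKEN = "<token>"                      # assumed OAuth2/personal access token

resp = requests.get(
    f"{BASE_URL}/api/v2/config/",      # assumed mount point for this resource
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
config = resp.json()
print(config.get("time_zone"), config.get("version"))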


@@ -0,0 +1,4 @@
Version 1 of the Ansible Tower REST API.
Make a GET request to this resource to obtain a list of all child resources
available via the API.


@@ -1,41 +0,0 @@
# Bulk Host Create
This endpoint allows the client to create multiple hosts and associate them with an inventory. They may do this by providing the inventory ID and a list of json that would normally be provided to create hosts.
Example:
{
"inventory": 1,
"hosts": [
{"name": "example1.com", "variables": "ansible_connection: local"},
{"name": "example2.com"}
]
}
Return data:
{
"url": "/api/v2/inventories/3/hosts/",
"hosts": [
{
"name": "example1.com",
"enabled": true,
"instance_id": "",
"description": "",
"variables": "ansible_connection: local",
"id": 1255,
"url": "/api/v2/hosts/1255/",
"inventory": "/api/v2/inventories/3/"
},
{
"name": "example2.com",
"enabled": true,
"instance_id": "",
"description": "",
"variables": "",
"id": 1256,
"url": "/api/v2/hosts/1256/",
"inventory": "/api/v2/inventories/3/"
}
]
}
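A minimal sketch of calling the bulk host create endpoint described above with the `requests` library, using the same payload as the example. The `/api/v2/bulk/host_create/` path, the controller URL, and the token handling are assumptions; the printed fields come from the return data shown above.

# Sketch only: create two hosts in inventory 1 in a single request.
import requests

payload = {
    "inventory": 1,
    "hosts": [
        {"name": "example1.com", "variables": "ansible_connection: local"},
        {"name": "example2.com"},
    ],
}
resp = requests.post(
    "https://awx.example.com/api/v2/bulk/host_create/",  # assumed path
    json=payload,
    headers={"Authorization": "Bearer <token>"},         # assumed auth
)
resp.raise_for_status()
for host in resp.json()["hosts"]:
    print(host["id"], host["name"], host["url"])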


@@ -1,13 +0,0 @@
# Bulk Job Launch
This endpoint allows the client to launch multiple UnifiedJobTemplates at a time, along side any launch time parameters that they would normally set at launch time.
Example:
{
"name": "my bulk job",
"jobs": [
{"unified_job_template": 7, "inventory": 2},
{"unified_job_template": 7, "credentials": [3]}
]
}
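A minimal sketch of launching the bulk job shown above. The serializer in this changeset wraps the launched jobs in a workflow job, so the response is that workflow job's representation. The `/api/v2/bulk/job_launch/` path, the controller URL, and the token handling are assumptions.

# Sketch only: launch two jobs from unified job template 7 in one request.
import requests

payload = {
    "name": "my bulk job",
    "jobs": [
        {"unified_job_template": 7, "inventory": 2},
        {"unified_job_template": 7, "credentials": [3]},
    ],
}
resp = requests.post(
    "https://awx.example.com/api/v2/bulk/job_launch/",  # assumed path
    json=payload,
    headers={"Authorization": "Bearer <token>"},        # assumed auth
)
resp.raise_for_status()
workflow_job = resp.json()
print(workflow_job["id"], workflow_job["status"])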


@@ -1,3 +0,0 @@
# Bulk Actions
This endpoint lists available bulk action APIs.


@@ -3,7 +3,7 @@ Make a GET request to this resource to retrieve aggregate statistics about inven
Including fetching the number of total hosts tracked by Tower over an amount of time and the current success or Including fetching the number of total hosts tracked by Tower over an amount of time and the current success or
failed status of hosts which have run jobs within an Inventory. failed status of hosts which have run jobs within an Inventory.
## Parameters and Filtering ## Parmeters and Filtering
The `period` of the data can be adjusted with: The `period` of the data can be adjusted with:
@@ -24,7 +24,7 @@ Data about the number of hosts will be returned in the following format:
Each element contains an epoch timestamp represented in seconds and a numerical value indicating Each element contains an epoch timestamp represented in seconds and a numerical value indicating
the number of hosts that exist at a given moment the number of hosts that exist at a given moment
Data about failed and successful hosts by inventory will be given as: Data about failed and successfull hosts by inventory will be given as:
{ {
"sources": [ "sources": [


@@ -2,7 +2,7 @@
Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing. Make a GET request to this resource to retrieve aggregate statistics about job runs suitable for graphing.
## Parameters and Filtering ## Parmeters and Filtering
The `period` of the data can be adjusted with: The `period` of the data can be adjusted with:


@@ -0,0 +1,11 @@
# List Fact Scans for a Host Specific Host Scan
Make a GET request to this resource to retrieve system tracking data for a particular scan
You may filter by datetime:
`?datetime=2015-06-01`
and module
`?datetime=2015-06-01&module=ansible`


@@ -0,0 +1,11 @@
# List Fact Scans for a Host by Module and Date
Make a GET request to this resource to retrieve system tracking scans by module and date/time
You may filter scan runs using the `from` and `to` properties:
`?from=2015-06-01%2012:00:00&to=2015-06-03`
You may also filter by module
`?module=packages`
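A minimal sketch of applying the `from`, `to`, and `module` filters described above. The exact list URL for a host's fact scans is not shown in this fragment, so the path below (and the assumption of a paginated `results` key) is illustrative only.

# Sketch only: list fact scans for host 42, filtered by time window and module.
import requests

resp = requests.get(
    "https://awx.example.com/api/v2/hosts/42/fact_versions/",  # assumed path
    params={"from": "2015-06-01 12:00:00", "to": "2015-06-03", "module": "packages"},
    headers={"Authorization": "Bearer <token>"},               # assumed auth
)
resp.raise_for_status()
for scan in resp.json().get("results", []):  # assumes a standard paginated list
    print(scan)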


@@ -0,0 +1 @@
# List Red Hat Insights for a Host


@@ -1,18 +0,0 @@
{% ifmeth GET %}
# Retrieve {{ model_verbose_name|title|anora }}:
Make GET request to this resource to retrieve a single {{ model_verbose_name }}
record containing the following fields:
{% include "api/_result_fields_common.md" %}
{% endifmeth %}
{% ifmeth DELETE %}
# Delete {{ model_verbose_name|title|anora }}:
Make a DELETE request to this resource to soft-delete this {{ model_verbose_name }}.
A soft deletion will mark the `deleted` field as true and exclude the host
metric from license calculations.
This may be undone later if the same hostname is automated again afterwards.
{% endifmeth %}
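A minimal sketch of the GET/DELETE behaviour described above for a single host metric record. The `/api/v2/host_metrics/<id>/` path, the record id, and the token handling are assumptions; the `hostname` and `deleted` fields come from the HostMetricSerializer in this changeset.

# Sketch only: fetch one host metric record, then soft-delete it.
import requests

url = "https://awx.example.com/api/v2/host_metrics/10/"  # assumed path and id
headers = {"Authorization": "Bearer <token>"}            # assumed auth

metric = requests.get(url, headers=headers)
metric.raise_for_status()
print(metric.json()["hostname"], metric.json()["deleted"])

# Soft delete: marks `deleted` true and excludes the record from license counts.
resp = requests.delete(url, headers=headers)
resp.raise_for_status()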


@@ -18,7 +18,7 @@ inventory sources:
* `inventory_update`: ID of the inventory update job that was started. * `inventory_update`: ID of the inventory update job that was started.
(integer, read-only) (integer, read-only)
* `project_update`: ID of the project update job that was started if this inventory source is an SCM source. * `project_update`: ID of the project update job that was started if this inventory source is an SCM source.
(integer, read-only, optional) (interger, read-only, optional)
Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories. Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories.
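A minimal sketch of triggering updates for an inventory's sources and reading back the fields listed above. The `/api/v2/inventories/<id>/update_inventory_sources/` path, the inventory id, and the token handling are assumptions.

# Sketch only: update all updatable sources of inventory 5 and report results.
import requests

resp = requests.post(
    "https://awx.example.com/api/v2/inventories/5/update_inventory_sources/",  # assumed path
    headers={"Authorization": "Bearer <token>"},                               # assumed auth
)
resp.raise_for_status()
for result in resp.json():
    print(result["inventory_source"], result["status"], result.get("inventory_update"))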


@@ -0,0 +1,21 @@
{% ifmeth GET %}
# Determine if a Job can be started
Make a GET request to this resource to determine if the job can be started and
whether any passwords are required to start the job. The response will include
the following fields:
* `can_start`: Flag indicating if this job can be started (boolean, read-only)
* `passwords_needed_to_start`: Password names required to start the job (array,
read-only)
{% endifmeth %}
{% ifmeth POST %}
# Start a Job
Make a POST request to this resource to start the job. If any passwords are
required, they must be passed via POST data.
If successful, the response status code will be 202. If any required passwords
are not provided, a 400 status code will be returned. If the job cannot be
started, a 405 status code will be returned.
{% endifmeth %}
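A minimal sketch of the GET-then-POST contract described above: check whether the job can be started, supply any required passwords, and start it. The `/api/v2/jobs/<id>/start/` path, the job id, the password values, and the token handling are assumptions.

# Sketch only: start job 99 if it can be started, passing required passwords.
import requests

url = "https://awx.example.com/api/v2/jobs/99/start/"  # assumed path and id
headers = {"Authorization": "Bearer <token>"}          # assumed auth

info = requests.get(url, headers=headers)
info.raise_for_status()
if info.json().get("can_start"):
    # Placeholder values; real passwords must be supplied by the caller.
    passwords = {name: "..." for name in info.json().get("passwords_needed_to_start", [])}
    started = requests.post(url, json=passwords, headers=headers)
    print(started.status_code)  # 202 on success, 400 if passwords missing, 405 if not startable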


@@ -2,17 +2,16 @@ receptor_user: awx
receptor_group: awx receptor_group: awx
receptor_verify: true receptor_verify: true
receptor_tls: true receptor_tls: true
receptor_mintls13: false
receptor_work_commands: receptor_work_commands:
ansible-runner: ansible-runner:
command: ansible-runner command: ansible-runner
params: worker params: worker
allowruntimeparams: true allowruntimeparams: true
verifysignature: true verifysignature: true
custom_worksign_public_keyfile: receptor/work_public_key.pem custom_worksign_public_keyfile: receptor/work-public-key.pem
custom_tls_certfile: receptor/tls/receptor.crt custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt custom_ca_certfile: receptor/tls/ca/receptor-ca.crt
receptor_protocol: 'tcp' receptor_protocol: 'tcp'
receptor_listener: true receptor_listener: true
receptor_port: {{ instance.listener_port }} receptor_port: {{ instance.listener_port }}


@@ -1,31 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
import awx.api.views.analytics as analytics
urls = [
re_path(r'^$', analytics.AnalyticsRootView.as_view(), name='analytics_root_view'),
re_path(r'^authorized/$', analytics.AnalyticsAuthorizedView.as_view(), name='analytics_authorized'),
re_path(r'^reports/$', analytics.AnalyticsReportsList.as_view(), name='analytics_reports_list'),
re_path(r'^report/(?P<slug>[\w-]+)/$', analytics.AnalyticsReportDetail.as_view(), name='analytics_report_detail'),
re_path(r'^report_options/$', analytics.AnalyticsReportOptionsList.as_view(), name='analytics_report_options_list'),
re_path(r'^adoption_rate/$', analytics.AnalyticsAdoptionRateList.as_view(), name='analytics_adoption_rate'),
re_path(r'^adoption_rate_options/$', analytics.AnalyticsAdoptionRateList.as_view(), name='analytics_adoption_rate_options'),
re_path(r'^event_explorer/$', analytics.AnalyticsEventExplorerList.as_view(), name='analytics_event_explorer'),
re_path(r'^event_explorer_options/$', analytics.AnalyticsEventExplorerList.as_view(), name='analytics_event_explorer_options'),
re_path(r'^host_explorer/$', analytics.AnalyticsHostExplorerList.as_view(), name='analytics_host_explorer'),
re_path(r'^host_explorer_options/$', analytics.AnalyticsHostExplorerList.as_view(), name='analytics_host_explorer_options'),
re_path(r'^job_explorer/$', analytics.AnalyticsJobExplorerList.as_view(), name='analytics_job_explorer'),
re_path(r'^job_explorer_options/$', analytics.AnalyticsJobExplorerList.as_view(), name='analytics_job_explorer_options'),
re_path(r'^probe_templates/$', analytics.AnalyticsProbeTemplatesList.as_view(), name='analytics_probe_templates_explorer'),
re_path(r'^probe_templates_options/$', analytics.AnalyticsProbeTemplatesList.as_view(), name='analytics_probe_templates_options'),
re_path(r'^probe_template_for_hosts/$', analytics.AnalyticsProbeTemplateForHostsList.as_view(), name='analytics_probe_template_for_hosts_explorer'),
re_path(r'^probe_template_for_hosts_options/$', analytics.AnalyticsProbeTemplateForHostsList.as_view(), name='analytics_probe_template_for_hosts_options'),
re_path(r'^roi_templates/$', analytics.AnalyticsRoiTemplatesList.as_view(), name='analytics_roi_templates_explorer'),
re_path(r'^roi_templates_options/$', analytics.AnalyticsRoiTemplatesList.as_view(), name='analytics_roi_templates_options'),
]
__all__ = ['urls']

View File

@@ -1,10 +0,0 @@
# Copyright (c) 2017 Ansible, Inc.
# All Rights Reserved.
from django.urls import re_path
from awx.api.views import HostMetricList, HostMetricDetail
urls = [re_path(r'^$', HostMetricList.as_view(), name='host_metric_list'), re_path(r'^(?P<pk>[0-9]+)/$', HostMetricDetail.as_view(), name='host_metric_detail')]
__all__ = ['urls']

View File

@@ -3,14 +3,7 @@
from django.urls import re_path from django.urls import re_path
from awx.api.views import ( from awx.api.views import InstanceGroupList, InstanceGroupDetail, InstanceGroupUnifiedJobsList, InstanceGroupInstanceList
InstanceGroupList,
InstanceGroupDetail,
InstanceGroupUnifiedJobsList,
InstanceGroupInstanceList,
InstanceGroupAccessList,
InstanceGroupObjectRolesList,
)
urls = [ urls = [
@@ -18,8 +11,6 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'), re_path(r'^(?P<pk>[0-9]+)/$', InstanceGroupDetail.as_view(), name='instance_group_detail'),
re_path(r'^(?P<pk>[0-9]+)/jobs/$', InstanceGroupUnifiedJobsList.as_view(), name='instance_group_unified_jobs_list'), re_path(r'^(?P<pk>[0-9]+)/jobs/$', InstanceGroupUnifiedJobsList.as_view(), name='instance_group_unified_jobs_list'),
re_path(r'^(?P<pk>[0-9]+)/instances/$', InstanceGroupInstanceList.as_view(), name='instance_group_instance_list'), re_path(r'^(?P<pk>[0-9]+)/instances/$', InstanceGroupInstanceList.as_view(), name='instance_group_instance_list'),
re_path(r'^(?P<pk>[0-9]+)/access_list/$', InstanceGroupAccessList.as_view(), name='instance_group_access_list'),
re_path(r'^(?P<pk>[0-9]+)/object_roles/$', InstanceGroupObjectRolesList.as_view(), name='instance_group_object_role_list'),
] ]
__all__ = ['urls'] __all__ = ['urls']

View File

@@ -30,19 +30,10 @@ from awx.api.views import (
OAuth2TokenList, OAuth2TokenList,
ApplicationOAuth2TokenList, ApplicationOAuth2TokenList,
OAuth2ApplicationDetail, OAuth2ApplicationDetail,
# HostMetricSummaryMonthlyList, # It will be enabled in future version of the AWX
) )
from awx.api.views.bulk import (
BulkView,
BulkHostCreateView,
BulkJobLaunchView,
)
from awx.api.views.mesh_visualizer import MeshVisualizer from awx.api.views.mesh_visualizer import MeshVisualizer
from awx.api.views.metrics import MetricsView from awx.api.views.metrics import MetricsView
from awx.api.views.analytics import AWX_ANALYTICS_API_PREFIX
from .organization import urls as organization_urls from .organization import urls as organization_urls
from .user import urls as user_urls from .user import urls as user_urls
@@ -52,7 +43,6 @@ from .inventory import urls as inventory_urls, constructed_inventory_urls
from .execution_environments import urls as execution_environment_urls from .execution_environments import urls as execution_environment_urls
from .team import urls as team_urls from .team import urls as team_urls
from .host import urls as host_urls from .host import urls as host_urls
from .host_metric import urls as host_metric_urls
from .group import urls as group_urls from .group import urls as group_urls
from .inventory_source import urls as inventory_source_urls from .inventory_source import urls as inventory_source_urls
from .inventory_update import urls as inventory_update_urls from .inventory_update import urls as inventory_update_urls
@@ -83,7 +73,7 @@ from .oauth2 import urls as oauth2_urls
from .oauth2_root import urls as oauth2_root_urls from .oauth2_root import urls as oauth2_root_urls
from .workflow_approval_template import urls as workflow_approval_template_urls from .workflow_approval_template import urls as workflow_approval_template_urls
from .workflow_approval import urls as workflow_approval_urls from .workflow_approval import urls as workflow_approval_urls
from .analytics import urls as analytics_urls
v2_urls = [ v2_urls = [
re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'), re_path(r'^$', ApiV2RootView.as_view(), name='api_v2_root_view'),
@@ -122,9 +112,6 @@ v2_urls = [
re_path(r'^inventories/', include(inventory_urls)), re_path(r'^inventories/', include(inventory_urls)),
re_path(r'^constructed_inventories/', include(constructed_inventory_urls)), re_path(r'^constructed_inventories/', include(constructed_inventory_urls)),
re_path(r'^hosts/', include(host_urls)), re_path(r'^hosts/', include(host_urls)),
re_path(r'^host_metrics/', include(host_metric_urls)),
# It will be enabled in future version of the AWX
# re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^groups/', include(group_urls)), re_path(r'^groups/', include(group_urls)),
re_path(r'^inventory_sources/', include(inventory_source_urls)), re_path(r'^inventory_sources/', include(inventory_source_urls)),
re_path(r'^inventory_updates/', include(inventory_update_urls)), re_path(r'^inventory_updates/', include(inventory_update_urls)),
@@ -148,12 +135,8 @@ v2_urls = [
re_path(r'^unified_job_templates/$', UnifiedJobTemplateList.as_view(), name='unified_job_template_list'), re_path(r'^unified_job_templates/$', UnifiedJobTemplateList.as_view(), name='unified_job_template_list'),
re_path(r'^unified_jobs/$', UnifiedJobList.as_view(), name='unified_job_list'), re_path(r'^unified_jobs/$', UnifiedJobList.as_view(), name='unified_job_list'),
re_path(r'^activity_stream/', include(activity_stream_urls)), re_path(r'^activity_stream/', include(activity_stream_urls)),
re_path(rf'^{AWX_ANALYTICS_API_PREFIX}/', include(analytics_urls)),
re_path(r'^workflow_approval_templates/', include(workflow_approval_template_urls)), re_path(r'^workflow_approval_templates/', include(workflow_approval_template_urls)),
re_path(r'^workflow_approvals/', include(workflow_approval_urls)), re_path(r'^workflow_approvals/', include(workflow_approval_urls)),
re_path(r'^bulk/$', BulkView.as_view(), name='bulk'),
re_path(r'^bulk/host_create/$', BulkHostCreateView.as_view(), name='bulk_host_create'),
re_path(r'^bulk/job_launch/$', BulkJobLaunchView.as_view(), name='bulk_job_launch'),
] ]
@@ -167,13 +150,10 @@ urlpatterns = [
] ]
if MODE == 'development': if MODE == 'development':
# Only include these if we are in the development environment # Only include these if we are in the development environment
from awx.api.swagger import schema_view from awx.api.swagger import SwaggerSchemaView
urlpatterns += [re_path(r'^swagger/$', SwaggerSchemaView.as_view(), name='swagger_view')]
from awx.api.urls.debug import urls as debug_urls from awx.api.urls.debug import urls as debug_urls
urlpatterns += [re_path(r'^debug/', include(debug_urls))] urlpatterns += [re_path(r'^debug/', include(debug_urls))]
urlpatterns += [
re_path(r'^swagger(?P<format>\.json|\.yaml)/$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
re_path(r'^swagger/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
re_path(r'^redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
]
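
For context on the `schema_view.with_ui(...)` / `schema_view.without_ui(...)` registrations in this hunk: the `awx.api.swagger` module itself is not part of this diff, but a drf-yasg `schema_view` object compatible with those calls is typically constructed roughly as follows. The title, version, and permission settings here are assumptions, not values from the source:

```python
# Hypothetical sketch of a drf-yasg schema_view usable with the
# with_ui()/without_ui() URL registrations above; not the actual awx.api.swagger code.
from drf_yasg import openapi
from drf_yasg.views import get_schema_view
from rest_framework import permissions

schema_view = get_schema_view(
    openapi.Info(title="AWX API", default_version="v2"),  # placeholder metadata
    public=True,
    permission_classes=[permissions.AllowAny],
)
```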

View File

@@ -17,6 +17,7 @@ from collections import OrderedDict
from urllib3.exceptions import ConnectTimeoutError from urllib3.exceptions import ConnectTimeoutError
# Django # Django
from django.conf import settings from django.conf import settings
from django.core.exceptions import FieldError, ObjectDoesNotExist from django.core.exceptions import FieldError, ObjectDoesNotExist
@@ -29,7 +30,7 @@ from django.utils.safestring import mark_safe
from django.utils.timezone import now from django.utils.timezone import now
from django.views.decorators.csrf import csrf_exempt from django.views.decorators.csrf import csrf_exempt
from django.template.loader import render_to_string from django.template.loader import render_to_string
from django.http import HttpResponse, HttpResponseRedirect from django.http import HttpResponse
from django.contrib.contenttypes.models import ContentType from django.contrib.contenttypes.models import ContentType
from django.utils.translation import gettext_lazy as _ from django.utils.translation import gettext_lazy as _
@@ -62,7 +63,7 @@ from wsgiref.util import FileWrapper
# AWX # AWX
from awx.main.tasks.system import send_notifications, update_inventory_computed_fields from awx.main.tasks.system import send_notifications, update_inventory_computed_fields
from awx.main.access import get_user_queryset from awx.main.access import get_user_queryset, HostAccess
from awx.api.generics import ( from awx.api.generics import (
APIView, APIView,
BaseUsersList, BaseUsersList,
@@ -151,7 +152,7 @@ def api_exception_handler(exc, context):
if 'awx.named_url_rewritten' in req.environ and not str(getattr(exc, 'status_code', 0)).startswith('2'): if 'awx.named_url_rewritten' in req.environ and not str(getattr(exc, 'status_code', 0)).startswith('2'):
# if the URL was rewritten, and it's not a 2xx level status code, # if the URL was rewritten, and it's not a 2xx level status code,
# revert the request.path to its original value to avoid leaking # revert the request.path to its original value to avoid leaking
# any context about the existence of resources # any context about the existance of resources
req.path = req.environ['awx.named_url_rewritten'] req.path = req.environ['awx.named_url_rewritten']
if exc.status_code == 403: if exc.status_code == 403:
exc = NotFound(detail=_('Not found.')) exc = NotFound(detail=_('Not found.'))
@@ -171,7 +172,7 @@ class DashboardView(APIView):
user_inventory = get_user_queryset(request.user, models.Inventory) user_inventory = get_user_queryset(request.user, models.Inventory)
inventory_with_failed_hosts = user_inventory.filter(hosts_with_active_failures__gt=0) inventory_with_failed_hosts = user_inventory.filter(hosts_with_active_failures__gt=0)
user_inventory_external = user_inventory.filter(has_inventory_sources=True) user_inventory_external = user_inventory.filter(has_inventory_sources=True)
# if there are *zero* inventories, this aggregate query will be None, fall back to 0 # if there are *zero* inventories, this aggregrate query will be None, fall back to 0
failed_inventory = user_inventory.aggregate(Sum('inventory_sources_with_failures'))['inventory_sources_with_failures__sum'] or 0 failed_inventory = user_inventory.aggregate(Sum('inventory_sources_with_failures'))['inventory_sources_with_failures__sum'] or 0
data['inventories'] = { data['inventories'] = {
'url': reverse('api:inventory_list', request=request), 'url': reverse('api:inventory_list', request=request),
@@ -465,23 +466,6 @@ class InstanceGroupUnifiedJobsList(SubListAPIView):
relationship = "unifiedjob_set" relationship = "unifiedjob_set"
class InstanceGroupAccessList(ResourceAccessList):
model = models.User # needs to be User for AccessLists
parent_model = models.InstanceGroup
class InstanceGroupObjectRolesList(SubListAPIView):
model = models.Role
serializer_class = serializers.RoleSerializer
parent_model = models.InstanceGroup
search_fields = ('role_field', 'content_type__model')
def get_queryset(self):
po = self.get_parent_object()
content_type = ContentType.objects.get_for_model(self.parent_model)
return models.Role.objects.filter(content_type=content_type, object_id=po.pk)
class InstanceGroupInstanceList(InstanceGroupMembershipMixin, SubListAttachDetachAPIView): class InstanceGroupInstanceList(InstanceGroupMembershipMixin, SubListAttachDetachAPIView):
name = _("Instance Group's Instances") name = _("Instance Group's Instances")
model = models.Instance model = models.Instance
@@ -565,7 +549,7 @@ class LaunchConfigCredentialsBase(SubListAttachDetachAPIView):
if self.relationship not in ask_mapping: if self.relationship not in ask_mapping:
return {"msg": _("Related template cannot accept {} on launch.").format(self.relationship)} return {"msg": _("Related template cannot accept {} on launch.").format(self.relationship)}
elif sub.passwords_needed: elif sub.passwords_needed:
return {"msg": _("Credential that requires user input on launch cannot be used in saved launch configuration.")} return {"msg": _("Credential that requires user input on launch " "cannot be used in saved launch configuration.")}
ask_field_name = ask_mapping[self.relationship] ask_field_name = ask_mapping[self.relationship]
@@ -794,7 +778,13 @@ class ExecutionEnvironmentActivityStreamList(SubListAPIView):
parent_model = models.ExecutionEnvironment parent_model = models.ExecutionEnvironment
relationship = 'activitystream_set' relationship = 'activitystream_set'
search_fields = ('changes',) search_fields = ('changes',)
filter_read_permission = False
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
qs = self.request.user.get_queryset(self.model)
return qs.filter(execution_environment=parent)
class ProjectList(ListCreateAPIView): class ProjectList(ListCreateAPIView):
@@ -1541,41 +1531,6 @@ class HostRelatedSearchMixin(object):
return ret return ret
class HostMetricList(ListAPIView):
name = _("Host Metrics List")
model = models.HostMetric
serializer_class = serializers.HostMetricSerializer
permission_classes = (IsSystemAdminOrAuditor,)
search_fields = ('hostname', 'deleted')
def get_queryset(self):
return self.model.objects.all()
class HostMetricDetail(RetrieveDestroyAPIView):
name = _("Host Metric Detail")
model = models.HostMetric
serializer_class = serializers.HostMetricSerializer
permission_classes = (IsSystemAdminOrAuditor,)
def delete(self, request, *args, **kwargs):
self.get_object().soft_delete()
return Response(status=status.HTTP_204_NO_CONTENT)
# It will be enabled in future version of the AWX
# class HostMetricSummaryMonthlyList(ListAPIView):
# name = _("Host Metrics Summary Monthly")
# model = models.HostMetricSummaryMonthly
# serializer_class = serializers.HostMetricSummaryMonthlySerializer
# permission_classes = (IsSystemAdminOrAuditor,)
# search_fields = ('date',)
#
# def get_queryset(self):
# return self.model.objects.all()
class HostList(HostRelatedSearchMixin, ListCreateAPIView): class HostList(HostRelatedSearchMixin, ListCreateAPIView):
always_allow_superuser = False always_allow_superuser = False
model = models.Host model = models.Host
@@ -1613,14 +1568,6 @@ class HostAnsibleFactsDetail(RetrieveAPIView):
model = models.Host model = models.Host
serializer_class = serializers.AnsibleFactsSerializer serializer_class = serializers.AnsibleFactsSerializer
def get(self, request, *args, **kwargs):
obj = self.get_object()
if obj.inventory.kind == 'constructed':
# If this is a constructed inventory host, it is not the source of truth about facts
# redirect to the original input inventory host instead
return HttpResponseRedirect(reverse('api:host_ansible_facts_detail', kwargs={'pk': obj.instance_id}, request=self.request))
return super().get(request, *args, **kwargs)
class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIView): class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIView):
model = models.Host model = models.Host
@@ -1628,7 +1575,13 @@ class InventoryHostsList(HostRelatedSearchMixin, SubListCreateAttachDetachAPIVie
parent_model = models.Inventory parent_model = models.Inventory
relationship = 'hosts' relationship = 'hosts'
parent_key = 'inventory' parent_key = 'inventory'
filter_read_permission = False
def get_queryset(self):
inventory = self.get_parent_object()
qs = getattrd(inventory, self.relationship).all()
# Apply queryset optimizations
qs = qs.select_related(*HostAccess.select_related).prefetch_related(*HostAccess.prefetch_related)
return qs
class HostGroupsList(SubListCreateAttachDetachAPIView): class HostGroupsList(SubListCreateAttachDetachAPIView):
@@ -1716,7 +1669,7 @@ class GroupList(ListCreateAPIView):
class EnforceParentRelationshipMixin(object): class EnforceParentRelationshipMixin(object):
""" """
Useful when you have a self-referring ManyToManyRelationship. Useful when you have a self-refering ManyToManyRelationship.
* Tower uses a shallow (2-deep only) url pattern. For example: * Tower uses a shallow (2-deep only) url pattern. For example:
When an object hangs off of a parent object you would have the url of the When an object hangs off of a parent object you would have the url of the
@@ -2464,7 +2417,7 @@ class JobTemplateSurveySpec(GenericAPIView):
status=status.HTTP_400_BAD_REQUEST, status=status.HTTP_400_BAD_REQUEST,
) )
# if it's a multiselect or multiple choice, it must have coices listed # if it's a multiselect or multiple choice, it must have coices listed
# choices and defaults must come in as strings separated by /n characters. # choices and defualts must come in as strings seperated by /n characters.
if qtype == 'multiselect' or qtype == 'multiplechoice': if qtype == 'multiselect' or qtype == 'multiplechoice':
if 'choices' in survey_item: if 'choices' in survey_item:
if isinstance(survey_item['choices'], str): if isinstance(survey_item['choices'], str):
@@ -2501,7 +2454,7 @@ class JobTemplateSurveySpec(GenericAPIView):
return Response( return Response(
dict( dict(
error=_( error=_(
"$encrypted$ is a reserved keyword for password question defaults, survey question {idx} is type {survey_item[type]}." "$encrypted$ is a reserved keyword for password question defaults, " "survey question {idx} is type {survey_item[type]}."
).format(**context) ).format(**context)
), ),
status=status.HTTP_400_BAD_REQUEST, status=status.HTTP_400_BAD_REQUEST,
@@ -2569,7 +2522,16 @@ class JobTemplateCredentialsList(SubListCreateAttachDetachAPIView):
serializer_class = serializers.CredentialSerializer serializer_class = serializers.CredentialSerializer
parent_model = models.JobTemplate parent_model = models.JobTemplate
relationship = 'credentials' relationship = 'credentials'
filter_read_permission = False
def get_queryset(self):
# Return the full list of credentials
parent = self.get_parent_object()
self.check_parent_access(parent)
sublist_qs = getattrd(parent, self.relationship)
sublist_qs = sublist_qs.prefetch_related(
'created_by', 'modified_by', 'admin_role', 'use_role', 'read_role', 'admin_role__parents', 'admin_role__members'
)
return sublist_qs
def is_valid_relation(self, parent, sub, created=False): def is_valid_relation(self, parent, sub, created=False):
if sub.unique_hash() in [cred.unique_hash() for cred in parent.credentials.all()]: if sub.unique_hash() in [cred.unique_hash() for cred in parent.credentials.all()]:
@@ -2671,10 +2633,7 @@ class JobTemplateCallback(GenericAPIView):
# Permission class should have already validated host_config_key. # Permission class should have already validated host_config_key.
job_template = self.get_object() job_template = self.get_object()
# Attempt to find matching hosts based on remote address. # Attempt to find matching hosts based on remote address.
if job_template.inventory: matching_hosts = self.find_matching_hosts()
matching_hosts = self.find_matching_hosts()
else:
return Response({"msg": _("Cannot start automatically, an inventory is required.")}, status=status.HTTP_400_BAD_REQUEST)
# If the host is not found, update the inventory before trying to # If the host is not found, update the inventory before trying to
# match again. # match again.
inventory_sources_already_updated = [] inventory_sources_already_updated = []
@@ -2759,7 +2718,6 @@ class JobTemplateInstanceGroupsList(SubListAttachDetachAPIView):
serializer_class = serializers.InstanceGroupSerializer serializer_class = serializers.InstanceGroupSerializer
parent_model = models.JobTemplate parent_model = models.JobTemplate
relationship = 'instance_groups' relationship = 'instance_groups'
filter_read_permission = False
class JobTemplateAccessList(ResourceAccessList): class JobTemplateAccessList(ResourceAccessList):
@@ -2850,7 +2808,16 @@ class WorkflowJobTemplateNodeChildrenBaseList(EnforceParentRelationshipMixin, Su
relationship = '' relationship = ''
enforce_parent_relationship = 'workflow_job_template' enforce_parent_relationship = 'workflow_job_template'
search_fields = ('unified_job_template__name', 'unified_job_template__description') search_fields = ('unified_job_template__name', 'unified_job_template__description')
filter_read_permission = False
'''
Limit the set of WorkflowJobTemplateNodes to the related nodes of specified by
'relationship'
'''
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).all()
def is_valid_relation(self, parent, sub, created=False): def is_valid_relation(self, parent, sub, created=False):
if created: if created:
@@ -2925,7 +2892,14 @@ class WorkflowJobNodeChildrenBaseList(SubListAPIView):
parent_model = models.WorkflowJobNode parent_model = models.WorkflowJobNode
relationship = '' relationship = ''
search_fields = ('unified_job_template__name', 'unified_job_template__description') search_fields = ('unified_job_template__name', 'unified_job_template__description')
filter_read_permission = False
#
# Limit the set of WorkflowJobNodes to the related nodes of specified by self.relationship
#
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).all()
class WorkflowJobNodeSuccessNodesList(WorkflowJobNodeChildrenBaseList): class WorkflowJobNodeSuccessNodesList(WorkflowJobNodeChildrenBaseList):
@@ -3104,8 +3078,9 @@ class WorkflowJobTemplateWorkflowNodesList(SubListCreateAPIView):
relationship = 'workflow_job_template_nodes' relationship = 'workflow_job_template_nodes'
parent_key = 'workflow_job_template' parent_key = 'workflow_job_template'
search_fields = ('unified_job_template__name', 'unified_job_template__description') search_fields = ('unified_job_template__name', 'unified_job_template__description')
ordering = ('id',) # assure ordering by id for consistency
filter_read_permission = False def get_queryset(self):
return super(WorkflowJobTemplateWorkflowNodesList, self).get_queryset().order_by('id')
class WorkflowJobTemplateJobsList(SubListAPIView): class WorkflowJobTemplateJobsList(SubListAPIView):
@@ -3197,8 +3172,9 @@ class WorkflowJobWorkflowNodesList(SubListAPIView):
relationship = 'workflow_job_nodes' relationship = 'workflow_job_nodes'
parent_key = 'workflow_job' parent_key = 'workflow_job'
search_fields = ('unified_job_template__name', 'unified_job_template__description') search_fields = ('unified_job_template__name', 'unified_job_template__description')
ordering = ('id',) # assure ordering by id for consistency
filter_read_permission = False def get_queryset(self):
return super(WorkflowJobWorkflowNodesList, self).get_queryset().order_by('id')
class WorkflowJobCancel(GenericCancelView): class WorkflowJobCancel(GenericCancelView):
@@ -3333,6 +3309,7 @@ class JobLabelList(SubListAPIView):
serializer_class = serializers.LabelSerializer serializer_class = serializers.LabelSerializer
parent_model = models.Job parent_model = models.Job
relationship = 'labels' relationship = 'labels'
parent_key = 'job'
class WorkflowJobLabelList(JobLabelList): class WorkflowJobLabelList(JobLabelList):
@@ -3455,7 +3432,7 @@ class JobCreateSchedule(RetrieveAPIView):
config = obj.launch_config config = obj.launch_config
# Make up a name for the schedule, guarantee that it is unique # Make up a name for the schedule, guarentee that it is unique
name = 'Auto-generated schedule from job {}'.format(obj.id) name = 'Auto-generated schedule from job {}'.format(obj.id)
existing_names = models.Schedule.objects.filter(name__startswith=name).values_list('name', flat=True) existing_names = models.Schedule.objects.filter(name__startswith=name).values_list('name', flat=True)
if name in existing_names: if name in existing_names:
@@ -3511,7 +3488,11 @@ class BaseJobHostSummariesList(SubListAPIView):
relationship = 'job_host_summaries' relationship = 'job_host_summaries'
name = _('Job Host Summaries List') name = _('Job Host Summaries List')
search_fields = ('host_name',) search_fields = ('host_name',)
filter_read_permission = False
def get_queryset(self):
parent = self.get_parent_object()
self.check_parent_access(parent)
return getattr(parent, self.relationship).select_related('job', 'job__job_template', 'host')
class HostJobHostSummariesList(BaseJobHostSummariesList): class HostJobHostSummariesList(BaseJobHostSummariesList):
@@ -3642,7 +3623,7 @@ class JobJobEventsChildrenSummary(APIView):
# key is counter of meta events (i.e. verbose), value is uuid of the assigned parent # key is counter of meta events (i.e. verbose), value is uuid of the assigned parent
map_meta_counter_nested_uuid = {} map_meta_counter_nested_uuid = {}
# collapsible tree view in the UI only makes sense for tree-like # collapsable tree view in the UI only makes sense for tree-like
# hierarchy. If ansible is ran with a strategy like free or host_pinned, then # hierarchy. If ansible is ran with a strategy like free or host_pinned, then
# events can be out of sequential order, and no longer follow a tree structure # events can be out of sequential order, and no longer follow a tree structure
# E1 # E1
@@ -4055,7 +4036,7 @@ class UnifiedJobStdout(RetrieveAPIView):
return super(UnifiedJobStdout, self).retrieve(request, *args, **kwargs) return super(UnifiedJobStdout, self).retrieve(request, *args, **kwargs)
except models.StdoutMaxBytesExceeded as e: except models.StdoutMaxBytesExceeded as e:
response_message = _( response_message = _(
"Standard Output too large to display ({text_size} bytes), only download supported for sizes over {supported_size} bytes." "Standard Output too large to display ({text_size} bytes), " "only download supported for sizes over {supported_size} bytes."
).format(text_size=e.total, supported_size=e.supported) ).format(text_size=e.total, supported_size=e.supported)
if request.accepted_renderer.format == 'json': if request.accepted_renderer.format == 'json':
return Response({'range': {'start': 0, 'end': 1, 'absolute_end': 1}, 'content': response_message}) return Response({'range': {'start': 0, 'end': 1, 'absolute_end': 1}, 'content': response_message})
@@ -4309,7 +4290,7 @@ class WorkflowApprovalTemplateJobsList(SubListAPIView):
parent_key = 'workflow_approval_template' parent_key = 'workflow_approval_template'
class WorkflowApprovalList(ListAPIView): class WorkflowApprovalList(ListCreateAPIView):
model = models.WorkflowApproval model = models.WorkflowApproval
serializer_class = serializers.WorkflowApprovalListSerializer serializer_class = serializers.WorkflowApprovalListSerializer

View File

@@ -1,296 +0,0 @@
import requests
import logging
import urllib.parse as urlparse
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from django.utils import translation
from awx.api.generics import APIView, Response
from awx.api.permissions import AnalyticsPermission
from awx.api.versioning import reverse
from awx.main.utils import get_awx_version
from rest_framework import status
from collections import OrderedDict
AUTOMATION_ANALYTICS_API_URL_PATH = "/api/tower-analytics/v1"
AWX_ANALYTICS_API_PREFIX = 'analytics'
ERROR_UPLOAD_NOT_ENABLED = "analytics-upload-not-enabled"
ERROR_MISSING_URL = "missing-url"
ERROR_MISSING_USER = "missing-user"
ERROR_MISSING_PASSWORD = "missing-password"
ERROR_NO_DATA_OR_ENTITLEMENT = "no-data-or-entitlement"
ERROR_NOT_FOUND = "not-found"
ERROR_UNAUTHORIZED = "unauthorized"
ERROR_UNKNOWN = "unknown"
ERROR_UNSUPPORTED_METHOD = "unsupported-method"
logger = logging.getLogger('awx.api.views.analytics')
class MissingSettings(Exception):
"""Settings are not correct Exception"""
pass
class GetNotAllowedMixin(object):
def get(self, request, format=None):
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
class AnalyticsRootView(APIView):
permission_classes = (AnalyticsPermission,)
name = _('Automation Analytics')
swagger_topic = 'Automation Analytics'
def get(self, request, format=None):
data = OrderedDict()
data['authorized'] = reverse('api:analytics_authorized')
data['reports'] = reverse('api:analytics_reports_list')
data['report_options'] = reverse('api:analytics_report_options_list')
data['adoption_rate'] = reverse('api:analytics_adoption_rate')
data['adoption_rate_options'] = reverse('api:analytics_adoption_rate_options')
data['event_explorer'] = reverse('api:analytics_event_explorer')
data['event_explorer_options'] = reverse('api:analytics_event_explorer_options')
data['host_explorer'] = reverse('api:analytics_host_explorer')
data['host_explorer_options'] = reverse('api:analytics_host_explorer_options')
data['job_explorer'] = reverse('api:analytics_job_explorer')
data['job_explorer_options'] = reverse('api:analytics_job_explorer_options')
data['probe_templates'] = reverse('api:analytics_probe_templates_explorer')
data['probe_templates_options'] = reverse('api:analytics_probe_templates_options')
data['probe_template_for_hosts'] = reverse('api:analytics_probe_template_for_hosts_explorer')
data['probe_template_for_hosts_options'] = reverse('api:analytics_probe_template_for_hosts_options')
data['roi_templates'] = reverse('api:analytics_roi_templates_explorer')
data['roi_templates_options'] = reverse('api:analytics_roi_templates_options')
return Response(data)
class AnalyticsGenericView(APIView):
"""
Example:
headers = {
'Content-Type': 'application/json',
}
params = {
'limit': '20',
'offset': '0',
'sort_by': 'name:asc',
}
json_data = {
'limit': '20',
'offset': '0',
'sort_options': 'name',
'sort_order': 'asc',
'tags': [],
'slug': [],
'name': [],
'description': '',
}
response = requests.post(f'{AUTOMATION_ANALYTICS_API_URL}/reports/', params=params,
headers=headers, json=json_data)
return Response(response.json(), status=response.status_code)
"""
permission_classes = (AnalyticsPermission,)
@staticmethod
def _request_headers(request):
headers = {}
for header in ['Content-Type', 'Content-Length', 'Accept-Encoding', 'User-Agent', 'Accept']:
if request.headers.get(header, None):
headers[header] = request.headers.get(header)
headers['X-Rh-Analytics-Source'] = 'controller'
headers['X-Rh-Analytics-Source-Version'] = get_awx_version()
headers['Accept-Language'] = translation.get_language()
return headers
@staticmethod
def _get_analytics_path(request_path):
parts = request_path.split(f'{AWX_ANALYTICS_API_PREFIX}/')
path_specific = parts[-1]
return f"{AUTOMATION_ANALYTICS_API_URL_PATH}/{path_specific}"
def _get_analytics_url(self, request_path):
analytics_path = self._get_analytics_path(request_path)
url = getattr(settings, 'AUTOMATION_ANALYTICS_URL', None)
if not url:
raise MissingSettings(ERROR_MISSING_URL)
url_parts = urlparse.urlsplit(url)
analytics_url = urlparse.urlunsplit([url_parts.scheme, url_parts.netloc, analytics_path, url_parts.query, url_parts.fragment])
return analytics_url
@staticmethod
def _get_setting(setting_name, default, error_message):
setting = getattr(settings, setting_name, default)
if not setting:
raise MissingSettings(error_message)
return setting
@staticmethod
def _error_response(keyword, message=None, remote=True, remote_status_code=None, status_code=status.HTTP_403_FORBIDDEN):
text = {"error": {"remote": remote, "remote_status": remote_status_code, "keyword": keyword}}
if message:
text["error"]["message"] = message
return Response(text, status=status_code)
def _error_response_404(self, response):
try:
json_response = response.json()
# Subscription/entitlement problem or missing tenant data in AA db => HTTP 403
message = json_response.get('error', None)
if message:
return self._error_response(ERROR_NO_DATA_OR_ENTITLEMENT, message, remote=True, remote_status_code=response.status_code)
# Standard 404 problem => HTTP 404
message = json_response.get('detail', None) or response.text
except requests.exceptions.JSONDecodeError:
# Unexpected text => still HTTP 404
message = response.text
return self._error_response(ERROR_NOT_FOUND, message, remote=True, remote_status_code=status.HTTP_404_NOT_FOUND, status_code=status.HTTP_404_NOT_FOUND)
@staticmethod
def _update_response_links(json_response):
if not json_response.get('links', None):
return
for key, value in json_response['links'].items():
if value:
json_response['links'][key] = value.replace(AUTOMATION_ANALYTICS_API_URL_PATH, f"/api/v2/{AWX_ANALYTICS_API_PREFIX}")
def _forward_response(self, response):
try:
content_type = response.headers.get('content-type', '')
if content_type.find('application/json') != -1:
json_response = response.json()
self._update_response_links(json_response)
return Response(json_response, status=response.status_code)
except Exception as e:
logger.error(f"Analytics API: Response error: {e}")
return Response(response.content, status=response.status_code)
def _send_to_analytics(self, request, method):
try:
headers = self._request_headers(request)
self._get_setting('INSIGHTS_TRACKING_STATE', False, ERROR_UPLOAD_NOT_ENABLED)
url = self._get_analytics_url(request.path)
rh_user = self._get_setting('REDHAT_USERNAME', None, ERROR_MISSING_USER)
rh_password = self._get_setting('REDHAT_PASSWORD', None, ERROR_MISSING_PASSWORD)
if method not in ["GET", "POST", "OPTIONS"]:
return self._error_response(ERROR_UNSUPPORTED_METHOD, method, remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
else:
response = requests.request(
method,
url,
auth=(rh_user, rh_password),
verify=settings.INSIGHTS_CERT_PATH,
params=request.query_params,
headers=headers,
json=request.data,
timeout=(31, 31),
)
#
# Missing or wrong user/pass
#
if response.status_code == status.HTTP_401_UNAUTHORIZED:
text = (response.text or '').rstrip("\n")
return self._error_response(ERROR_UNAUTHORIZED, text, remote=True, remote_status_code=response.status_code)
#
# Not found, No entitlement or No data in Analytics
#
elif response.status_code == status.HTTP_404_NOT_FOUND:
return self._error_response_404(response)
#
# Success or not a 401/404 errors are just forwarded
#
else:
return self._forward_response(response)
except MissingSettings as e:
logger.warning(f"Analytics API: Setting missing: {e.args[0]}")
return self._error_response(e.args[0], remote=False)
except requests.exceptions.RequestException as e:
logger.error(f"Analytics API: Request error: {e}")
return self._error_response(ERROR_UNKNOWN, str(e), remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
except Exception as e:
logger.error(f"Analytics API: Error: {e}")
return self._error_response(ERROR_UNKNOWN, str(e), remote=False, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
class AnalyticsGenericListView(AnalyticsGenericView):
def get(self, request, format=None):
return self._send_to_analytics(request, method="GET")
def post(self, request, format=None):
return self._send_to_analytics(request, method="POST")
def options(self, request, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsGenericDetailView(AnalyticsGenericView):
def get(self, request, slug, format=None):
return self._send_to_analytics(request, method="GET")
def post(self, request, slug, format=None):
return self._send_to_analytics(request, method="POST")
def options(self, request, slug, format=None):
return self._send_to_analytics(request, method="OPTIONS")
class AnalyticsAuthorizedView(AnalyticsGenericListView):
name = _("Authorized")
class AnalyticsReportsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Reports")
swagger_topic = "Automation Analytics"
class AnalyticsReportDetail(AnalyticsGenericDetailView):
name = _("Report")
class AnalyticsReportOptionsList(AnalyticsGenericListView):
name = _("Report Options")
class AnalyticsAdoptionRateList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Adoption Rate")
class AnalyticsEventExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Event Explorer")
class AnalyticsHostExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Host Explorer")
class AnalyticsJobExplorerList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Job Explorer")
class AnalyticsProbeTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Templates")
class AnalyticsProbeTemplateForHostsList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("Probe Template For Hosts")
class AnalyticsRoiTemplatesList(GetNotAllowedMixin, AnalyticsGenericListView):
name = _("ROI Templates")

View File

@@ -1,74 +0,0 @@
from collections import OrderedDict
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer
from rest_framework.reverse import reverse
from rest_framework import status
from rest_framework.response import Response
from awx.main.models import UnifiedJob, Host
from awx.api.generics import (
GenericAPIView,
APIView,
)
from awx.api import (
serializers,
renderers,
)
class BulkView(APIView):
name = _('Bulk')
swagger_topic = 'Bulk'
permission_classes = [IsAuthenticated]
renderer_classes = [
renderers.BrowsableAPIRenderer,
JSONRenderer,
]
allowed_methods = ['GET', 'OPTIONS']
def get(self, request, format=None):
'''List top level resources'''
data = OrderedDict()
data['host_create'] = reverse('api:bulk_host_create', request=request)
data['job_launch'] = reverse('api:bulk_job_launch', request=request)
return Response(data)
class BulkJobLaunchView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = UnifiedJob
serializer_class = serializers.BulkJobLaunchSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
data = OrderedDict()
data['detail'] = "Specify a list of unified job templates to launch alongside their launchtime parameters"
return Response(data, status=status.HTTP_200_OK)
def post(self, request):
bulkjob_serializer = serializers.BulkJobLaunchSerializer(data=request.data, context={'request': request})
if bulkjob_serializer.is_valid():
result = bulkjob_serializer.create(bulkjob_serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(bulkjob_serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class BulkHostCreateView(GenericAPIView):
permission_classes = [IsAuthenticated]
model = Host
serializer_class = serializers.BulkHostCreateSerializer
allowed_methods = ['GET', 'POST', 'OPTIONS']
def get(self, request):
return Response({"detail": "Bulk create hosts with this endpoint"}, status=status.HTTP_200_OK)
def post(self, request):
serializer = serializers.BulkHostCreateSerializer(data=request.data, context={'request': request})
if serializer.is_valid():
result = serializer.create(serializer.validated_data)
return Response(result, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
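
These two views are registered at `/api/v2/bulk/host_create/` and `/api/v2/bulk/job_launch/` (see the URL patterns removed earlier in this diff). The exact request bodies are defined by `BulkHostCreateSerializer` and `BulkJobLaunchSerializer`, which are not shown here, so the field names below are assumptions for illustration only:

```python
# Hypothetical bulk host-create call; the 'inventory' and 'hosts' fields are assumed,
# and the real contract lives in serializers.BulkHostCreateSerializer (not in this diff).
import requests

resp = requests.post(
    "https://awx.example.com/api/v2/bulk/host_create/",  # placeholder controller host
    auth=("admin", "password"),                          # placeholder credentials
    json={
        "inventory": 1,  # assumed field: target inventory id
        "hosts": [{"name": "host1.example.com"}, {"name": "host2.example.com"}],
    },
)
print(resp.status_code, resp.json())
```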

View File

@@ -57,11 +57,13 @@ class InstanceInstallBundle(GenericAPIView):
with io.BytesIO() as f: with io.BytesIO() as f:
with tarfile.open(fileobj=f, mode='w:gz') as tar: with tarfile.open(fileobj=f, mode='w:gz') as tar:
# copy /etc/receptor/tls/ca/mesh-CA.crt to receptor/tls/ca in the tar file # copy /etc/receptor/tls/ca/receptor-ca.crt to receptor/tls/ca in the tar file
tar.add(os.path.realpath('/etc/receptor/tls/ca/mesh-CA.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/mesh-CA.crt") tar.add(
os.path.realpath('/etc/receptor/tls/ca/receptor-ca.crt'), arcname=f"{instance_obj.hostname}_install_bundle/receptor/tls/ca/receptor-ca.crt"
)
# copy /etc/receptor/work_public_key.pem to receptor/work_public_key.pem # copy /etc/receptor/signing/work-public-key.pem to receptor/work-public-key.pem
tar.add('/etc/receptor/work_public_key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work_public_key.pem") tar.add('/etc/receptor/signing/work-public-key.pem', arcname=f"{instance_obj.hostname}_install_bundle/receptor/work-public-key.pem")
# generate and write the receptor key to receptor/tls/receptor.key in the tar file # generate and write the receptor key to receptor/tls/receptor.key in the tar file
key, cert = generate_receptor_tls(instance_obj) key, cert = generate_receptor_tls(instance_obj)
@@ -159,14 +161,14 @@ def generate_receptor_tls(instance_obj):
.sign(key, hashes.SHA256()) .sign(key, hashes.SHA256())
) )
# sign csr with the receptor ca key from /etc/receptor/ca/mesh-CA.key # sign csr with the receptor ca key from /etc/receptor/ca/receptor-ca.key
with open('/etc/receptor/tls/ca/mesh-CA.key', 'rb') as f: with open('/etc/receptor/tls/ca/receptor-ca.key', 'rb') as f:
ca_key = serialization.load_pem_private_key( ca_key = serialization.load_pem_private_key(
f.read(), f.read(),
password=None, password=None,
) )
with open('/etc/receptor/tls/ca/mesh-CA.crt', 'rb') as f: with open('/etc/receptor/tls/ca/receptor-ca.crt', 'rb') as f:
ca_cert = x509.load_pem_x509_certificate(f.read()) ca_cert = x509.load_pem_x509_certificate(f.read())
cert = ( cert = (

View File

@@ -50,7 +50,7 @@ class UnifiedJobDeletionMixin(object):
return Response({"error": _("Job has not finished processing events.")}, status=status.HTTP_400_BAD_REQUEST) return Response({"error": _("Job has not finished processing events.")}, status=status.HTTP_400_BAD_REQUEST)
else: else:
# if it has been > 1 minute, events are probably lost # if it has been > 1 minute, events are probably lost
logger.warning('Allowing deletion of {} through the API without all events processed.'.format(obj.log_format)) logger.warning('Allowing deletion of {} through the API without all events ' 'processed.'.format(obj.log_format))
# Manually cascade delete events if unpartitioned job # Manually cascade delete events if unpartitioned job
if obj.has_unpartitioned_events: if obj.has_unpartitioned_events:

View File

@@ -61,6 +61,12 @@ class OrganizationList(OrganizationCountsMixin, ListCreateAPIView):
model = Organization model = Organization
serializer_class = OrganizationSerializer serializer_class = OrganizationSerializer
def get_queryset(self):
qs = Organization.accessible_objects(self.request.user, 'read_role')
qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role')
qs = qs.prefetch_related('created_by', 'modified_by')
return qs
class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView): class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPIView):
model = Organization model = Organization
@@ -201,7 +207,6 @@ class OrganizationInstanceGroupsList(SubListAttachDetachAPIView):
serializer_class = InstanceGroupSerializer serializer_class = InstanceGroupSerializer
parent_model = Organization parent_model = Organization
relationship = 'instance_groups' relationship = 'instance_groups'
filter_read_permission = False
class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView): class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
@@ -209,7 +214,6 @@ class OrganizationGalaxyCredentialsList(SubListAttachDetachAPIView):
serializer_class = CredentialSerializer serializer_class = CredentialSerializer
parent_model = Organization parent_model = Organization
relationship = 'galaxy_credentials' relationship = 'galaxy_credentials'
filter_read_permission = False
def is_valid_relation(self, parent, sub, created=False): def is_valid_relation(self, parent, sub, created=False):
if sub.kind != 'galaxy_api_token': if sub.kind != 'galaxy_api_token':

View File

@@ -20,7 +20,6 @@ from rest_framework import status
import requests import requests
from awx import MODE
from awx.api.generics import APIView from awx.api.generics import APIView
from awx.conf.registry import settings_registry from awx.conf.registry import settings_registry
from awx.main.analytics import all_collectors from awx.main.analytics import all_collectors
@@ -55,8 +54,6 @@ class ApiRootView(APIView):
data['custom_logo'] = settings.CUSTOM_LOGO data['custom_logo'] = settings.CUSTOM_LOGO
data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE data['login_redirect_override'] = settings.LOGIN_REDIRECT_OVERRIDE
if MODE == 'development':
data['swagger'] = drf_reverse('api:schema-swagger-ui')
return Response(data) return Response(data)
@@ -106,9 +103,6 @@ class ApiVersionRootView(APIView):
data['inventory_updates'] = reverse('api:inventory_update_list', request=request) data['inventory_updates'] = reverse('api:inventory_update_list', request=request)
data['groups'] = reverse('api:group_list', request=request) data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request) data['hosts'] = reverse('api:host_list', request=request)
data['host_metrics'] = reverse('api:host_metric_list', request=request)
# It will be enabled in future version of the AWX
# data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request) data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request) data['jobs'] = reverse('api:job_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request) data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)
@@ -128,8 +122,6 @@ class ApiVersionRootView(APIView):
data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request) data['workflow_job_template_nodes'] = reverse('api:workflow_job_template_node_list', request=request)
data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request) data['workflow_job_nodes'] = reverse('api:workflow_job_node_list', request=request)
data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request) data['mesh_visualizer'] = reverse('api:mesh_visualizer_view', request=request)
data['bulk'] = reverse('api:bulk', request=request)
data['analytics'] = reverse('api:analytics_root_view', request=request)
return Response(data) return Response(data)
@@ -280,9 +272,6 @@ class ApiV2ConfigView(APIView):
pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off' pendo_state = settings.PENDO_TRACKING_STATE if settings.PENDO_TRACKING_STATE in ('off', 'anonymous', 'detailed') else 'off'
# Guarding against settings.UI_NEXT being set to a non-boolean value
ui_next_state = settings.UI_NEXT if settings.UI_NEXT in (True, False) else False
data = dict( data = dict(
time_zone=settings.TIME_ZONE, time_zone=settings.TIME_ZONE,
license_info=license_data, license_info=license_data,
@@ -291,7 +280,6 @@ class ApiV2ConfigView(APIView):
analytics_status=pendo_state, analytics_status=pendo_state,
analytics_collectors=all_collectors(), analytics_collectors=all_collectors(),
become_methods=PRIVILEGE_ESCALATION_METHODS, become_methods=PRIVILEGE_ESCALATION_METHODS,
ui_next=ui_next_state,
) )
# If LDAP is enabled, user_ldap_fields will return a list of field # If LDAP is enabled, user_ldap_fields will return a list of field

View File

@@ -114,7 +114,7 @@ class WebhookReceiverBase(APIView):
# Ensure that the full contents of the request are captured for multiple uses. # Ensure that the full contents of the request are captured for multiple uses.
request.body request.body
logger.debug("headers: {}\ndata: {}\n".format(request.headers, request.data)) logger.debug("headers: {}\n" "data: {}\n".format(request.headers, request.data))
obj = self.get_object() obj = self.get_object()
self.check_signature(obj) self.check_signature(obj)

View File

@@ -14,7 +14,7 @@ class ConfConfig(AppConfig):
def ready(self): def ready(self):
self.module.autodiscover() self.module.autodiscover()
if not set(sys.argv) & {'migrate', 'check_migrations', 'showmigrations'}: if not set(sys.argv) & {'migrate', 'check_migrations'}:
from .settings import SettingsWrapper from .settings import SettingsWrapper
SettingsWrapper.initialize() SettingsWrapper.initialize()

View File

@@ -21,7 +21,7 @@ logger = logging.getLogger('awx.conf.fields')
# Use DRF fields to convert/validate settings: # Use DRF fields to convert/validate settings:
# - to_representation(obj) should convert a native Python object to a primitive # - to_representation(obj) should convert a native Python object to a primitive
# serializable type. This primitive type will be what is presented in the API # serializable type. This primitive type will be what is presented in the API
# and stored in the JSON field in the database. # and stored in the JSON field in the datbase.
# - to_internal_value(data) should convert the primitive type back into the # - to_internal_value(data) should convert the primitive type back into the
# appropriate Python type to be used in settings. # appropriate Python type to be used in settings.

View File

@@ -1,17 +0,0 @@
# Generated by Django 4.2 on 2023-06-09 19:51
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('conf', '0009_rename_proot_settings'),
]
operations = [
migrations.AlterField(
model_name='setting',
name='value',
field=models.JSONField(null=True),
),
]

View File

@@ -1,11 +1,7 @@
import inspect import inspect
from django.conf import settings from django.conf import settings
from django.utils.timezone import now
import logging
logger = logging.getLogger('awx.conf.migrations')
def fill_ldap_group_type_params(apps, schema_editor): def fill_ldap_group_type_params(apps, schema_editor):
@@ -19,7 +15,7 @@ def fill_ldap_group_type_params(apps, schema_editor):
entry = qs[0] entry = qs[0]
group_type_params = entry.value group_type_params = entry.value
else: else:
return # for new installs we prefer to use the default value entry = Setting(key='AUTH_LDAP_GROUP_TYPE_PARAMS', value=group_type_params, created=now(), modified=now())
init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:]) init_attrs = set(inspect.getfullargspec(group_type.__init__).args[1:])
for k in list(group_type_params.keys()): for k in list(group_type_params.keys()):
@@ -27,5 +23,4 @@ def fill_ldap_group_type_params(apps, schema_editor):
del group_type_params[k] del group_type_params[k]
entry.value = group_type_params entry.value = group_type_params
logger.warning(f'Migration updating AUTH_LDAP_GROUP_TYPE_PARAMS with value {entry.value}')
entry.save() entry.save()

View File

@@ -8,6 +8,7 @@ import json
from django.db import models from django.db import models
# AWX # AWX
from awx.main.fields import JSONBlob
from awx.main.models.base import CreatedModifiedModel, prevent_search from awx.main.models.base import CreatedModifiedModel, prevent_search
from awx.main.utils import encrypt_field from awx.main.utils import encrypt_field
from awx.conf import settings_registry from awx.conf import settings_registry
@@ -17,7 +18,7 @@ __all__ = ['Setting']
class Setting(CreatedModifiedModel): class Setting(CreatedModifiedModel):
key = models.CharField(max_length=255) key = models.CharField(max_length=255)
value = models.JSONField(null=True) value = JSONBlob(null=True)
user = prevent_search(models.ForeignKey('auth.User', related_name='settings', default=None, null=True, editable=False, on_delete=models.CASCADE)) user = prevent_search(models.ForeignKey('auth.User', related_name='settings', default=None, null=True, editable=False, on_delete=models.CASCADE))
def __str__(self): def __str__(self):

View File

@@ -5,13 +5,11 @@ import threading
import time import time
import os import os
from concurrent.futures import ThreadPoolExecutor
# Django # Django
from django.conf import LazySettings from django.conf import LazySettings
from django.conf import settings, UserSettingsHolder from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured, SynchronousOnlyOperation from django.core.exceptions import ImproperlyConfigured
from django.db import transaction, connection from django.db import transaction, connection
from django.db.utils import Error as DBError, ProgrammingError from django.db.utils import Error as DBError, ProgrammingError
from django.utils.functional import cached_property from django.utils.functional import cached_property
@@ -159,7 +157,7 @@ class EncryptedCacheProxy(object):
obj_id = self.cache.get(Setting.get_cache_id_key(key), default=empty) obj_id = self.cache.get(Setting.get_cache_id_key(key), default=empty)
if obj_id is empty: if obj_id is empty:
logger.info('Efficiency notice: Corresponding id not stored in cache %s', Setting.get_cache_id_key(key)) logger.info('Efficiency notice: Corresponding id not stored in cache %s', Setting.get_cache_id_key(key))
obj_id = getattr(_get_setting_from_db(self.registry, key), 'pk', None) obj_id = getattr(self._get_setting_from_db(key), 'pk', None)
elif obj_id == SETTING_CACHE_NONE: elif obj_id == SETTING_CACHE_NONE:
obj_id = None obj_id = None
return method(TransientSetting(pk=obj_id, value=value), 'value') return method(TransientSetting(pk=obj_id, value=value), 'value')
@@ -168,6 +166,11 @@ class EncryptedCacheProxy(object):
# a no-op; it just returns the provided value # a no-op; it just returns the provided value
return value return value
def _get_setting_from_db(self, key):
field = self.registry.get_setting_field(key)
if not field.read_only:
return Setting.objects.filter(key=key, user__isnull=True).order_by('pk').first()
def __getattr__(self, name): def __getattr__(self, name):
return getattr(self.cache, name) return getattr(self.cache, name)
@@ -183,22 +186,6 @@ def get_settings_to_cache(registry):
return dict([(key, SETTING_CACHE_NOTSET) for key in get_writeable_settings(registry)]) return dict([(key, SETTING_CACHE_NOTSET) for key in get_writeable_settings(registry)])
# Will first attempt to get the setting from the database in synchronous mode.
# If call from async context, it will attempt to get the setting from the database in a thread.
def _get_setting_from_db(registry, key):
def get_settings_from_db_sync(registry, key):
field = registry.get_setting_field(key)
if not field.read_only or key == 'INSTALL_UUID':
return Setting.objects.filter(key=key, user__isnull=True).order_by('pk').first()
try:
return get_settings_from_db_sync(registry, key)
except SynchronousOnlyOperation:
with ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(get_settings_from_db_sync, registry, key)
return future.result()
def get_cache_value(value): def get_cache_value(value):
"""Returns the proper special cache setting for a value """Returns the proper special cache setting for a value
based on instance type. based on instance type.
@@ -358,7 +345,7 @@ class SettingsWrapper(UserSettingsHolder):
setting_id = None setting_id = None
# this value is read-only, however we *do* want to fetch its value from the database # this value is read-only, however we *do* want to fetch its value from the database
if not field.read_only or name == 'INSTALL_UUID': if not field.read_only or name == 'INSTALL_UUID':
setting = _get_setting_from_db(self.registry, name) setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
if setting: if setting:
if getattr(field, 'encrypted', False): if getattr(field, 'encrypted', False):
value = decrypt_field(setting, 'value') value = decrypt_field(setting, 'value')

View File

@@ -94,7 +94,9 @@ def test_setting_singleton_retrieve_readonly(api_request, dummy_setting):
@pytest.mark.django_db @pytest.mark.django_db
def test_setting_singleton_update(api_request, dummy_setting): def test_setting_singleton_update(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch('awx.conf.views.clear_setting_cache'): with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 3}) api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 3})
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'})) response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert response.data['FOO_BAR'] == 3 assert response.data['FOO_BAR'] == 3
@@ -110,7 +112,7 @@ def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, du
# sure that the _Forbidden validator doesn't get used for the # sure that the _Forbidden validator doesn't get used for the
# fields. See also https://github.com/ansible/awx/issues/4099. # fields. See also https://github.com/ansible/awx/issues/4099.
with dummy_setting('FOO_BAR', field_class=sso_fields.SAMLOrgAttrField, category='FooBar', category_slug='foobar'), mock.patch( with dummy_setting('FOO_BAR', field_class=sso_fields.SAMLOrgAttrField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.clear_setting_cache' 'awx.conf.views.handle_setting_changes'
): ):
api_request( api_request(
'patch', 'patch',
@@ -124,7 +126,7 @@ def test_setting_singleton_update_hybriddictfield_with_forbidden(api_request, du
@pytest.mark.django_db @pytest.mark.django_db
def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy_setting): def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=4, category='FooBar', category_slug='foobar'), mock.patch( with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=4, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.clear_setting_cache' 'awx.conf.views.handle_setting_changes'
): ):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 5}) api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 5})
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'})) response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
@@ -134,7 +136,7 @@ def test_setting_singleton_update_dont_change_readonly_fields(api_request, dummy
@pytest.mark.django_db @pytest.mark.django_db
def test_setting_singleton_update_dont_change_encrypted_mark(api_request, dummy_setting): def test_setting_singleton_update_dont_change_encrypted_mark(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.CharField, encrypted=True, category='FooBar', category_slug='foobar'), mock.patch( with dummy_setting('FOO_BAR', field_class=fields.CharField, encrypted=True, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.clear_setting_cache' 'awx.conf.views.handle_setting_changes'
): ):
api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 'password'}) api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 'password'})
assert Setting.objects.get(key='FOO_BAR').value.startswith('$encrypted$') assert Setting.objects.get(key='FOO_BAR').value.startswith('$encrypted$')
@@ -153,14 +155,16 @@ def test_setting_singleton_update_runs_custom_validate(api_request, dummy_settin
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), dummy_validate( with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), dummy_validate(
'foobar', func_raising_exception 'foobar', func_raising_exception
), mock.patch('awx.conf.views.clear_setting_cache'): ), mock.patch('awx.conf.views.handle_setting_changes'):
response = api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 23}) response = api_request('patch', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}), data={'FOO_BAR': 23})
assert response.status_code == 400 assert response.status_code == 400
@pytest.mark.django_db @pytest.mark.django_db
def test_setting_singleton_delete(api_request, dummy_setting): def test_setting_singleton_delete(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch('awx.conf.views.clear_setting_cache'): with dummy_setting('FOO_BAR', field_class=fields.IntegerField, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.handle_setting_changes'
):
api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'})) api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'})) response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
assert not response.data['FOO_BAR'] assert not response.data['FOO_BAR']
@@ -169,7 +173,7 @@ def test_setting_singleton_delete(api_request, dummy_setting):
@pytest.mark.django_db @pytest.mark.django_db
def test_setting_singleton_delete_no_read_only_fields(api_request, dummy_setting): def test_setting_singleton_delete_no_read_only_fields(api_request, dummy_setting):
with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=23, category='FooBar', category_slug='foobar'), mock.patch( with dummy_setting('FOO_BAR', field_class=fields.IntegerField, read_only=True, default=23, category='FooBar', category_slug='foobar'), mock.patch(
'awx.conf.views.clear_setting_cache' 'awx.conf.views.handle_setting_changes'
): ):
api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'})) api_request('delete', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'})) response = api_request('get', reverse('api:setting_singleton_detail', kwargs={'category_slug': 'foobar'}))
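
These tests swap the mock.patch target between clear_setting_cache and handle_setting_changes so that no real dispatcher task fires during the request. Below is a self-contained sketch of that idiom; FakeTask and patch_setting are stand-ins for the real task object and API view, and only the patching mechanics mirror the tests above.

from unittest import mock

class FakeTask:
    def delay(self, changed_keys):
        raise RuntimeError("the real task must not run inside a unit test")

handle_setting_changes = FakeTask()   # module-level name, like awx.conf.views.handle_setting_changes

def patch_setting(key, value):
    # stand-in for the API view: pretend to persist, then notify the cluster
    handle_setting_changes.delay([key])
    return {key: value}

def test_patch_setting_does_not_dispatch_real_task():
    # mirrors mock.patch('awx.conf.views.handle_setting_changes') in the tests above
    with mock.patch(f"{__name__}.handle_setting_changes") as fake_task:
        assert patch_setting("FOO_BAR", 3) == {"FOO_BAR": 3}
        fake_task.delay.assert_called_once_with(["FOO_BAR"])

if __name__ == "__main__":
    test_patch_setting_does_not_dispatch_real_task()
    print("ok")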


@@ -1,25 +0,0 @@
import pytest
from awx.conf.migrations._ldap_group_type import fill_ldap_group_type_params
from awx.conf.models import Setting
from django.apps import apps
@pytest.mark.django_db
def test_fill_group_type_params_no_op():
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 0
@pytest.mark.django_db
def test_keep_old_setting_with_default_value():
Setting.objects.create(key='AUTH_LDAP_GROUP_TYPE', value={'name_attr': 'cn', 'member_attr': 'member'})
fill_ldap_group_type_params(apps, 'dont-use-me')
assert Setting.objects.count() == 1
s = Setting.objects.first()
assert s.value == {'name_attr': 'cn', 'member_attr': 'member'}
# NOTE: would be good to test the removal of attributes by migration
# but this requires fighting with the validator and is not done here


@@ -35,7 +35,7 @@ class TestStringListBooleanField:
field = StringListBooleanField() field = StringListBooleanField()
with pytest.raises(ValidationError) as e: with pytest.raises(ValidationError) as e:
field.to_internal_value(value) field.to_internal_value(value)
assert e.value.detail[0] == "Expected None, True, False, a string or list of strings but got {} instead.".format(type(value)) assert e.value.detail[0] == "Expected None, True, False, a string or list " "of strings but got {} instead.".format(type(value))
@pytest.mark.parametrize("value_in, value_known", FIELD_VALUES) @pytest.mark.parametrize("value_in, value_known", FIELD_VALUES)
def test_to_representation_valid(self, value_in, value_known): def test_to_representation_valid(self, value_in, value_known):
@@ -48,7 +48,7 @@ class TestStringListBooleanField:
field = StringListBooleanField() field = StringListBooleanField()
with pytest.raises(ValidationError) as e: with pytest.raises(ValidationError) as e:
field.to_representation(value) field.to_representation(value)
assert e.value.detail[0] == "Expected None, True, False, a string or list of strings but got {} instead.".format(type(value)) assert e.value.detail[0] == "Expected None, True, False, a string or list " "of strings but got {} instead.".format(type(value))
class TestListTuplesField: class TestListTuplesField:
@@ -67,7 +67,7 @@ class TestListTuplesField:
field = ListTuplesField() field = ListTuplesField()
with pytest.raises(ValidationError) as e: with pytest.raises(ValidationError) as e:
field.to_internal_value(value) field.to_internal_value(value)
assert e.value.detail[0] == "Expected a list of tuples of max length 2 but got {} instead.".format(t) assert e.value.detail[0] == "Expected a list of tuples of max length 2 " "but got {} instead.".format(t)
class TestStringListPathField: class TestStringListPathField:


@@ -26,11 +26,10 @@ from awx.api.generics import APIView, GenericAPIView, ListAPIView, RetrieveUpdat
from awx.api.permissions import IsSystemAdminOrAuditor from awx.api.permissions import IsSystemAdminOrAuditor
from awx.api.versioning import reverse from awx.api.versioning import reverse
from awx.main.utils import camelcase_to_underscore from awx.main.utils import camelcase_to_underscore
from awx.main.tasks.system import clear_setting_cache from awx.main.tasks.system import handle_setting_changes
from awx.conf.models import Setting from awx.conf.models import Setting
from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer from awx.conf.serializers import SettingCategorySerializer, SettingSingletonSerializer
from awx.conf import settings_registry from awx.conf import settings_registry
from awx.main.utils.external_logging import reconfigure_rsyslog
SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name')) SettingCategory = collections.namedtuple('SettingCategory', ('url', 'slug', 'name'))
@@ -119,10 +118,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.save(update_fields=['value']) setting.save(update_fields=['value'])
settings_change_list.append(key) settings_change_list.append(key)
if settings_change_list: if settings_change_list:
connection.on_commit(lambda: clear_setting_cache.delay(settings_change_list)) connection.on_commit(lambda: handle_setting_changes.delay(settings_change_list))
if any([setting.startswith('LOG_AGGREGATOR') for setting in settings_change_list]):
# call notify to rsyslog. no data is need so payload is empty
reconfigure_rsyslog.delay()
def destroy(self, request, *args, **kwargs): def destroy(self, request, *args, **kwargs):
instance = self.get_object() instance = self.get_object()
@@ -137,10 +133,7 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
setting.delete() setting.delete()
settings_change_list.append(setting.key) settings_change_list.append(setting.key)
if settings_change_list: if settings_change_list:
connection.on_commit(lambda: clear_setting_cache.delay(settings_change_list)) connection.on_commit(lambda: handle_setting_changes.delay(settings_change_list))
if any([setting.startswith('LOG_AGGREGATOR') for setting in settings_change_list]):
# call notify to rsyslog. no data is need so payload is empty
reconfigure_rsyslog.delay()
# When TOWER_URL_BASE is deleted from the API, reset it to the hostname # When TOWER_URL_BASE is deleted from the API, reset it to the hostname
# used to make the request as a default. # used to make the request as a default.
@@ -187,7 +180,7 @@ class SettingLoggingTest(GenericAPIView):
if not port: if not port:
return Response({'error': 'Port required for ' + protocol}, status=status.HTTP_400_BAD_REQUEST) return Response({'error': 'Port required for ' + protocol}, status=status.HTTP_400_BAD_REQUEST)
else: else:
# if http/https by this point, domain is reachable # if http/https by this point, domain is reacheable
return Response(status=status.HTTP_202_ACCEPTED) return Response(status=status.HTTP_202_ACCEPTED)
if protocol == 'udp': if protocol == 'udp':
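
Elsewhere in this views.py diff, one side hands the changed keys to a single handle_setting_changes task, while the other calls clear_setting_cache and checks LOG_AGGREGATOR keys inline in the view. The task body is not part of this diff, so the sketch below is only a plausible shape for such a consolidated handler, with print() stand-ins for the cache and rsyslog work.

def clear_cached_settings(keys):
    print(f"dropping cached values for: {keys}")

def reconfigure_rsyslog():
    print("re-rendering rsyslog.conf and restarting rsyslogd")

def handle_setting_changes(changed_keys):
    # single entry point: always invalidate the cache, and reload rsyslog only
    # when a logging-related setting changed (the check the view used to do inline)
    clear_cached_settings(changed_keys)
    if any(key.startswith("LOG_AGGREGATOR") for key in changed_keys):
        reconfigure_rsyslog()

handle_setting_changes(["LOG_AGGREGATOR_HOST", "TOWER_URL_BASE"])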


@@ -1972,7 +1972,7 @@ msgid ""
"HTTP headers and meta keys to search to determine remote host name or IP. " "HTTP headers and meta keys to search to determine remote host name or IP. "
"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if " "Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if "
"behind a reverse proxy. See the \"Proxy Support\" section of the " "behind a reverse proxy. See the \"Proxy Support\" section of the "
"Administrator guide for more details." "Adminstrator guide for more details."
msgstr "" msgstr ""
#: awx/main/conf.py:85 #: awx/main/conf.py:85
@@ -2457,7 +2457,7 @@ msgid ""
msgstr "" msgstr ""
#: awx/main/conf.py:631 #: awx/main/conf.py:631
msgid "Maximum disk persistence for external log aggregation (in GB)" msgid "Maximum disk persistance for external log aggregation (in GB)"
msgstr "" msgstr ""
#: awx/main/conf.py:633 #: awx/main/conf.py:633
@@ -2548,7 +2548,7 @@ msgid "Enable"
msgstr "" msgstr ""
#: awx/main/constants.py:27 #: awx/main/constants.py:27
msgid "Does" msgid "Doas"
msgstr "" msgstr ""
#: awx/main/constants.py:28 #: awx/main/constants.py:28
@@ -4801,7 +4801,7 @@ msgstr ""
#: awx/main/models/workflow.py:251 #: awx/main/models/workflow.py:251
msgid "" msgid ""
"An identifier corresponding to the workflow job template node that this node " "An identifier coresponding to the workflow job template node that this node "
"was created from." "was created from."
msgstr "" msgstr ""
@@ -5521,7 +5521,7 @@ msgstr ""
#: awx/sso/conf.py:606 #: awx/sso/conf.py:606
msgid "" msgid ""
"Extra arguments for Google OAuth2 login. You can restrict it to only allow a " "Extra arguments for Google OAuth2 login. You can restrict it to only allow a "
"single domain to authenticate, even if the user is logged in with multiple " "single domain to authenticate, even if the user is logged in with multple "
"Google accounts. Refer to the documentation for more detail." "Google accounts. Refer to the documentation for more detail."
msgstr "" msgstr ""
@@ -5905,7 +5905,7 @@ msgstr ""
#: awx/sso/conf.py:1290 #: awx/sso/conf.py:1290
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the " "Create a keypair to use as a service provider (SP) and include the "
"certificate content here." "certificate content here."
msgstr "" msgstr ""
@@ -5915,7 +5915,7 @@ msgstr ""
#: awx/sso/conf.py:1302 #: awx/sso/conf.py:1302
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the private " "Create a keypair to use as a service provider (SP) and include the private "
"key content here." "key content here."
msgstr "" msgstr ""


@@ -1971,7 +1971,7 @@ msgid ""
"HTTP headers and meta keys to search to determine remote host name or IP. " "HTTP headers and meta keys to search to determine remote host name or IP. "
"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if " "Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if "
"behind a reverse proxy. See the \"Proxy Support\" section of the " "behind a reverse proxy. See the \"Proxy Support\" section of the "
"Administrator guide for more details." "Adminstrator guide for more details."
msgstr "Los encabezados HTTP y las llaves de activación para buscar y determinar el nombre de host remoto o IP. Añada elementos adicionales a esta lista, como \"HTTP_X_FORWARDED_FOR\", si está detrás de un proxy inverso. Consulte la sección \"Soporte de proxy\" de la guía del adminstrador para obtener más información." msgstr "Los encabezados HTTP y las llaves de activación para buscar y determinar el nombre de host remoto o IP. Añada elementos adicionales a esta lista, como \"HTTP_X_FORWARDED_FOR\", si está detrás de un proxy inverso. Consulte la sección \"Soporte de proxy\" de la guía del adminstrador para obtener más información."
#: awx/main/conf.py:85 #: awx/main/conf.py:85
@@ -4804,7 +4804,7 @@ msgstr "Indica que un trabajo no se creará cuando es sea True. La semántica de
#: awx/main/models/workflow.py:251 #: awx/main/models/workflow.py:251
msgid "" msgid ""
"An identifier corresponding to the workflow job template node that this node " "An identifier coresponding to the workflow job template node that this node "
"was created from." "was created from."
msgstr "Un identificador que corresponde al nodo de plantilla de tarea del flujo de trabajo a partir del cual se creó este nodo." msgstr "Un identificador que corresponde al nodo de plantilla de tarea del flujo de trabajo a partir del cual se creó este nodo."
@@ -5526,7 +5526,7 @@ msgstr "Argumentos adicionales para Google OAuth2"
#: awx/sso/conf.py:606 #: awx/sso/conf.py:606
msgid "" msgid ""
"Extra arguments for Google OAuth2 login. You can restrict it to only allow a " "Extra arguments for Google OAuth2 login. You can restrict it to only allow a "
"single domain to authenticate, even if the user is logged in with multiple " "single domain to authenticate, even if the user is logged in with multple "
"Google accounts. Refer to the documentation for more detail." "Google accounts. Refer to the documentation for more detail."
msgstr "Argumentos adicionales para el inicio de sesión en Google OAuth2. Puede limitarlo para permitir la autenticación de un solo dominio, incluso si el usuario ha iniciado sesión con varias cuentas de Google. Consulte la documentación para obtener información detallada." msgstr "Argumentos adicionales para el inicio de sesión en Google OAuth2. Puede limitarlo para permitir la autenticación de un solo dominio, incluso si el usuario ha iniciado sesión con varias cuentas de Google. Consulte la documentación para obtener información detallada."
@@ -5910,7 +5910,7 @@ msgstr "Certificado público del proveedor de servicio SAML"
#: awx/sso/conf.py:1290 #: awx/sso/conf.py:1290
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the " "Create a keypair to use as a service provider (SP) and include the "
"certificate content here." "certificate content here."
msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido del certificado aquí." msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido del certificado aquí."
@@ -5920,7 +5920,7 @@ msgstr "Clave privada del proveedor de servicio SAML"
#: awx/sso/conf.py:1302 #: awx/sso/conf.py:1302
msgid "" msgid ""
"Create a key pair to use as a service provider (SP) and include the private " "Create a keypair to use as a service provider (SP) and include the private "
"key content here." "key content here."
msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido de la clave privada aquí." msgstr "Crear un par de claves para usar como proveedor de servicio (SP) e incluir el contenido de la clave privada aquí."


@@ -588,39 +588,17 @@ class InstanceAccess(BaseAccess):
class InstanceGroupAccess(BaseAccess): class InstanceGroupAccess(BaseAccess):
"""
I can see Instance Groups when I am:
- a superuser(system administrator)
- at least read_role on the instance group
I can edit Instance Groups when I am:
- a superuser
- admin role on the Instance group
I can add/delete Instance Groups:
- a superuser(system administrator)
I can use Instance Groups when I have:
- use_role on the instance group
"""
model = InstanceGroup model = InstanceGroup
prefetch_related = ('instances',) prefetch_related = ('instances',)
def filtered_queryset(self): def filtered_queryset(self):
return self.model.accessible_objects(self.user, 'read_role') return InstanceGroup.objects.filter(organization__in=Organization.accessible_pk_qs(self.user, 'admin_role')).distinct()
@check_superuser
def can_use(self, obj):
return self.user in obj.use_role
def can_add(self, data): def can_add(self, data):
return self.user.is_superuser return self.user.is_superuser
@check_superuser
def can_change(self, obj, data): def can_change(self, obj, data):
return self.can_admin(obj) return self.user.is_superuser
@check_superuser
def can_admin(self, obj):
return self.user in obj.admin_role
def can_delete(self, obj): def can_delete(self, obj):
if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]: if obj.name in [settings.DEFAULT_EXECUTION_QUEUE_NAME, settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]:
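
One side of the access-class changes above gates instance groups on the group's own roles (read_role, use_role, admin_role) rather than on superuser or organization-admin status. Below is a toy illustration of that membership-based check; Role and FakeInstanceGroup are stand-ins, not the AWX models.

class Role:
    def __init__(self):
        self.members = set()
    def __contains__(self, user):
        return user in self.members

class FakeInstanceGroup:
    def __init__(self):
        self.use_role = Role()
        self.admin_role = Role()

def can_use(user, instance_group, is_superuser=False):
    # mirrors the @check_superuser + "self.user in obj.use_role" pattern above
    return is_superuser or user in instance_group.use_role

ig = FakeInstanceGroup()
ig.use_role.members.add("alice")
print(can_use("alice", ig))                   # True: alice holds use_role on the group
print(can_use("bob", ig))                     # False: no role membership
print(can_use("bob", ig, is_superuser=True))  # True: superusers short-circuit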
@@ -867,7 +845,7 @@ class OrganizationAccess(NotificationAttachMixin, BaseAccess):
return RoleAccess(self.user).can_attach(rel_role, sub_obj, 'members', *args, **kwargs) return RoleAccess(self.user).can_attach(rel_role, sub_obj, 'members', *args, **kwargs)
if relationship == "instance_groups": if relationship == "instance_groups":
if self.user in obj.admin_role and self.user in sub_obj.use_role: if self.user.is_superuser:
return True return True
return False return False
return super(OrganizationAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs) return super(OrganizationAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
@@ -956,7 +934,7 @@ class InventoryAccess(BaseAccess):
def can_attach(self, obj, sub_obj, relationship, *args, **kwargs): def can_attach(self, obj, sub_obj, relationship, *args, **kwargs):
if relationship == "instance_groups": if relationship == "instance_groups":
if self.user in sub_obj.use_role and self.user in obj.admin_role: if self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role:
return True return True
return False return False
return super(InventoryAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs) return super(InventoryAccess, self).can_attach(obj, sub_obj, relationship, *args, **kwargs)
@@ -1693,12 +1671,11 @@ class JobTemplateAccess(NotificationAttachMixin, UnifiedCredentialsMixin, BaseAc
return self.user.is_superuser or self.user in obj.admin_role return self.user.is_superuser or self.user in obj.admin_role
@check_superuser @check_superuser
# object here is the job template. sub_object here is what is being attached
def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False): def can_attach(self, obj, sub_obj, relationship, data, skip_sub_obj_read_check=False):
if relationship == "instance_groups": if relationship == "instance_groups":
if not obj.organization: if not obj.organization:
return False return False
return self.user in sub_obj.use_role and self.user in obj.admin_role return self.user.can_access(type(sub_obj), "read", sub_obj) and self.user in obj.organization.admin_role
return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check) return super(JobTemplateAccess, self).can_attach(obj, sub_obj, relationship, data, skip_sub_obj_read_check=skip_sub_obj_read_check)
@check_superuser @check_superuser
@@ -1875,6 +1852,8 @@ class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
def _related_filtered_queryset(self, cls): def _related_filtered_queryset(self, cls):
if cls is Label: if cls is Label:
return LabelAccess(self.user).filtered_queryset() return LabelAccess(self.user).filtered_queryset()
elif cls is InstanceGroup:
return InstanceGroupAccess(self.user).filtered_queryset()
else: else:
return cls._accessible_pk_qs(cls, self.user, 'use_role') return cls._accessible_pk_qs(cls, self.user, 'use_role')
@@ -1886,7 +1865,6 @@ class JobLaunchConfigAccess(UnifiedCredentialsMixin, BaseAccess):
@check_superuser @check_superuser
def can_add(self, data, template=None): def can_add(self, data, template=None):
# WARNING: duplicated with BulkJobLaunchSerializer, check when changing permission levels
# This is a special case, we don't check related many-to-many elsewhere # This is a special case, we don't check related many-to-many elsewhere
# launch RBAC checks use this # launch RBAC checks use this
if 'reference_obj' in data: if 'reference_obj' in data:
@@ -2019,16 +1997,7 @@ class WorkflowJobNodeAccess(BaseAccess):
) )
def filtered_queryset(self): def filtered_queryset(self):
return self.model.objects.filter( return self.model.objects.filter(workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
Q(workflow_job__unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(workflow_job__organization__in=Organization.objects.filter(Q(admin_role__members=self.user)))
)
def can_read(self, obj):
"""Overriding this opens up detail view access for bulk jobs, where the workflow job has no associated workflow job template."""
if obj.workflow_job.is_bulk_job and obj.workflow_job.created_by_id == self.user.id:
return True
return super().can_read(obj)
@check_superuser @check_superuser
def can_add(self, data): def can_add(self, data):
@@ -2154,16 +2123,7 @@ class WorkflowJobAccess(BaseAccess):
) )
def filtered_queryset(self): def filtered_queryset(self):
return WorkflowJob.objects.filter( return WorkflowJob.objects.filter(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
Q(unified_job_template__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
| Q(organization__in=Organization.objects.filter(Q(admin_role__members=self.user)), is_bulk_job=True)
)
def can_read(self, obj):
"""Overriding this opens up detail view access for bulk jobs, where the workflow job has no associated workflow job template."""
if obj.is_bulk_job and obj.created_by_id == self.user.id:
return True
return super().can_read(obj)
def can_add(self, data): def can_add(self, data):
# Old add-start system for launching jobs is being depreciated, and # Old add-start system for launching jobs is being depreciated, and
@@ -2234,7 +2194,7 @@ class WorkflowJobAccess(BaseAccess):
if not node_access.can_add({'reference_obj': node}): if not node_access.can_add({'reference_obj': node}):
wj_add_perm = False wj_add_perm = False
if not wj_add_perm and self.save_messages: if not wj_add_perm and self.save_messages:
self.messages['workflow_job_template'] = _('You do not have permission to the workflow job resources required for relaunch.') self.messages['workflow_job_template'] = _('You do not have permission to the workflow job ' 'resources required for relaunch.')
return wj_add_perm return wj_add_perm
def can_cancel(self, obj): def can_cancel(self, obj):
@@ -2952,19 +2912,3 @@ class WorkflowApprovalTemplateAccess(BaseAccess):
for cls in BaseAccess.__subclasses__(): for cls in BaseAccess.__subclasses__():
access_registry[cls.model] = cls access_registry[cls.model] = cls
access_registry[UnpartitionedJobEvent] = UnpartitionedJobEventAccess access_registry[UnpartitionedJobEvent] = UnpartitionedJobEventAccess
def optimize_queryset(queryset):
"""
A utility method in case you already have a queryset and just want to
apply the standard optimizations for that model.
In other words, use if you do not want to start from filtered_queryset for some reason.
"""
if not queryset.model or queryset.model not in access_registry:
return queryset
access_class = access_registry[queryset.model]
if access_class.select_related:
queryset = queryset.select_related(*access_class.select_related)
if access_class.prefetch_related:
queryset = queryset.prefetch_related(*access_class.prefetch_related)
return queryset
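
optimize_queryset only applies the access class's standard select_related/prefetch_related hints to a queryset the caller already holds. Below is a framework-free illustration of the same lookup-and-apply flow; FakeQuerySet, FakeAccess and the 'created_by' field are toy stand-ins (only prefetch_related=('instances',) appears in the snippet above).

class FakeQuerySet:
    def __init__(self, model):
        self.model = model
        self.hints = []
    def select_related(self, *fields):
        self.hints.append(("select_related", fields))
        return self
    def prefetch_related(self, *fields):
        self.hints.append(("prefetch_related", fields))
        return self

class FakeAccess:
    select_related = ("created_by",)        # illustrative only
    prefetch_related = ("instances",)       # matches InstanceGroupAccess above

access_registry = {"InstanceGroup": FakeAccess}

def optimize_queryset(queryset):
    # same flow as the helper above: find the access class, apply its hints
    access_class = access_registry.get(queryset.model)
    if access_class is None:
        return queryset
    if access_class.select_related:
        queryset = queryset.select_related(*access_class.select_related)
    if access_class.prefetch_related:
        queryset = queryset.prefetch_related(*access_class.prefetch_related)
    return queryset

print(optimize_queryset(FakeQuerySet("InstanceGroup")).hints)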


@@ -4,11 +4,11 @@ import logging
# AWX # AWX
from awx.main.analytics.subsystem_metrics import Metrics from awx.main.analytics.subsystem_metrics import Metrics
from awx.main.dispatch.publish import task from awx.main.dispatch.publish import task
from awx.main.dispatch import get_task_queuename from awx.main.dispatch import get_local_queuename
logger = logging.getLogger('awx.main.scheduler') logger = logging.getLogger('awx.main.scheduler')
@task(queue=get_task_queuename) @task(queue=get_local_queuename)
def send_subsystem_metrics(): def send_subsystem_metrics():
Metrics().send_metrics() Metrics().send_metrics()


@@ -65,7 +65,7 @@ class FixedSlidingWindow:
return sum(self.buckets.values()) or 0 return sum(self.buckets.values()) or 0
class RelayWebsocketStatsManager: class BroadcastWebsocketStatsManager:
def __init__(self, event_loop, local_hostname): def __init__(self, event_loop, local_hostname):
self._local_hostname = local_hostname self._local_hostname = local_hostname
@@ -74,7 +74,7 @@ class RelayWebsocketStatsManager:
self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME self._redis_key = BROADCAST_WEBSOCKET_REDIS_KEY_NAME
def new_remote_host_stats(self, remote_hostname): def new_remote_host_stats(self, remote_hostname):
self._stats[remote_hostname] = RelayWebsocketStats(self._local_hostname, remote_hostname) self._stats[remote_hostname] = BroadcastWebsocketStats(self._local_hostname, remote_hostname)
return self._stats[remote_hostname] return self._stats[remote_hostname]
def delete_remote_host_stats(self, remote_hostname): def delete_remote_host_stats(self, remote_hostname):
@@ -107,7 +107,7 @@ class RelayWebsocketStatsManager:
return parser.text_string_to_metric_families(stats_str.decode('UTF-8')) return parser.text_string_to_metric_families(stats_str.decode('UTF-8'))
class RelayWebsocketStats: class BroadcastWebsocketStats:
def __init__(self, local_hostname, remote_hostname): def __init__(self, local_hostname, remote_hostname):
self._local_hostname = local_hostname self._local_hostname = local_hostname
self._remote_hostname = remote_hostname self._remote_hostname = remote_hostname


@@ -6,7 +6,7 @@ import platform
import distro import distro
from django.db import connection from django.db import connection
from django.db.models import Count, Min from django.db.models import Count
from django.conf import settings from django.conf import settings
from django.contrib.sessions.models import Session from django.contrib.sessions.models import Session
from django.utils.timezone import now, timedelta from django.utils.timezone import now, timedelta
@@ -35,7 +35,7 @@ data _since_ the last report date - i.e., new data in the last 24 hours)
""" """
def trivial_slicing(key, since, until, last_gather, **kwargs): def trivial_slicing(key, since, until, last_gather):
if since is not None: if since is not None:
return [(since, until)] return [(since, until)]
@@ -48,7 +48,7 @@ def trivial_slicing(key, since, until, last_gather, **kwargs):
return [(last_entry, until)] return [(last_entry, until)]
def four_hour_slicing(key, since, until, last_gather, **kwargs): def four_hour_slicing(key, since, until, last_gather):
if since is not None: if since is not None:
last_entry = since last_entry = since
else: else:
@@ -69,54 +69,6 @@ def four_hour_slicing(key, since, until, last_gather, **kwargs):
start = end start = end
def host_metric_slicing(key, since, until, last_gather, **kwargs):
"""
Slicing doesn't start 4 weeks ago, but sends whole table monthly or first time
"""
from awx.main.models.inventory import HostMetric
if since is not None:
return [(since, until)]
from awx.conf.models import Setting
# Check if full sync should be done
full_sync_enabled = kwargs.get('full_sync_enabled', False)
last_entry = None
if not full_sync_enabled:
#
# If not, try incremental sync first
#
last_entries = Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_ENTRIES').first()
last_entries = json.loads((last_entries.value if last_entries is not None else '') or '{}', object_hook=datetime_hook)
last_entry = last_entries.get(key)
if not last_entry:
#
# If not done before, switch to full sync
#
full_sync_enabled = True
if full_sync_enabled:
#
# Find the lowest date for full sync
#
min_dates = HostMetric.objects.aggregate(min_last_automation=Min('last_automation'), min_last_deleted=Min('last_deleted'))
if min_dates['min_last_automation'] and min_dates['min_last_deleted']:
last_entry = min(min_dates['min_last_automation'], min_dates['min_last_deleted'])
elif min_dates['min_last_automation'] or min_dates['min_last_deleted']:
last_entry = min_dates['min_last_automation'] or min_dates['min_last_deleted']
if not last_entry:
# empty table
return []
start, end = last_entry, None
while start < until:
end = min(start + timedelta(days=30), until)
yield (start, end)
start = end
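
host_metric_slicing ultimately reduces to a generator that walks from the oldest relevant timestamp to `until` in windows of at most 30 days. A standalone sketch of just that slicing loop, with the HostMetric and Setting lookups omitted:

from datetime import datetime, timedelta

def month_slices(start, until):
    # yield consecutive (start, end) windows no longer than 30 days each
    while start < until:
        end = min(start + timedelta(days=30), until)
        yield (start, end)
        start = end

for window in month_slices(datetime(2023, 1, 1), datetime(2023, 3, 15)):
    print(window)   # (Jan 1 -> Jan 31), (Jan 31 -> Mar 2), (Mar 2 -> Mar 15)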
def _identify_lower(key, since, until, last_gather): def _identify_lower(key, since, until, last_gather):
from awx.conf.models import Setting from awx.conf.models import Setting
@@ -131,7 +83,7 @@ def _identify_lower(key, since, until, last_gather):
return lower, last_entries return lower, last_entries
@register('config', '1.6', description=_('General platform configuration.')) @register('config', '1.4', description=_('General platform configuration.'))
def config(since, **kwargs): def config(since, **kwargs):
license_info = get_license() license_info = get_license()
install_type = 'traditional' install_type = 'traditional'
@@ -155,13 +107,10 @@ def config(since, **kwargs):
'subscription_name': license_info.get('subscription_name'), 'subscription_name': license_info.get('subscription_name'),
'sku': license_info.get('sku'), 'sku': license_info.get('sku'),
'support_level': license_info.get('support_level'), 'support_level': license_info.get('support_level'),
'usage': license_info.get('usage'),
'product_name': license_info.get('product_name'), 'product_name': license_info.get('product_name'),
'valid_key': license_info.get('valid_key'), 'valid_key': license_info.get('valid_key'),
'satellite': license_info.get('satellite'), 'satellite': license_info.get('satellite'),
'pool_id': license_info.get('pool_id'), 'pool_id': license_info.get('pool_id'),
'subscription_id': license_info.get('subscription_id'),
'account_number': license_info.get('account_number'),
'current_instances': license_info.get('current_instances'), 'current_instances': license_info.get('current_instances'),
'automated_instances': license_info.get('automated_instances'), 'automated_instances': license_info.get('automated_instances'),
'automated_since': license_info.get('automated_since'), 'automated_since': license_info.get('automated_since'),
@@ -170,7 +119,6 @@ def config(since, **kwargs):
'compliant': license_info.get('compliant'), 'compliant': license_info.get('compliant'),
'date_warning': license_info.get('date_warning'), 'date_warning': license_info.get('date_warning'),
'date_expired': license_info.get('date_expired'), 'date_expired': license_info.get('date_expired'),
'subscription_usage_model': getattr(settings, 'SUBSCRIPTION_USAGE_MODEL', ''), # 1.5+
'free_instances': license_info.get('free_instances', 0), 'free_instances': license_info.get('free_instances', 0),
'total_licensed_instances': license_info.get('instance_count', 0), 'total_licensed_instances': license_info.get('instance_count', 0),
'license_expiry': license_info.get('time_remaining', 0), 'license_expiry': license_info.get('time_remaining', 0),
@@ -285,13 +233,11 @@ def projects_by_scm_type(since, **kwargs):
return counts return counts
@register('instance_info', '1.3', description=_('Cluster topology and capacity')) @register('instance_info', '1.2', description=_('Cluster topology and capacity'))
def instance_info(since, include_hostnames=False, **kwargs): def instance_info(since, include_hostnames=False, **kwargs):
info = {} info = {}
# Use same method that the TaskManager does to compute consumed capacity without querying all running jobs for each Instance # Use same method that the TaskManager does to compute consumed capacity without querying all running jobs for each Instance
tm_models = TaskManagerModels.init_with_consumed_capacity( tm_models = TaskManagerModels.init_with_consumed_capacity(instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled'])
instance_fields=['uuid', 'version', 'capacity', 'cpu', 'memory', 'managed_by_policy', 'enabled', 'node_type']
)
for tm_instance in tm_models.instances.instances_by_hostname.values(): for tm_instance in tm_models.instances.instances_by_hostname.values():
instance = tm_instance.obj instance = tm_instance.obj
instance_info = { instance_info = {
@@ -399,10 +345,7 @@ def _copy_table(table, query, path):
file_path = os.path.join(path, table + '_table.csv') file_path = os.path.join(path, table + '_table.csv')
file = FileSplitter(filespec=file_path) file = FileSplitter(filespec=file_path)
with connection.cursor() as cursor: with connection.cursor() as cursor:
with cursor.copy(query) as copy: cursor.copy_expert(query, file)
while data := copy.read():
byte_data = bytes(data)
file.write(byte_data.decode())
return file.file_list() return file.file_list()
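
One side of _copy_table streams COPY output through psycopg 3's cursor.copy(), while the other uses psycopg2's copy_expert(). Below is a minimal standalone version of the streaming variant, assuming a reachable PostgreSQL database; the DSN and query in the commented call are placeholders.

import psycopg   # psycopg 3

def dump_query_to_csv(dsn, copy_query, out_path):
    # copy_query must be a "COPY (...) TO STDOUT WITH CSV HEADER" statement
    with psycopg.connect(dsn) as conn, conn.cursor() as cursor, open(out_path, "w") as out:
        with cursor.copy(copy_query) as copy:
            while data := copy.read():          # empty buffer signals end of stream
                out.write(bytes(data).decode())

# dump_query_to_csv("dbname=awx user=awx", "COPY (SELECT 1 AS one) TO STDOUT WITH CSV HEADER", "/tmp/one.csv")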
@@ -591,25 +534,3 @@ def workflow_job_template_node_table(since, full_path, **kwargs):
) always_nodes ON main_workflowjobtemplatenode.id = always_nodes.from_workflowjobtemplatenode_id ) always_nodes ON main_workflowjobtemplatenode.id = always_nodes.from_workflowjobtemplatenode_id
ORDER BY main_workflowjobtemplatenode.id ASC) TO STDOUT WITH CSV HEADER''' ORDER BY main_workflowjobtemplatenode.id ASC) TO STDOUT WITH CSV HEADER'''
return _copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path) return _copy_table(table='workflow_job_template_node', query=workflow_job_template_node_query, path=full_path)
@register(
'host_metric_table', '1.0', format='csv', description=_('Host Metric data, incremental/full sync'), expensive=host_metric_slicing, full_sync_interval=30
)
def host_metric_table(since, full_path, until, **kwargs):
host_metric_query = '''COPY (SELECT main_hostmetric.id,
main_hostmetric.hostname,
main_hostmetric.first_automation,
main_hostmetric.last_automation,
main_hostmetric.last_deleted,
main_hostmetric.deleted,
main_hostmetric.automated_counter,
main_hostmetric.deleted_counter,
main_hostmetric.used_in_inventories
FROM main_hostmetric
WHERE (main_hostmetric.last_automation > '{}' AND main_hostmetric.last_automation <= '{}') OR
(main_hostmetric.last_deleted > '{}' AND main_hostmetric.last_deleted <= '{}')
ORDER BY main_hostmetric.id ASC) TO STDOUT WITH CSV HEADER'''.format(
since.isoformat(), until.isoformat(), since.isoformat(), until.isoformat()
)
return _copy_table(table='host_metric', query=host_metric_query, path=full_path)


@@ -52,7 +52,7 @@ def all_collectors():
} }
def register(key, version, description=None, format='json', expensive=None, full_sync_interval=None): def register(key, version, description=None, format='json', expensive=None):
""" """
A decorator used to register a function as a metric collector. A decorator used to register a function as a metric collector.
@@ -71,7 +71,6 @@ def register(key, version, description=None, format='json', expensive=None, full
f.__awx_analytics_description__ = description f.__awx_analytics_description__ = description
f.__awx_analytics_type__ = format f.__awx_analytics_type__ = format
f.__awx_expensive__ = expensive f.__awx_expensive__ = expensive
f.__awx_full_sync_interval__ = full_sync_interval
return f return f
return decorate return decorate
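
The register() decorator only annotates each collector with metadata that gather() inspects later; full_sync_interval is just one more attribute in that list. Below is a toy mirror of the pattern (the collectors dict and the key/version attribute names are illustrative; the description/type/expensive/full_sync_interval attributes match the snippet above).

collectors = {}

def register(key, version, description=None, format='json', expensive=None, full_sync_interval=None):
    def decorate(f):
        f.__awx_analytics_key__ = key                       # illustrative
        f.__awx_analytics_version__ = version               # illustrative
        f.__awx_analytics_description__ = description
        f.__awx_analytics_type__ = format
        f.__awx_expensive__ = expensive
        f.__awx_full_sync_interval__ = full_sync_interval
        collectors[key] = f
        return f
    return decorate

@register('host_metric_table', '1.0', format='csv', full_sync_interval=30)
def host_metric_table(since, full_path, until=None, **kwargs):
    return []

print(host_metric_table.__awx_full_sync_interval__)   # 30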
@@ -260,19 +259,10 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
# These slicer functions may return a generator. The `since` parameter is # These slicer functions may return a generator. The `since` parameter is
# allowed to be None, and will fall back to LAST_ENTRIES[key] or to # allowed to be None, and will fall back to LAST_ENTRIES[key] or to
# LAST_GATHER (truncated appropriately to match the 4-week limit). # LAST_GATHER (truncated appropriately to match the 4-week limit).
#
# Or it can force full table sync if interval is given
kwargs = dict()
full_sync_enabled = False
if func.__awx_full_sync_interval__:
last_full_sync = last_entries.get(f"{key}_full")
full_sync_enabled = not last_full_sync or last_full_sync < now() - timedelta(days=func.__awx_full_sync_interval__)
kwargs['full_sync_enabled'] = full_sync_enabled
if func.__awx_expensive__: if func.__awx_expensive__:
slices = func.__awx_expensive__(key, since, until, last_gather, **kwargs) slices = func.__awx_expensive__(key, since, until, last_gather)
else: else:
slices = collectors.trivial_slicing(key, since, until, last_gather, **kwargs) slices = collectors.trivial_slicing(key, since, until, last_gather)
for start, end in slices: for start, end in slices:
files = func(start, full_path=gather_dir, until=end) files = func(start, full_path=gather_dir, until=end)
@@ -311,12 +301,6 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
succeeded = False succeeded = False
logger.exception("Could not generate metric {}".format(filename)) logger.exception("Could not generate metric {}".format(filename))
# update full sync timestamp if successfully shipped
if full_sync_enabled and collection_type != 'dry-run' and succeeded:
with disable_activity_stream():
last_entries[f"{key}_full"] = now()
settings.AUTOMATION_ANALYTICS_LAST_ENTRIES = json.dumps(last_entries, cls=DjangoJSONEncoder)
if collection_type != 'dry-run': if collection_type != 'dry-run':
if succeeded: if succeeded:
for fpath in tarfiles: for fpath in tarfiles:
@@ -375,7 +359,9 @@ def ship(path):
s.headers = get_awx_http_client_headers() s.headers = get_awx_http_client_headers()
s.headers.pop('Content-Type') s.headers.pop('Content-Type')
with set_environ(**settings.AWX_TASK_ENV): with set_environ(**settings.AWX_TASK_ENV):
response = s.post(url, files=files, verify=settings.INSIGHTS_CERT_PATH, auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31)) response = s.post(
url, files=files, verify="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", auth=(rh_user, rh_password), headers=s.headers, timeout=(31, 31)
)
# Accept 2XX status_codes # Accept 2XX status_codes
if response.status_code >= 300: if response.status_code >= 300:
logger.error('Upload failed with status {}, {}'.format(response.status_code, response.text)) logger.error('Upload failed with status {}, {}'.format(response.status_code, response.text))


@@ -9,7 +9,7 @@ from django.apps import apps
from awx.main.consumers import emit_channel_notification from awx.main.consumers import emit_channel_notification
from awx.main.utils import is_testing from awx.main.utils import is_testing
root_key = settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX root_key = 'awx_metrics'
logger = logging.getLogger('awx.main.analytics') logger = logging.getLogger('awx.main.analytics')
@@ -209,11 +209,6 @@ class Metrics:
SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'), SetFloatM('workflow_manager_recorded_timestamp', 'Unix timestamp when metrics were last recorded'),
SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'), SetFloatM('workflow_manager_spawn_workflow_graph_jobs_seconds', 'Time spent spawning workflow tasks'),
SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'), SetFloatM('workflow_manager_get_tasks_seconds', 'Time spent loading workflow tasks from db'),
# dispatcher subsystem metrics
SetIntM('dispatcher_pool_scale_up_events', 'Number of times local dispatcher scaled up a worker since startup'),
SetIntM('dispatcher_pool_active_task_count', 'Number of active tasks in the worker pool when last task was submitted'),
SetIntM('dispatcher_pool_max_worker_count', 'Highest number of workers in worker pool in last collection interval, about 20s'),
SetFloatM('dispatcher_availability', 'Fraction of time (in last collection interval) dispatcher was able to receive messages'),
] ]
# turn metric list into dictionary with the metric name as a key # turn metric list into dictionary with the metric name as a key
self.METRICS = {} self.METRICS = {}
@@ -269,6 +264,13 @@ class Metrics:
data[field] = self.METRICS[field].decode(self.conn) data[field] = self.METRICS[field].decode(self.conn)
return data return data
def store_metrics(self, data_json):
# called when receiving metrics from other instances
data = json.loads(data_json)
if self.instance_name != data['instance']:
logger.debug(f"{self.instance_name} received subsystem metrics from {data['instance']}")
self.conn.set(root_key + "_instance_" + data['instance'], data['metrics'])
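
The store_metrics/send_metrics pair above fans each node's serialized metrics out under a per-instance Redis key so every node can read its peers' data. Below is a standalone sketch of that flow; a plain dict stands in for Redis, print() stands in for emit_channel_notification, and the root key name is illustrative.

import json

ROOT_KEY = "subsystem_metrics"   # stand-in for settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX
redis_stub = {}                  # stand-in for the Redis connection

def store_metrics(local_instance, data_json):
    # called when metrics arrive from another instance (or from ourselves)
    data = json.loads(data_json)
    if local_instance != data["instance"]:
        print(f"{local_instance} received subsystem metrics from {data['instance']}")
    redis_stub[f"{ROOT_KEY}_instance_{data['instance']}"] = data["metrics"]

def send_metrics(local_instance, serialized_metrics):
    payload = {"instance": local_instance, "metrics": serialized_metrics}
    store_metrics(local_instance, json.dumps(payload))   # keep a local copy too
    print("broadcast:", json.dumps(payload))             # emit_channel_notification stand-in

send_metrics("awx-1", '{"task_manager_commit_seconds": 0.2}')
store_metrics("awx-2", json.dumps({"instance": "awx-1", "metrics": '{"task_manager_commit_seconds": 0.2}'}))
print(sorted(redis_stub))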
def should_pipe_execute(self): def should_pipe_execute(self):
if self.metrics_have_changed is False: if self.metrics_have_changed is False:
return False return False
@@ -303,15 +305,13 @@ class Metrics:
try: try:
current_time = time.time() current_time = time.time()
if current_time - self.previous_send_metrics.decode(self.conn) > self.send_metrics_interval: if current_time - self.previous_send_metrics.decode(self.conn) > self.send_metrics_interval:
serialized_metrics = self.serialize_local_metrics()
payload = { payload = {
'instance': self.instance_name, 'instance': self.instance_name,
'metrics': serialized_metrics, 'metrics': self.serialize_local_metrics(),
} }
# store the serialized data locally as well, so that load_other_metrics will read it # store a local copy as well
self.conn.set(root_key + '_instance_' + self.instance_name, serialized_metrics) self.store_metrics(json.dumps(payload))
emit_channel_notification("metrics", payload) emit_channel_notification("metrics", payload)
self.previous_send_metrics.set(current_time) self.previous_send_metrics.set(current_time)
self.previous_send_metrics.store_value(self.conn) self.previous_send_metrics.store_value(self.conn)
finally: finally:


@@ -10,7 +10,7 @@ from rest_framework import serializers
# AWX # AWX
from awx.conf import fields, register, register_validate from awx.conf import fields, register, register_validate
from awx.main.models import ExecutionEnvironment from awx.main.models import ExecutionEnvironment
from awx.main.constants import SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS
logger = logging.getLogger('awx.main.conf') logger = logging.getLogger('awx.main.conf')
@@ -94,20 +94,6 @@ register(
category_slug='system', category_slug='system',
) )
register(
'CSRF_TRUSTED_ORIGINS',
default=[],
field_class=fields.StringListField,
label=_('CSRF Trusted Origins List'),
help_text=_(
"If the service is behind a reverse proxy/load balancer, use this setting "
"to configure the schema://addresses from which the service should trust "
"Origin header values. "
),
category=_('System'),
category_slug='system',
)
register( register(
'LICENSE', 'LICENSE',
field_class=fields.DictField, field_class=fields.DictField,
@@ -296,16 +282,6 @@ register(
placeholder={'HTTP_PROXY': 'myproxy.local:8080'}, placeholder={'HTTP_PROXY': 'myproxy.local:8080'},
) )
register(
'AWX_RUNNER_KEEPALIVE_SECONDS',
field_class=fields.IntegerField,
label=_('K8S Ansible Runner Keep-Alive Message Interval'),
help_text=_('Only applies to jobs running in a Container Group. If not 0, send a message every so-many seconds to keep connection open.'),
category=_('Jobs'),
category_slug='jobs',
placeholder=240, # intended to be under common 5 minute idle timeout
)
register( register(
'GALAXY_TASK_ENV', 'GALAXY_TASK_ENV',
field_class=fields.KeyValueField, field_class=fields.KeyValueField,
@@ -698,28 +674,11 @@ register(
field_class=fields.IntegerField, field_class=fields.IntegerField,
default=1, default=1,
min_value=1, min_value=1,
label=_('Maximum disk persistence for external log aggregation (in GB)'), label=_('Maximum disk persistance for external log aggregation (in GB)'),
help_text=_( help_text=_(
'Amount of data to store (in gigabytes) during an outage of ' 'Amount of data to store (in gigabytes) during an outage of '
'the external log aggregator (defaults to 1). ' 'the external log aggregator (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting for main_queue. ' 'Equivalent to the rsyslogd queue.maxdiskspace setting.'
'Notably, this is used for the rsyslogd main queue (for input messages).'
),
category=_('Logging'),
category_slug='logging',
)
register(
'LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB',
field_class=fields.IntegerField,
default=1,
min_value=1,
label=_('Maximum disk persistence for rsyslogd action queuing (in GB)'),
help_text=_(
'Amount of data to store (in gigabytes) if an rsyslog action takes time '
'to process an incoming message (defaults to 1). '
'Equivalent to the rsyslogd queue.maxdiskspace setting on the action (e.g. omhttp). '
'Like LOG_AGGREGATOR_MAX_DISK_USAGE_GB, it stores files in the directory specified '
'by LOG_AGGREGATOR_MAX_DISK_USAGE_PATH.'
), ),
category=_('Logging'), category=_('Logging'),
category_slug='logging', category_slug='logging',
@@ -806,102 +765,6 @@ register(
help_text=_('Indicates whether the instance is part of a kubernetes-based deployment.'), help_text=_('Indicates whether the instance is part of a kubernetes-based deployment.'),
) )
register(
'BULK_JOB_MAX_LAUNCH',
field_class=fields.IntegerField,
default=100,
label=_('Max jobs to allow bulk jobs to launch'),
help_text=_('Max jobs to allow bulk jobs to launch'),
category=_('Bulk Actions'),
category_slug='bulk',
)
register(
'BULK_HOST_MAX_CREATE',
field_class=fields.IntegerField,
default=100,
label=_('Max number of hosts to allow to be created in a single bulk action'),
help_text=_('Max number of hosts to allow to be created in a single bulk action'),
category=_('Bulk Actions'),
category_slug='bulk',
)
register(
'UI_NEXT',
field_class=fields.BooleanField,
default=False,
label=_('Enable Preview of New User Interface'),
help_text=_('Enable preview of new user interface.'),
category=_('System'),
category_slug='system',
)
register(
'SUBSCRIPTION_USAGE_MODEL',
field_class=fields.ChoiceField,
choices=[
('', _('Default model for AWX - no subscription. Deletion of host_metrics will not be considered for purposes of managed host counting')),
(
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS,
_('Usage based on unique managed nodes in a large historical time frame and delete functionality for no longer used managed nodes'),
),
],
default='',
allow_blank=True,
label=_('Defines subscription usage model and shows Host Metrics'),
category=_('System'),
category_slug='system',
)
register(
'CLEANUP_HOST_METRICS_LAST_TS',
field_class=fields.DateTimeField,
label=_('Last cleanup date for HostMetrics'),
allow_null=True,
category=_('System'),
category_slug='system',
)
register(
'AWX_CLEANUP_PATHS',
field_class=fields.BooleanField,
label=_('Enable or Disable tmp dir cleanup'),
default=True,
help_text=_('Enable or Disable TMP Dir cleanup'),
category=('Debug'),
category_slug='debug',
)
register(
'AWX_REQUEST_PROFILE',
field_class=fields.BooleanField,
label=_('Debug Web Requests'),
default=False,
help_text=_('Debug web request python timing'),
category=('Debug'),
category_slug='debug',
)
register(
'DEFAULT_CONTAINER_RUN_OPTIONS',
field_class=fields.StringListField,
label=_('Container Run Options'),
default=['--network', 'slirp4netns:enable_ipv6=true'],
help_text=_("List of options to pass to podman run example: ['--network', 'slirp4netns:enable_ipv6=true', '--log-level', 'debug']"),
category=('Jobs'),
category_slug='jobs',
)
register(
'RECEPTOR_RELEASE_WORK',
field_class=fields.BooleanField,
label=_('Release Receptor Work'),
default=True,
help_text=_('Release receptor work'),
category=('Debug'),
category_slug='debug',
)
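
Every register() call in this file follows the same pattern: a field class, a default, a label/help_text, and a category slug, after which the value is read back like any other Django setting, with database values overriding the default. Below is a toy, framework-free mirror of that pattern (the real registry lives in awx.conf and validates through the DRF field classes).

class ToySettingsRegistry:
    def __init__(self):
        self._defaults = {}
        self._meta = {}

    def register(self, name, field_class=None, default=None, **meta):
        self._defaults[name] = default
        self._meta[name] = meta

    def get(self, name, db_overrides=None):
        # database values win over the registered default, as in awx.conf
        return (db_overrides or {}).get(name, self._defaults[name])

registry = ToySettingsRegistry()
registry.register('BULK_JOB_MAX_LAUNCH', default=100, label='Max jobs to allow bulk jobs to launch', category_slug='bulk')
registry.register('DEFAULT_CONTAINER_RUN_OPTIONS', default=['--network', 'slirp4netns:enable_ipv6=true'], category_slug='jobs')

print(registry.get('BULK_JOB_MAX_LAUNCH'))                                  # 100 (registered default)
print(registry.get('BULK_JOB_MAX_LAUNCH', {'BULK_JOB_MAX_LAUNCH': 10}))     # 10 (database override)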
def logging_validate(serializer, attrs): def logging_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'): if not serializer.instance or not hasattr(serializer.instance, 'LOG_AGGREGATOR_HOST') or not hasattr(serializer.instance, 'LOG_AGGREGATOR_TYPE'):


@@ -38,8 +38,6 @@ STANDARD_INVENTORY_UPDATE_ENV = {
'ANSIBLE_INVENTORY_EXPORT': 'True', 'ANSIBLE_INVENTORY_EXPORT': 'True',
# Redirecting output to stderr allows JSON parsing to still work with -vvv # Redirecting output to stderr allows JSON parsing to still work with -vvv
'ANSIBLE_VERBOSE_TO_STDERR': 'True', 'ANSIBLE_VERBOSE_TO_STDERR': 'True',
# if ansible-inventory --limit is used for an inventory import, unmatched should be a failure
'ANSIBLE_HOST_PATTERN_MISMATCH': 'error',
} }
CAN_CANCEL = ('new', 'pending', 'waiting', 'running') CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL ACTIVE_STATES = CAN_CANCEL
@@ -65,7 +63,7 @@ ENV_BLOCKLIST = frozenset(
'INVENTORY_HOSTVARS', 'INVENTORY_HOSTVARS',
'AWX_HOST', 'AWX_HOST',
'PROJECT_REVISION', 'PROJECT_REVISION',
'SUPERVISOR_CONFIG_PATH', 'SUPERVISOR_WEB_CONFIG_PATH',
) )
) )
@@ -108,9 +106,3 @@ JOB_VARIABLE_PREFIXES = [
ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE = ( ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE = (
'\u001b[31m \u001b[1m This can be caused if the version of ansible-runner in your execution environment is out of date.\u001b[0m' '\u001b[31m \u001b[1m This can be caused if the version of ansible-runner in your execution environment is out of date.\u001b[0m'
) )
# Values for setting SUBSCRIPTION_USAGE_MODEL
SUBSCRIPTION_USAGE_MODEL_UNIQUE_HOSTS = 'unique_managed_hosts'
# Shared prefetch to use for creating a queryset for the purpose of writing or saving facts
HOST_FACTS_FIELDS = ('name', 'ansible_facts', 'ansible_facts_modified', 'modified', 'inventory_id')


@@ -3,7 +3,6 @@ import logging
import time import time
import hmac import hmac
import asyncio import asyncio
import redis
from django.core.serializers.json import DjangoJSONEncoder from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings from django.conf import settings
@@ -81,7 +80,7 @@ class WebsocketSecretAuthHelper:
WebsocketSecretAuthHelper.verify_secret(secret) WebsocketSecretAuthHelper.verify_secret(secret)
class RelayConsumer(AsyncJsonWebsocketConsumer): class BroadcastConsumer(AsyncJsonWebsocketConsumer):
async def connect(self): async def connect(self):
try: try:
WebsocketSecretAuthHelper.is_authorized(self.scope) WebsocketSecretAuthHelper.is_authorized(self.scope)
@@ -101,21 +100,6 @@ class RelayConsumer(AsyncJsonWebsocketConsumer):
async def internal_message(self, event): async def internal_message(self, event):
await self.send(event['text']) await self.send(event['text'])
async def receive_json(self, data):
(group, message) = unwrap_broadcast_msg(data)
if group == "metrics":
message = json.loads(message['text'])
conn = redis.Redis.from_url(settings.BROKER_URL)
conn.set(settings.SUBSYSTEM_METRICS_REDIS_KEY_PREFIX + "_instance_" + message['instance'], message['metrics'])
else:
await self.channel_layer.group_send(group, message)
async def consumer_subscribe(self, event):
await self.send_json(event)
async def consumer_unsubscribe(self, event):
await self.send_json(event)
class EventConsumer(AsyncJsonWebsocketConsumer): class EventConsumer(AsyncJsonWebsocketConsumer):
async def connect(self): async def connect(self):
@@ -144,11 +128,6 @@ class EventConsumer(AsyncJsonWebsocketConsumer):
self.channel_name, self.channel_name,
) )
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.unsubscribe", "groups": list(current_groups), "origin_channel": self.channel_name},
)
@database_sync_to_async @database_sync_to_async
def user_can_see_object_id(self, user_access, oid): def user_can_see_object_id(self, user_access, oid):
# At this point user is a channels.auth.UserLazyObject object # At this point user is a channels.auth.UserLazyObject object
@@ -197,20 +176,9 @@ class EventConsumer(AsyncJsonWebsocketConsumer):
self.channel_name, self.channel_name,
) )
if len(old_groups):
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.unsubscribe", "groups": list(old_groups), "origin_channel": self.channel_name},
)
new_groups_exclusive = new_groups - current_groups new_groups_exclusive = new_groups - current_groups
for group_name in new_groups_exclusive: for group_name in new_groups_exclusive:
await self.channel_layer.group_add(group_name, self.channel_name) await self.channel_layer.group_add(group_name, self.channel_name)
await self.channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{"type": "consumer.subscribe", "groups": list(new_groups), "origin_channel": self.channel_name},
)
self.scope['session']['groups'] = new_groups self.scope['session']['groups'] = new_groups
await self.send_json({"groups_current": list(new_groups), "groups_left": list(old_groups), "groups_joined": list(new_groups_exclusive)}) await self.send_json({"groups_current": list(new_groups), "groups_left": list(old_groups), "groups_joined": list(new_groups_exclusive)})
@@ -232,11 +200,9 @@ def _dump_payload(payload):
return None return None
def unwrap_broadcast_msg(payload: dict):
return (payload['group'], payload['message'])
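
unwrap_broadcast_msg expects the shape produced by wrap_broadcast_msg from awx.main.wsbroadcast, which is not shown in this diff. Below is a minimal round-trip sketch under the assumption that the wrapper simply bundles the target group with the payload.

def wrap_broadcast_msg(group, message):
    # assumed shape, chosen to match what unwrap_broadcast_msg reads back out
    return {"group": group, "message": message}

def unwrap_broadcast_msg(payload):
    return (payload["group"], payload["message"])

wrapped = wrap_broadcast_msg("metrics", {"text": '{"instance": "awx-1", "metrics": "{}"}'})
group, message = unwrap_broadcast_msg(wrapped)
print(group, message)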
def emit_channel_notification(group, payload): def emit_channel_notification(group, payload):
from awx.main.wsbroadcast import wrap_broadcast_msg # noqa
payload_dumped = _dump_payload(payload) payload_dumped = _dump_payload(payload)
if payload_dumped is None: if payload_dumped is None:
return return
@@ -246,6 +212,16 @@ def emit_channel_notification(group, payload):
run_sync( run_sync(
channel_layer.group_send( channel_layer.group_send(
group, group,
{"type": "internal.message", "text": payload_dumped, "needs_relay": True}, {"type": "internal.message", "text": payload_dumped},
)
)
run_sync(
channel_layer.group_send(
settings.BROADCAST_WEBSOCKET_GROUP_NAME,
{
"type": "internal.message",
"text": wrap_broadcast_msg(group, payload_dumped),
},
) )
) )


@@ -54,12 +54,6 @@ aim_inputs = {
'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'), 'help_text': _('Lookup query for the object. Ex: Safe=TestSafe;Object=testAccountName123'),
}, },
{'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']}, {'id': 'object_query_format', 'label': _('Object Query Format'), 'type': 'string', 'default': 'Exact', 'choices': ['Exact', 'Regexp']},
{
'id': 'object_property',
'label': _('Object Property'),
'type': 'string',
'help_text': _('The property of the object to return. Default: Content Ex: Username, Address, etc.'),
},
{ {
'id': 'reason', 'id': 'reason',
'label': _('Reason'), 'label': _('Reason'),
@@ -76,11 +70,10 @@ def aim_backend(**kwargs):
client_cert = kwargs.get('client_cert', None) client_cert = kwargs.get('client_cert', None)
client_key = kwargs.get('client_key', None) client_key = kwargs.get('client_key', None)
verify = kwargs['verify'] verify = kwargs['verify']
webservice_id = kwargs.get('webservice_id', '') webservice_id = kwargs['webservice_id']
app_id = kwargs['app_id'] app_id = kwargs['app_id']
object_query = kwargs['object_query'] object_query = kwargs['object_query']
object_query_format = kwargs['object_query_format'] object_query_format = kwargs['object_query_format']
object_property = kwargs.get('object_property', '')
reason = kwargs.get('reason', None) reason = kwargs.get('reason', None)
if webservice_id == '': if webservice_id == '':
webservice_id = 'AIMWebService' webservice_id = 'AIMWebService'
@@ -105,18 +98,7 @@ def aim_backend(**kwargs):
allow_redirects=False, allow_redirects=False,
) )
raise_for_status(res) raise_for_status(res)
# CCP returns the property name capitalized, username is camel case return res.json()['Content']
# so we need to handle that case
if object_property == '':
object_property = 'Content'
elif object_property.lower() == 'username':
object_property = 'UserName'
elif object_property not in res:
raise KeyError('Property {} not found in object'.format(object_property))
else:
object_property = object_property.capitalize()
return res.json()[object_property]
aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend) aim_plugin = CredentialPlugin('CyberArk Central Credential Provider Lookup', inputs=aim_inputs, backend=aim_backend)
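
The object_property branch in the hunk above maps whatever name the user supplied onto the key that CyberArk CCP actually returns. A standalone sketch of that mapping, for illustration only (the plugin itself applies it before indexing res.json()):

```python
# Illustrative sketch of the property-name normalization above; not the
# plugin's code, just the mapping it applies to the requested property.
def normalize_ccp_property(object_property: str) -> str:
    if not object_property:
        return 'Content'            # CCP's default payload field
    if object_property.lower() == 'username':
        return 'UserName'           # CCP camel-cases this property
    return object_property.capitalize()

assert normalize_ccp_property('') == 'Content'
assert normalize_ccp_property('username') == 'UserName'
assert normalize_ccp_property('address') == 'Address'
```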


@@ -1,65 +0,0 @@
import boto3
from botocore.exceptions import ClientError
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
secrets_manager_inputs = {
'fields': [
{
'id': 'aws_access_key',
'label': _('AWS Access Key'),
'type': 'string',
},
{
'id': 'aws_secret_key',
'label': _('AWS Secret Key'),
'type': 'string',
'secret': True,
},
],
'metadata': [
{
'id': 'region_name',
'label': _('AWS Secrets Manager Region'),
'type': 'string',
'help_text': _('Region which the secrets manager is located'),
},
{
'id': 'secret_name',
'label': _('AWS Secret Name'),
'type': 'string',
},
],
'required': ['aws_access_key', 'aws_secret_key', 'region_name', 'secret_name'],
}
def aws_secretsmanager_backend(**kwargs):
secret_name = kwargs['secret_name']
region_name = kwargs['region_name']
aws_secret_access_key = kwargs['aws_secret_key']
aws_access_key_id = kwargs['aws_access_key']
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager', region_name=region_name, aws_secret_access_key=aws_secret_access_key, aws_access_key_id=aws_access_key_id
)
try:
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
except ClientError as e:
raise e
# Secrets Manager decrypts the secret value using the associated KMS CMK
# Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
else:
secret = get_secret_value_response['SecretBinary']
return secret
aws_secretmanager_plugin = CredentialPlugin('AWS Secrets Manager lookup', inputs=secrets_manager_inputs, backend=aws_secretsmanager_backend)
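
For orientation, a hedged usage sketch of the backend defined above. All values are placeholders, and the call only succeeds where boto3 is installed and valid AWS credentials are supplied; the keyword names match the kwargs the backend reads.

```python
# Placeholder values only; not working credentials.
secret = aws_secretsmanager_backend(
    aws_access_key='AKIA...placeholder',
    aws_secret_key='placeholder',
    region_name='us-east-1',
    secret_name='prod/db/password',
)
# Returns SecretString for text secrets, SecretBinary otherwise.
```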


@@ -35,14 +35,8 @@ dsv_inputs = {
'type': 'string', 'type': 'string',
'help_text': _('The secret path e.g. /test/secret1'), 'help_text': _('The secret path e.g. /test/secret1'),
}, },
{
'id': 'secret_field',
'label': _('Secret Field'),
'help_text': _('The field to extract from the secret'),
'type': 'string',
},
], ],
'required': ['tenant', 'client_id', 'client_secret', 'path', 'secret_field'], 'required': ['tenant', 'client_id', 'client_secret', 'path'],
} }
if settings.DEBUG: if settings.DEBUG:
@@ -58,5 +52,5 @@ if settings.DEBUG:
dsv_plugin = CredentialPlugin( dsv_plugin = CredentialPlugin(
'Thycotic DevOps Secrets Vault', 'Thycotic DevOps Secrets Vault',
dsv_inputs, dsv_inputs,
lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path'])['data'][kwargs['secret_field']], # fmt: skip lambda **kwargs: SecretsVault(**{k: v for (k, v) in kwargs.items() if k in [field['id'] for field in dsv_inputs['fields']]}).get_secret(kwargs['path']),
) )


@@ -265,8 +265,6 @@ def kv_backend(**kwargs):
if secret_key: if secret_key:
try: try:
if (secret_key != 'data') and (secret_key not in json['data']) and ('data' in json['data']):
return json['data']['data'][secret_key]
return json['data'][secret_key] return json['data'][secret_key]
except KeyError: except KeyError:
raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path)) raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path))
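
The extra branch in the kv_backend hunk above exists because HashiCorp Vault's KV version 2 engine nests the user payload one level deeper than version 1. A self-contained sketch of the two response shapes and the fallback (fabricated payloads):

```python
kv_v1 = {'data': {'password': 's3cret'}}
kv_v2 = {'data': {'data': {'password': 's3cret'}, 'metadata': {'version': 3}}}

def extract(json_body, secret_key):
    data = json_body['data']
    # KV v2 wraps the secret in a second 'data' object; fall through to it
    # when the key is not found at the top level.
    if secret_key != 'data' and secret_key not in data and 'data' in data:
        return data['data'][secret_key]
    return data[secret_key]

assert extract(kv_v1, 'password') == extract(kv_v2, 'password') == 's3cret'
```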


@@ -1,7 +1,7 @@
from .plugin import CredentialPlugin from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _ from django.utils.translation import gettext_lazy as _
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret from thycotic.secrets.server import PasswordGrantAuthorizer, SecretServer, ServerSecret
tss_inputs = { tss_inputs = {
'fields': [ 'fields': [
@@ -17,12 +17,6 @@ tss_inputs = {
'help_text': _('The (Application) user username'), 'help_text': _('The (Application) user username'),
'type': 'string', 'type': 'string',
}, },
{
'id': 'domain',
'label': _('Domain'),
'help_text': _('The (Application) user domain'),
'type': 'string',
},
{ {
'id': 'password', 'id': 'password',
'label': _('Password'), 'label': _('Password'),
@@ -50,18 +44,12 @@ tss_inputs = {
def tss_backend(**kwargs): def tss_backend(**kwargs):
if kwargs.get("domain"): authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
authorizer = DomainPasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'], kwargs['domain'])
else:
authorizer = PasswordGrantAuthorizer(kwargs['server_url'], kwargs['username'], kwargs['password'])
secret_server = SecretServer(kwargs['server_url'], authorizer) secret_server = SecretServer(kwargs['server_url'], authorizer)
secret_dict = secret_server.get_secret(kwargs['secret_id']) secret_dict = secret_server.get_secret(kwargs['secret_id'])
secret = ServerSecret(**secret_dict) secret = ServerSecret(**secret_dict)
if isinstance(secret.fields[kwargs['secret_field']].value, str) == False: return secret.fields[kwargs['secret_field']].value
return secret.fields[kwargs['secret_field']].value.text
else:
return secret.fields[kwargs['secret_field']].value
tss_plugin = CredentialPlugin( tss_plugin = CredentialPlugin(


@@ -63,7 +63,7 @@ class RecordedQueryLog(object):
if not os.path.isdir(self.dest): if not os.path.isdir(self.dest):
os.makedirs(self.dest) os.makedirs(self.dest)
progname = ' '.join(sys.argv) progname = ' '.join(sys.argv)
for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsrelay'): for match in ('uwsgi', 'dispatcher', 'callback_receiver', 'wsbroadcast'):
if match in progname: if match in progname:
progname = match progname = match
break break
@@ -87,7 +87,7 @@ class RecordedQueryLog(object):
) )
log.commit() log.commit()
log.execute( log.execute(
'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) VALUES (?, ?, ?, ?, ?, ?, ?);', 'INSERT INTO queries (pid, version, argv, time, sql, explain, bt) ' 'VALUES (?, ?, ?, ?, ?, ?, ?);',
(os.getpid(), version, ' '.join(sys.argv), seconds, sql, explain, bt), (os.getpid(), version, ' '.join(sys.argv), seconds, sql, explain, bt),
) )
log.commit() log.commit()


@@ -1,14 +1,12 @@
import os import psycopg2
import psycopg
import select import select
from contextlib import contextmanager from contextlib import contextmanager
from awx.settings.application_name import get_application_name
from django.conf import settings from django.conf import settings
from django.db import connection as pg_connection from django.db import connection as pg_connection
NOT_READY = ([], [], []) NOT_READY = ([], [], [])
@@ -16,29 +14,6 @@ def get_local_queuename():
return settings.CLUSTER_HOST_ID return settings.CLUSTER_HOST_ID
def get_task_queuename():
if os.getenv('AWX_COMPONENT') != 'web':
return settings.CLUSTER_HOST_ID
from awx.main.models.ha import Instance
random_task_instance = (
Instance.objects.filter(
node_type__in=(Instance.Types.CONTROL, Instance.Types.HYBRID),
node_state=Instance.States.READY,
enabled=True,
)
.only('hostname')
.order_by('?')
.first()
)
if random_task_instance is None:
raise ValueError('No task instances are READY and Enabled.')
return random_task_instance.hostname
class PubSub(object): class PubSub(object):
def __init__(self, conn): def __init__(self, conn):
self.conn = conn self.conn = conn
@@ -64,9 +39,9 @@ class PubSub(object):
if yield_timeouts: if yield_timeouts:
yield None yield None
else: else:
notification_generator = self.conn.notifies() self.conn.poll()
for notification in notification_generator: while self.conn.notifies:
yield notification yield self.conn.notifies.pop(0)
def close(self): def close(self):
self.conn.close() self.conn.close()
@@ -85,12 +60,12 @@ def pg_bus_conn(new_connection=False):
''' '''
if new_connection: if new_connection:
conf = settings.DATABASES['default'].copy() conf = settings.DATABASES['default']
conf['OPTIONS'] = conf.get('OPTIONS', {}).copy() conn = psycopg2.connect(
# Modify the application name to distinguish from other connections the process might use dbname=conf['NAME'], host=conf['HOST'], user=conf['USER'], password=conf['PASSWORD'], port=conf['PORT'], **conf.get("OPTIONS", {})
conf['OPTIONS']['application_name'] = get_application_name(settings.CLUSTER_HOST_ID, function='listener') )
connection_data = f"dbname={conf['NAME']} host={conf['HOST']} user={conf['USER']} password={conf['PASSWORD']} port={conf['PORT']}" # Django connection.cursor().connection doesn't have autocommit=True on by default
conn = psycopg.connect(connection_data, autocommit=True, **conf['OPTIONS']) conn.set_session(autocommit=True)
else: else:
if pg_connection.connection is None: if pg_connection.connection is None:
pg_connection.connect() pg_connection.connect()
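
The events() change above tracks the driver switch: psycopg2 exposes pending notifications as a list that must be drained after poll(), while psycopg 3 exposes a blocking generator. A minimal sketch of the two consumption styles (connection setup omitted; conn is assumed to already be LISTENing):

```python
# psycopg2 style: poll the socket, then drain the notifies list.
def drain_psycopg2(conn):
    conn.poll()
    while conn.notifies:
        yield conn.notifies.pop(0)

# psycopg 3 style: notifies() is a generator that yields as notifications arrive.
def drain_psycopg3(conn):
    for notification in conn.notifies():
        yield notification
```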


@@ -6,7 +6,7 @@ from django.conf import settings
from django.db import connection from django.db import connection
import redis import redis
from awx.main.dispatch import get_task_queuename from awx.main.dispatch import get_local_queuename
from . import pg_bus_conn from . import pg_bus_conn
@@ -21,7 +21,7 @@ class Control(object):
if service not in self.services: if service not in self.services:
raise RuntimeError('{} must be in {}'.format(service, self.services)) raise RuntimeError('{} must be in {}'.format(service, self.services))
self.service = service self.service = service
self.queuename = host or get_task_queuename() self.queuename = host or get_local_queuename()
def status(self, *args, **kwargs): def status(self, *args, **kwargs):
r = redis.Redis.from_url(settings.BROKER_URL) r = redis.Redis.from_url(settings.BROKER_URL)
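
As a usage sketch (hedged, based only on the constructor check and status() visible above): the control client targets a named service on a specific instance queue and reports its status via the broker.

```python
from awx.main.dispatch.control import Control

# 'dispatcher' is assumed to be one of Control.services; the queue name
# defaults to the value resolved by get_task_queuename()/get_local_queuename().
ctl = Control('dispatcher')
print(ctl.status())
```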


@@ -10,7 +10,6 @@ from django_guid import set_guid
from django_guid.utils import generate_guid from django_guid.utils import generate_guid
from awx.main.dispatch.worker import TaskWorker from awx.main.dispatch.worker import TaskWorker
from awx.main.utils.db import set_connection_name
logger = logging.getLogger('awx.main.dispatch.periodic') logger = logging.getLogger('awx.main.dispatch.periodic')
@@ -22,9 +21,6 @@ class Scheduler(Scheduler):
def run(): def run():
ppid = os.getppid() ppid = os.getppid()
logger.warning('periodic beat started') logger.warning('periodic beat started')
set_connection_name('periodic') # set application_name to distinguish from other dispatcher processes
while True: while True:
if os.getppid() != ppid: if os.getppid() != ppid:
# if the parent PID changes, this process has been orphaned # if the parent PID changes, this process has been orphaned


@@ -339,17 +339,6 @@ class AutoscalePool(WorkerPool):
# but if the task takes longer than the time defined here, we will force it to stop here # but if the task takes longer than the time defined here, we will force it to stop here
self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD self.task_manager_timeout = settings.TASK_MANAGER_TIMEOUT + settings.TASK_MANAGER_TIMEOUT_GRACE_PERIOD
# initialize some things for subsystem metrics periodic gathering
# the AutoscalePool class does not save these to redis directly, but reports via produce_subsystem_metrics
self.scale_up_ct = 0
self.worker_count_max = 0
def produce_subsystem_metrics(self, metrics_object):
metrics_object.set('dispatcher_pool_scale_up_events', self.scale_up_ct)
metrics_object.set('dispatcher_pool_active_task_count', sum(len(w.managed_tasks) for w in self.workers))
metrics_object.set('dispatcher_pool_max_worker_count', self.worker_count_max)
self.worker_count_max = len(self.workers)
@property @property
def should_grow(self): def should_grow(self):
if len(self.workers) < self.min_workers: if len(self.workers) < self.min_workers:
@@ -454,12 +443,7 @@ class AutoscalePool(WorkerPool):
idx = random.choice(range(len(self.workers))) idx = random.choice(range(len(self.workers)))
return idx, self.workers[idx] return idx, self.workers[idx]
else: else:
self.scale_up_ct += 1 return super(AutoscalePool, self).up()
ret = super(AutoscalePool, self).up()
new_worker_ct = len(self.workers)
if new_worker_ct > self.worker_count_max:
self.worker_count_max = new_worker_ct
return ret
def write(self, preferred_queue, body): def write(self, preferred_queue, body):
if 'guid' in body: if 'guid' in body:


@@ -70,7 +70,7 @@ def reap_waiting(instance=None, status='failed', job_explanation=None, grace_per
reap_job(j, status, job_explanation=job_explanation) reap_job(j, status, job_explanation=job_explanation)
def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None, ref_time=None): def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=None):
""" """
Reap all jobs in running for this instance. Reap all jobs in running for this instance.
""" """
@@ -79,11 +79,9 @@ def reap(instance=None, status='failed', job_explanation=None, excluded_uuids=No
else: else:
hostname = instance.hostname hostname = instance.hostname
workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
base_Q = Q(status='running') & (Q(execution_node=hostname) | Q(controller_node=hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id) jobs = UnifiedJob.objects.filter(
if ref_time: Q(status='running') & (Q(execution_node=hostname) | Q(controller_node=hostname)) & ~Q(polymorphic_ctype_id=workflow_ctype_id)
jobs = UnifiedJob.objects.filter(base_Q & Q(started__lte=ref_time)) )
else:
jobs = UnifiedJob.objects.filter(base_Q)
if excluded_uuids: if excluded_uuids:
jobs = jobs.exclude(celery_task_id__in=excluded_uuids) jobs = jobs.exclude(celery_task_id__in=excluded_uuids)
for j in jobs: for j in jobs:
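
The reap() hunk above optionally narrows the running-jobs query by a reference time. A sketch of just the filter construction, pulled out as a helper for clarity (the function name here is illustrative):

```python
from django.db.models import Q

def running_jobs_filter(hostname, workflow_ctype_id, ref_time=None):
    base_q = (
        Q(status='running')
        & (Q(execution_node=hostname) | Q(controller_node=hostname))
        & ~Q(polymorphic_ctype_id=workflow_ctype_id)
    )
    if ref_time:
        base_q &= Q(started__lte=ref_time)
    return base_q
```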


@@ -7,7 +7,7 @@ import signal
import sys import sys
import redis import redis
import json import json
import psycopg import psycopg2
import time import time
from uuid import UUID from uuid import UUID
from queue import Empty as QueueEmpty from queue import Empty as QueueEmpty
@@ -18,8 +18,6 @@ from django.conf import settings
from awx.main.dispatch.pool import WorkerPool from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch import pg_bus_conn from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime from awx.main.utils.common import log_excess_runtime
from awx.main.utils.db import set_connection_name
import awx.main.analytics.subsystem_metrics as s_metrics
if 'run_callback_receiver' in sys.argv: if 'run_callback_receiver' in sys.argv:
logger = logging.getLogger('awx.main.commands.run_callback_receiver') logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -143,10 +141,9 @@ class AWXConsumerRedis(AWXConsumerBase):
def run(self, *args, **kwargs): def run(self, *args, **kwargs):
super(AWXConsumerRedis, self).run(*args, **kwargs) super(AWXConsumerRedis, self).run(*args, **kwargs)
self.worker.on_start() self.worker.on_start()
logger.info(f'Callback receiver started with pid={os.getpid()}')
db.connection.close() # logs use database, so close connection
while True: while True:
logger.debug(f'{os.getpid()} is alive')
time.sleep(60) time.sleep(60)
@@ -156,30 +153,17 @@ class AWXConsumerPG(AWXConsumerBase):
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE self.pg_max_wait = settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE
# if no successful loops have ran since startup, then we should fail right away # if no successful loops have ran since startup, then we should fail right away
self.pg_is_down = True # set so that we fail if we get database errors on startup self.pg_is_down = True # set so that we fail if we get database errors on startup
init_time = time.time() self.pg_down_time = time.time() - self.pg_max_wait # allow no grace period
self.pg_down_time = init_time - self.pg_max_wait # allow no grace period self.last_cleanup = time.time()
self.last_cleanup = init_time
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
def run_periodic_tasks(self): def run_periodic_tasks(self):
self.record_statistics() # maintains time buffer in method self.record_statistics() # maintains time buffer in method
current_time = time.time() if time.time() - self.last_cleanup > 60: # same as cluster_node_heartbeat
if current_time - self.last_cleanup > 60: # same as cluster_node_heartbeat
# NOTE: if we run out of database connections, it is important to still run cleanup # NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections # so that we scale down workers and free up connections
self.pool.cleanup() self.pool.cleanup()
self.last_cleanup = current_time self.last_cleanup = time.time()
# record subsystem metrics for the dispatcher
if current_time - self.last_metrics_gather > 20:
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
def run(self, *args, **kwargs): def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs) super(AWXConsumerPG, self).run(*args, **kwargs)
@@ -195,20 +179,17 @@ class AWXConsumerPG(AWXConsumerBase):
if init is False: if init is False:
self.worker.on_start() self.worker.on_start()
init = True init = True
self.listen_start = time.time()
for e in conn.events(yield_timeouts=True): for e in conn.events(yield_timeouts=True):
self.listen_cumulative_time += time.time() - self.listen_start
if e is not None: if e is not None:
self.process_task(json.loads(e.payload)) self.process_task(json.loads(e.payload))
self.run_periodic_tasks() self.run_periodic_tasks()
self.pg_is_down = False self.pg_is_down = False
self.listen_start = time.time()
if self.should_stop: if self.should_stop:
return return
except psycopg.InterfaceError: except psycopg2.InterfaceError:
logger.warning("Stale Postgres message bus connection, reconnecting") logger.warning("Stale Postgres message bus connection, reconnecting")
continue continue
except (db.DatabaseError, psycopg.OperationalError): except (db.DatabaseError, psycopg2.OperationalError):
# If we have attained stady state operation, tolerate short-term database hickups # If we have attained stady state operation, tolerate short-term database hickups
if not self.pg_is_down: if not self.pg_is_down:
logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s") logger.exception(f"Error consuming new events from postgres, will retry for {self.pg_max_wait} s")
@@ -238,7 +219,6 @@ class BaseWorker(object):
def work_loop(self, queue, finished, idx, *args): def work_loop(self, queue, finished, idx, *args):
ppid = os.getppid() ppid = os.getppid()
signal_handler = WorkerSignalHandler() signal_handler = WorkerSignalHandler()
set_connection_name('worker') # set application_name to distinguish from other dispatcher processes
while not signal_handler.kill_now: while not signal_handler.kill_now:
# if the parent PID changes, this process has been orphaned # if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too # via e.g., segfault or sigkill, we should exit too
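
The run_periodic_tasks hunk above reports a dispatcher_availability gauge as the share of each roughly 20-second gather window spent blocked in listen. The arithmetic, isolated with made-up numbers:

```python
def dispatcher_availability(listen_cumulative_time: float, window_seconds: float) -> float:
    """Fraction of the metrics window spent waiting on the pg_notify listen loop."""
    return listen_cumulative_time / window_seconds

# e.g. 18.5s of a 20s gather window spent listening -> 0.925
assert round(dispatcher_availability(18.5, 20.0), 3) == 0.925
```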


@@ -191,9 +191,7 @@ class CallbackBrokerWorker(BaseWorker):
e._retry_count = retry_count e._retry_count = retry_count
# special sanitization logic for postgres treatment of NUL 0x00 char # special sanitization logic for postgres treatment of NUL 0x00 char
# This used to check the class of the exception but on the postgres3 upgrade it could appear if (retry_count == 1) and isinstance(exc_indv, ValueError) and ("\x00" in e.stdout):
# as either DataError or ValueError, so now lets just try if its there.
if (retry_count == 1) and ("\x00" in e.stdout):
e.stdout = e.stdout.replace("\x00", "") e.stdout = e.stdout.replace("\x00", "")
if retry_count >= self.INDIVIDUAL_EVENT_RETRIES: if retry_count >= self.INDIVIDUAL_EVENT_RETRIES:


@@ -26,8 +26,8 @@ class TaskWorker(BaseWorker):
`awx.main.dispatch.publish`. `awx.main.dispatch.publish`.
""" """
@staticmethod @classmethod
def resolve_callable(task): def resolve_callable(cls, task):
""" """
Transform a dotted notation task into an imported, callable function, e.g., Transform a dotted notation task into an imported, callable function, e.g.,
@@ -46,8 +46,7 @@ class TaskWorker(BaseWorker):
return _call return _call
@staticmethod def run_callable(self, body):
def run_callable(body):
""" """
Given some AMQP message, import the correct Python code and run it. Given some AMQP message, import the correct Python code and run it.
""" """


@@ -67,60 +67,10 @@ def __enum_validate__(validator, enums, instance, schema):
Draft4Validator.VALIDATORS['enum'] = __enum_validate__ Draft4Validator.VALIDATORS['enum'] = __enum_validate__
import logging
logger = logging.getLogger('awx.main.fields')
class JSONBlob(JSONField): class JSONBlob(JSONField):
# Cringe... a JSONField that is back ended with a TextField.
# This field was a legacy custom field type that tl;dr; was a TextField
# Over the years, with Django upgrades, we were able to go to a JSONField instead of the custom field
# However, we didn't want to have large customers with millions of events to update from text to json during an upgrade
# So we keep this field type as backended with TextField.
def get_internal_type(self): def get_internal_type(self):
return "TextField" return "TextField"
# postgres uses a Jsonb field as the default backend
# with psycopg2 it was using a psycopg2._json.Json class internally
# with psycopg3 it uses a psycopg.types.json.Jsonb class internally
# The binary class was not compatible with a text field, so we are going to override these next two methods and ensure we are using a string
def from_db_value(self, value, expression, connection):
if value is None:
return value
if isinstance(value, str):
try:
return json.loads(value)
except Exception as e:
logger.error(f"Failed to load JSONField {self.name}: {e}")
return value
def get_db_prep_value(self, value, connection, prepared=False):
if not prepared:
value = self.get_prep_value(value)
try:
# Null characters are not allowed in text fields and JSONBlobs are JSON data but saved as text
# So we want to make sure we strip out any null characters also note, these "should" be escaped by the dumps process:
# >>> my_obj = { 'test': '\x00' }
# >>> import json
# >>> json.dumps(my_obj)
# '{"test": "\\u0000"}'
# But just to be safe, lets remove them if they are there. \x00 and \u0000 are the same:
# >>> string = "\x00"
# >>> "\u0000" in string
# True
dumped_value = json.dumps(value)
if "\x00" in dumped_value:
dumped_value = dumped_value.replace("\x00", '')
return dumped_value
except Exception as e:
logger.error(f"Failed to dump JSONField {self.name}: {e} value: {value}")
return value
# Based on AutoOneToOneField from django-annoying: # Based on AutoOneToOneField from django-annoying:
# https://bitbucket.org/offline/django-annoying/src/a0de8b294db3/annoying/fields.py # https://bitbucket.org/offline/django-annoying/src/a0de8b294db3/annoying/fields.py
@@ -850,7 +800,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
def validate_env_var_allowed(self, env_var): def validate_env_var_allowed(self, env_var):
if env_var.startswith('ANSIBLE_'): if env_var.startswith('ANSIBLE_'):
raise django_exceptions.ValidationError( raise django_exceptions.ValidationError(
_('Environment variable {} may affect Ansible configuration so its use is not allowed in credentials.').format(env_var), _('Environment variable {} may affect Ansible configuration so its ' 'use is not allowed in credentials.').format(env_var),
code='invalid', code='invalid',
params={'value': env_var}, params={'value': env_var},
) )
@@ -1004,16 +954,6 @@ class OrderedManyToManyDescriptor(ManyToManyDescriptor):
def get_queryset(self): def get_queryset(self):
return super(OrderedManyRelatedManager, self).get_queryset().order_by('%s__position' % self.through._meta.model_name) return super(OrderedManyRelatedManager, self).get_queryset().order_by('%s__position' % self.through._meta.model_name)
def add(self, *objects):
if len(objects) > 1:
raise RuntimeError('Ordered many-to-many fields do not support multiple objects')
return super().add(*objects)
def remove(self, *objects):
if len(objects) > 1:
raise RuntimeError('Ordered many-to-many fields do not support multiple objects')
return super().remove(*objects)
return OrderedManyRelatedManager return OrderedManyRelatedManager
return add_custom_queryset_to_many_related_manager( return add_custom_queryset_to_many_related_manager(
@@ -1031,12 +971,13 @@ class OrderedManyToManyField(models.ManyToManyField):
by a special `position` column on the M2M table by a special `position` column on the M2M table
""" """
def _update_m2m_position(self, sender, instance, action, **kwargs): def _update_m2m_position(self, sender, **kwargs):
if action in ('post_add', 'post_remove'): if kwargs.get('action') in ('post_add', 'post_remove'):
descriptor = getattr(instance, self.name) order_with_respect_to = None
order_with_respect_to = descriptor.source_field_name for field in sender._meta.local_fields:
if isinstance(field, models.ForeignKey) and isinstance(kwargs['instance'], field.related_model):
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: instance.pk})): order_with_respect_to = field.name
for i, ig in enumerate(sender.objects.filter(**{order_with_respect_to: kwargs['instance'].pk})):
if ig.position != i: if ig.position != i:
ig.position = i ig.position = i
ig.save() ig.save()


@@ -23,7 +23,7 @@ class Command(BaseCommand):
def add_arguments(self, parser): def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old') parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove activity stream events more than N days old')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)') parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would ' 'be removed)')
def init_logging(self): def init_logging(self):
log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0])) log_levels = dict(enumerate([logging.ERROR, logging.INFO, logging.DEBUG, 0]))


@@ -1,22 +0,0 @@
from awx.main.models import HostMetric
from django.core.management.base import BaseCommand
from django.conf import settings
class Command(BaseCommand):
"""
Run soft-deleting of HostMetrics
"""
help = 'Run soft-deleting of HostMetrics'
def add_arguments(self, parser):
parser.add_argument('--months-ago', type=int, dest='months-ago', action='store', help='Threshold in months for soft-deleting')
def handle(self, *args, **options):
months_ago = options.get('months-ago') or None
if not months_ago:
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_THRESHOLD', 12)
HostMetric.cleanup_task(months_ago)


@@ -17,7 +17,10 @@ from django.utils.timezone import now
# AWX # AWX
from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob, WorkflowJob, Notification from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob, WorkflowJob, Notification
from awx.main.utils import unified_job_class_to_event_table_name
def unified_job_class_to_event_table_name(job_class):
return f'main_{job_class().event_class.__name__.lower()}'
def partition_table_name(job_class, dt): def partition_table_name(job_class, dt):
@@ -149,7 +152,7 @@ class Command(BaseCommand):
def add_arguments(self, parser): def add_arguments(self, parser):
parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.') parser.add_argument('--days', dest='days', type=int, default=90, metavar='N', help='Remove jobs/updates executed more than N days ago. Defaults to 90.')
parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would be removed)') parser.add_argument('--dry-run', dest='dry_run', action='store_true', default=False, help='Dry run mode (show items that would ' 'be removed)')
parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs') parser.add_argument('--jobs', dest='only_jobs', action='store_true', default=False, help='Remove jobs')
parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands') parser.add_argument('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, help='Remove ad hoc commands')
parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates') parser.add_argument('--project-updates', dest='only_project_updates', action='store_true', default=False, help='Remove project updates')


@@ -44,7 +44,7 @@ class Command(BaseCommand):
'- To list all (now deprecated) custom virtual environments run:', '- To list all (now deprecated) custom virtual environments run:',
'awx-manage list_custom_venvs', 'awx-manage list_custom_venvs',
'', '',
'- To export the contents of a (deprecated) virtual environment, run the following command while supplying the path as an argument:', '- To export the contents of a (deprecated) virtual environment, ' 'run the following command while supplying the path as an argument:',
'awx-manage export_custom_venv /path/to/venv', 'awx-manage export_custom_venv /path/to/venv',
'', '',
'- Run these commands with `-q` to remove tool tips.', '- Run these commands with `-q` to remove tool tips.',


@@ -13,7 +13,7 @@ class Command(BaseCommand):
Deprovision a cluster node Deprovision a cluster node
""" """
help = 'Remove instance from the database. Specify `--hostname` to use this command.' help = 'Remove instance from the database. ' 'Specify `--hostname` to use this command.'
def add_arguments(self, parser): def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help='Hostname used during provisioning') parser.add_argument('--hostname', dest='hostname', type=str, help='Hostname used during provisioning')


@@ -1,143 +0,0 @@
import time
from urllib.parse import urljoin
from argparse import ArgumentTypeError
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError
from django.db.models import Q
from django.utils.timezone import now
from awx.main.models import Instance, UnifiedJob
class AWXInstance:
def __init__(self, **filter):
self.filter = filter
self.get_instance()
def get_instance(self):
filter = self.filter if self.filter is not None else dict(hostname=settings.CLUSTER_HOST_ID)
qs = Instance.objects.filter(**filter)
if not qs.exists():
raise ValueError(f"No AWX instance found with {filter} parameters")
self.instance = qs.first()
def disable(self):
if self.instance.enabled:
self.instance.enabled = False
self.instance.save()
return True
def enable(self):
if not self.instance.enabled:
self.instance.enabled = True
self.instance.save()
return True
def jobs(self):
return UnifiedJob.objects.filter(
Q(controller_node=self.instance.hostname) | Q(execution_node=self.instance.hostname), status__in=("running", "waiting")
)
def jobs_pretty(self):
jobs = []
for j in self.jobs():
job_started = j.started if j.started else now()
# similar calculation of `elapsed` as the corresponding serializer
# does
td = now() - job_started
elapsed = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / (10**6 * 1.0)
elapsed = float(elapsed)
details = dict(
name=j.name,
url=j.get_ui_url(),
elapsed=elapsed,
)
jobs.append(details)
jobs = sorted(jobs, reverse=True, key=lambda j: j["elapsed"])
return ", ".join([f"[\"{j['name']}\"]({j['url']})" for j in jobs])
def instance_pretty(self):
instance = (
self.instance.hostname,
urljoin(settings.TOWER_URL_BASE, f"/#/instances/{self.instance.pk}/details"),
)
return f"[\"{instance[0]}\"]({instance[1]})"
class Command(BaseCommand):
help = "Disable instance, optionally waiting for all its managed jobs to finish."
@staticmethod
def ge_1(arg):
if arg == "inf":
return float("inf")
int_arg = int(arg)
if int_arg < 1:
raise ArgumentTypeError(f"The value must be a positive number >= 1. Provided: \"{arg}\"")
return int_arg
def add_arguments(self, parser):
filter_group = parser.add_mutually_exclusive_group()
filter_group.add_argument(
"--hostname",
type=str,
default=settings.CLUSTER_HOST_ID,
help=f"{Instance.hostname.field.help_text} Defaults to the hostname of the machine where the Python interpreter is currently executing".strip(),
)
filter_group.add_argument("--id", type=self.ge_1, help=Instance.id.field.help_text)
parser.add_argument(
"--wait",
action="store_true",
help="Wait for jobs managed by the instance to finish. With default retry arguments waits ~1h",
)
parser.add_argument(
"--retry",
type=self.ge_1,
default=120,
help="Number of retries when waiting for jobs to finish. Default: 120. Also accepts \"inf\" to wait indefinitely",
)
parser.add_argument(
"--retry_sleep",
type=self.ge_1,
default=30,
help="Number of seconds to sleep before consequtive retries when waiting. Default: 30",
)
def handle(self, *args, **options):
try:
filter = dict(id=options["id"]) if options["id"] is not None else dict(hostname=options["hostname"])
instance = AWXInstance(**filter)
except ValueError as e:
raise CommandError(e)
if instance.disable():
self.stdout.write(self.style.SUCCESS(f"Instance {instance.instance_pretty()} has been disabled"))
else:
self.stdout.write(f"Instance {instance.instance_pretty()} has already been disabled")
if not options["wait"]:
return
rc = 1
while instance.jobs().count() > 0:
if rc < options["retry"]:
self.stdout.write(
f"{rc}/{options['retry']}: Waiting {options['retry_sleep']}s before the next attempt to see if the following instance' managed jobs have finished: {instance.jobs_pretty()}"
)
rc += 1
time.sleep(options["retry_sleep"])
else:
raise CommandError(
f"{rc}/{options['retry']}: No more retry attempts left, but the instance still has associated managed jobs: {instance.jobs_pretty()}"
)
else:
self.stdout.write(self.style.SUCCESS("Done waiting for instance' managed jobs to finish!"))


@@ -1,6 +1,5 @@
from awx.main.tasks.system import clear_setting_cache
from django.conf import settings
from django.core.management.base import BaseCommand, CommandError from django.core.management.base import BaseCommand, CommandError
from django.conf import settings
class Command(BaseCommand): class Command(BaseCommand):
@@ -32,7 +31,5 @@ class Command(BaseCommand):
else: else:
raise CommandError('Please pass --enable flag to allow local auth or --disable flag to disable local auth') raise CommandError('Please pass --enable flag to allow local auth or --disable flag to disable local auth')
clear_setting_cache.delay(['DISABLE_LOCAL_AUTH'])
def handle(self, **options): def handle(self, **options):
self._enable_disable_auth(options.get('enable'), options.get('disable')) self._enable_disable_auth(options.get('enable'), options.get('disable'))


@@ -1,230 +1,53 @@
from django.core.management.base import BaseCommand from django.core.management.base import BaseCommand
import datetime import datetime
from django.core.serializers.json import DjangoJSONEncoder from django.core.serializers.json import DjangoJSONEncoder
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly from awx.main.models.inventory import HostMetric
from awx.main.analytics.collectors import config
import json import json
import sys
import tempfile
import tarfile
import csv
CSV_PREFERRED_ROW_COUNT = 500000
BATCHED_FETCH_COUNT = 10000
class Command(BaseCommand): class Command(BaseCommand):
help = 'This is for offline licensing usage' help = 'This is for offline licensing usage'
def host_metric_queryset(self, result, offset=0, limit=BATCHED_FETCH_COUNT):
list_of_queryset = list(
result.values(
'id',
'hostname',
'first_automation',
'last_automation',
'last_deleted',
'automated_counter',
'deleted_counter',
'deleted',
'used_in_inventories',
).order_by('first_automation')[offset : offset + limit]
)
return list_of_queryset
def host_metric_summary_monthly_queryset(self, result, offset=0, limit=BATCHED_FETCH_COUNT):
list_of_queryset = list(
result.values(
'id',
'date',
'license_consumed',
'license_capacity',
'hosts_added',
'hosts_deleted',
'indirectly_managed_hosts',
).order_by(
'date'
)[offset : offset + limit]
)
return list_of_queryset
def paginated_db_retrieval(self, type, filter_kwargs, rows_per_file):
offset = 0
list_of_queryset = []
while True:
if type == 'host_metric':
result = HostMetric.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_queryset(result, offset, rows_per_file)
elif type == 'host_metric_summary_monthly':
result = HostMetricSummaryMonthly.objects.filter(**filter_kwargs)
list_of_queryset = self.host_metric_summary_monthly_queryset(result, offset, rows_per_file)
if not list_of_queryset:
break
else:
yield list_of_queryset
offset += len(list_of_queryset)
def controlled_db_retrieval(self, type, filter_kwargs, offset=0, fetch_count=BATCHED_FETCH_COUNT):
if type == 'host_metric':
result = HostMetric.objects.filter(**filter_kwargs)
return self.host_metric_queryset(result, offset, fetch_count)
elif type == 'host_metric_summary_monthly':
result = HostMetricSummaryMonthly.objects.filter(**filter_kwargs)
return self.host_metric_summary_monthly_queryset(result, offset, fetch_count)
def write_to_csv(self, csv_file, list_of_queryset, always_header, first_write=False, mode='a'):
with open(csv_file, mode, newline='') as output_file:
try:
keys = list_of_queryset[0].keys() if list_of_queryset else []
dict_writer = csv.DictWriter(output_file, keys)
if always_header or first_write:
dict_writer.writeheader()
dict_writer.writerows(list_of_queryset)
except Exception as e:
print(e)
def csv_for_tar(self, temp_dir, type, filter_kwargs, rows_per_file, always_header=True):
for index, list_of_queryset in enumerate(self.paginated_db_retrieval(type, filter_kwargs, rows_per_file)):
csv_file = f'{temp_dir}/{type}{index+1}.csv'
arcname_file = f'{type}{index+1}.csv'
first_write = True if index == 0 else False
self.write_to_csv(csv_file, list_of_queryset, always_header, first_write, 'w')
yield csv_file, arcname_file
def csv_for_tar_batched_fetch(self, temp_dir, type, filter_kwargs, rows_per_file, always_header=True):
csv_iteration = 1
offset = 0
rows_written_per_csv = 0
to_fetch = BATCHED_FETCH_COUNT
while True:
list_of_queryset = self.controlled_db_retrieval(type, filter_kwargs, offset, to_fetch)
if not list_of_queryset:
break
csv_file = f'{temp_dir}/{type}{csv_iteration}.csv'
arcname_file = f'{type}{csv_iteration}.csv'
self.write_to_csv(csv_file, list_of_queryset, always_header)
offset += to_fetch
rows_written_per_csv += to_fetch
always_header = False
remaining_rows_per_csv = rows_per_file - rows_written_per_csv
if not remaining_rows_per_csv:
yield csv_file, arcname_file
rows_written_per_csv = 0
always_header = True
to_fetch = BATCHED_FETCH_COUNT
csv_iteration += 1
elif remaining_rows_per_csv < BATCHED_FETCH_COUNT:
to_fetch = remaining_rows_per_csv
if rows_written_per_csv:
yield csv_file, arcname_file
def config_for_tar(self, options, temp_dir):
config_json = json.dumps(config(options.get('since')))
config_file = f'{temp_dir}/config.json'
arcname_file = 'config.json'
with open(config_file, 'w') as f:
f.write(config_json)
return config_file, arcname_file
def output_json(self, options, filter_kwargs):
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in self.csv_for_tar(temp_dir, options.get('json', 'host_metric'), filter_kwargs, BATCHED_FETCH_COUNT, True):
csv_file = csv_detail[0]
with open(csv_file) as f:
reader = csv.DictReader(f)
rows = list(reader)
json_result = json.dumps(rows, cls=DjangoJSONEncoder)
print(json_result)
def output_csv(self, options, filter_kwargs):
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in self.csv_for_tar(temp_dir, options.get('csv', 'host_metric'), filter_kwargs, BATCHED_FETCH_COUNT, False):
csv_file = csv_detail[0]
with open(csv_file) as f:
sys.stdout.write(f.read())
def output_tarball(self, options, filter_kwargs):
always_header = True
rows_per_file = options['rows_per_file'] or CSV_PREFERRED_ROW_COUNT
tar = tarfile.open("./host_metrics.tar.gz", "w:gz")
if rows_per_file <= BATCHED_FETCH_COUNT:
csv_function = self.csv_for_tar
else:
csv_function = self.csv_for_tar_batched_fetch
with tempfile.TemporaryDirectory() as temp_dir:
for csv_detail in csv_function(temp_dir, 'host_metric', filter_kwargs, rows_per_file, always_header):
tar.add(csv_detail[0], arcname=csv_detail[1])
for csv_detail in csv_function(temp_dir, 'host_metric_summary_monthly', filter_kwargs, rows_per_file, always_header):
tar.add(csv_detail[0], arcname=csv_detail[1])
config_file, arcname_file = self.config_for_tar(options, temp_dir)
tar.add(config_file, arcname=arcname_file)
tar.close()
def add_arguments(self, parser): def add_arguments(self, parser):
parser.add_argument('--since', type=datetime.datetime.fromisoformat, help='Start Date in ISO format YYYY-MM-DD') parser.add_argument('--since', type=datetime.datetime.fromisoformat, help='Start Date in ISO format YYYY-MM-DD')
parser.add_argument('--json', type=str, const='host_metric', nargs='?', help='Select output as JSON for host_metric or host_metric_summary_monthly') parser.add_argument('--until', type=datetime.datetime.fromisoformat, help='End Date in ISO format YYYY-MM-DD')
parser.add_argument('--csv', type=str, const='host_metric', nargs='?', help='Select output as CSV for host_metric or host_metric_summary_monthly') parser.add_argument('--json', action='store_true', help='Select output as JSON')
parser.add_argument('--tarball', action='store_true', help=f'Package CSV files into a tar with upto {CSV_PREFERRED_ROW_COUNT} rows')
parser.add_argument('--rows_per_file', type=int, help=f'Split rows in chunks of {CSV_PREFERRED_ROW_COUNT}')
def handle(self, *args, **options): def handle(self, *args, **options):
since = options.get('since') since = options.get('since')
until = options.get('until')
if since is None and until is None:
print("No Arguments received")
return None
if since is not None and since.tzinfo is None: if since is not None and since.tzinfo is None:
since = since.replace(tzinfo=datetime.timezone.utc) since = since.replace(tzinfo=datetime.timezone.utc)
if until is not None and until.tzinfo is None:
until = until.replace(tzinfo=datetime.timezone.utc)
filter_kwargs = {} filter_kwargs = {}
if since is not None: if since is not None:
filter_kwargs['last_automation__gte'] = since filter_kwargs['last_automation__gte'] = since
if until is not None:
filter_kwargs['last_automation__lte'] = until
filter_kwargs_host_metrics_summary = {} result = HostMetric.objects.filter(**filter_kwargs)
if since is not None:
filter_kwargs_host_metrics_summary['date__gte'] = since
if options['rows_per_file'] and options.get('rows_per_file') > CSV_PREFERRED_ROW_COUNT:
print(f"rows_per_file exceeds the allowable limit of {CSV_PREFERRED_ROW_COUNT}.")
return
# if --json flag is set, output the result in json format # if --json flag is set, output the result in json format
if options['json']: if options['json']:
self.output_json(options, filter_kwargs) list_of_queryset = list(result.values('hostname', 'first_automation', 'last_automation'))
elif options['csv']: json_result = json.dumps(list_of_queryset, cls=DjangoJSONEncoder)
self.output_csv(options, filter_kwargs) print(json_result)
elif options['tarball']:
self.output_tarball(options, filter_kwargs)
# --json flag is not set, output in plain text # --json flag is not set, output in plain text
else: else:
print(f"Printing up to {BATCHED_FETCH_COUNT} automated hosts:") print(f"Total Number of hosts automated: {len(result)}")
result = HostMetric.objects.filter(**filter_kwargs) for item in result:
list_of_queryset = self.host_metric_queryset(result, 0, BATCHED_FETCH_COUNT)
for item in list_of_queryset:
print( print(
"Hostname : {hostname} | first_automation : {first_automation} | last_automation : {last_automation}".format( "Hostname : {hostname} | first_automation : {first_automation} | last_automation : {last_automation}".format(
hostname=item['hostname'], first_automation=item['first_automation'], last_automation=item['last_automation'] hostname=item.hostname, first_automation=item.first_automation, last_automation=item.last_automation
) )
) )
return return


@@ -458,19 +458,12 @@ class Command(BaseCommand):
# TODO: We disable variable overwrite here in case user-defined inventory variables get # TODO: We disable variable overwrite here in case user-defined inventory variables get
# mangled. But we still need to figure out a better way of processing multiple inventory # mangled. But we still need to figure out a better way of processing multiple inventory
# update variables mixing with each other. # update variables mixing with each other.
# issue for this: https://github.com/ansible/awx/issues/11623 all_obj = self.inventory
db_variables = all_obj.variables_dict
if self.inventory.kind == 'constructed' and self.inventory_source.overwrite_vars: db_variables.update(self.all_group.variables)
# NOTE: we had to add a exception case to not merge variables if db_variables != all_obj.variables_dict:
# to make constructed inventory coherent all_obj.variables = json.dumps(db_variables)
db_variables = self.all_group.variables all_obj.save(update_fields=['variables'])
else:
db_variables = self.inventory.variables_dict
db_variables.update(self.all_group.variables)
if db_variables != self.inventory.variables_dict:
self.inventory.variables = json.dumps(db_variables)
self.inventory.save(update_fields=['variables'])
logger.debug('Inventory variables updated from "all" group') logger.debug('Inventory variables updated from "all" group')
else: else:
logger.debug('Inventory variables unmodified') logger.debug('Inventory variables unmodified')
@@ -529,32 +522,16 @@ class Command(BaseCommand):
def _update_db_host_from_mem_host(self, db_host, mem_host): def _update_db_host_from_mem_host(self, db_host, mem_host):
# Update host variables. # Update host variables.
db_variables = db_host.variables_dict db_variables = db_host.variables_dict
mem_variables = mem_host.variables
update_fields = []
# Update host instance_id.
instance_id = self._get_instance_id(mem_variables)
if instance_id != db_host.instance_id:
old_instance_id = db_host.instance_id
db_host.instance_id = instance_id
update_fields.append('instance_id')
if self.inventory.kind == 'constructed':
# remote towervars so the constructed hosts do not have extra variables
for prefix in ('host', 'tower'):
for var in ('remote_{}_enabled', 'remote_{}_id'):
mem_variables.pop(var.format(prefix), None)
if self.overwrite_vars: if self.overwrite_vars:
db_variables = mem_variables db_variables = mem_host.variables
else: else:
db_variables.update(mem_variables) db_variables.update(mem_host.variables)
update_fields = []
if db_variables != db_host.variables_dict: if db_variables != db_host.variables_dict:
db_host.variables = json.dumps(db_variables) db_host.variables = json.dumps(db_variables)
update_fields.append('variables') update_fields.append('variables')
# Update host enabled flag. # Update host enabled flag.
enabled = self._get_enabled(mem_variables) enabled = self._get_enabled(mem_host.variables)
if enabled is not None and db_host.enabled != enabled: if enabled is not None and db_host.enabled != enabled:
db_host.enabled = enabled db_host.enabled = enabled
update_fields.append('enabled') update_fields.append('enabled')
@@ -563,6 +540,12 @@ class Command(BaseCommand):
old_name = db_host.name old_name = db_host.name
db_host.name = mem_host.name db_host.name = mem_host.name
update_fields.append('name') update_fields.append('name')
# Update host instance_id.
instance_id = self._get_instance_id(mem_host.variables)
if instance_id != db_host.instance_id:
old_instance_id = db_host.instance_id
db_host.instance_id = instance_id
update_fields.append('instance_id')
# Update host and display message(s) on what changed. # Update host and display message(s) on what changed.
if update_fields: if update_fields:
db_host.save(update_fields=update_fields) db_host.save(update_fields=update_fields)
@@ -671,19 +654,13 @@ class Command(BaseCommand):
mem_host = self.all_group.all_hosts[mem_host_name] mem_host = self.all_group.all_hosts[mem_host_name]
import_vars = mem_host.variables import_vars = mem_host.variables
host_desc = import_vars.pop('_awx_description', 'imported') host_desc = import_vars.pop('_awx_description', 'imported')
host_attrs = dict(description=host_desc) host_attrs = dict(variables=json.dumps(import_vars), description=host_desc)
enabled = self._get_enabled(mem_host.variables) enabled = self._get_enabled(mem_host.variables)
if enabled is not None: if enabled is not None:
host_attrs['enabled'] = enabled host_attrs['enabled'] = enabled
if self.instance_id_var: if self.instance_id_var:
instance_id = self._get_instance_id(mem_host.variables) instance_id = self._get_instance_id(mem_host.variables)
host_attrs['instance_id'] = instance_id host_attrs['instance_id'] = instance_id
if self.inventory.kind == 'constructed':
# remote towervars so the constructed hosts do not have extra variables
for prefix in ('host', 'tower'):
for var in ('remote_{}_enabled', 'remote_{}_id'):
import_vars.pop(var.format(prefix), None)
host_attrs['variables'] = json.dumps(import_vars)
try: try:
sanitize_jinja(mem_host_name) sanitize_jinja(mem_host_name)
except ValueError as e: except ValueError as e:
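
The constructed-inventory branches above drop the remote_* bookkeeping variables before host variables are persisted. The stripping loop, isolated as a runnable sketch:

```python
def strip_towervars(variables: dict) -> dict:
    # Removes remote_host_enabled/remote_host_id/remote_tower_enabled/remote_tower_id
    # so constructed-inventory hosts do not carry the extra bookkeeping variables.
    for prefix in ('host', 'tower'):
        for var in ('remote_{}_enabled', 'remote_{}_id'):
            variables.pop(var.format(prefix), None)
    return variables

assert strip_towervars({'remote_host_id': 1, 'ansible_port': 22}) == {'ansible_port': 22}
```
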
@@ -874,7 +851,6 @@ class Command(BaseCommand):
logger.info('Updating inventory %d: %s' % (inventory.pk, inventory.name)) logger.info('Updating inventory %d: %s' % (inventory.pk, inventory.name))
# Create ad-hoc inventory source and inventory update objects # Create ad-hoc inventory source and inventory update objects
ee = get_default_execution_environment()
with ignore_inventory_computed_fields(): with ignore_inventory_computed_fields():
source = Command.get_source_absolute_path(raw_source) source = Command.get_source_absolute_path(raw_source)
@@ -884,22 +860,14 @@ class Command(BaseCommand):
source_path=os.path.abspath(source), source_path=os.path.abspath(source),
overwrite=bool(options.get('overwrite', False)), overwrite=bool(options.get('overwrite', False)),
overwrite_vars=bool(options.get('overwrite_vars', False)), overwrite_vars=bool(options.get('overwrite_vars', False)),
execution_environment=ee,
) )
inventory_update = inventory_source.create_inventory_update( inventory_update = inventory_source.create_inventory_update(
_eager_fields=dict( _eager_fields=dict(status='running', job_args=json.dumps(sys.argv), job_env=dict(os.environ.items()), job_cwd=os.getcwd())
status='running', job_args=json.dumps(sys.argv), job_env=dict(os.environ.items()), job_cwd=os.getcwd(), execution_environment=ee
)
) )
try: data = AnsibleInventoryLoader(source=source, verbosity=verbosity).load()
data = AnsibleInventoryLoader(source=source, verbosity=verbosity).load()
logger.debug('Finished loading from source: %s', source)
except SystemExit: logger.debug('Finished loading from source: %s', source)
logger.debug("Error occurred while running ansible-inventory")
inventory_update.cancel()
sys.exit(1)
status, tb, exc = 'error', '', None status, tb, exc = 'error', '', None
try: try:


@@ -22,7 +22,7 @@ class Command(BaseCommand):
'# Discovered Virtual Environments:', '# Discovered Virtual Environments:',
'\n'.join(venvs), '\n'.join(venvs),
'', '',
'- To export the contents of a (deprecated) virtual environment, run the following command while supplying the path as an argument:', '- To export the contents of a (deprecated) virtual environment, ' 'run the following command while supplying the path as an argument:',
'awx-manage export_custom_venv /path/to/venv', 'awx-manage export_custom_venv /path/to/venv',
'', '',
'- To view the connections a (deprecated) virtual environment had in the database, run the following command while supplying the path as an argument:', '- To view the connections a (deprecated) virtual environment had in the database, run the following command while supplying the path as an argument:',


@@ -44,18 +44,16 @@ class Command(BaseCommand):
for x in ig.instances.all(): for x in ig.instances.all():
color = '\033[92m' color = '\033[92m'
end_color = '\033[0m'
if x.capacity == 0 and x.node_type != 'hop': if x.capacity == 0 and x.node_type != 'hop':
color = '\033[91m' color = '\033[91m'
if not x.enabled: if not x.enabled:
color = '\033[90m[DISABLED] ' color = '\033[90m[DISABLED] '
if no_color: if no_color:
color = '' color = ''
end_color = ''
capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else '' capacity = f' capacity={x.capacity}' if x.node_type != 'hop' else ''
version = f" version={x.version or '?'}" if x.node_type != 'hop' else '' version = f" version={x.version or '?'}" if x.node_type != 'hop' else ''
heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else '' heartbeat = f' heartbeat="{x.last_seen:%Y-%m-%d %H:%M:%S}"' if x.capacity or x.node_type == 'hop' else ''
print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}{end_color}') print(f'\t{color}{x.hostname}{capacity} node_type={x.node_type}{version}{heartbeat}\033[0m')
print() print()


@@ -1,27 +0,0 @@
from django.utils.timezone import now
from django.core.management.base import BaseCommand, CommandParser
from datetime import timedelta
from awx.main.utils.common import create_partition, unified_job_class_to_event_table_name
from awx.main.models import Job, SystemJob, ProjectUpdate, InventoryUpdate, AdHocCommand
class Command(BaseCommand):
"""Command used to precreate database partitions to avoid pg_dump locks"""
def add_arguments(self, parser: CommandParser) -> None:
parser.add_argument('--count', dest='count', action='store', help='The amount of hours of partitions to create', type=int, default=1)
def _create_partitioned_tables(self, count):
tables = list()
for model in (Job, SystemJob, ProjectUpdate, InventoryUpdate, AdHocCommand):
tables.append(unified_job_class_to_event_table_name(model))
start = now()
while count > 0:
for table in tables:
create_partition(table, start)
print(f'Created partitions for {table} {start}')
start = start + timedelta(hours=1)
count -= 1
def handle(self, **options):
self._create_partitioned_tables(count=options.get('count'))


@@ -35,7 +35,7 @@ class Command(BaseCommand):
             from awx.main.management.commands.register_queue import RegisterQueue
-            (changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', node_uuid=settings.SYSTEM_UUID)
+            (changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', uuid=settings.SYSTEM_UUID)
             RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
             RegisterQueue(
                 settings.DEFAULT_EXECUTION_QUEUE_NAME,
@@ -48,7 +48,7 @@ class Command(BaseCommand):
                 max_concurrent_jobs=settings.DEFAULT_EXECUTION_QUEUE_MAX_CONCURRENT_JOBS,
             ).register()
         else:
-            (changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, node_uuid=uuid)
+            (changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, uuid=uuid)
         if changed:
             print("Successfully registered instance {}".format(hostname))
         else:
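
Both sides of this hunk register the local node and its default queues; the only change is the keyword used for the UUID (node_uuid on one side, uuid on the other). A hedged sketch of the control-plane branch as it reads in the right-hand column, assuming an AWX environment where these imports resolve:

# Hedged sketch mirroring the right-hand column of the hunk above.
import os

from django.conf import settings

from awx.main.models import Instance
from awx.main.management.commands.register_queue import RegisterQueue

def register_control_plane_node():
    (changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', uuid=settings.SYSTEM_UUID)
    RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
    return changed, instance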

View File

@@ -1,33 +0,0 @@
import logging
import json

from django.core.management.base import BaseCommand

from awx.main.dispatch import pg_bus_conn
from awx.main.dispatch.worker.task import TaskWorker

logger = logging.getLogger('awx.main.cache_clear')


class Command(BaseCommand):
    """
    Cache Clear
    Runs as a management command and starts a daemon that listens for a pg_notify message to clear the cache.
    """

    help = 'Launch the cache clear daemon'

    def handle(self, *arg, **options):
        try:
            with pg_bus_conn() as conn:
                conn.listen("tower_settings_change")
                for e in conn.events(yield_timeouts=True):
                    if e is not None:
                        body = json.loads(e.payload)
                        logger.info(f"Cache clear request received. Clearing now, payload: {e.payload}")
                        TaskWorker.run_callable(body)
        except Exception:
            # Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
            logger.exception('Encountered unhandled error in cache clear main loop')
            raise
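
The handler above blocks on the tower_settings_change pg_notify channel and hands each JSON payload straight to TaskWorker.run_callable. A hedged sketch of the producing side, using the same conn.notify() call the ws_heartbeat command further down in this diff uses; the payload shape (something run_callable can dispatch) is an assumption, not something this file shows:

# Hedged sketch: publishing a message for the cache-clear daemon above to consume.
# The exact body format expected by TaskWorker.run_callable is assumed, not verified here.
import json

from awx.main.dispatch import pg_bus_conn

def request_cache_clear(body):
    with pg_bus_conn() as conn:
        conn.notify('tower_settings_change', json.dumps(body))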

View File

@@ -3,11 +3,12 @@
 import logging
 import yaml
+from django.conf import settings
 from django.core.cache import cache as django_cache
 from django.core.management.base import BaseCommand
 from django.db import connection as django_connection
-from awx.main.dispatch import get_task_queuename
+from awx.main.dispatch import get_local_queuename
 from awx.main.dispatch.control import Control
 from awx.main.dispatch.pool import AutoscalePool
 from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
@@ -16,6 +17,10 @@ from awx.main.dispatch import periodic
 logger = logging.getLogger('awx.main.dispatch')
+def construct_bcast_queue_name(common_name):
+    return common_name + '_' + settings.CLUSTER_HOST_ID
 class Command(BaseCommand):
     help = 'Launch the task dispatcher'
@@ -71,7 +76,7 @@ class Command(BaseCommand):
         consumer = None
         try:
-            queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
+            queues = ['tower_broadcast_all', get_local_queuename()]
             consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4))
             consumer.run()
         except KeyboardInterrupt:
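
The construct_bcast_queue_name helper added in this hunk simply suffixes the shared broadcast channel with the node's CLUSTER_HOST_ID, so each dispatcher instance derives its own copy of tower_broadcast_all. A trivial illustration with a made-up host ID:

# Illustration of the helper added above; 'awx-1' is a made-up CLUSTER_HOST_ID.
def construct_bcast_queue_name(common_name, cluster_host_id='awx-1'):
    return common_name + '_' + cluster_host_id

assert construct_bcast_queue_name('tower_broadcast_all') == 'tower_broadcast_all_awx-1'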

View File

@@ -1,41 +0,0 @@
import logging
import json

from django.core.management.base import BaseCommand
from django.conf import settings
from django.core.cache import cache

from awx.main.dispatch import pg_bus_conn
from awx.main.dispatch.worker.task import TaskWorker
from awx.main.utils.external_logging import reconfigure_rsyslog

logger = logging.getLogger('awx.main.rsyslog_configurer')


class Command(BaseCommand):
    """
    Rsyslog Configurer
    Runs as a management command and starts rsyslog configurer daemon. Daemon listens
    for pg_notify then calls reconfigure_rsyslog
    """

    help = 'Launch the rsyslog_configurer daemon'

    def handle(self, *arg, **options):
        try:
            with pg_bus_conn() as conn:
                conn.listen("rsyslog_configurer")
                # reconfigure rsyslog on start up
                reconfigure_rsyslog()
                for e in conn.events(yield_timeouts=True):
                    if e is not None:
                        logger.info("Change in logging settings found. Restarting rsyslogd")
                        # clear the cache of relevant settings then restart
                        setting_keys = [k for k in dir(settings) if k.startswith('LOG_AGGREGATOR')]
                        cache.delete_many(setting_keys)
                        settings._awx_conf_memoizedcache.clear()
                        body = json.loads(e.payload)
                        TaskWorker.run_callable(body)
        except Exception:
            # Log unanticipated exception in addition to writing to stderr to get timestamps and other metadata
            logger.exception('Encountered unhandled error in rsyslog_configurer main loop')
            raise
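
Before re-running reconfigure_rsyslog, the handler above invalidates every cached LOG_AGGREGATOR* setting so the new configuration is read fresh. That invalidation step, pulled out as a small helper purely for illustration:

# The cache-invalidation step from the handler above, extracted for illustration.
# (The original also clears settings._awx_conf_memoizedcache before reconfiguring.)
from django.conf import settings
from django.core.cache import cache

def clear_log_aggregator_cache():
    setting_keys = [k for k in dir(settings) if k.startswith('LOG_AGGREGATOR')]
    cache.delete_many(setting_keys)
    return setting_keys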

View File

@@ -1,45 +0,0 @@
import json
import logging
import os
import time
import signal
import sys

from django.core.management.base import BaseCommand
from django.conf import settings

from awx.main.dispatch import pg_bus_conn

logger = logging.getLogger('awx.main.commands.run_ws_heartbeat')


class Command(BaseCommand):
    help = 'Launch the web server beacon (ws_heartbeat)'

    def construct_payload(self, action='online'):
        payload = {
            'hostname': settings.CLUSTER_HOST_ID,
            'ip': os.environ.get('MY_POD_IP'),
            'action': action,
        }
        return json.dumps(payload)

    def notify_listener_and_exit(self, *args):
        with pg_bus_conn(new_connection=False) as conn:
            conn.notify('web_ws_heartbeat', self.construct_payload(action='offline'))
        sys.exit(0)

    def do_heartbeat_loop(self):
        while True:
            with pg_bus_conn() as conn:
                logger.debug('Sending heartbeat')
                conn.notify('web_ws_heartbeat', self.construct_payload())
            time.sleep(settings.BROADCAST_WEBSOCKET_BEACON_FROM_WEB_RATE_SECONDS)

    def handle(self, *arg, **options):
        signal.signal(signal.SIGTERM, self.notify_listener_and_exit)
        signal.signal(signal.SIGINT, self.notify_listener_and_exit)
        # Note: We don't really try any reconnect logic to pg_notify here,
        # just let supervisor restart if we fail.
        self.do_heartbeat_loop()
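
The beacon above publishes a JSON payload on the web_ws_heartbeat channel every BROADCAST_WEBSOCKET_BEACON_FROM_WEB_RATE_SECONDS and sends an 'offline' payload on SIGTERM/SIGINT. A hedged consumer sketch, modeled on the conn.listen()/conn.events() loop the other daemons in this diff use; the function and logger names are illustrative:

# Hedged consumer sketch for the 'web_ws_heartbeat' channel, modeled on the
# listen/events loop used by the daemons shown earlier in this diff.
import json
import logging

from awx.main.dispatch import pg_bus_conn

logger = logging.getLogger('awx.main.ws_heartbeat_watcher')  # illustrative logger name

def watch_web_heartbeats():
    with pg_bus_conn() as conn:
        conn.listen('web_ws_heartbeat')
        for e in conn.events(yield_timeouts=True):
            if e is not None:
                payload = json.loads(e.payload)  # {'hostname': ..., 'ip': ..., 'action': 'online' | 'offline'}
                logger.debug('web node %s reported %s', payload.get('hostname'), payload.get('action'))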

Some files were not shown because too many files have changed in this diff.