mirror of https://github.com/ansible/awx.git (synced 2026-01-14 03:10:42 -03:30)
commit 7089c5f06e
6 changed lines: .github/ISSUE_TEMPLATE/bug_report.md (vendored)

@@ -3,6 +3,12 @@ name: "\U0001F41B Bug report"
 about: Create a report to help us improve
 ---
+<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
+
+- http://webchat.freenode.net/?channels=ansible-awx
+- https://groups.google.com/forum/#!forum/awx-project
+
+We have to limit this because of limited volunteer time to respond to issues! -->
 ##### ISSUE TYPE
 - Bug Report
6 changed lines: .github/ISSUE_TEMPLATE/feature_request.md (vendored)

@@ -3,6 +3,12 @@ name: "✨ Feature request"
 about: Suggest an idea for this project
 ---
+<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
+
+- http://webchat.freenode.net/?channels=ansible-awx
+- https://groups.google.com/forum/#!forum/awx-project
+
+We have to limit this because of limited volunteer time to respond to issues! -->
 ##### ISSUE TYPE
 - Feature Idea
6 changed lines: .gitignore (vendored)

@@ -29,8 +29,10 @@ awx/ui/client/languages
 awx/ui/templates/ui/index.html
 awx/ui/templates/ui/installing.html
 awx/ui_next/node_modules/
 awx/ui_next/src/locales/
 awx/ui_next/coverage/
 awx/ui_next/build/locales/_build
 awx/ui_next/build
 awx/ui_next/.env.local
 rsyslog.pid
 /tower-license
 /tower-license/**

@@ -139,8 +141,8 @@ use_dev_supervisor.txt

 # Ansible module tests
 /awx_collection_test_venv/
 /awx_collection/*.tar.gz
 /awx_collection/galaxy.yml
 /sanity/
 /awx_collection_build/

 .idea/*
 *.unison.tmp
42 changed lines: CHANGELOG.md

@@ -2,15 +2,53 @@

 This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.

-## 12.0.0 (TBD)
+## 14.0.0 (Aug 6, 2020)
+- As part of our commitment to inclusivity in open source, we recently took some time to audit AWX's source code and user interface and replace certain terminology with more inclusive language. Strictly speaking, this isn't a bug or a feature, but we think it's important and worth calling attention to:
+  * https://github.com/ansible/awx/commit/78229f58715fbfbf88177e54031f532543b57acc
+  * https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language
+- Installing roles and collections via requirements.yml as part of Project Updates now requires at least Ansible 2.9 - https://github.com/ansible/awx/issues/7769
+- Deprecated the use of the `PRIMARY_GALAXY_USERNAME` and `PRIMARY_GALAXY_PASSWORD` settings. We recommend using tokens to access Galaxy or Automation Hub.
+- Added local caching for downloaded roles and collections so they are not re-downloaded on nodes where they are up to date with the project - https://github.com/ansible/awx/issues/5518
+- Added the ability to associate K8S/OpenShift credentials to Job Templates for playbook interaction with the `community.kubernetes` collection - https://github.com/ansible/awx/issues/5735
+- Added the ability to include HTML in the Custom Login Info presented on the login page - https://github.com/ansible/awx/issues/7600
+- Fixed https://access.redhat.com/security/cve/cve-2020-14327 - Server-side request forgery on credentials
+- Fixed https://access.redhat.com/security/cve/cve-2020-14328 - Server-side request forgery on webhooks
+- Fixed https://access.redhat.com/security/cve/cve-2020-14329 - Sensitive data exposure on labels
+- Fixed https://access.redhat.com/security/cve/cve-2020-14337 - Named URLs allow for testing the presence or absence of objects
+- Fixed a number of bugs in the user interface related to an upgrade of jQuery:
+  * https://github.com/ansible/awx/issues/7530
+  * https://github.com/ansible/awx/issues/7546
+  * https://github.com/ansible/awx/issues/7534
+  * https://github.com/ansible/awx/issues/7606
+- Fixed a bug that caused the `-f yaml` flag of the AWX CLI to not print properly formatted YAML - https://github.com/ansible/awx/issues/7795
+- Fixed a bug in the installer that caused errors when `docker_registry_password` was set - https://github.com/ansible/awx/issues/7695
+- Fixed a permissions error that prevented certain users from starting AWX services - https://github.com/ansible/awx/issues/7545
+- Fixed a bug that allowed superusers to run unsafe Jinja code when defining custom Credential Types - https://github.com/ansible/awx/pull/7584/
+- Fixed a bug that prevented users from creating (or editing) custom Credential Types containing boolean fields - https://github.com/ansible/awx/issues/7483
+- Fixed a bug that prevented users with postgres usernames containing uppercase letters from restoring backups successfully - https://github.com/ansible/awx/pull/7519
+- Fixed a bug which allowed the creation (in the Tower API) of Groups and Hosts with the same name - https://github.com/ansible/awx/issues/4680

 ## 13.0.0 (Jun 23, 2020)
 - Added import and export commands to the official AWX CLI, replacing send and receive from the old tower-cli (https://github.com/ansible/awx/pull/6125).
 - Removed scripts as a means of running inventory updates of built-in types (https://github.com/ansible/awx/pull/6911)
 - Ansible 2.8 is now partially unsupported; some inventory source types are known to no longer work.
 - Fixed an issue where the vmware inventory source ssl_verify source variable was not recognized (https://github.com/ansible/awx/pull/7360)
 - Fixed a bug that caused redis' listen socket to have too-permissive file permissions (https://github.com/ansible/awx/pull/7317)
 - Fixed a bug that caused rsyslogd's configuration file to have world-readable file permissions, potentially leaking secrets (CVE-2020-10782)

 ## 12.0.0 (Jun 9, 2020)
 - Removed memcached as a dependency of AWX (https://github.com/ansible/awx/pull/7240)
 - Moved to a single container image build instead of separate awx_web and awx_task images. The container image is just `awx` (https://github.com/ansible/awx/pull/7228)
 - Official AWX container image builds now use a two-stage container build process that notably reduces the size of our published images (https://github.com/ansible/awx/pull/7017)
 - Removed support for HipChat notifications ([EoL announcement](https://www.atlassian.com/partnerships/slack/faq#faq-98b17ca3-247f-423b-9a78-70a91681eff0)); all previously-created HipChat notification templates will be deleted due to this removal.
-- Fixed a bug which broke AWX installations with oc version 4.3 (https://github.com/ansible/awx/pull/6948/files)
+- Fixed a bug which broke AWX installations with oc version 4.3 (https://github.com/ansible/awx/pull/6948/)
 - Fixed a performance issue that caused notable delay of stdout processing for playbooks run against large numbers of hosts (https://github.com/ansible/awx/issues/6991)
 - Fixed a bug that caused the CyberArk AIM credential plugin to hang forever in some environments (https://github.com/ansible/awx/issues/6986)
 - Fixed a bug that caused ANY/ALL coverage settings not to properly save when editing approval nodes in the UI (https://github.com/ansible/awx/issues/6998)
 - Fixed a bug that broke support for the satellite6_group_prefix source variable (https://github.com/ansible/awx/issues/7031)
 - Fixed a bug that prevented changes to workflow node convergence settings when approval nodes were in use (https://github.com/ansible/awx/issues/7063)
 - Fixed a bug that caused notifications to fail on newer versions of Mattermost (https://github.com/ansible/awx/issues/7264)
 - Fixed a bug (by upgrading to 0.8.1 of the foreman collection) that prevented host_filters from working properly with Foreman-based inventory (https://github.com/ansible/awx/issues/7225)
 - Fixed a bug that prevented the usage of the Conjur credential plugin with secrets that contain spaces (https://github.com/ansible/awx/issues/7191)
 - Fixed a bug in `awx-manage run_wsbroadcast --status` in kubernetes (https://github.com/ansible/awx/pull/7009)
 - Fixed a bug that broke notification toggles for system jobs in the UI (https://github.com/ansible/awx/pull/7042)
@@ -157,8 +157,7 @@ If you start a second terminal session, you can take a look at the running containers:

 $ docker ps
 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
 44251b476f98 gcr.io/ansible-tower-engineering/awx_devel:devel "/entrypoint.sh /bin…" 27 seconds ago Up 23 seconds 0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp tools_awx_run_9e820694d57e
-b049a43817b4 memcached:alpine "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:11211->11211/tcp tools_memcached_1
-40de380e3c2e redis:latest "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:6379->6379/tcp tools_redis_1
+40de380e3c2e redis:latest "docker-entrypoint.s…" 28 seconds ago Up 26 seconds
 b66a506d3007 postgres:10 "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:5432->5432/tcp tools_postgres_1
 ```

**NOTE**
20 changed lines: INSTALL.md

@@ -43,7 +43,7 @@ This document provides a guide for installing AWX.
 - [Installing the AWX CLI](#installing-the-awx-cli)
   * [Building the CLI Documentation](#building-the-cli-documentation)

 ## Getting started

 ### Clone the repo

@@ -351,7 +351,7 @@ Once you access the AWX server, you will be prompted with a login dialog. The de
 A Kubernetes deployment will require you to have access to a Kubernetes cluster as well as the following tools:

 - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
-- [helm](https://docs.helm.sh/using_helm/#quickstart-guide)
+- [helm](https://helm.sh/docs/intro/quickstart/)

 The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.

@@ -382,9 +382,11 @@ Before starting the install process, review the [inventory](./installer/inventor

 ### Configuring Helm

-If you want the AWX installer to manage creating the database pod (rather than installing and configuring postgres on your own), you will need a working `helm` installation; you can find details here: [https://docs.helm.sh/using_helm/#quickstart-guide](https://docs.helm.sh/using_helm/#quickstart-guide).
+If you want the AWX installer to manage creating the database pod (rather than installing and configuring postgres on your own), you will need a working `helm` installation; you can find details here: [https://helm.sh/docs/intro/quickstart/](https://helm.sh/docs/intro/quickstart/).
+
+You do not need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html) as Helm does it for you. However, an existing one may be used by setting the `pg_persistence_existingclaim` variable.

-Newer Kubernetes clusters with RBAC enabled will need to make sure a service account is created; make sure to follow the instructions here: [https://docs.helm.sh/using_helm/#role-based-access-control](https://docs.helm.sh/using_helm/#role-based-access-control)
+Newer Kubernetes clusters with RBAC enabled will need to make sure a service account is created; make sure to follow the instructions here: [https://helm.sh/docs/topics/rbac/](https://helm.sh/docs/topics/rbac/)

 ### Run the installer

@@ -575,7 +577,7 @@ If you're deploying using Docker Compose, container names will be prefixed by th
 Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT by running the following:

 ```bash
-# Tail the the awx_task log
+# Tail the awx_task log
 $ docker logs -f awx_task
 ```

@@ -651,16 +653,14 @@ Potential uses include:
 * Checking on the status and output of job runs
 * Managing objects like organizations, users, teams, etc...

-The preferred way to install the AWX CLI is through pip directly from GitHub:
+The preferred way to install the AWX CLI is through pip directly from PyPI:

-    pip install "https://github.com/ansible/awx/archive/$VERSION.tar.gz#egg=awxkit&subdirectory=awxkit"
+    pip3 install awxkit
+    awx --help

-...where ``$VERSION`` is the version of AWX you're running. To see a list of all available releases, visit: https://github.com/ansible/awx/releases

 ## Building the CLI Documentation

-To build the docs, spin up a real AWX server, `pip install sphinx sphinxcontrib-autoprogram`, and run:
+To build the docs, spin up a real AWX server, `pip3 install sphinx sphinxcontrib-autoprogram`, and run:

     ~ TOWER_HOST=https://awx.example.org TOWER_USERNAME=example TOWER_PASSWORD=secret make clean html
     ~ cd build/html/ && python -m http.server
@@ -31,7 +31,7 @@ If your issue isn't considered high priority, then please be patient as it may t

 `state:needs_info` The issue needs more information. This could be more debug output or more specifics about the system, such as version information: any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.

-`state:needs_review` The the issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familar with an area of the code base the issue is for.
+`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familiar with an area of the code base the issue is for.

 `state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.
@@ -6,6 +6,8 @@ recursive-include awx/templates *.html
 recursive-include awx/api/templates *.md *.html
 recursive-include awx/ui/templates *.html
 recursive-include awx/ui/static *
-recursive-include awx/ui_next/build *.html
+recursive-include awx/ui_next/build *
 recursive-include awx/playbooks *.yml
 recursive-include awx/lib/site-packages *
 recursive-include awx/plugins *.ps1
46 changed lines: Makefile

@@ -79,6 +79,7 @@ clean-ui: clean-languages
	rm -rf awx/ui/test/e2e/reports/
	rm -rf awx/ui/client/languages/
	rm -rf awx/ui_next/node_modules/
+	rm -rf node_modules
	rm -rf awx/ui_next/coverage/
	rm -rf awx/ui_next/build/locales/_build/
	rm -f $(UI_DEPS_FLAG_FILE)
@@ -362,13 +363,13 @@ TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/ss

 # Run all API unit tests.
 test:
-	@if [ "$(VENV_BASE)" ]; then \
+	if [ "$(VENV_BASE)" ]; then \
		. $(VENV_BASE)/awx/bin/activate; \
	fi; \
	PYTHONDONTWRITEBYTECODE=1 py.test -p no:cacheprovider -n auto $(TEST_DIRS)
	cmp VERSION awxkit/VERSION || "VERSION and awxkit/VERSION *must* match"
-	cd awxkit && $(VENV_BASE)/awx/bin/tox -re py2,py3
-	awx-manage check_migrations --dry-run --check -n 'vNNN_missing_migration_file'
+	cd awxkit && $(VENV_BASE)/awx/bin/tox -re py3
+	awx-manage check_migrations --dry-run --check -n 'missing_migration_file'

 COLLECTION_TEST_DIRS ?= awx_collection/test/awx
 COLLECTION_TEST_TARGET ?=
@@ -377,10 +378,11 @@ COLLECTION_NAMESPACE ?= awx
 COLLECTION_INSTALL = ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE)/$(COLLECTION_PACKAGE)

 test_collection:
-	@if [ "$(VENV_BASE)" ]; then \
+	rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
+	if [ "$(VENV_BASE)" ]; then \
		. $(VENV_BASE)/awx/bin/activate; \
	fi; \
-	PYTHONPATH=$(PYTHONPATH):$(VENV_BASE)/awx/lib/python3.6/site-packages:/usr/lib/python3.6/site-packages py.test $(COLLECTION_TEST_DIRS)
+	py.test $(COLLECTION_TEST_DIRS) -v
 # The python path needs to be modified so that the tests can find Ansible within the container
 # First we will use anything explicitly set as PYTHONPATH
 # Second we will load any libraries out of the virtualenv (if it's unspecified that should be ok because python should not load out of an empty directory)
@@ -400,11 +402,11 @@ symlink_collection:

 build_collection:
	ansible-playbook -i localhost, awx_collection/tools/template_galaxy.yml -e collection_package=$(COLLECTION_PACKAGE) -e collection_namespace=$(COLLECTION_NAMESPACE) -e collection_version=$(VERSION) -e '{"awx_template_version":false}'
-	ansible-galaxy collection build awx_collection --force --output-path=awx_collection
+	ansible-galaxy collection build awx_collection_build --force --output-path=awx_collection_build

 install_collection: build_collection
	rm -rf $(COLLECTION_INSTALL)
-	ansible-galaxy collection install awx_collection/$(COLLECTION_NAMESPACE)-$(COLLECTION_PACKAGE)-$(VERSION).tar.gz
+	ansible-galaxy collection install awx_collection_build/$(COLLECTION_NAMESPACE)-$(COLLECTION_PACKAGE)-$(VERSION).tar.gz

 test_collection_sanity: install_collection
	cd $(COLLECTION_INSTALL) && ansible-test sanity
@@ -567,14 +569,28 @@ ui-zuul-lint-and-test:
 # UI NEXT TASKS
 # --------------------------------------

 ui-next-lint:
 awx/ui_next/node_modules:
	$(NPM_BIN) --prefix awx/ui_next install
	$(NPM_BIN) run --prefix awx/ui_next lint
	$(NPM_BIN) run --prefix awx/ui_next prettier-check

 ui-next-test:
	$(NPM_BIN) --prefix awx/ui_next install
	$(NPM_BIN) run --prefix awx/ui_next test
 ui-release-next:
	mkdir -p awx/ui_next/build/static
	touch awx/ui_next/build/static/.placeholder

 ui-devel-next: awx/ui_next/node_modules
	$(NPM_BIN) --prefix awx/ui_next run extract-strings
	$(NPM_BIN) --prefix awx/ui_next run compile-strings
	$(NPM_BIN) --prefix awx/ui_next run build
	mkdir -p awx/public/static/css
	mkdir -p awx/public/static/js
	mkdir -p awx/public/static/media
	cp -r awx/ui_next/build/static/css/* awx/public/static/css
	cp -r awx/ui_next/build/static/js/* awx/public/static/js
	cp -r awx/ui_next/build/static/media/* awx/public/static/media

 clean-ui-next:
	rm -rf node_modules
	rm -rf awx/ui_next/node_modules
	rm -rf awx/ui_next/build

 ui-next-zuul-lint-and-test:
	$(NPM_BIN) --prefix awx/ui_next install
@@ -593,10 +609,10 @@ dev_build:

 release_build:
	$(PYTHON) setup.py release_build

-dist/$(SDIST_TAR_FILE): ui-release VERSION
+dist/$(SDIST_TAR_FILE): ui-release ui-release-next VERSION
	$(PYTHON) setup.py $(SDIST_COMMAND)

-dist/$(WHEEL_FILE): ui-release
+dist/$(WHEEL_FILE): ui-release ui-release-next
	$(PYTHON) setup.py $(WHEEL_COMMAND)

 sdist: dist/$(SDIST_TAR_FILE)
@@ -146,7 +146,7 @@ class FieldLookupBackend(BaseFilterBackend):

     # A list of fields that we know can be filtered on without the possibility
     # of introducing duplicates
-    NO_DUPLICATES_WHITELIST = (CharField, IntegerField, BooleanField)
+    NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField)

     def get_fields_from_lookup(self, model, lookup):

@@ -205,7 +205,7 @@ class FieldLookupBackend(BaseFilterBackend):
         field_list, new_lookup = self.get_fields_from_lookup(model, lookup)
         field = field_list[-1]

-        needs_distinct = (not all(isinstance(f, self.NO_DUPLICATES_WHITELIST) for f in field_list))
+        needs_distinct = (not all(isinstance(f, self.NO_DUPLICATES_ALLOW_LIST) for f in field_list))

         # Type names are stored without underscores internally, but are presented and
         # serialized over the API containing underscores so we remove `_`

@@ -257,6 +257,11 @@ class FieldLookupBackend(BaseFilterBackend):
             if key in self.RESERVED_NAMES:
                 continue

+            # HACK: make `created` available via API for the Django User ORM model
+            # so it keeps compatibility with other objects which expose the `created` attr.
+            if queryset.model._meta.object_name == 'User' and key.startswith('created'):
+                key = key.replace('created', 'date_joined')
+
             # HACK: Make job event filtering by host name mostly work even
             # when not capturing job event hosts M2M.
             if queryset.model._meta.object_name == 'JobEvent' and key.startswith('hosts__name'):
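The renamed `NO_DUPLICATES_ALLOW_LIST` check above decides when a filtered queryset needs `DISTINCT`: traversing a multi-valued relation in a JOIN can duplicate rows, so only fields known to be duplicate-free skip it. A minimal standalone sketch of that rule (plain Python, no Django; the stub field classes are hypothetical stand-ins for Django's):

```python
# Stub field classes standing in for Django's field types; ManyToManyField
# represents a traversal that can duplicate rows in a JOIN.
class CharField: pass
class IntegerField: pass
class BooleanField: pass
class ManyToManyField: pass

# Fields known to be filterable without introducing duplicate rows.
NO_DUPLICATES_ALLOW_LIST = (CharField, IntegerField, BooleanField)

def needs_distinct(field_list):
    # DISTINCT is required if any traversed field is not on the allow list.
    return not all(isinstance(f, NO_DUPLICATES_ALLOW_LIST) for f in field_list)

print(needs_distinct([CharField(), IntegerField()]))     # → False
print(needs_distinct([ManyToManyField(), CharField()]))  # → True
```

The same shape as the real check: a single `all()` over the lookup's field chain, negated.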
@@ -51,6 +51,7 @@ from awx.main.utils import (
     StubLicense
 )
 from awx.main.utils.db import get_all_field_names
+from awx.main.views import ApiErrorView
 from awx.api.serializers import ResourceAccessListElementSerializer, CopySerializer, UserSerializer
 from awx.api.versioning import URLPathVersioning
 from awx.api.metadata import SublistAttachDetatchMetadata, Metadata

@@ -159,11 +160,11 @@ class APIView(views.APIView):
         self.queries_before = len(connection.queries)

         # If there are any custom headers in REMOTE_HOST_HEADERS, make sure
-        # they respect the proxy whitelist
+        # they respect the allowed proxy list
         if all([
-            settings.PROXY_IP_WHITELIST,
-            request.environ.get('REMOTE_ADDR') not in settings.PROXY_IP_WHITELIST,
-            request.environ.get('REMOTE_HOST') not in settings.PROXY_IP_WHITELIST
+            settings.PROXY_IP_ALLOWED_LIST,
+            request.environ.get('REMOTE_ADDR') not in settings.PROXY_IP_ALLOWED_LIST,
+            request.environ.get('REMOTE_HOST') not in settings.PROXY_IP_ALLOWED_LIST
         ]):
             for custom_header in settings.REMOTE_HOST_HEADERS:
                 if custom_header.startswith('HTTP_'):

@@ -188,6 +189,29 @@ class APIView(views.APIView):
         '''
         Log warning for 400 requests. Add header with elapsed time.
         '''
+        #
+        # If the URL was rewritten, and we get a 404, we should entirely
+        # replace the view in the request context with an ApiErrorView()
+        # Without this change, there will be subtle differences in the BrowsableAPIRenderer
+        #
+        # These differences could provide contextual clues which would allow
+        # anonymous users to determine if usernames were valid or not
+        # (e.g., if an anonymous user visited `/api/v2/users/valid/`, and got a 404,
+        # but also saw that the page heading said "User Detail", they might notice
+        # that's a difference in behavior from a request to `/api/v2/users/not-valid/`, which
+        # would show a page header of "Not Found"). Changing the view here
+        # guarantees that the rendered response will look exactly like the response
+        # when you visit a URL that has no matching URL paths in `awx.api.urls`.
+        #
+        if response.status_code == 404 and 'awx.named_url_rewritten' in request.environ:
+            self.headers.pop('Allow', None)
+            response = super(APIView, self).finalize_response(request, response, *args, **kwargs)
+            view = ApiErrorView()
+            setattr(view, 'request', request)
+            response.renderer_context['view'] = view
+            return response

         if response.status_code >= 400:
             status_msg = "status %s received by user %s attempting to access %s from %s" % \
                 (response.status_code, request.user, request.path, request.META.get('REMOTE_ADDR', None))
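The proxy check above trusts forwarded-host headers only when the connection arrives from an address on the allowed proxy list; otherwise those headers are attacker-controlled and must be dropped. A standalone sketch of that scrubbing rule (the list contents and header names here are hypothetical examples, not AWX defaults):

```python
# Hypothetical trusted proxy and header configuration for illustration.
PROXY_IP_ALLOWED_LIST = ['10.0.0.1']
REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR']

def scrub_untrusted_headers(environ):
    # If the peer is not a trusted proxy, remove any custom HTTP_ headers
    # it may have injected; non-HTTP_ keys (set by the server) are kept.
    if PROXY_IP_ALLOWED_LIST and \
            environ.get('REMOTE_ADDR') not in PROXY_IP_ALLOWED_LIST and \
            environ.get('REMOTE_HOST') not in PROXY_IP_ALLOWED_LIST:
        for header in REMOTE_HOST_HEADERS:
            if header.startswith('HTTP_'):
                environ.pop(header, None)
    return environ

env = {'REMOTE_ADDR': '198.51.100.7', 'HTTP_X_FORWARDED_FOR': '1.2.3.4'}
print(scrub_untrusted_headers(env))  # → {'REMOTE_ADDR': '198.51.100.7'}
```

The design point is that the allow list gates *trust in headers*, not access: untrusted requests still proceed, but only with connection-level facts.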
@@ -837,7 +861,7 @@ class CopyAPIView(GenericAPIView):

     @staticmethod
     def _decrypt_model_field_if_needed(obj, field_name, field_val):
-        if field_name in getattr(type(obj), 'REENCRYPTION_BLACKLIST_AT_COPY', []):
+        if field_name in getattr(type(obj), 'REENCRYPTION_BLOCKLIST_AT_COPY', []):
             return field_val
         if isinstance(obj, Credential) and field_name == 'inputs':
             for secret in obj.credential_type.secret_fields:

@@ -883,7 +907,7 @@ class CopyAPIView(GenericAPIView):
             field_val = getattr(obj, field.name)
         except AttributeError:
             continue
-        # Adjust copy blacklist fields here.
+        # Adjust copy blocked fields here.
         if field.name in fields_to_discard or field.name in [
             'id', 'pk', 'polymorphic_ctype', 'unifiedjobtemplate_ptr', 'created_by', 'modified_by'
         ] or field.name.endswith('_role'):

@@ -980,7 +1004,7 @@ class CopyAPIView(GenericAPIView):
         if hasattr(new_obj, 'admin_role') and request.user not in new_obj.admin_role.members.all():
             new_obj.admin_role.members.add(request.user)
         if sub_objs:
-            # store the copied object dict into memcached, because it's
+            # store the copied object dict into cache, because it's
             # often too large for postgres' notification bus
             # (which has a default maximum message size of 8k)
             key = 'deep-copy-{}'.format(str(uuid.uuid4()))
@@ -126,7 +126,7 @@ SUMMARIZABLE_FK_FIELDS = {
     'current_job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'license_error'),
     'inventory_source': ('source', 'last_updated', 'status'),
     'custom_inventory_script': DEFAULT_SUMMARY_FIELDS,
-    'source_script': ('name', 'description'),
+    'source_script': DEFAULT_SUMMARY_FIELDS,
     'role': ('id', 'role_field'),
     'notification_template': DEFAULT_SUMMARY_FIELDS,
     'instance_group': ('id', 'name', 'controller_id', 'is_containerized'),
@@ -1697,6 +1697,7 @@ class HostSerializer(BaseSerializerWithVariables):
         d.setdefault('recent_jobs', [{
             'id': j.job.id,
+            'name': j.job.job_template.name if j.job.job_template is not None else "",
             'type': j.job.job_type_name,
             'status': j.job.status,
             'finished': j.job.finished,
         } for j in obj.job_host_summaries.select_related('job__job_template').order_by('-created')[:5]])

@@ -1731,6 +1732,7 @@ class HostSerializer(BaseSerializerWithVariables):

     def validate(self, attrs):
         name = force_text(attrs.get('name', self.instance and self.instance.name or ''))
+        inventory = attrs.get('inventory', self.instance and self.instance.inventory or '')
         host, port = self._get_host_port_from_name(name)

         if port:
@@ -1739,7 +1741,9 @@ class HostSerializer(BaseSerializerWithVariables):
             vars_dict = parse_yaml_or_json(variables)
             vars_dict['ansible_ssh_port'] = port
             attrs['variables'] = json.dumps(vars_dict)

+        if Group.objects.filter(name=name, inventory=inventory).exists():
+            raise serializers.ValidationError(_('A Group with that name already exists.'))
+
         return super(HostSerializer, self).validate(attrs)

     def to_representation(self, obj):

@@ -1805,6 +1809,13 @@ class GroupSerializer(BaseSerializerWithVariables):
         res['inventory'] = self.reverse('api:inventory_detail', kwargs={'pk': obj.inventory.pk})
         return res

+    def validate(self, attrs):
+        name = force_text(attrs.get('name', self.instance and self.instance.name or ''))
+        inventory = attrs.get('inventory', self.instance and self.instance.inventory or '')
+        if Host.objects.filter(name=name, inventory=inventory).exists():
+            raise serializers.ValidationError(_('A Host with that name already exists.'))
+        return super(GroupSerializer, self).validate(attrs)
+
     def validate_name(self, value):
         if value in ('all', '_meta'):
             raise serializers.ValidationError(_('Invalid group name.'))
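The two symmetric `validate` additions above enforce the changelog's "no Group and Host with the same name" rule within an inventory: each side checks the other model before saving. A standalone sketch of the idea, with in-memory sets standing in for the hypothetical ORM lookups:

```python
# Hypothetical stand-ins for the ORM existence queries: (name, inventory) pairs.
existing_groups = {('webservers', 'inv1')}
existing_hosts = {('db01', 'inv1')}

def validate_host(name, inventory):
    # A new Host may not reuse a Group's name within the same inventory.
    if (name, inventory) in existing_groups:
        raise ValueError('A Group with that name already exists.')

def validate_group(name, inventory):
    # And vice versa: a new Group may not reuse a Host's name.
    if (name, inventory) in existing_hosts:
        raise ValueError('A Host with that name already exists.')

validate_host('db01', 'inv2')        # fine: different inventory
try:
    validate_group('db01', 'inv1')   # rejected: a host named db01 exists there
except ValueError as e:
    print(e)                         # → A Host with that name already exists.
```

Note the check is scoped per inventory, so the same name can still exist in different inventories.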
@@ -1936,7 +1947,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
     def validate_source_vars(self, value):
         ret = vars_validate_or_raise(value)
         for env_k in parse_yaml_or_json(value):
-            if env_k in settings.INV_ENV_VARIABLE_BLACKLIST:
+            if env_k in settings.INV_ENV_VARIABLE_BLOCKED:
                 raise serializers.ValidationError(_("`{}` is a prohibited environment variable".format(env_k)))
         return ret
@@ -2644,9 +2655,17 @@ class CredentialSerializerCreate(CredentialSerializer):
                 owner_fields.add(field)
             else:
                 attrs.pop(field)

         if not owner_fields:
             raise serializers.ValidationError({"detail": _("Missing 'user', 'team', or 'organization'.")})

+        if len(owner_fields) > 1:
+            received = ", ".join(sorted(owner_fields))
+            raise serializers.ValidationError({"detail": _(
+                "Only one of 'user', 'team', or 'organization' should be provided, "
+                "received {} fields.".format(received)
+            )})
+
         if attrs.get('team'):
             attrs['organization'] = attrs['team'].organization
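The new branch above makes the three owner fields mutually exclusive: zero owners was already rejected, and now more than one is too. A minimal standalone sketch of the same exactly-one rule (plain `ValueError` in place of DRF's `ValidationError`):

```python
def validate_owner_fields(attrs):
    # Collect which of the three owner fields were actually supplied.
    owner_fields = {f for f in ('user', 'team', 'organization') if attrs.get(f)}
    if not owner_fields:
        raise ValueError("Missing 'user', 'team', or 'organization'.")
    if len(owner_fields) > 1:
        # Sorting makes the error message deterministic regardless of input order.
        received = ", ".join(sorted(owner_fields))
        raise ValueError(
            "Only one of 'user', 'team', or 'organization' should be provided, "
            "received {} fields.".format(received)
        )
    return owner_fields

print(validate_owner_fields({'user': 'alice'}))  # → {'user'}
try:
    validate_owner_fields({'user': 'alice', 'team': 'ops'})
except ValueError as e:
    print(e)
```

Sorting the offending field names keeps the error text stable, which matters for API clients and tests that match on messages.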
@@ -2823,7 +2842,7 @@ class JobTemplateMixin(object):
         return [{
             'id': x.id, 'status': x.status, 'finished': x.finished, 'canceled_on': x.canceled_on,
             # Make type consistent with API top-level key, for instance workflow_job
-            'type': x.get_real_instance_class()._meta.verbose_name.replace(' ', '_')
+            'type': x.job_type_name
         } for x in optimized_qs[:10]]

     def get_summary_fields(self, obj):
@@ -3600,7 +3619,7 @@ class LaunchConfigurationBaseSerializer(BaseSerializer):
         ujt = self.instance.unified_job_template
         if ujt is None:
             ret = {}
-            for fd in ('workflow_job_template', 'identifier'):
+            for fd in ('workflow_job_template', 'identifier', 'all_parents_must_converge'):
                 if fd in attrs:
                     ret[fd] = attrs[fd]
             return ret
@@ -4081,7 +4100,8 @@ class JobLaunchSerializer(BaseSerializer):
             errors.setdefault('credentials', []).append(_(
                 'Cannot assign multiple {} credentials.'
             ).format(cred.unique_hash(display=True)))
-        if cred.credential_type.kind not in ('ssh', 'vault', 'cloud', 'net'):
+        if cred.credential_type.kind not in ('ssh', 'vault', 'cloud',
+                                             'net', 'kubernetes'):
             errors.setdefault('credentials', []).append(_(
                 'Cannot assign a Credential of kind `{}`'
             ).format(cred.credential_type.kind))
@@ -4645,6 +4665,8 @@ class InstanceSerializer(BaseSerializer):

 class InstanceGroupSerializer(BaseSerializer):

+    show_capabilities = ['edit', 'delete']
+
     committed_capacity = serializers.SerializerMethodField()
     consumed_capacity = serializers.SerializerMethodField()
     percent_capacity_remaining = serializers.SerializerMethodField()
@@ -14,6 +14,8 @@ import time
 from base64 import b64encode
 from collections import OrderedDict

+from urllib3.exceptions import ConnectTimeoutError
+
 # Django
 from django.conf import settings
@ -171,6 +173,15 @@ def api_exception_handler(exc, context):
|
||||
exc = ParseError(exc.args[0])
|
||||
if isinstance(context['view'], UnifiedJobStdout):
|
||||
context['view'].renderer_classes = [renderers.BrowsableAPIRenderer, JSONRenderer]
|
||||
if isinstance(exc, APIException):
|
||||
req = context['request']._request
|
||||
if 'awx.named_url_rewritten' in req.environ and not str(getattr(exc, 'status_code', 0)).startswith('2'):
|
||||
# if the URL was rewritten, and it's not a 2xx level status code,
|
||||
# revert the request.path to its original value to avoid leaking
|
||||
# any context about the existance of resources
|
||||
req.path = req.environ['awx.named_url_rewritten']
|
||||
if exc.status_code == 403:
|
||||
exc = NotFound(detail=_('Not found.'))
|
||||
return exception_handler(exc, context)
|
||||
|
||||
|
||||
@@ -1397,10 +1408,18 @@ class CredentialExternalTest(SubDetailAPIView):
             obj.credential_type.plugin.backend(**backend_kwargs)
             return Response({}, status=status.HTTP_202_ACCEPTED)
         except requests.exceptions.HTTPError as exc:
-            message = 'HTTP {}\n{}'.format(exc.response.status_code, exc.response.text)
+            message = 'HTTP {}'.format(exc.response.status_code)
             return Response({'inputs': message}, status=status.HTTP_400_BAD_REQUEST)
         except Exception as exc:
-            return Response({'inputs': str(exc)}, status=status.HTTP_400_BAD_REQUEST)
+            message = exc.__class__.__name__
+            args = getattr(exc, 'args', [])
+            for a in args:
+                if isinstance(
+                    getattr(a, 'reason', None),
+                    ConnectTimeoutError
+                ):
+                    message = str(a.reason)
+            return Response({'inputs': message}, status=status.HTTP_400_BAD_REQUEST)


 class CredentialInputSourceDetail(RetrieveUpdateDestroyAPIView):
@@ -1449,10 +1468,18 @@ class CredentialTypeExternalTest(SubDetailAPIView):
             obj.plugin.backend(**backend_kwargs)
             return Response({}, status=status.HTTP_202_ACCEPTED)
         except requests.exceptions.HTTPError as exc:
-            message = 'HTTP {}\n{}'.format(exc.response.status_code, exc.response.text)
+            message = 'HTTP {}'.format(exc.response.status_code)
             return Response({'inputs': message}, status=status.HTTP_400_BAD_REQUEST)
         except Exception as exc:
-            return Response({'inputs': str(exc)}, status=status.HTTP_400_BAD_REQUEST)
+            message = exc.__class__.__name__
+            args = getattr(exc, 'args', [])
+            for a in args:
+                if isinstance(
+                    getattr(a, 'reason', None),
+                    ConnectTimeoutError
+                ):
+                    message = str(a.reason)
+            return Response({'inputs': message}, status=status.HTTP_400_BAD_REQUEST)


 class HostRelatedSearchMixin(object):
@@ -2657,7 +2684,7 @@ class JobTemplateCredentialsList(SubListCreateAttachDetachAPIView):
                 return {"error": _("Cannot assign multiple {credential_type} credentials.").format(
                     credential_type=sub.unique_hash(display=True))}
             kind = sub.credential_type.kind
-            if kind not in ('ssh', 'vault', 'cloud', 'net'):
+            if kind not in ('ssh', 'vault', 'cloud', 'net', 'kubernetes'):
                 return {'error': _('Cannot assign a Credential of kind `{}`.').format(kind)}

         return super(JobTemplateCredentialsList, self).is_valid_relation(parent, sub, created)
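The new exception handling above walks `exc.args` looking for a wrapped `ConnectTimeoutError` so the API can surface a readable timeout message instead of a raw exception dump. A stand-alone sketch of that unwrapping, using local stub classes in place of `urllib3.exceptions.ConnectTimeoutError` and the retry object `requests` stores in `exc.args` (both stubs are assumptions for illustration):

```python
class ConnectTimeoutError(Exception):
    """Stub standing in for urllib3.exceptions.ConnectTimeoutError."""


class WrappedReason:
    """Stub for an arg object carrying a .reason attribute, as urllib3 retries do."""
    def __init__(self, reason):
        self.reason = reason


def friendly_message(exc):
    # Default to the exception class name, as the view code above does
    message = exc.__class__.__name__
    for a in getattr(exc, 'args', []):
        if isinstance(getattr(a, 'reason', None), ConnectTimeoutError):
            message = str(a.reason)
    return message


timeout = ConnectTimeoutError('Connection to vault.example.org timed out')
print(friendly_message(Exception(WrappedReason(timeout))))  # the timeout's text
print(friendly_message(ValueError('boom')))                 # falls back to 'ValueError'
```

The fallback to the class name means even unexpected exceptions produce a short, non-leaky message for the `inputs` error field.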
awx/conf/migrations/0007_v380_rename_more_settings.py (new file, 19 lines)
@@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations
from awx.conf.migrations import _rename_setting


def copy_allowed_ips(apps, schema_editor):
    _rename_setting.rename_setting(apps, schema_editor, old_key='PROXY_IP_WHITELIST', new_key='PROXY_IP_ALLOWED_LIST')


class Migration(migrations.Migration):

    dependencies = [
        ('conf', '0006_v331_ldap_group_type'),
    ]

    operations = [
        migrations.RunPython(copy_allowed_ips),
    ]
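The body of `_rename_setting.rename_setting` is not shown in this diff; conceptually it moves a stored value from the old key to the new one. A rough pure-Python sketch of that rename logic over a plain dict (the real helper operates on the `Setting` model via `apps.get_model`, so this function and its no-clobber behavior are illustrative assumptions):

```python
def rename_setting(settings_store, old_key, new_key):
    """Move a stored value from old_key to new_key, preferring an
    existing new_key value if both are present."""
    if old_key in settings_store:
        old_value = settings_store.pop(old_key)
        # don't clobber a value that was already written under the new name
        settings_store.setdefault(new_key, old_value)
    return settings_store


store = {'PROXY_IP_WHITELIST': ['10.0.0.1'], 'OTHER': True}
rename_setting(store, 'PROXY_IP_WHITELIST', 'PROXY_IP_ALLOWED_LIST')
print(store)  # {'OTHER': True, 'PROXY_IP_ALLOWED_LIST': ['10.0.0.1']}
```

Running the rename in a data migration (rather than renaming in place) keeps existing databases working across the upgrade.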
@@ -31,18 +31,18 @@ logger = logging.getLogger('awx.conf.settings')
 # Store a special value to indicate when a setting is not set in the database.
 SETTING_CACHE_NOTSET = '___notset___'

-# Cannot store None in memcached; use a special value instead to indicate None.
+# Cannot store None in cache; use a special value instead to indicate None.
 # If the special value for None is the same as the "not set" value, then a value
 # of None will be equivalent to the setting not being set (and will raise an
 # AttributeError if there is no other default defined).
 # SETTING_CACHE_NONE = '___none___'
 SETTING_CACHE_NONE = SETTING_CACHE_NOTSET

-# Cannot store empty list/tuple in memcached; use a special value instead to
+# Cannot store empty list/tuple in cache; use a special value instead to
 # indicate an empty list.
 SETTING_CACHE_EMPTY_LIST = '___[]___'

-# Cannot store empty dict in memcached; use a special value instead to indicate
+# Cannot store empty dict in cache; use a special value instead to indicate
 # an empty dict.
 SETTING_CACHE_EMPTY_DICT = '___{}___'
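The sentinel constants above let falsy values round-trip through a cache backend that cannot distinguish "missing" from `None` or from empty containers. A minimal sketch of how such sentinels might be applied on the way in and out of the cache (the `encode_for_cache`/`decode_from_cache` names are illustrative, not the actual SettingsWrapper API):

```python
SETTING_CACHE_NOTSET = '___notset___'
SETTING_CACHE_NONE = SETTING_CACHE_NOTSET
SETTING_CACHE_EMPTY_LIST = '___[]___'
SETTING_CACHE_EMPTY_DICT = '___{}___'


def encode_for_cache(value):
    # Replace values the cache can't represent faithfully with sentinels
    if value is None:
        return SETTING_CACHE_NONE
    if value in ([], ()):
        return SETTING_CACHE_EMPTY_LIST
    if value == {}:
        return SETTING_CACHE_EMPTY_DICT
    return value


def decode_from_cache(raw):
    # Because SETTING_CACHE_NONE == SETTING_CACHE_NOTSET, a cached None
    # decodes the same as "not set" (the caller then falls back to defaults)
    if raw == SETTING_CACHE_NOTSET:
        return None
    if raw == SETTING_CACHE_EMPTY_LIST:
        return []
    if raw == SETTING_CACHE_EMPTY_DICT:
        return {}
    return raw


print(decode_from_cache(encode_for_cache([])))  # []
```

Collapsing `None` into the "not set" sentinel is the deliberate trade-off the comment block describes: a stored `None` behaves exactly like an absent setting.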
@@ -29,9 +29,10 @@ def reg(request):
     # as "defined in a settings file". This is analogous to manually
     # specifying a setting on the filesystem (e.g., in a local_settings.py in
     # development, or in /etc/tower/conf.d/<something>.py)
-    defaults = request.node.get_marker('defined_in_file')
-    if defaults:
-        settings.configure(**defaults.kwargs)
+    for marker in request.node.own_markers:
+        if marker.name == 'defined_in_file':
+            settings.configure(**marker.kwargs)

     settings._wrapped = SettingsWrapper(settings._wrapped,
                                         cache,
                                         registry)
@@ -41,13 +41,16 @@ def settings(request):
     cache = LocMemCache(str(uuid4()), {})  # make a new random cache each time
     settings = LazySettings()
     registry = SettingsRegistry(settings)
+    defaults = {}

     # @pytest.mark.defined_in_file can be used to mark specific setting values
     # as "defined in a settings file". This is analogous to manually
     # specifying a setting on the filesystem (e.g., in a local_settings.py in
     # development, or in /etc/tower/conf.d/<something>.py)
-    in_file_marker = request.node.get_marker('defined_in_file')
-    defaults = in_file_marker.kwargs if in_file_marker else {}
+    for marker in request.node.own_markers:
+        if marker.name == 'defined_in_file':
+            defaults = marker.kwargs

     defaults['DEFAULTS_SNAPSHOT'] = {}
     settings.configure(**defaults)
     settings._wrapped = SettingsWrapper(settings._wrapped,
@@ -63,15 +66,6 @@ def test_unregistered_setting(settings):
     assert settings.cache.get('DEBUG') is None


-def test_cached_settings_unicode_is_auto_decoded(settings):
-    # https://github.com/linsomniac/python-memcached/issues/79
-    # https://github.com/linsomniac/python-memcached/blob/288c159720eebcdf667727a859ef341f1e908308/memcache.py#L961
-
-    value = 'Iñtërnâtiônàlizætiøn'  # this simulates what python-memcached does on cache.set()
-    settings.cache.set('DEBUG', value)
-    assert settings.cache.get('DEBUG') == 'Iñtërnâtiônàlizætiøn'
-
-
 def test_read_only_setting(settings):
     settings.registry.register(
         'AWX_READ_ONLY',
@@ -251,31 +245,6 @@ def test_setting_from_db(settings, mocker):
     assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'


-@pytest.mark.parametrize('encrypted', (True, False))
-def test_setting_from_db_with_unicode(settings, mocker, encrypted):
-    settings.registry.register(
-        'AWX_SOME_SETTING',
-        field_class=fields.CharField,
-        category=_('System'),
-        category_slug='system',
-        default='DEFAULT',
-        encrypted=encrypted
-    )
-    # this simulates a bug in python-memcached; see https://github.com/linsomniac/python-memcached/issues/79
-    value = 'Iñtërnâtiônàlizætiøn'
-
-    setting_from_db = mocker.Mock(id=1, key='AWX_SOME_SETTING', value=value)
-    mocks = mocker.Mock(**{
-        'order_by.return_value': mocker.Mock(**{
-            '__iter__': lambda self: iter([setting_from_db]),
-            'first.return_value': setting_from_db
-        }),
-    })
-    with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
-        assert settings.AWX_SOME_SETTING == 'Iñtërnâtiônàlizætiøn'
-        assert settings.cache.get('AWX_SOME_SETTING') == 'Iñtërnâtiônàlizætiøn'


 @pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
 def test_read_only_setting_assignment(settings):
     "read-only settings cannot be overwritten"
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1513,8 +1513,7 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
     thus can be made by a job template administrator which may not have access
     to any inventory, project, or credentials associated with the template.
     '''
-    # We are white listing fields that can
-    field_whitelist = [
+    allowed_fields = [
         'name', 'description', 'forks', 'limit', 'verbosity', 'extra_vars',
         'job_tags', 'force_handlers', 'skip_tags', 'ask_variables_on_launch',
         'ask_tags_on_launch', 'ask_job_type_on_launch', 'ask_skip_tags_on_launch',
@@ -1529,7 +1528,7 @@ class JobTemplateAccess(NotificationAttachMixin, BaseAccess):
         if k not in [x.name for x in obj._meta.concrete_fields]:
             continue
         if hasattr(obj, k) and getattr(obj, k) != v:
-            if k not in field_whitelist and v != getattr(obj, '%s_id' % k, None) \
+            if k not in allowed_fields and v != getattr(obj, '%s_id' % k, None) \
                     and not (hasattr(obj, '%s_id' % k) and getattr(obj, '%s_id' % k) is None and v == ''):  # Equate '' to None in the case of foreign keys
                 return False
     return True
@@ -2480,13 +2479,16 @@ class NotificationAccess(BaseAccess):

 class LabelAccess(BaseAccess):
     '''
-    I can see/use a Label if I have permission to the associated organization
+    I can see/use a Label if I have permission to the associated organization, or to a JT that the label is on
     '''
     model = Label
     prefetch_related = ('modified_by', 'created_by', 'organization',)

     def filtered_queryset(self):
-        return self.model.objects.all()
+        return self.model.objects.filter(
+            Q(organization__in=Organization.accessible_pk_qs(self.user, 'read_role')) |
+            Q(unifiedjobtemplate_labels__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role'))
+        )

     @check_superuser
     def can_add(self, data):
@@ -80,11 +80,11 @@ register(
 )

 register(
-    'PROXY_IP_WHITELIST',
+    'PROXY_IP_ALLOWED_LIST',
     field_class=fields.StringListField,
-    label=_('Proxy IP Whitelist'),
+    label=_('Proxy IP Allowed List'),
     help_text=_("If Tower is behind a reverse proxy/load balancer, use this setting "
-                "to whitelist the proxy IP addresses from which Tower should trust "
+                "to configure the proxy IP addresses from which Tower should trust "
                 "custom REMOTE_HOST_HEADERS header values. "
                 "If this setting is an empty list (the default), the headers specified by "
                 "REMOTE_HOST_HEADERS will be trusted unconditionally"),
@@ -241,7 +241,7 @@ register(
     field_class=fields.StringListField,
     required=False,
     label=_('Paths to expose to isolated jobs'),
-    help_text=_('Whitelist of paths that would otherwise be hidden to expose to isolated jobs. Enter one path per line.'),
+    help_text=_('List of paths that would otherwise be hidden to expose to isolated jobs. Enter one path per line.'),
     category=_('Jobs'),
     category_slug='jobs',
 )
@@ -458,7 +458,8 @@ register(
     required=False,
     allow_blank=True,
     label=_('Primary Galaxy Server Username'),
-    help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
+    help_text=_('(This setting is deprecated and will be removed in a future release) '
+                'For using a galaxy server at higher precedence than the public Ansible Galaxy. '
                 'The username to use for basic authentication against the Galaxy instance, '
                 'this is mutually exclusive with PRIMARY_GALAXY_TOKEN.'),
     category=_('Jobs'),
@@ -472,7 +473,8 @@ register(
     required=False,
     allow_blank=True,
     label=_('Primary Galaxy Server Password'),
-    help_text=_('For using a galaxy server at higher precedence than the public Ansible Galaxy. '
+    help_text=_('(This setting is deprecated and will be removed in a future release) '
+                'For using a galaxy server at higher precedence than the public Ansible Galaxy. '
                 'The password to use for basic authentication against the Galaxy instance, '
                 'this is mutually exclusive with PRIMARY_GALAXY_TOKEN.'),
     category=_('Jobs'),
@@ -10,8 +10,7 @@ __all__ = [
     'ANSI_SGR_PATTERN', 'CAN_CANCEL', 'ACTIVE_STATES', 'STANDARD_INVENTORY_UPDATE_ENV'
 ]

-CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'cloudforms', 'tower')
+CLOUD_PROVIDERS = ('azure_rm', 'ec2', 'gce', 'vmware', 'openstack', 'rhv', 'satellite6', 'tower')
 SCHEDULEABLE_PROVIDERS = CLOUD_PROVIDERS + ('custom', 'scm',)
 PRIVILEGE_ESCALATION_METHODS = [
     ('sudo', _('Sudo')), ('su', _('Su')), ('pbrun', _('Pbrun')), ('pfexec', _('Pfexec')),
@@ -32,7 +31,7 @@ STANDARD_INVENTORY_UPDATE_ENV = {
 CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
 ACTIVE_STATES = CAN_CANCEL
 CENSOR_VALUE = '************'
-ENV_BLACKLIST = frozenset((
+ENV_BLOCKLIST = frozenset((
     'VIRTUAL_ENV', 'PATH', 'PYTHONPATH', 'PROOT_TMP_DIR', 'JOB_ID',
     'INVENTORY_ID', 'INVENTORY_SOURCE_ID', 'INVENTORY_UPDATE_ID',
     'AD_HOC_COMMAND_ID', 'REST_API_URL', 'REST_API_TOKEN', 'MAX_EVENT_RES',
@@ -42,7 +41,7 @@ ENV_BLACKLIST = frozenset((
 ))

 # loggers that may be called in process of emitting a log
-LOGGER_BLACKLIST = (
+LOGGER_BLOCKLIST = (
     'awx.main.utils.handlers',
     'awx.main.utils.formatters',
     'awx.main.utils.filters',
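`ENV_BLOCKLIST` keeps credential injectors from overriding environment variables the job runner itself depends on. A small illustrative check in the spirit of the validation that appears later in `fields.py` (the variable names below are a trimmed subset, and `validate_env_var_allowed` here is a simplified stand-in for the real method, which raises Django `ValidationError`s):

```python
ENV_BLOCKLIST = frozenset((
    'VIRTUAL_ENV', 'PATH', 'PYTHONPATH', 'PROOT_TMP_DIR', 'JOB_ID',
    'INVENTORY_ID', 'REST_API_URL', 'REST_API_TOKEN',
))


def validate_env_var_allowed(env_var):
    # Reject any injector-provided env var that would shadow a runner-internal one
    if env_var in ENV_BLOCKLIST:
        raise ValueError(
            'Environment variable {} is not allowed to be used in credentials.'.format(env_var)
        )
    return env_var


print(validate_env_var_allowed('MY_CLOUD_TOKEN'))  # MY_CLOUD_TOKEN
```

Attempting to inject `PATH` or `REST_API_TOKEN` through a custom credential type would otherwise let a credential author hijack the job environment.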
@@ -1,4 +1,4 @@
-from .plugin import CredentialPlugin, CertFiles
+from .plugin import CredentialPlugin, CertFiles, raise_for_status

 from urllib.parse import quote, urlencode, urljoin

@@ -82,8 +82,9 @@ def aim_backend(**kwargs):
         timeout=30,
         cert=cert,
         verify=verify,
+        allow_redirects=False,
     )
-    res.raise_for_status()
+    raise_for_status(res)
     return res.json()['Content']
@@ -1,4 +1,4 @@
-from .plugin import CredentialPlugin, CertFiles
+from .plugin import CredentialPlugin, CertFiles, raise_for_status

 import base64
 from urllib.parse import urljoin, quote
@@ -58,7 +58,8 @@ def conjur_backend(**kwargs):

     auth_kwargs = {
         'headers': {'Content-Type': 'text/plain'},
-        'data': api_key
+        'data': api_key,
+        'allow_redirects': False,
     }

     with CertFiles(cacert) as cert:
@@ -68,11 +69,12 @@ def conjur_backend(**kwargs):
         urljoin(url, '/'.join(['authn', account, username, 'authenticate'])),
         **auth_kwargs
     )
-    resp.raise_for_status()
+    raise_for_status(resp)
     token = base64.b64encode(resp.content).decode('utf-8')

     lookup_kwargs = {
         'headers': {'Authorization': 'Token token="{}"'.format(token)},
+        'allow_redirects': False,
     }

     # https://www.conjur.org/api.html#secrets-retrieve-a-secret-get
@@ -88,7 +90,7 @@ def conjur_backend(**kwargs):
     with CertFiles(cacert) as cert:
         lookup_kwargs['verify'] = cert
         resp = requests.get(path, timeout=30, **lookup_kwargs)
-    resp.raise_for_status()
+    raise_for_status(resp)
     return resp.text
@@ -3,7 +3,7 @@ import os
 import pathlib
 from urllib.parse import urljoin

-from .plugin import CredentialPlugin, CertFiles
+from .plugin import CredentialPlugin, CertFiles, raise_for_status

 import requests
 from django.utils.translation import ugettext_lazy as _
@@ -145,7 +145,10 @@ def kv_backend(**kwargs):
     cacert = kwargs.get('cacert', None)
     api_version = kwargs['api_version']

-    request_kwargs = {'timeout': 30}
+    request_kwargs = {
+        'timeout': 30,
+        'allow_redirects': False,
+    }

     sess = requests.Session()
     sess.headers['Authorization'] = 'Bearer {}'.format(token)
@@ -175,7 +178,7 @@ def kv_backend(**kwargs):
     with CertFiles(cacert) as cert:
         request_kwargs['verify'] = cert
         response = sess.get(request_url, **request_kwargs)
-    response.raise_for_status()
+    raise_for_status(response)

     json = response.json()
     if api_version == 'v2':
@@ -198,7 +201,10 @@ def ssh_backend(**kwargs):
     role = kwargs['role']
     cacert = kwargs.get('cacert', None)

-    request_kwargs = {'timeout': 30}
+    request_kwargs = {
+        'timeout': 30,
+        'allow_redirects': False,
+    }

     request_kwargs['json'] = {'public_key': kwargs['public_key']}
     if kwargs.get('valid_principals'):
@@ -215,7 +221,7 @@ def ssh_backend(**kwargs):
         request_kwargs['verify'] = cert
         resp = sess.post(request_url, **request_kwargs)

-    resp.raise_for_status()
+    raise_for_status(resp)
     return resp.json()['data']['signed_key']
@@ -3,9 +3,19 @@ import tempfile

 from collections import namedtuple

+from requests.exceptions import HTTPError
+
 CredentialPlugin = namedtuple('CredentialPlugin', ['name', 'inputs', 'backend'])


+def raise_for_status(resp):
+    resp.raise_for_status()
+    if resp.status_code >= 300:
+        exc = HTTPError()
+        setattr(exc, 'response', resp)
+        raise exc
+
+
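With `allow_redirects=False`, a 3xx reply from a secret backend comes back as-is instead of being silently followed, and `Response.raise_for_status()` in `requests` only raises for 4xx/5xx, so the `raise_for_status` helper above promotes redirects to errors as well. A stdlib-only sketch of the same guard, where `FakeResponse` and the local `HTTPError` are stand-ins for `requests.Response` and `requests.exceptions.HTTPError`:

```python
class HTTPError(Exception):
    """Stand-in for requests.exceptions.HTTPError."""


class FakeResponse:
    """Minimal stand-in for requests.Response."""
    def __init__(self, status_code):
        self.status_code = status_code

    def raise_for_status(self):
        # like requests, only 4xx/5xx raise here; redirects pass through
        if self.status_code >= 400:
            raise HTTPError('HTTP {}'.format(self.status_code))


def raise_for_status(resp):
    resp.raise_for_status()
    if resp.status_code >= 300:
        # a redirect from a secret lookup is suspicious; treat it as an error
        exc = HTTPError()
        exc.response = resp
        raise exc


raise_for_status(FakeResponse(200))  # fine
try:
    raise_for_status(FakeResponse(301))
except HTTPError as e:
    print('redirect rejected:', e.response.status_code)  # redirect rejected: 301
```

Refusing redirects matters for credential plugins: following one could send a token or secret request to a host other than the one the user configured.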
 class CertFiles():
     """
     A context manager used for writing a certificate and (optional) key
@@ -24,7 +24,7 @@ class RecordedQueryLog(object):
         try:
             self.threshold = cache.get('awx-profile-sql-threshold')
         except Exception:
-            # if we can't reach memcached, just assume profiling's off
+            # if we can't reach the cache, just assume profiling's off
             self.threshold = None

     def append(self, query):
@@ -110,7 +110,7 @@ class RecordedQueryLog(object):
 class DatabaseWrapper(BaseDatabaseWrapper):
     """
     This is a special subclass of Django's postgres DB backend which - based on
-    the value of a special flag in memcached - captures slow queries and
+    the value of a special flag in cache - captures slow queries and
     writes profile and Python stack metadata to the disk.
     """

@@ -133,19 +133,19 @@ class DatabaseWrapper(BaseDatabaseWrapper):
         # is the same mechanism used by libraries like the django-debug-toolbar)
         #
         # in _this_ implementation, we represent it as a property which will
-        # check memcache for a special flag to be set (when the flag is set, it
+        # check the cache for a special flag to be set (when the flag is set, it
         # means we should start recording queries because somebody called
         # `awx-manage profile_sql`)
         #
         # it's worth noting that this property is wrapped w/ @memoize because
         # Django references this attribute _constantly_ (in particular, once
-        # per executed query); doing a memcached.get() _at most_ once per
+        # per executed query); doing a cache.get() _at most_ once per
         # second is a good enough window to detect when profiling is turned
         # on/off by a system administrator
         try:
             threshold = cache.get('awx-profile-sql-threshold')
         except Exception:
-            # if we can't reach memcached, just assume profiling's off
+            # if we can't reach the cache, just assume profiling's off
             threshold = None
         self.queries_log.threshold = threshold
         return threshold is not None
@@ -43,7 +43,7 @@ class Control(object):
         for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
             if reply is None:
                 logger.error(f'{self.service} did not reply within {timeout}s')
-                raise RuntimeError("{self.service} did not reply within {timeout}s")
+                raise RuntimeError(f"{self.service} did not reply within {timeout}s")
             break

         return json.loads(reply.payload)
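The one-character fix above matters: without the `f` prefix, the braces are emitted literally instead of being interpolated, so the error message would have read `{self.service} did not reply…` verbatim. A quick illustration:

```python
service = 'dispatcher'
timeout = 5

broken = "{service} did not reply within {timeout}s"    # missing f prefix
fixed = f"{service} did not reply within {timeout}s"

print(broken)  # {service} did not reply within {timeout}s
print(fixed)   # dispatcher did not reply within 5s
```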
@@ -222,7 +222,7 @@ class WorkerPool(object):
         idx = len(self.workers)
         # It's important to close these because we're _about_ to fork, and we
         # don't want the forked processes to inherit the open sockets
-        # for the DB and memcached connections (that way lies race conditions)
+        # for the DB and cache connections (that way lies race conditions)
         django_connection.close()
         django_cache.close()
         worker = PoolWorker(self.queue_size, self.target, (idx,) + self.target_args)
@@ -7,8 +7,8 @@ import json
 import re
 import urllib.parse

-from jinja2 import Environment, StrictUndefined
-from jinja2.exceptions import UndefinedError, TemplateSyntaxError
+from jinja2 import sandbox, StrictUndefined
+from jinja2.exceptions import UndefinedError, TemplateSyntaxError, SecurityError

 # Django
 from django.contrib.postgres.fields import JSONField as upstream_JSONBField
@@ -50,7 +50,7 @@ from awx.main.models.rbac import (
     batch_role_ancestor_rebuilding, Role,
     ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR
 )
-from awx.main.constants import ENV_BLACKLIST
+from awx.main.constants import ENV_BLOCKLIST
 from awx.main import utils

@@ -637,6 +637,14 @@ class CredentialInputField(JSONSchemaField):
             else:
                 decrypted_values[k] = v

+        # don't allow secrets with $encrypted$ on new object creation
+        if not model_instance.pk:
+            for field in model_instance.credential_type.secret_fields:
+                if value.get(field) == '$encrypted$':
+                    raise serializers.ValidationError({
+                        self.name: [f'$encrypted$ is a reserved keyword, and cannot be used for {field}.']
+                    })
+
         super(JSONSchemaField, self).validate(decrypted_values, model_instance)
         errors = {}
         for error in Draft4Validator(
@@ -870,9 +878,9 @@ class CredentialTypeInjectorField(JSONSchemaField):
                     'use is not allowed in credentials.').format(env_var),
                 code='invalid', params={'value': env_var},
             )
-        if env_var in ENV_BLACKLIST:
+        if env_var in ENV_BLOCKLIST:
             raise django_exceptions.ValidationError(
-                _('Environment variable {} is blacklisted from use in credentials.').format(env_var),
+                _('Environment variable {} is not allowed to be used in credentials.').format(env_var),
                 code='invalid', params={'value': env_var},
             )

@@ -932,7 +940,7 @@ class CredentialTypeInjectorField(JSONSchemaField):
                 self.validate_env_var_allowed(key)
         for key, tmpl in injector.items():
             try:
-                Environment(
+                sandbox.ImmutableSandboxedEnvironment(
                     undefined=StrictUndefined
                 ).from_string(tmpl).render(valid_namespace)
             except UndefinedError as e:
@@ -942,6 +950,10 @@ class CredentialTypeInjectorField(JSONSchemaField):
                     code='invalid',
                     params={'value': value},
                 )
+            except SecurityError as e:
+                raise django_exceptions.ValidationError(
+                    _('Encountered unsafe code execution: {}').format(e)
+                )
             except TemplateSyntaxError as e:
                 raise django_exceptions.ValidationError(
                     _('Syntax error rendering template for {sub_key} inside of {type} ({error_msg})').format(
awx/main/management/commands/bottleneck.py (new file, 96 lines)
@@ -0,0 +1,96 @@
from django.core.management.base import BaseCommand
from django.db import connection

from awx.main.models import JobTemplate


class Command(BaseCommand):
    help = "Find the slowest tasks and hosts for a Job Template's most recent runs."

    def add_arguments(self, parser):
        parser.add_argument('--template', dest='jt', type=int,
                            help='ID of the Job Template to profile')
        parser.add_argument('--threshold', dest='threshold', type=float, default=30,
                            help='Only show tasks that took at least this many seconds (defaults to 30)')
        parser.add_argument('--history', dest='history', type=float, default=25,
                            help='The number of historic jobs to look at')
        parser.add_argument('--ignore', action='append', help='ignore a specific action (e.g., --ignore git)')

    def handle(self, *args, **options):
        jt = options['jt']
        threshold = options['threshold']
        history = options['history']
        ignore = options['ignore']

        print('## ' + JobTemplate.objects.get(pk=jt).name + f' (last {history} runs)\n')
        with connection.cursor() as cursor:
            cursor.execute(
                f'''
                SELECT
                    b.id, b.job_id, b.host_name, b.created - a.created delta,
                    b.task task,
                    b.event_data::json->'task_action' task_action,
                    b.event_data::json->'task_path' task_path
                FROM main_jobevent a JOIN main_jobevent b
                ON b.parent_uuid = a.parent_uuid AND a.host_name = b.host_name
                WHERE
                    a.event = 'runner_on_start' AND
                    b.event != 'runner_on_start' AND
                    b.event != 'runner_on_skipped' AND
                    b.failed = false AND
                    a.job_id IN (
                        SELECT unifiedjob_ptr_id FROM main_job
                        WHERE job_template_id={jt}
                        ORDER BY unifiedjob_ptr_id DESC
                        LIMIT {history}
                    )
                ORDER BY delta DESC;
                '''
            )
            slowest_events = cursor.fetchall()

        def format_td(x):
            return str(x).split('.')[0]

        fastest = dict()
        for event in slowest_events:
            _id, job_id, host, duration, task, action, playbook = event
            playbook = playbook.rsplit('/')[-1]
            if ignore and action in ignore:
                continue
            if host:
                fastest[(action, playbook)] = (_id, host, format_td(duration))

        host_counts = dict()
        warned = set()
        print(f'slowest tasks (--threshold={threshold})\n---')

        for event in slowest_events:
            _id, job_id, host, duration, task, action, playbook = event
            if ignore and action in ignore:
                continue
            if duration.total_seconds() < threshold:
                break
            playbook = playbook.rsplit('/')[-1]
            human_duration = format_td(duration)

            fastest_summary = ''
            fastest_match = fastest.get((action, playbook))
            if fastest_match[2] != human_duration and (host, action, playbook) not in warned:
                warned.add((host, action, playbook))
                fastest_summary = ' ' + self.style.WARNING(f'{fastest_match[1]} ran this in {fastest_match[2]}s at /api/v2/job_events/{fastest_match[0]}/')

            url = f'/api/v2/jobs/{job_id}/'
            print(' -- '.join([url, host, human_duration, action, task, playbook]) + fastest_summary)
            host_counts.setdefault(host, [])
            host_counts[host].append(duration)

        host_counts = sorted(host_counts.items(), key=lambda item: [e.total_seconds() for e in item[1]], reverse=True)

        print('\nslowest hosts\n---')
        for h, matches in host_counts:
            total = len(matches)
            total_seconds = sum([e.total_seconds() for e in matches])
            print(f'{h} had {total} tasks that ran longer than {threshold} second(s) for a total of {total_seconds}')

        print('')
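The host ranking at the end of the command sorts lists of `timedelta` durations by their total seconds, slowest host first. That aggregation can be exercised on its own with stdlib values (the host names and durations here are made up for illustration):

```python
from datetime import timedelta

host_counts = {
    'web01': [timedelta(seconds=45), timedelta(seconds=90)],
    'db01': [timedelta(seconds=31)],
}

# rank hosts by their slow-task durations, slowest first (as bottleneck.py does)
ranked = sorted(
    host_counts.items(),
    key=lambda item: [e.total_seconds() for e in item[1]],
    reverse=True,
)

for host, matches in ranked:
    total_seconds = sum(e.total_seconds() for e in matches)
    print(f'{host}: {len(matches)} slow task(s), {total_seconds}s total')
```

Sorting by the list of per-task seconds (rather than a single sum) compares the largest durations element by element, which is the behavior the management command relies on.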
@@ -271,7 +271,7 @@ class Command(BaseCommand):
                                logging.DEBUG, 0]))
         logger.setLevel(log_levels.get(self.verbosity, 0))

-    def _get_instance_id(self, from_dict, default=''):
+    def _get_instance_id(self, variables, default=''):
         '''
         Retrieve the instance ID from the given dict of host variables.

@@ -279,15 +279,23 @@ class Command(BaseCommand):
         the lookup will traverse into nested dicts, equivalent to:

             from_dict.get('foo', {}).get('bar', default)

+        Multiple ID variables may be specified as 'foo.bar,foobar', so that
+        it will first try to find 'bar' inside of 'foo', and if unable,
+        will try to find 'foobar' as a fallback
         '''
         instance_id = default
         if getattr(self, 'instance_id_var', None):
-            for key in self.instance_id_var.split('.'):
-                if not hasattr(from_dict, 'get'):
-                    instance_id = default
+            for single_instance_id in self.instance_id_var.split(','):
+                from_dict = variables
+                for key in single_instance_id.split('.'):
+                    if not hasattr(from_dict, 'get'):
+                        instance_id = default
+                        break
+                    instance_id = from_dict.get(key, default)
+                    from_dict = instance_id
+                if instance_id:
                     break
-                instance_id = from_dict.get(key, default)
-                from_dict = instance_id
         return smart_text(instance_id)

     def _get_enabled(self, from_dict, default=None):
@@ -422,7 +430,7 @@ class Command(BaseCommand):
         for mem_host in self.all_group.all_hosts.values():
             instance_id = self._get_instance_id(mem_host.variables)
             if not instance_id:
-                logger.warning('Host "%s" has no "%s" variable',
+                logger.warning('Host "%s" has no "%s" variable(s)',
                                mem_host.name, self.instance_id_var)
                 continue
             mem_host.instance_id = instance_id
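Pulled out of the management command, the comma/dot lookup above behaves like this stand-alone function (`smart_text` and the command's `self` are dropped for the sketch; `get_instance_id` is an illustrative name, not the real method):

```python
def get_instance_id(variables, instance_id_var, default=''):
    """Try each comma-separated spec in turn; dots traverse nested dicts."""
    instance_id = default
    for single_instance_id in instance_id_var.split(','):
        from_dict = variables
        for key in single_instance_id.split('.'):
            # bail out of this spec if we hit something that isn't dict-like
            if not hasattr(from_dict, 'get'):
                instance_id = default
                break
            instance_id = from_dict.get(key, default)
            from_dict = instance_id
        if instance_id:
            break  # first spec that yields a truthy value wins
    return str(instance_id)


host_vars = {'foo': {'bar': 'i-123'}, 'foobar': 'fallback-id'}
print(get_instance_id(host_vars, 'foo.bar,foobar'))       # i-123
print(get_instance_id({'foobar': 'x'}, 'foo.bar,foobar'))  # x
```

The fallback chain is what the hunk adds: previously only a single dotted path was supported, so hosts from sources that expose the ID under a different variable name got no instance ID at all.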
@@ -19,3 +19,7 @@ class Command(BaseCommand):
         profile_sql.delay(
             threshold=options['threshold'], minutes=options['minutes']
         )
+        print(f"Logging initiated with a threshold of {options['threshold']} second(s) and a duration of"
+              f" {options['minutes']} minute(s); any queries that meet the criteria can"
+              f" be found in /var/log/tower/profile/."
+              )
@@ -82,7 +82,7 @@ class Command(BaseCommand):
             OAuth2Application.objects.filter(pk=app.pk).update(client_secret=encrypted)

     def _settings(self):
-        # don't update memcached, the *actual* value isn't changing
+        # don't update the cache, the *actual* value isn't changing
        post_save.disconnect(on_post_save_setting, sender=Setting)
        for setting in Setting.objects.filter().order_by('pk'):
            if settings_registry.is_setting_encrypted(setting.key):
@@ -44,7 +44,7 @@ class Command(BaseCommand):

         # It's important to close these because we're _about_ to fork, and we
         # don't want the forked processes to inherit the open sockets
-        # for the DB and memcached connections (that way lies race conditions)
+        # for the DB and cache connections (that way lies race conditions)
         django_connection.close()
         django_cache.close()
@ -44,20 +44,6 @@ class HostManager(models.Manager):
inventory_sources__source='tower'
).filter(inventory__organization=org_id).values('name').distinct().count()

def active_counts_by_org(self):
"""Return the counts of active, unique hosts for each organization.
Construction of query involves:
- remove any ordering specified in model's Meta
- Exclude hosts sourced from another Tower
- Consider only hosts where the canonical inventory is owned by each organization
- Restrict the query to only count distinct names
- Return the counts
"""
return self.order_by().exclude(
inventory_sources__source='tower'
).values('inventory__organization').annotate(
inventory__organization__count=models.Count('name', distinct=True))

def get_queryset(self):
"""When the parent instance of the host query set has a `kind=smart` and a `host_filter`
set. Use the `host_filter` to generate the queryset for the hosts.
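The `active_counts_by_org` queryset above excludes Tower-sourced hosts, groups by organization, and counts distinct host names. The same aggregation in plain Python, with tuples standing in for host rows:

```python
from collections import defaultdict

def active_counts_by_org(rows):
    """rows: (org_id, host_name, source) tuples; mirrors the ORM query:
    exclude 'tower'-sourced hosts, then count distinct names per org."""
    names_per_org = defaultdict(set)
    for org_id, host_name, source in rows:
        if source == 'tower':
            continue  # hosts sourced from another Tower are excluded
        names_per_org[org_id].add(host_name)  # set gives the distinct count
    return {org: len(names) for org, names in names_per_org.items()}

rows = [(1, 'web1', 'ec2'), (1, 'web1', 'gce'), (1, 'db1', 'ec2'), (2, 'web1', 'tower')]
print(active_counts_by_org(rows))  # {1: 2}
```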
@ -14,7 +14,7 @@ from django.conf import settings
from django.contrib.auth.models import User
from django.db.migrations.executor import MigrationExecutor
from django.db import connection
from django.shortcuts import get_object_or_404, redirect
from django.shortcuts import redirect
from django.apps import apps
from django.utils.deprecation import MiddlewareMixin
from django.utils.translation import ugettext_lazy as _
@ -148,7 +148,21 @@ class URLModificationMiddleware(MiddlewareMixin):
def _named_url_to_pk(cls, node, resource, named_url):
kwargs = {}
if node.populate_named_url_query_kwargs(kwargs, named_url):
return str(get_object_or_404(node.model, **kwargs).pk)
match = node.model.objects.filter(**kwargs).first()
if match:
return str(match.pk)
else:
# if the name does *not* resolve to any actual resource,
# we should still attempt to route it through so that 401s are
# respected
# using "zero" here will cause the URL regex to match e.g.,
# /api/v2/users/<integer>/, but it also means that anonymous
# users will go down the path of having their credentials
# verified; in this way, *anonymous* users that visit
# /api/v2/users/invalid-username/ *won't* see a 404, they'll
# see a 401 as if they'd gone to /api/v2/users/0/
#
return '0'
if resource == 'job_templates' and '++' not in named_url:
# special case for deprecated job template case
# will not raise a 404 on its own
@ -178,6 +192,7 @@ class URLModificationMiddleware(MiddlewareMixin):
old_path = request.path_info
new_path = self._convert_named_url(old_path)
if request.path_info != new_path:
request.environ['awx.named_url_rewritten'] = request.path
request.path = request.path.replace(request.path_info, new_path)
request.path_info = new_path

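The middleware change swaps `get_object_or_404` for a `filter(...).first()` lookup so an unresolvable name maps to the sentinel pk `'0'` and still flows through authentication, yielding a 401 for anonymous users instead of a leaky 404. A minimal sketch of that resolution, with a list of dicts standing in for the model manager:

```python
def named_url_to_pk(objects, **kwargs):
    """Resolve a named-URL lookup to a pk string; fall back to '0' so the
    URL still matches /.../<integer>/ and auth checks run before any 404."""
    match = next(
        (obj for obj in objects
         if all(obj.get(k) == v for k, v in kwargs.items())),
        None,
    )
    if match:
        return str(match['pk'])
    # unknown names still route through as pk 0, so permission checks apply
    return '0'

users = [{'pk': 7, 'username': 'alice'}]
print(named_url_to_pk(users, username='alice'))    # 7
print(named_url_to_pk(users, username='missing'))  # 0
```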
29
awx/main/migrations/0117_v400_remove_cloudforms_inventory.py
Normal file
@ -0,0 +1,29 @@
# Generated by Django 2.2.11 on 2020-05-01 13:25

from django.db import migrations, models
from awx.main.migrations._inventory_source import create_scm_script_substitute


def convert_cloudforms_to_scm(apps, schema_editor):
create_scm_script_substitute(apps, 'cloudforms')


class Migration(migrations.Migration):

dependencies = [
('main', '0116_v400_remove_hipchat_notifications'),
]

operations = [
migrations.RunPython(convert_cloudforms_to_scm),
migrations.AlterField(
model_name='inventorysource',
name='source',
field=models.CharField(choices=[('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('openstack', 'OpenStack'), ('rhv', 'Red Hat Virtualization'), ('tower', 'Ansible Tower'), ('custom', 'Custom Script')], default=None, max_length=32),
),
migrations.AlterField(
model_name='inventoryupdate',
name='source',
field=models.CharField(choices=[('file', 'File, Directory or Script'), ('scm', 'Sourced from a Project'), ('ec2', 'Amazon EC2'), ('gce', 'Google Compute Engine'), ('azure_rm', 'Microsoft Azure Resource Manager'), ('vmware', 'VMware vCenter'), ('satellite6', 'Red Hat Satellite 6'), ('openstack', 'OpenStack'), ('rhv', 'Red Hat Virtualization'), ('tower', 'Ansible Tower'), ('custom', 'Custom Script')], default=None, max_length=32),
),
]
@ -1,6 +1,9 @@
import logging

from uuid import uuid4

from django.utils.encoding import smart_text
from django.utils.timezone import now

from awx.main.utils.common import parse_yaml_or_json

@ -87,3 +90,44 @@ def back_out_new_instance_id(apps, source, new_id):
modified_ct, source
))


def create_scm_script_substitute(apps, source):
"""Only applies for cloudforms in practice, but written generally.
Given a source type, this will replace all inventory sources of that type
with SCM inventory sources that source the script from Ansible core
"""
# the revision in the Ansible 2.9 stable branch this project will start out as
# it can still be updated manually later (but staying within 2.9 branch), if desired
ansible_rev = '6f83b9aff42331e15c55a171de0a8b001208c18c'
InventorySource = apps.get_model('main', 'InventorySource')
ContentType = apps.get_model('contenttypes', 'ContentType')
Project = apps.get_model('main', 'Project')
if not InventorySource.objects.filter(source=source).exists():
logger.debug('No sources of type {} to migrate'.format(source))
return
proj_name = 'Replacement project for {} type sources - {}'.format(source, uuid4())
right_now = now()
project = Project.objects.create(
name=proj_name,
created=right_now,
modified=right_now,
description='Created by migration',
polymorphic_ctype=ContentType.objects.get(model='project'),
# project-specific fields
scm_type='git',
scm_url='https://github.com/ansible/ansible.git',
scm_branch='stable-2.9',
scm_revision=ansible_rev
)
ct = 0
for inv_src in InventorySource.objects.filter(source=source).iterator():
inv_src.source = 'scm'
inv_src.source_project = project
inv_src.source_path = 'contrib/inventory/{}.py'.format(source)
inv_src.scm_last_revision = ansible_rev
inv_src.save(update_fields=['source', 'source_project', 'source_path', 'scm_last_revision'])
logger.debug('Changed inventory source {} to scm type'.format(inv_src.pk))
ct += 1
if ct:
logger.info('Changed total of {} inventory sources from {} type to scm'.format(ct, source))

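The migration helper above pins a replacement project to a fixed Ansible 2.9 revision and flips each matching inventory source over to `scm`. The conversion loop can be sketched against plain dicts (the ORM is stubbed out here; field names mirror the model):

```python
ANSIBLE_REV = '6f83b9aff42331e15c55a171de0a8b001208c18c'  # pinned stable-2.9 commit

def convert_sources_to_scm(sources, source_type, project):
    """Flip every inventory source of source_type to an SCM source that
    points at the replacement project; return how many were changed."""
    changed = 0
    for src in sources:
        if src['source'] != source_type:
            continue
        src.update(
            source='scm',
            source_project=project,
            source_path='contrib/inventory/{}.py'.format(source_type),
            scm_last_revision=ANSIBLE_REV,
        )
        changed += 1
    return changed

sources = [{'source': 'cloudforms'}, {'source': 'ec2'}]
print(convert_sources_to_scm(sources, 'cloudforms', 'replacement-project'))  # 1
```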
@ -127,9 +127,15 @@ def user_get_auditor_of_organizations(user):
return Organization.objects.filter(auditor_role__members=user)


@property
def created(user):
return user.date_joined


User.add_to_class('organizations', user_get_organizations)
User.add_to_class('admin_of_organizations', user_get_admin_of_organizations)
User.add_to_class('auditor_of_organizations', user_get_auditor_of_organizations)
User.add_to_class('created', created)


@property
@ -15,6 +15,7 @@ from crum import get_current_user

# AWX
from awx.main.utils import encrypt_field, parse_yaml_or_json
from awx.main.constants import CLOUD_PROVIDERS

__all__ = ['prevent_search', 'VarsDictProperty', 'BaseModel', 'CreatedModifiedModel',
'PasswordFieldsModel', 'PrimordialModel', 'CommonModel',
@ -50,7 +51,7 @@ PROJECT_UPDATE_JOB_TYPE_CHOICES = [
(PERM_INVENTORY_CHECK, _('Check')),
]

CLOUD_INVENTORY_SOURCES = ['ec2', 'vmware', 'gce', 'azure_rm', 'openstack', 'rhv', 'custom', 'satellite6', 'cloudforms', 'scm', 'tower',]
CLOUD_INVENTORY_SOURCES = list(CLOUD_PROVIDERS) + ['scm', 'custom']

VERBOSITY_CHOICES = [
(0, '0 (Normal)'),
@ -406,7 +407,7 @@ def prevent_search(relation):
sensitive_data = prevent_search(models.CharField(...))

The flag set by this function is used by
`awx.api.filters.FieldLookupBackend` to blacklist fields and relations that
`awx.api.filters.FieldLookupBackend` to block fields and relations that
should not be searchable/filterable via search query params
"""
setattr(relation, '__prevent_search__', True)

@ -11,7 +11,7 @@ import tempfile
from types import SimpleNamespace

# Jinja2
from jinja2 import Template
from jinja2 import sandbox

# Django
from django.db import models
@ -514,8 +514,11 @@ class CredentialType(CommonModelNameNotUnique):
# If any file templates are provided, render the files and update the
# special `tower` template namespace so the filename can be
# referenced in other injectors

sandbox_env = sandbox.ImmutableSandboxedEnvironment()

for file_label, file_tmpl in file_tmpls.items():
data = Template(file_tmpl).render(**namespace)
data = sandbox_env.from_string(file_tmpl).render(**namespace)
_, path = tempfile.mkstemp(dir=private_data_dir)
with open(path, 'w') as f:
f.write(data)
@ -537,14 +540,14 @@ class CredentialType(CommonModelNameNotUnique):
except ValidationError as e:
logger.error('Ignoring prohibited env var {}, reason: {}'.format(env_var, e))
continue
env[env_var] = Template(tmpl).render(**namespace)
safe_env[env_var] = Template(tmpl).render(**safe_namespace)
env[env_var] = sandbox_env.from_string(tmpl).render(**namespace)
safe_env[env_var] = sandbox_env.from_string(tmpl).render(**safe_namespace)

if 'INVENTORY_UPDATE_ID' not in env:
# awx-manage inventory_update does not support extra_vars via -e
extra_vars = {}
for var_name, tmpl in self.injectors.get('extra_vars', {}).items():
extra_vars[var_name] = Template(tmpl).render(**namespace)
extra_vars[var_name] = sandbox_env.from_string(tmpl).render(**namespace)

def build_extra_vars_file(vars, private_dir):
handle, path = tempfile.mkstemp(dir = private_dir)
@ -1103,26 +1106,36 @@ ManagedCredentialType(
}, {
'id': 'username',
'label': ugettext_noop('Username'),
'type': 'string'
'type': 'string',
'help_text': ugettext_noop('The Ansible Tower user to authenticate as.'
'This should not be set if an OAuth token is being used.')
}, {
'id': 'password',
'label': ugettext_noop('Password'),
'type': 'string',
'secret': True,
}, {
'id': 'oauth_token',
'label': ugettext_noop('OAuth Token'),
'type': 'string',
'secret': True,
'help_text': ugettext_noop('An OAuth token to use to authenticate to Tower with.'
'This should not be set if username/password are being used.')
}, {
'id': 'verify_ssl',
'label': ugettext_noop('Verify SSL'),
'type': 'boolean',
'secret': False
}],
'required': ['host', 'username', 'password'],
'required': ['host'],
},
injectors={
'env': {
'TOWER_HOST': '{{host}}',
'TOWER_USERNAME': '{{username}}',
'TOWER_PASSWORD': '{{password}}',
'TOWER_VERIFY_SSL': '{{verify_ssl}}'
'TOWER_VERIFY_SSL': '{{verify_ssl}}',
'TOWER_OAUTH_TOKEN': '{{oauth_token}}'
}
},
)

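Switching from bare `Template` to `sandbox.ImmutableSandboxedEnvironment` means injector templates can still interpolate credential inputs, but attribute probing that could escape the template is rejected. A small demonstration (requires the `jinja2` package):

```python
from jinja2 import sandbox
from jinja2.exceptions import SecurityError

env = sandbox.ImmutableSandboxedEnvironment()

# Ordinary interpolation still works as before.
print(env.from_string('host={{ host }}').render(host='tower.example.com'))

# Reaching for dunder attributes raises SecurityError instead of rendering.
try:
    env.from_string('{{ host.__class__.__mro__ }}').render(host='x')
    blocked = False
except SecurityError:
    blocked = True
print(blocked)  # True
```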
@ -101,3 +101,17 @@ def openstack(cred, env, private_data_dir):
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
env['OS_CLIENT_CONFIG_FILE'] = path


def kubernetes_bearer_token(cred, env, private_data_dir):
env['K8S_AUTH_HOST'] = cred.get_input('host', default='')
env['K8S_AUTH_API_KEY'] = cred.get_input('bearer_token', default='')
if cred.get_input('verify_ssl') and 'ssl_ca_cert' in cred.inputs:
env['K8S_AUTH_VERIFY_SSL'] = 'True'
handle, path = tempfile.mkstemp(dir=private_data_dir)
with os.fdopen(handle, 'w') as f:
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
f.write(cred.get_input('ssl_ca_cert'))
env['K8S_AUTH_SSL_CA_CERT'] = path
else:
env['K8S_AUTH_VERIFY_SSL'] = 'False'
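`kubernetes_bearer_token` writes the CA certificate via `tempfile.mkstemp` and tightens the mode before writing, so the secret is never world-readable. The same pattern in isolation:

```python
import os
import stat
import tempfile

def write_private_file(data, private_data_dir=None):
    """Create a temp file readable/writable only by its owner, then write
    the secret into it; mirrors the chmod-before-write ordering above."""
    handle, path = tempfile.mkstemp(dir=private_data_dir)
    with os.fdopen(handle, 'w') as f:
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600
        f.write(data)
    return path

path = write_private_file('-----BEGIN CERTIFICATE-----\n...')
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
os.unlink(path)
```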
@ -338,7 +338,7 @@ class BasePlaybookEvent(CreatedModifiedModel):

if isinstance(self, JobEvent):
hostnames = self._hostnames()
self._update_host_summary_from_stats(hostnames)
self._update_host_summary_from_stats(set(hostnames))
if self.job.inventory:
try:
self.job.inventory.update_computed_fields()
@ -521,7 +521,9 @@ class JobEvent(BasePlaybookEvent):
for summary in JobHostSummary.objects.filter(job_id=job.id).values('id', 'host_id')
)
for h in all_hosts:
h.last_job_id = job.id
# if the hostname *shows up* in the playbook_on_stats event
if h.name in hostnames:
h.last_job_id = job.id
if h.id in host_mapping:
h.last_job_host_summary_id = host_mapping[h.id]
Host.objects.bulk_update(all_hosts, ['last_job_id', 'last_job_host_summary_id'])

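The events change stops stamping `last_job_id` on every inventory host and only touches hosts that actually appear in the `playbook_on_stats` event. A plain-data sketch of the updated loop (dicts stand in for `Host` rows; field names mirror the model):

```python
def update_host_summaries(all_hosts, hostnames, host_mapping, job_id):
    """Stamp last_job_id / last_job_host_summary_id only on hosts that the
    playbook's stats event actually mentions; return the touched names."""
    touched = []
    for h in all_hosts:
        if h['name'] in hostnames:  # host showed up in playbook_on_stats
            h['last_job_id'] = job_id
            if h['id'] in host_mapping:
                h['last_job_host_summary_id'] = host_mapping[h['id']]
            touched.append(h['name'])
    return touched

hosts = [{'id': 1, 'name': 'web1'}, {'id': 2, 'name': 'db1'}]
print(update_host_summaries(hosts, {'web1'}, {1: 99}, job_id=5))  # ['web1']
```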
@ -11,10 +11,6 @@ import copy
import os.path
from urllib.parse import urljoin
import yaml
import configparser
import tempfile
from io import StringIO
from distutils.version import LooseVersion as Version

# Django
from django.conf import settings
@ -60,7 +56,7 @@ from awx.main.models.notifications import (
JobNotificationMixin,
)
from awx.main.models.credential.injectors import _openstack_data
from awx.main.utils import _inventory_updates, region_sorting, get_licenser
from awx.main.utils import _inventory_updates, region_sorting
from awx.main.utils.safe_yaml import sanitize_jinja

@ -829,7 +825,6 @@ class InventorySourceOptions(BaseModel):
('azure_rm', _('Microsoft Azure Resource Manager')),
('vmware', _('VMware vCenter')),
('satellite6', _('Red Hat Satellite 6')),
('cloudforms', _('Red Hat CloudForms')),
('openstack', _('OpenStack')),
('rhv', _('Red Hat Virtualization')),
('tower', _('Ansible Tower')),
@ -1069,11 +1064,6 @@ class InventorySourceOptions(BaseModel):
"""Red Hat Satellite 6 region choices (not implemented)"""
return [('all', 'All')]

@classmethod
def get_cloudforms_region_choices(self):
"""Red Hat CloudForms region choices (not implemented)"""
return [('all', 'All')]

@classmethod
def get_rhv_region_choices(self):
"""No region supprt"""
@ -1602,19 +1592,12 @@ class CustomInventoryScript(CommonModelNameNotUnique, ResourceMixin):
return reverse('api:inventory_script_detail', kwargs={'pk': self.pk}, request=request)


# TODO: move to awx/main/models/inventory/injectors.py
class PluginFileInjector(object):
# if plugin_name is not given, no inventory plugin functionality exists
plugin_name = None  # Ansible core name used to reference plugin
# if initial_version is None, but we have plugin name, injection logic exists,
# but it is vaporware, meaning we do not use it for some reason in Ansible core
initial_version = None  # at what version do we switch to the plugin
ini_env_reference = None  # env var name that points to old ini config file
# base injector should be one of None, "managed", or "template"
# this dictates which logic to borrow from playbook injectors
base_injector = None
# every source should have collection, but these are set here
# so that a source without a collection will have null values
# every source should have collection, these are for the collection name
namespace = None
collection = None
collection_migration = '2.9'  # Starting with this version, we use collections
@ -1630,12 +1613,6 @@ class PluginFileInjector(object):
"""
return '{0}.yml'.format(self.plugin_name)

@property
def script_name(self):
"""Name of the script located in awx/plugins/inventory
"""
return '{0}.py'.format(self.__class__.__name__)

def inventory_as_dict(self, inventory_update, private_data_dir):
"""Default implementation of inventory plugin file contents.
There are some valid cases when all parameters can be obtained from
@ -1644,10 +1621,7 @@ class PluginFileInjector(object):
"""
if self.plugin_name is None:
raise NotImplementedError('At minimum the plugin name is needed for inventory plugin use.')
if self.initial_version is None or Version(self.ansible_version) >= Version(self.collection_migration):
proper_name = f'{self.namespace}.{self.collection}.{self.plugin_name}'
else:
proper_name = self.plugin_name
proper_name = f'{self.namespace}.{self.collection}.{self.plugin_name}'
return {'plugin': proper_name}

def inventory_contents(self, inventory_update, private_data_dir):
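After this hunk, `inventory_as_dict` unconditionally builds the fully-qualified collection plugin name rather than branching on the Ansible version. A toy injector pair showing the resulting naming scheme (a stripped-down sketch, not the full AWX class):

```python
class PluginFileInjector:
    plugin_name = None  # Ansible core name used to reference the plugin
    namespace = None
    collection = None

    def inventory_as_dict(self):
        if self.plugin_name is None:
            raise NotImplementedError('plugin name is required')
        # always namespace.collection.plugin now that script support is gone
        return {'plugin': f'{self.namespace}.{self.collection}.{self.plugin_name}'}

class azure_rm(PluginFileInjector):
    plugin_name = 'azure_rm'
    namespace = 'azure'
    collection = 'azcollection'

print(azure_rm().inventory_as_dict())  # {'plugin': 'azure.azcollection.azure_rm'}
```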
@ -1659,17 +1633,8 @@ class PluginFileInjector(object):
width=1000
)

def should_use_plugin(self):
return bool(
self.plugin_name and self.initial_version and
Version(self.ansible_version) >= Version(self.initial_version)
)

def build_env(self, inventory_update, env, private_data_dir, private_data_files):
if self.should_use_plugin():
injector_env = self.get_plugin_env(inventory_update, private_data_dir, private_data_files)
else:
injector_env = self.get_script_env(inventory_update, private_data_dir, private_data_files)
injector_env = self.get_plugin_env(inventory_update, private_data_dir, private_data_files)
env.update(injector_env)
# Preserves current behavior for Ansible change in default planned for 2.10
env['ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS'] = 'never'
@ -1677,7 +1642,6 @@ class PluginFileInjector(object):

def _get_shared_env(self, inventory_update, private_data_dir, private_data_files):
"""By default, we will apply the standard managed_by_tower injectors
for the script injection
"""
injected_env = {}
credential = inventory_update.get_cloud_credential()
@ -1704,52 +1668,18 @@ class PluginFileInjector(object):

def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)
if self.initial_version is None or Version(self.ansible_version) >= Version(self.collection_migration):
env['ANSIBLE_COLLECTIONS_PATHS'] = settings.AWX_ANSIBLE_COLLECTIONS_PATHS
env['ANSIBLE_COLLECTIONS_PATHS'] = settings.AWX_ANSIBLE_COLLECTIONS_PATHS
return env

def get_script_env(self, inventory_update, private_data_dir, private_data_files):
injected_env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)

# Put in env var reference to private ini data files, if relevant
if self.ini_env_reference:
credential = inventory_update.get_cloud_credential()
cred_data = private_data_files['credentials']
injected_env[self.ini_env_reference] = cred_data[credential]

return injected_env

def build_private_data(self, inventory_update, private_data_dir):
if self.should_use_plugin():
return self.build_plugin_private_data(inventory_update, private_data_dir)
else:
return self.build_script_private_data(inventory_update, private_data_dir)

def build_script_private_data(self, inventory_update, private_data_dir):
return None
return self.build_plugin_private_data(inventory_update, private_data_dir)

def build_plugin_private_data(self, inventory_update, private_data_dir):
return None

@staticmethod
def dump_cp(cp, credential):
"""Dump config parser data and return it as a string.
Helper method intended for use by build_script_private_data
"""
if cp.sections():
f = StringIO()
cp.write(f)
private_data = {'credentials': {}}
private_data['credentials'][credential] = f.getvalue()
return private_data
else:
return None


class azure_rm(PluginFileInjector):
plugin_name = 'azure_rm'
initial_version = '2.8'  # Driven by unsafe group names issue, hostvars, host names
ini_env_reference = 'AZURE_INI_PATH'
base_injector = 'managed'
namespace = 'azure'
collection = 'azcollection'
@ -1860,32 +1790,9 @@ class azure_rm(PluginFileInjector):
ret['exclude_host_filters'].append("location not in {}".format(repr(python_regions)))
return ret

def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
section = 'azure'
cp.add_section(section)
cp.set(section, 'include_powerstate', 'yes')
cp.set(section, 'group_by_resource_group', 'yes')
cp.set(section, 'group_by_location', 'yes')
cp.set(section, 'group_by_tag', 'yes')

if inventory_update.source_regions and 'all' not in inventory_update.source_regions:
cp.set(
section, 'locations',
','.join([x.strip() for x in inventory_update.source_regions.split(',')])
)

azure_rm_opts = dict(inventory_update.source_vars_dict.items())
for k, v in azure_rm_opts.items():
cp.set(section, k, str(v))
return self.dump_cp(cp, inventory_update.get_cloud_credential())


class ec2(PluginFileInjector):
plugin_name = 'aws_ec2'
# blocked by https://github.com/ansible/ansible/issues/54059
initial_version = '2.9'  # Driven by unsafe group names issue, parent_group templating, hostvars
ini_env_reference = 'EC2_INI_PATH'
base_injector = 'managed'
namespace = 'amazon'
collection = 'aws'
@ -2003,7 +1910,7 @@ class ec2(PluginFileInjector):
# Compatibility content
legacy_regex = {
True: r"[^A-Za-z0-9\_]",
False: r"[^A-Za-z0-9\_\-]"  # do not replace dash, dash is whitelisted
False: r"[^A-Za-z0-9\_\-]"  # do not replace dash, dash is allowed
}[replace_dash]
list_replacer = 'map("regex_replace", "{rx}", "_") | list'.format(rx=legacy_regex)
# this option, a plugin option, will allow dashes, but not unicode
@ -2036,7 +1943,7 @@ class ec2(PluginFileInjector):
ret['boto_profile'] = source_vars['boto_profile']

elif not replace_dash:
# Using the plugin, but still want dashes whitelisted
# Using the plugin, but still want dashes allowed
ret['use_contrib_script_compatible_sanitization'] = True

if source_vars.get('nested_groups') is False:
@ -2108,46 +2015,9 @@ class ec2(PluginFileInjector):

return ret

def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
# Build custom ec2.ini for ec2 inventory script to use.
section = 'ec2'
cp.add_section(section)
ec2_opts = dict(inventory_update.source_vars_dict.items())
regions = inventory_update.source_regions or 'all'
regions = ','.join([x.strip() for x in regions.split(',')])
regions_blacklist = ','.join(settings.EC2_REGIONS_BLACKLIST)
ec2_opts['regions'] = regions
ec2_opts.setdefault('regions_exclude', regions_blacklist)
ec2_opts.setdefault('destination_variable', 'public_dns_name')
ec2_opts.setdefault('vpc_destination_variable', 'ip_address')
ec2_opts.setdefault('route53', 'False')
ec2_opts.setdefault('all_instances', 'True')
ec2_opts.setdefault('all_rds_instances', 'False')
ec2_opts.setdefault('include_rds_clusters', 'False')
ec2_opts.setdefault('rds', 'False')
ec2_opts.setdefault('nested_groups', 'True')
ec2_opts.setdefault('elasticache', 'False')
ec2_opts.setdefault('stack_filters', 'False')
if inventory_update.instance_filters:
ec2_opts.setdefault('instance_filters', inventory_update.instance_filters)
group_by = [x.strip().lower() for x in inventory_update.group_by.split(',') if x.strip()]
for choice in inventory_update.get_ec2_group_by_choices():
value = bool((group_by and choice[0] in group_by) or (not group_by and choice[0] != 'instance_id'))
ec2_opts.setdefault('group_by_%s' % choice[0], str(value))
if 'cache_path' not in ec2_opts:
cache_path = tempfile.mkdtemp(prefix='ec2_cache', dir=private_data_dir)
ec2_opts['cache_path'] = cache_path
ec2_opts.setdefault('cache_max_age', '300')
for k, v in ec2_opts.items():
cp.set(section, k, str(v))
return self.dump_cp(cp, inventory_update.get_cloud_credential())


class gce(PluginFileInjector):
plugin_name = 'gcp_compute'
initial_version = '2.8'  # Driven by unsafe group names issue, hostvars
ini_env_reference = 'GCE_INI_PATH'
base_injector = 'managed'
namespace = 'google'
collection = 'cloud'
@ -2158,17 +2028,6 @@ class gce(PluginFileInjector):
ret['ANSIBLE_JINJA2_NATIVE'] = str(True)
return ret

def get_script_env(self, inventory_update, private_data_dir, private_data_files):
env = super(gce, self).get_script_env(inventory_update, private_data_dir, private_data_files)
cred = inventory_update.get_cloud_credential()
# these environment keys are unique to the script operation, and are not
# concepts in the modern inventory plugin or gce Ansible module
# email and project are redundant with the creds file
env['GCE_EMAIL'] = cred.get_input('username', default='')
env['GCE_PROJECT'] = cred.get_input('project', default='')
env['GCE_ZONE'] = inventory_update.source_regions if inventory_update.source_regions != 'all' else ''  # noqa
return env

def _compat_compose_vars(self):
# missing: gce_image, gce_uuid
# https://github.com/ansible/ansible/issues/51884
@ -2241,28 +2100,13 @@ class gce(PluginFileInjector):
ret['zones'] = inventory_update.source_regions.split(',')
return ret

def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
# by default, the GCE inventory source caches results on disk for
# 5 minutes; disable this behavior
cp.add_section('cache')
cp.set('cache', 'cache_max_age', '0')
return self.dump_cp(cp, inventory_update.get_cloud_credential())


class vmware(PluginFileInjector):
plugin_name = 'vmware_vm_inventory'
initial_version = '2.9'
ini_env_reference = 'VMWARE_INI_PATH'
base_injector = 'managed'
namespace = 'community'
collection = 'vmware'

@property
def script_name(self):
return 'vmware_inventory.py'  # exception

def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(vmware, self).inventory_as_dict(inventory_update, private_data_dir)
ret['strict'] = False
|
||||
|
||||
return ret
|
||||
|
||||
def build_script_private_data(self, inventory_update, private_data_dir):
|
||||
cp = configparser.RawConfigParser()
|
||||
credential = inventory_update.get_cloud_credential()
|
||||
|
||||
# Allow custom options to vmware inventory script.
|
||||
section = 'vmware'
|
||||
cp.add_section(section)
|
||||
cp.set('vmware', 'cache_max_age', '0')
|
||||
cp.set('vmware', 'validate_certs', str(settings.VMWARE_VALIDATE_CERTS))
|
||||
cp.set('vmware', 'username', credential.get_input('username', default=''))
|
||||
cp.set('vmware', 'password', credential.get_input('password', default=''))
|
||||
cp.set('vmware', 'server', credential.get_input('host', default=''))
|
||||
|
||||
vmware_opts = dict(inventory_update.source_vars_dict.items())
|
||||
if inventory_update.instance_filters:
|
||||
vmware_opts.setdefault('host_filters', inventory_update.instance_filters)
|
||||
if inventory_update.group_by:
|
||||
vmware_opts.setdefault('groupby_patterns', inventory_update.group_by)
|
||||
|
||||
for k, v in vmware_opts.items():
|
||||
cp.set(section, k, str(v))
|
||||
|
||||
return self.dump_cp(cp, credential)
|
||||
|
||||
|
||||
class openstack(PluginFileInjector):
|
||||
ini_env_reference = 'OS_CLIENT_CONFIG_FILE'
|
||||
plugin_name = 'openstack'
|
||||
# minimum version of 2.7.8 may be theoretically possible
|
||||
initial_version = '2.8' # Driven by consistency with other sources
|
||||
namespace = 'openstack'
|
||||
collection = 'cloud'
|
||||
|
||||
@property
|
||||
def script_name(self):
|
||||
return 'openstack_inventory.py' # exception
|
||||
|
||||
def _get_clouds_dict(self, inventory_update, cred, private_data_dir, mk_cache=True):
|
||||
def _get_clouds_dict(self, inventory_update, cred, private_data_dir):
|
||||
openstack_data = _openstack_data(cred)
|
||||
|
||||
openstack_data['clouds']['devstack']['private'] = inventory_update.source_vars_dict.get('private', True)
|
||||
if mk_cache:
|
||||
# Retrieve cache path from inventory update vars if available,
|
||||
# otherwise create a temporary cache path only for this update.
|
||||
cache = inventory_update.source_vars_dict.get('cache', {})
|
||||
if not isinstance(cache, dict):
|
||||
cache = {}
|
||||
if not cache.get('path', ''):
|
||||
cache_path = tempfile.mkdtemp(prefix='openstack_cache', dir=private_data_dir)
|
||||
cache['path'] = cache_path
|
||||
openstack_data['cache'] = cache
|
||||
ansible_variables = {
|
||||
'use_hostnames': True,
|
||||
'expand_hostvars': False,
|
||||
@ -2430,27 +2233,16 @@ class openstack(PluginFileInjector):
|
||||
openstack_data['ansible'] = ansible_variables
|
||||
return openstack_data

def build_script_private_data(self, inventory_update, private_data_dir, mk_cache=True):
def build_plugin_private_data(self, inventory_update, private_data_dir):
credential = inventory_update.get_cloud_credential()
private_data = {'credentials': {}}

openstack_data = self._get_clouds_dict(inventory_update, credential, private_data_dir, mk_cache=mk_cache)
openstack_data = self._get_clouds_dict(inventory_update, credential, private_data_dir)
private_data['credentials'][credential] = yaml.safe_dump(
openstack_data, default_flow_style=False, allow_unicode=True
)
return private_data

def build_plugin_private_data(self, inventory_update, private_data_dir):
# Credentials can be passed in the same way as the script did
# but do not create the tmp cache file
return self.build_script_private_data(inventory_update, private_data_dir, mk_cache=False)

def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
env = super(openstack, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
script_env = self.get_script_env(inventory_update, private_data_dir, private_data_files)
env.update(script_env)
return env

def inventory_as_dict(self, inventory_update, private_data_dir):
def use_host_name_for_name(a_bool_maybe):
if not isinstance(a_bool_maybe, bool):
@@ -2485,6 +2277,13 @@ class openstack(PluginFileInjector):
ret['inventory_hostname'] = use_host_name_for_name(source_vars['use_hostnames'])
return ret

def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
env = super(openstack, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
credential = inventory_update.get_cloud_credential()
cred_data = private_data_files['credentials']
env['OS_CLIENT_CONFIG_FILE'] = cred_data[credential]
return env


class rhv(PluginFileInjector):
"""ovirt uses the custom credential templating, and that is all
@@ -2495,10 +2294,6 @@ class rhv(PluginFileInjector):
namespace = 'ovirt'
collection = 'ovirt'

@property
def script_name(self):
return 'ovirt4.py' # exception

def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(rhv, self).inventory_as_dict(inventory_update, private_data_dir)
ret['ovirt_insecure'] = False # Default changed from script
@@ -2521,68 +2316,9 @@ class rhv(PluginFileInjector):

class satellite6(PluginFileInjector):
plugin_name = 'foreman'
ini_env_reference = 'FOREMAN_INI_PATH'
initial_version = '2.9'
# No base injector, because this does not work in playbooks. Bug??
namespace = 'theforeman'
collection = 'foreman'

@property
def script_name(self):
return 'foreman.py' # exception

def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
credential = inventory_update.get_cloud_credential()

section = 'foreman'
cp.add_section(section)

group_patterns = '[]'
group_prefix = 'foreman_'
want_hostcollections = 'False'
want_ansible_ssh_host = 'False'
rich_params = 'False'
want_facts = 'True'
foreman_opts = dict(inventory_update.source_vars_dict.items())
foreman_opts.setdefault('ssl_verify', 'False')
for k, v in foreman_opts.items():
if k == 'satellite6_group_patterns' and isinstance(v, str):
group_patterns = v
elif k == 'satellite6_group_prefix' and isinstance(v, str):
group_prefix = v
elif k == 'satellite6_want_hostcollections' and isinstance(v, bool):
want_hostcollections = v
elif k == 'satellite6_want_ansible_ssh_host' and isinstance(v, bool):
want_ansible_ssh_host = v
elif k == 'satellite6_rich_params' and isinstance(v, bool):
rich_params = v
elif k == 'satellite6_want_facts' and isinstance(v, bool):
want_facts = v
else:
cp.set(section, k, str(v))

if credential:
cp.set(section, 'url', credential.get_input('host', default=''))
cp.set(section, 'user', credential.get_input('username', default=''))
cp.set(section, 'password', credential.get_input('password', default=''))

section = 'ansible'
cp.add_section(section)
cp.set(section, 'group_patterns', group_patterns)
cp.set(section, 'want_facts', str(want_facts))
cp.set(section, 'want_hostcollections', str(want_hostcollections))
cp.set(section, 'group_prefix', group_prefix)
cp.set(section, 'want_ansible_ssh_host', str(want_ansible_ssh_host))
cp.set(section, 'rich_params', str(rich_params))

section = 'cache'
cp.add_section(section)
cp.set(section, 'path', '/tmp')
cp.set(section, 'max_age', '0')

return self.dump_cp(cp, credential)
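The foreman INI assembly above follows a standard `configparser` pattern: add sections, `set` keys, then serialize. A small self-contained sketch of the same pattern (section names match the snippet; the helper name and option set are illustrative, not the AWX interface):

```python
import configparser
import io


def dump_ini(options):
    """Build a foreman-style INI string from a dict of options."""
    cp = configparser.RawConfigParser()
    cp.add_section('foreman')
    for key, value in options.items():
        # RawConfigParser stores strings only, so coerce each value
        cp.set('foreman', key, str(value))
    cp.add_section('cache')
    cp.set('cache', 'path', '/tmp')
    cp.set('cache', 'max_age', '0')
    buf = io.StringIO()
    cp.write(buf)
    return buf.getvalue()
```

`RawConfigParser` (rather than `ConfigParser`) is the right choice here because it performs no `%`-interpolation on values such as passwords.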

def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
# this assumes that this is merged
# https://github.com/ansible/ansible/pull/52693
@@ -2596,6 +2332,7 @@ class satellite6(PluginFileInjector):

def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(satellite6, self).inventory_as_dict(inventory_update, private_data_dir)
ret['validate_certs'] = False

group_patterns = '[]'
group_prefix = 'foreman_'
@@ -2615,6 +2352,10 @@ class satellite6(PluginFileInjector):
want_ansible_ssh_host = v
elif k == 'satellite6_want_facts' and isinstance(v, bool):
want_facts = v
# add backwards support for ssl_verify
# plugin uses new option, validate_certs, instead
elif k == 'ssl_verify' and isinstance(v, bool):
ret['validate_certs'] = v
else:
ret[k] = str(v)

@@ -2698,56 +2439,12 @@ class satellite6(PluginFileInjector):
return ret


class cloudforms(PluginFileInjector):
# plugin_name = 'FIXME' # contribute inventory plugin to Ansible
ini_env_reference = 'CLOUDFORMS_INI_PATH'
# Also no base_injector because this does not work in playbooks
# namespace = '' # does not have a collection
# collection = ''

def build_script_private_data(self, inventory_update, private_data_dir):
cp = configparser.RawConfigParser()
credential = inventory_update.get_cloud_credential()

section = 'cloudforms'
cp.add_section(section)

if credential:
cp.set(section, 'url', credential.get_input('host', default=''))
cp.set(section, 'username', credential.get_input('username', default=''))
cp.set(section, 'password', credential.get_input('password', default=''))
cp.set(section, 'ssl_verify', "false")

cloudforms_opts = dict(inventory_update.source_vars_dict.items())
for opt in ['version', 'purge_actions', 'clean_group_keys', 'nest_tags', 'suffix', 'prefer_ipv4']:
if opt in cloudforms_opts:
cp.set(section, opt, str(cloudforms_opts[opt]))

section = 'cache'
cp.add_section(section)
cp.set(section, 'max_age', "0")
cache_path = tempfile.mkdtemp(
prefix='cloudforms_cache',
dir=private_data_dir
)
cp.set(section, 'path', cache_path)

return self.dump_cp(cp, credential)


class tower(PluginFileInjector):
plugin_name = 'tower'
base_injector = 'template'
initial_version = '2.8' # Driven by "include_metadata" hostvars
namespace = 'awx'
collection = 'awx'

def get_script_env(self, inventory_update, private_data_dir, private_data_files):
env = super(tower, self).get_script_env(inventory_update, private_data_dir, private_data_files)
env['TOWER_INVENTORY'] = inventory_update.instance_filters
env['TOWER_LICENSE_TYPE'] = get_licenser().validate().get('license_type', 'unlicensed')
return env

def inventory_as_dict(self, inventory_update, private_data_dir):
ret = super(tower, self).inventory_as_dict(inventory_update, private_data_dir)
# Credentials injected as env vars, same as script

@@ -566,7 +566,6 @@ class WebhookMixin(models.Model):

def update_webhook_status(self, status):
if not self.webhook_credential:
logger.debug("No credential configured to post back webhook status, skipping.")
return

status_api = self.extra_vars_dict.get('tower_webhook_status_api')

@@ -262,25 +262,25 @@ class JobNotificationMixin(object):
'running': 'started',
'failed': 'error'}
# Tree of fields that can be safely referenced in a notification message
JOB_FIELDS_WHITELIST = ['id', 'type', 'url', 'created', 'modified', 'name', 'description', 'job_type', 'playbook',
'forks', 'limit', 'verbosity', 'job_tags', 'force_handlers', 'skip_tags', 'start_at_task',
'timeout', 'use_fact_cache', 'launch_type', 'status', 'failed', 'started', 'finished',
'elapsed', 'job_explanation', 'execution_node', 'controller_node', 'allow_simultaneous',
'scm_revision', 'diff_mode', 'job_slice_number', 'job_slice_count', 'custom_virtualenv',
'approval_status', 'approval_node_name', 'workflow_url', 'scm_branch',
{'host_status_counts': ['skipped', 'ok', 'changed', 'failed', 'failures', 'dark',
'processed', 'rescued', 'ignored']},
{'summary_fields': [{'inventory': ['id', 'name', 'description', 'has_active_failures',
'total_hosts', 'hosts_with_active_failures', 'total_groups',
'has_inventory_sources',
'total_inventory_sources', 'inventory_sources_with_failures',
'organization_id', 'kind']},
{'project': ['id', 'name', 'description', 'status', 'scm_type']},
{'job_template': ['id', 'name', 'description']},
{'unified_job_template': ['id', 'name', 'description', 'unified_job_type']},
{'instance_group': ['name', 'id']},
{'created_by': ['id', 'username', 'first_name', 'last_name']},
{'labels': ['count', 'results']}]}]
JOB_FIELDS_ALLOWED_LIST = ['id', 'type', 'url', 'created', 'modified', 'name', 'description', 'job_type', 'playbook',
'forks', 'limit', 'verbosity', 'job_tags', 'force_handlers', 'skip_tags', 'start_at_task',
'timeout', 'use_fact_cache', 'launch_type', 'status', 'failed', 'started', 'finished',
'elapsed', 'job_explanation', 'execution_node', 'controller_node', 'allow_simultaneous',
'scm_revision', 'diff_mode', 'job_slice_number', 'job_slice_count', 'custom_virtualenv',
'approval_status', 'approval_node_name', 'workflow_url', 'scm_branch', 'artifacts',
{'host_status_counts': ['skipped', 'ok', 'changed', 'failed', 'failures', 'dark',
'processed', 'rescued', 'ignored']},
{'summary_fields': [{'inventory': ['id', 'name', 'description', 'has_active_failures',
'total_hosts', 'hosts_with_active_failures', 'total_groups',
'has_inventory_sources',
'total_inventory_sources', 'inventory_sources_with_failures',
'organization_id', 'kind']},
{'project': ['id', 'name', 'description', 'status', 'scm_type']},
{'job_template': ['id', 'name', 'description']},
{'unified_job_template': ['id', 'name', 'description', 'unified_job_type']},
{'instance_group': ['name', 'id']},
{'created_by': ['id', 'username', 'first_name', 'last_name']},
{'labels': ['count', 'results']}]}]

@classmethod
def context_stub(cls):
@@ -288,6 +288,7 @@ class JobNotificationMixin(object):
Context has the same structure as the context that will actually be used to render
a notification message."""
context = {'job': {'allow_simultaneous': False,
'artifacts': {},
'controller_node': 'foo_controller',
'created': datetime.datetime(2018, 11, 13, 6, 4, 0, 0, tzinfo=datetime.timezone.utc),
'custom_virtualenv': 'my_venv',
@@ -377,8 +378,8 @@ class JobNotificationMixin(object):

def context(self, serialized_job):
"""Returns a dictionary that can be used for rendering notification messages.
The context will contain whitelisted content retrieved from a serialized job object
(see JobNotificationMixin.JOB_FIELDS_WHITELIST), the job's friendly name,
The context will contain allowed content retrieved from a serialized job object
(see JobNotificationMixin.JOB_FIELDS_ALLOWED_LIST), the job's friendly name,
and a url to the job run."""
job_context = {'host_status_counts': {}}
summary = None
@@ -395,22 +396,22 @@ class JobNotificationMixin(object):
'job_metadata': json.dumps(self.notification_data(), indent=4)
}

def build_context(node, fields, whitelisted_fields):
for safe_field in whitelisted_fields:
def build_context(node, fields, allowed_fields):
for safe_field in allowed_fields:
if type(safe_field) is dict:
field, whitelist_subnode = safe_field.copy().popitem()
field, allowed_subnode = safe_field.copy().popitem()
# ensure content present in job serialization
if field not in fields:
continue
subnode = fields[field]
node[field] = {}
build_context(node[field], subnode, whitelist_subnode)
build_context(node[field], subnode, allowed_subnode)
else:
# ensure content present in job serialization
if safe_field not in fields:
continue
node[safe_field] = fields[safe_field]
build_context(context['job'], serialized_job, self.JOB_FIELDS_WHITELIST)
build_context(context['job'], serialized_job, self.JOB_FIELDS_ALLOWED_LIST)

return context
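The recursive `build_context` shown above walks an allow-list that mixes plain field names with `{field: sublist}` dicts, copying only permitted keys from the serialized job. Stripped of the surrounding class, the same filter can be sketched as:

```python
def build_context(node, fields, allowed_fields):
    """Copy only allow-listed keys from fields into node, recursing into dict entries."""
    for safe_field in allowed_fields:
        if isinstance(safe_field, dict):
            # a {field: sub_allow_list} entry describes a nested structure
            field, allowed_subnode = safe_field.copy().popitem()
            if field not in fields:
                continue  # ensure content present in job serialization
            node[field] = {}
            build_context(node[field], fields[field], allowed_subnode)
        else:
            if safe_field not in fields:
                continue
            node[safe_field] = fields[safe_field]
```

Anything not named in the allow-list is simply never copied, so sensitive serializer fields cannot leak into a rendered notification template.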


@@ -194,6 +194,11 @@ class ProjectOptions(models.Model):
if not check_if_exists or os.path.exists(smart_str(proj_path)):
return proj_path

def get_cache_path(self):
local_path = os.path.basename(self.local_path)
if local_path:
return os.path.join(settings.PROJECTS_ROOT, '.__awx_cache', local_path)

@property
def playbooks(self):
results = []
@@ -418,6 +423,10 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
return True
return False

@property
def cache_id(self):
return str(self.last_job_id)

@property
def notification_templates(self):
base_notification_templates = NotificationTemplate.objects
@@ -455,11 +464,12 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
)

def delete(self, *args, **kwargs):
path_to_delete = self.get_project_path(check_if_exists=False)
paths_to_delete = (self.get_project_path(check_if_exists=False), self.get_cache_path())
r = super(Project, self).delete(*args, **kwargs)
if self.scm_type and path_to_delete: # non-manual, concrete path
from awx.main.tasks import delete_project_files
delete_project_files.delay(path_to_delete)
for path_to_delete in paths_to_delete:
if self.scm_type and path_to_delete: # non-manual, concrete path
from awx.main.tasks import delete_project_files
delete_project_files.delay(path_to_delete)
return r


@@ -554,6 +564,19 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
def result_stdout_raw(self):
return self._result_stdout_raw(redact_sensitive=True)

@property
def branch_override(self):
"""Whether a branch other than the project default is used."""
if not self.project:
return True
return bool(self.scm_branch and self.scm_branch != self.project.scm_branch)

@property
def cache_id(self):
if self.branch_override or self.job_type == 'check' or (not self.project):
return str(self.id)
return self.project.cache_id

def result_stdout_raw_limited(self, start_line=0, end_line=None, redact_sensitive=True):
return self._result_stdout_raw_limited(start_line, end_line, redact_sensitive=redact_sensitive)

@@ -597,10 +620,7 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
def save(self, *args, **kwargs):
added_update_fields = []
if not self.job_tags:
job_tags = ['update_{}'.format(self.scm_type)]
if self.job_type == 'run':
job_tags.append('install_roles')
job_tags.append('install_collections')
job_tags = ['update_{}'.format(self.scm_type), 'install_roles', 'install_collections']
self.job_tags = ','.join(job_tags)
added_update_fields.append('job_tags')
if self.scm_delete_on_update and 'delete' not in self.job_tags and self.job_type == 'check':

@@ -962,6 +962,10 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
def event_class(self):
raise NotImplementedError()

@property
def job_type_name(self):
return self.get_real_instance_class()._meta.verbose_name.replace(' ', '_')

@property
def result_stdout_text(self):
related = UnifiedJobDeprecatedStdout.objects.get(pk=self.pk)
@@ -1221,7 +1225,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique

def websocket_emit_data(self):
''' Return extra data that should be included when submitting data to the browser over the websocket connection '''
websocket_data = dict(type=self.get_real_instance_class()._meta.verbose_name.replace(' ', '_'))
websocket_data = dict(type=self.job_type_name)
if self.spawned_by_workflow:
websocket_data.update(dict(workflow_job_id=self.workflow_job_id,
workflow_node_id=self.workflow_node_id))
@@ -1362,7 +1366,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
running = self.celery_task_id in ControlDispatcher(
'dispatcher', self.controller_node or self.execution_node
).running(timeout=timeout)
except socket.timeout:
except (socket.timeout, RuntimeError):
logger.error('could not reach dispatcher on {} within {}s'.format(
self.execution_node, timeout
))

@@ -139,7 +139,7 @@ class WorkflowJobTemplateNode(WorkflowNodeBase):
'always_nodes', 'credentials', 'inventory', 'extra_data', 'survey_passwords',
'char_prompts', 'all_parents_must_converge', 'identifier'
]
REENCRYPTION_BLACKLIST_AT_COPY = ['extra_data', 'survey_passwords']
REENCRYPTION_BLOCKLIST_AT_COPY = ['extra_data', 'survey_passwords']

workflow_job_template = models.ForeignKey(
'WorkflowJobTemplate',

@@ -94,8 +94,8 @@ class GrafanaBackend(AWXBaseEmailBackend, CustomNotificationBase):
headers=grafana_headers,
verify=(not self.grafana_no_verify_ssl))
if r.status_code >= 400:
logger.error(smart_text(_("Error sending notification grafana: {}").format(r.text)))
logger.error(smart_text(_("Error sending notification grafana: {}").format(r.status_code)))
if not self.fail_silently:
raise Exception(smart_text(_("Error sending notification grafana: {}").format(r.text)))
raise Exception(smart_text(_("Error sending notification grafana: {}").format(r.status_code)))
sent_messages += 1
return sent_messages

@@ -46,8 +46,8 @@ class MattermostBackend(AWXBaseEmailBackend, CustomNotificationBase):
r = requests.post("{}".format(m.recipients()[0]),
json=payload, verify=(not self.mattermost_no_verify_ssl))
if r.status_code >= 400:
logger.error(smart_text(_("Error sending notification mattermost: {}").format(r.text)))
logger.error(smart_text(_("Error sending notification mattermost: {}").format(r.status_code)))
if not self.fail_silently:
raise Exception(smart_text(_("Error sending notification mattermost: {}").format(r.text)))
raise Exception(smart_text(_("Error sending notification mattermost: {}").format(r.status_code)))
sent_messages += 1
return sent_messages

@@ -46,9 +46,9 @@ class RocketChatBackend(AWXBaseEmailBackend, CustomNotificationBase):

if r.status_code >= 400:
logger.error(smart_text(
_("Error sending notification rocket.chat: {}").format(r.text)))
_("Error sending notification rocket.chat: {}").format(r.status_code)))
if not self.fail_silently:
raise Exception(smart_text(
_("Error sending notification rocket.chat: {}").format(r.text)))
_("Error sending notification rocket.chat: {}").format(r.status_code)))
sent_messages += 1
return sent_messages

@@ -72,8 +72,8 @@ class WebhookBackend(AWXBaseEmailBackend, CustomNotificationBase):
headers=self.headers,
verify=(not self.disable_ssl_verification))
if r.status_code >= 400:
logger.error(smart_text(_("Error sending notification webhook: {}").format(r.text)))
logger.error(smart_text(_("Error sending notification webhook: {}").format(r.status_code)))
if not self.fail_silently:
raise Exception(smart_text(_("Error sending notification webhook: {}").format(r.text)))
raise Exception(smart_text(_("Error sending notification webhook: {}").format(r.status_code)))
sent_messages += 1
return sent_messages

@@ -50,7 +50,7 @@ import ansible_runner

# AWX
from awx import __version__ as awx_application_version
from awx.main.constants import CLOUD_PROVIDERS, PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV, GALAXY_SERVER_FIELDS
from awx.main.constants import PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV, GALAXY_SERVER_FIELDS
from awx.main.access import access_registry
from awx.main.redact import UriCleaner
from awx.main.models import (
@@ -1013,8 +1013,6 @@ class BaseTask(object):
'resource_profiling_memory_poll_interval': mem_poll_interval,
'resource_profiling_pid_poll_interval': pid_poll_interval,
'resource_profiling_results_dir': results_dir})
else:
logger.debug('Resource profiling not enabled for task')

return resource_profiling_params

@@ -1804,7 +1802,7 @@ class RunJob(BaseTask):

# By default, all extra vars disallow Jinja2 template usage for
# security reasons; top level key-values defined in JT.extra_vars, however,
# are whitelisted as "safe" (because they can only be set by users with
# are allowed as "safe" (because they can only be set by users with
# higher levels of privilege - those that have the ability create and
# edit Job Templates)
safe_dict = {}
@@ -1867,44 +1865,31 @@ class RunJob(BaseTask):
project_path = job.project.get_project_path(check_if_exists=False)
job_revision = job.project.scm_revision
sync_needs = []
all_sync_needs = ['update_{}'.format(job.project.scm_type), 'install_roles', 'install_collections']
source_update_tag = 'update_{}'.format(job.project.scm_type)
branch_override = bool(job.scm_branch and job.scm_branch != job.project.scm_branch)
if not job.project.scm_type:
pass # manual projects are not synced, user has responsibility for that
elif not os.path.exists(project_path):
logger.debug('Performing fresh clone of {} on this instance.'.format(job.project))
sync_needs = all_sync_needs
elif not job.project.scm_revision:
logger.debug('Revision not known for {}, will sync with remote'.format(job.project))
sync_needs = all_sync_needs
elif job.project.scm_type == 'git':
sync_needs.append(source_update_tag)
elif job.project.scm_type == 'git' and job.project.scm_revision and (not branch_override):
git_repo = git.Repo(project_path)
try:
desired_revision = job.project.scm_revision
if job.scm_branch and job.scm_branch != job.project.scm_branch:
desired_revision = job.scm_branch # could be commit or not, but will try as commit
current_revision = git_repo.head.commit.hexsha
if desired_revision == current_revision:
job_revision = desired_revision
if job_revision == git_repo.head.commit.hexsha:
logger.debug('Skipping project sync for {} because commit is locally available'.format(job.log_format))
else:
sync_needs = all_sync_needs
sync_needs.append(source_update_tag)
except (ValueError, BadGitName):
logger.debug('Needed commit for {} not in local source tree, will sync with remote'.format(job.log_format))
sync_needs = all_sync_needs
sync_needs.append(source_update_tag)
else:
sync_needs = all_sync_needs
# Galaxy requirements are not supported for manual projects
if not sync_needs and job.project.scm_type:
# see if we need a sync because of presence of roles
galaxy_req_path = os.path.join(project_path, 'roles', 'requirements.yml')
if os.path.exists(galaxy_req_path):
logger.debug('Running project sync for {} because of galaxy role requirements.'.format(job.log_format))
sync_needs.append('install_roles')
logger.debug('Project not available locally, {} will sync with remote'.format(job.log_format))
sync_needs.append(source_update_tag)

galaxy_collections_req_path = os.path.join(project_path, 'collections', 'requirements.yml')
if os.path.exists(galaxy_collections_req_path):
logger.debug('Running project sync for {} because of galaxy collections requirements.'.format(job.log_format))
sync_needs.append('install_collections')
has_cache = os.path.exists(os.path.join(job.project.get_cache_path(), job.project.cache_id))
# Galaxy requirements are not supported for manual projects
if job.project.scm_type and ((not has_cache) or branch_override):
sync_needs.extend(['install_roles', 'install_collections'])
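The new sync logic in the hunk above decides two independent things: whether a source update is needed (project missing locally, revision unknown, or desired commit not in the local tree) and whether roles/collections must be installed (cache missing, or a branch override invalidates it). A condensed, illustrative restatement of that decision (this is a sketch of the control flow, not the exact AWX code, which also inspects the git repository directly):

```python
def compute_sync_needs(scm_type, project_exists, revision_available, has_cache, branch_override):
    """Return the job_tags a pre-job project sync would need to run with."""
    needs = []
    if not scm_type:
        return needs  # manual projects are never synced
    if not (project_exists and revision_available):
        # fresh clone, unknown revision, or commit not present locally
        needs.append('update_{}'.format(scm_type))
    if (not has_cache) or branch_override:
        # no cached roles/collections for this revision, so install them
        needs.extend(['install_roles', 'install_collections'])
    return needs
```

When the returned list is empty, the job can skip spawning a project update entirely.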

if sync_needs:
pu_ig = job.instance_group
@@ -1922,7 +1907,7 @@ class RunJob(BaseTask):
execution_node=pu_en,
celery_task_id=job.celery_task_id
)
if job.scm_branch and job.scm_branch != job.project.scm_branch:
if branch_override:
sync_metafields['scm_branch'] = job.scm_branch
if 'update_' not in sync_metafields['job_tags']:
sync_metafields['scm_revision'] = job_revision
@@ -1954,10 +1939,7 @@ class RunJob(BaseTask):
if job_revision:
job = self.update_model(job.pk, scm_revision=job_revision)
# Project update does not copy the folder, so copy here
RunProjectUpdate.make_local_copy(
project_path, os.path.join(private_data_dir, 'project'),
job.project.scm_type, job_revision
)
RunProjectUpdate.make_local_copy(job.project, private_data_dir, scm_revision=job_revision)

if job.inventory.kind == 'smart':
# cache smart inventory memberships so that the host_filter query is not
@@ -1997,10 +1979,7 @@ class RunProjectUpdate(BaseTask):

@property
def proot_show_paths(self):
show_paths = [settings.PROJECTS_ROOT]
if self.job_private_data_dir:
show_paths.append(self.job_private_data_dir)
return show_paths
return [settings.PROJECTS_ROOT]

def __init__(self, *args, job_private_data_dir=None, **kwargs):
super(RunProjectUpdate, self).__init__(*args, **kwargs)
@@ -2034,12 +2013,6 @@ class RunProjectUpdate(BaseTask):
credential = project_update.credential
if credential.has_input('ssh_key_data'):
private_data['credentials'][credential] = credential.get_input('ssh_key_data', default='')

# Create dir where collections will live for the job run
if project_update.job_type != 'check' and getattr(self, 'job_private_data_dir'):
for folder_name in ('requirements_collections', 'requirements_roles'):
folder_path = os.path.join(self.job_private_data_dir, folder_name)
os.mkdir(folder_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
return private_data

def build_passwords(self, project_update, runtime_passwords):
@@ -2167,13 +2140,17 @@ class RunProjectUpdate(BaseTask):
extra_vars.update(extra_vars_new)

scm_branch = project_update.scm_branch
branch_override = bool(scm_branch and project_update.scm_branch != project_update.project.scm_branch)
if project_update.job_type == 'run' and (not branch_override):
scm_branch = project_update.project.scm_revision
if project_update.job_type == 'run' and (not project_update.branch_override):
if project_update.project.scm_revision:
scm_branch = project_update.project.scm_revision
elif not scm_branch:
raise RuntimeError('Could not determine a revision to run from project.')
elif not scm_branch:
scm_branch = {'hg': 'tip'}.get(project_update.scm_type, 'HEAD')
extra_vars.update({
'project_path': project_update.get_project_path(check_if_exists=False),
'projects_root': settings.PROJECTS_ROOT.rstrip('/'),
'local_path': os.path.basename(project_update.project.local_path),
'project_path': project_update.get_project_path(check_if_exists=False), # deprecated
'insights_url': settings.INSIGHTS_URL_BASE,
'awx_license_type': get_license(show_key=False).get('license_type', 'UNLICENSED'),
'awx_version': get_awx_version(),
@@ -2183,9 +2160,6 @@ class RunProjectUpdate(BaseTask):
'roles_enabled': settings.AWX_ROLES_ENABLED,
'collections_enabled': settings.AWX_COLLECTIONS_ENABLED,
})
if project_update.job_type != 'check' and self.job_private_data_dir:
extra_vars['collections_destination'] = os.path.join(self.job_private_data_dir, 'requirements_collections')
extra_vars['roles_destination'] = os.path.join(self.job_private_data_dir, 'requirements_roles')
# apply custom refspec from user for PR refs and the like
if project_update.scm_refspec:
extra_vars['scm_refspec'] = project_update.scm_refspec
@@ -2280,7 +2254,11 @@ class RunProjectUpdate(BaseTask):
def acquire_lock(self, instance, blocking=True):
lock_path = instance.get_lock_file()
if lock_path is None:
raise RuntimeError(u'Invalid lock file path')
# If from migration or someone blanked local_path for any other reason, recoverable by save
instance.save()
lock_path = instance.get_lock_file()
if lock_path is None:
raise RuntimeError(u'Invalid lock file path')

try:
self.lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT)
@@ -2317,8 +2295,7 @@ class RunProjectUpdate(BaseTask):
os.mkdir(settings.PROJECTS_ROOT)
self.acquire_lock(instance)
self.original_branch = None
if (instance.scm_type == 'git' and instance.job_type == 'run' and instance.project and
instance.scm_branch != instance.project.scm_branch):
if instance.scm_type == 'git' and instance.branch_override:
project_path = instance.project.get_project_path(check_if_exists=False)
if os.path.exists(project_path):
git_repo = git.Repo(project_path)
@@ -2327,17 +2304,48 @@ class RunProjectUpdate(BaseTask):
else:
self.original_branch = git_repo.active_branch

stage_path = os.path.join(instance.get_cache_path(), 'stage')
if os.path.exists(stage_path):
logger.warning('{0} unexpectedly existed before update'.format(stage_path))
shutil.rmtree(stage_path)
os.makedirs(stage_path) # presence of empty cache indicates lack of roles or collections

@staticmethod
def make_local_copy(project_path, destination_folder, scm_type, scm_revision):
if scm_type == 'git':
def clear_project_cache(cache_dir, keep_value):
if os.path.isdir(cache_dir):
for entry in os.listdir(cache_dir):
old_path = os.path.join(cache_dir, entry)
if entry not in (keep_value, 'stage'):
# invalidate, then delete
new_path = os.path.join(cache_dir, '.~~delete~~' + entry)
try:
os.rename(old_path, new_path)
shutil.rmtree(new_path)
except OSError:
logger.warning(f"Could not remove cache directory {old_path}")
|
||||
|
||||
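(Not part of the commit — a minimal standalone sketch of the rename-then-delete invalidation pattern that `clear_project_cache` above uses; function and directory names here are illustrative.)

```python
import os
import shutil
import tempfile

def clear_cache_dirs(cache_dir, keep):
    """Remove every cache subdirectory except `keep` and 'stage'.

    Renaming first is effectively atomic on the same filesystem, so a
    concurrent reader never observes a half-deleted tree under its
    original (valid) name; the slow recursive delete then happens under
    the throwaway name.
    """
    for entry in os.listdir(cache_dir):
        if entry in (keep, 'stage') or entry.startswith('.~~delete~~'):
            continue
        old_path = os.path.join(cache_dir, entry)
        new_path = os.path.join(cache_dir, '.~~delete~~' + entry)
        try:
            os.rename(old_path, new_path)  # invalidate
            shutil.rmtree(new_path)        # then delete
        except OSError:
            pass  # best effort, as in the diff

# usage: three cache dirs, keep only 'v2' (plus the staging dir)
root = tempfile.mkdtemp()
for name in ('v1', 'v2', 'stage'):
    os.mkdir(os.path.join(root, name))
clear_cache_dirs(root, keep='v2')
print(sorted(os.listdir(root)))  # ['stage', 'v2']
```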
    @staticmethod
    def make_local_copy(p, job_private_data_dir, scm_revision=None):
        """Copy project content (roles and collections) to a job private_data_dir

        :param object p: Either a project or a project update
        :param str job_private_data_dir: The root of the target ansible-runner folder
        :param str scm_revision: For branch_override cases, the git revision to copy
        """
        project_path = p.get_project_path(check_if_exists=False)
        destination_folder = os.path.join(job_private_data_dir, 'project')
        if not scm_revision:
            scm_revision = p.scm_revision

        if p.scm_type == 'git':
            git_repo = git.Repo(project_path)
            if not os.path.exists(destination_folder):
                os.mkdir(destination_folder, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
            tmp_branch_name = 'awx_internal/{}'.format(uuid4())
            # always clone based on specific job revision
            if not scm_revision:
            if not p.scm_revision:
                raise RuntimeError('Unexpectedly could not determine a revision to run from project.')
            source_branch = git_repo.create_head(tmp_branch_name, scm_revision)
            source_branch = git_repo.create_head(tmp_branch_name, p.scm_revision)
            # git clone must take file:// syntax for source repo or else options like depth will be ignored
            source_as_uri = Path(project_path).as_uri()
            git.Repo.clone_from(

@@ -2356,19 +2364,48 @@ class RunProjectUpdate(BaseTask):
        else:
            copy_tree(project_path, destination_folder, preserve_symlinks=1)

        # copy over the roles and collection cache to job folder
        cache_path = os.path.join(p.get_cache_path(), p.cache_id)
        subfolders = []
        if settings.AWX_COLLECTIONS_ENABLED:
            subfolders.append('requirements_collections')
        if settings.AWX_ROLES_ENABLED:
            subfolders.append('requirements_roles')
        for subfolder in subfolders:
            cache_subpath = os.path.join(cache_path, subfolder)
            if os.path.exists(cache_subpath):
                dest_subpath = os.path.join(job_private_data_dir, subfolder)
                copy_tree(cache_subpath, dest_subpath, preserve_symlinks=1)
                logger.debug('{0} {1} prepared {2} from cache'.format(type(p).__name__, p.pk, dest_subpath))

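(Not part of the commit — the diff's comment notes that `git clone` only honors options like `depth` when the local source is given as a `file://` URL rather than a bare path; `pathlib.Path.as_uri()` is how the code builds that URL. A tiny illustration, with a made-up path:)

```python
from pathlib import Path

# as_uri() yields an absolute file:// URL and percent-encodes special
# characters, which is exactly what `git clone` expects for URL-style sources
source_as_uri = Path('/srv/projects/demo repo').as_uri()
print(source_as_uri)  # file:///srv/projects/demo%20repo
```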
    def post_run_hook(self, instance, status):
        # To avoid hangs, very important to release lock even if errors happen here
        try:
            if self.playbook_new_revision:
                instance.scm_revision = self.playbook_new_revision
                instance.save(update_fields=['scm_revision'])

            # Roles and collection folders copy to durable cache
            base_path = instance.get_cache_path()
            stage_path = os.path.join(base_path, 'stage')
            if status == 'successful' and 'install_' in instance.job_tags:
                # Clear other caches before saving this one, and if branch is overridden
                # do not clear cache for main branch, but do clear it for other branches
                self.clear_project_cache(base_path, keep_value=instance.project.cache_id)
                cache_path = os.path.join(base_path, instance.cache_id)
                if os.path.exists(stage_path):
                    if os.path.exists(cache_path):
                        logger.warning('Rewriting cache at {0}, performance may suffer'.format(cache_path))
                        shutil.rmtree(cache_path)
                    os.rename(stage_path, cache_path)
                    logger.debug('{0} wrote to cache at {1}'.format(instance.log_format, cache_path))
            elif os.path.exists(stage_path):
                shutil.rmtree(stage_path)  # cannot trust content update produced

            if self.job_private_data_dir:
                # copy project folder before resetting to default branch
                # because some git-tree-specific resources (like submodules) might matter
                self.make_local_copy(
                    instance.get_project_path(check_if_exists=False), os.path.join(self.job_private_data_dir, 'project'),
                    instance.scm_type, instance.scm_revision
                )
                self.make_local_copy(instance, self.job_private_data_dir)
            if self.original_branch:
                # for git project syncs, non-default branches can be problems
                # restore to branch the repo was on before this run
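(Not part of the commit — a standalone sketch of the stage→cache promotion in `post_run_hook` above: content is built in a `stage` directory, and only on success is it published under its durable name with a single `os.rename`. Paths and the cache id are invented.)

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()             # stands in for instance.get_cache_path()
stage = os.path.join(base, 'stage')
cache = os.path.join(base, '42')      # stands in for instance.cache_id

# the update populates the staging area
os.makedirs(stage)
open(os.path.join(stage, 'requirements_roles'), 'w').close()

# promote: drop any previous cache, then one rename publishes the fully
# built staging directory under its durable name
if os.path.exists(cache):
    shutil.rmtree(cache)
os.rename(stage, cache)

print(os.path.exists(stage), sorted(os.listdir(cache)))
# False ['requirements_roles']
```

Because `os.rename` within one filesystem is atomic, readers see either the old cache or the complete new one, never a partially written directory.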
@@ -2462,15 +2499,12 @@ class RunInventoryUpdate(BaseTask):

        if injector is not None:
            env = injector.build_env(inventory_update, env, private_data_dir, private_data_files)
            # All CLOUD_PROVIDERS sources implement as either script or auto plugin
            if injector.should_use_plugin():
                env['ANSIBLE_INVENTORY_ENABLED'] = 'auto'
            else:
                env['ANSIBLE_INVENTORY_ENABLED'] = 'script'
            # All CLOUD_PROVIDERS sources implement as inventory plugin from collection
            env['ANSIBLE_INVENTORY_ENABLED'] = 'auto'

        if inventory_update.source in ['scm', 'custom']:
            for env_k in inventory_update.source_vars_dict:
                if str(env_k) not in env and str(env_k) not in settings.INV_ENV_VARIABLE_BLACKLIST:
                if str(env_k) not in env and str(env_k) not in settings.INV_ENV_VARIABLE_BLOCKED:
                    env[str(env_k)] = str(inventory_update.source_vars_dict[env_k])
        elif inventory_update.source == 'file':
            raise NotImplementedError('Cannot update file sources through the task system.')
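(Not part of the commit — a self-contained sketch of the filtering loop in the hunk above: source vars are copied into the environment only when the key is not already set and not on the blocked list. The variable names mirror the diff; the data is made up.)

```python
# stands in for settings.INV_ENV_VARIABLE_BLOCKED
INV_ENV_VARIABLE_BLOCKED = ('PATH', 'PYTHONPATH')

env = {'PATH': '/usr/bin'}  # pre-existing environment wins
source_vars = {'FOO': 'bar', 'PATH': '/evil', 'PYTHONPATH': '/evil', 'BAZ': 1}

for env_k in source_vars:
    if str(env_k) not in env and str(env_k) not in INV_ENV_VARIABLE_BLOCKED:
        env[str(env_k)] = str(source_vars[env_k])  # values coerced to str

print(sorted(env))   # ['BAZ', 'FOO', 'PATH']
print(env['PATH'])   # /usr/bin -- blocked key left untouched
```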
@@ -2554,7 +2588,7 @@ class RunInventoryUpdate(BaseTask):
            args.append('--exclude-empty-groups')
        if getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper(), False):
            args.extend(['--instance-id-var',
                         getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper()),])
                         "'{}'".format(getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper())),])
        # Add arguments for the source inventory script
        args.append('--source')
        args.append(self.pseudo_build_inventory(inventory_update, private_data_dir))

@@ -2582,16 +2616,12 @@ class RunInventoryUpdate(BaseTask):
            injector = InventorySource.injectors[src](self.get_ansible_version(inventory_update))

        if injector is not None:
            if injector.should_use_plugin():
                content = injector.inventory_contents(inventory_update, private_data_dir)
                # must be a statically named file
                inventory_path = os.path.join(private_data_dir, injector.filename)
                with open(inventory_path, 'w') as f:
                    f.write(content)
                os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
            else:
                # Use the vendored script path
                inventory_path = self.get_path_to('..', 'plugins', 'inventory', injector.script_name)
            content = injector.inventory_contents(inventory_update, private_data_dir)
            # must be a statically named file
            inventory_path = os.path.join(private_data_dir, injector.filename)
            with open(inventory_path, 'w') as f:
                f.write(content)
            os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
        elif src == 'scm':
            inventory_path = os.path.join(private_data_dir, 'project', inventory_update.source_path)
        elif src == 'custom':
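(Not part of the commit — a runnable sketch of writing a statically named inventory file with owner-only permissions, as the hunk above does with `stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR`. The directory and plugin config content are invented.)

```python
import os
import stat
import tempfile

private_data_dir = tempfile.mkdtemp()
content = "plugin: amazon.aws.aws_ec2\n"  # illustrative plugin config
inventory_path = os.path.join(private_data_dir, 'aws_ec2.yml')

with open(inventory_path, 'w') as f:
    f.write(content)
# restrict the credentials-bearing file to its owner (rwx = 0o700);
# chmod applies the mode exactly, regardless of the process umask
os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)

mode = stat.S_IMODE(os.stat(inventory_path).st_mode)
print(oct(mode))  # 0o700
```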
@@ -2615,12 +2645,6 @@ class RunInventoryUpdate(BaseTask):
        src = inventory_update.source
        if src == 'scm' and inventory_update.source_project_update:
            return os.path.join(private_data_dir, 'project')
        if src in CLOUD_PROVIDERS:
            injector = None
            if src in InventorySource.injectors:
                injector = InventorySource.injectors[src](self.get_ansible_version(inventory_update))
            if (not injector) or (not injector.should_use_plugin()):
                return self.get_path_to('..', 'plugins', 'inventory')
        return private_data_dir

    def build_playbook_path_relative_to_cwd(self, inventory_update, private_data_dir):

@@ -2634,13 +2658,21 @@ class RunInventoryUpdate(BaseTask):
        source_project = None
        if inventory_update.inventory_source:
            source_project = inventory_update.inventory_source.source_project
        if (inventory_update.source=='scm' and inventory_update.launch_type!='scm' and source_project):
            # In project sync, pulling galaxy roles is not needed
        if (inventory_update.source=='scm' and inventory_update.launch_type!='scm' and
                source_project and source_project.scm_type):  # never ever update manual projects

            # Check if the content cache exists, so that we do not unnecessarily re-download roles
            sync_needs = ['update_{}'.format(source_project.scm_type)]
            has_cache = os.path.exists(os.path.join(source_project.get_cache_path(), source_project.cache_id))
            # Galaxy requirements are not supported for manual projects
            if not has_cache:
                sync_needs.extend(['install_roles', 'install_collections'])

            local_project_sync = source_project.create_project_update(
                _eager_fields=dict(
                    launch_type="sync",
                    job_type='run',
                    job_tags='update_{},install_collections'.format(source_project.scm_type),  # roles are never valid for inventory
                    job_tags=','.join(sync_needs),
                    status='running',
                    execution_node=inventory_update.execution_node,
                    instance_group = inventory_update.instance_group,

@@ -2664,11 +2696,7 @@ class RunInventoryUpdate(BaseTask):
                raise
        elif inventory_update.source == 'scm' and inventory_update.launch_type == 'scm' and source_project:
            # This follows update, not sync, so make copy here
            project_path = source_project.get_project_path(check_if_exists=False)
            RunProjectUpdate.make_local_copy(
                project_path, os.path.join(private_data_dir, 'project'),
                source_project.scm_type, source_project.scm_revision
            )
            RunProjectUpdate.make_local_copy(source_project, private_data_dir)


@task(queue=get_local_queuename)
@@ -131,8 +131,8 @@ def mock_cache():

def pytest_runtest_teardown(item, nextitem):
    # clear Django cache at the end of every test ran
    # NOTE: this should not be memcache, see test_cache in test_env.py
    # this is a local test cache, so we want every test to start with empty cache
    # NOTE: this should not be memcache (as it is deprecated), nor should it be redis.
    # This is a local test cache, so we want every test to start with an empty cache
    cache.clear()

@@ -24,6 +24,7 @@ keyed_groups:
  separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
validate_certs: false
want_facts: true
want_hostcollections: true
want_params: true

@@ -3,5 +3,6 @@
    "TOWER_HOST": "https://foo.invalid",
    "TOWER_PASSWORD": "fooo",
    "TOWER_USERNAME": "fooo",
    "TOWER_OAUTH_TOKEN": "",
    "TOWER_VERIFY_SSL": "False"
}
@@ -1,9 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "AZURE_CLIENT_ID": "fooo",
    "AZURE_CLOUD_ENVIRONMENT": "fooo",
    "AZURE_INI_PATH": "{{ file_reference }}",
    "AZURE_SECRET": "fooo",
    "AZURE_SUBSCRIPTION_ID": "fooo",
    "AZURE_TENANT": "fooo"
}
@@ -1,11 +0,0 @@
[azure]
include_powerstate = yes
group_by_resource_group = yes
group_by_location = yes
group_by_tag = yes
locations = southcentralus,westus
base_source_var = value_of_var
use_private_ip = True
resource_groups = foo_resources,bar_resources
tags = Creator:jmarshall, peanutbutter:jelly

@@ -1,4 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "CLOUDFORMS_INI_PATH": "{{ file_reference }}"
}
@@ -1 +0,0 @@
<directory>
@@ -1,16 +0,0 @@
[cloudforms]
url = https://foo.invalid
username = fooo
password = fooo
ssl_verify = false
version = 2.4
purge_actions = maybe
clean_group_keys = this_key
nest_tags = yes
suffix = .ppt
prefer_ipv4 = yes

[cache]
max_age = 0
path = {{ cache_dir }}

@@ -1,7 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "AWS_ACCESS_KEY_ID": "fooo",
    "AWS_SECRET_ACCESS_KEY": "fooo",
    "AWS_SECURITY_TOKEN": "fooo",
    "EC2_INI_PATH": "{{ file_reference }}"
}
@@ -1 +0,0 @@
<directory>
@@ -1,34 +0,0 @@
[ec2]
base_source_var = value_of_var
boto_profile = /tmp/my_boto_stuff
iam_role_arn = arn:aws:iam::123456789012:role/test-role
hostname_variable = public_dns_name
destination_variable = public_dns_name
regions = us-east-2,ap-south-1
regions_exclude = us-gov-west-1,cn-north-1
vpc_destination_variable = ip_address
route53 = False
all_instances = True
all_rds_instances = False
include_rds_clusters = False
rds = False
nested_groups = True
elasticache = False
stack_filters = False
instance_filters = foobaa
group_by_ami_id = False
group_by_availability_zone = True
group_by_aws_account = False
group_by_instance_id = False
group_by_instance_state = False
group_by_platform = False
group_by_instance_type = True
group_by_key_pair = False
group_by_region = True
group_by_security_group = False
group_by_tag_keys = True
group_by_tag_none = False
group_by_vpc_id = False
cache_path = {{ cache_dir }}
cache_max_age = 300

@@ -1,12 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "GCE_CREDENTIALS_FILE_PATH": "{{ file_reference }}",
    "GCE_EMAIL": "fooo",
    "GCE_INI_PATH": "{{ file_reference_0 }}",
    "GCE_PROJECT": "fooo",
    "GCE_ZONE": "us-east4-a,us-west1-b",
    "GCP_AUTH_KIND": "serviceaccount",
    "GCP_ENV_TYPE": "tower",
    "GCP_PROJECT": "fooo",
    "GCP_SERVICE_ACCOUNT_FILE": "{{ file_reference }}"
}
@@ -1,7 +0,0 @@
{
    "type": "service_account",
    "private_key": "{{private_key}}",
    "client_email": "fooo",
    "project_id": "fooo",
    "token_uri": "https://oauth2.googleapis.com/token"
}
@@ -1,3 +0,0 @@
[cache]
cache_max_age = 0

@@ -1,4 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "OS_CLIENT_CONFIG_FILE": "{{ file_reference }}"
}
@@ -1 +0,0 @@
<directory>
@@ -1,17 +0,0 @@
ansible:
  expand_hostvars: true
  fail_on_errors: true
  use_hostnames: false
cache:
  path: {{ cache_dir }}
clouds:
  devstack:
    auth:
      auth_url: https://foo.invalid
      domain_name: fooo
      password: fooo
      project_domain_name: fooo
      project_name: fooo
      username: fooo
    private: false
  verify: false
@@ -1,7 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "OVIRT_INI_PATH": "{{ file_reference }}",
    "OVIRT_PASSWORD": "fooo",
    "OVIRT_URL": "https://foo.invalid",
    "OVIRT_USERNAME": "fooo"
}
@@ -1,5 +0,0 @@
[ovirt]
ovirt_url=https://foo.invalid
ovirt_username=fooo
ovirt_password=fooo
ovirt_ca_file=fooo
@@ -1,4 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "FOREMAN_INI_PATH": "{{ file_reference }}"
}
@@ -1,19 +0,0 @@
[foreman]
base_source_var = value_of_var
ssl_verify = False
url = https://foo.invalid
user = fooo
password = fooo

[ansible]
group_patterns = ["{app}-{tier}-{color}", "{app}-{color}"]
want_facts = True
want_hostcollections = True
group_prefix = foo_group_prefix
want_ansible_ssh_host = True
rich_params = False

[cache]
path = /tmp
max_age = 0

@@ -1,9 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "TOWER_HOST": "https://foo.invalid",
    "TOWER_INVENTORY": "42",
    "TOWER_LICENSE_TYPE": "open",
    "TOWER_PASSWORD": "fooo",
    "TOWER_USERNAME": "fooo",
    "TOWER_VERIFY_SSL": "False"
}
@@ -1,8 +0,0 @@
{
    "ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS": "never",
    "VMWARE_HOST": "https://foo.invalid",
    "VMWARE_INI_PATH": "{{ file_reference }}",
    "VMWARE_PASSWORD": "fooo",
    "VMWARE_USER": "fooo",
    "VMWARE_VALIDATE_CERTS": "False"
}
@@ -1,11 +0,0 @@
[vmware]
cache_max_age = 0
validate_certs = False
username = fooo
password = fooo
server = https://foo.invalid
base_source_var = value_of_var
alias_pattern = {{ config.foo }}
host_filters = {{ config.zoo == "DC0_H0_VM0" }}
groupby_patterns = {{ config.asdf }}

@@ -50,8 +50,6 @@ class TestSwaggerGeneration():
        data.update(response.accepted_renderer.get_customizations() or {})

        data['host'] = None
        if not pytest.config.getoption("--genschema"):
            data['modified'] = datetime.datetime.utcnow().isoformat()
        data['schemes'] = ['https']
        data['consumes'] = ['application/json']

@@ -79,10 +77,14 @@ class TestSwaggerGeneration():
        data['paths'] = revised_paths
        self.__class__.JSON = data

    def test_sanity(self, release):
    def test_sanity(self, release, request):
        JSON = self.__class__.JSON
        JSON['info']['version'] = release

        if not request.config.getoption('--genschema'):
            JSON['modified'] = datetime.datetime.utcnow().isoformat()

        # Make some basic assertions about the rendered JSON so we can
        # be sure it doesn't break across DRF upgrades and view/serializer
        # changes.

@@ -115,7 +117,7 @@ class TestSwaggerGeneration():
        # hit a couple important endpoints so we always have example data
        get(path, user=admin, expect=200)

    def test_autogen_response_examples(self, swagger_autogen):
    def test_autogen_response_examples(self, swagger_autogen, request):
        for pattern, node in TestSwaggerGeneration.JSON['paths'].items():
            pattern = pattern.replace('{id}', '[0-9]+')
            pattern = pattern.replace(r'{category_slug}', r'[a-zA-Z0-9\-]+')

@@ -138,7 +140,7 @@ class TestSwaggerGeneration():
            for param in node[method].get('parameters'):
                if param['in'] == 'body':
                    node[method]['parameters'].remove(param)
            if pytest.config.getoption("--genschema"):
            if request.config.getoption("--genschema"):
                pytest.skip("In schema generator skipping swagger generator", allow_module_level=True)
            else:
                node[method].setdefault('parameters', []).append({
@@ -60,6 +60,36 @@ def test_credential_validation_error_with_bad_user(post, admin, credentialtype_s
    assert response.data['user'][0] == 'Incorrect type. Expected pk value, received str.'


@pytest.mark.django_db
def test_credential_validation_error_with_no_owner_field(post, admin, credentialtype_ssh):
    params = {
        'credential_type': credentialtype_ssh.id,
        'inputs': {'username': 'someusername'},
        'name': 'Some name',
    }
    response = post(reverse('api:credential_list'), params, admin)
    assert response.status_code == 400
    assert response.data['detail'][0] == "Missing 'user', 'team', or 'organization'."


@pytest.mark.django_db
def test_credential_validation_error_with_multiple_owner_fields(post, admin, alice, team, organization, credentialtype_ssh):
    params = {
        'credential_type': credentialtype_ssh.id,
        'inputs': {'username': 'someusername'},
        'team': team.id,
        'user': alice.id,
        'organization': organization.id,
        'name': 'Some name',
    }
    response = post(reverse('api:credential_list'), params, admin)
    assert response.status_code == 400
    assert response.data['detail'][0] == (
        "Only one of 'user', 'team', or 'organization' should be provided, "
        "received organization, team, user fields."
    )


@pytest.mark.django_db
def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh):
    params = {

@@ -1123,6 +1153,22 @@ def test_cloud_credential_type_mutability(patch, organization, admin, credential
    assert response.status_code == 200


@pytest.mark.django_db
@pytest.mark.parametrize('field', ['password', 'ssh_key_data'])
def test_secret_fields_cannot_be_special_encrypted_variable(post, organization, admin, credentialtype_ssh, field):
    params = {
        'name': 'Best credential ever',
        'credential_type': credentialtype_ssh.id,
        'inputs': {
            'username': 'joe',
            field: '$encrypted$',
        },
        'organization': organization.id,
    }
    response = post(reverse('api:credential_list'), params, admin, status=400)
    assert str(response.data['inputs'][0]) == f'$encrypted$ is a reserved keyword, and cannot be used for {field}.'


@pytest.mark.django_db
def test_ssh_unlock_needed(put, organization, admin, credentialtype_ssh):
    params = {

@@ -220,7 +220,7 @@ def test_create_valid_kind(kind, get, post, admin):


@pytest.mark.django_db
@pytest.mark.parametrize('kind', ['ssh', 'vault', 'scm', 'insights'])
@pytest.mark.parametrize('kind', ['ssh', 'vault', 'scm', 'insights', 'kubernetes'])
def test_create_invalid_kind(kind, get, post, admin):
    response = post(reverse('api:credential_type_list'), {
        'kind': kind,

@@ -4,7 +4,7 @@ from awx.api.versioning import reverse


@pytest.mark.django_db
def test_proxy_ip_whitelist(get, patch, admin):
def test_proxy_ip_allowed(get, patch, admin):
    url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'system'})
    patch(url, user=admin, data={
        'REMOTE_HOST_HEADERS': [

@@ -23,37 +23,37 @@ def test_proxy_ip_whitelist(get, patch, admin):
    def process_response(self, request, response):
        self.environ = request.environ

    # By default, `PROXY_IP_WHITELIST` is disabled, so custom `REMOTE_HOST_HEADERS`
    # By default, `PROXY_IP_ALLOWED_LIST` is disabled, so custom `REMOTE_HOST_HEADERS`
    # should just pass through
    middleware = HeaderTrackingMiddleware()
    get(url, user=admin, middleware=middleware,
        HTTP_X_FROM_THE_LOAD_BALANCER='some-actual-ip')
    assert middleware.environ['HTTP_X_FROM_THE_LOAD_BALANCER'] == 'some-actual-ip'

    # If `PROXY_IP_WHITELIST` is restricted to 10.0.1.100 and we make a request
    # If `PROXY_IP_ALLOWED_LIST` is restricted to 10.0.1.100 and we make a request
    # from 8.9.10.11, the custom `HTTP_X_FROM_THE_LOAD_BALANCER` header should
    # be stripped
    patch(url, user=admin, data={
        'PROXY_IP_WHITELIST': ['10.0.1.100']
        'PROXY_IP_ALLOWED_LIST': ['10.0.1.100']
    })
    middleware = HeaderTrackingMiddleware()
    get(url, user=admin, middleware=middleware, REMOTE_ADDR='8.9.10.11',
        HTTP_X_FROM_THE_LOAD_BALANCER='some-actual-ip')
    assert 'HTTP_X_FROM_THE_LOAD_BALANCER' not in middleware.environ

    # If 8.9.10.11 is added to `PROXY_IP_WHITELIST` the
    # If 8.9.10.11 is added to `PROXY_IP_ALLOWED_LIST` the
    # `HTTP_X_FROM_THE_LOAD_BALANCER` header should be passed through again
    patch(url, user=admin, data={
        'PROXY_IP_WHITELIST': ['10.0.1.100', '8.9.10.11']
        'PROXY_IP_ALLOWED_LIST': ['10.0.1.100', '8.9.10.11']
    })
    middleware = HeaderTrackingMiddleware()
    get(url, user=admin, middleware=middleware, REMOTE_ADDR='8.9.10.11',
        HTTP_X_FROM_THE_LOAD_BALANCER='some-actual-ip')
    assert middleware.environ['HTTP_X_FROM_THE_LOAD_BALANCER'] == 'some-actual-ip'

    # Allow whitelisting of proxy hostnames in addition to IP addresses
    # Allow allowed list of proxy hostnames in addition to IP addresses
    patch(url, user=admin, data={
        'PROXY_IP_WHITELIST': ['my.proxy.example.org']
        'PROXY_IP_ALLOWED_LIST': ['my.proxy.example.org']
    })
    middleware = HeaderTrackingMiddleware()
    get(url, user=admin, middleware=middleware, REMOTE_ADDR='8.9.10.11',
@@ -60,6 +60,42 @@ def test_inventory_source_unique_together_with_inv(inventory_factory):
    is2.validate_unique()


@pytest.mark.django_db
def test_inventory_host_name_unique(scm_inventory, post, admin_user):
    inv_src = scm_inventory.inventory_sources.first()
    inv_src.groups.create(name='barfoo', inventory=scm_inventory)
    resp = post(
        reverse('api:inventory_hosts_list', kwargs={'pk': scm_inventory.id}),
        {
            'name': 'barfoo',
            'inventory_id': scm_inventory.id,
        },
        admin_user,
        expect=400
    )

    assert resp.status_code == 400
    assert "A Group with that name already exists." in json.dumps(resp.data)


@pytest.mark.django_db
def test_inventory_group_name_unique(scm_inventory, post, admin_user):
    inv_src = scm_inventory.inventory_sources.first()
    inv_src.hosts.create(name='barfoo', inventory=scm_inventory)
    resp = post(
        reverse('api:inventory_groups_list', kwargs={'pk': scm_inventory.id}),
        {
            'name': 'barfoo',
            'inventory_id': scm_inventory.id,
        },
        admin_user,
        expect=400
    )

    assert resp.status_code == 400
    assert "A Host with that name already exists." in json.dumps(resp.data)


@pytest.mark.parametrize("role_field,expected_status_code", [
    (None, 403),
    ('admin_role', 200),

@@ -413,7 +449,7 @@ def test_inventory_update_access_called(post, inventory_source, alice, mock_acce
@pytest.mark.django_db
def test_inventory_source_vars_prohibition(post, inventory, admin_user):
    with mock.patch('awx.api.serializers.settings') as mock_settings:
        mock_settings.INV_ENV_VARIABLE_BLACKLIST = ('FOOBAR',)
        mock_settings.INV_ENV_VARIABLE_BLOCKED = ('FOOBAR',)
        r = post(reverse('api:inventory_source_list'),
                 {'name': 'new inv src', 'source_vars': '{\"FOOBAR\": \"val\"}', 'inventory': inventory.pk},
                 admin_user, expect=400)

@@ -483,25 +483,26 @@ def test_job_launch_pass_with_prompted_vault_password(machine_credential, vault_


@pytest.mark.django_db
def test_job_launch_JT_with_credentials(machine_credential, credential, net_credential, deploy_jobtemplate):
def test_job_launch_JT_with_credentials(machine_credential, credential, net_credential, kube_credential, deploy_jobtemplate):
    deploy_jobtemplate.ask_credential_on_launch = True
    deploy_jobtemplate.save()

    kv = dict(credentials=[credential.pk, net_credential.pk, machine_credential.pk])
    kv = dict(credentials=[credential.pk, net_credential.pk, machine_credential.pk, kube_credential.pk])
    serializer = JobLaunchSerializer(data=kv, context={'template': deploy_jobtemplate})
    validated = serializer.is_valid()
    assert validated, serializer.errors

    kv['credentials'] = [credential, net_credential, machine_credential]  # convert to internal value
    kv['credentials'] = [credential, net_credential, machine_credential, kube_credential]  # convert to internal value
    prompted_fields, ignored_fields, errors = deploy_jobtemplate._accept_or_ignore_job_kwargs(
        _exclude_errors=['required', 'prompts'], **kv)
    job_obj = deploy_jobtemplate.create_unified_job(**prompted_fields)

    creds = job_obj.credentials.all()
    assert len(creds) == 3
    assert len(creds) == 4
    assert credential in creds
    assert net_credential in creds
    assert machine_credential in creds
    assert kube_credential in creds


@pytest.mark.django_db

@@ -54,7 +54,9 @@ def test_no_changing_overwrite_behavior_if_used(post, patch, organization, admin
        data={
            'name': 'fooo',
            'organization': organization.id,
            'allow_override': True
            'allow_override': True,
            'scm_type': 'git',
            'scm_url': 'https://github.com/ansible/test-playbooks.git'
        },
        user=admin_user,
        expect=201

@@ -83,7 +85,9 @@ def test_changing_overwrite_behavior_okay_if_not_used(post, patch, organization,
        data={
            'name': 'fooo',
            'organization': organization.id,
            'allow_override': True
            'allow_override': True,
            'scm_type': 'git',
            'scm_url': 'https://github.com/ansible/test-playbooks.git'
        },
        user=admin_user,
        expect=201

@@ -1,3 +1,5 @@
from datetime import date

import pytest

from django.contrib.sessions.middleware import SessionMiddleware

@@ -61,3 +63,21 @@ def test_user_cannot_update_last_login(patch, admin):
        middleware=SessionMiddleware()
    )
    assert User.objects.get(pk=admin.pk).last_login is None


@pytest.mark.django_db
def test_user_verify_attribute_created(admin, get):
    assert admin.created == admin.date_joined
    resp = get(
        reverse('api:user_detail', kwargs={'pk': admin.pk}),
        admin
    )
    assert resp.data['created'] == admin.date_joined

    past = date(2020, 1, 1).isoformat()
    for op, count in (('gt', 1), ('lt', 0)):
        resp = get(
            reverse('api:user_list') + f'?created__{op}={past}',
            admin
        )
        assert resp.data['count'] == count

@@ -71,6 +71,18 @@ def test_node_accepts_prompted_fields(inventory, project, workflow_job_template,
                user=admin_user, expect=201)


@pytest.mark.django_db
@pytest.mark.parametrize("field_name, field_value", [
    ('all_parents_must_converge', True),
    ('all_parents_must_converge', False),
])
def test_create_node_with_field(field_name, field_value, workflow_job_template, post, admin_user):
    url = reverse('api:workflow_job_template_workflow_nodes_list',
                  kwargs={'pk': workflow_job_template.pk})
    res = post(url, {field_name: field_value}, user=admin_user, expect=201)
    assert res.data[field_name] == field_value


@pytest.mark.django_db
class TestApprovalNodes():
    def test_approval_node_creation(self, post, approval_node, admin_user):

@@ -145,7 +145,6 @@ def project(instance, organization):
        description="test-proj-desc",
        organization=organization,
        playbook_files=['helloworld.yml', 'alt-helloworld.yml'],
        local_path='_92__test_proj',
        scm_revision='1234567890123456789012345678901234567890',
        scm_url='localhost',
        scm_type='git'

@@ -3,7 +3,7 @@ import pytest

from django.utils.timezone import now

from awx.main.models import Job, JobEvent, Inventory, Host
from awx.main.models import Job, JobEvent, Inventory, Host, JobHostSummary


@pytest.mark.django_db
@@ -153,3 +153,58 @@ def test_host_summary_generation_with_deleted_hosts():
    assert ids == [-1, -1, -1, -1, -1, 6, 7, 8, 9, 10]
    assert names == ['Host 0', 'Host 1', 'Host 2', 'Host 3', 'Host 4', 'Host 5',
                     'Host 6', 'Host 7', 'Host 8', 'Host 9']

@pytest.mark.django_db
def test_host_summary_generation_with_limit():
    # Make an inventory with 10 hosts, run a playbook with a --limit
    # pointed at *one* host, and verify that *only* that host has an
    # associated JobHostSummary and an updated value for .last_job.
    hostnames = [f'Host {i}' for i in range(10)]
    inv = Inventory()
    inv.save()
    Host.objects.bulk_create([
        Host(created=now(), modified=now(), name=h, inventory_id=inv.id)
        for h in hostnames
    ])
    j = Job(inventory=inv)
    j.save()

    # host_map is a data structure that tracks a mapping of host name --> ID
    # for the inventory, _regardless_ of whether or not there's a limit
    # applied to the actual playbook run
    host_map = dict((host.name, host.id) for host in inv.hosts.all())

    # by making the playbook_on_stats *only* include Host 1, we're emulating
    # the behavior of a `--limit=Host 1`
    matching_host = Host.objects.get(name='Host 1')
    JobEvent.create_from_data(
        job_id=j.pk,
        parent_uuid='abc123',
        event='playbook_on_stats',
        event_data={
            'ok': {matching_host.name: len(matching_host.name)},  # effectively, limit=Host 1
            'changed': {},
            'dark': {},
            'failures': {},
            'ignored': {},
            'processed': {},
            'rescued': {},
            'skipped': {},
        },
        host_map=host_map
    ).save()

    # since the playbook_on_stats only references one host,
    # there should *only* be one JobHostSummary record (and it should
    # be related to the appropriate Host)
    assert JobHostSummary.objects.count() == 1
    for h in Host.objects.all():
        if h.name == 'Host 1':
            assert h.last_job_id == j.id
            assert h.last_job_host_summary_id == JobHostSummary.objects.first().id
        else:
            # all other hosts in the inventory should remain untouched
            assert h.last_job_id is None
            assert h.last_job_host_summary_id is None

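The test above relies on the idea that only hosts named somewhere in the `playbook_on_stats` payload should receive summaries. A rough sketch of that selection step, using the same category keys; the helper name and data shapes are illustrative, not AWX's actual implementation:

```python
STAT_CATEGORIES = ('ok', 'changed', 'dark', 'failures',
                   'ignored', 'processed', 'rescued', 'skipped')

def hosts_needing_summary(event_data, host_map):
    """Return {name: id} for hosts mentioned anywhere in the stats event.

    host_map covers the whole inventory; only hosts that actually appear
    in the playbook_on_stats payload (i.e. survived --limit) are kept.
    """
    mentioned = set()
    for category in STAT_CATEGORIES:
        mentioned.update(event_data.get(category, {}))
    return {name: host_map[name] for name in mentioned if name in host_map}

host_map = {f'Host {i}': i for i in range(10)}  # illustrative IDs
event_data = {'ok': {'Host 1': 6}}  # emulates --limit='Host 1'
summaries = hosts_needing_summary(event_data, host_map)
# only 'Host 1' survives, matching the single-JobHostSummary assertion
```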
@@ -17,7 +17,6 @@ from awx.main.models import (
    Job
)
from awx.main.constants import CLOUD_PROVIDERS
from awx.main.models.inventory import PluginFileInjector
from awx.main.utils.filters import SmartFilter


@@ -170,7 +169,8 @@ class TestSCMUpdateFeatures:
        inventory_update = InventoryUpdate(
            inventory_source=scm_inventory_source,
            source_path=scm_inventory_source.source_path)
        assert inventory_update.get_actual_source_path().endswith('_92__test_proj/inventory_file')
        p = scm_inventory_source.source_project
        assert inventory_update.get_actual_source_path().endswith(f'_{p.id}__test_proj/inventory_file')

    def test_no_unwanted_updates(self, scm_inventory_source):
        # Changing the non-sensitive fields should not trigger update
@@ -227,13 +227,6 @@ class TestSCMClean:

@pytest.mark.django_db
class TestInventorySourceInjectors:
    def test_should_use_plugin(self):
        class foo(PluginFileInjector):
            plugin_name = 'foo_compute'
            initial_version = '2.7.8'
        assert not foo('2.7.7').should_use_plugin()
        assert foo('2.8').should_use_plugin()

    def test_extra_credentials(self, project, credential):
        inventory_source = InventorySource.objects.create(
            name='foo', source='custom', source_project=project

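`should_use_plugin` in the test above compares the running Ansible version against the injector's `initial_version` ('2.7.8'), so '2.7.7' is too old while '2.8' qualifies. A hedged sketch of such a check using plain version tuples (AWX may use a different version parser internally):

```python
def parse_version(v):
    """'2.7.8' -> (2, 7, 8); tuple comparison makes '2.8' >= '2.7.8' true."""
    return tuple(int(part) for part in v.split('.'))

class FooInjector:
    # Illustrative stand-in for a PluginFileInjector subclass.
    plugin_name = 'foo_compute'
    initial_version = '2.7.8'  # first version where the plugin is available

    def __init__(self, ansible_version):
        self.ansible_version = ansible_version

    def should_use_plugin(self):
        return parse_version(self.ansible_version) >= parse_version(self.initial_version)

old = FooInjector('2.7.7').should_use_plugin()  # False: predates the plugin
new = FooInjector('2.8').should_use_plugin()    # True: (2, 8) >= (2, 7, 8)
```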
@@ -266,18 +259,6 @@ class TestInventorySourceInjectors:
        injector = InventorySource.injectors[source]('2.7.7')
        assert injector.filename == filename

    @pytest.mark.parametrize('source,script_name', [
        ('ec2', 'ec2.py'),
        ('rhv', 'ovirt4.py'),
        ('satellite6', 'foreman.py'),
        ('openstack', 'openstack_inventory.py')
    ], ids=['ec2', 'rhv', 'satellite6', 'openstack'])
    def test_script_filenames(self, source, script_name):
        """Ansible has several exceptions in naming of scripts
        """
        injector = InventorySource.injectors[source]('2.7.7')
        assert injector.script_name == script_name

    def test_group_by_azure(self):
        injector = InventorySource.injectors['azure_rm']('2.9')
        inv_src = InventorySource(

@@ -12,6 +12,7 @@ from awx.api.serializers import UnifiedJobSerializer

class TestJobNotificationMixin(object):
    CONTEXT_STRUCTURE = {'job': {'allow_simultaneous': bool,
                                 'artifacts': {},
                                 'custom_virtualenv': str,
                                 'controller_node': str,
                                 'created': datetime.datetime,

@@ -34,6 +34,18 @@ def test_sensitive_change_triggers_update(project):
    mock_update.assert_called_once_with()


@pytest.mark.django_db
def test_local_path_autoset(organization):
    with mock.patch.object(Project, "update"):
        p = Project.objects.create(
            name="test-proj",
            organization=organization,
            scm_url='localhost',
            scm_type='git'
        )
    assert p.local_path == f'_{p.id}__test_proj'


@pytest.mark.django_db
def test_foreign_key_change_changes_modified_by(project, organization):
    assert project._get_fields_snapshot()['organization_id'] == organization.id

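`test_local_path_autoset` above expects a project saved without an explicit `local_path` to end up with `_<pk>__<name>`, with the dash in "test-proj" normalized to an underscore. A guess at that derivation; the helper name and the exact slug rule are hypothetical, only the expected output format comes from the test:

```python
import re

def default_local_path(pk, name):
    """Build '_<pk>__<slug>' as the autoset test expects ('test-proj' -> 'test_proj')."""
    # Replace runs of non-alphanumeric characters with a single underscore.
    slug = re.sub(r'[^a-zA-Z0-9]+', '_', name).strip('_')
    return f'_{pk}__{slug}'

path = default_local_path(42, 'test-proj')
# path == '_42__test_proj', matching the f'_{p.id}__test_proj' assertion shape
```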
@@ -68,15 +68,6 @@ INI_TEST_VARS = {
        'satellite6_want_hostcollections': True,
        'satellite6_want_ansible_ssh_host': True,
        'satellite6_want_facts': True
    },
    'cloudforms': {
        'version': '2.4',
        'purge_actions': 'maybe',
        'clean_group_keys': 'this_key',
        'nest_tags': 'yes',
        'suffix': '.ppt',
        'prefer_ipv4': 'yes'
    },
    'rhv': {  # options specific to the plugin
        'ovirt_insecure': False,

@@ -121,21 +112,27 @@ def credential_kind(source):


@pytest.fixture
def fake_credential_factory(source):
    ct = CredentialType.defaults[credential_kind(source)]()
    ct.save()
def fake_credential_factory():
    def wrap(source):
        ct = CredentialType.defaults[credential_kind(source)]()
        ct.save()

    inputs = {}
    var_specs = {}  # pivoted version of inputs
    for element in ct.inputs.get('fields'):
        var_specs[element['id']] = element
    for var in var_specs.keys():
        inputs[var] = generate_fake_var(var_specs[var])
        inputs = {}
        var_specs = {}  # pivoted version of inputs
        for element in ct.inputs.get('fields'):
            var_specs[element['id']] = element
        for var in var_specs.keys():
            inputs[var] = generate_fake_var(var_specs[var])

        if source == 'tower':
            inputs.pop('oauth_token')  # mutually exclusive with user/pass

        return Credential.objects.create(
            credential_type=ct,
            inputs=inputs
        )
    return wrap

    return Credential.objects.create(
        credential_type=ct,
        inputs=inputs
    )

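The hunk above rewrites `fake_credential_factory` from a fixture that received `source` directly into pytest's "factory as fixture" pattern: the fixture returns a `wrap(source)` callable, so each test can build credentials for any source on demand. A stripped-down sketch of the pattern, with a plain dict standing in for the ORM-backed `Credential` (in pytest the outer function would carry `@pytest.fixture`):

```python
def fake_credential_factory():
    """Factory-as-fixture pattern: return a builder instead of one object."""
    created = []  # stands in for Credential.objects.create side effects

    def wrap(source):
        # Hypothetical credential shape, purely illustrative.
        cred = {'kind': source, 'inputs': {'host': f'{source}.example.invalid'}}
        created.append(cred)
        return cred

    return wrap

# A test would receive `wrap` via dependency injection and call it as needed.
factory = fake_credential_factory()
ec2_cred = factory('ec2')
gce_cred = factory('gce')
```

The advantage over the old form is that one fixture serves every parametrized source, instead of the fixture itself needing a `source` argument resolved by pytest.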
def read_content(private_data_dir, raw_env, inventory_update):
@@ -247,8 +244,7 @@ def create_reference_data(source_dir, env, content):

@pytest.mark.django_db
@pytest.mark.parametrize('this_kind', CLOUD_PROVIDERS)
@pytest.mark.parametrize('script_or_plugin', ['scripts', 'plugins'])
def test_inventory_update_injected_content(this_kind, script_or_plugin, inventory):
def test_inventory_update_injected_content(this_kind, inventory, fake_credential_factory):
    src_vars = dict(base_source_var='value_of_var')
    if this_kind in INI_TEST_VARS:
        src_vars.update(INI_TEST_VARS[this_kind])
@@ -265,8 +261,7 @@ def test_inventory_update_injected_content(this_kind, script_or_plugin, inventor
    inventory_update = inventory_source.create_unified_job()
    task = RunInventoryUpdate()

    use_plugin = bool(script_or_plugin == 'plugins')
    if use_plugin and InventorySource.injectors[this_kind].plugin_name is None:
    if InventorySource.injectors[this_kind].plugin_name is None:
        pytest.skip('Use of inventory plugin is not enabled for this source')

    def substitute_run(envvars=None, **_kw):

@@ -276,11 +271,11 @@ def test_inventory_update_injected_content(this_kind, script_or_plugin, inventor
        If MAKE_INVENTORY_REFERENCE_FILES is set, it will produce reference files
        """
        private_data_dir = envvars.pop('AWX_PRIVATE_DATA_DIR')
        assert envvars.pop('ANSIBLE_INVENTORY_ENABLED') == ('auto' if use_plugin else 'script')
        assert envvars.pop('ANSIBLE_INVENTORY_ENABLED') == 'auto'
        set_files = bool(os.getenv("MAKE_INVENTORY_REFERENCE_FILES", 'false').lower()[0] not in ['f', '0'])
        env, content = read_content(private_data_dir, envvars, inventory_update)
        env.pop('ANSIBLE_COLLECTIONS_PATHS', None)  # collection paths not relevant to this test
        base_dir = os.path.join(DATA, script_or_plugin)
        base_dir = os.path.join(DATA, 'plugins')
        if not os.path.exists(base_dir):
            os.mkdir(base_dir)
        source_dir = os.path.join(base_dir, this_kind)  # this_kind is a global

@@ -314,21 +309,13 @@ def test_inventory_update_injected_content(this_kind, script_or_plugin, inventor
        Res = namedtuple('Result', ['status', 'rc'])
        return Res('successful', 0)

    mock_licenser = mock.Mock(return_value=mock.Mock(
        validate=mock.Mock(return_value={'license_type': 'open'})
    ))

    # Mock this so that it will not send events to the callback receiver
    # because doing so in pytest land creates large explosions
    with mock.patch('awx.main.queue.CallbackQueueDispatcher.dispatch', lambda self, obj: None):
        # Force the update to use the script injector
        with mock.patch('awx.main.models.inventory.PluginFileInjector.should_use_plugin', return_value=use_plugin):
            # Also do not send websocket status updates
            with mock.patch.object(UnifiedJob, 'websocket_emit_status', mock.Mock()):
                with mock.patch.object(task, 'get_ansible_version', return_value='2.13'):
                    # The point of this test is that we replace run with assertions
                    with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
                        # mocking the licenser is necessary for the tower source
                        with mock.patch('awx.main.models.inventory.get_licenser', mock_licenser):
                            # so this sets up everything for a run and then yields control over to substitute_run
                            task.run(inventory_update.pk)
        # Also do not send websocket status updates
        with mock.patch.object(UnifiedJob, 'websocket_emit_status', mock.Mock()):
            with mock.patch.object(task, 'get_ansible_version', return_value='2.13'):
                # The point of this test is that we replace run with assertions
                with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
                    # so this sets up everything for a run and then yields control over to substitute_run
                    task.run(inventory_update.pk)

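The hunk above trims one level from a deep pyramid of nested `mock.patch` context managers. When such stacks grow unwieldy, `contextlib.ExitStack` can flatten them into one block; a self-contained sketch patching stdlib names (the patched targets here are illustrative stand-ins, not AWX's):

```python
import os
from contextlib import ExitStack
from unittest import mock

# Equivalent to three nested `with mock.patch(...)` blocks, one per line.
with ExitStack() as stack:
    stack.enter_context(mock.patch('os.getcwd', return_value='/fake'))
    stack.enter_context(mock.patch.object(os.path, 'exists', return_value=True))
    stack.enter_context(mock.patch.dict(os.environ, {'AWX_FAKE': '1'}))
    cwd = os.getcwd()                              # patched
    exists = os.path.exists('/definitely/not/there')  # patched
    flag = os.environ['AWX_FAKE']                  # patched
# All three patches are unwound here, in reverse order, just like nesting.
```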
@@ -5,6 +5,8 @@ from awx.main.migrations import _inventory_source as invsrc

from django.apps import apps

from awx.main.models import InventorySource


@pytest.mark.parametrize('vars,id_var,result', [
    ({'foo': {'bar': '1234'}}, 'foo.bar', '1234'),
@@ -37,3 +39,19 @@ def test_apply_new_instance_id(inventory_source):
    host2.refresh_from_db()
    assert host1.instance_id == ''
    assert host2.instance_id == 'bad_user'


@pytest.mark.django_db
def test_replacement_scm_sources(inventory):
    inv_source = InventorySource.objects.create(
        name='test',
        inventory=inventory,
        organization=inventory.organization,
        source='ec2'
    )
    invsrc.create_scm_script_substitute(apps, 'ec2')
    inv_source.refresh_from_db()
    assert inv_source.source == 'scm'
    assert inv_source.source_project
    project = inv_source.source_project
    assert 'Replacement project for' in project.name

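The `vars,id_var,result` cases above (e.g. `{'foo': {'bar': '1234'}}` with `id_var='foo.bar'` yielding `'1234'`) resolve a dotted path inside a host's variables. A plausible sketch of that lookup; the helper name and the empty-string default are mine, not necessarily the migration helper's:

```python
def resolve_id_var(variables, id_var, default=''):
    """Walk a dotted path through nested dicts: 'foo.bar' -> variables['foo']['bar']."""
    value = variables
    for key in id_var.split('.'):
        if not isinstance(value, dict) or key not in value:
            return default  # missing segment: fall back rather than raise
        value = value[key]
    return value

found = resolve_id_var({'foo': {'bar': '1234'}}, 'foo.bar')  # matches the test case
missing = resolve_id_var({'foo': {}}, 'foo.bar')             # path breaks at 'bar'
```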
@@ -2,7 +2,9 @@ import pytest
from unittest import mock
import json

from awx.main.models import Job, Instance, JobHostSummary
from awx.main.models import (Job, Instance, JobHostSummary, InventoryUpdate,
                             InventorySource, Project, ProjectUpdate,
                             SystemJob, AdHocCommand)
from awx.main.tasks import cluster_node_heartbeat
from django.test.utils import override_settings

@@ -33,6 +35,31 @@ def test_job_capacity_and_with_inactive_node():
    assert i.capacity == 0


@pytest.mark.django_db
def test_job_type_name():
    job = Job.objects.create()
    assert job.job_type_name == 'job'

    ahc = AdHocCommand.objects.create()
    assert ahc.job_type_name == 'ad_hoc_command'

    source = InventorySource.objects.create(source='ec2')
    source.save()
    iu = InventoryUpdate.objects.create(
        inventory_source=source,
        source='ec2'
    )
    assert iu.job_type_name == 'inventory_update'

    proj = Project.objects.create()
    proj.save()
    pu = ProjectUpdate.objects.create(project=proj)
    assert pu.job_type_name == 'project_update'

    sjob = SystemJob.objects.create()
    assert sjob.job_type_name == 'system_job'


@pytest.mark.django_db
def test_job_notification_data(inventory, machine_credential, project):
    encrypted_str = "$encrypted$"

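`test_job_type_name` above maps each unified-job class to a snake_case label (`AdHocCommand` → `ad_hoc_command`). One way to obtain exactly that mapping is a CamelCase-to-snake_case conversion of the class name; this regex approach is illustrative, and AWX may derive the value differently internally:

```python
import re

def job_type_name(cls_name):
    """CamelCase class name -> snake_case label, e.g. 'InventoryUpdate' -> 'inventory_update'."""
    # Insert an underscore before every uppercase letter except the first,
    # then lowercase the whole string.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', cls_name).lower()

names = {c: job_type_name(c) for c in
         ('Job', 'AdHocCommand', 'InventoryUpdate', 'ProjectUpdate', 'SystemJob')}
# names['AdHocCommand'] == 'ad_hoc_command', matching the test's assertions
```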
Some files were not shown because too many files have changed in this diff.