Merge branch 'release_3.2.0' into devel

* release_3.2.0: (342 commits)
  fail all jobs on an offline node
  filtering out super users from permissions lists
  removing vars from schedules for project syncs and inv syncs
  update license page if user inputs a new type of license
  Show IG name on job results if it comes from the socket
  rename isolated->expect in script tooling
  Center survey maker delete dialog in browser window
  Fix job details right panel content from overflowing in Firefox
  graceful killing of receiver worker processes
  change imports to reflect isolated->expect move
  Update smart inventory host popover content
  Fix extra variable textarea scroll in Firefox
  initial commit to move folder isolated->expect
  Add missing super call in NotificationTemplateSerializer
  Various workflow maker bug fixes
  Style nodes with deleted unified job templates
  Fixed job template view for user with read-only access
  presume 401 from insights means invalid credential
  only reap non-netsplit nodes
  import os, fixing bug that forced SIGKILL
  ...
Matthew Jones
2017-08-15 22:22:26 -04:00
529 changed files with 46524 additions and 15521 deletions

.gitignore

```diff
@@ -12,10 +12,7 @@ awx/job_output
 awx/public/media
 awx/public/static
 awx/ui/tests/test-results.xml
-awx/ui/static/js/awx.min.js
-awx/ui/static/js/local_settings.json
 awx/ui/client/src/local_settings.json
-awx/ui/static/css/awx.min.css
 awx/main/fixtures
 awx/*.log
 tower/tower_warnings.log
@@ -110,6 +107,7 @@ local/
 *.mo
 requirements/vendor
 .i18n_built
+VERSION
 # AWX python libs populated by requirements.txt
 awx/lib/.deps_built
```

CONTRIBUTING.md

```diff
@@ -1 +1,280 @@
-placeholder
```
AWX
===========
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us on IRC (#ansible-awx on freenode) or the mailing list.
Table of contents
-----------------
* [Contributing Agreement](#dco)
* [Code of Conduct](#code-of-conduct)
* [Setting up the development environment](#setting-up-the-development-environment)
* [Prerequisites](#prerequisites)
* [Local Settings](#local-settings)
* [Building the base image](#building-the-base-image)
* [Building the user interface](#building-the-user-interface)
* [Starting up the development environment](#starting-up-the-development-environment)
* [Starting the development environment at the container shell](#starting-the-development-environment-at-the-container-shell)
* [Using the development environment](#using-the-development-environment)
* [What should I work on?](#what-should-i-work-on)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Reporting Issues](#reporting-issues)
* [How issues are resolved](#how-issues-are-resolved)
* [Ansible Issue Bot](#ansible-issue-bot)
DCO
===
All contributors must use `git commit --signoff` for any commit to be merged; usage of `--signoff` constitutes agreement with the terms of DCO 1.1. Any contribution that does not have such a signoff will not be merged.
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
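The signoff workflow can be exercised in a throwaway repository (the repo, identity, and message below are purely illustrative): `--signoff` appends a `Signed-off-by:` trailer built from your configured git identity.

```shell
set -e
# Throwaway demo repo; nothing here touches your real checkout.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Dev Eloper"
git config user.email "dev@example.com"
echo demo > file.txt
git add file.txt
# --signoff appends "Signed-off-by: Dev Eloper <dev@example.com>"
git commit -q --signoff -m "Add demo file"
# Show the full commit message, trailer included
git log -1 --format=%B
```

You can also set `format.signOff` in your git config if you prefer not to type the flag every time.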
Code of Conduct
===============
All contributors are expected to adhere to the Ansible Community Code of Conduct: http://docs.ansible.com/ansible/latest/community/code_of_conduct.html
Setting up the development environment
======================================
The AWX development environment workflow and toolchain is based on Docker and the `docker-compose` tool, which contain the dependencies, services, and databases necessary to run everything. The local source tree is bind-mounted into the container, making it possible to observe changes while developing.
Prerequisites
-------------
`docker` and `docker-compose` are required for starting the development services. On Linux you can generally find these in your distro's packaging, but be aware that Docker maintains separate repositories that track the latest releases more closely. For macOS and Windows, we recommend Docker for Mac (https://www.docker.com/docker-mac) and Docker for Windows (https://www.docker.com/docker-windows) respectively; both automatically come with `docker-compose`.
* Fedora: https://docs.docker.com/engine/installation/linux/docker-ce/fedora/
* CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/
* Ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
* Debian: https://docs.docker.com/engine/installation/linux/docker-ce/debian/
* Arch: https://wiki.archlinux.org/index.php/Docker
For `docker-compose` you may need or choose to install it separately:

    (host)$ pip install docker-compose
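As a quick preflight, a small script along these lines (a sketch, not part of the AWX tooling) reports which of the prerequisites are missing from your `PATH`:

```shell
# Preflight sketch: report any prerequisite missing from PATH.
missing=""
for cmd in docker docker-compose; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -n "$missing" ]; then
  echo "Missing:$missing -- install before continuing"
else
  echo "All prerequisites found"
fi
```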
Local Settings
--------------
In development mode (i.e. when running from a source checkout), AWX will import the file `awx/settings/local_settings.py` and combine it with the defaults in `awx/settings/defaults.py`. This file is required for starting the development environment, and startup will fail if it's not provided.

An example file that works with the `docker-compose` tooling is provided. Make a copy of it and edit as needed (the defaults are usually fine):

    (host)$ cp awx/settings/local_settings.py.docker_compose awx/settings/local_settings.py
Building the base image
-----------------------
The AWX base container image (found in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and
symbolic links into the development environment that make running the services easy. You'll first need to build the image:

    (host)$ make docker-compose-build
The image will only need to be rebuilt if the requirements or OS dependencies change. A core concept about this image is that it relies
on having your local development environment mapped in.
Building the user interface
---------------------------
> AWX requires the 6.x LTS version of Node and 3.x LTS NPM
In order for the AWX user interface to load from the development environment it must be built:

    (host)$ make ui-devel
When developing features and fixes for the user interface you can find more detail here: [UI Developer README](awx/ui/README.md)
Starting up the development environment
----------------------------------------------
There are several ways of starting the development environment depending on your desired workflow. The easiest and most common way is with:

    (host)$ make docker-compose
This utilizes the image you built in the previous step and will automatically start all required services and dependent containers. You'll
be able to watch log messages and events as they come through.
The Makefile assumes that the image you built is tagged with your current branch. This allows you to pre-build images for different contexts, but you may want to use a particular branch's image (for instance, if you are developing a PR from a branch based on the integration branch):

    (host)$ COMPOSE_TAG=devel make docker-compose
Starting the development environment at the container shell
-----------------------------------------------------------
Oftentimes you'll want to start the development environment without immediately starting all services, and instead be taken directly to a shell:

    (host)$ make docker-compose-test
From here you'll need to bootstrap the development environment before it is usable (the `docker-compose` make target does this step automatically):

    (container)$ /bootstrap_development.sh
From here you can start each service individually, or start all services in a pre-configured tmux session:

    (container)# cd /awx_devel
    (container)# make server
Using the development environment
---------------------------------
With the development environment running, there are a few optional steps to pre-populate it with data. If you are using the `docker-compose` method above, you'll first need a shell in the container:

    (host)$ docker exec -it tools_awx_1 bash
Create a superuser account:

    (container)# awx-manage createsuperuser

Preload AWX with demo data:

    (container)# awx-manage create_preload_data
This information will persist in the database running in the `tools_postgres_1` container until that container is removed. You may periodically need to recreate the container and its database if the schema changes in an upstream commit.

If you have built the UI, you should now be able to visit and log in to the AWX user interface at https://localhost:8043 or http://localhost:8013. If not, you can visit the API directly in your browser at https://localhost:8043/api/ or http://localhost:8013/api/.

When working on the AWX source code, your changes will auto-reload, with the exception of any background tasks that run in Celery.

Occasionally it may be necessary to purge any containers and images that have accumulated:

    (host)$ make docker-clean

There are a host of other shortcuts, tools, and container configurations in the Makefile designed for various purposes. Feel free to explore.
What should I work on?
======================
We list our specs in `/docs`. `/docs/current` contains things we are actively working on; `/docs/future` holds ideas for future work and the direction we want that work to take. Fixing bugs, translations, and updates to documentation are also appreciated.
Be aware that if you are working in a part of the codebase that is going through active development your changes may be rejected or you may be asked to
rebase them. A good idea before starting work is to have a discussion with us on IRC or the mailing list.
Submitting Pull Requests
========================
Fixes and features for AWX go through the GitHub pull request interface. There are a few things that can be done to help the visibility of your change and increase the likelihood that it will be accepted:
> Add UI detail to these
* No issues when running linters/code checkers
* Python: flake8: `(container)/awx_devel$ make flake8`
  * JavaScript: JSHint: `(container)/awx_devel$ make jshint`
* No issues from unit tests
* Python: py.test: `(container)/awx_devel$ make test`
* JavaScript: Jasmine: `(container)/awx_devel$ make ui-test-ci`
* Write tests for new functionality, update/add tests for bug fixes
* Make the smallest change possible
* Write good commit messages: https://chris.beams.io/posts/git-commit/
It's generally a good idea to discuss features with us first by engaging us in IRC or on the mailing list, especially if you are unsure if it's a good
fit.
We like to keep our commit history clean and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase` rather than
`git pull` and `git rebase` rather than `git merge`.
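The difference this makes can be seen in a throwaway repository (branch and file names below are illustrative): after rebasing, the feature branch's history is linear and contains no merge commits for a reviewer to wade through.

```shell
set -e
# Throwaway demo repo (requires git >= 2.28 for `init -b`).
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b devel
git config user.name "Dev Eloper"
git config user.email "dev@example.com"
echo base > base.txt;    git add base.txt;    git commit -q -m "base"
git checkout -qb my-feature
echo work > feature.txt; git add feature.txt; git commit -q -m "feature work"
git checkout -q devel
echo more > more.txt;    git add more.txt;    git commit -q -m "devel moves on"
# Replay the feature commits on top of devel instead of merging devel in
git checkout -q my-feature
git rebase -q devel
git log --merges --oneline   # prints nothing: the history has no merge commits
```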
Sometimes it might take us a while to fully review your PR. We try to keep the `devel` branch in good working order, so we review requests carefully. Please be patient.
All submitted PRs will have the linter and unit tests run against them and the status reported in the PR.
Reporting Issues
================
Use the GitHub issue tracker for filing bugs. To save time and help us respond to issues quickly, make sure to fill out as much of the issue template as possible. Version information and an accurate reproduction scenario are critical to helping us identify the problem.
When reporting issues for the UI we also appreciate screenshots and any error messages from the web browser's console. It's not unusual for browser extensions and plugins to cause problems; reporting those will also help speed up analyzing and resolving UI bugs.
For the API and backend services, please capture all of the logs that you can from the time the problem was occurring.
Don't use the issue tracker to get help on how to do something - please use the mailing list and IRC for that.
How issues are resolved
-----------------------
We triage issues into high, medium, and low priority and tag them with the relevant component (api, ui, installer, etc.). We will typically focus on high priority issues first. There aren't hard and fast rules for determining the severity of an issue, but generally high priority issues have an increased likelihood of breaking existing functionality and/or negatively impacting a large number of users.
If your issue isn't considered `high` priority then please be patient as it may take some time to get to your report.
Before opening a new issue, please use the issue search feature to see if it's already been reported. If you have any extra detail to provide, please comment. Rather than posting a "me too" comment, consider giving the issue a "thumbs up" on GitHub.
Ansible Issue Bot
-----------------
> Fill in

INSTALL.md

```diff
@@ -0,0 +1,2 @@
+Installing AWX
+==============
```

ISSUE_TEMPLATE.md

```diff
@@ -5,7 +5,7 @@
 ### Environment
 <!--
-* Tower version: X.Y.Z
+* AWX version: X.Y.Z
 * Ansible version: X.Y.Z
 * Operating System:
 * Web Browser:
```

MANIFEST.in

```diff
@@ -19,9 +19,10 @@ include tools/scripts/request_tower_configuration.sh
 include tools/scripts/request_tower_configuration.ps1
 include tools/scripts/ansible-tower-service
 include tools/scripts/failure-event-handler
-include tools/scripts/tower-python
+include tools/scripts/awx-python
 include awx/playbooks/library/mkfifo.py
 include tools/sosreport/*
+include VERSION
 include COPYING
 include Makefile
 prune awx/public
```

Makefile

```diff
@@ -9,7 +9,8 @@ NPM_BIN ?= npm
 DEPS_SCRIPT ?= packaging/bundle/deps.py
 GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
 MANAGEMENT_COMMAND ?= awx-manage
-GCLOUD_AUTH ?=
+IMAGE_REPOSITORY_AUTH ?=
+IMAGE_REPOSITORY_BASE ?= https://gcr.io
 VERSION=$(shell git describe --long)
 VERSION3=$(shell git describe --long | sed 's/\-g.*//')
@@ -24,7 +25,7 @@ VENV_BASE ?= /venv
 SCL_PREFIX ?=
 CELERY_SCHEDULE_FILE ?= /celerybeat-schedule
-DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering/
+DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering
 # Python packages to install only from source (not from binary wheels)
 # Comma separated list
 SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
@@ -44,45 +45,39 @@ DATE := $(shell date -u +%Y%m%d%H%M)
 NAME ?= awx
 GIT_REMOTE_URL = $(shell git config --get remote.origin.url)
 ifeq ($(OFFICIAL),yes)
-RELEASE ?= 1
-AW_REPO_URL ?= http://releases.ansible.com/ansible-tower
+VERSION_TARGET ?= $(RELEASE_VERSION)
 else
-RELEASE ?= 0.git$(shell git describe --long | cut -d - -f 2-2)
-AW_REPO_URL ?= http://jenkins.testing.ansible.com/ansible-tower_nightlies_f8b8c5588b2505970227a7b0900ef69040ad5a00/$(GIT_BRANCH)
+VERSION_TARGET ?= $(VERSION3DOT)
 endif
 # TAR build parameters
 ifeq ($(OFFICIAL),yes)
-SETUP_TAR_NAME=$(NAME)-setup-$(RELEASE_VERSION)
 SDIST_TAR_NAME=$(NAME)-$(RELEASE_VERSION)
+WHEEL_NAME=$(NAME)-$(RELEASE_VERSION)
 else
-SETUP_TAR_NAME=$(NAME)-setup-$(RELEASE_VERSION)-$(RELEASE)
-SDIST_TAR_NAME=$(NAME)-$(RELEASE_VERSION)-$(RELEASE)
+SDIST_TAR_NAME=$(NAME)-$(VERSION3DOT)
+WHEEL_NAME=$(NAME)-$(VERSION3DOT)
 endif
 SDIST_COMMAND ?= sdist
+WHEEL_COMMAND ?= bdist_wheel
 SDIST_TAR_FILE ?= $(SDIST_TAR_NAME).tar.gz
+WHEEL_FILE ?= $(WHEEL_NAME)-py2-none-any.whl
-SETUP_TAR_FILE=$(SETUP_TAR_NAME).tar.gz
-SETUP_TAR_LINK=$(NAME)-setup-latest.tar.gz
-SETUP_TAR_CHECKSUM=$(NAME)-setup-CHECKSUM
 # UI flag files
 UI_DEPS_FLAG_FILE = awx/ui/.deps_built
 UI_RELEASE_FLAG_FILE = awx/ui/.release_built
-.DEFAULT_GOAL := build
+I18N_FLAG_FILE = .i18n_built
-.PHONY: clean clean-tmp clean-venv rebase push requirements requirements_dev \
+.PHONY: clean clean-tmp clean-venv requirements requirements_dev \
 	develop refresh adduser migrate dbchange dbshell runserver celeryd \
 	receiver test test_unit test_ansible test_coverage coverage_html \
-	test_jenkins dev_build release_build release_clean sdist rpmtar mock-rpm \
-	mock-srpm rpm-sign deb deb-src debian debsign pbuilder \
-	reprepro setup_tarball virtualbox-ovf virtualbox-centos-7 \
-	virtualbox-centos-6 clean-bundle setup_bundle_tarball \
+	dev_build release_build release_clean sdist \
 	ui-docker-machine ui-docker ui-release ui-devel \
-	ui-test ui-deps ui-test-ci ui-test-saucelabs jlaska
+	ui-test ui-deps ui-test-ci ui-test-saucelabs VERSION
 # remove ui build artifacts
 clean-ui:
@@ -113,6 +108,7 @@ clean: clean-ui clean-dist
 	rm -rf requirements/vendor
 	rm -rf tmp
 	rm -rf $(I18N_FLAG_FILE)
+	rm -f VERSION
 	mkdir tmp
 	rm -rf build $(NAME)-$(VERSION) *.egg-info
 	find . -type f -regex ".*\.py[co]$$" -delete
@@ -125,14 +121,6 @@ guard-%:
 	    exit 1; \
 	fi
-# Fetch from origin, rebase local commits on top of origin commits.
-rebase:
-	git pull --rebase origin master
-# Push changes to origin.
-push:
-	git push origin master
 virtualenv: virtualenv_ansible virtualenv_awx
 virtualenv_ansible:
@@ -143,7 +131,7 @@ virtualenv_ansible:
 	if [ ! -d "$(VENV_BASE)/ansible" ]; then \
 		virtualenv --system-site-packages $(VENV_BASE)/ansible && \
 		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed six packaging appdirs && \
-		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
+		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==36.0.1 && \
 		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
 	fi; \
 	fi
@@ -156,7 +144,7 @@ virtualenv_awx:
 	if [ ! -d "$(VENV_BASE)/awx" ]; then \
 		virtualenv --system-site-packages $(VENV_BASE)/awx && \
 		$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed six packaging appdirs && \
-		$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
+		$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==36.0.1 && \
 		$(VENV_BASE)/awx/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
 	fi; \
 	fi
@@ -217,18 +205,19 @@ version_file:
 	python -c "import awx as awx; print awx.__version__" > /var/lib/awx/.awx_version
 # Do any one-time init tasks.
+comma := ,
 init:
 	if [ "$(VENV_BASE)" ]; then \
 		. $(VENV_BASE)/awx/bin/activate; \
 	fi; \
-	$(MANAGEMENT_COMMAND) register_instance --hostname=$(COMPOSE_HOST); \
+	$(MANAGEMENT_COMMAND) provision_instance --hostname=$(COMPOSE_HOST); \
 	$(MANAGEMENT_COMMAND) register_queue --queuename=tower --hostnames=$(COMPOSE_HOST);\
-	if [ "$(EXTRA_GROUP_QUEUES)" == "thepentagon" ]; then \
-		$(MANAGEMENT_COMMAND) register_instance --hostname=isolated; \
+	if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
+		$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
 		$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
 		$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat > /root/.ssh/authorized_keys'; \
-	elif [ "$(EXTRA_GROUP_QUEUES)" != "" ]; then \
-		$(MANAGEMENT_COMMAND) register_queue --queuename=$(EXTRA_GROUP_QUEUES) --hostnames=$(COMPOSE_HOST); \
+	elif [ "$(AWX_GROUP_QUEUES)" != "tower" ]; then \
+		$(MANAGEMENT_COMMAND) register_queue --queuename=$(firstword $(subst $(comma), ,$(AWX_GROUP_QUEUES))) --hostnames=$(COMPOSE_HOST); \
 	fi;
 # Refresh development environment after pulling new code.
@@ -331,7 +320,7 @@ celeryd:
 	@if [ "$(VENV_BASE)" ]; then \
 		. $(VENV_BASE)/awx/bin/activate; \
 	fi; \
-	$(PYTHON) manage.py celeryd -l DEBUG -B -Ofair --autoreload --autoscale=100,4 --schedule=$(CELERY_SCHEDULE_FILE) -Q tower_scheduler,tower_broadcast_all,tower,$(COMPOSE_HOST),$(EXTRA_GROUP_QUEUES) -n celery@$(COMPOSE_HOST)
+	$(PYTHON) manage.py celeryd -l DEBUG -B -Ofair --autoreload --autoscale=100,4 --schedule=$(CELERY_SCHEDULE_FILE) -Q tower_scheduler,tower_broadcast_all,$(COMPOSE_HOST),$(AWX_GROUP_QUEUES) -n celery@$(COMPOSE_HOST)
 	#$(PYTHON) manage.py celery multi show projects jobs default -l DEBUG -Q:projects projects -Q:jobs jobs -Q:default default -c:projects 1 -c:jobs 3 -c:default 3 -Ofair -B --schedule=$(CELERY_SCHEDULE_FILE)
 # Run to start the zeromq callback receiver
@@ -408,10 +397,6 @@ coverage_html:
 test_tox:
 	tox -v
-# Run unit tests to produce output for Jenkins.
-# Alias existing make target so old versions run against Jekins the same way
-test_jenkins : test_coverage
 # Make fake data
 DATA_GEN_PRESET = ""
 bulk_data:
@@ -531,11 +516,11 @@ dev_build:
 release_build:
 	$(PYTHON) setup.py release_build
-dist/$(SDIST_TAR_FILE): ui-release
-	BUILD="$(BUILD)" $(PYTHON) setup.py $(SDIST_COMMAND)
+dist/$(SDIST_TAR_FILE): ui-release VERSION
+	$(PYTHON) setup.py $(SDIST_COMMAND)
-dist/ansible-tower.tar.gz: ui-release
-	OFFICIAL="yes" $(PYTHON) setup.py sdist
+dist/$(WHEEL_FILE): ui-release
+	$(PYTHON) setup.py $(WHEEL_COMMAND)
 sdist: dist/$(SDIST_TAR_FILE)
 	@echo "#############################################"
@@ -543,18 +528,24 @@ sdist: dist/$(SDIST_TAR_FILE)
 	@echo dist/$(SDIST_TAR_FILE)
 	@echo "#############################################"
+wheel: dist/$(WHEEL_FILE)
+	@echo "#############################################"
+	@echo "Artifacts:"
+	@echo dist/$(WHEEL_FILE)
+	@echo "#############################################"
 # Build setup bundle tarball
 setup-bundle-build:
 	mkdir -p $@
 docker-auth:
-	if [ "$(GCLOUD_AUTH)" ]; then \
-		docker login -u oauth2accesstoken -p "$(GCLOUD_AUTH)" https://gcr.io; \
+	if [ "$(IMAGE_REPOSITORY_AUTH)" ]; then \
+		docker login -u oauth2accesstoken -p "$(IMAGE_REPOSITORY_AUTH)" $(IMAGE_REPOSITORY_BASE); \
 	fi;
 # Docker isolated rampart
 docker-isolated:
-	TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml create
+	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml create
 	docker start tools_awx_1
 	docker start tools_isolated_1
 	if [ "`docker exec -i -t tools_isolated_1 cat /root/.ssh/authorized_keys`" == "`docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub`" ]; then \
@@ -562,29 +553,31 @@ docker-isolated:
 	else \
 		docker exec "tools_isolated_1" bash -c "mkdir -p /root/.ssh && rm -f /root/.ssh/authorized_keys && echo $$(docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"; \
 	fi
-	TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
+	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
 # Docker Compose Development environment
 docker-compose: docker-auth
-	TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml up --no-recreate awx
+	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml up --no-recreate awx
 docker-compose-cluster: docker-auth
-	TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose-cluster.yml up
+	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml up
 docker-compose-test: docker-auth
-	cd tools && TAG=$(COMPOSE_TAG) docker-compose run --rm --service-ports awx /bin/bash
+	cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /bin/bash
-docker-compose-build: awx-devel-build awx-isolated-build
+docker-compose-build: awx-devel-build
+# Base development image build
 awx-devel-build:
 	docker build -t ansible/awx_devel -f tools/docker-compose/Dockerfile .
-	docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)awx_devel:$(COMPOSE_TAG)
-	#docker push $(DEV_DOCKER_TAG_BASE)awx_devel:$(COMPOSE_TAG)
+	docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
+	#docker push $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
+# For use when developing on "isolated" AWX deployments
 awx-isolated-build:
 	docker build -t ansible/awx_isolated -f tools/docker-isolated/Dockerfile .
-	docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)awx_isolated:$(COMPOSE_TAG)
-	#docker push $(DEV_DOCKER_TAG_BASE)awx_isolated:$(COMPOSE_TAG)
+	docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
+	#docker push $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
 MACHINE?=default
 docker-clean:
@@ -596,10 +589,10 @@ docker-refresh: docker-clean docker-compose
 # Docker Development Environment with Elastic Stack Connected
 docker-compose-elk: docker-auth
-	TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
 docker-compose-cluster-elk: docker-auth
-	TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
 clean-elk:
 	docker stop tools_kibana_1
@@ -611,3 +604,13 @@ clean-elk:
 psql-container:
 	docker run -it --net tools_default --rm postgres:9.4.1 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
+VERSION:
+	echo $(VERSION_TARGET) > $@
+production-openshift-image: sdist
+	cat installer/openshift/Dockerfile | sed "s/{{ version }}/$(VERSION_TARGET)/g" | sed "s/{{ tar }}/$(SDIST_TAR_FILE)/g" > ./Dockerfile.production
+	cp installer/openshift/Dockerfile.celery ./Dockerfile.celery.production
+	docker build -t awx_web -f ./Dockerfile.production .
+	docker build -t awx_task -f ./Dockerfile.celery.production .
```

README.md

```diff
@@ -1,21 +1,16 @@
-[![Build Status](http://jenkins.testing.ansible.com/buildStatus/icon?job=Test_Tower_Unittest)](http://jenkins.testing.ansible.com/job/Test_Tower_Unittest)
-[![Requirements Status](https://requires.io/github/ansible/ansible-tower/requirements.svg?branch=devel)](https://requires.io/github/ansible/ansible-tower/requirements/?branch=devel)
+[![Devel Requirements Status](https://requires.io/github/ansible/awx/requirements.svg?branch=devel)](https://requires.io/github/ansible/awx/requirements/?branch=devel)
-Ansible Tower
+AWX
 =============
-Tower provides a web-based user interface, REST API and task engine built on top of
+AWX provides a web-based user interface, REST API and task engine built on top of
 Ansible.
 Resources
 ---------
-Refer to `CONTRIBUTING.md` to get started developing, testing and building Tower.
+Refer to `CONTRIBUTING.md` to get started developing, testing and building AWX.
-Refer to `setup/README.md` to get started deploying Tower.
+Refer to `INSTALL.md` to get started deploying AWX.
-Refer to `docs/build_system.md` for more about Jenkins and installing nightly builds (as opposed to running from source).
+Refer to `LOCALIZATION.md` for translation and localization help.
-Refer to `docs/release_process.md` for information on the steps involved in creating a release.
-Refer to http://docs.ansible.com/ansible-tower/index.html for information on installing/upgrading, setup, troubleshooting, and much more.
```
@@ -7,7 +7,7 @@ import warnings
 from pkg_resources import get_distribution
-__version__ = get_distribution('ansible-awx').version
+__version__ = get_distribution('awx').version
 __all__ = ['__version__']
@@ -20,7 +20,7 @@ from django.utils.translation import ugettext_lazy as _
 from rest_framework.exceptions import ParseError, PermissionDenied
 from rest_framework.filters import BaseFilterBackend
-# Ansible Tower
+# AWX
 from awx.main.utils import get_type_for_model, to_python_boolean
 from awx.main.models.credential import CredentialType
 from awx.main.models.rbac import RoleAncestorEntry
@@ -627,6 +627,13 @@ class SubListAttachDetachAPIView(SubListCreateAttachDetachAPIView):
                             status=status.HTTP_400_BAD_REQUEST)
         return super(SubListAttachDetachAPIView, self).post(request, *args, **kwargs)
+
+    def update_raw_data(self, data):
+        request_method = getattr(self, '_raw_data_request_method', None)
+        response_status = getattr(self, '_raw_data_response_status', 0)
+        if request_method == 'POST' and response_status in xrange(400, 500):
+            return super(SubListAttachDetachAPIView, self).update_raw_data(data)
+        return {'id': None}
+
 class DeleteLastUnattachLabelMixin(object):
     '''
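The `update_raw_data` override added in this hunk pre-populates the browsable API's raw-data form only when a POST just failed with a client error; any other case resets the form. A standalone sketch of that gate (illustrative names, not the AWX API; `range` stands in for the Python 2 `xrange` used in the diff):

```python
def raw_data_for_retry(request_method, response_status, data):
    """Keep the submitted payload in the form only when a POST just
    failed with a 4xx, so the user can fix it and resubmit."""
    if request_method == 'POST' and response_status in range(400, 500):
        return data
    # Any other method or status: collapse the form to a bare stub.
    return {'id': None}
```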
@@ -16,7 +16,7 @@ from rest_framework import serializers
 from rest_framework.relations import RelatedField, ManyRelatedField
 from rest_framework.request import clone_request
-# Ansible Tower
+# AWX
 from awx.main.models import InventorySource, NotificationTemplate
@@ -57,8 +57,10 @@ class JSONParser(parsers.JSONParser):
         try:
             data = stream.read().decode(encoding)
+            if not data:
+                return {}
             obj = json.loads(data, object_pairs_hook=OrderedDict)
-            if not isinstance(obj, dict):
+            if not isinstance(obj, dict) and obj is not None:
                 raise ParseError(_('JSON parse error - not a JSON object'))
             return obj
         except ValueError as exc:
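This hunk makes two behaviors explicit: an empty request body now parses to `{}`, and a bare JSON `null` is tolerated alongside objects. A minimal re-implementation of just that decision logic (simplified; the real parser raises DRF's `ParseError` rather than `ValueError`):

```python
import json
from collections import OrderedDict

def parse_json_body(data):
    """Parse a request body the way the patched JSONParser does:
    empty body -> {}, JSON object or null -> accepted, anything else rejected."""
    if not data:
        return {}
    obj = json.loads(data, object_pairs_hook=OrderedDict)
    if not isinstance(obj, dict) and obj is not None:
        raise ValueError('JSON parse error - not a JSON object')
    return obj
```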
@@ -1203,7 +1203,6 @@ class InventoryScriptSerializer(InventorySerializer):
 class HostSerializer(BaseSerializerWithVariables):
     show_capabilities = ['edit', 'delete']
-    insights_system_id = serializers.CharField(allow_blank=True, allow_null=True, required=False, default=None)
     class Meta:
         model = Host
@@ -1289,6 +1288,11 @@ class HostSerializer(BaseSerializerWithVariables):
         host, port = self._get_host_port_from_name(name)
         return value
+
+    def validate_inventory(self, value):
+        if value.kind == 'smart':
+            raise serializers.ValidationError({"detail": _("Cannot create Host for Smart Inventory")})
+        return value
+
     def validate(self, attrs):
         name = force_text(attrs.get('name', self.instance and self.instance.name or ''))
         host, port = self._get_host_port_from_name(name)
@@ -1407,6 +1411,11 @@ class GroupSerializer(BaseSerializerWithVariables):
             raise serializers.ValidationError(_('Invalid group name.'))
         return value
+
+    def validate_inventory(self, value):
+        if value.kind == 'smart':
+            raise serializers.ValidationError({"detail": _("Cannot create Group for Smart Inventory")})
+        return value
+
     def to_representation(self, obj):
         ret = super(GroupSerializer, self).to_representation(obj)
         if obj is not None and 'inventory' in ret and not obj.inventory:
@@ -1660,27 +1669,24 @@ class InventorySourceSerializer(UnifiedJobTemplateSerializer, InventorySourceOpt
             raise serializers.ValidationError(_("Setting not compatible with existing schedules."))
         return value
+
+    def validate_inventory(self, value):
+        if value.kind == 'smart':
+            raise serializers.ValidationError({"detail": _("Cannot create Inventory Source for Smart Inventory")})
+        return value
+
     def validate(self, attrs):
         def get_field_from_model_or_attrs(fd):
             return attrs.get(fd, self.instance and getattr(self.instance, fd) or None)
-        update_on_launch = attrs.get('update_on_launch', self.instance and self.instance.update_on_launch)
-        update_on_project_update = get_field_from_model_or_attrs('update_on_project_update')
-        source = get_field_from_model_or_attrs('source')
-        overwrite_vars = get_field_from_model_or_attrs('overwrite_vars')
-        if attrs.get('source_path', None) and source!='scm':
-            raise serializers.ValidationError({"detail": _("Cannot set source_path if not SCM type.")})
-        elif update_on_launch and source=='scm' and update_on_project_update:
-            raise serializers.ValidationError({"detail": _(
-                "Cannot update SCM-based inventory source on launch if set to update on project update. "
-                "Instead, configure the corresponding source project to update on launch.")})
-        elif not self.instance and attrs.get('inventory', None) and InventorySource.objects.filter(
-                inventory=attrs.get('inventory', None), update_on_project_update=True, source='scm').exists():
-            raise serializers.ValidationError({"detail": _("Inventory controlled by project-following SCM.")})
-        elif source=='scm' and not overwrite_vars:
-            raise serializers.ValidationError({"detail": _(
-                "SCM type sources must set `overwrite_vars` to `true`.")})
+        if get_field_from_model_or_attrs('source') != 'scm':
+            redundant_scm_fields = filter(
+                lambda x: attrs.get(x, None),
+                ['source_project', 'source_path', 'update_on_project_update']
+            )
+            if redundant_scm_fields:
+                raise serializers.ValidationError(
+                    {"detail": _("Cannot set %s if not SCM type." % ' '.join(redundant_scm_fields))}
+                )
         return super(InventorySourceSerializer, self).validate(attrs)
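One detail worth noting in the rewritten validation: it relies on Python 2's `filter()` returning a list, so `if redundant_scm_fields:` is a real emptiness check. Under Python 3, `filter()` returns a lazy iterator that is always truthy, so the same code would need a `list()` wrapper. A standalone illustration (hypothetical helper, not the serializer itself):

```python
def redundant_scm_fields(attrs):
    """List the SCM-only fields that were set, as the patched validate() does.
    The list() call keeps the emptiness check valid on Python 3 as well."""
    return list(filter(
        lambda x: attrs.get(x, None),
        ['source_project', 'source_path', 'update_on_project_update']
    ))
```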
@@ -1763,17 +1769,15 @@ class RoleSerializer(BaseSerializer):
     def to_representation(self, obj):
         ret = super(RoleSerializer, self).to_representation(obj)
-        def spacify_type_name(cls):
-            return re.sub(r'([a-z])([A-Z])', '\g<1> \g<2>', cls.__name__)
         if obj.object_id:
             content_object = obj.content_object
             if hasattr(content_object, 'username'):
                 ret['summary_fields']['resource_name'] = obj.content_object.username
             if hasattr(content_object, 'name'):
                 ret['summary_fields']['resource_name'] = obj.content_object.name
-            ret['summary_fields']['resource_type'] = obj.content_type.name
-            ret['summary_fields']['resource_type_display_name'] = spacify_type_name(obj.content_type.model_class())
+            content_model = obj.content_type.model_class()
+            ret['summary_fields']['resource_type'] = get_type_for_model(content_model)
+            ret['summary_fields']['resource_type_display_name'] = content_model._meta.verbose_name.title()
         ret.pop('created')
         ret.pop('modified')
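The removed `spacify_type_name` helper split CamelCase names with a regex; the replacement asks Django for the model's `verbose_name`, which also honors explicit `Meta` overrides. For comparison, the old regex approach as a standalone function (taking a plain string instead of a class):

```python
import re

def spacify_type_name(name):
    """Old approach: insert a space at every lower->upper case boundary."""
    return re.sub(r'([a-z])([A-Z])', r'\g<1> \g<2>', name)
```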
@@ -1826,7 +1830,7 @@ class ResourceAccessListElementSerializer(UserSerializer):
         role_dict = { 'id': role.id, 'name': role.name, 'description': role.description}
         try:
             role_dict['resource_name'] = role.content_object.name
-            role_dict['resource_type'] = role.content_type.name
+            role_dict['resource_type'] = get_type_for_model(role.content_type.model_class())
             role_dict['related'] = reverse_gfk(role.content_object, self.context.get('request'))
         except AttributeError:
             pass
@@ -1854,7 +1858,7 @@ class ResourceAccessListElementSerializer(UserSerializer):
         }
         if role.content_type is not None:
             role_dict['resource_name'] = role.content_object.name
-            role_dict['resource_type'] = role.content_type.name
+            role_dict['resource_type'] = get_type_for_model(role.content_type.model_class())
             role_dict['related'] = reverse_gfk(role.content_object, self.context.get('request'))
             role_dict['user_capabilities'] = {'unattach': requesting_user.can_access(
                 Role, 'unattach', role, team_role, 'parents', data={}, skip_sub_obj_read_check=False)}
@@ -2419,6 +2423,12 @@ class JobTemplateMixin(object):
         if obj.survey_spec is not None and ('name' in obj.survey_spec and 'description' in obj.survey_spec):
             d['survey'] = dict(title=obj.survey_spec['name'], description=obj.survey_spec['description'])
         d['recent_jobs'] = self._recent_jobs(obj)
+        # TODO: remove in 3.3
+        if self.version == 1 and 'vault_credential' in d:
+            if d['vault_credential'].get('kind','') == 'vault':
+                d['vault_credential']['kind'] = 'ssh'
         return d
@@ -2460,12 +2470,17 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
         inventory = get_field_from_model_or_attrs('inventory')
         credential = get_field_from_model_or_attrs('credential')
+        vault_credential = get_field_from_model_or_attrs('vault_credential')
         project = get_field_from_model_or_attrs('project')
         prompting_error_message = _("Must either set a default value or ask to prompt on launch.")
         if project is None:
             raise serializers.ValidationError({'project': _("Job types 'run' and 'check' must have assigned a project.")})
-        elif credential is None and not get_field_from_model_or_attrs('ask_credential_on_launch'):
+        elif all([
+            credential is None,
+            vault_credential is None,
+            not get_field_from_model_or_attrs('ask_credential_on_launch'),
+        ]):
             raise serializers.ValidationError({'credential': prompting_error_message})
         elif inventory is None and not get_field_from_model_or_attrs('ask_inventory_on_launch'):
             raise serializers.ValidationError({'inventory': prompting_error_message})
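The `all([...])` form in this hunk means the credential error now fires only when no machine credential, no vault credential, and no prompt-at-launch flag is present. Reduced to a plain predicate (illustrative names, not the serializer API):

```python
def missing_credential(credential, vault_credential, ask_credential_on_launch):
    """True when a job template has no way to obtain any credential."""
    return all([
        credential is None,
        vault_credential is None,
        not ask_credential_on_launch,
    ])
```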
@@ -3388,7 +3403,7 @@ class NotificationTemplateSerializer(BaseSerializer):
                                                   type_field_error[1]))
         if error_list:
             raise serializers.ValidationError(error_list)
-        return attrs
+        return super(NotificationTemplateSerializer, self).validate(attrs)
 class NotificationSerializer(BaseSerializer):
@@ -3647,11 +3662,11 @@ class ActivityStreamSerializer(BaseSerializer):
         for fk, __ in self._local_summarizable_fk_fields:
             if not hasattr(obj, fk):
                 continue
-            allm2m = getattr(obj, fk).all()
-            if getattr(obj, fk).exists():
+            m2m_list = self._get_rel(obj, fk)
+            if m2m_list:
                 rel[fk] = []
                 id_list = []
-                for thisItem in allm2m:
+                for thisItem in m2m_list:
                     if getattr(thisItem, 'id', None) in id_list:
                         continue
                     id_list.append(getattr(thisItem, 'id', None))
@@ -3664,16 +3679,26 @@ class ActivityStreamSerializer(BaseSerializer):
                     rel['unified_job_template'] = thisItem.unified_job_template.get_absolute_url(self.context.get('request'))
         return rel
+
+    def _get_rel(self, obj, fk):
+        related_model = ActivityStream._meta.get_field(fk).related_model
+        related_manager = getattr(obj, fk)
+        if issubclass(related_model, PolymorphicModel) and hasattr(obj, '_prefetched_objects_cache'):
+            # HACK: manually fill PolymorphicModel caches to prevent running query multiple times
+            # unnecessary if django-polymorphic issue #68 is solved
+            if related_manager.prefetch_cache_name not in obj._prefetched_objects_cache:
+                obj._prefetched_objects_cache[related_manager.prefetch_cache_name] = list(related_manager.all())
+        return related_manager.all()
+
     def get_summary_fields(self, obj):
         summary_fields = OrderedDict()
         for fk, related_fields in self._local_summarizable_fk_fields:
             try:
                 if not hasattr(obj, fk):
                     continue
-                allm2m = getattr(obj, fk).all()
-                if getattr(obj, fk).exists():
+                m2m_list = self._get_rel(obj, fk)
+                if m2m_list:
                     summary_fields[fk] = []
-                    for thisItem in allm2m:
+                    for thisItem in m2m_list:
                         if fk == 'job':
                             summary_fields['job_template'] = []
                             job_template_item = {}
@@ -3695,9 +3720,6 @@ class ActivityStreamSerializer(BaseSerializer):
                             fval = getattr(thisItem, field, None)
                             if fval is not None:
                                 thisItemDict[field] = fval
-                        if thisItemDict.get('id', None):
-                            if thisItemDict.get('id', None) in [obj_dict.get('id', None) for obj_dict in summary_fields[fk]]:
-                                continue
                         summary_fields[fk].append(thisItemDict)
             except ObjectDoesNotExist:
                 pass
@@ -1,4 +1,4 @@
-The root of the Ansible Tower REST API.
+The root of the REST API.
 Make a GET request to this resource to obtain information about the available
 API versions.
@@ -1,4 +1,4 @@
-Version 2 of the Ansible Tower REST API.
+Version 2 of the REST API.
 Make a GET request to this resource to obtain a list of all child resources
 available via the API.
@@ -1,15 +1,15 @@
 {% include "api/list_api_view.md" %}
-`host_filter` is available on this endpoint. The filter supports: relational queries, `AND` `OR` boolean logic, as well as expression grouping via `()`.
+`host_filter` is available on this endpoint. The filter supports: relational queries, `and` `or` boolean logic, as well as expression grouping via `()`.
     ?host_filter=name=my_host
-    ?host_filter=name="my host" OR name=my_host
+    ?host_filter=name="my host" or name=my_host
     ?host_filter=groups__name="my group"
-    ?host_filter=name=my_host AND groups__name="my group"
+    ?host_filter=name=my_host and groups__name="my group"
-    ?host_filter=name=my_host AND groups__name="my group"
+    ?host_filter=name=my_host and groups__name="my group"
-    ?host_filter=(name=my_host AND groups__name="my group") OR (name=my_host2 AND groups__name=my_group2)
+    ?host_filter=(name=my_host and groups__name="my group") or (name=my_host2 and groups__name=my_group2)
 `host_filter` can also be used to query JSON data in the related `ansible_facts`. `__` may be used to traverse JSON dictionaries. `[]` may be used to traverse JSON arrays.
     ?host_filter=ansible_facts__ansible_processor_vcpus=8
-    ?host_filter=ansible_facts__ansible_processor_vcpus=8 AND name="my_host" AND ansible_facts__ansible_lo__ipv6[]__scope=host
+    ?host_filter=ansible_facts__ansible_processor_vcpus=8 and name="my_host" and ansible_facts__ansible_lo__ipv6[]__scope=host
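Since `host_filter` expressions contain spaces, quotes, and parentheses, they must be URL-encoded when built outside a browser. A sketch using only the standard library (the `/api/v2/hosts/` path is assumed from context, not stated in this doc fragment):

```python
from urllib.parse import urlencode

# Hypothetical query: hosts named my_host that belong to "my group"
query = urlencode({'host_filter': 'name=my_host and groups__name="my group"'})
url = '/api/v2/hosts/?' + query
```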
@@ -20,7 +20,8 @@ inventory sources:
 * `project_update`: ID of the project update job that was started if this inventory source is an SCM source.
   (interger, read-only, optional)
-> *Note:* All manual inventory sources (source='') will be ignored by the update_inventory_sources endpoint.
+Note: All manual inventory sources (source="") will be ignored by the update_inventory_sources endpoint. This endpoint will not update inventory sources for Smart Inventories.
 Response code from this action will be:
@@ -4,7 +4,7 @@ Make a POST request to this resource to launch the system job template.
 Variables specified inside of the parameter `extra_vars` are passed to the
 system job task as command line parameters. These tasks can be ran manually
-on the host system via the `tower-manage` command.
+on the host system via the `awx-manage` command.
 For example on `cleanup_jobs` and `cleanup_activitystream`:
@@ -18,8 +18,6 @@ from collections import OrderedDict
 # Django
 from django.conf import settings
-from django.contrib.auth.models import User, AnonymousUser
-from django.core.cache import cache
 from django.core.exceptions import FieldError
 from django.db.models import Q, Count, F
 from django.db import IntegrityError, transaction, connection
@@ -28,6 +26,7 @@ from django.utils.encoding import smart_text, force_text
 from django.utils.safestring import mark_safe
 from django.utils.timezone import now
 from django.views.decorators.csrf import csrf_exempt
+from django.views.decorators.cache import never_cache
 from django.template.loader import render_to_string
 from django.core.servers.basehttp import FileWrapper
 from django.http import HttpResponse
@@ -74,6 +73,7 @@ from awx.main.utils import (
     decrypt_field,
 )
 from awx.main.utils.filters import SmartFilter
+from awx.main.utils.insights import filter_insights_api_response
 from awx.api.permissions import * # noqa
 from awx.api.renderers import * # noqa
@@ -127,6 +127,25 @@ class WorkflowsEnforcementMixin(object):
         return super(WorkflowsEnforcementMixin, self).check_permissions(request)
+
+class UnifiedJobDeletionMixin(object):
+    '''
+    Special handling when deleting a running unified job object.
+    '''
+    def destroy(self, request, *args, **kwargs):
+        obj = self.get_object()
+        if not request.user.can_access(self.model, 'delete', obj):
+            raise PermissionDenied()
+        try:
+            if obj.unified_job_node.workflow_job.status in ACTIVE_STATES:
+                raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.'))
+        except self.model.unified_job_node.RelatedObjectDoesNotExist:
+            pass
+        if obj.status in ACTIVE_STATES:
+            raise PermissionDenied(detail=_("Cannot delete running job resource."))
+        obj.delete()
+        return Response(status=status.HTTP_204_NO_CONTENT)
+
 class ApiRootView(APIView):
     authentication_classes = []
@@ -140,7 +159,7 @@ class ApiRootView(APIView):
         v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
         v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
         data = dict(
-            description = _('Ansible Tower REST API'),
+            description = _('AWX REST API'),
             current_version = v2,
             available_versions = dict(v1 = v1, v2 = v2),
         )
@@ -226,16 +245,12 @@ class ApiV1PingView(APIView):
         Everything returned here should be considered public / insecure, as
         this requires no auth and is intended for use by the installer process.
         """
-        active_tasks = cache.get("active_celery_tasks", None)
         response = {
             'ha': is_ha_environment(),
             'version': get_awx_version(),
             'active_node': settings.CLUSTER_HOST_ID,
         }
-        if not isinstance(request.user, AnonymousUser):
-            response['celery_active_tasks'] = json.loads(active_tasks) if active_tasks is not None else None
         response['instances'] = []
         for instance in Instance.objects.all():
             response['instances'].append(dict(node=instance.hostname, heartbeat=instance.modified,
@@ -663,6 +678,7 @@ class AuthTokenView(APIView):
             serializer._data = self.update_raw_data(serializer.data)
         return serializer
+    @never_cache
     def post(self, request):
         serializer = self.get_serializer(data=request.data)
         if serializer.is_valid():
@@ -695,7 +711,8 @@ class AuthTokenView(APIView):
             # Note: This header is normally added in the middleware whenever an
             # auth token is included in the request header.
             headers = {
-                'Auth-Token-Timeout': int(settings.AUTH_TOKEN_EXPIRATION)
+                'Auth-Token-Timeout': int(settings.AUTH_TOKEN_EXPIRATION),
+                'Pragma': 'no-cache',
             }
             return Response({'token': token.key, 'expires': token.expires}, headers=headers)
         if 'username' in request.data:
@@ -1298,21 +1315,12 @@ class ProjectUpdateList(ListAPIView):
     new_in_13 = True
-class ProjectUpdateDetail(RetrieveDestroyAPIView):
+class ProjectUpdateDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView):
     model = ProjectUpdate
     serializer_class = ProjectUpdateSerializer
     new_in_13 = True
-    def destroy(self, request, *args, **kwargs):
-        obj = self.get_object()
-        try:
-            if obj.unified_job_node.workflow_job.status in ACTIVE_STATES:
-                raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.'))
-        except ProjectUpdate.unified_job_node.RelatedObjectDoesNotExist:
-            pass
-        return super(ProjectUpdateDetail, self).destroy(request, *args, **kwargs)
 class ProjectUpdateCancel(RetrieveAPIView):
@@ -1614,7 +1622,18 @@ class CredentialTypeActivityStreamList(ActivityStreamEnforcementMixin, SubListAP
     new_in_api_v2 = True
+# remove in 3.3
+class CredentialViewMixin(object):
+
+    @property
+    def related_search_fields(self):
+        ret = super(CredentialViewMixin, self).related_search_fields
+        if get_request_version(self.request) == 1 and 'credential_type__search' in ret:
+            ret.remove('credential_type__search')
+        return ret
+
-class CredentialList(ListCreateAPIView):
+class CredentialList(CredentialViewMixin, ListCreateAPIView):
     model = Credential
     serializer_class = CredentialSerializerCreate
@@ -1649,7 +1668,7 @@ class CredentialOwnerTeamsList(SubListAPIView):
         return self.model.objects.filter(pk__in=teams)
-class UserCredentialsList(SubListCreateAPIView):
+class UserCredentialsList(CredentialViewMixin, SubListCreateAPIView):
     model = Credential
     serializer_class = UserCredentialSerializerCreate
@@ -1666,7 +1685,7 @@ class UserCredentialsList(SubListCreateAPIView):
         return user_creds & visible_creds
-class TeamCredentialsList(SubListCreateAPIView):
+class TeamCredentialsList(CredentialViewMixin, SubListCreateAPIView):
     model = Credential
     serializer_class = TeamCredentialSerializerCreate
@@ -1683,7 +1702,7 @@ class TeamCredentialsList(SubListCreateAPIView):
         return (team_creds & visible_creds).distinct()
-class OrganizationCredentialList(SubListCreateAPIView):
+class OrganizationCredentialList(CredentialViewMixin, SubListCreateAPIView):
     model = Credential
     serializer_class = OrganizationCredentialSerializerCreate
@@ -1839,7 +1858,7 @@ class InventoryDetail(ControlledByScmMixin, RetrieveUpdateDestroyAPIView):
         if not request.user.can_access(self.model, 'delete', obj):
             raise PermissionDenied()
         try:
-            obj.schedule_deletion()
+            obj.schedule_deletion(getattr(request.user, 'id', None))
             return Response(status=status.HTTP_202_ACCEPTED)
         except RuntimeError, e:
             return Response(dict(error=_("{0}".format(e))), status=status.HTTP_400_BAD_REQUEST)
@@ -1950,6 +1969,10 @@ class InventoryHostsList(SubListCreateAttachDetachAPIView):
     parent_key = 'inventory'
     capabilities_prefetch = ['inventory.admin']
+
+    def get_queryset(self):
+        inventory = self.get_parent_object()
+        return getattrd(inventory, self.relationship).all()
+
 class HostGroupsList(ControlledByScmMixin, SubListCreateAttachDetachAPIView):
     ''' the list of groups a host is directly a member of '''
@@ -2087,19 +2110,22 @@ class HostInsights(GenericAPIView):
         try:
             res = self._get_insights(url, username, password)
         except requests.exceptions.SSLError:
-            return (dict(error=_('SSLError while trying to connect to {}').format(url)), status.HTTP_500_INTERNAL_SERVER_ERROR)
+            return (dict(error=_('SSLError while trying to connect to {}').format(url)), status.HTTP_502_BAD_GATEWAY)
         except requests.exceptions.Timeout:
             return (dict(error=_('Request to {} timed out.').format(url)), status.HTTP_504_GATEWAY_TIMEOUT)
         except requests.exceptions.RequestException as e:
-            return (dict(error=_('Unkown exception {} while trying to GET {}').format(e, url)), status.HTTP_500_INTERNAL_SERVER_ERROR)
+            return (dict(error=_('Unkown exception {} while trying to GET {}').format(e, url)), status.HTTP_502_BAD_GATEWAY)
-        if res.status_code != 200:
-            return (dict(error=_('Failed to gather reports and maintenance plans from Insights API at URL {}. Server responded with {} status code and message {}').format(url, res.status_code, res.content)), status.HTTP_500_INTERNAL_SERVER_ERROR)
+        if res.status_code == 401:
+            return (dict(error=_('Unauthorized access. Please check your Insights Credential username and password.')), status.HTTP_502_BAD_GATEWAY)
+        elif res.status_code != 200:
+            return (dict(error=_('Failed to gather reports and maintenance plans from Insights API at URL {}. Server responded with {} status code and message {}').format(url, res.status_code, res.content)), status.HTTP_502_BAD_GATEWAY)
         try:
-            return (dict(insights_content=res.json()), status.HTTP_200_OK)
+            filtered_insights_content = filter_insights_api_response(res.json())
+            return (dict(insights_content=filtered_insights_content), status.HTTP_200_OK)
         except ValueError:
-            return (dict(error=_('Expected JSON response from Insights but instead got {}').format(res.content)), status.HTTP_500_INTERNAL_SERVER_ERROR)
+            return (dict(error=_('Expected JSON response from Insights but instead got {}').format(res.content)), status.HTTP_502_BAD_GATEWAY)
     def get(self, request, *args, **kwargs):
         host = self.get_object()
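The reworked error handling draws a consistent line: failures reaching or understanding the upstream Insights API are now reported as gateway errors (502/504) instead of generic 500s, and a 401 from Insights is presumed to mean a bad credential. The mapping reduced to a table (failure names here are shorthand, not AWX identifiers; the numeric values match the DRF status constants):

```python
HTTP_502_BAD_GATEWAY = 502
HTTP_504_GATEWAY_TIMEOUT = 504

def insights_error_status(failure):
    """Map an upstream failure mode to the status the patched view returns."""
    return {
        'ssl_error': HTTP_502_BAD_GATEWAY,
        'timeout': HTTP_504_GATEWAY_TIMEOUT,
        'request_exception': HTTP_502_BAD_GATEWAY,
        'unauthorized': HTTP_502_BAD_GATEWAY,   # 401: presumed invalid credential
        'non_200': HTTP_502_BAD_GATEWAY,
        'bad_json': HTTP_502_BAD_GATEWAY,
    }[failure]
```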
@@ -2362,45 +2388,53 @@ class InventoryScriptView(RetrieveAPIView):
         if obj.variables_dict:
             all_group = data.setdefault('all', OrderedDict())
             all_group['vars'] = obj.variables_dict
-        # Add hosts without a group to the all group.
-        groupless_hosts_qs = obj.hosts.filter(groups__isnull=True, **hosts_q).order_by('name')
-        groupless_hosts = list(groupless_hosts_qs.values_list('name', flat=True))
-        if groupless_hosts:
-            all_group = data.setdefault('all', OrderedDict())
-            all_group['hosts'] = groupless_hosts
+        if obj.kind == 'smart':
+            if len(obj.hosts.all()) == 0:
+                return Response({})
+            else:
+                all_group = data.setdefault('all', OrderedDict())
+                smart_hosts_qs = obj.hosts.all().order_by('name')
+                smart_hosts = list(smart_hosts_qs.values_list('name', flat=True))
+                all_group['hosts'] = smart_hosts
+        else:
+            # Add hosts without a group to the all group.
+            groupless_hosts_qs = obj.hosts.filter(groups__isnull=True, **hosts_q).order_by('name')
+            groupless_hosts = list(groupless_hosts_qs.values_list('name', flat=True))
+            if groupless_hosts:
+                all_group = data.setdefault('all', OrderedDict())
+                all_group['hosts'] = groupless_hosts

         # Build in-memory mapping of groups and their hosts.
         group_hosts_kw = dict(group__inventory_id=obj.id, host__inventory_id=obj.id)
         if 'enabled' in hosts_q:
             group_hosts_kw['host__enabled'] = hosts_q['enabled']
         group_hosts_qs = Group.hosts.through.objects.filter(**group_hosts_kw)
         group_hosts_qs = group_hosts_qs.order_by('host__name')
         group_hosts_qs = group_hosts_qs.values_list('group_id', 'host_id', 'host__name')
         group_hosts_map = {}
         for group_id, host_id, host_name in group_hosts_qs:
             group_hostnames = group_hosts_map.setdefault(group_id, [])
             group_hostnames.append(host_name)

         # Build in-memory mapping of groups and their children.
         group_parents_qs = Group.parents.through.objects.filter(
             from_group__inventory_id=obj.id,
             to_group__inventory_id=obj.id,
         )
         group_parents_qs = group_parents_qs.order_by('from_group__name')
         group_parents_qs = group_parents_qs.values_list('from_group_id', 'from_group__name', 'to_group_id')
         group_children_map = {}
         for from_group_id, from_group_name, to_group_id in group_parents_qs:
             group_children = group_children_map.setdefault(to_group_id, [])
             group_children.append(from_group_name)

         # Now use in-memory maps to build up group info.
         for group in obj.groups.all():
             group_info = OrderedDict()
             group_info['hosts'] = group_hosts_map.get(group.id, [])
             group_info['children'] = group_children_map.get(group.id, [])
             group_info['vars'] = group.variables_dict
             data[group.name] = group_info

         if hostvars:
             data.setdefault('_meta', OrderedDict())
@@ -2408,18 +2442,6 @@ class InventoryScriptView(RetrieveAPIView):
             for host in obj.hosts.filter(**hosts_q):
                 data['_meta']['hostvars'][host.name] = host.variables_dict

-        # workaround for Ansible inventory bug (github #3687), localhost
-        # must be explicitly listed in the all group for dynamic inventory
-        # scripts to pick it up.
-        localhost_names = ('localhost', '127.0.0.1', '::1')
-        localhosts_qs = obj.hosts.filter(name__in=localhost_names, **hosts_q)
-        localhosts = list(localhosts_qs.values_list('name', flat=True))
-        if localhosts:
-            all_group = data.setdefault('all', OrderedDict())
-            all_group_hosts = all_group.get('hosts', [])
-            all_group_hosts.extend(localhosts)
-            all_group['hosts'] = sorted(set(all_group_hosts))
         return Response(data)
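The rewritten hunk above builds the dynamic-inventory payload from two in-memory maps (one queryset pass each) instead of per-group queries. A standalone sketch of the same `setdefault` accumulation pattern, with made-up tuples standing in for the `values_list()` rows (the group names and hosts here are illustrative, not from AWX):

```python
from collections import OrderedDict

# Hypothetical (group_id, host_id, host_name) rows, standing in for the
# values_list() queryset in the hunk above.
group_host_rows = [
    (1, 10, 'web01'),
    (1, 11, 'web02'),
    (2, 12, 'db01'),
]

# Same setdefault pattern as the view: one pass builds group_id -> [hostnames].
group_hosts_map = {}
for group_id, host_id, host_name in group_host_rows:
    group_hosts_map.setdefault(group_id, []).append(host_name)

# Assemble a dynamic-inventory style payload from the map.
data = OrderedDict()
for group_id, name in ((1, 'webservers'), (2, 'dbservers')):
    group_info = OrderedDict()
    group_info['hosts'] = group_hosts_map.get(group_id, [])
    data[name] = group_info

print(data['webservers']['hosts'])  # ['web01', 'web02']
```

The win is that group membership is resolved with a single query over the through table rather than one query per group.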
@@ -2494,17 +2516,7 @@ class InventoryInventorySourcesUpdate(RetrieveAPIView):
         failures = 0
         for inventory_source in inventory.inventory_sources.exclude(source=''):
             details = {'inventory_source': inventory_source.pk, 'status': None}
-            can_update = inventory_source.can_update
-            project_update = False
-            if inventory_source.source == 'scm' and inventory_source.update_on_project_update:
-                if not request.user or not request.user.can_access(Project, 'start', inventory_source.source_project):
-                    details['status'] = _('You do not have permission to update project `{}`').format(inventory_source.source_project.name)
-                    can_update = False
-                else:
-                    project_update = True
-            if can_update:
-                if project_update:
-                    details['project_update'] = inventory_source.source_project.update().id
+            if inventory_source.can_update:
                 details['status'] = 'started'
                 details['inventory_update'] = inventory_source.update().id
                 successes += 1
@@ -2532,6 +2544,13 @@ class InventorySourceList(ListCreateAPIView):
     always_allow_superuser = False
     new_in_320 = True

+    @property
+    def allowed_methods(self):
+        methods = super(InventorySourceList, self).allowed_methods
+        if get_request_version(self.request) == 1:
+            methods.remove('POST')
+        return methods


 class InventorySourceDetail(RetrieveUpdateDestroyAPIView):
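The `allowed_methods` override added above hides `POST` from v1 of the versioned API by mutating the list the parent property computes. A minimal sketch of the pattern outside DRF (the class names and `request_version` attribute here are illustrative stand-ins, not AWX's):

```python
# Base class computes a fresh list each access, so the subclass can safely
# call remove() on it without mutating shared state.
class BaseView(object):
    @property
    def allowed_methods(self):
        return ['GET', 'POST', 'OPTIONS']


class InventorySourceListSketch(BaseView):
    request_version = 1  # hypothetical stand-in for get_request_version()

    @property
    def allowed_methods(self):
        methods = super(InventorySourceListSketch, self).allowed_methods
        if self.request_version == 1:
            methods.remove('POST')
        return methods


view = InventorySourceListSketch()
print(view.allowed_methods)  # ['GET', 'OPTIONS']
```

This works only because the parent property returns a new list per call; if it returned a cached list, `remove()` would corrupt it for every caller.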
@@ -2652,21 +2671,12 @@ class InventoryUpdateList(ListAPIView):
     serializer_class = InventoryUpdateListSerializer


-class InventoryUpdateDetail(RetrieveDestroyAPIView):
+class InventoryUpdateDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView):

     model = InventoryUpdate
     serializer_class = InventoryUpdateSerializer
     new_in_14 = True

-    def destroy(self, request, *args, **kwargs):
-        obj = self.get_object()
-        try:
-            if obj.unified_job_node.workflow_job.status in ACTIVE_STATES:
-                raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.'))
-        except InventoryUpdate.unified_job_node.RelatedObjectDoesNotExist:
-            pass
-        return super(InventoryUpdateDetail, self).destroy(request, *args, **kwargs)


 class InventoryUpdateCancel(RetrieveAPIView):
@@ -2723,6 +2733,7 @@ class JobTemplateDetail(RetrieveUpdateDestroyAPIView):
 class JobTemplateLaunch(RetrieveAPIView, GenericAPIView):

     model = JobTemplate
+    metadata_class = JobTypeMetadata
     serializer_class = JobLaunchSerializer
     is_job_start = True
     always_allow_superuser = False
@@ -2759,12 +2770,14 @@ class JobTemplateLaunch(RetrieveAPIView, GenericAPIView):
         obj = self.get_object()

         ignored_fields = {}

-        if 'credential' not in request.data and 'credential_id' in request.data:
-            request.data['credential'] = request.data['credential_id']
-        if 'inventory' not in request.data and 'inventory_id' in request.data:
-            request.data['inventory'] = request.data['inventory_id']
+        for fd in ('credential', 'vault_credential', 'inventory'):
+            id_fd = '{}_id'.format(fd)
+            if fd not in request.data and id_fd in request.data:
+                request.data[fd] = request.data[id_fd]

-        if get_request_version(self.request) == 1:  # TODO: remove in 3.3
+        if get_request_version(self.request) == 1 and 'extra_credentials' in request.data:  # TODO: remove in 3.3
+            if hasattr(request.data, '_mutable') and not request.data._mutable:
+                request.data._mutable = True
             extra_creds = request.data.pop('extra_credentials', None)
             if extra_creds is not None:
                 ignored_fields['extra_credentials'] = extra_creds
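The refactor above replaces two hand-written `*_id` aliases with a loop that normalizes any of the three prompted fields. A self-contained sketch of the aliasing logic against a plain dict (the payload values are made up):

```python
# A launch payload may name a resource either as 'credential' or as
# 'credential_id' (and likewise for vault_credential and inventory);
# normalize everything onto the un-suffixed key.
request_data = {'credential_id': 5, 'inventory': 2}

for fd in ('credential', 'vault_credential', 'inventory'):
    id_fd = '{}_id'.format(fd)
    if fd not in request_data and id_fd in request_data:
        request_data[fd] = request_data[id_fd]

print(request_data['credential'])  # 5
print(request_data['inventory'])   # 2
```

Keys absent from the payload (here `vault_credential`) are simply left out, so downstream code can treat a missing key as "not prompted".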
@@ -2778,15 +2791,15 @@ class JobTemplateLaunch(RetrieveAPIView, GenericAPIView):
         prompted_fields = _accepted_or_ignored[0]
         ignored_fields.update(_accepted_or_ignored[1])

-        if 'credential' in prompted_fields and prompted_fields['credential'] != getattrd(obj, 'credential.pk', None):
-            new_credential = get_object_or_400(Credential, pk=get_pk_from_dict(prompted_fields, 'credential'))
-            if request.user not in new_credential.use_role:
-                raise PermissionDenied()
-
-        if 'inventory' in prompted_fields and prompted_fields['inventory'] != getattrd(obj, 'inventory.pk', None):
-            new_inventory = get_object_or_400(Inventory, pk=get_pk_from_dict(prompted_fields, 'inventory'))
-            if request.user not in new_inventory.use_role:
-                raise PermissionDenied()
+        for fd, model in (
+                ('credential', Credential),
+                ('vault_credential', Credential),
+                ('inventory', Inventory)):
+            if fd in prompted_fields and prompted_fields[fd] != getattrd(obj, '{}.pk'.format(fd), None):
+                new_res = get_object_or_400(model, pk=get_pk_from_dict(prompted_fields, fd))
+                use_role = getattr(new_res, 'use_role')
+                if request.user not in use_role:
+                    raise PermissionDenied()

         for cred in prompted_fields.get('extra_credentials', []):
             new_credential = get_object_or_400(Credential, pk=cred)
@@ -3569,7 +3582,7 @@ class WorkflowJobList(WorkflowsEnforcementMixin, ListCreateAPIView):
     new_in_310 = True


-class WorkflowJobDetail(WorkflowsEnforcementMixin, RetrieveDestroyAPIView):
+class WorkflowJobDetail(WorkflowsEnforcementMixin, UnifiedJobDeletionMixin, RetrieveDestroyAPIView):

     model = WorkflowJob
     serializer_class = WorkflowJobSerializer
@@ -3719,8 +3732,15 @@ class JobList(ListCreateAPIView):
     metadata_class = JobTypeMetadata
     serializer_class = JobListSerializer

+    @property
+    def allowed_methods(self):
+        methods = super(JobList, self).allowed_methods
+        if get_request_version(self.request) > 1:
+            methods.remove('POST')
+        return methods


-class JobDetail(RetrieveUpdateDestroyAPIView):
+class JobDetail(UnifiedJobDeletionMixin, RetrieveUpdateDestroyAPIView):

     model = Job
     metadata_class = JobTypeMetadata
@@ -3733,15 +3753,6 @@ class JobDetail(RetrieveUpdateDestroyAPIView):
             return self.http_method_not_allowed(request, *args, **kwargs)
         return super(JobDetail, self).update(request, *args, **kwargs)

-    def destroy(self, request, *args, **kwargs):
-        obj = self.get_object()
-        try:
-            if obj.unified_job_node.workflow_job.status in ACTIVE_STATES:
-                raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.'))
-        except Job.unified_job_node.RelatedObjectDoesNotExist:
-            pass
-        return super(JobDetail, self).destroy(request, *args, **kwargs)


 class JobExtraCredentialsList(SubListAPIView):
@@ -4056,7 +4067,7 @@ class HostAdHocCommandsList(AdHocCommandList, SubListCreateAPIView):
     relationship = 'ad_hoc_commands'


-class AdHocCommandDetail(RetrieveDestroyAPIView):
+class AdHocCommandDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView):

     model = AdHocCommand
     serializer_class = AdHocCommandSerializer
@@ -4207,7 +4218,7 @@ class SystemJobList(ListCreateAPIView):
         return super(SystemJobList, self).get(request, *args, **kwargs)


-class SystemJobDetail(RetrieveDestroyAPIView):
+class SystemJobDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView):

     model = SystemJob
     serializer_class = SystemJobSerializer
@@ -4352,18 +4363,25 @@ class UnifiedJobStdout(RetrieveAPIView):
             tablename, related_name = {
                 Job: ('main_jobevent', 'job_id'),
                 AdHocCommand: ('main_adhoccommandevent', 'ad_hoc_command_id'),
-            }[unified_job.__class__]
-            cursor.copy_expert(
-                "copy (select stdout from {} where {}={} order by start_line) to stdout".format(
-                    tablename,
-                    related_name,
-                    unified_job.id
-                ),
-                write_fd
-            )
-            write_fd.close()
-            subprocess.Popen("sed -i 's/\\\\r\\\\n/\\n/g' {}".format(unified_job.result_stdout_file),
-                             shell=True).wait()
+            }.get(unified_job.__class__, (None, None))
+            if tablename is None:
+                # stdout job event reconstruction isn't supported
+                # for certain job types (such as inventory syncs),
+                # so just grab the raw stdout from the DB
+                write_fd.write(unified_job.result_stdout_text)
+                write_fd.close()
+            else:
+                cursor.copy_expert(
+                    "copy (select stdout from {} where {}={} order by start_line) to stdout".format(
+                        tablename,
+                        related_name,
+                        unified_job.id
+                    ),
+                    write_fd
+                )
+                write_fd.close()
+                subprocess.Popen("sed -i 's/\\\\r\\\\n/\\n/g' {}".format(unified_job.result_stdout_file),
+                                 shell=True).wait()
         except Exception as e:
             return Response({"error": _("Error generating stdout download file: {}".format(e))})
         try:
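The key change in the hunk above is swapping a `KeyError`-raising dict subscript for `dict.get()` with a `(None, None)` default, so job types without an event table fall through to a fallback path instead of crashing. A standalone sketch of that dispatch shape (string keys and return values here are illustrative, not AWX's actual types):

```python
# Map job types to their event table and FK column; types missing from the
# map (e.g. inventory syncs) get (None, None) instead of a KeyError.
EVENT_TABLES = {
    'Job': ('main_jobevent', 'job_id'),
    'AdHocCommand': ('main_adhoccommandevent', 'ad_hoc_command_id'),
}


def stdout_source(job_type):
    tablename, related_name = EVENT_TABLES.get(job_type, (None, None))
    if tablename is None:
        # no per-event table: fall back to the raw stdout column
        return 'raw result_stdout'
    return 'reconstruct from {}.{}'.format(tablename, related_name)


print(stdout_source('InventoryUpdate'))  # raw result_stdout
print(stdout_source('Job'))              # reconstruct from main_jobevent.job_id
```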

View File

@@ -3,7 +3,11 @@ import hashlib
 import six

 from django.utils.encoding import smart_str

-from Crypto.Cipher import AES
+from cryptography.hazmat.backends import default_backend
+from cryptography.hazmat.primitives.ciphers import Cipher
+from cryptography.hazmat.primitives.ciphers.algorithms import AES
+from cryptography.hazmat.primitives.ciphers.modes import ECB

 from awx.conf import settings_registry
@@ -52,8 +56,8 @@ def decrypt_value(encryption_key, value):
     if algo != 'AES':
         raise ValueError('unsupported algorithm: %s' % algo)
     encrypted = base64.b64decode(b64data)
-    cipher = AES.new(encryption_key, AES.MODE_ECB)
-    value = cipher.decrypt(encrypted)
+    decryptor = Cipher(AES(encryption_key), ECB(), default_backend()).decryptor()
+    value = decryptor.update(encrypted) + decryptor.finalize()
     value = value.rstrip('\x00')
     # If the encrypted string contained a UTF8 marker, decode the data
     if utf8:
@@ -90,10 +94,11 @@ def encrypt_field(instance, field_name, ask=False, subfield=None, skip_utf8=Fals
     utf8 = type(value) == six.text_type
     value = smart_str(value)
     key = get_encryption_key(field_name, getattr(instance, 'pk', None))
-    cipher = AES.new(key, AES.MODE_ECB)
-    while len(value) % cipher.block_size != 0:
+    encryptor = Cipher(AES(key), ECB(), default_backend()).encryptor()
+    block_size = 16
+    while len(value) % block_size != 0:
         value += '\x00'
-    encrypted = cipher.encrypt(value)
+    encrypted = encryptor.update(value) + encryptor.finalize()
     b64data = base64.b64encode(encrypted)
     tokens = ['$encrypted', 'AES', b64data]
     if utf8:
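One detail of the PyCrypto-to-cryptography migration above: the `cryptography` cipher object doesn't expose `block_size` the way `AES.new(...)` did, so the hunk hardcodes `block_size = 16` (the AES block size). A sketch of just the null-padding round trip that surrounds the cipher calls, without the AES step itself:

```python
# AES operates on 16-byte blocks; the encrypt path pads the plaintext with
# '\x00' up to a block boundary, and the decrypt path strips it back off.
block_size = 16


def pad(value):
    while len(value) % block_size != 0:
        value += '\x00'
    return value


padded = pad('secret')
print(len(padded))            # 16
print(padded.rstrip('\x00'))  # secret
```

Note the trade-off the original code makes: `rstrip('\x00')` is lossy if the plaintext itself legitimately ends in null bytes, which is acceptable here because the values are text settings.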

View File

@@ -65,8 +65,10 @@ class Setting(CreatedModifiedModel):
         # After saving a new instance for the first time, set the encrypted
         # field and save again.
         if encrypted and new_instance:
-            self.value = self._saved_value
-            self.save(update_fields=['value'])
+            from awx.main.signals import disable_activity_stream
+            with disable_activity_stream():
+                self.value = self._saved_value
+                self.save(update_fields=['value'])

     @classmethod
     def get_cache_key(self, key):

View File

@@ -99,7 +99,8 @@ class SettingsRegistry(object):
                 continue
             if kwargs.get('category_slug', None) in slugs_to_ignore:
                 continue
-            if read_only in {True, False} and kwargs.get('read_only', False) != read_only:
+            if (read_only in {True, False} and kwargs.get('read_only', False) != read_only and
+                    setting not in ('AWX_ISOLATED_PRIVATE_KEY', 'AWX_ISOLATED_PUBLIC_KEY')):
                 # Note: Doesn't catch fields that set read_only via __init__;
                 # read-only field kwargs should always include read_only=True.
                 continue
@@ -116,6 +117,9 @@ class SettingsRegistry(object):
     def is_setting_encrypted(self, setting):
         return bool(self._registry.get(setting, {}).get('encrypted', False))

+    def is_setting_read_only(self, setting):
+        return bool(self._registry.get(setting, {}).get('read_only', False))

     def get_setting_field(self, setting, mixin_class=None, for_user=False, **kwargs):
         from rest_framework.fields import empty
         field_kwargs = {}

View File

@@ -293,7 +293,12 @@ class SettingsWrapper(UserSettingsHolder):
         field = self.registry.get_setting_field(name)
         if value is empty:
             setting = None
-            if not field.read_only:
+            if not field.read_only or name in (
+                # these two values are read-only - however - we *do* want
+                # to fetch their value from the database
+                'AWX_ISOLATED_PRIVATE_KEY',
+                'AWX_ISOLATED_PUBLIC_KEY',
+            ):
                 setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
                 if setting:
                     if getattr(field, 'encrypted', False):

View File

@@ -11,7 +11,7 @@ from django.http import Http404
 from django.utils.translation import ugettext_lazy as _

 # Django REST Framework
-from rest_framework.exceptions import PermissionDenied
+from rest_framework.exceptions import PermissionDenied, ValidationError
 from rest_framework.response import Response
 from rest_framework import serializers
 from rest_framework import status
@@ -122,16 +122,18 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
         user = self.request.user if self.category_slug == 'user' else None
         settings_change_list = []
         for key, value in serializer.validated_data.items():
-            if key == 'LICENSE':
+            if key == 'LICENSE' or settings_registry.is_setting_read_only(key):
                 continue
-            if settings_registry.is_setting_encrypted(key) and isinstance(value, basestring) and value.startswith('$encrypted$'):
+            if settings_registry.is_setting_encrypted(key) and \
+                    isinstance(value, basestring) and \
+                    value.startswith('$encrypted$'):
                 continue
             setattr(serializer.instance, key, value)
             setting = settings_qs.filter(key=key).order_by('pk').first()
             if not setting:
                 setting = Setting.objects.create(key=key, user=user, value=value)
                 settings_change_list.append(key)
-            elif setting.value != value or type(setting.value) != type(value):
+            elif setting.value != value:
                 setting.value = value
                 setting.save(update_fields=['value'])
                 settings_change_list.append(key)
@@ -146,6 +148,8 @@ class SettingSingletonDetail(RetrieveUpdateDestroyAPIView):
     def perform_destroy(self, instance):
         settings_change_list = []
         for setting in self.get_queryset().exclude(key='LICENSE'):
+            if settings_registry.get_setting_field(setting.key).read_only:
+                continue
             setting.delete()
             settings_change_list.append(setting.key)
         if settings_change_list and 'migrate_to_database_settings' not in sys.argv:
@@ -178,6 +182,13 @@ class SettingLoggingTest(GenericAPIView):
         obj = type('Settings', (object,), defaults)()
         serializer = self.get_serializer(obj, data=request.data)
         serializer.is_valid(raise_exception=True)

+        # Special validation specific to logging test.
+        errors = {}
+        for key in ['LOG_AGGREGATOR_TYPE', 'LOG_AGGREGATOR_HOST']:
+            if not request.data.get(key, ''):
+                errors[key] = 'This field is required.'
+        if errors:
+            raise ValidationError(errors)

         if request.data.get('LOG_AGGREGATOR_PASSWORD', '').startswith('$encrypted$'):
             serializer.validated_data['LOG_AGGREGATOR_PASSWORD'] = getattr(
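The validation added above accumulates all missing-field errors into one dict before raising, so the client sees every problem at once rather than fixing fields one at a time. A self-contained sketch of the accumulate-then-raise shape, using a plain `ValueError` in place of DRF's `ValidationError` (the function name is illustrative):

```python
# Check every required key, collect all failures, then raise once.
def validate_logging_payload(data):
    errors = {}
    for key in ['LOG_AGGREGATOR_TYPE', 'LOG_AGGREGATOR_HOST']:
        if not data.get(key, ''):
            errors[key] = 'This field is required.'
    if errors:
        raise ValueError(errors)


try:
    validate_logging_payload({'LOG_AGGREGATOR_TYPE': 'logstash'})
except ValueError as e:
    print(e.args[0])  # {'LOG_AGGREGATOR_HOST': 'This field is required.'}
```

Note that `not data.get(key, '')` also rejects present-but-empty values, which a plain `key not in data` check would miss.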
@@ -190,6 +201,7 @@ class SettingLoggingTest(GenericAPIView):
             mock_settings = MockSettings()
             for k, v in serializer.validated_data.items():
                 setattr(mock_settings, k, v)
+            mock_settings.LOG_AGGREGATOR_LEVEL = 'DEBUG'
             BaseHTTPSHandler.perform_test(mock_settings)
         except LoggingConnectivityException as e:
             return Response({'error': str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)

View File

@@ -17,9 +17,9 @@
 from __future__ import (absolute_import, division, print_function)

-# Tower Display Callback
+# AWX Display Callback
 from . import cleanup  # noqa (registers control persistent cleanup)
 from . import display  # noqa (wraps ansible.display.Display methods)
-from .module import TowerDefaultCallbackModule, TowerMinimalCallbackModule
+from .module import AWXDefaultCallbackModule, AWXMinimalCallbackModule

-__all__ = ['TowerDefaultCallbackModule', 'TowerMinimalCallbackModule']
+__all__ = ['AWXDefaultCallbackModule', 'AWXMinimalCallbackModule']

View File

@@ -27,7 +27,7 @@ from copy import copy
 from ansible.plugins.callback import CallbackBase
 from ansible.plugins.callback.default import CallbackModule as DefaultCallbackModule

-# Tower Display Callback
+# AWX Display Callback
 from .events import event_context
 from .minimal import CallbackModule as MinimalCallbackModule
@@ -448,12 +448,12 @@ class BaseCallbackModule(CallbackBase):
         super(BaseCallbackModule, self).v2_runner_retry(result)


-class TowerDefaultCallbackModule(BaseCallbackModule, DefaultCallbackModule):
+class AWXDefaultCallbackModule(BaseCallbackModule, DefaultCallbackModule):

-    CALLBACK_NAME = 'tower_display'
+    CALLBACK_NAME = 'awx_display'


-class TowerMinimalCallbackModule(BaseCallbackModule, MinimalCallbackModule):
+class AWXMinimalCallbackModule(BaseCallbackModule, MinimalCallbackModule):

     CALLBACK_NAME = 'minimal'

View File

@@ -27,4 +27,4 @@ if awx_lib_path not in sys.path:
     sys.path.insert(0, awx_lib_path)

 # Tower Display Callback
-from tower_display_callback import TowerDefaultCallbackModule as CallbackModule  # noqa
+from awx_display_callback import AWXDefaultCallbackModule as CallbackModule  # noqa

View File

@@ -27,4 +27,4 @@ if awx_lib_path not in sys.path:
     sys.path.insert(0, awx_lib_path)

 # Tower Display Callback
-from tower_display_callback import TowerMinimalCallbackModule as CallbackModule  # noqa
+from awx_display_callback import AWXMinimalCallbackModule as CallbackModule  # noqa

View File

@@ -2,13 +2,13 @@
 import os
 import sys

-# Based on http://stackoverflow.com/a/6879344/131141 -- Initialize tower display
+# Based on http://stackoverflow.com/a/6879344/131141 -- Initialize awx display
 # callback as early as possible to wrap ansible.display.Display methods.


 def argv_ready(argv):
     if argv and os.path.basename(argv[0]) in {'ansible', 'ansible-playbook'}:
-        import tower_display_callback  # noqa
+        import awx_display_callback  # noqa


 class argv_placeholder(object):

View File

@@ -11,10 +11,10 @@ import pytest
 # search for a plugin implementation (which should be named `CallbackModule`)
 #
 # this code modifies the Python path to make our
-# `awx.lib.tower_display_callback` callback importable (because `awx.lib`
+# `awx.lib.awx_display_callback` callback importable (because `awx.lib`
 # itself is not a package)
 #
-# we use the `tower_display_callback` imports below within this file, but
+# we use the `awx_display_callback` imports below within this file, but
 # Ansible also uses them when it discovers this file in
 # `ANSIBLE_CALLBACK_PLUGINS`
 CALLBACK = os.path.splitext(os.path.basename(__file__))[0]
@@ -32,8 +32,8 @@ with mock.patch.dict(os.environ, {'ANSIBLE_STDOUT_CALLBACK': CALLBACK,
         if path not in sys.path:
             sys.path.insert(0, path)

-from tower_display_callback import TowerDefaultCallbackModule as CallbackModule  # noqa
-from tower_display_callback.events import event_context  # noqa
+from awx_display_callback import AWXDefaultCallbackModule as CallbackModule  # noqa
+from awx_display_callback.events import event_context  # noqa


 @pytest.fixture()

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -613,6 +613,8 @@ class InventoryAccess(BaseAccess):
                             for o in Job.objects.filter(inventory=obj, status__in=ACTIVE_STATES)])
         active_jobs.extend([dict(type="inventory_update", id=o.id)
                             for o in InventoryUpdate.objects.filter(inventory_source__inventory=obj, status__in=ACTIVE_STATES)])
+        active_jobs.extend([dict(type="ad_hoc_command", id=o.id)
+                            for o in AdHocCommand.objects.filter(inventory=obj, status__in=ACTIVE_STATES)])
         if len(active_jobs) > 0:
             raise StateConflict({"conflict": _("Resource is being used by running jobs"),
                                  "active_jobs": active_jobs})
@@ -788,14 +790,11 @@ class InventorySourceAccess(BaseAccess):
         if not self.check_related('source_project', Project, data, role_field='use_role'):
             return False
         # Checks for admin or change permission on inventory.
-        return (
-            self.check_related('inventory', Inventory, data) and
-            not InventorySource.objects.filter(
-                inventory=data.get('inventory'),
-                update_on_project_update=True, source='scm').exists())
+        return self.check_related('inventory', Inventory, data)

     def can_delete(self, obj):
-        if not (self.user.is_superuser or not (obj and obj.inventory and self.user.can_access(Inventory, 'admin', obj.inventory, None))):
+        if not self.user.is_superuser and \
+                not (obj and obj.inventory and self.user.can_access(Inventory, 'admin', obj.inventory, None)):
             return False
         active_jobs_qs = InventoryUpdate.objects.filter(inventory_source=obj, status__in=ACTIVE_STATES)
         if active_jobs_qs.exists():
@@ -819,7 +818,7 @@ class InventorySourceAccess(BaseAccess):
     def can_start(self, obj, validate_license=True):
         if obj and obj.inventory:
-            return obj.can_update and self.user in obj.inventory.update_role
+            return self.user in obj.inventory.update_role
         return False
@@ -1391,26 +1390,45 @@ class JobAccess(BaseAccess):
         inventory_access = obj.inventory and self.user in obj.inventory.use_role
         credential_access = obj.credential and self.user in obj.credential.use_role
+        job_extra_credentials = set(obj.extra_credentials.all())
+        if job_extra_credentials:
+            credential_access = False

         # Check if JT execute access (and related prompts) is sufficient
         if obj.job_template is not None:
             prompts_access = True
             job_fields = {}
+            jt_extra_credentials = set(obj.job_template.extra_credentials.all())
             for fd in obj.job_template._ask_for_vars_dict():
+                if fd == 'extra_credentials':
+                    job_fields[fd] = job_extra_credentials
+                else:
                     job_fields[fd] = getattr(obj, fd)
             accepted_fields, ignored_fields = obj.job_template._accept_or_ignore_job_kwargs(**job_fields)
+            # Check if job fields are not allowed by current _on_launch settings
             for fd in ignored_fields:
-                if fd == 'extra_credentials':
-                    if set(job_fields[fd].all()) != set(getattr(obj.job_template, fd).all()):
+                if fd == 'extra_vars':
+                    continue  # we cannot yet validate validity of prompted extra_vars
+                elif fd == 'extra_credentials':
+                    if job_extra_credentials != jt_extra_credentials:
                         # Job has extra_credentials that are not promptable
                         prompts_access = False
-                elif fd != 'extra_vars' and job_fields[fd] != getattr(obj.job_template, fd):
+                        break
+                elif job_fields[fd] != getattr(obj.job_template, fd):
                     # Job has field that is not promptable
                     prompts_access = False
-            if obj.credential != obj.job_template.credential and not credential_access:
-                prompts_access = False
-            if obj.inventory != obj.job_template.inventory and not inventory_access:
-                prompts_access = False
+                    break
+            # For those fields that are allowed by prompting, but differ
+            # from JT, assure that user has explicit access to them
+            if prompts_access:
+                if obj.credential != obj.job_template.credential and not credential_access:
+                    prompts_access = False
+                if obj.inventory != obj.job_template.inventory and not inventory_access:
+                    prompts_access = False
+                if prompts_access and job_extra_credentials != jt_extra_credentials:
+                    for cred in job_extra_credentials:
+                        if self.user not in cred.use_role:
+                            prompts_access = False
+                            break
             if prompts_access and self.user in obj.job_template.execute_role:
                 return True
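
A minimal sketch of the ignored-fields check added above, with the `JobTemplate` machinery replaced by plain dicts (all names here are hypothetical stand-ins, not the real AWX API): a job may only reuse values that differ from its template when the template did not flag that field as non-promptable.

```python
# Hypothetical sketch of the "ignored fields" access rule: any field that the
# template refused to accept (ignored) but that still differs on the job
# denies prompt-based access; prompted extra_vars are skipped entirely.
def prompts_allow(job, jt, ignored_fields):
    """Return False if the job carries a non-promptable field that differs."""
    for fd in ignored_fields:
        if fd == 'extra_vars':
            continue  # validity of prompted extra_vars cannot yet be checked
        if job[fd] != jt[fd]:
            return False  # field differs but prompting is not enabled for it
    return True

jt = {'limit': '', 'job_type': 'run', 'extra_vars': '{}'}
job = {'limit': 'webservers', 'job_type': 'run', 'extra_vars': '{"x": 1}'}

# 'limit' was ignored (non-promptable) and differs -> access denied
assert prompts_allow(job, jt, ignored_fields=['limit', 'extra_vars']) is False
# only extra_vars was ignored -> allowed
assert prompts_allow(job, jt, ignored_fields=['extra_vars']) is True
```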
@@ -2207,12 +2225,17 @@ class ActivityStreamAccess(BaseAccess):
         - custom inventory scripts
         '''
         qs = self.model.objects.all()
-        qs = qs.prefetch_related('organization', 'user', 'inventory', 'host', 'group', 'inventory_source',
-                                 'inventory_update', 'credential', 'credential_type', 'team', 'project', 'project_update',
-                                 'job_template', 'job', 'ad_hoc_command',
+        qs = qs.prefetch_related('organization', 'user', 'inventory', 'host', 'group',
+                                 'inventory_update', 'credential', 'credential_type', 'team',
+                                 'ad_hoc_command',
                                  'notification_template', 'notification', 'label', 'role', 'actor',
                                  'schedule', 'custom_inventory_script', 'unified_job_template',
-                                 'workflow_job_template', 'workflow_job', 'workflow_job_template_node')
+                                 'workflow_job_template_node')
+        # FIXME: the following fields will be attached to the wrong object
+        # if they are included in prefetch_related because of
+        # https://github.com/django-polymorphic/django-polymorphic/issues/68
+        # 'job_template', 'job', 'project', 'project_update', 'workflow_job',
+        # 'inventory_source', 'workflow_job_template'
         if self.user.is_superuser or self.user.is_system_auditor:
             return qs.all()

View File

@@ -175,6 +175,7 @@ register(
 register(
     'AWX_ISOLATED_CHECK_INTERVAL',
     field_class=fields.IntegerField,
+    min_value=0,
     label=_('Isolated status check interval'),
     help_text=_('The number of seconds to sleep between status checks for jobs running on isolated instances.'),
     category=_('Jobs'),
@@ -184,6 +185,7 @@ register(
 register(
     'AWX_ISOLATED_LAUNCH_TIMEOUT',
     field_class=fields.IntegerField,
+    min_value=0,
     label=_('Isolated launch timeout'),
     help_text=_('The timeout (in seconds) for launching jobs on isolated instances. '
                 'This includes the time needed to copy source control files (playbooks) to the isolated instance.'),
@@ -194,6 +196,7 @@ register(
 register(
     'AWX_ISOLATED_CONNECTION_TIMEOUT',
     field_class=fields.IntegerField,
+    min_value=0,
     default=10,
     label=_('Isolated connection timeout'),
     help_text=_('Ansible SSH connection timeout (in seconds) to use when communicating with isolated instances. '
@@ -202,12 +205,25 @@ register(
     category_slug='jobs',
 )
+register(
+    'AWX_ISOLATED_KEY_GENERATION',
+    field_class=fields.BooleanField,
+    default=True,
+    label=_('Generate RSA keys for isolated instances'),
+    help_text=_('If set, a random RSA key will be generated and distributed to '
+                'isolated instances. To disable this behavior and manage authentication '
+                'for isolated instances outside of Tower, disable this setting.'),  # noqa
+    category=_('Jobs'),
+    category_slug='jobs',
+)
 register(
     'AWX_ISOLATED_PRIVATE_KEY',
     field_class=fields.CharField,
     default='',
     allow_blank=True,
     encrypted=True,
+    read_only=True,
     label=_('The RSA private key for SSH traffic to isolated instances'),
     help_text=_('The RSA private key for SSH traffic to isolated instances'),  # noqa
     category=_('Jobs'),
@@ -219,6 +235,7 @@ register(
     field_class=fields.CharField,
     default='',
     allow_blank=True,
+    read_only=True,
     label=_('The RSA public key for SSH traffic to isolated instances'),
     help_text=_('The RSA public key for SSH traffic to isolated instances'),  # noqa
     category=_('Jobs'),
@@ -329,6 +346,7 @@ register(
     'LOG_AGGREGATOR_HOST',
     field_class=fields.CharField,
     allow_null=True,
+    default=None,
     label=_('Logging Aggregator'),
     help_text=_('Hostname/IP where external logs will be sent to.'),
     category=_('Logging'),
@@ -338,6 +356,7 @@ register(
     'LOG_AGGREGATOR_PORT',
     field_class=fields.IntegerField,
     allow_null=True,
+    default=None,
     label=_('Logging Aggregator Port'),
     help_text=_('Port on Logging Aggregator to send logs to (if required and not'
                 ' provided in Logging Aggregator).'),
@@ -350,6 +369,7 @@ register(
     field_class=fields.ChoiceField,
     choices=['logstash', 'splunk', 'loggly', 'sumologic', 'other'],
     allow_null=True,
+    default=None,
     label=_('Logging Aggregator Type'),
     help_text=_('Format messages for the chosen log aggregator.'),
     category=_('Logging'),
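
The new `min_value=0` arguments above keep interval/timeout settings from being set to a negative number. A minimal pure-Python sketch of that validation behavior (a hypothetical stand-in for the `awx.conf` registry and DRF's `IntegerField`, not the real API):

```python
# Hypothetical settings-registry sketch: an IntegerSetting rejects values
# below its min_value, mirroring what min_value=0 adds in the diff above.
class IntegerSetting:
    def __init__(self, name, default=None, min_value=None):
        self.name, self.default, self.min_value = name, default, min_value

    def validate(self, value):
        if self.min_value is not None and value < self.min_value:
            raise ValueError('%s must be >= %d' % (self.name, self.min_value))
        return value

interval = IntegerSetting('AWX_ISOLATED_CHECK_INTERVAL', default=30, min_value=0)
assert interval.validate(0) == 0  # zero is still allowed

raised = False
try:
    interval.validate(-5)
except ValueError as exc:
    raised = 'must be >= 0' in str(exc)
assert raised  # negative intervals are rejected
```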

View File

@@ -1,5 +1,8 @@
 # Copyright (c) 2015 Ansible, Inc.
 # All Rights Reserved.
+from django.utils.translation import ugettext_lazy as _

 CLOUD_PROVIDERS = ('azure', 'azure_rm', 'ec2', 'gce', 'rax', 'vmware', 'openstack', 'satellite6', 'cloudforms')
 SCHEDULEABLE_PROVIDERS = CLOUD_PROVIDERS + ('custom', 'scm',)
+PRIVILEGE_ESCALATION_METHODS = [('sudo', _('Sudo')), ('su', _('Su')), ('pbrun', _('Pbrun')), ('pfexec', _('Pfexec')), ('dzdo', _('DZDO')), ('pmrun', _('Pmrun')), ('runas', _('Runas'))]

View File

@@ -13,7 +13,7 @@ import logging
 from django.conf import settings

 import awx
-from awx.main.isolated import run
+from awx.main.expect import run
 from awx.main.utils import OutputEventFilter
 from awx.main.queue import CallbackQueueDispatcher
@@ -170,12 +170,12 @@ class IsolatedManager(object):
         # - sets up a temporary directory for proot/bwrap (if necessary)
         # - copies encrypted job data from the controlling host to the isolated host (with rsync)
         # - writes the encryption secret to a named pipe on the isolated host
-        # - launches the isolated playbook runner via `tower-expect start <job-id>`
+        # - launches the isolated playbook runner via `awx-expect start <job-id>`
         args = self._build_args('run_isolated.yml', '%s,' % self.host, extra_vars)
         if self.instance.verbosity:
             args.append('-%s' % ('v' * min(5, self.instance.verbosity)))
         buff = StringIO.StringIO()
-        logger.debug('Starting job on isolated host with `run_isolated.yml` playbook.')
+        logger.debug('Starting job {} on isolated host with `run_isolated.yml` playbook.'.format(self.instance.id))
         status, rc = IsolatedManager.run_pexpect(
             args, self.awx_playbook_path(), self.management_env, buff,
             idle_timeout=self.idle_timeout,
@@ -183,7 +183,7 @@ class IsolatedManager(object):
             pexpect_timeout=5
         )
         output = buff.getvalue()
-        playbook_logger.info('Job {} management started\n{}'.format(self.instance.id, output))
+        playbook_logger.info('Isolated job {} dispatch:\n{}'.format(self.instance.id, output))
         if status != 'successful':
             self.stdout_handle.write(output)
         return status, rc
@@ -192,8 +192,11 @@ class IsolatedManager(object):
     def run_pexpect(cls, pexpect_args, *args, **kw):
         isolated_ssh_path = None
         try:
-            if getattr(settings, 'AWX_ISOLATED_PRIVATE_KEY', None):
-                isolated_ssh_path = tempfile.mkdtemp(prefix='ansible_tower_isolated', dir=settings.AWX_PROOT_BASE_PATH)
+            if all([
+                getattr(settings, 'AWX_ISOLATED_KEY_GENERATION', False) is True,
+                getattr(settings, 'AWX_ISOLATED_PRIVATE_KEY', None)
+            ]):
+                isolated_ssh_path = tempfile.mkdtemp(prefix='awx_isolated', dir=settings.AWX_PROOT_BASE_PATH)
                 os.chmod(isolated_ssh_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
                 isolated_key = os.path.join(isolated_ssh_path, '.isolated')
                 ssh_sock = os.path.join(isolated_ssh_path, '.isolated_ssh_auth.sock')
@@ -277,6 +280,7 @@ class IsolatedManager(object):
             args.append('-%s' % ('v' * min(5, self.instance.verbosity)))
         status = 'failed'
+        output = ''
         rc = None
         buff = cStringIO.StringIO()
         last_check = time.time()
@@ -300,7 +304,7 @@ class IsolatedManager(object):
                 continue
             buff = cStringIO.StringIO()
-            logger.debug('Checking job on isolated host with `check_isolated.yml` playbook.')
+            logger.debug('Checking on isolated job {} with `check_isolated.yml`.'.format(self.instance.id))
             status, rc = IsolatedManager.run_pexpect(
                 args, self.awx_playbook_path(), self.management_env, buff,
                 cancelled_callback=self.cancelled_callback,
@@ -310,7 +314,7 @@ class IsolatedManager(object):
                 proot_cmd=self.proot_cmd
             )
             output = buff.getvalue()
-            playbook_logger.info(output)
+            playbook_logger.info('Isolated job {} check:\n{}'.format(self.instance.id, output))

         path = self.path_to('artifacts', 'stdout')
         if os.path.exists(path):
@@ -350,7 +354,7 @@ class IsolatedManager(object):
             ],
         }
         args = self._build_args('clean_isolated.yml', '%s,' % self.host, extra_vars)
-        logger.debug('Cleaning up job on isolated host with `clean_isolated.yml` playbook.')
+        logger.debug('Cleaning up job {} on isolated host with `clean_isolated.yml` playbook.'.format(self.instance.id))
         buff = cStringIO.StringIO()
         timeout = max(60, 2 * settings.AWX_ISOLATED_CONNECTION_TIMEOUT)
         status, rc = IsolatedManager.run_pexpect(
@@ -359,11 +363,11 @@ class IsolatedManager(object):
             pexpect_timeout=5
         )
         output = buff.getvalue()
-        playbook_logger.info(output)
+        playbook_logger.info('Isolated job {} cleanup:\n{}'.format(self.instance.id, output))
         if status != 'successful':
             # stdout_handle is closed by this point so writing output to logs is our only option
-            logger.warning('Cleanup from isolated job encountered error, output:\n{}'.format(output))
+            logger.warning('Isolated job {} cleanup error, output:\n{}'.format(self.instance.id, output))

     @classmethod
     def health_check(cls, instance_qs):
@@ -406,15 +410,25 @@ class IsolatedManager(object):
             try:
                 task_result = result['plays'][0]['tasks'][0]['hosts'][instance.hostname]
             except (KeyError, IndexError):
-                logger.exception('Failed to read status from isolated instance {}.'.format(instance.hostname))
-                continue
+                task_result = {}
             if 'capacity' in task_result:
                 instance.version = task_result['version']
+                if instance.capacity == 0 and task_result['capacity']:
+                    logger.warning('Isolated instance {} has re-joined.'.format(instance.hostname))
                 instance.capacity = int(task_result['capacity'])
                 instance.save(update_fields=['capacity', 'version', 'modified'])
+            elif instance.capacity == 0:
+                logger.debug('Isolated instance {} previously marked as lost, could not re-join.'.format(
+                    instance.hostname))
             else:
-                logger.warning('Could not update capacity of {}, msg={}'.format(
-                    instance.hostname, task_result.get('msg', 'unknown failure')))
+                logger.warning('Could not update status of isolated instance {}, msg={}'.format(
+                    instance.hostname, task_result.get('msg', 'unknown failure')
+                ))
+                if instance.is_lost(isolated=True):
+                    instance.capacity = 0
+                    instance.save(update_fields=['capacity'])
+                    logger.error('Isolated instance {} last checked in at {}, marked as lost.'.format(
+                        instance.hostname, instance.modified))

     @staticmethod
     def wrap_stdout_handle(instance, private_data_dir, stdout_handle, event_data_key='job_id'):
@@ -446,7 +460,7 @@ class IsolatedManager(object):
                              isolated job on
     :param private_data_dir: an absolute path on the local file system
                              where job-specific data should be written
-                             (i.e., `/tmp/ansible_tower_xyz/`)
+                             (i.e., `/tmp/ansible_awx_xyz/`)
     :param proot_temp_dir:   a temporary directory which bwrap maps
                              restricted paths to
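
The health-check change above marks an isolated instance as lost (capacity 0) when it has not checked in within a grace period. A hedged, standalone sketch of that `is_lost`-style test; the 600-second grace period here is an assumed illustration, not the actual AWX value:

```python
# Hypothetical lost-instance check: an instance whose last check-in (its
# `modified` timestamp) is older than the grace period loses all capacity.
import datetime

ISOLATED_GRACE = datetime.timedelta(seconds=600)  # assumed grace period

def is_lost(last_modified, now):
    """Return True when the instance has not checked in within the grace period."""
    return (now - last_modified) > ISOLATED_GRACE

now = datetime.datetime(2017, 8, 15, 12, 0, 0)
healthy = now - datetime.timedelta(seconds=30)
stale = now - datetime.timedelta(seconds=3600)

assert not is_lost(healthy, now)
assert is_lost(stale, now)  # would be marked capacity=0 and logged as lost
```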

View File

@@ -107,6 +107,7 @@ def run_pexpect(args, cwd, env, logfile,
     child.logfile_read = logfile
     canceled = False
     timed_out = False
+    errored = False
     last_stdout_update = time.time()

     job_start = time.time()
@@ -118,17 +119,28 @@ def run_pexpect(args, cwd, env, logfile,
             if logfile_pos != logfile.tell():
                 logfile_pos = logfile.tell()
                 last_stdout_update = time.time()
-            canceled = cancelled_callback() if cancelled_callback else False
+            if cancelled_callback:
+                try:
+                    canceled = cancelled_callback()
+                except:
+                    logger.exception('Could not check cancel callback - canceling immediately')
+                    if isinstance(extra_update_fields, dict):
+                        extra_update_fields['job_explanation'] = "System error during job execution, check system logs"
+                    errored = True
+            else:
+                canceled = False
             if not canceled and job_timeout != 0 and (time.time() - job_start) > job_timeout:
                 timed_out = True
                 if isinstance(extra_update_fields, dict):
                     extra_update_fields['job_explanation'] = "Job terminated due to timeout"
-            if canceled or timed_out:
+            if canceled or timed_out or errored:
                 handle_termination(child.pid, child.args, proot_cmd, is_cancel=canceled)
             if idle_timeout and (time.time() - last_stdout_update) > idle_timeout:
                 child.close(True)
                 canceled = True
-        if canceled:
+        if errored:
+            return 'error', child.exitstatus
+        elif canceled:
             return 'canceled', child.exitstatus
         elif child.exitstatus == 0 and not timed_out:
             return 'successful', child.exitstatus
@@ -143,7 +155,7 @@ def run_isolated_job(private_data_dir, secrets, logfile=sys.stdout):
     :param private_data_dir: an absolute path on the local file system where
                              job metadata exists (i.e.,
-                             `/tmp/ansible_tower_xyz/`)
+                             `/tmp/ansible_awx_xyz/`)
     :param secrets: a dict containing sensitive job metadata, {
         'env': { ... }  # environment variables,
         'passwords': { ... }  # pexpect password prompts
@@ -180,15 +192,15 @@ def run_isolated_job(private_data_dir, secrets, logfile=sys.stdout):
     pexpect_timeout = secrets.get('pexpect_timeout', 5)

     # Use local callback directory
-    callback_dir = os.getenv('TOWER_LIB_DIRECTORY')
+    callback_dir = os.getenv('AWX_LIB_DIRECTORY')
     if callback_dir is None:
-        raise RuntimeError('Location for Tower Ansible callbacks must be specified '
-                           'by environment variable TOWER_LIB_DIRECTORY.')
+        raise RuntimeError('Location for callbacks must be specified '
+                           'by environment variable AWX_LIB_DIRECTORY.')
     env['ANSIBLE_CALLBACK_PLUGINS'] = os.path.join(callback_dir, 'isolated_callbacks')
     if 'AD_HOC_COMMAND_ID' in env:
         env['ANSIBLE_STDOUT_CALLBACK'] = 'minimal'
     else:
-        env['ANSIBLE_STDOUT_CALLBACK'] = 'tower_display'
+        env['ANSIBLE_STDOUT_CALLBACK'] = 'awx_display'
     env['AWX_ISOLATED_DATA_DIR'] = private_data_dir
     env['PYTHONPATH'] = env.get('PYTHONPATH', '') + callback_dir + ':'
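
The guarded cancel-callback pattern introduced above can be exercised in isolation; here the pexpect child and surrounding loop are replaced by a single-poll stand-in (the function and names are illustrative, not the real `run_pexpect` signature):

```python
# Sketch of the new error handling: if the cancel callback itself raises,
# the job is flagged as errored (and would be terminated) instead of
# spinning forever on a broken callback.
def poll_once(cancelled_callback, extra_update_fields):
    canceled, errored = False, False
    if cancelled_callback:
        try:
            canceled = cancelled_callback()
        except Exception:
            # a broken callback must not leave the job running indefinitely
            if isinstance(extra_update_fields, dict):
                extra_update_fields['job_explanation'] = (
                    'System error during job execution, check system logs')
            errored = True
    return canceled, errored

def bad_callback():
    raise RuntimeError('db connection lost')

fields = {}
canceled, errored = poll_once(bad_callback, fields)
assert errored and not canceled
assert 'System error' in fields['job_explanation']
```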

View File

@@ -4,6 +4,7 @@
 # Python
 import copy
 import json
+import re
 import six

 from jinja2 import Environment, StrictUndefined
@@ -369,7 +370,7 @@ class JSONSchemaField(JSONBField):
     # If an empty {} is provided, we still want to perform this schema
     # validation
-    empty_values=(None, '')
+    empty_values = (None, '')

     def get_default(self):
         return copy.deepcopy(super(JSONBField, self).get_default())
@@ -384,6 +385,9 @@ class JSONSchemaField(JSONBField):
             self.schema(model_instance),
             format_checker=self.format_checker
         ).iter_errors(value):
+            # strip Python unicode markers from jsonschema validation errors
+            error.message = re.sub(r'\bu(\'|")', r'\1', error.message)
+
             if error.validator == 'pattern' and 'error' in error.schema:
                 error.message = error.schema['error'] % error.instance
             errors.append(error)
@@ -467,6 +471,7 @@ class CredentialInputField(JSONSchemaField):
         return {
             'type': 'object',
             'properties': properties,
+            'dependencies': model_instance.credential_type.inputs.get('dependencies', {}),
             'additionalProperties': False,
         }
@@ -504,6 +509,28 @@ class CredentialInputField(JSONSchemaField):
         ).iter_errors(decrypted_values):
             if error.validator == 'pattern' and 'error' in error.schema:
                 error.message = error.schema['error'] % error.instance
+            if error.validator == 'dependencies':
+                # replace the default error messaging w/ a better i18n string
+                # I wish there was a better way to determine the parameters of
+                # this validation failure, but the exception jsonschema raises
+                # doesn't include them as attributes (just a hard-coded error
+                # string)
+                match = re.search(
+                    # 'foo' is a dependency of 'bar'
+                    "'"          # apostrophe
+                    "([^']+)"    # one or more non-apostrophes (first group)
+                    "'[\w ]+'"   # one or more words/spaces
+                    "([^']+)",   # second group
+                    error.message,
+                )
+                if match:
+                    label, extraneous = match.groups()
+                    if error.schema['properties'].get(label):
+                        label = error.schema['properties'][label]['label']
+                    errors[extraneous] = [
+                        _('cannot be set unless "%s" is set') % label
+                    ]
+                    continue
             if 'id' not in error.schema:
                 # If the error is not for a specific field, it's specific to
                 # `inputs` in general
@@ -542,7 +569,11 @@ class CredentialInputField(JSONSchemaField):
         if model_instance.has_encrypted_ssh_key_data and not value.get('ssh_key_unlock'):
             errors['ssh_key_unlock'] = [_('must be set when SSH key is encrypted.')]
-        if not model_instance.has_encrypted_ssh_key_data and value.get('ssh_key_unlock'):
+        if all([
+            model_instance.ssh_key_data,
+            value.get('ssh_key_unlock'),
+            not model_instance.has_encrypted_ssh_key_data
+        ]):
             errors['ssh_key_unlock'] = [_('should not be set when SSH key is not encrypted.')]

         if errors:
@@ -598,6 +629,14 @@ class CredentialTypeInputField(JSONSchemaField):
         }

     def validate(self, value, model_instance):
+        if isinstance(value, dict) and 'dependencies' in value and \
+                not model_instance.managed_by_tower:
+            raise django_exceptions.ValidationError(
+                _("'dependencies' is not supported for custom credentials."),
+                code='invalid',
+                params={'value': value},
+            )
+
         super(CredentialTypeInputField, self).validate(
             value, model_instance
         )
@@ -624,7 +663,7 @@ class CredentialTypeInputField(JSONSchemaField):
             # If no type is specified, default to string
             field['type'] = 'string'

-        for key in ('choices', 'multiline', 'format'):
+        for key in ('choices', 'multiline', 'format', 'secret',):
             if key in field and field['type'] != 'string':
                 raise django_exceptions.ValidationError(
                     _('%s not allowed for %s type (%s)' % (key, field['type'], field['id'])),
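
The regex tricks in the change above are easy to check in isolation with the standard library alone: the `dependencies` error string emitted by jsonschema is parsed to recover the two field names, and Python 2 unicode markers (`u'...'`) are stripped from error messages. This reproduces both patterns standalone (the sample message text is illustrative):

```python
# Parse "'foo' is a dependency of 'bar'" to recover both field names, since
# the jsonschema exception does not expose them as attributes.
import re

message = "'username' is a dependency of 'password'"
match = re.search(
    "'"          # apostrophe
    "([^']+)"    # the required field (first group)
    "'[\\w ]+'"  # " is a dependency of "
    "([^']+)",   # the field that was set without it (second group)
    message,
)
label, extraneous = match.groups()
assert (label, extraneous) == ('username', 'password')

# The same file also strips Python 2 unicode markers from error messages:
assert re.sub(r"\bu('|\")", r"\1", "u'foo' is not valid") == "'foo' is not valid"
```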

View File

@@ -0,0 +1,20 @@
+# Copyright (c) 2015 Ansible, Inc.
+# All Rights Reserved
+
+from awx.main.utils import get_licenser
+
+from django.core.management.base import NoArgsCommand
+
+
+class Command(NoArgsCommand):
+    """Return 0 if licensed; 1 if unlicensed
+    """
+
+    def handle(self, **options):
+        super(Command, self).__init__()
+        license_info = get_licenser().validate()
+        if license_info['valid_key'] is True:
+            return 0
+        else:
+            return 1

View File

@@ -0,0 +1,52 @@
+# Copyright (c) 2016 Ansible, Inc.
+# All Rights Reserved
+
+from optparse import make_option
+import subprocess
+import warnings
+
+from django.db import transaction
+from django.core.management.base import BaseCommand, CommandError
+
+from awx.main.models import Instance
+from awx.main.utils.pglock import advisory_lock
+
+
+class Command(BaseCommand):
+    """
+    Deprovision a Tower cluster node
+    """
+
+    option_list = BaseCommand.option_list + (
+        make_option('--hostname', dest='hostname', type='string',
+                    help='Hostname used during provisioning'),
+        make_option('--name', dest='name', type='string',
+                    help='(PENDING DEPRECIATION) Hostname used during provisioning'),
+    )
+
+    @transaction.atomic
+    def handle(self, *args, **options):
+        # TODO: remove in 3.3
+        if options.get('name'):
+            warnings.warn("`--name` is depreciated in favor of `--hostname`, and will be removed in release 3.3.")
+            if options.get('hostname'):
+                raise CommandError("Cannot accept both --name and --hostname.")
+            options['hostname'] = options['name']
+        hostname = options.get('hostname')
+        if not hostname:
+            raise CommandError("--hostname is a required argument")
+        with advisory_lock('instance_registration_%s' % hostname):
+            instance = Instance.objects.filter(hostname=hostname)
+            if instance.exists():
+                instance.delete()
+                print("Instance Removed")
+                result = subprocess.Popen("rabbitmqctl forget_cluster_node rabbitmq@{}".format(hostname), shell=True).wait()
+                if result != 0:
+                    print("Node deprovisioning may have failed when attempting to "
+                          "remove the RabbitMQ instance {} from the cluster".format(hostname))
+                else:
+                    print('Successfully deprovisioned {}'.format(hostname))
+                print('(changed: True)')
+            else:
+                print('No instance found matching name {}'.format(hostname))
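
The deprovision command above is idempotent: deleting an instance that is already gone just reports "not found" rather than failing. A minimal stand-in with an in-memory registry (names and messages mirror the command, but the function is hypothetical):

```python
# Hypothetical sketch of the idempotent deprovision flow: removal succeeds
# once, and a repeat run reports the instance as absent instead of erroring.
def deprovision(instances, hostname):
    if hostname in instances:
        instances.remove(hostname)
        return 'Instance Removed'
    return 'No instance found matching name {}'.format(hostname)

cluster = {'awx-1', 'awx-2'}
assert deprovision(cluster, 'awx-2') == 'Instance Removed'
assert deprovision(cluster, 'awx-2').startswith('No instance found')
```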

View File

@@ -1,42 +1,17 @@
-# Copyright (c) 2016 Ansible, Inc.
+# Copyright (c) 2017 Ansible by Red Hat
 # All Rights Reserved

-from optparse import make_option
-import subprocess
-
-from django.db import transaction
-from django.core.management.base import BaseCommand, CommandError
-
-from awx.main.models import Instance
-from awx.main.utils.pglock import advisory_lock
-
-class Command(BaseCommand):
-    """
-    Deprovision a Tower cluster node
-    """
-
-    option_list = BaseCommand.option_list + (
-        make_option('--name', dest='name', type='string',
-                    help='Hostname used during provisioning'),
-    )
-
-    @transaction.atomic
+# Borrow from another AWX command
+from awx.main.management.commands.deprovision_instance import Command as OtherCommand
+
+# Python
+import warnings
+
+
+class Command(OtherCommand):
+
     def handle(self, *args, **options):
-        hostname = options.get('name')
-        if not hostname:
-            raise CommandError("--name is a required argument")
-        with advisory_lock('instance_registration_%s' % hostname):
-            instance = Instance.objects.filter(hostname=hostname)
-            if instance.exists():
-                instance.delete()
-                print("Instance Removed")
-                result = subprocess.Popen("rabbitmqctl forget_cluster_node rabbitmq@{}".format(hostname), shell=True).wait()
-                if result != 0:
-                    print("Node deprovisioning may have failed when attempting to remove the RabbitMQ instance from the cluster")
-                else:
-                    print('Successfully deprovisioned {}'.format(hostname))
-                    print('(changed: True)')
-            else:
-                print('No instance found matching name {}'.format(hostname))
+        # TODO: delete this entire file in 3.3
+        warnings.warn('This command is replaced with `deprovision_instance` and will '
+                      'be removed in release 3.3.')
+        return super(Command, self).handle(*args, **options)

View File

@@ -619,7 +619,13 @@ class Command(NoArgsCommand):
             if group_name in existing_group_names:
                 continue
             mem_group = self.all_group.all_groups[group_name]
-            group = self.inventory.groups.update_or_create(name=group_name, defaults={'variables':json.dumps(mem_group.variables), 'description':'imported'})[0]
+            group = self.inventory.groups.update_or_create(
+                name=group_name,
+                defaults={
+                    'variables':json.dumps(mem_group.variables),
+                    'description':'imported'
+                }
+            )[0]
             logger.info('Group "%s" added', group.name)
             self._batch_add_m2m(self.inventory_source.groups, group)
         self._batch_add_m2m(self.inventory_source.groups, flush=True)
@@ -748,8 +754,7 @@ class Command(NoArgsCommand):
             if self.instance_id_var:
                 instance_id = self._get_instance_id(mem_host.variables)
                 host_attrs['instance_id'] = instance_id
-            db_host = self.inventory.hosts.update_or_create(name=mem_host_name,
-                                                            defaults={'variables':host_attrs['variables'], 'description':host_attrs['description']})[0]
+            db_host = self.inventory.hosts.update_or_create(name=mem_host_name, defaults=host_attrs)[0]
             if enabled is False:
                 logger.info('Host "%s" added (disabled)', mem_host_name)
             else:
@@ -947,7 +952,17 @@ class Command(NoArgsCommand):
                                        self.host_filter_re,
                                        self.exclude_empty_groups,
                                        self.is_custom)
-        self.all_group.debug_tree()
+        if settings.DEBUG:
+            # depending on inventory source, this output can be
+            # *exceedingly* verbose - crawling a deeply nested
+            # inventory/group data structure and printing metadata about
+            # each host and its memberships
+            #
+            # it's easy for this scale of data to overwhelm pexpect,
+            # (and it's likely only useful for purposes of debugging the
+            # actual inventory import code), so only print it if we have to:
+            # https://github.com/ansible/ansible-tower/issues/7414#issuecomment-321615104
+            self.all_group.debug_tree()

         with batch_role_ancestor_rebuilding():
             # Ensure that this is managed as an atomic SQL transaction,
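
The host-import fix above passes the full attribute dict as `defaults=`, so every field is applied on both create and update rather than only `variables` and `description`. A hedged, dict-based stand-in for Django's `QuerySet.update_or_create` semantics illustrates why that matters:

```python
# Hypothetical stand-in for update_or_create: look up by key, apply *all*
# defaults whether the row is created or updated.
def update_or_create(table, name, defaults):
    """Return (row, created) keyed on name, applying every default."""
    if name in table:
        table[name].update(defaults)
        return table[name], False
    table[name] = dict(defaults)
    return table[name], True

hosts = {}
attrs = {'variables': '{}', 'description': 'imported', 'enabled': True}
row, created = update_or_create(hosts, 'web01', attrs)
# with the old call, fields outside variables/description were silently dropped
assert created and row['enabled'] is True
row, created = update_or_create(hosts, 'web01', {'enabled': False})
assert not created and hosts['web01']['enabled'] is False
```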


@@ -0,0 +1,45 @@
+# Copyright (c) 2015 Ansible, Inc.
+# All Rights Reserved
+
+from awx.main.models import Instance
+from awx.main.utils.pglock import advisory_lock
+
+from django.conf import settings
+from optparse import make_option
+from django.db import transaction
+from django.core.management.base import BaseCommand, CommandError
+
+
+class Command(BaseCommand):
+    """
+    Internal tower command.
+    Regsiter this instance with the database for HA tracking.
+    """
+
+    option_list = BaseCommand.option_list + (
+        make_option('--hostname', dest='hostname', type='string',
+                    help='Hostname used during provisioning'),
+    )
+
+    def _register_hostname(self, hostname):
+        if not hostname:
+            return
+        with advisory_lock('instance_registration_%s' % hostname):
+            instance = Instance.objects.filter(hostname=hostname)
+            if instance.exists():
+                print("Instance already registered {}".format(instance[0].hostname))
+                return
+            instance = Instance(uuid=self.uuid, hostname=hostname)
+            instance.save()
+        print('Successfully registered instance {}'.format(hostname))
+        self.changed = True
+
+    @transaction.atomic
+    def handle(self, **options):
+        if not options.get('hostname'):
+            raise CommandError("Specify `--hostname` to use this command.")
+        self.uuid = settings.SYSTEM_UUID
+        self.changed = False
+        self._register_hostname(options.get('hostname'))
+        if self.changed:
+            print('(changed: True)')


@@ -1,52 +1,17 @@
-# Copyright (c) 2015 Ansible, Inc.
+# Copyright (c) 2017 Ansible by Red Hat
 # All Rights Reserved

-from awx.main.models import Instance
-from awx.main.utils.pglock import advisory_lock
-from django.conf import settings
-from optparse import make_option
-from django.db import transaction
-from django.core.management.base import BaseCommand
+# Borrow from another AWX command
+from awx.main.management.commands.provision_instance import Command as OtherCommand

+# Python
+import warnings

-class Command(BaseCommand):
-    """
-    Internal tower command.
-    Regsiter this instance with the database for HA tracking.
-    """
-    option_list = BaseCommand.option_list + (
-        make_option('--hostname', dest='hostname', type='string',
-                    help='Hostname used during provisioning'),
-        make_option('--hostnames', dest='hostnames', type='string',
-                    help='Alternatively hostnames can be provided with '
-                         'this option as a comma-Delimited list'),
-    )
-
-    def _register_hostname(self, hostname):
-        if not hostname:
-            return
-        with advisory_lock('instance_registration_%s' % hostname):
-            instance = Instance.objects.filter(hostname=hostname)
-            if instance.exists():
-                print("Instance already registered {}".format(instance[0]))
-                return
-            instance = Instance(uuid=self.uuid, hostname=hostname)
-            instance.save()
-        print('Successfully registered instance {}'.format(hostname))
-        self.changed = True
-
-    @transaction.atomic
-    def handle(self, **options):
-        self.uuid = settings.SYSTEM_UUID
-        self.changed = False
-        self._register_hostname(options.get('hostname'))
-        hostname_list = []
-        if options.get('hostnames'):
-            hostname_list = options.get('hostnames').split(",")
-        instance_list = [x.strip() for x in hostname_list if x]
-        for inst_name in instance_list:
-            self._register_hostname(inst_name)
-        if self.changed:
-            print('(changed: True)')
+class Command(OtherCommand):
+
+    def handle(self, *args, **options):
+        # TODO: delete this entire file in 3.3
+        warnings.warn('This command is replaced with `provision_instance` and will '
+                      'be removed in release 3.3.')
+        return super(Command, self).handle(*args, **options)
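The hunk above reduces the old command to a thin shim: it subclasses the replacement, emits a warning, and delegates. A sketch of that deprecation-shim pattern with simplified stand-in class names (the real classes are Django management commands):

```python
# Sketch of the shim above: keep the old entry point working, warn, and
# delegate to the replacement. Class names are simplified stand-ins for the
# Django management commands in the diff.
import warnings


class ProvisionInstance(object):
    def handle(self, *args, **options):
        return 'provisioned %s' % options.get('hostname')


class RegisterInstance(ProvisionInstance):  # deprecated alias
    def handle(self, *args, **options):
        warnings.warn('This command is replaced with `provision_instance` and '
                      'will be removed in release 3.3.', DeprecationWarning)
        return super(RegisterInstance, self).handle(*args, **options)
```

Because the shim inherits rather than copies, any fix to the replacement command is picked up by the deprecated name for free until it is deleted.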


@@ -6,7 +6,6 @@ from awx.main.utils.pglock import advisory_lock
 from awx.main.models import Instance, InstanceGroup

 from optparse import make_option
-from django.db import transaction
 from django.core.management.base import BaseCommand, CommandError

@@ -21,7 +20,6 @@ class Command(BaseCommand):
             help='The controlling group (makes this an isolated group)'),
     )

-    @transaction.atomic
     def handle(self, **options):
         queuename = options.get('queuename')
         if not queuename:


@@ -9,6 +9,7 @@ from multiprocessing import Process
 from multiprocessing import Queue as MPQueue
 from Queue import Empty as QueueEmpty
 from Queue import Full as QueueFull
+import os

 from kombu import Connection, Exchange, Queue
 from kombu.mixins import ConsumerMixin

@@ -26,6 +27,17 @@ from awx.main.models import * # noqa
 logger = logging.getLogger('awx.main.commands.run_callback_receiver')


+class WorkerSignalHandler:
+
+    def __init__(self):
+        self.kill_now = False
+        signal.signal(signal.SIGINT, self.exit_gracefully)
+        signal.signal(signal.SIGTERM, self.exit_gracefully)
+
+    def exit_gracefully(self, *args, **kwargs):
+        self.kill_now = True
+
+
 class CallbackBrokerWorker(ConsumerMixin):

     def __init__(self, connection, use_workers=True):
         self.connection = connection

@@ -42,8 +54,7 @@ class CallbackBrokerWorker(ConsumerMixin):
                 signal.signal(signum, signal.SIG_DFL)
                 os.kill(os.getpid(), signum)  # Rethrow signal, this time without catching it
             except Exception:
-                # TODO: LOG
-                pass
+                logger.exception('Error in shutdown_handler')
             return _handler

         if use_workers:

@@ -102,7 +113,8 @@ class CallbackBrokerWorker(ConsumerMixin):
         return None

     def callback_worker(self, queue_actual, idx):
-        while True:
+        signal_handler = WorkerSignalHandler()
+        while not signal_handler.kill_now:
             try:
                 body = queue_actual.get(block=True, timeout=1)
             except QueueEmpty:
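The `WorkerSignalHandler` change above ("graceful killing of receiver worker processes" in the merge log) works because the worker loop polls the queue with a timeout: SIGINT/SIGTERM merely flip a flag, and the loop notices it on the next iteration instead of dying mid-message. A self-contained sketch of that shutdown loop (Python 3 `queue` here; the diff itself targets Python 2's `Queue` module):

```python
# Sketch of the graceful-shutdown loop above: the signal handler only flips
# kill_now; the worker finishes the item in hand and exits at the next
# timeout check rather than being killed mid-write.
import os
import queue
import signal


class WorkerSignalHandler:
    def __init__(self):
        self.kill_now = False
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, *args, **kwargs):
        self.kill_now = True


def callback_worker(q):
    handler = WorkerSignalHandler()
    processed = []
    while not handler.kill_now:
        try:
            body = q.get(block=True, timeout=0.01)
        except queue.Empty:
            continue  # timeout: loop around and re-check kill_now
        if body == 'TERM':
            # stand-in for an external SIGTERM arriving mid-stream
            os.kill(os.getpid(), signal.SIGTERM)
        else:
            processed.append(body)
    return processed
```

Note that `signal.signal` must be called from the main thread, which is why each forked worker installs its own handler at the top of its loop.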


@@ -145,12 +145,12 @@ class Migration(migrations.Migration):
         migrations.AlterField(
             model_name='inventorysource',
             name='source',
-            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a project in Tower'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
+            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a Project'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
         ),
         migrations.AlterField(
             model_name='inventoryupdate',
             name='source',
-            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a project in Tower'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
+            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'File, Directory or Script'), (b'scm', 'Sourced from a Project'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
         ),
         migrations.AlterField(
             model_name='inventorysource',

@@ -414,6 +414,12 @@ class Migration(migrations.Migration):
             unique_together=set([('organization', 'name', 'credential_type')]),
         ),
+        migrations.AlterField(
+            model_name='credential',
+            name='become_method',
+            field=models.CharField(default=b'', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[(b'', 'None'), (b'sudo', 'Sudo'), (b'su', 'Su'), (b'pbrun', 'Pbrun'), (b'pfexec', 'Pfexec'), (b'dzdo', 'DZDO'), (b'pmrun', 'Pmrun'), (b'runas', 'Runas')]),
+        ),
         # Connecting activity stream
         migrations.AddField(
             model_name='activitystream',

@@ -462,5 +468,16 @@ class Migration(migrations.Migration):
             name='last_isolated_check',
             field=models.DateTimeField(auto_now_add=True, null=True),
         ),
+        # Migrations that don't change db schema but simply to make Django ORM happy.
+        # e.g. Choice updates, help_text updates, etc.
+        migrations.AlterField(
+            model_name='schedule',
+            name='enabled',
+            field=models.BooleanField(default=True, help_text='Enables processing of this schedule.'),
+        ),
+        migrations.AlterField(
+            model_name='unifiedjob',
+            name='execution_node',
+            field=models.TextField(default=b'', help_text='The node the job executed on.', editable=False, blank=True),
+        ),
     ]


@@ -218,6 +218,8 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
             organization_groups = []
         if self.inventory is not None:
             inventory_groups = [x for x in self.inventory.instance_groups.all()]
+        else:
+            inventory_groups = []
         selected_groups = inventory_groups + organization_groups
         if not selected_groups:
             return self.global_instance_groups


@@ -20,7 +20,7 @@ from taggit.managers import TaggableManager
 # Django-CRUM
 from crum import get_current_user

-# Ansible Tower
+# AWX
 from awx.main.utils import encrypt_field

 __all__ = ['prevent_search', 'VarsDictProperty', 'BaseModel', 'CreatedModifiedModel',

@@ -52,7 +52,7 @@ PROJECT_UPDATE_JOB_TYPE_CHOICES = [
     (PERM_INVENTORY_CHECK, _('Check')),
 ]

-CLOUD_INVENTORY_SOURCES = ['ec2', 'rax', 'vmware', 'gce', 'azure', 'azure_rm', 'openstack', 'custom', 'satellite6', 'cloudforms']
+CLOUD_INVENTORY_SOURCES = ['ec2', 'rax', 'vmware', 'gce', 'azure', 'azure_rm', 'openstack', 'custom', 'satellite6', 'cloudforms', 'scm',]

 VERBOSITY_CHOICES = [
     (0, '0 (Normal)'),

@@ -225,7 +225,13 @@ class PasswordFieldsModel(BaseModel):
                 saved_value = getattr(self, '_saved_%s' % field, '')
                 setattr(self, field, saved_value)
                 self.mark_field_for_save(update_fields, field)
-        self.save(update_fields=update_fields)
+
+        from awx.main.signals import disable_activity_stream
+        with disable_activity_stream():
+            # We've already got an activity stream record for the object
+            # creation, there's no need to have an extra one for the
+            # secondary save for secrets
+            self.save(update_fields=update_fields)

     def encrypt_field(self, field, ask):
         encrypted = encrypt_field(self, field, ask)
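The `PasswordFieldsModel` hunk above wraps the secondary "save the secrets" call in `disable_activity_stream()`, a context manager that temporarily detaches the activity-stream signal handlers so an internal bookkeeping save does not produce a duplicate audit entry. A hedged sketch of that suppress-while-inside pattern (a simple callback registry stands in for Django's signal machinery; this is not AWX's actual implementation):

```python
# Sketch of the disable_activity_stream idea above: temporarily detach a
# post-save callback so an internal secondary save doesn't emit a second
# activity-stream entry. The registry is a stand-in for Django signals.
from contextlib import contextmanager

activity_stream = []   # recorded audit entries
_post_save_hooks = []  # registered callbacks


def record_activity(obj):
    activity_stream.append(obj)


_post_save_hooks.append(record_activity)


def save(obj):
    # Stand-in for Model.save() firing post_save hooks.
    for hook in list(_post_save_hooks):
        hook(obj)


@contextmanager
def disable_activity_stream():
    # Detach the hook; restore it even if the wrapped save raises.
    _post_save_hooks.remove(record_activity)
    try:
        yield
    finally:
        _post_save_hooks.append(record_activity)
```

The `try/finally` is the important part: a failure during the suppressed save must not leave auditing permanently disabled.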


@@ -19,6 +19,7 @@ from django.utils.encoding import force_text
 # AWX
 from awx.api.versioning import reverse
+from awx.main.constants import PRIVILEGE_ESCALATION_METHODS
 from awx.main.fields import (ImplicitRoleField, CredentialInputField,
                              CredentialTypeInputField,
                              CredentialTypeInjectorField)

@@ -135,15 +136,7 @@ class V1Credential(object):
             max_length=32,
             blank=True,
             default='',
-            choices=[
-                ('', _('None')),
-                ('sudo', _('Sudo')),
-                ('su', _('Su')),
-                ('pbrun', _('Pbrun')),
-                ('pfexec', _('Pfexec')),
-                ('dzdo', _('DZDO')),
-                ('pmrun', _('Pmrun')),
-            ],
+            choices=[('', _('None'))] + PRIVILEGE_ESCALATION_METHODS,
             help_text=_('Privilege escalation method.')
         ),
         'become_username': models.CharField(

@@ -391,7 +384,7 @@ class CredentialType(CommonModelNameNotUnique):
         'VIRTUAL_ENV', 'PATH', 'PYTHONPATH', 'PROOT_TMP_DIR', 'JOB_ID',
         'INVENTORY_ID', 'INVENTORY_SOURCE_ID', 'INVENTORY_UPDATE_ID',
         'AD_HOC_COMMAND_ID', 'REST_API_URL', 'REST_API_TOKEN', 'TOWER_HOST',
-        'MAX_EVENT_RES', 'CALLBACK_QUEUE', 'CALLBACK_CONNECTION', 'CACHE',
+        'AWX_HOST', 'MAX_EVENT_RES', 'CALLBACK_QUEUE', 'CALLBACK_CONNECTION', 'CACHE',
         'JOB_CALLBACK_DEBUG', 'INVENTORY_HOSTVARS', 'FACT_QUEUE',
     ))

@@ -639,7 +632,10 @@ def ssh(cls):
             'type': 'string',
             'secret': True,
             'ask_at_runtime': True
-        }]
+        }],
+        'dependencies': {
+            'ssh_key_unlock': ['ssh_key_data'],
+        }
     }
 )

@@ -672,7 +668,10 @@ def scm(cls):
             'label': 'Private Key Passphrase',
             'type': 'string',
             'secret': True
-        }]
+        }],
+        'dependencies': {
+            'ssh_key_unlock': ['ssh_key_data'],
+        }
     }
 )

@@ -732,7 +731,11 @@ def net(cls):
             'label': 'Authorize Password',
             'type': 'string',
             'secret': True,
-        }]
+        }],
+        'dependencies': {
+            'ssh_key_unlock': ['ssh_key_data'],
+            'authorize_password': ['authorize'],
+        }
     }
 )
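The `dependencies` mappings added to the credential type schemas above express that some inputs only make sense alongside others: a key passphrase (`ssh_key_unlock`) requires key data, and a network `authorize_password` requires `authorize`. The semantics mirror the JSON Schema `dependencies` keyword. A minimal validator sketch of that rule:

```python
# Minimal sketch of the `dependencies` semantics above (modeled on the JSON
# Schema `dependencies` keyword): if a field is supplied, every field it
# depends on must also be supplied.
def validate_dependencies(inputs, dependencies):
    errors = []
    for field, required in dependencies.items():
        if field in inputs:
            for needed in required:
                if needed not in inputs:
                    errors.append("'%s' requires '%s'" % (field, needed))
    return errors


# Dependency map from the `net` credential type in the diff above.
NET_DEPENDENCIES = {
    'ssh_key_unlock': ['ssh_key_data'],
    'authorize_password': ['authorize'],
}
```

Declaring this in the schema (rather than hand-coding checks per credential type) lets one validator cover ssh, scm, and net credentials alike.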


@@ -5,6 +5,8 @@ from django.db import models
 from django.db.models.signals import post_save
 from django.dispatch import receiver
 from django.utils.translation import ugettext_lazy as _
+from django.conf import settings
+from django.utils.timezone import now, timedelta

 from solo.models import SingletonModel

@@ -19,7 +21,7 @@ __all__ = ('Instance', 'InstanceGroup', 'JobOrigin', 'TowerScheduleState',)

 class Instance(models.Model):
-    """A model representing an Ansible Tower instance running against this database."""
+    """A model representing an AWX instance running against this database."""

     objects = InstanceManager()

     uuid = models.CharField(max_length=40)

@@ -51,11 +53,19 @@ class Instance(models.Model):
     @property
     def role(self):
         # NOTE: TODO: Likely to repurpose this once standalone ramparts are a thing
-        return "tower"
+        return "awx"
+
+    def is_lost(self, ref_time=None, isolated=False):
+        if ref_time is None:
+            ref_time = now()
+        grace_period = 120
+        if isolated:
+            grace_period = settings.AWX_ISOLATED_PERIODIC_CHECK * 2
+        return self.modified < ref_time - timedelta(seconds=grace_period)


 class InstanceGroup(models.Model):
-    """A model representing a Queue/Group of Tower Instances."""
+    """A model representing a Queue/Group of AWX Instances."""

     name = models.CharField(max_length=250, unique=True)
     created = models.DateTimeField(auto_now_add=True)
     modified = models.DateTimeField(auto_now=True)
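The new `Instance.is_lost()` above backs the "fail all jobs on an offline node" and "only reap non-netsplit nodes" commits: a node is lost when its heartbeat (the auto-updated `modified` timestamp) is older than a grace period, and isolated nodes, which check in less frequently, get twice their periodic check interval. A self-contained sketch of the same calculation:

```python
# Sketch of Instance.is_lost above: compare the node's last heartbeat to
# a grace period. AWX_ISOLATED_PERIODIC_CHECK is an assumed settings value
# standing in for django.conf.settings here.
from datetime import datetime, timedelta

AWX_ISOLATED_PERIODIC_CHECK = 600  # assumed setting, seconds


def is_lost(modified, ref_time=None, isolated=False):
    if ref_time is None:
        ref_time = datetime.utcnow()
    grace_period = 120
    if isolated:
        # isolated nodes heartbeat on a slower schedule: allow 2x the interval
        grace_period = AWX_ISOLATED_PERIODIC_CHECK * 2
    return modified < ref_time - timedelta(seconds=grace_period)
```

Passing `ref_time` explicitly (as the model method allows) keeps the check deterministic and testable; callers that omit it get "now".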


@@ -16,6 +16,7 @@ from django.utils.translation import ugettext_lazy as _
 from django.db import transaction
 from django.core.exceptions import ValidationError
 from django.utils.timezone import now
+from django.db.models import Q

 # AWX
 from awx.api.versioning import reverse

@@ -29,8 +30,7 @@ from awx.main.fields import (
 from awx.main.managers import HostManager
 from awx.main.models.base import * # noqa
 from awx.main.models.unified_jobs import * # noqa
-from awx.main.models.jobs import Job
-from awx.main.models.mixins import ResourceMixin
+from awx.main.models.mixins import ResourceMixin, TaskManagerInventoryUpdateMixin
 from awx.main.models.notifications import (
     NotificationTemplate,
     JobNotificationMixin,

@@ -335,7 +335,7 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin):
         failed_hosts = active_hosts.filter(has_active_failures=True)
         active_groups = self.groups
         failed_groups = active_groups.filter(has_active_failures=True)
-        active_inventory_sources = self.inventory_sources.filter( source__in=CLOUD_INVENTORY_SOURCES)
+        active_inventory_sources = self.inventory_sources.filter(source__in=CLOUD_INVENTORY_SOURCES)
         failed_inventory_sources = active_inventory_sources.filter(last_job_failed=True)
         computed_fields = {
             'has_active_failures': bool(failed_hosts.count()),

@@ -370,21 +370,24 @@ class Inventory(CommonModelNameNotUnique, ResourceMixin):
         return self.groups.exclude(parents__pk__in=group_pks).distinct()

     def clean_insights_credential(self):
-        if self.kind == 'smart':
+        if self.kind == 'smart' and self.insights_credential:
             raise ValidationError(_("Assignment not allowed for Smart Inventory"))
         if self.insights_credential and self.insights_credential.credential_type.kind != 'insights':
             raise ValidationError(_("Credential kind must be 'insights'."))
         return self.insights_credential

     @transaction.atomic
-    def schedule_deletion(self):
+    def schedule_deletion(self, user_id=None):
         from awx.main.tasks import delete_inventory
+        from awx.main.signals import activity_stream_delete
         if self.pending_deletion is True:
             raise RuntimeError("Inventory is already pending deletion.")
         self.pending_deletion = True
         self.save(update_fields=['pending_deletion'])
+        self.jobtemplates.clear()
+        activity_stream_delete(Inventory, self, inventory_delete_flag=True)
         self.websocket_emit_status('pending_deletion')
-        delete_inventory.delay(self.pk)
+        delete_inventory.delay(self.pk, user_id)

     def _update_host_smart_inventory_memeberships(self):
         if self.kind == 'smart' and settings.AWX_REBUILD_SMART_MEMBERSHIP:

@@ -1058,16 +1061,18 @@ class InventorySourceOptions(BaseModel):
     @classmethod
     def get_ec2_group_by_choices(cls):
         return [
-            ('availability_zone', _('Availability Zone')),
             ('ami_id', _('Image ID')),
+            ('availability_zone', _('Availability Zone')),
+            ('aws_account', _('Account')),
             ('instance_id', _('Instance ID')),
+            ('instance_state', _('Instance State')),
             ('instance_type', _('Instance Type')),
             ('key_pair', _('Key Name')),
             ('region', _('Region')),
             ('security_group', _('Security Group')),
             ('tag_keys', _('Tags')),
-            ('vpc_id', _('VPC ID')),
             ('tag_none', _('Tag None')),
+            ('vpc_id', _('VPC ID')),
         ]

     @classmethod

@@ -1312,7 +1317,7 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions):
         # Schedule a new Project update if one is not already queued
         if self.source_project and not self.source_project.project_updates.filter(
                 status__in=['new', 'pending', 'waiting']).exists():
-            self.source_project.update()
+            self.update()
         if not getattr(_inventory_updates, 'is_updating', False):
             if self.inventory is not None:
                 self.inventory.update_computed_fields(update_groups=False, update_hosts=False)

@@ -1390,8 +1395,36 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions):
                 raise ValidationError(_('Unable to configure this item for cloud sync. It is already managed by %s.') % s)
         return source

+    def clean_update_on_project_update(self):
+        if self.update_on_project_update is True and \
+                self.source == 'scm' and \
+                InventorySource.objects.filter(
+                    Q(inventory=self.inventory,
+                      update_on_project_update=True, source='scm') &
+                    ~Q(id=self.id)).exists():
+            raise ValidationError(_("More than one SCM-based inventory source with update on project update per-inventory not allowed."))
+        return self.update_on_project_update
+
+    def clean_update_on_launch(self):
+        if self.update_on_project_update is True and \
+                self.source == 'scm' and \
+                self.update_on_launch is True:
+            raise ValidationError(_("Cannot update SCM-based inventory source on launch if set to update on project update. "
+                                    "Instead, configure the corresponding source project to update on launch."))
+        return self.update_on_launch
+
+    def clean_overwrite_vars(self):
+        if self.source == 'scm' and not self.overwrite_vars:
+            raise ValidationError(_("SCM type sources must set `overwrite_vars` to `true`."))
+        return self.overwrite_vars
+
+    def clean_source_path(self):
+        if self.source != 'scm' and self.source_path:
+            raise ValidationError(_("Cannot set source_path if not SCM type."))
+        return self.source_path
+

-class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin):
+class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin, TaskManagerInventoryUpdateMixin):
     '''
     Internal job for tracking inventory updates from external sources.
     '''

@@ -1502,26 +1535,14 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin):
             organization_groups = []
         if self.inventory_source.inventory is not None:
             inventory_groups = [x for x in self.inventory_source.inventory.instance_groups.all()]
-        template_groups = [x for x in super(InventoryUpdate, self).preferred_instance_groups]
-        selected_groups = template_groups + inventory_groups + organization_groups
+        selected_groups = inventory_groups + organization_groups
         if not selected_groups:
             return self.global_instance_groups
         return selected_groups

-    def _build_job_explanation(self):
-        if not self.job_explanation:
-            return 'Previous Task Canceled: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % \
-                   (self.model_to_str(), self.name, self.id)
-        return None
-
-    def get_dependent_jobs(self):
-        return Job.objects.filter(dependent_jobs__in=[self.id])
-
-    def cancel(self, job_explanation=None):
-        res = super(InventoryUpdate, self).cancel(job_explanation=job_explanation)
+    def cancel(self, job_explanation=None, is_chain=False):
+        res = super(InventoryUpdate, self).cancel(job_explanation=job_explanation, is_chain=is_chain)
         if res:
-            map(lambda x: x.cancel(job_explanation=self._build_job_explanation()), self.get_dependent_jobs())
             if self.launch_type != 'scm' and self.source_project_update:
                 self.source_project_update.cancel(job_explanation=job_explanation)
         return res
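The new `clean_update_on_project_update` above enforces a uniqueness rule with a query: at most one SCM-based inventory source per inventory may have `update_on_project_update`, and the `~Q(id=self.id)` exclusion is what lets an *edit* of the existing source re-validate against itself. A plain-Python sketch of that check, with dicts standing in for `InventorySource` rows:

```python
# Plain-Python sketch of clean_update_on_project_update above: at most one
# SCM source per inventory may set update_on_project_update, and the check
# must exclude the record being validated (the ~Q(id=self.id) part).
def conflicts(source, existing_sources):
    if not (source['update_on_project_update'] and source['source'] == 'scm'):
        return False
    return any(
        other['inventory'] == source['inventory'] and
        other['update_on_project_update'] and
        other['source'] == 'scm' and
        other['id'] != source['id']
        for other in existing_sources
    )
```

Dropping the `other['id'] != source['id']` clause would make every save of an already-configured source fail its own validation.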


@@ -38,7 +38,7 @@ from awx.main.utils import (
     parse_yaml_or_json,
 )
 from awx.main.fields import ImplicitRoleField
-from awx.main.models.mixins import ResourceMixin, SurveyJobTemplateMixin, SurveyJobMixin
+from awx.main.models.mixins import ResourceMixin, SurveyJobTemplateMixin, SurveyJobMixin, TaskManagerJobMixin
 from awx.main.models.base import PERM_INVENTORY_SCAN
 from awx.main.fields import JSONField

@@ -314,7 +314,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
                 resources_needed_to_start.append('inventory')
                 if not self.ask_inventory_on_launch:
                     validation_errors['inventory'] = [_("Job Template must provide 'inventory' or allow prompting for it."),]
-            if self.credential is None:
+            if self.credential is None and self.vault_credential is None:
                 resources_needed_to_start.append('credential')
                 if not self.ask_credential_on_launch:
                     validation_errors['credential'] = [_("Job Template must provide 'credential' or allow prompting for it."),]

@@ -377,6 +377,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
             verbosity=self.ask_verbosity_on_launch,
             inventory=self.ask_inventory_on_launch,
             credential=self.ask_credential_on_launch,
+            vault_credential=self.ask_credential_on_launch,
             extra_credentials=self.ask_credential_on_launch,
         )

@@ -449,7 +450,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
         return dict(error=list(error_notification_templates), success=list(success_notification_templates), any=list(any_notification_templates))


-class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin):
+class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskManagerJobMixin):
     '''
     A job applies a project (with playbook) to an inventory source with a given
     credential. It represents a single invocation of ansible-playbook with the

@@ -695,7 +696,7 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin):
         if not super(Job, self).can_start:
             return False

-        if not (self.credential):
+        if not (self.credential) and not (self.vault_credential):
             return False

         return True

@@ -704,30 +705,22 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin):
     JobNotificationMixin
     '''
     def get_notification_templates(self):
+        if not self.job_template:
+            return NotificationTemplate.objects.none()
         return self.job_template.notification_templates

     def get_notification_friendly_name(self):
         return "Job"

-    '''
-    Canceling a job also cancels the implicit project update with launch_type
-    run.
-    '''
-    def cancel(self, job_explanation=None):
-        res = super(Job, self).cancel(job_explanation=job_explanation)
-        if self.project_update:
-            self.project_update.cancel(job_explanation=job_explanation)
-        return res
-
     @property
     def memcached_fact_key(self):
         return '{}'.format(self.inventory.id)

     def memcached_fact_host_key(self, host_name):
-        return '{}-{}'.format(self.inventory.id, base64.b64encode(host_name))
+        return '{}-{}'.format(self.inventory.id, base64.b64encode(host_name.encode('utf-8')))

     def memcached_fact_modified_key(self, host_name):
-        return '{}-{}-modified'.format(self.inventory.id, base64.b64encode(host_name))
+        return '{}-{}-modified'.format(self.inventory.id, base64.b64encode(host_name.encode('utf-8')))

     def _get_inventory_hosts(self, only=['name', 'ansible_facts', 'modified',]):
         return self.inventory.hosts.only(*only)
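The memcached key hunk above adds `.encode('utf-8')` before `base64.b64encode` so that non-ASCII host names yield a stable, cache-safe key instead of raising on encoding. A self-contained sketch of the resulting key function (with an explicit `.decode('ascii')` at the end, since on Python 3 `b64encode` returns bytes; the diffed code targets Python 2, where it returns `str`):

```python
# Sketch of the memcached key change above: base64-encode the UTF-8 bytes
# of the host name so any host name maps to a printable, memcached-safe key.
import base64


def memcached_fact_host_key(inventory_id, host_name):
    encoded = base64.b64encode(host_name.encode('utf-8')).decode('ascii')
    return '{}-{}'.format(inventory_id, encoded)
```

Base64 also sidesteps memcached's restrictions on spaces and control characters in keys, which raw host names could otherwise violate.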


@@ -16,7 +16,9 @@ from awx.main.utils import parse_yaml_or_json
from awx.main.fields import JSONField from awx.main.fields import JSONField
__all__ = ['ResourceMixin', 'SurveyJobTemplateMixin', 'SurveyJobMixin'] __all__ = ['ResourceMixin', 'SurveyJobTemplateMixin', 'SurveyJobMixin',
'TaskManagerUnifiedJobMixin', 'TaskManagerJobMixin', 'TaskManagerProjectUpdateMixin',
'TaskManagerInventoryUpdateMixin',]
class ResourceMixin(models.Model): class ResourceMixin(models.Model):
@@ -109,20 +111,29 @@ class SurveyJobTemplateMixin(models.Model):
             vars.append(survey_element['variable'])
         return vars
-    def _update_unified_job_kwargs(self, **kwargs):
+    def _update_unified_job_kwargs(self, create_kwargs, kwargs):
         '''
         Combine extra_vars with variable precedence order:
         JT extra_vars -> JT survey defaults -> runtime extra_vars
+        :param create_kwargs: key-worded arguments to be updated and later used for creating unified job.
+        :type create_kwargs: dict
+        :param kwargs: request parameters used to override unified job template fields with runtime values.
+        :type kwargs: dict
+        :return: modified create_kwargs.
+        :rtype: dict
         '''
         # Job Template extra_vars
         extra_vars = self.extra_vars_dict
+        survey_defaults = {}
         # transform to dict
         if 'extra_vars' in kwargs:
-            kwargs_extra_vars = kwargs['extra_vars']
-            kwargs_extra_vars = parse_yaml_or_json(kwargs_extra_vars)
+            runtime_extra_vars = kwargs['extra_vars']
+            runtime_extra_vars = parse_yaml_or_json(runtime_extra_vars)
         else:
-            kwargs_extra_vars = {}
+            runtime_extra_vars = {}
         # Overwrite with job template extra vars with survey default vars
         if self.survey_enabled and 'spec' in self.survey_spec:
@@ -131,22 +142,23 @@ class SurveyJobTemplateMixin(models.Model):
                 variable_key = survey_element.get('variable')
                 if survey_element.get('type') == 'password':
-                    if variable_key in kwargs_extra_vars and default:
-                        kw_value = kwargs_extra_vars[variable_key]
+                    if variable_key in runtime_extra_vars and default:
+                        kw_value = runtime_extra_vars[variable_key]
                         if kw_value.startswith('$encrypted$') and kw_value != default:
-                            kwargs_extra_vars[variable_key] = default
+                            runtime_extra_vars[variable_key] = default
                 if default is not None:
                     data = {variable_key: default}
                     errors = self._survey_element_validation(survey_element, data)
                     if not errors:
-                        extra_vars[variable_key] = default
+                        survey_defaults[variable_key] = default
+        extra_vars.update(survey_defaults)
         # Overwrite job template extra vars with explicit job extra vars
         # and add on job extra vars
-        extra_vars.update(kwargs_extra_vars)
-        kwargs['extra_vars'] = json.dumps(extra_vars)
-        return kwargs
+        extra_vars.update(runtime_extra_vars)
+        create_kwargs['extra_vars'] = json.dumps(extra_vars)
+        return create_kwargs
     def _survey_element_validation(self, survey_element, data):
         errors = []
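The precedence order the hunk above enforces can be shown with a small standalone sketch (the dict names mirror the patched method; the sample values are made up for illustration):

```python
import json

def merge_extra_vars(jt_extra_vars, survey_defaults, runtime_extra_vars):
    # Precedence, lowest to highest:
    # JT extra_vars -> JT survey defaults -> runtime extra_vars
    extra_vars = dict(jt_extra_vars)
    extra_vars.update(survey_defaults)
    extra_vars.update(runtime_extra_vars)
    return json.dumps(extra_vars)

# 'b' is overridden by the survey default, 'c' by the runtime value.
merged = merge_extra_vars({'a': 1, 'b': 1}, {'b': 2, 'c': 2}, {'c': 3})
```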
@@ -158,13 +170,14 @@ class SurveyJobTemplateMixin(models.Model):
                     errors.append("Value %s for '%s' expected to be a string." % (data[survey_element['variable']],
                                                                                   survey_element['variable']))
                     return errors
-                if 'min' in survey_element and survey_element['min'] not in ["", None] and len(data[survey_element['variable']]) < int(survey_element['min']):
-                    errors.append("'%s' value %s is too small (length is %s must be at least %s)." %
-                                  (survey_element['variable'], data[survey_element['variable']], len(data[survey_element['variable']]), survey_element['min']))
-                if 'max' in survey_element and survey_element['max'] not in ["", None] and len(data[survey_element['variable']]) > int(survey_element['max']):
-                    errors.append("'%s' value %s is too large (must be no more than %s)." %
-                                  (survey_element['variable'], data[survey_element['variable']], survey_element['max']))
+                if not data[survey_element['variable']] == '$encrypted$' and not survey_element['type'] == 'password':
+                    if 'min' in survey_element and survey_element['min'] not in ["", None] and len(data[survey_element['variable']]) < int(survey_element['min']):
+                        errors.append("'%s' value %s is too small (length is %s must be at least %s)." %
+                                      (survey_element['variable'], data[survey_element['variable']], len(data[survey_element['variable']]), survey_element['min']))
+                    if 'max' in survey_element and survey_element['max'] not in ["", None] and len(data[survey_element['variable']]) > int(survey_element['max']):
+                        errors.append("'%s' value %s is too large (must be no more than %s)." %
+                                      (survey_element['variable'], data[survey_element['variable']], survey_element['max']))
         elif survey_element['type'] == 'integer':
             if survey_element['variable'] in data:
                 if type(data[survey_element['variable']]) != int:
@@ -249,3 +262,43 @@ class SurveyJobMixin(models.Model):
             return json.dumps(extra_vars)
         else:
             return self.extra_vars
+class TaskManagerUnifiedJobMixin(models.Model):
+    class Meta:
+        abstract = True
+    def get_jobs_fail_chain(self):
+        return []
+    def dependent_jobs_finished(self):
+        return True
+class TaskManagerJobMixin(TaskManagerUnifiedJobMixin):
+    class Meta:
+        abstract = True
+    def dependent_jobs_finished(self):
+        for j in self.dependent_jobs.all():
+            if j.status in ['pending', 'waiting', 'running']:
+                return False
+        return True
+class TaskManagerUpdateOnLaunchMixin(TaskManagerUnifiedJobMixin):
+    class Meta:
+        abstract = True
+    def get_jobs_fail_chain(self):
+        return list(self.dependent_jobs.all())
+class TaskManagerProjectUpdateMixin(TaskManagerUpdateOnLaunchMixin):
+    class Meta:
+        abstract = True
+class TaskManagerInventoryUpdateMixin(TaskManagerUpdateOnLaunchMixin):
+    class Meta:
+        abstract = True
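The `dependent_jobs_finished` check added above keeps a job out of the dispatch path while any of its dependencies (e.g. a project or inventory update) is still pending, waiting, or running. A hedged stand-in, with the Django queryset replaced by a plain list of statuses for illustration:

```python
ACTIVE_STATUSES = ('pending', 'waiting', 'running')

class DependencyGateSketch(object):
    # Hypothetical sketch of TaskManagerJobMixin.dependent_jobs_finished:
    # the job is dispatchable only once no dependency is still in flight.
    def __init__(self, dependent_statuses):
        self.dependent_statuses = dependent_statuses

    def dependent_jobs_finished(self):
        return not any(s in ACTIVE_STATUSES for s in self.dependent_statuses)
```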

View File

@@ -194,11 +194,11 @@ class JobNotificationMixin(object):
     def _build_notification_message(self, status_str):
         notification_body = self.notification_data()
-        notification_subject = u"{} #{} '{}' {} on Ansible Tower: {}".format(self.get_notification_friendly_name(),
-                                                                            self.id,
-                                                                            self.name,
-                                                                            status_str,
-                                                                            notification_body['url'])
+        notification_subject = u"{} #{} '{}' {}: {}".format(self.get_notification_friendly_name(),
+                                                            self.id,
+                                                            self.name,
+                                                            status_str,
+                                                            notification_body['url'])
         notification_body['friendly_name'] = self.get_notification_friendly_name()
         return (notification_subject, notification_body)

View File

@@ -23,7 +23,7 @@ from awx.main.models.notifications import (
     JobNotificationMixin,
 )
 from awx.main.models.unified_jobs import * # noqa
-from awx.main.models.mixins import ResourceMixin
+from awx.main.models.mixins import ResourceMixin, TaskManagerProjectUpdateMixin
 from awx.main.utils import update_scm_url
 from awx.main.utils.ansible import skip_directory, could_be_inventory, could_be_playbook
 from awx.main.fields import ImplicitRoleField
@@ -377,10 +377,18 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin):
     def _can_update(self):
         return bool(self.scm_type)
-    def _update_unified_job_kwargs(self, **kwargs):
+    def _update_unified_job_kwargs(self, create_kwargs, kwargs):
+        '''
+        :param create_kwargs: key-worded arguments to be updated and later used for creating unified job.
+        :type create_kwargs: dict
+        :param kwargs: request parameters used to override unified job template fields with runtime values.
+        :type kwargs: dict
+        :return: modified create_kwargs.
+        :rtype: dict
+        '''
         if self.scm_delete_on_next_update:
-            kwargs['scm_delete_on_update'] = True
-        return kwargs
+            create_kwargs['scm_delete_on_update'] = True
+        return create_kwargs
     def create_project_update(self, **kwargs):
         return self.create_unified_job(**kwargs)
@@ -430,7 +438,7 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin):
         return reverse('api:project_detail', kwargs={'pk': self.pk}, request=request)
-class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin):
+class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManagerProjectUpdateMixin):
     '''
     Internal job for tracking project updates from SCM.
     '''
@@ -512,8 +520,8 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin):
             update_fields.append('scm_delete_on_next_update')
             parent_instance.save(update_fields=update_fields)
-    def cancel(self, job_explanation=None):
-        res = super(ProjectUpdate, self).cancel(job_explanation=job_explanation)
+    def cancel(self, job_explanation=None, is_chain=False):
+        res = super(ProjectUpdate, self).cancel(job_explanation=job_explanation, is_chain=is_chain)
         if res and self.launch_type != 'sync':
             for inv_src in self.scm_inventory_updates.filter(status='running'):
                 inv_src.cancel(job_explanation='Source project update `{}` was canceled.'.format(self.name))

View File

@@ -30,10 +30,11 @@ from djcelery.models import TaskMeta
 # AWX
 from awx.main.models.base import * # noqa
 from awx.main.models.schedules import Schedule
-from awx.main.models.mixins import ResourceMixin
+from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
 from awx.main.utils import (
     decrypt_field, _inventory_updates,
-    copy_model_by_class, copy_m2m_relationships
+    copy_model_by_class, copy_m2m_relationships,
+    get_type_for_model
 )
 from awx.main.redact import UriCleaner, REPLACE_STR
 from awx.main.consumers import emit_channel_notification
@@ -359,7 +360,9 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
         unified_job.save()
         # Labels and extra credentials copied here
-        copy_m2m_relationships(self, unified_job, fields, kwargs=kwargs)
+        from awx.main.signals import disable_activity_stream
+        with disable_activity_stream():
+            copy_m2m_relationships(self, unified_job, fields, kwargs=kwargs)
         return unified_job
     @classmethod
@@ -414,7 +417,7 @@ class UnifiedJobTypeStringMixin(object):
         return UnifiedJobTypeStringMixin._camel_to_underscore(self.__class__.__name__)
-class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique, UnifiedJobTypeStringMixin):
+class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique, UnifiedJobTypeStringMixin, TaskManagerUnifiedJobMixin):
     '''
     Concrete base class for unified job run by the task engine.
     '''
@@ -620,6 +623,10 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
     def __unicode__(self):
         return u'%s-%s-%s' % (self.created, self.id, self.status)
+    @property
+    def log_format(self):
+        return '{} {} ({})'.format(get_type_for_model(type(self)), self.id, self.status)
     def _get_parent_instance(self):
         return getattr(self, self._get_parent_field_name(), None)
@@ -804,7 +811,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
         try:
             return os.stat(self.result_stdout_file).st_size
         except:
-            return 0
+            return len(self.result_stdout)
     def _result_stdout_raw_limited(self, start_line=0, end_line=None, redact_sensitive=True, escape_ascii=False):
         return_buffer = u""
@@ -905,14 +912,22 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
         return websocket_data
     def _websocket_emit_status(self, status):
-        status_data = dict(unified_job_id=self.id, status=status)
-        status_data.update(self.websocket_emit_data())
-        status_data['group_name'] = 'jobs'
-        emit_channel_notification('jobs-status_changed', status_data)
-        if self.spawned_by_workflow:
-            status_data['group_name'] = "workflow_events"
-            emit_channel_notification('workflow_events-' + str(self.workflow_job_id), status_data)
+        try:
+            status_data = dict(unified_job_id=self.id, status=status)
+            if status == 'waiting':
+                if self.instance_group:
+                    status_data['instance_group_name'] = self.instance_group.name
+                else:
+                    status_data['instance_group_name'] = None
+            status_data.update(self.websocket_emit_data())
+            status_data['group_name'] = 'jobs'
+            emit_channel_notification('jobs-status_changed', status_data)
+            if self.spawned_by_workflow:
+                status_data['group_name'] = "workflow_events"
+                emit_channel_notification('workflow_events-' + str(self.workflow_job_id), status_data)
+        except IOError:  # includes socket errors
+            logger.exception('%s failed to emit channel msg about status change', self.log_format)
     def websocket_emit_status(self, status):
         connection.on_commit(lambda: self._websocket_emit_status(status))
@@ -956,6 +971,15 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
         return (True, opts)
     def start_celery_task(self, opts, error_callback, success_callback, queue):
+        kwargs = {
+            'link_error': error_callback,
+            'link': success_callback,
+            'queue': None,
+            'task_id': None,
+        }
+        if not self.celery_task_id:
+            raise RuntimeError("Expected celery_task_id to be set on model.")
+        kwargs['task_id'] = self.celery_task_id
         task_class = self._get_task_class()
         from awx.main.models.ha import InstanceGroup
         ig = InstanceGroup.objects.get(name=queue)
@@ -966,7 +990,8 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
             args.append(isolated_instance.hostname)
         else:  # proj & inv updates, system jobs run on controller
             queue = ig.controller.name
-        task_class().apply_async(args, opts, link_error=error_callback, link=success_callback, queue=queue)
+        kwargs['queue'] = queue
+        task_class().apply_async(args, opts, **kwargs)
     def start(self, error_callback, success_callback, **kwargs):
         '''
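The hunk above pre-assigns the celery task id on the model before dispatch, so the database row can be matched to the celery task even if the worker picks it up before the dispatcher returns. A standalone sketch of the kwargs assembly (names mirror `start_celery_task`; no celery dependency, the actual `apply_async` call is omitted):

```python
import uuid

def build_dispatch_kwargs(celery_task_id, error_callback=None,
                          success_callback=None, queue=None):
    # Mirrors the kwargs dict assembled in start_celery_task: the task id
    # must already be set on the model, never generated at dispatch time.
    if not celery_task_id:
        raise RuntimeError("Expected celery_task_id to be set on model.")
    return {
        'link_error': error_callback,
        'link': success_callback,
        'queue': queue,
        'task_id': celery_task_id,
    }

kwargs = build_dispatch_kwargs(str(uuid.uuid4()), queue='tower')
```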
@@ -1058,8 +1083,17 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
             if settings.DEBUG:
                 raise
+    def _build_job_explanation(self):
+        if not self.job_explanation:
+            return 'Previous Task Canceled: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % \
+                   (self.model_to_str(), self.name, self.id)
+        return None
-    def cancel(self, job_explanation=None):
+    def cancel(self, job_explanation=None, is_chain=False):
         if self.can_cancel:
+            if not is_chain:
+                map(lambda x: x.cancel(job_explanation=self._build_job_explanation(), is_chain=True), self.get_jobs_fail_chain())
             if not self.cancel_flag:
                 self.cancel_flag = True
                 cancel_fields = ['cancel_flag']
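The `is_chain` flag added above keeps the cancel cascade from recursing: the first `cancel()` walks the fail chain once, and each dependent job is told it is already part of a chain. A minimal, hypothetical stand-in illustrating that control flow (no Django models involved):

```python
class FakeJob(object):
    # Hypothetical stand-in for UnifiedJob's cancel-chain behavior.
    def __init__(self, name, fail_chain=()):
        self.name = name
        self.fail_chain = list(fail_chain)
        self.cancelled = False

    def get_jobs_fail_chain(self):
        return self.fail_chain

    def cancel(self, job_explanation=None, is_chain=False):
        if not is_chain:
            # Only the initiating cancel fans out to the chain.
            for dep in self.get_jobs_fail_chain():
                dep.cancel(job_explanation='Previous Task Canceled', is_chain=True)
        self.cancelled = True

child = FakeJob('job')
parent = FakeJob('project_update', fail_chain=[child])
parent.cancel()
```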

View File

@@ -8,13 +8,13 @@ from django.core.mail.backends.base import BaseEmailBackend
 from django.utils.translation import ugettext_lazy as _
-class TowerBaseEmailBackend(BaseEmailBackend):
+class AWXBaseEmailBackend(BaseEmailBackend):
     def format_body(self, body):
         if "body" in body:
             body_actual = body['body']
         else:
-            body_actual = smart_text(_("{} #{} had status {} on Ansible Tower, view details at {}\n\n").format(
+            body_actual = smart_text(_("{} #{} had status {}, view details at {}\n\n").format(
                 body['friendly_name'], body['id'], body['status'], body['url'])
             )
             body_actual += json.dumps(body, indent=4)

View File

@@ -25,7 +25,7 @@ class CustomEmailBackend(EmailBackend):
         if "body" in body:
             body_actual = body['body']
         else:
-            body_actual = smart_text(_("{} #{} had status {} on Ansible Tower, view details at {}\n\n").format(
+            body_actual = smart_text(_("{} #{} had status {}, view details at {}\n\n").format(
                 body['friendly_name'], body['id'], body['status'], body['url'])
             )
             body_actual += json.dumps(body, indent=4)

View File

@@ -7,12 +7,12 @@ import requests
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
-from awx.main.notifications.base import TowerBaseEmailBackend
+from awx.main.notifications.base import AWXBaseEmailBackend
 logger = logging.getLogger('awx.main.notifications.hipchat_backend')
-class HipChatBackend(TowerBaseEmailBackend):
+class HipChatBackend(AWXBaseEmailBackend):
     init_parameters = {"token": {"label": "Token", "type": "password"},
                        "rooms": {"label": "Destination Rooms", "type": "list"},

View File

@@ -9,12 +9,12 @@ import irc.client
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
-from awx.main.notifications.base import TowerBaseEmailBackend
+from awx.main.notifications.base import AWXBaseEmailBackend
 logger = logging.getLogger('awx.main.notifications.irc_backend')
-class IrcBackend(TowerBaseEmailBackend):
+class IrcBackend(AWXBaseEmailBackend):
     init_parameters = {"server": {"label": "IRC Server Address", "type": "string"},
                        "port": {"label": "IRC Server Port", "type": "int"},

View File

@@ -6,12 +6,12 @@ import pygerduty
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
-from awx.main.notifications.base import TowerBaseEmailBackend
+from awx.main.notifications.base import AWXBaseEmailBackend
 logger = logging.getLogger('awx.main.notifications.pagerduty_backend')
-class PagerDutyBackend(TowerBaseEmailBackend):
+class PagerDutyBackend(AWXBaseEmailBackend):
     init_parameters = {"subdomain": {"label": "Pagerduty subdomain", "type": "string"},
                        "token": {"label": "API Token", "type": "password"},

View File

@@ -6,12 +6,12 @@ from slackclient import SlackClient
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
-from awx.main.notifications.base import TowerBaseEmailBackend
+from awx.main.notifications.base import AWXBaseEmailBackend
 logger = logging.getLogger('awx.main.notifications.slack_backend')
-class SlackBackend(TowerBaseEmailBackend):
+class SlackBackend(AWXBaseEmailBackend):
     init_parameters = {"token": {"label": "Token", "type": "password"},
                        "channels": {"label": "Destination Channels", "type": "list"}}

View File

@@ -7,12 +7,12 @@ from twilio.rest import Client
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
-from awx.main.notifications.base import TowerBaseEmailBackend
+from awx.main.notifications.base import AWXBaseEmailBackend
 logger = logging.getLogger('awx.main.notifications.twilio_backend')
-class TwilioBackend(TowerBaseEmailBackend):
+class TwilioBackend(AWXBaseEmailBackend):
     init_parameters = {"account_sid": {"label": "Account SID", "type": "string"},
                        "account_token": {"label": "Account Token", "type": "password"},

View File

@@ -6,13 +6,13 @@ import requests
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
-from awx.main.notifications.base import TowerBaseEmailBackend
+from awx.main.notifications.base import AWXBaseEmailBackend
 from awx.main.utils import get_awx_version
 logger = logging.getLogger('awx.main.notifications.webhook_backend')
-class WebhookBackend(TowerBaseEmailBackend):
+class WebhookBackend(AWXBaseEmailBackend):
     init_parameters = {"url": {"label": "Target URL", "type": "string"},
                        "headers": {"label": "HTTP Headers", "type": "object"}}

View File

@@ -4,21 +4,23 @@
 # Python
 from datetime import datetime, timedelta
 import logging
-import json
+import uuid
 from sets import Set
 # Django
 from django.conf import settings
 from django.core.cache import cache
-from django.db import transaction, connection
+from django.db import transaction, connection, DatabaseError
 from django.utils.translation import ugettext_lazy as _
 from django.utils.timezone import now as tz_now, utc
+from django.db.models import Q
 # AWX
 from awx.main.models import * # noqa
 #from awx.main.scheduler.dag_simple import SimpleDAG
 from awx.main.scheduler.dag_workflow import WorkflowDAG
 from awx.main.utils.pglock import advisory_lock
+from awx.main.utils import get_type_for_model
 from awx.main.signals import disable_activity_stream
 from awx.main.scheduler.dependency_graph import DependencyGraph
@@ -47,6 +49,10 @@ class TaskManager():
         for g in self.graph:
             if self.graph[g]['graph'].is_job_blocked(task):
                 return True
+        if not task.dependent_jobs_finished():
+            return True
         return False
     def get_tasks(self, status_list=('pending', 'waiting', 'running')):
@@ -61,32 +67,45 @@ class TaskManager():
                            key=lambda task: task.created)
         return all_tasks
-    @classmethod
-    def get_node_type(cls, obj):
-        if type(obj) == Job:
-            return "job"
-        elif type(obj) == AdHocCommand:
-            return "ad_hoc_command"
-        elif type(obj) == InventoryUpdate:
-            return "inventory_update"
-        elif type(obj) == ProjectUpdate:
-            return "project_update"
-        elif type(obj) == SystemJob:
-            return "system_job"
-        elif type(obj) == WorkflowJob:
-            return "workflow_job"
-        return "unknown"
     '''
     Tasks that are running and SHOULD have a celery task.
+    {
+        'execution_node': [j1, j2,...],
+        'execution_node': [j3],
+        ...
+    }
     '''
-    def get_running_tasks(self, all_tasks=None):
-        if all_tasks is None:
-            return self.get_tasks(status_list=('running',))
-        return filter(lambda t: t.status == 'running', all_tasks)
+    def get_running_tasks(self):
+        execution_nodes = {}
+        now = tz_now()
+        jobs = UnifiedJob.objects.filter(Q(status='running') |
+                                         Q(status='waiting', modified__lte=now - timedelta(seconds=60)))
+        [execution_nodes.setdefault(j.execution_node, [j]).append(j) for j in jobs]
+        return execution_nodes
     '''
     Tasks that are currently running in celery
+    Transform:
+    {
+        "celery@ec2-54-204-222-62.compute-1.amazonaws.com": [],
+        "celery@ec2-54-163-144-168.compute-1.amazonaws.com": [{
+            ...
+            "id": "5238466a-f8c7-43b3-9180-5b78e9da8304",
+            ...
+        }, {
+            ...,
+        }, ...]
+    }
+    to:
+    {
+        "ec2-54-204-222-62.compute-1.amazonaws.com": [
+            "5238466a-f8c7-43b3-9180-5b78e9da8304",
+            "5238466a-f8c7-43b3-9180-5b78e9da8306",
+            ...
+        ]
+    }
     '''
def get_active_tasks(self): def get_active_tasks(self):
inspector = inspect() inspector = inspect()
@@ -96,15 +115,23 @@ class TaskManager():
             logger.warn("Ignoring celery task inspector")
             active_task_queues = None
-        active_tasks = set()
+        queues = None
         if active_task_queues is not None:
+            queues = {}
             for queue in active_task_queues:
+                active_tasks = set()
                 map(lambda at: active_tasks.add(at['id']), active_task_queues[queue])
+                # celery worker name is of the form celery@myhost.com
+                queue_name = queue.split('@')
+                queue_name = queue_name[1 if len(queue_name) > 1 else 0]
+                queues[queue_name] = active_tasks
         else:
             if not hasattr(settings, 'CELERY_UNIT_TEST'):
                 return (None, None)
-        return (active_task_queues, active_tasks)
+        return (active_task_queues, queues)
     def get_latest_project_update_tasks(self, all_sorted_tasks):
         project_ids = Set()
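The worker-name handling added in the hunk above can be isolated into a tiny helper (a sketch mirroring the split logic; the sample hostname comes from the docstring above):

```python
def queue_hostname(worker_name):
    # Celery worker names look like 'celery@myhost.com'; keep the hostname
    # part, falling back to the whole string when there is no '@'.
    parts = worker_name.split('@')
    return parts[1 if len(parts) > 1 else 0]

hostname = queue_hostname('celery@ec2-54-204-222-62.compute-1.amazonaws.com')
```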
@@ -157,7 +184,7 @@ class TaskManager():
             job.save(update_fields=['status', 'job_explanation'])
             connection.on_commit(lambda: job.websocket_emit_status('failed'))
-            # TODO: should we emit a status on the socket here similar to tasks.py tower_periodic_scheduler() ?
+            # TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
             #emit_websocket_notification('/socket.io/jobs', '', dict(id=))
     # See comment in tasks.py::RunWorkflowJob::run()
@@ -187,25 +214,14 @@ class TaskManager():
         from awx.main.tasks import handle_work_error, handle_work_success
         task_actual = {
-            'type':self.get_node_type(task),
+            'type': get_type_for_model(type(task)),
             'id': task.id,
         }
-        dependencies = [{'type': self.get_node_type(t), 'id': t.id} for t in dependent_tasks]
+        dependencies = [{'type': get_type_for_model(type(t)), 'id': t.id} for t in dependent_tasks]
         error_handler = handle_work_error.s(subtasks=[task_actual] + dependencies)
         success_handler = handle_work_success.s(task_actual=task_actual)
-        '''
-        This is to account for when there isn't enough capacity to execute all
-        dependent jobs (i.e. proj or inv update) within the same schedule()
-        call.
-        Proceeding calls to schedule() need to recontruct the proj or inv
-        update -> job fail logic dependency. The below call recontructs that
-        failure dependency.
-        '''
-        if len(dependencies) == 0:
-            dependencies = self.get_dependent_jobs_for_inv_and_proj_update(task)
         task.status = 'waiting'
         (start_status, opts) = task.pre_start()
@@ -222,12 +238,13 @@ class TaskManager():
         if not task.supports_isolation() and rampart_group.controller_id:
             # non-Ansible jobs on isolated instances run on controller
             task.instance_group = rampart_group.controller
-            logger.info('Submitting isolated job {} to queue {} via {}.'.format(
-                task.id, task.instance_group_id, rampart_group.controller_id))
+            logger.info('Submitting isolated %s to queue %s via %s.',
+                        task.log_format, task.instance_group_id, rampart_group.controller_id)
         else:
             task.instance_group = rampart_group
-            logger.info('Submitting job {} to instance group {}.'.format(task.id, task.instance_group_id))
+            logger.info('Submitting %s to instance group %s.', task.log_format, task.instance_group_id)
         with disable_activity_stream():
+            task.celery_task_id = str(uuid.uuid4())
             task.save()
         self.consume_capacity(task, rampart_group.name)
@@ -262,11 +279,12 @@ class TaskManager():
         return inventory_task
     def capture_chain_failure_dependencies(self, task, dependencies):
-        for dep in dependencies:
-            with disable_activity_stream():
-                logger.info('Adding unified job {} to list of dependencies of {}.'.format(task.id, dep.id))
-                dep.dependent_jobs.add(task.id)
-                dep.save()
+        with disable_activity_stream():
+            task.dependent_jobs.add(*dependencies)
+            for dep in dependencies:
+                # Add task + all deps except self
+                dep.dependent_jobs.add(*([task] + filter(lambda d: d != dep, dependencies)))
     def should_update_inventory_source(self, job, inventory_source):
         now = tz_now()
@@ -342,48 +360,52 @@ class TaskManager():
             if self.should_update_inventory_source(task, inventory_source):
                 inventory_task = self.create_inventory_update(task, inventory_source)
                 dependencies.append(inventory_task)
-        self.capture_chain_failure_dependencies(task, dependencies)
+        if len(dependencies) > 0:
+            self.capture_chain_failure_dependencies(task, dependencies)
         return dependencies
 
     def process_dependencies(self, dependent_task, dependency_tasks):
         for task in dependency_tasks:
             if self.is_job_blocked(task):
-                logger.debug("Dependent task {} is blocked from running".format(task))
+                logger.debug("Dependent %s is blocked from running", task.log_format)
                 continue
             preferred_instance_groups = task.preferred_instance_groups
             found_acceptable_queue = False
             for rampart_group in preferred_instance_groups:
                 if self.get_remaining_capacity(rampart_group.name) <= 0:
-                    logger.debug("Skipping group {} capacity <= 0".format(rampart_group.name))
+                    logger.debug("Skipping group %s capacity <= 0", rampart_group.name)
                     continue
                 if not self.would_exceed_capacity(task, rampart_group.name):
-                    logger.debug("Starting dependent task {} in group {}".format(task, rampart_group.name))
+                    logger.debug("Starting dependent %s in group %s", task.log_format, rampart_group.name)
                     self.graph[rampart_group.name]['graph'].add_job(task)
-                    self.start_task(task, rampart_group, dependency_tasks)
+                    tasks_to_fail = filter(lambda t: t != task, dependency_tasks)
+                    tasks_to_fail += [dependent_task]
+                    self.start_task(task, rampart_group, tasks_to_fail)
                     found_acceptable_queue = True
             if not found_acceptable_queue:
-                logger.debug("Dependent task {} couldn't be scheduled on graph, waiting for next cycle".format(task))
+                logger.debug("Dependent %s couldn't be scheduled on graph, waiting for next cycle", task.log_format)
 
     def process_pending_tasks(self, pending_tasks):
         for task in pending_tasks:
             self.process_dependencies(task, self.generate_dependencies(task))
             if self.is_job_blocked(task):
-                logger.debug("Task {} is blocked from running".format(task))
+                logger.debug("%s is blocked from running", task.log_format)
                 continue
             preferred_instance_groups = task.preferred_instance_groups
             found_acceptable_queue = False
             for rampart_group in preferred_instance_groups:
                 if self.get_remaining_capacity(rampart_group.name) <= 0:
-                    logger.debug("Skipping group {} capacity <= 0".format(rampart_group.name))
+                    logger.debug("Skipping group %s capacity <= 0", rampart_group.name)
                     continue
                 if not self.would_exceed_capacity(task, rampart_group.name):
-                    logger.debug("Starting task {} in group {}".format(task, rampart_group.name))
+                    logger.debug("Starting %s in group %s", task.log_format, rampart_group.name)
                     self.graph[rampart_group.name]['graph'].add_job(task)
-                    self.start_task(task, rampart_group)
+                    self.start_task(task, rampart_group, task.get_jobs_fail_chain())
                     found_acceptable_queue = True
                     break
             if not found_acceptable_queue:
-                logger.debug("Task {} couldn't be scheduled on graph, waiting for next cycle".format(task))
+                logger.debug("%s couldn't be scheduled on graph, waiting for next cycle", task.log_format)
 
     def cleanup_inconsistent_celery_tasks(self):
         '''
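Both `process_dependencies` and `process_pending_tasks` follow the same greedy pattern: walk the task's preferred instance groups in order and start it on the first group whose remaining capacity can absorb it, otherwise leave it for the next scheduler cycle. That core loop, reduced to plain Python 3 with capacity as a simple dict (a sketch, not the real `TaskManager`):

```python
def schedule(task_impact, preferred_groups, capacity):
    """Greedy single-task pass: return the group the task starts on,
    consuming capacity, or None if no group can take it this cycle."""
    for group in preferred_groups:
        remaining = capacity.get(group, 0)
        if remaining <= 0:
            continue  # "Skipping group ... capacity <= 0"
        if task_impact <= remaining:
            capacity[group] -= task_impact  # consume_capacity
            return group
    return None  # no acceptable queue; task waits for the next cycle

capacity = {"tower": 3, "special": 10}
first = schedule(5, ["tower", "special"], capacity)   # skips "tower", lands on "special"
second = schedule(9, ["tower", "special"], capacity)  # nothing fits any more
```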
@@ -395,33 +417,55 @@ class TaskManager():
         logger.debug("Failing inconsistent running jobs.")
         celery_task_start_time = tz_now()
-        active_task_queues, active_tasks = self.get_active_tasks()
+        active_task_queues, active_queues = self.get_active_tasks()
+        cache.set("active_celery_tasks", json.dumps(active_task_queues))
         cache.set('last_celery_task_cleanup', tz_now())
 
-        if active_tasks is None:
+        if active_queues is None:
             logger.error('Failed to retrieve active tasks from celery')
             return None
 
-        all_running_sorted_tasks = self.get_running_tasks()
-        for task in all_running_sorted_tasks:
-            if (task.celery_task_id not in active_tasks and not hasattr(settings, 'IGNORE_CELERY_INSPECTOR')):
-                # TODO: try catch the getting of the job. The job COULD have been deleted
-                if isinstance(task, WorkflowJob):
-                    continue
-                if task.modified > celery_task_start_time:
-                    continue
-                task.status = 'failed'
-                task.job_explanation += ' '.join((
-                    'Task was marked as running in Tower but was not present in',
-                    'Celery, so it has been marked as failed.',
-                ))
-                task.save()
-                awx_tasks._send_notification_templates(task, 'failed')
-                task.websocket_emit_status('failed')
-                logger.error("Task %s appears orphaned... marking as failed" % task)
+        '''
+        Only consider failing tasks on instances for which we obtained a task
+        list from celery for.
+        '''
+        running_tasks = self.get_running_tasks()
+        for node, node_jobs in running_tasks.iteritems():
+            if node in active_queues:
+                active_tasks = active_queues[node]
+            else:
+                '''
+                Node task list not found in celery. If tower thinks the node is down
+                then fail all the jobs on the node.
+                '''
+                try:
+                    instance = Instance.objects.get(hostname=node)
+                    if instance.capacity == 0:
+                        active_tasks = []
+                    else:
+                        continue
+                except Instance.DoesNotExist:
+                    logger.error("Execution node Instance {} not found in database. "
+                                 "The node is currently executing jobs {}".format(node, [str(j) for j in node_jobs]))
+                    active_tasks = []
+
+            for task in node_jobs:
+                if (task.celery_task_id not in active_tasks and not hasattr(settings, 'IGNORE_CELERY_INSPECTOR')):
+                    if isinstance(task, WorkflowJob):
+                        continue
+                    if task.modified > celery_task_start_time:
+                        continue
+                    task.status = 'failed'
+                    task.job_explanation += ' '.join((
+                        'Task was marked as running in Tower but was not present in',
+                        'Celery, so it has been marked as failed.',
+                    ))
+                    try:
+                        task.save(update_fields=['status', 'job_explanation'])
+                    except DatabaseError:
+                        logger.error("Task {} DB error in marking failed. Job possibly deleted.".format(task.log_format))
+                        continue
+                    awx_tasks._send_notification_templates(task, 'failed')
+                    task.websocket_emit_status('failed')
+                    logger.error("Task {} has no record in celery. Marking as failed".format(task.log_format))
 
     def calculate_capacity_used(self, tasks):
         for rampart_group in self.graph:
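The rewritten cleanup no longer compares every running job against one flat celery task list; it groups running jobs by execution node and only fails jobs on nodes it can actually reason about (the node reported a task list, or Tower already considers the node offline via `capacity == 0`). The "fail all jobs on an offline node" decision, as a self-contained sketch with plain dicts instead of the ORM and celery inspect (names are illustrative):

```python
def find_orphaned_jobs(running_by_node, active_queues, node_capacity):
    """running_by_node: {node: [(job_id, celery_task_id), ...]} from the DB.
    active_queues:      {node: set of celery task ids} from celery inspect.
    node_capacity:      {node: capacity}; capacity 0 marks an offline node.
    Returns the job ids that should be marked failed."""
    orphans = []
    for node, jobs in running_by_node.items():
        if node in active_queues:
            active = active_queues[node]
        elif node_capacity.get(node, 0) == 0:
            # Node gave no task list and looks down (or is unknown to the
            # database): fail everything recorded as running there.
            active = set()
        else:
            # Node is up but celery returned nothing for it; don't guess.
            continue
        orphans.extend(jid for jid, tid in jobs if tid not in active)
    return orphans

orphans = find_orphaned_jobs(
    {"node-a": [(1, "t1"), (2, "t2")], "node-b": [(3, "t3")], "node-c": [(4, "t4")]},
    {"node-a": {"t1"}},
    {"node-a": 100, "node-b": 0, "node-c": 50},
)
```

Job 2 is orphaned on a healthy node, job 3 sits on an offline node, and job 4 is left alone because its node is up but unreported.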

View File

@@ -13,7 +13,7 @@ from django.db.models.signals import post_save, pre_delete, post_delete, m2m_cha
 from django.dispatch import receiver
 
 # Django-CRUM
-from crum import get_current_request
+from crum import get_current_request, get_current_user
 from crum.signals import current_user_getter
 
 # AWX
@@ -34,6 +34,13 @@ logger = logging.getLogger('awx.main.signals')
 # when a Host-Group or Group-Group relationship is updated, or when a Job is deleted
 
 
+def get_current_user_or_none():
+    u = get_current_user()
+    if not isinstance(u, User):
+        return None
+    return u
+
+
 def emit_job_event_detail(sender, **kwargs):
     instance = kwargs['instance']
     created = kwargs['created']
@@ -385,7 +392,8 @@ def activity_stream_create(sender, instance, created, **kwargs):
         activity_entry = ActivityStream(
             operation='create',
             object1=object1,
-            changes=json.dumps(changes))
+            changes=json.dumps(changes),
+            actor=get_current_user_or_none())
         activity_entry.save()
         #TODO: Weird situation where cascade SETNULL doesn't work
         # it might actually be a good idea to remove all of these FK references since
@@ -412,7 +420,8 @@ def activity_stream_update(sender, instance, **kwargs):
     activity_entry = ActivityStream(
         operation='update',
         object1=object1,
-        changes=json.dumps(changes))
+        changes=json.dumps(changes),
+        actor=get_current_user_or_none())
     activity_entry.save()
     if instance._meta.model_name != 'setting':  # Is not conf.Setting instance
         getattr(activity_entry, object1).add(instance)
@@ -425,12 +434,19 @@ def activity_stream_delete(sender, instance, **kwargs):
     # Skip recording any inventory source directly associated with a group.
     if isinstance(instance, InventorySource) and instance.deprecated_group:
         return
+    # Inventory delete happens in the task system rather than request-response-cycle.
+    # If we trigger this handler there we may fall into db-integrity-related race conditions.
+    # So we add flag verification to prevent normal signal handling. This function will be
+    # explicitly called with flag on in Inventory.schedule_deletion.
+    if isinstance(instance, Inventory) and not kwargs.get('inventory_delete_flag', False):
+        return
     changes = model_to_dict(instance)
     object1 = camelcase_to_underscore(instance.__class__.__name__)
     activity_entry = ActivityStream(
         operation='delete',
         changes=json.dumps(changes),
-        object1=object1)
+        object1=object1,
+        actor=get_current_user_or_none())
     activity_entry.save()
@@ -477,7 +493,8 @@ def activity_stream_associate(sender, instance, **kwargs):
                 operation=action,
                 object1=object1,
                 object2=object2,
-                object_relationship_type=obj_rel)
+                object_relationship_type=obj_rel,
+                actor=get_current_user_or_none())
             activity_entry.save()
             getattr(activity_entry, object1).add(obj1)
             getattr(activity_entry, object2).add(obj2_actual)
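Every activity-stream handler now records an `actor`, resolved from django-crum's thread-local current user and filtered through `get_current_user_or_none` so that `AnonymousUser` or an unset thread-local never ends up as a foreign key. A sketch of that pattern (plain Python 3; the thread-local and tiny `User` class stand in for django-crum and Django's user model):

```python
import threading

_request_state = threading.local()


class User:
    def __init__(self, name):
        self.name = name


class AnonymousUser:
    pass


def set_current_user(user):
    # In django-crum this is done by middleware at the start of a request.
    _request_state.user = user


def get_current_user_or_none():
    # Only a real User may be recorded as the actor; AnonymousUser,
    # system tasks, and an unset thread-local all collapse to None.
    u = getattr(_request_state, "user", None)
    return u if isinstance(u, User) else None


def record_activity(operation, changes):
    return {"operation": operation, "changes": changes,
            "actor": get_current_user_or_none()}
```

The `isinstance` check is what keeps background-task writes (no request, no user) from crashing or mis-attributing the entry.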
@@ -515,8 +532,9 @@ def get_current_user_from_drf_request(sender, **kwargs):
 @receiver(pre_delete, sender=Organization)
 def delete_inventory_for_org(sender, instance, **kwargs):
     inventories = Inventory.objects.filter(organization__pk=instance.pk)
+    user = get_current_user_or_none()
     for inventory in inventories:
         try:
-            inventory.schedule_deletion()
+            inventory.schedule_deletion(user_id=getattr(user, 'id', None))
         except RuntimeError, e:
             logger.debug(e)

View File

@@ -29,11 +29,11 @@ except:
 # Celery
 from celery import Task, task
-from celery.signals import celeryd_init, worker_process_init
+from celery.signals import celeryd_init, worker_process_init, worker_shutdown
 
 # Django
 from django.conf import settings
-from django.db import transaction, DatabaseError, IntegrityError
+from django.db import transaction, DatabaseError, IntegrityError, OperationalError
 from django.utils.timezone import now, timedelta
 from django.utils.encoding import smart_str
 from django.core.mail import send_mail
@@ -42,17 +42,21 @@ from django.utils.translation import ugettext_lazy as _
 from django.core.cache import cache
 from django.core.exceptions import ObjectDoesNotExist
 
+# Django-CRUM
+from crum import impersonate
+
 # AWX
-from awx import __version__ as tower_application_version
-from awx.main.constants import CLOUD_PROVIDERS
+from awx import __version__ as awx_application_version
+from awx.main.constants import CLOUD_PROVIDERS, PRIVILEGE_ESCALATION_METHODS
 from awx.main.models import *  # noqa
 from awx.main.models.unified_jobs import ACTIVE_STATES
 from awx.main.queue import CallbackQueueDispatcher
-from awx.main.isolated import run, isolated_manager
+from awx.main.expect import run, isolated_manager
 from awx.main.utils import (get_ansible_version, get_ssh_version, decrypt_field, update_scm_url,
                             check_proot_installed, build_proot_temp_dir, get_licenser,
                             wrap_args_with_proot, get_system_task_capacity, OutputEventFilter,
-                            parse_yaml_or_json, ignore_inventory_computed_fields, ignore_inventory_group_removal)
+                            parse_yaml_or_json, ignore_inventory_computed_fields, ignore_inventory_group_removal,
+                            get_type_for_model)
 from awx.main.utils.reload import restart_local_services, stop_local_services
 from awx.main.utils.handlers import configure_external_logger
 from awx.main.consumers import emit_channel_notification
@@ -74,6 +78,17 @@ Try upgrading OpenSSH or providing your private key in an different format. \
 logger = logging.getLogger('awx.main.tasks')
 
 
+class LogErrorsTask(Task):
+    def on_failure(self, exc, task_id, args, kwargs, einfo):
+        if isinstance(self, BaseTask):
+            logger.exception(
+                '%s %s execution encountered exception.',
+                get_type_for_model(self.model), args[0])
+        else:
+            logger.exception('Task {} encountered exception.'.format(self.name), exc_info=exc)
+        super(LogErrorsTask, self).on_failure(exc, task_id, args, kwargs, einfo)
+
+
 @celeryd_init.connect
 def celery_startup(conf=None, **kwargs):
     # Re-init all schedules
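`LogErrorsTask` becomes the shared base for nearly every `@task` below: it hooks celery's `on_failure` so any unhandled exception is logged before the failure propagates. A reduced sketch of the mechanism (plain Python 3; the `Task` class here is a stand-in for `celery.Task`, not the real thing):

```python
import logging

logger = logging.getLogger("awx.main.tasks")


class Task:
    """Minimal stand-in for celery.Task: run() does the work and
    on_failure() is invoked with the exception before it propagates."""
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        pass

    def apply(self, *args, **kwargs):
        try:
            return self.run(*args, **kwargs)
        except Exception as exc:
            self.on_failure(exc, "fake-task-id", args, kwargs, None)
            raise


class LogErrorsTask(Task):
    """Log every unhandled exception, then defer to the base handler,
    so no task failure disappears silently."""
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        logger.exception("Task %s encountered exception.", type(self).__name__)
        super().on_failure(exc, task_id, args, kwargs, einfo)


class BrokenTask(LogErrorsTask):
    def run(self):
        raise ValueError("boom")
```

Note the exception still re-raises after logging; the base class only adds visibility, it does not swallow failures.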
@@ -86,17 +101,34 @@ def celery_startup(conf=None, **kwargs):
             from awx.main.signals import disable_activity_stream
             with disable_activity_stream():
                 sch.save()
-        except Exception as e:
-            logger.error("Failed to rebuild schedule {}: {}".format(sch, e))
+        except:
+            logger.exception("Failed to rebuild schedule {}.".format(sch))
 
 
 @worker_process_init.connect
 def task_set_logger_pre_run(*args, **kwargs):
-    cache.close()
-    configure_external_logger(settings, is_startup=False)
+    try:
+        cache.close()
+        configure_external_logger(settings, is_startup=False)
+    except:
+        # General exception because LogErrorsTask not used with celery signals
+        logger.exception('Encountered error on initial log configuration.')
 
 
-@task(queue='tower_broadcast_all', bind=True)
+@worker_shutdown.connect
+def inform_cluster_of_shutdown(*args, **kwargs):
+    try:
+        this_inst = Instance.objects.get(hostname=settings.CLUSTER_HOST_ID)
+        this_inst.capacity = 0  # No thank you to new jobs while shut down
+        this_inst.save(update_fields=['capacity', 'modified'])
+        logger.warning('Normal shutdown signal for instance {}, '
+                       'removed self from capacity pool.'.format(this_inst.hostname))
+    except:
+        # General exception because LogErrorsTask not used with celery signals
+        logger.exception('Encountered problem with normal shutdown signal.')
+
+
+@task(queue='tower_broadcast_all', bind=True, base=LogErrorsTask)
 def handle_setting_changes(self, setting_keys):
     orig_len = len(setting_keys)
     for i in range(orig_len):
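The new `worker_shutdown` handler and the heartbeat cooperate on a simple capacity lifecycle: a clean shutdown zeroes this node's capacity so the task manager stops routing jobs to it, and the next heartbeat after restart restores it ("Rejoining the cluster"). The handshake in miniature (a sketch with a dict instead of the `Instance` model):

```python
def on_worker_shutdown(inst):
    """worker_shutdown handler: zero our capacity so the task manager
    stops routing jobs here while the process goes away."""
    inst["capacity"] = 0


def heartbeat(inst, measured_capacity):
    """Periodic heartbeat: restore capacity and report whether this
    amounts to rejoining the cluster after a shutdown."""
    rejoining = inst["capacity"] == 0
    inst["capacity"] = measured_capacity
    return rejoining


node = {"hostname": "awx-1", "capacity": 8}
on_worker_shutdown(node)       # capacity -> 0, no new jobs land here
rejoined = heartbeat(node, 8)  # next heartbeat restores capacity
```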
@@ -109,11 +141,11 @@ def handle_setting_changes(self, setting_keys):
     cache.delete_many(cache_keys)
     for key in cache_keys:
         if key.startswith('LOG_AGGREGATOR_'):
-            restart_local_services(['uwsgi', 'celery', 'beat', 'callback', 'fact'])
+            restart_local_services(['uwsgi', 'celery', 'beat', 'callback'])
             break
 
 
-@task(queue='tower')
+@task(queue='tower', base=LogErrorsTask)
 def send_notifications(notification_list, job_id=None):
     if not isinstance(notification_list, list):
         raise TypeError("notification_list should be of type list")
@@ -137,7 +169,7 @@ def send_notifications(notification_list, job_id=None):
notification.save() notification.save()
@task(bind=True, queue='tower') @task(bind=True, queue='tower', base=LogErrorsTask)
def run_administrative_checks(self): def run_administrative_checks(self):
logger.warn("Running administrative checks.") logger.warn("Running administrative checks.")
if not settings.TOWER_ADMIN_ALERTS: if not settings.TOWER_ADMIN_ALERTS:
@@ -159,13 +191,13 @@ def run_administrative_checks(self):
fail_silently=True) fail_silently=True)
@task(bind=True, queue='tower') @task(bind=True, queue='tower', base=LogErrorsTask)
def cleanup_authtokens(self): def cleanup_authtokens(self):
logger.warn("Cleaning up expired authtokens.") logger.warn("Cleaning up expired authtokens.")
AuthToken.objects.filter(expires__lt=now()).delete() AuthToken.objects.filter(expires__lt=now()).delete()
@task(bind=True) @task(bind=True, base=LogErrorsTask)
def purge_old_stdout_files(self): def purge_old_stdout_files(self):
nowtime = time.time() nowtime = time.time()
for f in os.listdir(settings.JOBOUTPUT_ROOT): for f in os.listdir(settings.JOBOUTPUT_ROOT):
@@ -174,40 +206,61 @@ def purge_old_stdout_files(self):
             logger.info("Removing {}".format(os.path.join(settings.JOBOUTPUT_ROOT, f)))
 
 
-@task(bind=True)
+@task(bind=True, base=LogErrorsTask)
 def cluster_node_heartbeat(self):
     logger.debug("Cluster node heartbeat task.")
     nowtime = now()
-    inst = Instance.objects.filter(hostname=settings.CLUSTER_HOST_ID)
-    if inst.exists():
-        inst = inst[0]
-        inst.capacity = get_system_task_capacity()
-        inst.version = tower_application_version
-        inst.save()
+    instance_list = list(Instance.objects.filter(rampart_groups__controller__isnull=True).distinct())
+    this_inst = None
+    lost_instances = []
+    for inst in list(instance_list):
+        if inst.hostname == settings.CLUSTER_HOST_ID:
+            this_inst = inst
+            instance_list.remove(inst)
+        elif inst.is_lost(ref_time=nowtime):
+            lost_instances.append(inst)
+            instance_list.remove(inst)
+    if this_inst:
+        startup_event = this_inst.is_lost(ref_time=nowtime)
+        if this_inst.capacity == 0:
+            logger.warning('Rejoining the cluster as instance {}.'.format(this_inst.hostname))
+        this_inst.capacity = get_system_task_capacity()
+        this_inst.version = awx_application_version
+        this_inst.save(update_fields=['capacity', 'version', 'modified'])
+        if startup_event:
+            return
     else:
         raise RuntimeError("Cluster Host Not Found: {}".format(settings.CLUSTER_HOST_ID))
-    recent_inst = Instance.objects.filter(modified__gt=nowtime - timedelta(seconds=70)).exclude(hostname=settings.CLUSTER_HOST_ID)
     # IFF any node has a greater version than we do, then we'll shutdown services
-    for other_inst in recent_inst:
+    for other_inst in instance_list:
         if other_inst.version == "":
             continue
-        if Version(other_inst.version.split('-', 1)[0]) > Version(tower_application_version) and not settings.DEBUG:
+        if Version(other_inst.version.split('-', 1)[0]) > Version(awx_application_version) and not settings.DEBUG:
             logger.error("Host {} reports version {}, but this node {} is at {}, shutting down".format(other_inst.hostname,
                                                                                                        other_inst.version,
-                                                                                                       inst.hostname,
-                                                                                                       inst.version))
+                                                                                                       this_inst.hostname,
+                                                                                                       this_inst.version))
-            # Set the capacity to zero to ensure no Jobs get added to this instance.
+            # Shutdown signal will set the capacity to zero to ensure no Jobs get added to this instance.
             # The heartbeat task will reset the capacity to the system capacity after upgrade.
-            inst.capacity = 0
-            inst.save()
-            stop_local_services(['uwsgi', 'celery', 'beat', 'callback', 'fact'])
+            stop_local_services(['uwsgi', 'celery', 'beat', 'callback'], communicate=False)
+            # We wait for the Popen call inside stop_local_services above
+            # so the line below will rarely if ever be executed.
             raise RuntimeError("Shutting down.")
+    for other_inst in lost_instances:
+        if other_inst.capacity == 0:
+            continue
+        try:
+            other_inst.capacity = 0
+            other_inst.save(update_fields=['capacity'])
+            logger.error("Host {} last checked in at {}, marked as lost.".format(
+                other_inst.hostname, other_inst.modified))
+        except DatabaseError as e:
+            if 'did not affect any rows' in str(e):
+                logger.debug('Another instance has marked {} as lost'.format(other_inst.hostname))
+            else:
+                logger.exception('Error marking {} as lost'.format(other_inst.hostname))
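The reworked heartbeat first partitions the cluster into three buckets: this node (refresh its own capacity and version), lost peers (zero their capacity so no jobs are routed to them), and healthy peers (check whether one of them runs a newer version). That partitioning step, as a sketch with dicts and a fixed grace window (the 120-second threshold below is an assumption for illustration; AWX derives the real cutoff from its settings):

```python
from datetime import datetime, timedelta

GRACE = timedelta(seconds=120)  # assumed lost-node threshold


def is_lost(inst, ref_time):
    """A node is lost when its last check-in is older than the grace window."""
    return inst["modified"] < ref_time - GRACE


def partition_cluster(instances, my_hostname, ref_time):
    """Split the instance list the way cluster_node_heartbeat does:
    (this node, lost peers to be zeroed out, healthy peers to
    version-check)."""
    this_inst, lost, peers = None, [], []
    for inst in instances:
        if inst["hostname"] == my_hostname:
            this_inst = inst
        elif is_lost(inst, ref_time):
            lost.append(inst)
        else:
            peers.append(inst)
    return this_inst, lost, peers


ref = datetime(2017, 8, 15, 12, 0, 0)
cluster = [
    {"hostname": "awx-1", "modified": ref},
    {"hostname": "awx-2", "modified": ref - timedelta(seconds=30)},
    {"hostname": "awx-3", "modified": ref - timedelta(seconds=600)},
]
me, lost, peers = partition_cluster(cluster, "awx-1", ref)
```

The `save(update_fields=['capacity'])` guard in the diff matters because every surviving node runs this loop: only the first writer succeeds, the rest see "did not affect any rows" and stand down.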
-@task(bind=True)
-def tower_isolated_heartbeat(self):
+@task(bind=True, base=LogErrorsTask)
+def awx_isolated_heartbeat(self):
     local_hostname = settings.CLUSTER_HOST_ID
     logger.debug("Controlling node checking for any isolated management tasks.")
     poll_interval = settings.AWX_ISOLATED_PERIODIC_CHECK
@@ -230,8 +283,8 @@ def tower_isolated_heartbeat(self):
     isolated_manager.IsolatedManager.health_check(isolated_instance_qs)
 
 
-@task(bind=True, queue='tower')
-def tower_periodic_scheduler(self):
+@task(bind=True, queue='tower', base=LogErrorsTask)
+def awx_periodic_scheduler(self):
     run_now = now()
     state = TowerScheduleState.get_solo()
     last_run = state.schedule_last_run
@@ -280,12 +333,12 @@ def _send_notification_templates(instance, status_str):
                                         job_id=instance.id)
 
 
-@task(bind=True, queue='tower')
+@task(bind=True, queue='tower', base=LogErrorsTask)
 def handle_work_success(self, result, task_actual):
     try:
         instance = UnifiedJob.get_instance_by_type(task_actual['type'], task_actual['id'])
     except ObjectDoesNotExist:
-        logger.warning('Missing job `{}` in success callback.'.format(task_actual['id']))
+        logger.warning('Missing {} `{}` in success callback.'.format(task_actual['type'], task_actual['id']))
         return
     if not instance:
         return
@@ -296,25 +349,28 @@ def handle_work_success(self, result, task_actual):
     run_job_complete.delay(instance.id)
 
 
-@task(bind=True, queue='tower')
+@task(bind=True, queue='tower', base=LogErrorsTask)
 def handle_work_error(self, task_id, subtasks=None):
-    print('Executing error task id %s, subtasks: %s' %
-          (str(self.request.id), str(subtasks)))
+    logger.debug('Executing error task id %s, subtasks: %s' % (str(self.request.id), str(subtasks)))
     first_instance = None
     first_instance_type = ''
     if subtasks is not None:
         for each_task in subtasks:
-            instance = UnifiedJob.get_instance_by_type(each_task['type'], each_task['id'])
-            if not instance:
-                # Unknown task type
-                logger.warn("Unknown task type: {}".format(each_task['type']))
-                continue
+            try:
+                instance = UnifiedJob.get_instance_by_type(each_task['type'], each_task['id'])
+                if not instance:
+                    # Unknown task type
                    logger.warn("Unknown task type: {}".format(each_task['type']))
+                    continue
+            except ObjectDoesNotExist:
+                logger.warning('Missing {} `{}` in error callback.'.format(each_task['type'], each_task['id']))
+                continue
 
             if first_instance is None:
                 first_instance = instance
                 first_instance_type = each_task['type']
 
-            if instance.celery_task_id != task_id:
+            if instance.celery_task_id != task_id and not instance.cancel_flag:
                 instance.status = 'failed'
                 instance.failed = True
                 if not instance.job_explanation:
@@ -336,7 +392,7 @@ def handle_work_error(self, task_id, subtasks=None):
         pass
 
 
-@task(queue='tower')
+@task(queue='tower', base=LogErrorsTask)
 def update_inventory_computed_fields(inventory_id, should_update_hosts=True):
     '''
     Signal handler and wrapper around inventory.update_computed_fields to
@@ -347,10 +403,16 @@ def update_inventory_computed_fields(inventory_id, should_update_hosts=True):
         logger.error("Update Inventory Computed Fields failed due to missing inventory: " + str(inventory_id))
         return
     i = i[0]
-    i.update_computed_fields(update_hosts=should_update_hosts)
+    try:
+        i.update_computed_fields(update_hosts=should_update_hosts)
+    except DatabaseError as e:
+        if 'did not affect any rows' in str(e):
+            logger.debug('Exiting duplicate update_inventory_computed_fields task.')
+            return
+        raise
 
 
-@task(queue='tower')
+@task(queue='tower', base=LogErrorsTask)
 def update_host_smart_inventory_memberships():
     try:
         with transaction.atomic():
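The `DatabaseError` guard added to `update_inventory_computed_fields` (and reused when marking lost nodes above) keys on Django's "did not affect any rows" message: it means another task already updated or deleted the row, so this duplicate run exits quietly, while every other database error still propagates. The pattern in isolation (a sketch; the exception class stands in for `django.db.DatabaseError`):

```python
class DatabaseError(Exception):
    """Stand-in for django.db.DatabaseError."""


def update_computed_fields(save):
    """Run the save; swallow only the 'did not affect any rows' race
    (another task beat us to the row) and re-raise everything else."""
    try:
        save()
    except DatabaseError as e:
        if 'did not affect any rows' in str(e):
            return False  # duplicate update: exit quietly
        raise
    return True


def raced_save():
    raise DatabaseError("Forced update did not affect any rows.")


def deadlock_save():
    raise DatabaseError("deadlock detected")
```

Matching on the message string is fragile by design choice here: Django raises the same exception type for both cases, so the message is the only discriminator available.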
@@ -366,10 +428,17 @@ def update_host_smart_inventory_memberships():
         return
 
 
-@task(queue='tower')
-def delete_inventory(inventory_id):
-    with ignore_inventory_computed_fields(), \
-            ignore_inventory_group_removal():
+@task(bind=True, queue='tower', base=LogErrorsTask, max_retries=5)
+def delete_inventory(self, inventory_id, user_id):
+    # Delete inventory as user
+    if user_id is None:
+        user = None
+    else:
+        try:
+            user = User.objects.get(id=user_id)
+        except:
+            user = None
+    with ignore_inventory_computed_fields(), ignore_inventory_group_removal(), impersonate(user):
         try:
             i = Inventory.objects.get(id=inventory_id)
             i.delete()
@@ -377,7 +446,10 @@ def delete_inventory(inventory_id):
                 'inventories-status_changed',
                 {'group_name': 'inventories', 'inventory_id': inventory_id, 'status': 'deleted'}
             )
-            logger.debug('Deleted inventory: %s' % inventory_id)
+            logger.debug('Deleted inventory %s as user %s.' % (inventory_id, user_id))
+        except OperationalError:
+            logger.warning('Database error deleting inventory {}, but will retry.'.format(inventory_id))
+            self.retry(countdown=10)
         except Inventory.DoesNotExist:
             logger.error("Delete Inventory failed due to missing inventory: " + str(inventory_id))
             return
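`delete_inventory` now retries transient database trouble via celery's `self.retry(countdown=10)` with `max_retries=5`, instead of failing outright. The retry discipline, without celery (a sketch: the exception class stands in for `django.db.OperationalError`, and a sleep stands in for celery re-enqueueing with a countdown):

```python
import time


class OperationalError(Exception):
    """Stand-in for django.db.OperationalError (transient connection trouble)."""


def run_with_retries(fn, max_retries=5, countdown=0):
    """Bounded-retry loop in the spirit of celery's self.retry():
    transient OperationalErrors are retried up to max_retries times,
    anything past that propagates to the caller."""
    attempts = 0
    while True:
        try:
            return fn()
        except OperationalError:
            attempts += 1
            if attempts > max_retries:
                raise
            time.sleep(countdown)  # celery would re-enqueue instead of sleeping


calls = {"n": 0}


def flaky_delete():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OperationalError("server closed the connection unexpectedly")
    return "deleted"
```

In real celery the task function returns to the worker between attempts, so the countdown does not block a worker process the way `time.sleep` would.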
@@ -401,7 +473,7 @@ def with_path_cleanup(f):
     return _wrapped
 
 
-class BaseTask(Task):
+class BaseTask(LogErrorsTask):
     name = None
     model = None
     abstract = True
@@ -472,7 +544,7 @@ class BaseTask(Task):
         '''
         Create a temporary directory for job-related files.
         '''
-        path = tempfile.mkdtemp(prefix='ansible_tower_%s_' % instance.pk, dir=settings.AWX_PROOT_BASE_PATH)
+        path = tempfile.mkdtemp(prefix='awx_%s_' % instance.pk, dir=settings.AWX_PROOT_BASE_PATH)
         os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
         self.cleanup_paths.append(path)
         return path
@@ -535,7 +607,7 @@ class BaseTask(Task):
             '': '',
         }
 
-    def add_ansible_venv(self, env, add_tower_lib=True):
+    def add_ansible_venv(self, env, add_awx_lib=True):
         env['VIRTUAL_ENV'] = settings.ANSIBLE_VENV_PATH
         env['PATH'] = os.path.join(settings.ANSIBLE_VENV_PATH, "bin") + ":" + env['PATH']
         venv_libdir = os.path.join(settings.ANSIBLE_VENV_PATH, "lib")
@@ -545,13 +617,13 @@ class BaseTask(Task):
                 env['PYTHONPATH'] = os.path.join(venv_libdir, python_ver, "site-packages") + ":"
                 break
         # Add awx/lib to PYTHONPATH.
-        if add_tower_lib:
+        if add_awx_lib:
             env['PYTHONPATH'] = env.get('PYTHONPATH', '') + self.get_path_to('..', 'lib') + ':'
         return env
 
-    def add_tower_venv(self, env):
-        env['VIRTUAL_ENV'] = settings.TOWER_VENV_PATH
-        env['PATH'] = os.path.join(settings.TOWER_VENV_PATH, "bin") + ":" + env['PATH']
+    def add_awx_venv(self, env):
+        env['VIRTUAL_ENV'] = settings.AWX_VENV_PATH
+        env['PATH'] = os.path.join(settings.AWX_VENV_PATH, "bin") + ":" + env['PATH']
         return env
 
     def build_env(self, instance, **kwargs):
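Both `add_ansible_venv` and the renamed `add_awx_venv` do the same thing for a child process that activating a virtualenv does for a shell: export `VIRTUAL_ENV` and prepend the venv's `bin` directory to `PATH`. In isolation (a sketch; the venv path used below is only an example, not a path the diff guarantees):

```python
import os


def add_venv(env, venv_path):
    """Point a child process at a virtualenv: export VIRTUAL_ENV and
    prepend the venv's bin directory to PATH, without sourcing activate."""
    env = dict(env)  # don't mutate the caller's mapping
    env["VIRTUAL_ENV"] = venv_path
    env["PATH"] = os.path.join(venv_path, "bin") + ":" + env.get("PATH", "")
    return env


env = add_venv({"PATH": "/usr/local/bin:/usr/bin"}, "/var/lib/awx/venv/awx")
```

Prepending (rather than appending) is what makes the venv's `python` and `ansible-playbook` win the `PATH` lookup over any system-wide copies.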
@@ -570,7 +642,7 @@ class BaseTask(Task):
         # callbacks to work.
         # Update PYTHONPATH to use local site-packages.
         # NOTE:
-        # Derived class should call add_ansible_venv() or add_tower_venv()
+        # Derived class should call add_ansible_venv() or add_awx_venv()
         if self.should_use_proot(instance, **kwargs):
             env['PROOT_TMP_DIR'] = settings.AWX_PROOT_BASE_PATH
         return env
@@ -606,7 +678,7 @@ class BaseTask(Task):
         # For isolated jobs, we have to interact w/ the REST API from the
         # controlling node and ship the static JSON inventory to the
         # isolated host (because the isolated host itself can't reach the
-        # Tower REST API to fetch the inventory).
+        # REST API to fetch the inventory).
         path = os.path.join(kwargs['private_data_dir'], 'inventory')
         if os.path.exists(path):
             return path
@@ -692,7 +764,7 @@ class BaseTask(Task):
         '''
         Run the job/task and capture its output.
         '''
-        instance = self.update_model(pk, status='running', celery_task_id='' if self.request.id is None else self.request.id)
+        instance = self.update_model(pk, status='running')
         instance.websocket_emit_status("running")
         status, rc, tb = 'error', None, ''
@@ -800,7 +872,7 @@ class BaseTask(Task):
             if status != 'canceled':
                 tb = traceback.format_exc()
                 if settings.DEBUG:
-                    logger.exception('exception occurred while running task')
+                    logger.exception('%s Exception occurred while running task', instance.log_format)
         finally:
             try:
                 stdout_handle.flush()
@@ -811,7 +883,7 @@ class BaseTask(Task):
         try:
             self.post_run_hook(instance, status, **kwargs)
         except Exception:
-            logger.exception('Post run hook of unified job {} errored.'.format(instance.pk))
+            logger.exception('{} Post run hook errored.'.format(instance.log_format))
         instance = self.update_model(pk)
         if instance.cancel_flag:
             status = 'canceled'
@@ -819,16 +891,19 @@ class BaseTask(Task):
         instance = self.update_model(pk, status=status, result_traceback=tb,
                                      output_replacements=output_replacements,
                                      **extra_update_fields)
-        self.final_run_hook(instance, status, **kwargs)
+        try:
+            self.final_run_hook(instance, status, **kwargs)
+        except:
+            logger.exception('%s Final run hook errored.', instance.log_format)
         instance.websocket_emit_status(status)
         if status != 'successful' and not hasattr(settings, 'CELERY_UNIT_TEST'):
             # Raising an exception will mark the job as 'failed' in celery
             # and will stop a task chain from continuing to execute
             if status == 'canceled':
-                raise Exception("Task %s(pk:%s) was canceled (rc=%s)" % (str(self.model.__class__), str(pk), str(rc)))
+                raise Exception("%s was canceled (rc=%s)" % (instance.log_format, str(rc)))
             else:
-                raise Exception("Task %s(pk:%s) encountered an error (rc=%s), please see task stdout for details." %
-                                (str(self.model.__class__), str(pk), str(rc)))
+                raise Exception("%s encountered an error (rc=%s), please see task stdout for details." %
+                                (instance.log_format, str(rc)))
 
         if not hasattr(settings, 'CELERY_UNIT_TEST'):
             self.signal_finished(pk)
@@ -926,7 +1001,7 @@ class RunJob(BaseTask):
         plugin_dirs.extend(settings.AWX_ANSIBLE_CALLBACK_PLUGINS)
         plugin_path = ':'.join(plugin_dirs)
         env = super(RunJob, self).build_env(job, **kwargs)
-        env = self.add_ansible_venv(env, add_tower_lib=kwargs.get('isolated', False))
+        env = self.add_ansible_venv(env, add_awx_lib=kwargs.get('isolated', False))
         # Set environment variables needed for inventory and job event
         # callbacks to work.
         env['JOB_ID'] = str(job.pk)
@@ -934,8 +1009,8 @@ class RunJob(BaseTask):
         if job.use_fact_cache and not kwargs.get('isolated'):
             env['ANSIBLE_LIBRARY'] = self.get_path_to('..', 'plugins', 'library')
             env['ANSIBLE_CACHE_PLUGINS'] = self.get_path_to('..', 'plugins', 'fact_caching')
-            env['ANSIBLE_CACHE_PLUGIN'] = "tower"
-            env['ANSIBLE_FACT_CACHE_TIMEOUT'] = str(settings.ANSIBLE_FACT_CACHE_TIMEOUT)
+            env['ANSIBLE_CACHE_PLUGIN'] = "awx"
+            env['ANSIBLE_CACHE_PLUGIN_TIMEOUT'] = str(settings.ANSIBLE_FACT_CACHE_TIMEOUT)
             env['ANSIBLE_CACHE_PLUGIN_CONNECTION'] = settings.CACHES['default']['LOCATION'] if 'LOCATION' in settings.CACHES['default'] else ''
         if job.project:
             env['PROJECT_REVISION'] = job.project.scm_revision
@@ -943,10 +1018,11 @@ class RunJob(BaseTask):
         env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
         if not kwargs.get('isolated'):
             env['ANSIBLE_CALLBACK_PLUGINS'] = plugin_path
-            env['ANSIBLE_STDOUT_CALLBACK'] = 'tower_display'
+            env['ANSIBLE_STDOUT_CALLBACK'] = 'awx_display'
             env['REST_API_URL'] = settings.INTERNAL_API_URL
             env['REST_API_TOKEN'] = job.task_auth_token or ''
             env['TOWER_HOST'] = settings.TOWER_URL_BASE
+            env['AWX_HOST'] = settings.TOWER_URL_BASE
             env['CALLBACK_QUEUE'] = settings.CALLBACK_QUEUE
             env['CALLBACK_CONNECTION'] = settings.BROKER_URL
         env['CACHE'] = settings.CACHES['default']['LOCATION'] if 'LOCATION' in settings.CACHES['default'] else ''
@@ -1067,24 +1143,31 @@ class RunJob(BaseTask):
         if job.start_at_task:
             args.append('--start-at-task=%s' % job.start_at_task)
 
-        # Define special extra_vars for Tower, combine with job.extra_vars.
+        # Define special extra_vars for AWX, combine with job.extra_vars.
         extra_vars = {
             'tower_job_id': job.pk,
             'tower_job_launch_type': job.launch_type,
+            'awx_job_id': job.pk,
+            'awx_job_launch_type': job.launch_type,
         }
         if job.project:
             extra_vars.update({
                 'tower_project_revision': job.project.scm_revision,
+                'awx_project_revision': job.project.scm_revision,
             })
         if job.job_template:
             extra_vars.update({
                 'tower_job_template_id': job.job_template.pk,
                 'tower_job_template_name': job.job_template.name,
+                'awx_job_template_id': job.job_template.pk,
+                'awx_job_template_name': job.job_template.name,
             })
         if job.created_by:
             extra_vars.update({
                 'tower_user_id': job.created_by.pk,
                 'tower_user_name': job.created_by.username,
+                'awx_user_id': job.created_by.pk,
+                'awx_user_name': job.created_by.username,
             })
         if job.extra_vars_dict:
             if kwargs.get('display', False) and job.job_template:
@@ -1115,18 +1198,9 @@ class RunJob(BaseTask):
         d = super(RunJob, self).get_password_prompts()
         d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
         d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
-        d[re.compile(r'sudo password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'SUDO password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'su password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'SU password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'PBRUN password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'pbrun password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'PFEXEC password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'pfexec password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'RUNAS password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'runas password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'DZDO password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'dzdo password.*:\s*?$', re.M)] = 'become_password'
+        for method in PRIVILEGE_ESCALATION_METHODS:
+            d[re.compile(r'%s password.*:\s*?$' % (method[0]), re.M)] = 'become_password'
+            d[re.compile(r'%s password.*:\s*?$' % (method[0].upper()), re.M)] = 'become_password'
         d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
         d[re.compile(r'Password:\s*?$', re.M)] = 'ssh_password'
         d[re.compile(r'Vault password:\s*?$', re.M)] = 'vault_password'
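The hunk above replaces a dozen hardcoded become-method regexes with a loop over `PRIVILEGE_ESCALATION_METHODS`. A rough reconstruction of that loop; in the real code the constant is a list of `(name, label)` tuples, and only a few assumed entries are shown here:

```python
# Sketch of the prompt-table loop; PRIVILEGE_ESCALATION_METHODS entries
# are assumed, not copied from the real constant.
import re

PRIVILEGE_ESCALATION_METHODS = [('sudo', 'Sudo'), ('su', 'Su'), ('pbrun', 'Pbrun')]


def password_prompts():
    d = {}
    for method in PRIVILEGE_ESCALATION_METHODS:
        # Match both lowercase and uppercase prompt spellings.
        d[re.compile(r'%s password.*:\s*?$' % (method[0]), re.M)] = 'become_password'
        d[re.compile(r'%s password.*:\s*?$' % (method[0].upper()), re.M)] = 'become_password'
    d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
    return d


prompts = password_prompts()
```

Adding a new escalation method now means extending the list rather than editing two more regex lines in two classes.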
@@ -1398,11 +1472,12 @@ class RunProjectUpdate(BaseTask):
     def get_stdout_handle(self, instance):
         stdout_handle = super(RunProjectUpdate, self).get_stdout_handle(instance)
+        pk = instance.pk
 
         def raw_callback(data):
-            instance_actual = ProjectUpdate.objects.get(pk=instance.pk)
-            instance_actual.result_stdout_text += data
-            instance_actual.save()
+            instance_actual = self.update_model(pk)
+            result_stdout_text = instance_actual.result_stdout_text + data
+            self.update_model(pk, result_stdout_text=result_stdout_text)
         return OutputEventFilter(stdout_handle, raw_callback=raw_callback)
 
     def _update_dependent_inventories(self, project_update, dependent_inventory_sources):
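The `get_stdout_handle` hunk above routes stdout persistence through `self.update_model` instead of fetching and saving the ORM object directly. The `OutputEventFilter` it returns is essentially a write-through wrapper; a minimal sketch of that idea (this `OutputFilter` is a stand-in, not AWX's class):

```python
# Sketch of a write-through output filter: forwards writes to the real
# handle and also feeds each chunk to a raw callback (which the diff
# uses to append to result_stdout_text via update_model).
import io


class OutputFilter:
    def __init__(self, handle, raw_callback=None):
        self._handle = handle
        self._raw_callback = raw_callback

    def write(self, data):
        self._handle.write(data)
        if self._raw_callback:
            self._raw_callback(data)


buffered = []
f = OutputFilter(io.StringIO(), raw_callback=buffered.append)
f.write('PLAY [all] ***\n')
# buffered == ['PLAY [all] ***\n']
```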
@@ -1421,7 +1496,7 @@ class RunProjectUpdate(BaseTask):
             if InventoryUpdate.objects.filter(inventory_source=inv_src,
                                               status__in=ACTIVE_STATES).exists():
                 logger.info('Skipping SCM inventory update for `{}` because '
-                            'another update is already active.'.format(inv.name))
+                            'another update is already active.'.format(inv_src.name))
                 continue
             local_inv_update = inv_src.create_inventory_update(
                 launch_type='scm',
@@ -1437,7 +1512,8 @@ class RunProjectUpdate(BaseTask):
                 task_instance.request.id = project_request_id
                 task_instance.run(local_inv_update.id)
             except Exception:
-                logger.exception('Encountered unhandled exception updating dependent SCM inventory sources.')
+                logger.exception('%s Unhandled exception updating dependent SCM inventory sources.',
+                                 project_update.log_format)
 
             try:
                 project_update.refresh_from_db()
             try:
                 local_inv_update.refresh_from_db()
             except InventoryUpdate.DoesNotExist:
-                logger.warning('Inventory update deleted during execution.')
+                logger.warning('%s Dependent inventory update deleted during execution.', project_update.log_format)
                 continue
             if project_update.cancel_flag or local_inv_update.cancel_flag:
                 if not project_update.cancel_flag:
@@ -1507,7 +1583,7 @@ class RunProjectUpdate(BaseTask):
             if lines:
                 p.scm_revision = lines[0].strip()
             else:
-                logger.info("Could not find scm revision in check")
+                logger.info("%s Could not find scm revision in check", instance.log_format)
             p.playbook_files = p.playbooks
             p.inventory_files = p.inventories
             p.save()
@@ -1605,8 +1681,7 @@ class RunInventoryUpdate(BaseTask):
             ec2_opts.setdefault('route53', 'False')
             ec2_opts.setdefault('all_instances', 'True')
             ec2_opts.setdefault('all_rds_instances', 'False')
-            # TODO: Include this option when boto3 support comes.
-            #ec2_opts.setdefault('include_rds_clusters', 'False')
+            ec2_opts.setdefault('include_rds_clusters', 'False')
             ec2_opts.setdefault('rds', 'False')
             ec2_opts.setdefault('nested_groups', 'True')
             ec2_opts.setdefault('elasticache', 'False')
@@ -1648,10 +1723,17 @@ class RunInventoryUpdate(BaseTask):
             section = 'foreman'
             cp.add_section(section)
 
+            group_patterns = '[]'
+            group_prefix = 'foreman_'
             foreman_opts = dict(inventory_update.source_vars_dict.items())
             foreman_opts.setdefault('ssl_verify', 'False')
             for k, v in foreman_opts.items():
-                cp.set(section, k, unicode(v))
+                if k == 'satellite6_group_patterns' and isinstance(v, basestring):
+                    group_patterns = v
+                elif k == 'satellite6_group_prefix' and isinstance(v, basestring):
+                    group_prefix = v
+                else:
+                    cp.set(section, k, unicode(v))
 
             credential = inventory_update.credential
             if credential:
@@ -1661,9 +1743,9 @@ class RunInventoryUpdate(BaseTask):
             section = 'ansible'
             cp.add_section(section)
-            cp.set(section, 'group_patterns', os.environ.get('SATELLITE6_GROUP_PATTERNS', []))
+            cp.set(section, 'group_patterns', group_patterns)
             cp.set(section, 'want_facts', True)
-            cp.set(section, 'group_prefix', os.environ.get('SATELLITE6_GROUP_PREFIX', 'foreman_'))
+            cp.set(section, 'group_prefix', group_prefix)
 
             section = 'cache'
             cp.add_section(section)
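The two Satellite 6 hunks above divert `satellite6_group_patterns` and `satellite6_group_prefix` from the `[foreman]` section into the `[ansible]` section, replacing the old environment-variable lookups. A Python 3 sketch of that partitioning (the diff itself is Python 2, so its `unicode`/`basestring` checks become `str` here):

```python
# Sketch: special satellite6_* keys feed the [ansible] section; all
# other source_vars are written verbatim under [foreman].
from configparser import ConfigParser


def build_foreman_ini(source_vars):
    cp = ConfigParser()
    cp.add_section('foreman')
    group_patterns, group_prefix = '[]', 'foreman_'  # defaults from the diff
    for k, v in source_vars.items():
        if k == 'satellite6_group_patterns' and isinstance(v, str):
            group_patterns = v
        elif k == 'satellite6_group_prefix' and isinstance(v, str):
            group_prefix = v
        else:
            cp.set('foreman', k, str(v))
    cp.add_section('ansible')
    cp.set('ansible', 'group_patterns', group_patterns)
    cp.set('ansible', 'group_prefix', group_prefix)
    cp.set('ansible', 'want_facts', 'True')  # configparser needs a str here
    return cp


cp = build_foreman_ini({'ssl_verify': 'False', 'satellite6_group_prefix': 'sat_'})
```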
@@ -1740,7 +1822,7 @@ class RunInventoryUpdate(BaseTask):
""" """
env = super(RunInventoryUpdate, self).build_env(inventory_update, env = super(RunInventoryUpdate, self).build_env(inventory_update,
**kwargs) **kwargs)
env = self.add_tower_venv(env) env = self.add_awx_venv(env)
# Pass inventory source ID to inventory script. # Pass inventory source ID to inventory script.
env['INVENTORY_SOURCE_ID'] = str(inventory_update.inventory_source_id) env['INVENTORY_SOURCE_ID'] = str(inventory_update.inventory_source_id)
env['INVENTORY_UPDATE_ID'] = str(inventory_update.pk) env['INVENTORY_UPDATE_ID'] = str(inventory_update.pk)
@@ -1750,7 +1832,7 @@ class RunInventoryUpdate(BaseTask):
         # These are set here and then read in by the various Ansible inventory
         # modules, which will actually do the inventory sync.
         #
-        # The inventory modules are vendored in Tower in the
+        # The inventory modules are vendored in AWX in the
         # `awx/plugins/inventory` directory; those files should be kept in
         # sync with those in Ansible core at all times.
         passwords = kwargs.get('passwords', {})
@@ -1784,7 +1866,7 @@ class RunInventoryUpdate(BaseTask):
             env['GCE_EMAIL'] = passwords.get('source_username', '')
             env['GCE_PROJECT'] = passwords.get('source_project', '')
             env['GCE_PEM_FILE_PATH'] = cloud_credential
-            env['GCE_ZONE'] = inventory_update.source_regions
+            env['GCE_ZONE'] = inventory_update.source_regions if inventory_update.source_regions != 'all' else ''
         elif inventory_update.source == 'openstack':
             env['OS_CLIENT_CONFIG_FILE'] = cloud_credential
         elif inventory_update.source == 'satellite6':
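The `GCE_ZONE` change above appears to treat `'all'` as the sentinel for "no region filter", which the GCE inventory script side expects as an empty string. In isolation:

```python
# Sketch of the GCE_ZONE mapping: 'all' (no region filter) becomes ''.
def gce_zone(source_regions):
    return source_regions if source_regions != 'all' else ''
```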
@@ -1810,7 +1892,7 @@ class RunInventoryUpdate(BaseTask):
         inventory = inventory_source.inventory
         # Piece together the initial command to run via. the shell.
-        args = ['tower-manage', 'inventory_import']
+        args = ['awx-manage', 'inventory_import']
         args.extend(['--inventory-id', str(inventory.pk)])
 
         # Add appropriate arguments for overwrite if the inventory_update
@@ -1822,9 +1904,9 @@ class RunInventoryUpdate(BaseTask):
         src = inventory_update.source
         # Add several options to the shell arguments based on the
-        # inventory-source-specific setting in the Tower configuration.
+        # inventory-source-specific setting in the AWX configuration.
         # These settings are "per-source"; it's entirely possible that
-        # they will be different between cloud providers if a Tower user
+        # they will be different between cloud providers if an AWX user
         # actively uses more than one.
         if getattr(settings, '%s_ENABLED_VAR' % src.upper(), False):
             args.extend(['--enabled-var',
@@ -1854,7 +1936,7 @@ class RunInventoryUpdate(BaseTask):
         elif src == 'scm':
             args.append(inventory_update.get_actual_source_path())
         elif src == 'custom':
-            runpath = tempfile.mkdtemp(prefix='ansible_tower_inventory_', dir=settings.AWX_PROOT_BASE_PATH)
+            runpath = tempfile.mkdtemp(prefix='awx_inventory_', dir=settings.AWX_PROOT_BASE_PATH)
             handle, path = tempfile.mkstemp(dir=runpath)
             f = os.fdopen(handle, 'w')
             if inventory_update.source_script is None:
@@ -1872,11 +1954,12 @@ class RunInventoryUpdate(BaseTask):
     def get_stdout_handle(self, instance):
         stdout_handle = super(RunInventoryUpdate, self).get_stdout_handle(instance)
+        pk = instance.pk
 
         def raw_callback(data):
-            instance_actual = InventoryUpdate.objects.get(pk=instance.pk)
-            instance_actual.result_stdout_text += data
-            instance_actual.save()
+            instance_actual = self.update_model(pk)
+            result_stdout_text = instance_actual.result_stdout_text + data
+            self.update_model(pk, result_stdout_text=result_stdout_text)
         return OutputEventFilter(stdout_handle, raw_callback=raw_callback)
 
     def build_cwd(self, inventory_update, **kwargs):
@@ -2064,18 +2147,9 @@ class RunAdHocCommand(BaseTask):
         d = super(RunAdHocCommand, self).get_password_prompts()
         d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
         d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
-        d[re.compile(r'sudo password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'SUDO password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'su password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'SU password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'PBRUN password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'pbrun password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'PFEXEC password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'pfexec password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'RUNAS password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'runas password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'DZDO password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'dzdo password.*:\s*?$', re.M)] = 'become_password'
+        for method in PRIVILEGE_ESCALATION_METHODS:
+            d[re.compile(r'%s password.*:\s*?$' % (method[0]), re.M)] = 'become_password'
+            d[re.compile(r'%s password.*:\s*?$' % (method[0].upper()), re.M)] = 'become_password'
         d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
         d[re.compile(r'Password:\s*?$', re.M)] = 'ssh_password'
         return d
@@ -2116,7 +2190,7 @@ class RunSystemJob(BaseTask):
     model = SystemJob
 
     def build_args(self, system_job, **kwargs):
-        args = ['tower-manage', system_job.job_type]
+        args = ['awx-manage', system_job.job_type]
         try:
             json_vars = json.loads(system_job.extra_vars)
             if 'days' in json_vars and system_job.job_type != 'cleanup_facts':
@@ -2132,23 +2206,24 @@ class RunSystemJob(BaseTask):
                 args.extend(['--older_than', str(json_vars['older_than'])])
             if 'granularity' in json_vars:
                 args.extend(['--granularity', str(json_vars['granularity'])])
-        except Exception as e:
-            logger.error("Failed to parse system job: " + str(e))
+        except Exception:
+            logger.exception("%s Failed to parse system job", instance.log_format)
         return args
 
     def get_stdout_handle(self, instance):
         stdout_handle = super(RunSystemJob, self).get_stdout_handle(instance)
+        pk = instance.pk
 
         def raw_callback(data):
-            instance_actual = SystemJob.objects.get(pk=instance.pk)
-            instance_actual.result_stdout_text += data
-            instance_actual.save()
+            instance_actual = self.update_model(pk)
+            result_stdout_text = instance_actual.result_stdout_text + data
+            self.update_model(pk, result_stdout_text=result_stdout_text)
         return OutputEventFilter(stdout_handle, raw_callback=raw_callback)
 
     def build_env(self, instance, **kwargs):
         env = super(RunSystemJob, self).build_env(instance,
                                                  **kwargs)
-        env = self.add_tower_venv(env)
+        env = self.add_awx_venv(env)
         return env
 
     def build_cwd(self, instance, **kwargs):
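The `RunSystemJob.build_args` hunks above translate the system job's `extra_vars` JSON into `awx-manage` CLI options, and treat unparsable input as non-fatal. A hedged sketch of that argument builder (function and parameter names here are stand-ins; the real method takes the `system_job` model instance):

```python
# Sketch: cleanup options come from the job's extra_vars JSON;
# bad JSON is swallowed (the real code logs via logger.exception).
import json


def build_cleanup_args(job_type, extra_vars):
    args = ['awx-manage', job_type]
    try:
        json_vars = json.loads(extra_vars)
        if 'days' in json_vars and job_type != 'cleanup_facts':
            args.extend(['--days', str(json_vars['days'])])
        if 'older_than' in json_vars:
            args.extend(['--older_than', str(json_vars['older_than'])])
    except Exception:
        pass  # unparsable extra_vars: run with the base arguments
    return args


build_cleanup_args('cleanup_jobs', '{"days": 30}')
# -> ['awx-manage', 'cleanup_jobs', '--days', '30']
```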


@@ -0,0 +1,724 @@
{
"toString": "$REDACTED$",
"isCheckingIn": false,
"system_id": "11111111-1111-1111-1111-111111111111",
"display_name": null,
"remote_branch": null,
"remote_leaf": null,
"account_number": "1111111",
"hostname": "$REDACTED$",
"parent_id": null,
"system_type_id": 105,
"last_check_in": "2017-07-21T07:07:29.000Z",
"stale_ack": false,
"type": "machine",
"product": "rhel",
"created_at": "2017-07-20T17:26:53.000Z",
"updated_at": "2017-07-21T07:07:29.000Z",
"unregistered_at": null,
"reports": [{
"details": {
"vulnerable_setting": "hosts: files dns myhostname",
"affected_package": "glibc-2.17-105.el7",
"error_key": "GLIBC_CVE_2015_7547"
},
"id": 955802695,
"rule_id": "CVE_2015_7547_glibc|GLIBC_CVE_2015_7547",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A critical security flaw in the <code>glibc</code> library was found. It allows an attacker to crash an application built against that library or, potentially, execute arbitrary code with privileges of the user running the application.</p>\n",
"generic_html": "<p>The <code>glibc</code> library is vulnerable to a stack-based buffer overflow security flaw. A remote attacker could create specially crafted DNS responses that could cause the <code>libresolv</code> part of the library, which performs dual A/AAAA DNS queries, to crash or potentially execute code with the permissions of the user running the library. The issue is only exposed when <code>libresolv</code> is called from the nss_dns NSS service module. This flaw is known as <a href=\"https://access.redhat.com/security/cve/CVE-2015-7547\">CVE-2015-7547</a>.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaw see <a href=\"https://access.redhat.com/security/cve/CVE-2015-7547\">CVE-2015-7547</a>.</li>\n<li>To learn how to upgrade packages, see &quot;<a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>&quot;</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat Products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "ERROR",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2015_7547_glibc|GLIBC_CVE_2015_7547",
"error_key": "GLIBC_CVE_2015_7547",
"plugin": "CVE_2015_7547_glibc",
"description": "Remote code execution vulnerability in libresolv via crafted DNS response (CVE-2015-7547)",
"summary": "A critical security flaw in the `glibc` library was found. It allows an attacker to crash an application built against that library or, potentially, execute arbitrary code with privileges of the user running the application.",
"generic": "The `glibc` library is vulnerable to a stack-based buffer overflow security flaw. A remote attacker could create specially crafted DNS responses that could cause the `libresolv` part of the library, which performs dual A/AAAA DNS queries, to crash or potentially execute code with the permissions of the user running the library. The issue is only exposed when `libresolv` is called from the nss_dns NSS service module. This flaw is known as [CVE-2015-7547](https://access.redhat.com/security/cve/CVE-2015-7547).",
"reason": "<p>This host is vulnerable because it has vulnerable package <strong>glibc-2.17-105.el7</strong> installed and DNS is enabled in <code>/etc/nsswitch.conf</code>:</p>\n<pre><code>hosts: files dns myhostname\n</code></pre><p>The <code>glibc</code> library is vulnerable to a stack-based buffer overflow security flaw. A remote attacker could create specially crafted DNS responses that could cause the <code>libresolv</code> part of the library, which performs dual A/AAAA DNS queries, to crash or potentially execute code with the permissions of the user running the library. The issue is only exposed when <code>libresolv</code> is called from the nss_dns NSS service module. This flaw is known as <a href=\"https://access.redhat.com/security/cve/CVE-2015-7547\">CVE-2015-7547</a>.</p>\n",
"type": null,
"more_info": "* For more information about the flaw see [CVE-2015-7547](https://access.redhat.com/security/cve/CVE-2015-7547).\n* To learn how to upgrade packages, see \"[What is yum and how do I use it?](https://access.redhat.com/solutions/9934)\"\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat Products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).",
"active": true,
"node_id": "2168451",
"category": "Security",
"retired": false,
"reboot_required": false,
"publish_date": "2016-10-31T04:08:35.000Z",
"rec_impact": 4,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends updating <code>glibc</code> and restarting the affected system:</p>\n<pre><code># yum update glibc\n# reboot\n</code></pre><p>Alternatively, you can restart all affected services, but because this vulnerability affects a large amount of applications on the system, the best solution is to restart the system.</p>\n"
},
"maintenance_actions": [{
"done": false,
"id": 305205,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 305955,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"affected_kernel": "3.10.0-327.el7",
"error_key": "KERNEL_CVE-2016-0728"
},
"id": 955802705,
"rule_id": "CVE_2016_0728_kernel|KERNEL_CVE-2016-0728",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A vulnerability in the Linux kernel allowing local privilege escalation was discovered. The issue was reported as <a href=\"https://access.redhat.com/security/cve/cve-2016-0728\">CVE-2016-0728</a>.</p>\n",
"generic_html": "<p>A vulnerability in the Linux kernel rated <strong>Important</strong> was discovered. The use-after-free flaw relates to the way the Linux kernel&#39;s key management subsystem handles keyring object reference counting in certain error paths of the join_session_keyring() function. A local, unprivileged user could use this flaw to escalate their privileges on the system. The issue was reported as <a href=\"https://access.redhat.com/security/cve/cve-2016-0728\">CVE-2016-0728</a>.</p>\n<p>Red Hat recommends that you update the kernel and reboot the system. If you cannot reboot now, consider applying the <a href=\"https://bugzilla.redhat.com/attachment.cgi?id=1116284&amp;action=edit\">systemtap patch</a> to update your running kernel.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaws and versions of the package that are vulnerable see <a href=\"https://access.redhat.com/security/cve/cve-2016-0728\">CVE-2016-0728</a>.</li>\n<li>To learn how to upgrade packages, see &quot;<a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>&quot;</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat Products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "WARN",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2016_0728_kernel|KERNEL_CVE-2016-0728",
"error_key": "KERNEL_CVE-2016-0728",
"plugin": "CVE_2016_0728_kernel",
"description": "Kernel key management subsystem vulnerable to local privilege escalation (CVE-2016-0728)",
"summary": "A vulnerability in the Linux kernel allowing local privilege escalation was discovered. The issue was reported as [CVE-2016-0728](https://access.redhat.com/security/cve/cve-2016-0728).",
"generic": "A vulnerability in the Linux kernel rated **Important** was discovered. The use-after-free flaw relates to the way the Linux kernel's key management subsystem handles keyring object reference counting in certain error paths of the join_session_keyring() function. A local, unprivileged user could use this flaw to escalate their privileges on the system. The issue was reported as [CVE-2016-0728](https://access.redhat.com/security/cve/cve-2016-0728).\n\nRed Hat recommends that you update the kernel and reboot the system. If you cannot reboot now, consider applying the [systemtap patch](https://bugzilla.redhat.com/attachment.cgi?id=1116284&action=edit) to update your running kernel.",
"reason": "<p>A vulnerability in the Linux kernel rated <strong>Important</strong> was discovered. The use-after-free flaw relates to the way the Linux kernel&#39;s key management subsystem handles keyring object reference counting in certain error paths of the join_session_keyring() function. A local, unprivileged user could use this flaw to escalate their privileges on the system. The issue was reported as <a href=\"https://access.redhat.com/security/cve/cve-2016-0728\">CVE-2016-0728</a>.</p>\n<p>The host is vulnerable as it is running <strong>kernel-3.10.0-327.el7</strong>.</p>\n",
"type": null,
"more_info": "* For more information about the flaws and versions of the package that are vulnerable see [CVE-2016-0728](https://access.redhat.com/security/cve/cve-2016-0728).\n* To learn how to upgrade packages, see \"[What is yum and how do I use it?](https://access.redhat.com/solutions/9934)\"\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat Products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).",
"active": true,
"node_id": "2130791",
"category": "Security",
"retired": false,
"reboot_required": false,
"publish_date": "2016-10-31T04:08:37.000Z",
"rec_impact": 2,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends that you update <code>kernel</code> and reboot. If you cannot reboot now, consider applying the <a href=\"https://bugzilla.redhat.com/attachment.cgi?id=1116284&amp;action=edit\">systemtap patch</a> to update your running kernel.</p>\n<pre><code># yum update kernel\n# reboot\n-or-\n# debuginfo-install kernel (or equivalent)\n# stap -vgt -Gfix_p=1 -Gtrace_p=0 cve20160728e.stp\n</code></pre>"
},
"maintenance_actions": [{
"done": false,
"id": 305215,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 306205,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"processes_listening_int": [
["neutron-o", "127.0.0.1", "6633"],
["ovsdb-ser", "127.0.0.1", "6640"]
],
"processes_listening_ext": [
["CPU", "0.0.0.0", "5900"],
["libvirtd", "", "::16509"],
["master", "", ":1:25"],
["qemu-kvm", "0.0.0.0", "5900"],
["vnc_worke", "0.0.0.0", "5900"],
["worker", "0.0.0.0", "5900"]
],
"error_key": "OPENSSL_CVE_2016_0800_DROWN_LISTENING",
"processes_listening": [
["CPU", "0.0.0.0", "5900"],
["libvirtd", "", "::16509"],
["master", "", ":1:25"],
["neutron-o", "127.0.0.1", "6633"],
["ovsdb-ser", "127.0.0.1", "6640"],
["qemu-kvm", "0.0.0.0", "5900"],
["vnc_worke", "0.0.0.0", "5900"],
["worker", "0.0.0.0", "5900"]
],
"processes_names": ["/usr/bin/", "CPU", "ceilomete", "gmain", "handler6", "libvirtd", "master", "neutron-o", "neutron-r", "nova-comp", "ovs-vswit", "ovsdb-cli", "ovsdb-ser", "pickup", "privsep-h", "qemu-kvm", "qmgr", "redhat-ac", "revalidat", "tuned", "urcu3", "virtlogd", "vnc_worke", "worker"],
"vulnerable_package": "openssl-libs-1.0.1e-42.el7_1.9"
},
"id": 955802715,
"rule_id": "CVE_2016_0800_openssl_drown|OPENSSL_CVE_2016_0800_DROWN_LISTENING",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A new cross-protocol attack against SSLv2 protocol has been found. It has been assigned <a href=\"https://access.redhat.com/security/cve/CVE-2016-0800\">CVE-2016-0800</a> and is referred to as DROWN - Decrypting RSA using Obsolete and Weakened eNcryption. An attacker can decrypt passively collected TLS sessions between up-to-date client and server which supports SSLv2.</p>\n",
"generic_html": "<p>A new cross-protocol attack against a vulnerability in the SSLv2 protocol has been found. It can be used to passively decrypt collected TLS/SSL sessions from any connection that used an RSA key exchange cypher suite on a server that supports SSLv2. Even if a given service does not support SSLv2 the connection is still vulnerable if another service does and shares the same RSA private key.</p>\n<p>A more efficient variant of the attack exists against unpatched OpenSSL servers using versions that predate security advisories released on March 19, 2015 (see <a href=\"https://access.redhat.com/security/cve/CVE-2015-0293\">CVE-2015-0293</a>).</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaw see <a href=\"https://access.redhat.com/security/cve/CVE-2016-0800\">CVE-2016-0800</a></li>\n<li>To learn how to upgrade packages, see &quot;<a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>&quot;</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat Products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "ERROR",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2016_0800_openssl_drown|OPENSSL_CVE_2016_0800_DROWN_LISTENING",
"error_key": "OPENSSL_CVE_2016_0800_DROWN_LISTENING",
"plugin": "CVE_2016_0800_openssl_drown",
"description": "OpenSSL with externally listening processes vulnerable to session decryption (CVE-2016-0800/DROWN)",
"summary": "A new cross-protocol attack against SSLv2 protocol has been found. It has been assigned [CVE-2016-0800](https://access.redhat.com/security/cve/CVE-2016-0800) and is referred to as DROWN - Decrypting RSA using Obsolete and Weakened eNcryption. An attacker can decrypt passively collected TLS sessions between up-to-date client and server which supports SSLv2.",
"generic": "A new cross-protocol attack against a vulnerability in the SSLv2 protocol has been found. It can be used to passively decrypt collected TLS/SSL sessions from any connection that used an RSA key exchange cypher suite on a server that supports SSLv2. Even if a given service does not support SSLv2 the connection is still vulnerable if another service does and shares the same RSA private key.\n\nA more efficient variant of the attack exists against unpatched OpenSSL servers using versions that predate security advisories released on March 19, 2015 (see [CVE-2015-0293](https://access.redhat.com/security/cve/CVE-2015-0293)).",
"reason": "<p>This host is vulnerable because it has vulnerable package <strong>openssl-libs-1.0.1e-42.el7_1.9</strong> installed.</p>\n<p>It also runs the following processes that use OpenSSL libraries:</p>\n<ul class=\"pre-code\"><li>/usr/bin/</li><li>CPU</li><li>ceilomete</li><li>gmain</li><li>handler6</li><li>libvirtd</li><li>master</li><li>neutron-o</li><li>neutron-r</li><li>nova-comp</li><li>ovs-vswit</li><li>ovsdb-cli</li><li>ovsdb-ser</li><li>pickup</li><li>privsep-h</li><li>qemu-kvm</li><li>qmgr</li><li>redhat-ac</li><li>revalidat</li><li>tuned</li><li>urcu3</li><li>virtlogd</li><li>vnc_worke</li><li>worker</li></ul>\n\n\n\n\n<p>The following processes that use OpenSSL libraries are listening on the sockets bound to public IP addresses:</p>\n<ul class=\"pre-code\"><li>CPU (0.0.0.0)</li><li>libvirtd ()</li><li>master ()</li><li>qemu-kvm (0.0.0.0)</li><li>vnc_worke (0.0.0.0)</li><li>worker (0.0.0.0)</li></ul>\n\n\n\n\n\n\n\n\n<p>A new cross-protocol attack against a vulnerability in the SSLv2 protocol has been found. It can be used to passively decrypt collected TLS/SSL sessions from any connection that used an RSA key exchange cypher suite on a server that supports SSLv2. Even if a given service does not support SSLv2 the connection is still vulnerable if another service does and shares the same RSA private key.</p>\n<p>A more efficient variant of the attack exists against unpatched OpenSSL servers using versions that predate security advisories released on March 19, 2015 (see <a href=\"https://access.redhat.com/security/cve/CVE-2015-0293\">CVE-2015-0293</a>).</p>\n",
"type": null,
"more_info": "* For more information about the flaw see [CVE-2016-0800](https://access.redhat.com/security/cve/CVE-2016-0800)\n* To learn how to upgrade packages, see \"[What is yum and how do I use it?](https://access.redhat.com/solutions/9934)\"\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat Products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).",
"active": true,
"node_id": "2174451",
"category": "Security",
"retired": false,
"reboot_required": false,
"publish_date": "2016-10-31T04:08:33.000Z",
"rec_impact": 3,
"rec_likelihood": 4,
"resolution": "<p>Red Hat recommends that you update <code>openssl</code> and restart the affected system:</p>\n<pre><code># yum update openssl\n# reboot\n</code></pre><p>Alternatively, you can restart all affected services (that is, the ones linked to the openssl library), especially those listening on public IP addresses.</p>\n"
},
"maintenance_actions": [{
"done": false,
"id": 305225,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 306435,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"vulnerable_kernel": "3.10.0-327.el7",
"package_name": "kernel",
"error_key": "KERNEL_CVE_2016_5195_2"
},
"id": 955802725,
"rule_id": "CVE_2016_5195_kernel|KERNEL_CVE_2016_5195_2",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A flaw was found in the Linux kernel&#39;s memory subsystem. An unprivileged local user could use this flaw to write to files they would normally only have read-only access to and thus increase their privileges on the system.</p>\n",
"generic_html": "<p>A race condition was found in the way Linux kernel&#39;s memory subsystem handled breakage of the read only shared mappings COW situation on write access. An unprivileged local user could use this flaw to write to files they should normally have read-only access to, and thus increase their privileges on the system.</p>\n<p>A process that is able to mmap a file is able to race Copy on Write (COW) page creation (within get_user_pages) with madvise(MADV_DONTNEED) kernel system calls. This would allow modified pages to bypass the page protection mechanism and modify the mapped file. The vulnerability could be abused by allowing an attacker to modify existing setuid files with instructions to elevate permissions. This attack has been found in the wild. </p>\n<p>Red Hat recommends that you update the kernel package.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaw see <a href=\"https://access.redhat.com/security/cve/CVE-2016-5195\">CVE-2016-5195</a></li>\n<li>To learn how to upgrade packages, see &quot;<a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>&quot;</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat Products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "WARN",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2016_5195_kernel|KERNEL_CVE_2016_5195_2",
"error_key": "KERNEL_CVE_2016_5195_2",
"plugin": "CVE_2016_5195_kernel",
"description": "Kernel vulnerable to privilege escalation via permission bypass (CVE-2016-5195)",
"summary": "A flaw was found in the Linux kernel's memory subsystem. An unprivileged local user could use this flaw to write to files they would normally only have read-only access to and thus increase their privileges on the system.",
"generic": "A race condition was found in the way Linux kernel's memory subsystem handled breakage of the read only shared mappings COW situation on write access. An unprivileged local user could use this flaw to write to files they should normally have read-only access to, and thus increase their privileges on the system.\n\nA process that is able to mmap a file is able to race Copy on Write (COW) page creation (within get_user_pages) with madvise(MADV_DONTNEED) kernel system calls. This would allow modified pages to bypass the page protection mechanism and modify the mapped file. The vulnerability could be abused by allowing an attacker to modify existing setuid files with instructions to elevate permissions. This attack has been found in the wild. \n\nRed Hat recommends that you update the kernel package.\n",
"reason": "<p>A flaw was found in the Linux kernel&#39;s memory subsystem. An unprivileged local user could use this flaw to write to files they would normally have read-only access to and thus increase their privileges on the system.</p>\n<p>This host is affected because it is running kernel <strong>3.10.0-327.el7</strong>. </p>\n",
"type": null,
"more_info": "* For more information about the flaw see [CVE-2016-5195](https://access.redhat.com/security/cve/CVE-2016-5195)\n* To learn how to upgrade packages, see \"[What is yum and how do I use it?](https://access.redhat.com/solutions/9934)\"\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat Products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).",
"active": true,
"node_id": "2706661",
"category": "Security",
"retired": false,
"reboot_required": true,
"publish_date": "2016-10-31T04:08:33.000Z",
"rec_impact": 2,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends that you update the <code>kernel</code> package and restart the system:</p>\n<pre><code># yum update kernel\n# reboot\n</code></pre>"
},
"maintenance_actions": [{
"done": false,
"id": 305235,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 306705,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"mitigation_conf": "no",
"sysctl_live_ack_limit": "100",
"package_name": "kernel",
"sysctl_live_ack_limit_line": "net.ipv4.tcp_challenge_ack_limit = 100",
"error_key": "KERNEL_CVE_2016_5696_URGENT",
"vulnerable_kernel": "3.10.0-327.el7",
"sysctl_conf_ack_limit": "100",
"sysctl_conf_ack_limit_line": "net.ipv4.tcp_challenge_ack_limit = 100 # Implicit default",
"mitigation_live": "no"
},
"id": 955802735,
"rule_id": "CVE_2016_5696_kernel|KERNEL_CVE_2016_5696_URGENT",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A flaw in the Linux kernel&#39;s TCP/IP networking subsystem implementation of the <a href=\"https://tools.ietf.org/html/rfc5961\">RFC 5961</a> challenge ACK rate limiting was found that could allow an attacker located on different subnet to inject or take over a TCP connection between a server and client without needing to use a traditional man-in-the-middle (MITM) attack.</p>\n",
"generic_html": "<p>A flaw was found in the implementation of the Linux kernel&#39;s handling of networking challenge ack &#40;<a href=\"https://tools.ietf.org/html/rfc5961\">RFC 5961</a>&#41; where an attacker is able to determine the\nshared counter. This flaw allows an attacker located on different subnet to inject or take over a TCP connection between a server and client without needing to use a traditional man-in-the-middle (MITM) attack. </p>\n<p>Red Hat recommends that you update the kernel package or apply mitigations.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaw see <a href=\"https://access.redhat.com/security/cve/CVE-2016-5696\">CVE-2016-5696</a></li>\n<li>To learn how to upgrade packages, see &quot;<a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>&quot;</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat Products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "ERROR",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2016_5696_kernel|KERNEL_CVE_2016_5696_URGENT",
"error_key": "KERNEL_CVE_2016_5696_URGENT",
"plugin": "CVE_2016_5696_kernel",
"description": "Kernel vulnerable to man-in-the-middle via payload injection",
"summary": "A flaw in the Linux kernel's TCP/IP networking subsystem implementation of the [RFC 5961](https://tools.ietf.org/html/rfc5961) challenge ACK rate limiting was found that could allow an attacker located on different subnet to inject or take over a TCP connection between a server and client without needing to use a traditional man-in-the-middle (MITM) attack.",
"generic": "A flaw was found in the implementation of the Linux kernel's handling of networking challenge ack &#40;[RFC 5961](https://tools.ietf.org/html/rfc5961)&#41; where an attacker is able to determine the\nshared counter. This flaw allows an attacker located on different subnet to inject or take over a TCP connection between a server and client without needing to use a traditional man-in-the-middle (MITM) attack. \n\nRed Hat recommends that you update the kernel package or apply mitigations.",
"reason": "<p>A flaw was found in the implementation of the Linux kernel&#39;s handling of networking challenge ack &#40;<a href=\"https://tools.ietf.org/html/rfc5961\">RFC 5961</a>&#41; where an attacker is able to determine the\nshared counter. This flaw allows an attacker located on different subnet to inject or take over a TCP connection between a server and client without needing to use a traditional man-in-the-middle (MITM) attack.</p>\n<p>This host is affected because it is running kernel <strong>3.10.0-327.el7</strong>. </p>\n<p>Your currently loaded kernel configuration contains this setting: </p>\n<pre><code>net.ipv4.tcp_challenge_ack_limit = 100\n</code></pre><p>Your currently stored kernel configuration is: </p>\n<pre><code>net.ipv4.tcp_challenge_ack_limit = 100 # Implicit default\n</code></pre><p>There is currently no mitigation applied and your system is vulnerable.</p>\n",
"type": null,
"more_info": "* For more information about the flaw see [CVE-2016-5696](https://access.redhat.com/security/cve/CVE-2016-5696)\n* To learn how to upgrade packages, see \"[What is yum and how do I use it?](https://access.redhat.com/solutions/9934)\"\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat Products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).",
"active": true,
"node_id": "2438571",
"category": "Security",
"retired": false,
"reboot_required": false,
"publish_date": "2016-10-31T04:08:32.000Z",
"rec_impact": 4,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends that you update the <code>kernel</code> package and restart the system:</p>\n<pre><code># yum update kernel\n# reboot\n</code></pre><p><strong>or</strong></p>\n<p>Alternatively, this issue can be addressed by applying the following mitigations until the machine is restarted with the updated kernel package.</p>\n<p>Edit <code>/etc/sysctl.conf</code> file as root, add the mitigation configuration, and reload the kernel configuration:</p>\n<pre><code># echo &quot;net.ipv4.tcp_challenge_ack_limit = 2147483647&quot; &gt;&gt; /etc/sysctl.conf \n# sysctl -p\n</code></pre>"
},
"maintenance_actions": [{
"done": false,
"id": 305245,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 306975,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 316055,
"maintenance_plan": {
"maintenance_id": 30575,
"name": "Fix the problem",
"description": null,
"start": null,
"end": null,
"created_by": "asdavis@redhat.com",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"kernel_left_fully_exploitable": true,
"vulnerable_kernel_version_release": "3.10.0-327.el7",
"kernel_kpatch_applied": false,
"kernel_vulnerable": true,
"glibc_left_fully_exploitable": true,
"vulnerable_glibc": {
"PACKAGE_NAMES": ["glibc"],
"PACKAGES": ["glibc-2.17-105.el7"]
},
"kernel_stap_applied": false,
"error_key": "CVE_2017_1000364_KERNEL_CVE_2017_1000366_GLIBC_EXPLOITABLE",
"vulnerable_kernel_name": "kernel",
"nothing_left_fully_exploitable": false,
"glibc_vulnerable": true
},
"id": 955802745,
"rule_id": "CVE_2017_1000366_glibc|CVE_2017_1000364_KERNEL_CVE_2017_1000366_GLIBC_EXPLOITABLE",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A flaw was found in the way memory is being allocated on the stack for user space binaries. It has been assigned <a href=\"https://access.redhat.com/security/cve/CVE-2017-1000364\">CVE-2017-1000364</a> and <a href=\"https://access.redhat.com/security/cve/CVE-2017-1000366\">CVE-2017-1000366</a>. An unprivileged local user can use this flaw to execute arbitrary code as root and increase their privileges on the system.</p>\n",
"generic_html": "<p>A flaw was found in the way memory is being allocated on the stack for user space binaries. It has been assigned CVE-2017-1000364 and CVE-2017-1000366. An unprivileged local user can use this flaw to execute arbitrary code as root and increase their privileges on the system.</p>\n<p>If heap and stack memory regions are adjacent to each other, an attacker can use this flaw to jump over the heap/stack gap, cause controlled memory corruption on process stack or heap, and thus increase their privileges on the system. </p>\n<p>An attacker must have access to a local account on the system.</p>\n<p>Red Hat recommends that you update the kernel and glibc.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaw, see <a href=\"https://access.redhat.com/security/vulnerabilities/stackguard\">the vulnerability article</a> and <a href=\"https://access.redhat.com/security/cve/CVE-2017-1000364\">CVE-2017-1000364</a> and <a href=\"https://access.redhat.com/security/cve/CVE-2017-1000366\">CVE-2017-1000366</a>.</li>\n<li>To learn how to upgrade packages, see <a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>.</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "WARN",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2017_1000366_glibc|CVE_2017_1000364_KERNEL_CVE_2017_1000366_GLIBC_EXPLOITABLE",
"error_key": "CVE_2017_1000364_KERNEL_CVE_2017_1000366_GLIBC_EXPLOITABLE",
"plugin": "CVE_2017_1000366_glibc",
"description": "Kernel and glibc vulnerable to local privilege escalation via stack and heap memory clash (CVE-2017-1000364 and CVE-2017-1000366)",
"summary": "A flaw was found in the way memory is being allocated on the stack for user space binaries. It has been assigned [CVE-2017-1000364](https://access.redhat.com/security/cve/CVE-2017-1000364) and [CVE-2017-1000366](https://access.redhat.com/security/cve/CVE-2017-1000366). An unprivileged local user can use this flaw to execute arbitrary code as root and increase their privileges on the system.\n",
"generic": "A flaw was found in the way memory is being allocated on the stack for user space binaries. It has been assigned CVE-2017-1000364 and CVE-2017-1000366. An unprivileged local user can use this flaw to execute arbitrary code as root and increase their privileges on the system.\n\nIf heap and stack memory regions are adjacent to each other, an attacker can use this flaw to jump over the heap/stack gap, cause controlled memory corruption on process stack or heap, and thus increase their privileges on the system. \n\nAn attacker must have access to a local account on the system.\n\nRed Hat recommends that you update the kernel and glibc.\n",
"reason": "<p>A flaw was found in kernel and glibc in the way memory is being allocated on the stack for user space binaries.</p>\n<p>The host is affected because it is running <strong>kernel-3.10.0-327.el7</strong> and using <strong>glibc-2.17-105.el7</strong>.</p>\n",
"type": null,
"more_info": "* For more information about the flaw, see [the vulnerability article](https://access.redhat.com/security/vulnerabilities/stackguard) and [CVE-2017-1000364](https://access.redhat.com/security/cve/CVE-2017-1000364) and [CVE-2017-1000366](https://access.redhat.com/security/cve/CVE-2017-1000366).\n* To learn how to upgrade packages, see [What is yum and how do I use it?](https://access.redhat.com/solutions/9934).\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).\n",
"active": true,
"node_id": null,
"category": "Security",
"retired": false,
"reboot_required": true,
"publish_date": "2017-06-19T15:00:00.000Z",
"rec_impact": 2,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends updating the <code>kernel</code> and <code>glibc</code> packages and rebooting the system.</p>\n<pre><code># yum update kernel glibc\n# reboot\n</code></pre>"
},
"maintenance_actions": [{
"done": false,
"id": 305255,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 307415,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"PACKAGE_NAMES": ["sudo"],
"PACKAGES": ["sudo-1.8.6p7-16.el7"],
"error_key": "CVE_2017_1000367_SUDO"
},
"id": 955802755,
"rule_id": "CVE_2017_1000367_sudo|CVE_2017_1000367_SUDO",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A local privilege escalation flaw was found in <code>sudo</code>. A local user having sudo access on the system,\ncould use this flaw to execute arbitrary commands as root. This issue was reported as\n<a href=\"https://access.redhat.com/security/cve/CVE-2017-1000367\">CVE-2017-1000367</a></p>\n",
"generic_html": "<p>A local privilege escalation flaw was found in <code>sudo</code>. All versions of sudo package shipped with RHEL 5, 6 and 7 are vulnerable\nto a local privilege escalation vulnerability. A flaw was found in the way <code>get_process_ttyname()</code> function obtained\ninformation about the controlling terminal of the sudo process from the status file in the proc filesystem.\nThis allows a local user who has any level of sudo access on the system to execute arbitrary commands as root or\nin certain conditions escalate his privileges to root.</p>\n<p>Red Hat recommends that you update update the <code>sudo</code> package.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the remote code execution flaw <a href=\"https://access.redhat.com/security/cve/CVE-2017-1000367\">CVE-2017-1000367</a> see <a href=\"https://access.redhat.com/security/vulnerabilities/3059071\">knowledge base article</a>.</li>\n<li>To learn how to upgrade packages, see &quot;<a href=\"https://access.redhat.com/solutions/9934\">What is yum and how do I use it?</a>&quot;</li>\n<li>To better understand <a href=\"https://www.sudo.ws/\">sudo</a>, see <a href=\"https://www.sudo.ws/intro.html\">Sudo in a Nutshell</a></li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat Products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "WARN",
"ansible": true,
"ansible_fix": true,
"ansible_mitigation": false,
"rule_id": "CVE_2017_1000367_sudo|CVE_2017_1000367_SUDO",
"error_key": "CVE_2017_1000367_SUDO",
"plugin": "CVE_2017_1000367_sudo",
"description": "sudo vulnerable to local privilege escalation via process TTY name parsing (CVE-2017-1000367)",
"summary": "A local privilege escalation flaw was found in `sudo`. A local user having sudo access on the system,\ncould use this flaw to execute arbitrary commands as root. This issue was reported as\n[CVE-2017-1000367](https://access.redhat.com/security/cve/CVE-2017-1000367)",
"generic": "A local privilege escalation flaw was found in `sudo`. All versions of sudo package shipped with RHEL 5, 6 and 7 are vulnerable\nto a local privilege escalation vulnerability. A flaw was found in the way `get_process_ttyname()` function obtained\ninformation about the controlling terminal of the sudo process from the status file in the proc filesystem.\nThis allows a local user who has any level of sudo access on the system to execute arbitrary commands as root or\nin certain conditions escalate his privileges to root.\n\nRed Hat recommends that you update update the `sudo` package.\n",
"reason": "<p>This machine is vulnerable because it has vulnerable <code>sudo</code> package <strong>sudo-1.8.6p7-16.el7</strong> installed.</p>\n",
"type": null,
"more_info": "* For more information about the remote code execution flaw [CVE-2017-1000367](https://access.redhat.com/security/cve/CVE-2017-1000367) see [knowledge base article](https://access.redhat.com/security/vulnerabilities/3059071).\n* To learn how to upgrade packages, see \"[What is yum and how do I use it?](https://access.redhat.com/solutions/9934)\"\n* To better understand [sudo](https://www.sudo.ws/), see [Sudo in a Nutshell](https://www.sudo.ws/intro.html)\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat Products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).\n",
"active": true,
"node_id": "3059071",
"category": "Security",
"retired": false,
"reboot_required": false,
"publish_date": "2017-05-30T13:30:00.000Z",
"rec_impact": 2,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends that you update the <code>sudo</code> package.</p>\n<pre><code># yum update sudo\n</code></pre>"
},
"maintenance_actions": [{
"done": false,
"id": 305265,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 308075,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"mod_loading_disabled": false,
"package_name": "kernel",
"error_key": "KERNEL_CVE_2017_2636",
"vulnerable_kernel": "3.10.0-327.el7",
"mod_loaded": false,
"mitigation_info": true
},
"id": 955802765,
"rule_id": "CVE_2017_2636_kernel|KERNEL_CVE_2017_2636",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>A vulnerability in the Linux kernel allowing local privilege escalation was discovered.\nThe issue was reported as <a href=\"https://access.redhat.com/security/cve/CVE-2017-2636\">CVE-2017-2636</a>.</p>\n",
"generic_html": "<p>A use-after-free flaw was found in the Linux kernel implementation of the HDLC (High-Level Data Link Control) TTY line discipline implementation. It has been assigned CVE-2017-2636.</p>\n<p>An unprivileged local user could use this flaw to execute arbitrary code in kernel memory and increase their privileges on the system. The kernel uses a TTY subsystem to take and show terminal output to connected systems. An attacker crafting specific-sized memory allocations could abuse this mechanism to place a kernel function pointer with malicious instructions to be executed on behalf of the attacker.</p>\n<p>An attacker must have access to a local account on the system; this is not a remote attack. Exploiting this flaw does not require Microgate or SyncLink hardware to be in use.</p>\n<p>Red Hat recommends that you use the proposed mitigation to disable the N_HDLC module.</p>\n",
"more_info_html": "<ul>\n<li>For more information about the flaw, see <a href=\"https://access.redhat.com/security/cve/CVE-2017-2636\">CVE-2017-2636</a> and <a href=\"https://access.redhat.com/security/vulnerabilities/CVE-2017-2636\">CVE-2017-2636 article</a>.</li>\n<li>The Customer Portal page for the <a href=\"https://access.redhat.com/security/\">Red Hat Security Team</a> contains more information about policies, procedures, and alerts for Red Hat products.</li>\n<li>The Security Team also maintains a frequently updated blog at <a href=\"https://securityblog.redhat.com\">securityblog.redhat.com</a>.</li>\n</ul>\n",
"severity": "WARN",
"ansible": true,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "CVE_2017_2636_kernel|KERNEL_CVE_2017_2636",
"error_key": "KERNEL_CVE_2017_2636",
"plugin": "CVE_2017_2636_kernel",
"description": "Kernel vulnerable to local privilege escalation via n_hdlc module (CVE-2017-2636)",
"summary": "A vulnerability in the Linux kernel allowing local privilege escalation was discovered.\nThe issue was reported as [CVE-2017-2636](https://access.redhat.com/security/cve/CVE-2017-2636).\n",
"generic": "A use-after-free flaw was found in the Linux kernel implementation of the HDLC (High-Level Data Link Control) TTY line discipline implementation. It has been assigned CVE-2017-2636.\n\nAn unprivileged local user could use this flaw to execute arbitrary code in kernel memory and increase their privileges on the system. The kernel uses a TTY subsystem to take and show terminal output to connected systems. An attacker crafting specific-sized memory allocations could abuse this mechanism to place a kernel function pointer with malicious instructions to be executed on behalf of the attacker.\n\nAn attacker must have access to a local account on the system; this is not a remote attack. Exploiting this flaw does not require Microgate or SyncLink hardware to be in use.\n\nRed Hat recommends that you use the proposed mitigation to disable the N_HDLC module.\n",
"reason": "<p>A use-after-free flaw was found in the Linux kernel implementation of the HDLC (High-Level Data Link Control) TTY line discipline implementation.</p>\n<p>This host is affected because it is running kernel <strong>3.10.0-327.el7</strong>.</p>\n",
"type": null,
"more_info": "* For more information about the flaw, see [CVE-2017-2636](https://access.redhat.com/security/cve/CVE-2017-2636) and [CVE-2017-2636 article](https://access.redhat.com/security/vulnerabilities/CVE-2017-2636).\n* The Customer Portal page for the [Red Hat Security Team](https://access.redhat.com/security/) contains more information about policies, procedures, and alerts for Red Hat products.\n* The Security Team also maintains a frequently updated blog at [securityblog.redhat.com](https://securityblog.redhat.com).\n",
"active": true,
"node_id": null,
"category": "Security",
"retired": false,
"reboot_required": false,
"publish_date": "2017-05-16T12:00:00.000Z",
"rec_impact": 2,
"rec_likelihood": 2,
"resolution": "<p>Red Hat recommends updating the <code>kernel</code> package and rebooting the system.</p>\n<pre><code># yum update kernel\n# reboot\n</code></pre><p><strong>Alternatively</strong>, apply one of the following mitigations:</p>\n<p>Disable loading of N_HDLC kernel module:</p>\n<pre><code># echo &quot;install n_hdlc /bin/true&quot; &gt;&gt; /etc/modprobe.d/disable-n_hdlc.conf\n</code></pre>"
},
"maintenance_actions": [{
"done": false,
"id": 305275,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 308675,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}, {
"details": {
"kvr": "3.10.0-327.el7",
"error_key": "IPMI_LIST_CORRUPTION_CRASH"
},
"id": 955826995,
"rule_id": "ipmi_list_corruption_crash|IPMI_LIST_CORRUPTION_CRASH",
"system_id": "11111111-1111-1111-1111-111111111111",
"account_number": "1111111",
"uuid": "11111111111111111111111111111111",
"date": "2017-07-21T07:07:29.000Z",
"rule": {
"summary_html": "<p>Kernel occasionally panics when running <code>ipmitool</code> command due to a bug in the ipmi message handler.</p>\n",
"generic_html": "<p>Kernel occasionally panics when running <code>ipmitool</code> due to a bug in the ipmi message handler.</p>\n",
"more_info_html": "<p>For how to upgrade the kernel to a specific version, refer to <a href=\"https://access.redhat.com/solutions/161803\">How do I upgrade the kernel to a particular version manually?</a>.</p>\n",
"severity": "WARN",
"ansible": false,
"ansible_fix": false,
"ansible_mitigation": false,
"rule_id": "ipmi_list_corruption_crash|IPMI_LIST_CORRUPTION_CRASH",
"error_key": "IPMI_LIST_CORRUPTION_CRASH",
"plugin": "ipmi_list_corruption_crash",
"description": "Kernel panic occurs when running ipmitool command with specific kernels",
"summary": "Kernel occasionally panics when running `ipmitool` command due to a bug in the ipmi message handler.\n",
"generic": "Kernel occasionally panics when running `ipmitool` due to a bug in the ipmi message handler.\n",
"reason": "<p>This host is running kernel <strong>3.10.0-327.el7</strong> with the IPMI management tool installed.\nKernel panics can occur when running <code>ipmitool</code>.</p>\n",
"type": null,
"more_info": "For how to upgrade the kernel to a specific version, refer to [How do I upgrade the kernel to a particular version manually?](https://access.redhat.com/solutions/161803).\n",
"active": true,
"node_id": "2690791",
"category": "Stability",
"retired": false,
"reboot_required": true,
"publish_date": null,
"rec_impact": 3,
"rec_likelihood": 1,
"resolution": "<p>Red Hat recommends that you complete the following steps to fix this issue:</p>\n<ol>\n\n<li>Upgrade kernel to the version <strong>3.10.0-327.36.1.el7</strong> or later:</li>\n\n<code>\n# yum update kernel\n</code>\n<li>Restart the host with the new kernel.</li>\n<code>\n# reboot\n</code>\n</ol>\n"
},
"maintenance_actions": [{
"done": false,
"id": 305285,
"maintenance_plan": {
"maintenance_id": 29315,
"name": "RHEL Demo Infrastructure",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}, {
"done": false,
"id": 310145,
"maintenance_plan": {
"maintenance_id": 29335,
"name": "RHEL Demo All Systems",
"description": null,
"start": null,
"end": null,
"created_by": "$READACTED$",
"silenced": false,
"hidden": false,
"suggestion": null,
"remote_branch": null,
"allow_reboot": true
}
}]
}]
}

View File

@@ -0,0 +1,9 @@
+import json
+import os
+
+dir_path = os.path.dirname(os.path.realpath(__file__))
+
+with open(os.path.join(dir_path, 'insights.json')) as data_file:
+    TEST_INSIGHTS_PLANS = json.loads(data_file.read())
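The loader above pulls the Insights sample payload shown earlier into `TEST_INSIGHTS_PLANS`. As an illustrative sketch only (not part of this commit), a helper like the following could summarize rule severities from report objects shaped like the sample data, where each report carries a `rule` dict with a `severity` field:

```python
from collections import Counter


def summarize_severities(reports):
    """Count reports per rule severity (e.g. 'WARN') in an Insights-style payload."""
    return Counter(report['rule']['severity'] for report in reports)


# Miniature stand-in for the much larger fixture above:
sample_reports = [
    {'rule': {'severity': 'WARN'}},
    {'rule': {'severity': 'WARN'}},
    {'rule': {'severity': 'ERROR'}},
]
counts = summarize_severities(sample_reports)
```

Only the `severity` field name is taken from the fixture; the helper itself is hypothetical.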

View File

@@ -139,7 +139,7 @@ def create_instance_group(name, instances=None):
     return mk_instance_group(name=name, instance=instances)


-def create_survey_spec(variables=None, default_type='integer', required=True):
+def create_survey_spec(variables=None, default_type='integer', required=True, min=None, max=None):
     '''
     Returns a valid survey spec for a job template, based on the input
     argument specifying variable name(s)
@@ -174,10 +174,14 @@ def create_survey_spec(variables=None, default_type='integer', required=True):
         spec_item.setdefault('question_description', "A question about %s." % var_name)
         if spec_item['type'] == 'integer':
             spec_item.setdefault('default', 0)
-            spec_item.setdefault('max', spec_item['default'] + 100)
-            spec_item.setdefault('min', spec_item['default'] - 100)
+            spec_item.setdefault('max', max or spec_item['default'] + 100)
+            spec_item.setdefault('min', min or spec_item['default'] - 100)
         else:
             spec_item.setdefault('default', '')
+        if min:
+            spec_item.setdefault('min', min)
+        if max:
+            spec_item.setdefault('max', max)
         spec.append(spec_item)
     survey_spec = {}
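The hunk above lets callers pin explicit integer bounds on a survey question while keeping the old `default ± 100` fallback. A standalone sketch of just that defaulting rule (illustrative only, not the factory itself):

```python
def integer_bounds(default=0, min=None, max=None):
    # Mirrors the setdefault logic in the diff: an explicit bound wins,
    # otherwise fall back to default +/- 100. Because the fallback uses
    # `min or ...`, a falsy value like 0 is treated as "not provided".
    return {
        'min': min or default - 100,
        'max': max or default + 100,
    }


assert integer_bounds() == {'min': -100, 'max': 100}
assert integer_bounds(default=5, max=10) == {'min': -95, 'max': 10}
assert integer_bounds(min=0) == {'min': -100, 'max': 100}  # 0 falls through
```

Note the `or`-based fallback: passing `min=0` or `max=0` silently reverts to the computed bound.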

View File

@@ -1,4 +1,5 @@
 import itertools
+import re

 import mock # noqa
 import pytest
@@ -711,7 +712,7 @@ def test_inputs_cannot_contain_extra_fields(get, post, organization, admin, cred

 @pytest.mark.django_db
 @pytest.mark.parametrize('field_name, field_value', itertools.product(
-    ['username', 'password', 'ssh_key_data', 'ssh_key_unlock', 'become_method', 'become_username', 'become_password'], # noqa
+    ['username', 'password', 'ssh_key_data', 'become_method', 'become_username', 'become_password'], # noqa
     ['', None]
 ))
 def test_nullish_field_data(get, post, organization, admin, field_name, field_value):
@@ -762,6 +763,33 @@ def test_falsey_field_data(get, post, organization, admin, field_value):
     assert cred.authorize is False


+@pytest.mark.django_db
+@pytest.mark.parametrize('kind, extraneous', [
+    ['ssh', 'ssh_key_unlock'],
+    ['scm', 'ssh_key_unlock'],
+    ['net', 'ssh_key_unlock'],
+    ['net', 'authorize_password'],
+])
+def test_field_dependencies(get, post, organization, admin, kind, extraneous):
+    _type = CredentialType.defaults[kind]()
+    _type.save()
+    params = {
+        'name': 'Best credential ever',
+        'credential_type': _type.pk,
+        'organization': organization.id,
+        'inputs': {extraneous: 'not needed'}
+    }
+    response = post(
+        reverse('api:credential_list', kwargs={'version': 'v2'}),
+        params,
+        admin
+    )
+    assert response.status_code == 400
+    assert re.search('cannot be set unless .+ is set.', response.content)
+    assert Credential.objects.count() == 0
+
+
 #
 # SCM Credentials
 #
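The new `test_field_dependencies` expects the API to reject an input whose prerequisite field is absent. A hypothetical, self-contained version of such a dependency rule — the `DEPENDENCIES` mapping here is inferred from the parametrize cases above, not taken from AWX's source:

```python
import re

# Inferred from the test cases above: each field is only meaningful
# once its prerequisite field is also supplied.
DEPENDENCIES = {
    'ssh_key_unlock': 'ssh_key_data',
    'authorize_password': 'authorize',
}


def validate_inputs(inputs):
    """Return error strings matching the pattern the test asserts on."""
    errors = []
    for field, prerequisite in DEPENDENCIES.items():
        if field in inputs and prerequisite not in inputs:
            errors.append("'%s' cannot be set unless '%s' is set." % (field, prerequisite))
    return errors


errors = validate_inputs({'ssh_key_unlock': 'passphrase'})
```

The returned message deliberately matches the regex `cannot be set unless .+ is set.` used in the test.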

View File

@@ -80,16 +80,15 @@ def test_update_managed_by_tower_xfail(patch, delete, admin):

 @pytest.mark.django_db
 def test_update_credential_type_in_use_xfail(patch, delete, admin):
-    ssh = CredentialType.defaults['ssh']()
-    ssh.managed_by_tower = False
-    ssh.save()
-    Credential(credential_type=ssh, name='My SSH Key').save()
+    _type = CredentialType(kind='cloud', inputs={'fields': []})
+    _type.save()
+    Credential(credential_type=_type, name='My Custom Cred').save()

-    url = reverse('api:credential_type_detail', kwargs={'pk': ssh.pk})
+    url = reverse('api:credential_type_detail', kwargs={'pk': _type.pk})
     response = patch(url, {'name': 'Some Other Name'}, admin)
     assert response.status_code == 200

-    url = reverse('api:credential_type_detail', kwargs={'pk': ssh.pk})
+    url = reverse('api:credential_type_detail', kwargs={'pk': _type.pk})
     response = patch(url, {'inputs': {}}, admin)
     assert response.status_code == 403
@@ -98,11 +97,10 @@ def test_update_credential_type_in_use_xfail(patch, delete, admin):

 @pytest.mark.django_db
 def test_update_credential_type_success(get, patch, delete, admin):
-    ssh = CredentialType.defaults['ssh']()
-    ssh.managed_by_tower = False
-    ssh.save()
+    _type = CredentialType(kind='cloud')
+    _type.save()

-    url = reverse('api:credential_type_detail', kwargs={'pk': ssh.pk})
+    url = reverse('api:credential_type_detail', kwargs={'pk': _type.pk})
     response = patch(url, {'name': 'Some Other Name'}, admin)
     assert response.status_code == 200
@@ -163,6 +161,21 @@ def test_create_managed_by_tower_readonly(get, post, admin):
     assert response.data['results'][0]['managed_by_tower'] is False


+@pytest.mark.django_db
+def test_create_dependencies_not_supported(get, post, admin):
+    response = post(reverse('api:credential_type_list'), {
+        'kind': 'cloud',
+        'name': 'Custom Credential Type',
+        'inputs': {'dependencies': {'foo': ['bar']}},
+        'injectors': {},
+    }, admin)
+    assert response.status_code == 400
+    assert response.data['inputs'] == ["'dependencies' is not supported for custom credentials."]
+
+    response = get(reverse('api:credential_type_list'), admin)
+    assert response.data['count'] == 0
+
+
 @pytest.mark.django_db
 @pytest.mark.parametrize('kind', ['cloud', 'net'])
 def test_create_valid_kind(kind, get, post, admin):

View File

@@ -5,18 +5,32 @@ from django.core.exceptions import ValidationError

 from awx.api.versioning import reverse
-from awx.main.models import InventorySource
+from awx.main.models import InventorySource, Inventory, ActivityStream
+
+import json


 @pytest.fixture
 def scm_inventory(inventory, project):
-    with mock.patch.object(project, 'update'):
+    with mock.patch('awx.main.models.unified_jobs.UnifiedJobTemplate.update'):
         inventory.inventory_sources.create(
             name='foobar', update_on_project_update=True, source='scm',
             source_project=project, scm_last_revision=project.scm_revision)
     return inventory


+@pytest.fixture
+def factory_scm_inventory(inventory, project):
+    def fn(**kwargs):
+        with mock.patch('awx.main.models.unified_jobs.UnifiedJobTemplate.update'):
+            return inventory.inventory_sources.create(source_project=project,
+                                                      overwrite_vars=True,
+                                                      source='scm',
+                                                      scm_last_revision=project.scm_revision,
+                                                      **kwargs)
+    return fn
+
+
 @pytest.mark.django_db
 def test_inventory_source_notification_on_cloud_only(get, post, inventory_source_factory, user, notification_template):
     u = user('admin', True)
@@ -69,6 +83,7 @@ def test_async_inventory_deletion(delete, get, inventory, alice):
     inventory.admin_role.members.add(alice)
     resp = delete(reverse('api:inventory_detail', kwargs={'pk': inventory.id}), alice)
     assert resp.status_code == 202
+    assert ActivityStream.objects.filter(operation='delete').exists()

     resp = get(reverse('api:inventory_detail', kwargs={'pk': inventory.id}), alice)
     assert resp.status_code == 200
@@ -86,6 +101,20 @@ def test_async_inventory_duplicate_deletion_prevention(delete, get, inventory, a
     assert resp.data['error'] == 'Inventory is already pending deletion.'


+@pytest.mark.django_db
+def test_async_inventory_deletion_deletes_related_jt(delete, get, job_template, inventory, alice, admin):
+    job_template.inventory = inventory
+    job_template.save()
+    assert job_template.inventory == inventory
+
+    inventory.admin_role.members.add(alice)
+    resp = delete(reverse('api:inventory_detail', kwargs={'pk': inventory.id}), alice)
+    assert resp.status_code == 202
+
+    resp = get(reverse('api:job_template_detail', kwargs={'pk': job_template.id}), admin)
+    jdata = json.loads(resp.content)
+    assert jdata['inventory'] is None
+
+
 @pytest.mark.parametrize('order_by', ('script', '-script', 'script,pk', '-script,pk'))
 @pytest.mark.django_db
 def test_list_cannot_order_by_unsearchable_field(get, organization, alice, order_by):
@@ -175,6 +204,54 @@ def test_delete_inventory_group(delete, group, alice, role_field, expected_statu
     delete(reverse('api:group_detail', kwargs={'pk': group.id}), alice, expect=expected_status_code)


+@pytest.mark.django_db
+def test_create_inventory_smarthost(post, get, inventory, admin_user, organization):
+    data = { 'name': 'Host 1', 'description': 'Test Host'}
+    smart_inventory = Inventory(name='smart',
+                                kind='smart',
+                                organization=organization,
+                                host_filter='inventory_sources__source=ec2')
+    smart_inventory.save()
+
+    post(reverse('api:inventory_hosts_list', kwargs={'pk': smart_inventory.id}), data, admin_user)
+    resp = get(reverse('api:inventory_hosts_list', kwargs={'pk': smart_inventory.id}), admin_user)
+    jdata = json.loads(resp.content)
+
+    assert getattr(smart_inventory, 'kind') == 'smart'
+    assert jdata['count'] == 0
+
+
+@pytest.mark.django_db
+def test_create_inventory_smartgroup(post, get, inventory, admin_user, organization):
+    data = { 'name': 'Group 1', 'description': 'Test Group'}
+    smart_inventory = Inventory(name='smart',
+                                kind='smart',
+                                organization=organization,
+                                host_filter='inventory_sources__source=ec2')
+    smart_inventory.save()
+
+    post(reverse('api:inventory_groups_list', kwargs={'pk': smart_inventory.id}), data, admin_user)
+    resp = get(reverse('api:inventory_groups_list', kwargs={'pk': smart_inventory.id}), admin_user)
+    jdata = json.loads(resp.content)
+
+    assert getattr(smart_inventory, 'kind') == 'smart'
+    assert jdata['count'] == 0
+
+
+@pytest.mark.django_db
+def test_create_inventory_smart_inventory_sources(post, get, inventory, admin_user, organization):
+    data = { 'name': 'Inventory Source 1', 'description': 'Test Inventory Source'}
+    smart_inventory = Inventory(name='smart',
+                                kind='smart',
+                                organization=organization,
+                                host_filter='inventory_sources__source=ec2')
+    smart_inventory.save()
+
+    post(reverse('api:inventory_inventory_sources_list', kwargs={'pk': smart_inventory.id}), data, admin_user)
+    resp = get(reverse('api:inventory_inventory_sources_list', kwargs={'pk': smart_inventory.id}), admin_user)
+    jdata = json.loads(resp.content)
+
+    assert getattr(smart_inventory, 'kind') == 'smart'
+    assert jdata['count'] == 0
+
+
 @pytest.mark.parametrize("role_field,expected_status_code", [
     (None, 403),
     ('admin_role', 201),
@@ -311,21 +388,30 @@ class TestControlledBySCM:
         delete(inv_src.get_absolute_url(), admin_user, expect=204)
         assert scm_inventory.inventory_sources.count() == 0

-    def test_adding_inv_src_prohibited(self, post, scm_inventory, admin_user):
+    def test_adding_inv_src_ok(self, post, scm_inventory, admin_user):
+        post(reverse('api:inventory_inventory_sources_list', kwargs={'version': 'v2', 'pk': scm_inventory.id}),
+             {'name': 'new inv src', 'update_on_project_update': False, 'source': 'scm', 'overwrite_vars': True},
+             admin_user, expect=201)
+
+    def test_adding_inv_src_prohibited(self, post, scm_inventory, project, admin_user):
         post(reverse('api:inventory_inventory_sources_list', kwargs={'pk': scm_inventory.id}),
-             {'name': 'new inv src'}, admin_user, expect=403)
+             {'name': 'new inv src', 'source_project': project.pk, 'update_on_project_update': True, 'source': 'scm', 'overwrite_vars': True},
+             admin_user, expect=400)
+
+    def test_two_update_on_project_update_inv_src_prohibited(self, patch, scm_inventory, factory_scm_inventory, project, admin_user):
+        scm_inventory2 = factory_scm_inventory(name="scm_inventory2")
+        res = patch(reverse('api:inventory_source_detail', kwargs={'version': 'v2', 'pk': scm_inventory2.id}),
+                    {'update_on_project_update': True,},
+                    admin_user, expect=400)
+        content = json.loads(res.content)
+        assert content['update_on_project_update'] == ["More than one SCM-based inventory source with update on project update "
+                                                       "per-inventory not allowed."]

     def test_adding_inv_src_without_proj_access_prohibited(self, post, project, inventory, rando):
         inventory.admin_role.members.add(rando)
-        post(
-            reverse('api:inventory_inventory_sources_list', kwargs={'pk': inventory.id}),
-            {'name': 'new inv src', 'source_project': project.pk, 'source': 'scm', 'overwrite_vars': True},
-            rando, expect=403)
+        post(reverse('api:inventory_inventory_sources_list', kwargs={'pk': inventory.id}),
+             {'name': 'new inv src', 'source_project': project.pk, 'source': 'scm', 'overwrite_vars': True},
+             rando, expect=403)
+
+    def test_no_post_in_options(self, options, scm_inventory, admin_user):
+        r = options(reverse('api:inventory_inventory_sources_list', kwargs={'pk': scm_inventory.id}),
+                    admin_user, expect=200)
+        assert 'POST' not in r.data['actions']


 @pytest.mark.django_db

View File

@@ -316,6 +316,72 @@ def test_job_launch_JT_enforces_unique_extra_credential_kinds(machine_credential
     assert validated is False


+@pytest.mark.django_db
+@pytest.mark.parametrize('ask_credential_on_launch', [True, False])
+def test_job_launch_with_no_credentials(deploy_jobtemplate, ask_credential_on_launch):
+    deploy_jobtemplate.credential = None
+    deploy_jobtemplate.vault_credential = None
+    deploy_jobtemplate.ask_credential_on_launch = ask_credential_on_launch
+    serializer = JobLaunchSerializer(
+        instance=deploy_jobtemplate, data={},
+        context={'obj': deploy_jobtemplate, 'data': {}, 'passwords': {}})
+    validated = serializer.is_valid()
+    assert validated is False
+    assert serializer.errors['credential'] == ["Job Template 'credential' is missing or undefined."]
+
+
+@pytest.mark.django_db
+def test_job_launch_with_only_vault_credential(vault_credential, deploy_jobtemplate):
+    deploy_jobtemplate.credential = None
+    deploy_jobtemplate.vault_credential = vault_credential
+    serializer = JobLaunchSerializer(
+        instance=deploy_jobtemplate, data={},
+        context={'obj': deploy_jobtemplate, 'data': {}, 'passwords': {}})
+    validated = serializer.is_valid()
+    assert validated
+
+    prompted_fields, ignored_fields = deploy_jobtemplate._accept_or_ignore_job_kwargs(**{})
+    job_obj = deploy_jobtemplate.create_unified_job(**prompted_fields)
+    assert job_obj.vault_credential.pk == vault_credential.pk
+
+
+@pytest.mark.django_db
+def test_job_launch_with_vault_credential_ask_for_machine(vault_credential, deploy_jobtemplate):
+    deploy_jobtemplate.credential = None
+    deploy_jobtemplate.ask_credential_on_launch = True
+    deploy_jobtemplate.vault_credential = vault_credential
+    serializer = JobLaunchSerializer(
+        instance=deploy_jobtemplate, data={},
+        context={'obj': deploy_jobtemplate, 'data': {}, 'passwords': {}})
+    validated = serializer.is_valid()
+    assert validated
+
+    prompted_fields, ignored_fields = deploy_jobtemplate._accept_or_ignore_job_kwargs(**{})
+    job_obj = deploy_jobtemplate.create_unified_job(**prompted_fields)
+    assert job_obj.credential is None
+    assert job_obj.vault_credential.pk == vault_credential.pk
+
+
+@pytest.mark.django_db
+def test_job_launch_with_vault_credential_and_prompted_machine_cred(machine_credential, vault_credential,
+                                                                    deploy_jobtemplate):
+    deploy_jobtemplate.credential = None
+    deploy_jobtemplate.ask_credential_on_launch = True
+    deploy_jobtemplate.vault_credential = vault_credential
+    kv = dict(credential=machine_credential.id)
+    serializer = JobLaunchSerializer(
+        instance=deploy_jobtemplate, data=kv,
+        context={'obj': deploy_jobtemplate, 'data': kv, 'passwords': {}})
+    validated = serializer.is_valid()
+    assert validated
+
+    prompted_fields, ignored_fields = deploy_jobtemplate._accept_or_ignore_job_kwargs(**kv)
+    job_obj = deploy_jobtemplate.create_unified_job(**prompted_fields)
+    assert job_obj.credential.pk == machine_credential.pk
+    assert job_obj.vault_credential.pk == vault_credential.pk
+
+
 @pytest.mark.django_db
 def test_job_launch_JT_with_default_vault_credential(machine_credential, vault_credential, deploy_jobtemplate):
     deploy_jobtemplate.credential = machine_credential

View File

@@ -0,0 +1,75 @@
+import pytest
+import json
+
+from awx.api.versioning import reverse
+from awx.main.models import Inventory, Host
+
+
+@pytest.mark.django_db
+def test_empty_inventory(post, get, admin_user, organization, group_factory):
+    inventory = Inventory(name='basic_inventory',
+                          kind='',
+                          organization=organization)
+    inventory.save()
+
+    resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': inventory.pk}), admin_user)
+    jdata = json.loads(resp.content)
+
+    assert inventory.hosts.count() == 0
+    assert jdata == {}
+
+
+@pytest.mark.django_db
+def test_empty_smart_inventory(post, get, admin_user, organization, group_factory):
+    smart_inventory = Inventory(name='smart',
+                                kind='smart',
+                                organization=organization,
+                                host_filter='enabled=True')
+    smart_inventory.save()
+
+    resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': smart_inventory.pk}), admin_user)
+    smartjdata = json.loads(resp.content)
+
+    assert smart_inventory.hosts.count() == 0
+    assert smartjdata == {}
+
+
+@pytest.mark.django_db
+def test_ungrouped_hosts(post, get, admin_user, organization, group_factory):
+    inventory = Inventory(name='basic_inventory',
+                          kind='',
+                          organization=organization)
+    inventory.save()
+    Host.objects.create(name='first_host', inventory=inventory)
+    Host.objects.create(name='second_host', inventory=inventory)
+
+    resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': inventory.pk}), admin_user)
+    jdata = json.loads(resp.content)
+
+    assert inventory.hosts.count() == 2
+    assert len(jdata['all']['hosts']) == 2
+
+
+@pytest.mark.django_db
+def test_grouped_hosts_smart_inventory(post, get, admin_user, organization, group_factory):
+    inventory = Inventory(name='basic_inventory',
+                          kind='',
+                          organization=organization)
+    inventory.save()
+    groupA = group_factory('test_groupA')
+    host1 = Host.objects.create(name='first_host', inventory=inventory)
+    host2 = Host.objects.create(name='second_host', inventory=inventory)
+    Host.objects.create(name='third_host', inventory=inventory)
+    groupA.hosts.add(host1)
+    groupA.hosts.add(host2)
+
+    smart_inventory = Inventory(name='smart_inventory',
+                                kind='smart',
+                                organization=organization,
+                                host_filter='enabled=True')
+    smart_inventory.save()
+
+    resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': inventory.pk}), admin_user)
+    jdata = json.loads(resp.content)
+    resp = get(reverse('api:inventory_script_view', kwargs={'version': 'v2', 'pk': smart_inventory.pk}), admin_user)
+    smartjdata = json.loads(resp.content)
+
+    assert getattr(smart_inventory, 'kind') == 'smart'
+    assert inventory.hosts.count() == 3
+    assert len(jdata['all']['hosts']) == 1
+    assert smart_inventory.hosts.count() == 3
+    assert len(smartjdata['all']['hosts']) == 3
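The assertions above pin down the shape of the inventory-script output: an empty inventory serializes to `{}`, and hosts land under the top-level `all` group. A minimal sketch of that contract (a toy model, not AWX's serializer):

```python
def inventory_script_data(host_names):
    """Toy version of the JSON body the tests GET from the inventory script view."""
    if not host_names:
        # Empty inventories serialize to an empty object.
        return {}
    # Hosts appear under the implicit top-level 'all' group.
    return {'all': {'hosts': sorted(host_names)}}
```

Under this model, `len(data['all']['hosts'])` counts every host visible to the inventory, which is exactly what the `jdata` and `smartjdata` assertions check.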

View File

@@ -5,6 +5,8 @@
import pytest import pytest
import os import os
from django.conf import settings
# Mock # Mock
import mock import mock
@@ -145,6 +147,21 @@ def test_radius_settings(get, put, patch, delete, admin, settings):
assert settings.RADIUS_SECRET == '' assert settings.RADIUS_SECRET == ''
@pytest.mark.django_db
def test_tacacsplus_settings(get, put, patch, admin):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'tacacsplus'})
response = get(url, user=admin, expect=200)
put(url, user=admin, data=response.data, expect=200)
patch(url, user=admin, data={'TACACSPLUS_SECRET': 'mysecret'}, expect=200)
patch(url, user=admin, data={'TACACSPLUS_SECRET': ''}, expect=200)
patch(url, user=admin, data={'TACACSPLUS_HOST': 'localhost'}, expect=400)
patch(url, user=admin, data={'TACACSPLUS_SECRET': 'mysecret'}, expect=200)
patch(url, user=admin, data={'TACACSPLUS_HOST': 'localhost'}, expect=200)
patch(url, user=admin, data={'TACACSPLUS_HOST': '', 'TACACSPLUS_SECRET': ''}, expect=200)
patch(url, user=admin, data={'TACACSPLUS_HOST': 'localhost', 'TACACSPLUS_SECRET': ''}, expect=400)
patch(url, user=admin, data={'TACACSPLUS_HOST': 'localhost', 'TACACSPLUS_SECRET': 'mysecret'}, expect=200)
@pytest.mark.django_db
def test_ui_settings(get, put, patch, delete, admin):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'ui'})
@@ -243,3 +260,44 @@ def test_logging_aggregrator_connection_test_invalid(mocker, get, post, admin):
'LOG_AGGREGATOR_PORT': 8080
}, user=admin, expect=500)
assert resp.data == {'error': '404: Not Found'}
@pytest.mark.django_db
@pytest.mark.parametrize('setting_name', [
'AWX_ISOLATED_CHECK_INTERVAL',
'AWX_ISOLATED_LAUNCH_TIMEOUT',
'AWX_ISOLATED_CONNECTION_TIMEOUT',
])
def test_isolated_job_setting_validation(get, patch, admin, setting_name):
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'jobs'})
patch(url, user=admin, data={
setting_name: -1
}, expect=400)
data = get(url, user=admin).data
assert data[setting_name] != -1
@pytest.mark.django_db
@pytest.mark.parametrize('key, expected', [
['AWX_ISOLATED_PRIVATE_KEY', '$encrypted$'],
['AWX_ISOLATED_PUBLIC_KEY', 'secret'],
])
def test_isolated_keys_readonly(get, patch, delete, admin, key, expected):
Setting.objects.create(
key=key,
value='secret'
).save()
assert getattr(settings, key) == 'secret'
url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'jobs'})
resp = get(url, user=admin)
assert resp.data[key] == expected
patch(url, user=admin, data={
key: 'new-secret'
})
assert getattr(settings, key) == 'secret'
delete(url, user=admin)
assert getattr(settings, key) == 'secret'
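The read-only key tests above exercise a common pattern: a sensitive value is written once, and the API masks it on later reads while ignoring writes. A minimal standalone sketch of the masking half (the store and `read_setting` are illustrative, not AWX code):

```python
# Hedged sketch: secret settings are masked on read with a placeholder.
# Only keys listed in SENSITIVE get the '$encrypted$' placeholder,
# mirroring why the private key reads back masked but the public key
# reads back verbatim in the test above.
SENSITIVE = {'AWX_ISOLATED_PRIVATE_KEY'}

def read_setting(store, key):
    value = store[key]
    return '$encrypted$' if key in SENSITIVE else value

store = {'AWX_ISOLATED_PRIVATE_KEY': 'secret', 'AWX_ISOLATED_PUBLIC_KEY': 'secret'}
assert read_setting(store, 'AWX_ISOLATED_PRIVATE_KEY') == '$encrypted$'
assert read_setting(store, 'AWX_ISOLATED_PUBLIC_KEY') == 'secret'
```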


@@ -1,8 +1,9 @@
import pytest
from awx.api.versioning import reverse
-from awx.main.models import UnifiedJob, ProjectUpdate
+from awx.main.models import UnifiedJob, ProjectUpdate, InventoryUpdate
from awx.main.tests.base import URI
from awx.main.models.unified_jobs import ACTIVE_STATES
TEST_STDOUTS = []
@@ -90,3 +91,52 @@ def test_options_fields_choices(instance, options, user):
assert UnifiedJob.LAUNCH_TYPE_CHOICES == response.data['actions']['GET']['launch_type']['choices']
assert 'choice' == response.data['actions']['GET']['status']['type']
assert UnifiedJob.STATUS_CHOICES == response.data['actions']['GET']['status']['choices']
@pytest.mark.parametrize("status", list(ACTIVE_STATES))
@pytest.mark.django_db
def test_delete_job_in_active_state(job_factory, delete, admin, status):
j = job_factory(initial_state=status)
url = reverse('api:job_detail', kwargs={'pk': j.pk})
delete(url, None, admin, expect=403)
@pytest.mark.parametrize("status", list(ACTIVE_STATES))
@pytest.mark.django_db
def test_delete_project_update_in_active_state(project, delete, admin, status):
p = ProjectUpdate(project=project, status=status)
p.save()
url = reverse('api:project_update_detail', kwargs={'pk': p.pk})
delete(url, None, admin, expect=403)
@pytest.mark.parametrize("status", list(ACTIVE_STATES))
@pytest.mark.django_db
def test_delete_inventory_update_in_active_state(inventory_source, delete, admin, status):
i = InventoryUpdate.objects.create(inventory_source=inventory_source, status=status)
url = reverse('api:inventory_update_detail', kwargs={'pk': i.pk})
delete(url, None, admin, expect=403)
@pytest.mark.parametrize("status", list(ACTIVE_STATES))
@pytest.mark.django_db
def test_delete_workflow_job_in_active_state(workflow_job_factory, delete, admin, status):
wj = workflow_job_factory(initial_state=status)
url = reverse('api:workflow_job_detail', kwargs={'pk': wj.pk})
delete(url, None, admin, expect=403)
@pytest.mark.parametrize("status", list(ACTIVE_STATES))
@pytest.mark.django_db
def test_delete_system_job_in_active_state(system_job_factory, delete, admin, status):
sys_j = system_job_factory(initial_state=status)
url = reverse('api:system_job_detail', kwargs={'pk': sys_j.pk})
delete(url, None, admin, expect=403)
@pytest.mark.parametrize("status", list(ACTIVE_STATES))
@pytest.mark.django_db
def test_delete_ad_hoc_command_in_active_state(ad_hoc_command_factory, delete, admin, status):
adhoc = ad_hoc_command_factory(initial_state=status)
url = reverse('api:ad_hoc_command_detail', kwargs={'pk': adhoc.pk})
delete(url, None, admin, expect=403)
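Each of the delete tests above asserts the same guard: a unified job in any active state refuses deletion with a 403. A minimal sketch of that rule (the state tuple restates `ACTIVE_STATES` as used above, for illustration only):

```python
# Hedged sketch of the deletion guard the tests exercise through the
# API: deletion is refused while the job is in any active state.
ACTIVE_STATES = ('new', 'pending', 'waiting', 'running')

def can_delete(status):
    # Only jobs that have left every active state may be deleted.
    return status not in ACTIVE_STATES

assert not any(can_delete(s) for s in ACTIVE_STATES)
assert can_delete('successful')
```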


@@ -28,7 +28,7 @@ from rest_framework.test import (
)
from awx.main.models.credential import CredentialType, Credential
-from awx.main.models.jobs import JobTemplate
+from awx.main.models.jobs import JobTemplate, SystemJobTemplate
from awx.main.models.inventory import (
Group,
Inventory,
@@ -44,6 +44,8 @@ from awx.main.models.notifications import (
NotificationTemplate,
Notification
)
from awx.main.models.workflow import WorkflowJobTemplate
from awx.main.models.ad_hoc_commands import AdHocCommand
@pytest.fixture(autouse=True)
@@ -314,7 +316,7 @@ def scm_inventory_source(inventory, project):
update_on_project_update=True,
inventory=inventory,
scm_last_revision=project.scm_revision)
-with mock.patch.object(inv_src.source_project, 'update'):
+with mock.patch('awx.main.models.unified_jobs.UnifiedJobTemplate.update'):
inv_src.save()
return inv_src
@@ -612,6 +614,18 @@ def fact_services_json():
return _fact_json('services')
@pytest.fixture
def ad_hoc_command_factory(inventory, machine_credential, admin):
def factory(inventory=inventory, credential=machine_credential, initial_state='new', created_by=admin):
adhoc = AdHocCommand(
name='test-adhoc', inventory=inventory, credential=credential,
status=initial_state, created_by=created_by
)
adhoc.save()
return adhoc
return factory
@pytest.fixture
def job_template(organization):
jt = JobTemplate(name='test-job_template')
@@ -628,6 +642,35 @@ def test_job_template_labels(organization, job_template):
return job_template
@pytest.fixture
def workflow_job_template(organization):
wjt = WorkflowJobTemplate(name='test-workflow_job_template')
wjt.save()
return wjt
@pytest.fixture
def workflow_job_factory(workflow_job_template, admin):
def factory(workflow_job_template=workflow_job_template, initial_state='new', created_by=admin):
return workflow_job_template.create_unified_job(created_by=created_by, status=initial_state)
return factory
@pytest.fixture
def system_job_template():
sys_jt = SystemJobTemplate(name='test-system_job_template', job_type='cleanup_jobs')
sys_jt.save()
return sys_jt
@pytest.fixture
def system_job_factory(system_job_template, admin):
def factory(system_job_template=system_job_template, initial_state='new', created_by=admin):
return system_job_template.create_unified_job(created_by=created_by, status=initial_state)
return factory
def dumps(value):
return DjangoJSONEncoder().encode(value)
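The `*_factory` fixtures added above all share one shape: the fixture returns a builder callable with keyword defaults, so each test can construct objects with per-call overrides. A self-contained sketch of that pattern (`FakeJob` is illustrative, not an AWX model):

```python
import pytest

class FakeJob:
    # Stand-in for a unified job; only the fields the sketch needs.
    def __init__(self, status, created_by):
        self.status = status
        self.created_by = created_by

@pytest.fixture
def job_factory():
    # The fixture yields a builder rather than an object, so a test
    # can request several jobs, each with different overrides.
    def factory(initial_state='new', created_by='admin'):
        return FakeJob(status=initial_state, created_by=created_by)
    return factory

def test_factory_override(job_factory):
    assert job_factory(initial_state='running').status == 'running'
```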


@@ -1,4 +1,5 @@
import pytest
import mock
import json
@@ -9,6 +10,7 @@ from awx.main.models import (
JobTemplate,
Credential,
CredentialType,
Inventory,
InventorySource
)
@@ -16,6 +18,12 @@ from awx.main.models import (
from awx.main.utils import model_to_dict
from awx.api.serializers import InventorySourceSerializer
# Django
from django.contrib.auth.models import AnonymousUser
# Django-CRUM
from crum import impersonate
model_serializer_mapping = {
InventorySource: InventorySourceSerializer
@@ -157,3 +165,20 @@ def test_missing_related_on_delete(inventory_source):
inventory_source.inventory.delete()
d = model_to_dict(old_is, serializer_mapping=model_serializer_mapping)
assert d['inventory'] == '<missing inventory source>-{}'.format(old_is.inventory_id)
@pytest.mark.django_db
def test_activity_stream_actor(admin_user):
with impersonate(admin_user):
o = Organization.objects.create(name='test organization')
entry = o.activitystream_set.get(operation='create')
assert entry.actor == admin_user
@pytest.mark.django_db
def test_annon_user_action():
with mock.patch('awx.main.signals.get_current_user') as u_mock:
u_mock.return_value = AnonymousUser()
inv = Inventory.objects.create(name='ainventory')
entry = inv.activitystream_set.filter(operation='create').first()
assert not entry.actor
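The two activity-stream tests above hinge on the current-actor idea behind `crum.impersonate`: a thread-local holds the acting user, and records created inside the context are attributed to it; an anonymous user yields no actor. A hedged sketch of that mechanism (`record_event` and the thread-local are illustrative, not AWX or django-crum internals):

```python
import contextlib
import threading

_state = threading.local()

@contextlib.contextmanager
def impersonate(user):
    # Attribute everything created inside the block to `user`.
    _state.user = user
    try:
        yield
    finally:
        _state.user = None

def record_event(operation):
    # An activity-stream entry captures whichever actor is current.
    return {'operation': operation, 'actor': getattr(_state, 'user', None)}

with impersonate('admin'):
    entry = record_event('create')
assert entry['actor'] == 'admin'
assert record_event('create')['actor'] is None  # no actor outside the block
```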


@@ -1,6 +1,8 @@
import pytest
import mock
from django.core.exceptions import ValidationError
# AWX
from awx.main.models import (
Host,
@@ -20,7 +22,7 @@ class TestSCMUpdateFeatures:
inventory=inventory,
update_on_project_update=True,
source='scm')
-with mock.patch.object(inv_src.source_project, 'update') as mck_update:
+with mock.patch.object(inv_src, 'update') as mck_update:
inv_src.save()
mck_update.assert_called_once_with()
@@ -47,6 +49,26 @@ class TestSCMUpdateFeatures:
assert not mck_update.called
@pytest.mark.django_db
class TestSCMClean:
def test_clean_update_on_project_update_multiple(self, inventory):
inv_src1 = InventorySource(inventory=inventory,
update_on_project_update=True,
source='scm')
inv_src1.clean_update_on_project_update()
inv_src1.save()
inv_src1.source_vars = '---\nhello: world'
inv_src1.clean_update_on_project_update()
inv_src2 = InventorySource(inventory=inventory,
update_on_project_update=True,
source='scm')
with pytest.raises(ValidationError):
inv_src2.clean_update_on_project_update()
@pytest.fixture
def setup_ec2_gce(organization):
ec2_inv = Inventory.objects.create(name='test_ec2', organization=organization)


@@ -78,7 +78,8 @@ class TestIsolatedRuns:
iso_ig.instances.create(hostname='iso1', capacity=50)
i2 = iso_ig.instances.create(hostname='iso2', capacity=200)
job = Job.objects.create(
-instance_group=iso_ig
+instance_group=iso_ig,
+celery_task_id='something',
)
mock_async = mock.MagicMock()
@@ -91,7 +92,11 @@ class TestIsolatedRuns:
with mock.patch.object(job, '_get_task_class') as task_class:
task_class.return_value = MockTaskClass
job.start_celery_task([], error_callback, success_callback, 'thepentagon')
-mock_async.assert_called_with([job.id, 'iso2'], [], link_error=error_callback, link=success_callback, queue='thepentagon')
+mock_async.assert_called_with([job.id, 'iso2'], [],
+link_error=error_callback,
+link=success_callback,
+queue='thepentagon',
+task_id='something')
i2.capacity = 20
i2.save()
@@ -99,4 +104,8 @@ class TestIsolatedRuns:
with mock.patch.object(job, '_get_task_class') as task_class:
task_class.return_value = MockTaskClass
job.start_celery_task([], error_callback, success_callback, 'thepentagon')
-mock_async.assert_called_with([job.id, 'iso1'], [], link_error=error_callback, link=success_callback, queue='thepentagon')
+mock_async.assert_called_with([job.id, 'iso1'], [],
+link_error=error_callback,
+link=success_callback,
+queue='thepentagon',
+task_id='something')
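The isolated-run test above asserts that the task lands on `iso2` while it has capacity 200 and falls back to `iso1` once `iso2` drops to 20. A hedged sketch (not AWX code) of that selection rule:

```python
# Minimal sketch of capacity-based instance selection: among a group's
# members, pick the instance with the largest remaining capacity and
# skip instances with none left.
from dataclasses import dataclass

@dataclass
class Instance:
    hostname: str
    capacity: int

def pick_instance(instances):
    eligible = [i for i in instances if i.capacity > 0]
    return max(eligible, key=lambda i: i.capacity) if eligible else None

# Mirrors the fixture above: iso2 wins at capacity 200...
assert pick_instance([Instance('iso1', 50), Instance('iso2', 200)]).hostname == 'iso2'
# ...and iso1 wins once iso2 is reduced to 20.
assert pick_instance([Instance('iso1', 50), Instance('iso2', 20)]).hostname == 'iso1'
```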


@@ -29,7 +29,7 @@ def test_multi_group_basic_job_launch(instance_factory, default_instance_group,
mock_task_impact.return_value = 500
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_has_calls([mock.call(j1, ig1), mock.call(j2, ig2)])
+TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, []), mock.call(j2, ig2, [])])
@@ -63,13 +63,26 @@ def test_multi_group_with_shared_dependency(instance_factory, default_instance_g
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
pu = p.project_updates.first()
-TaskManager.start_task.assert_called_once_with(pu, default_instance_group, [pu])
+TaskManager.start_task.assert_called_once_with(pu, default_instance_group, [j1])
pu.finished = pu.created + timedelta(seconds=1)
pu.status = "successful"
pu.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_has_calls([mock.call(j1, ig1), mock.call(j2, ig2)])
+TaskManager.start_task.assert_called_once_with(j1, ig1, [])
j1.finished = j1.created + timedelta(seconds=2)
j1.status = "successful"
j1.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
pu = p.project_updates.last()
TaskManager.start_task.assert_called_once_with(pu, default_instance_group, [j2])
pu.finished = pu.created + timedelta(seconds=1)
pu.status = "successful"
pu.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j2, ig2, [])
@pytest.mark.django_db
@@ -114,8 +127,8 @@ def test_overcapacity_blocking_other_groups_unaffected(instance_factory, default
mock_task_impact.return_value = 500
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
-mock_job.assert_has_calls([mock.call(j1, ig1), mock.call(j1_1, ig1),
-mock.call(j2, ig2)])
+mock_job.assert_has_calls([mock.call(j1, ig1, []), mock.call(j1_1, ig1, []),
+mock.call(j2, ig2, [])])
assert mock_job.call_count == 3
@@ -146,5 +159,5 @@ def test_failover_group_run(instance_factory, default_instance_group, mocker,
mock_task_impact.return_value = 500
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
-mock_job.assert_has_calls([mock.call(j1, ig1), mock.call(j1_1, ig2)])
+mock_job.assert_has_calls([mock.call(j1, ig1, []), mock.call(j1_1, ig2, [])])
assert mock_job.call_count == 2


@@ -3,8 +3,13 @@ import mock
from datetime import timedelta, datetime
from django.core.cache import cache
from django.utils.timezone import now as tz_now
from awx.main.scheduler import TaskManager
from awx.main.models import (
Job,
Instance,
)
@pytest.mark.django_db
@@ -17,8 +22,7 @@ def test_single_job_scheduler_launch(default_instance_group, job_template_factor
j.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-assert TaskManager.start_task.called
-assert TaskManager.start_task.call_args == ((j, default_instance_group),)
+TaskManager.start_task.assert_called_once_with(j, default_instance_group, [])
@pytest.mark.django_db
@@ -34,12 +38,12 @@ def test_single_jt_multi_job_launch_blocks_last(default_instance_group, job_temp
j2.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_called_once_with(j1, default_instance_group)
+TaskManager.start_task.assert_called_once_with(j1, default_instance_group, [])
j1.status = "successful"
j1.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_called_once_with(j2, default_instance_group)
+TaskManager.start_task.assert_called_once_with(j2, default_instance_group, [])
@pytest.mark.django_db
@@ -60,8 +64,8 @@ def test_single_jt_multi_job_launch_allow_simul_allowed(default_instance_group,
j2.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_has_calls([mock.call(j1, default_instance_group),
-mock.call(j2, default_instance_group)])
+TaskManager.start_task.assert_has_calls([mock.call(j1, default_instance_group, []),
+mock.call(j2, default_instance_group, [])])
@pytest.mark.django_db
@@ -83,12 +87,12 @@ def test_multi_jt_capacity_blocking(default_instance_group, job_template_factory
mock_task_impact.return_value = 500
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
-mock_job.assert_called_once_with(j1, default_instance_group)
+mock_job.assert_called_once_with(j1, default_instance_group, [])
j1.status = "successful"
j1.save()
with mock.patch.object(TaskManager, "start_task", wraps=tm.start_task) as mock_job:
tm.schedule()
-mock_job.assert_called_once_with(j2, default_instance_group)
+mock_job.assert_called_once_with(j2, default_instance_group, [])
@@ -113,12 +117,12 @@ def test_single_job_dependencies_project_launch(default_instance_group, job_temp
mock_pu.assert_called_once_with(j)
pu = [x for x in p.project_updates.all()]
assert len(pu) == 1
-TaskManager.start_task.assert_called_once_with(pu[0], default_instance_group, [pu[0]])
+TaskManager.start_task.assert_called_once_with(pu[0], default_instance_group, [j])
pu[0].status = "successful"
pu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_called_once_with(j, default_instance_group)
+TaskManager.start_task.assert_called_once_with(j, default_instance_group, [])
@pytest.mark.django_db
@@ -143,12 +147,12 @@ def test_single_job_dependencies_inventory_update_launch(default_instance_group,
mock_iu.assert_called_once_with(j, ii)
iu = [x for x in ii.inventory_updates.all()]
assert len(iu) == 1
-TaskManager.start_task.assert_called_once_with(iu[0], default_instance_group, [iu[0]])
+TaskManager.start_task.assert_called_once_with(iu[0], default_instance_group, [j])
iu[0].status = "successful"
iu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_called_once_with(j, default_instance_group)
+TaskManager.start_task.assert_called_once_with(j, default_instance_group, [])
@pytest.mark.django_db
@@ -181,8 +185,8 @@ def test_shared_dependencies_launch(default_instance_group, job_template_factory
TaskManager().schedule()
pu = p.project_updates.first()
iu = ii.inventory_updates.first()
-TaskManager.start_task.assert_has_calls([mock.call(pu, default_instance_group, [pu, iu]),
-mock.call(iu, default_instance_group, [pu, iu])])
+TaskManager.start_task.assert_has_calls([mock.call(pu, default_instance_group, [iu, j1]),
+mock.call(iu, default_instance_group, [pu, j1])])
pu.status = "successful"
pu.finished = pu.created + timedelta(seconds=1)
pu.save()
@@ -191,12 +195,12 @@ def test_shared_dependencies_launch(default_instance_group, job_template_factory
iu.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_called_once_with(j1, default_instance_group)
+TaskManager.start_task.assert_called_once_with(j1, default_instance_group, [])
j1.status = "successful"
j1.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
-TaskManager.start_task.assert_called_once_with(j2, default_instance_group)
+TaskManager.start_task.assert_called_once_with(j2, default_instance_group, [])
pu = [x for x in p.project_updates.all()]
iu = [x for x in ii.inventory_updates.all()]
assert len(pu) == 1
@@ -215,18 +219,115 @@ def test_cleanup_interval():
assert cache.get('last_celery_task_cleanup') == last_cleanup
-@pytest.mark.django_db
-@mock.patch('awx.main.tasks._send_notification_templates')
-@mock.patch.object(TaskManager, 'get_active_tasks', lambda self: [[], []])
-@mock.patch.object(TaskManager, 'get_running_tasks')
-def test_cleanup_inconsistent_task(get_running_tasks, notify):
-orphaned_task = mock.Mock(job_explanation='')
-get_running_tasks.return_value = [orphaned_task]
-TaskManager().cleanup_inconsistent_celery_tasks()
-notify.assert_called_once_with(orphaned_task, 'failed')
-orphaned_task.websocket_emit_status.assert_called_once_with('failed')
-assert orphaned_task.status == 'failed'
-assert orphaned_task.job_explanation == (
-'Task was marked as running in Tower but was not present in Celery, so it has been marked as failed.'
-)
class TestReaper():
@pytest.fixture
def all_jobs(self, mocker):
now = tz_now()
Instance.objects.create(hostname='host1', capacity=100)
Instance.objects.create(hostname='host2', capacity=100)
Instance.objects.create(hostname='host3_split', capacity=100)
Instance.objects.create(hostname='host4_offline', capacity=0)
j1 = Job.objects.create(status='pending', execution_node='host1')
j2 = Job.objects.create(status='waiting', celery_task_id='considered_j2', execution_node='host1')
j3 = Job.objects.create(status='waiting', celery_task_id='considered_j3', execution_node='host1')
j3.modified = now - timedelta(seconds=60)
j3.save(update_fields=['modified'])
j4 = Job.objects.create(status='running', celery_task_id='considered_j4', execution_node='host1')
j5 = Job.objects.create(status='waiting', celery_task_id='reapable_j5', execution_node='host1')
j5.modified = now - timedelta(seconds=60)
j5.save(update_fields=['modified'])
j6 = Job.objects.create(status='waiting', celery_task_id='considered_j6', execution_node='host2')
j6.modified = now - timedelta(seconds=60)
j6.save(update_fields=['modified'])
j7 = Job.objects.create(status='running', celery_task_id='considered_j7', execution_node='host2')
j8 = Job.objects.create(status='running', celery_task_id='reapable_j7', execution_node='host2')
j9 = Job.objects.create(status='waiting', celery_task_id='host3_j8', execution_node='host3_split')
j9.modified = now - timedelta(seconds=60)
j9.save(update_fields=['modified'])
j10 = Job.objects.create(status='running', execution_node='host3_split')
j11 = Job.objects.create(status='running', celery_task_id='host4_j11', execution_node='host4_offline')
js = [j1, j2, j3, j4, j5, j6, j7, j8, j9, j10, j11]
for j in js:
j.save = mocker.Mock(wraps=j.save)
j.websocket_emit_status = mocker.Mock()
return js
@pytest.fixture
def considered_jobs(self, all_jobs):
return all_jobs[2:7] + [all_jobs[10]]
@pytest.fixture
def running_tasks(self, all_jobs):
return {
'host1': all_jobs[2:5],
'host2': all_jobs[5:8],
'host3_split': all_jobs[8:10],
'host4_offline': [all_jobs[10]],
}
@pytest.fixture
def reapable_jobs(self, all_jobs):
return [all_jobs[4], all_jobs[7], all_jobs[10]]
@pytest.fixture
def unconsidered_jobs(self, all_jobs):
return all_jobs[0:1] + all_jobs[5:7]
@pytest.fixture
def active_tasks(self):
return ([], {
'host1': ['considered_j2', 'considered_j3', 'considered_j4',],
'host2': ['considered_j6', 'considered_j7'],
})
@pytest.mark.django_db
@mock.patch('awx.main.tasks._send_notification_templates')
@mock.patch.object(TaskManager, 'get_active_tasks', lambda self: ([], []))
def test_cleanup_inconsistent_task(self, notify, active_tasks, considered_jobs, reapable_jobs, running_tasks, mocker):
tm = TaskManager()
tm.get_running_tasks = mocker.Mock(return_value=running_tasks)
tm.get_active_tasks = mocker.Mock(return_value=active_tasks)
tm.cleanup_inconsistent_celery_tasks()
for j in considered_jobs:
if j not in reapable_jobs:
j.save.assert_not_called()
assert notify.call_count == 3
notify.assert_has_calls([mock.call(j, 'failed') for j in reapable_jobs], any_order=True)
for j in reapable_jobs:
j.websocket_emit_status.assert_called_once_with('failed')
assert j.status == 'failed'
assert j.job_explanation == (
'Task was marked as running in Tower but was not present in Celery, so it has been marked as failed.'
)
@pytest.mark.django_db
def test_get_running_tasks(self, all_jobs):
tm = TaskManager()
# Ensure the query grabs the expected jobs
execution_nodes_jobs = tm.get_running_tasks()
assert 'host1' in execution_nodes_jobs
assert 'host2' in execution_nodes_jobs
assert 'host3_split' in execution_nodes_jobs
assert all_jobs[2] in execution_nodes_jobs['host1']
assert all_jobs[3] in execution_nodes_jobs['host1']
assert all_jobs[4] in execution_nodes_jobs['host1']
assert all_jobs[5] in execution_nodes_jobs['host2']
assert all_jobs[6] in execution_nodes_jobs['host2']
assert all_jobs[7] in execution_nodes_jobs['host2']
assert all_jobs[8] in execution_nodes_jobs['host3_split']
assert all_jobs[9] in execution_nodes_jobs['host3_split']
assert all_jobs[10] in execution_nodes_jobs['host4_offline']
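The TestReaper fixtures above encode one rule: a job its node reports as running is reapable when the broker's active-task list does not contain its task id, or when its node is offline. A hedged standalone sketch of that rule (function and structures are illustrative, not the TaskManager API):

```python
# Minimal sketch of the reaping decision: compare each node's running
# task ids against the broker's active ids for that node.
def find_reapable(running_by_node, active_by_node, offline_nodes):
    reapable = []
    for node, task_ids in running_by_node.items():
        active = set(active_by_node.get(node, []))
        for task_id in task_ids:
            # Offline node, or running but unknown to the broker: reap.
            if node in offline_nodes or task_id not in active:
                reapable.append(task_id)
    return reapable

running = {'host1': ['t1', 't2'], 'host4': ['t9']}
active = {'host1': ['t1']}
assert find_reapable(running, active, {'host4'}) == ['t2', 't9']
```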


@@ -72,6 +72,7 @@ def test_cloud_kind_uniqueness():
({'fields': [{'id': 'ssh_key', 'label': 'SSH Key', 'type': 'string', 'format': 'ssh_private_key'}]}, True),  # noqa
({'fields': [{'id': 'flag', 'label': 'Some Flag', 'type': 'boolean'}]}, True),
({'fields': [{'id': 'flag', 'label': 'Some Flag', 'type': 'boolean', 'choices': ['a', 'b']}]}, False),
({'fields': [{'id': 'flag', 'label': 'Some Flag', 'type': 'boolean', 'secret': True}]}, False),
({'fields': [{'id': 'certificate', 'label': 'Cert', 'multiline': True}]}, True),
({'fields': [{'id': 'certificate', 'label': 'Cert', 'multiline': True, 'type': 'boolean'}]}, False),  # noqa
({'fields': [{'id': 'certificate', 'label': 'Cert', 'multiline': 'bad'}]}, False),  # noqa


@@ -87,7 +87,8 @@ class TestInstanceGroupOrdering:
inventory_source.inventory.instance_groups.add(ig_inv)
assert iu.preferred_instance_groups == [ig_inv, ig_org]
inventory_source.instance_groups.add(ig_tmp)
-assert iu.preferred_instance_groups == [ig_tmp, ig_inv, ig_org]
+# API does not allow setting IGs on inventory source, so ignore those
+assert iu.preferred_instance_groups == [ig_inv, ig_org]
def test_project_update_instance_groups(self, instance_group_factory, project, default_instance_group):
pu = ProjectUpdate.objects.create(project=project)


@@ -3,7 +3,8 @@ import pytest
from awx.main.models import (
Host,
CustomInventoryScript,
-Schedule
+Schedule,
+AdHocCommand
)
from awx.main.access import (
InventoryAccess,
@@ -11,10 +12,19 @@ from awx.main.access import (
HostAccess,
InventoryUpdateAccess,
CustomInventoryScriptAccess,
-ScheduleAccess
+ScheduleAccess,
+StateConflict
)
@pytest.mark.django_db
def test_running_job_protection(inventory, admin_user):
AdHocCommand.objects.create(inventory=inventory, status='running')
access = InventoryAccess(admin_user)
with pytest.raises(StateConflict):
access.can_delete(inventory)
@pytest.mark.django_db
def test_custom_inv_script_access(organization, user):
u = user('user', False)
@@ -93,6 +103,20 @@ def test_inventory_update_org_admin(inventory_update, org_admin):
assert access.can_delete(inventory_update)
@pytest.mark.parametrize("role_field,allowed", [
(None, False),
('admin_role', True),
('update_role', False),
('adhoc_role', False),
('use_role', False)
])
@pytest.mark.django_db
def test_inventory_source_delete(inventory_source, alice, role_field, allowed):
if role_field:
getattr(inventory_source.inventory, role_field).members.add(alice)
assert allowed == InventorySourceAccess(alice).can_delete(inventory_source), '{} test failed'.format(role_field)
# See companion test in tests/functional/api/test_inventory.py::test_inventory_update_access_called # See companion test in tests/functional/api/test_inventory.py::test_inventory_update_access_called
@pytest.mark.parametrize("role_field,allowed", [ @pytest.mark.parametrize("role_field,allowed", [
(None, False), (None, False),
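The deletion-protection pattern tested above — raising `StateConflict` instead of returning `False` when a running job blocks a delete — can be sketched outside AWX. This is a minimal illustration, not the real AWX implementation; the class shape and the `running_jobs` field are hypothetical stand-ins:

```python
class StateConflict(Exception):
    """Deletion is permitted RBAC-wise but blocked by active related jobs."""


class InventoryAccess:
    # Hypothetical stand-in for AWX's access class: `running_jobs` models the
    # AdHocCommand with status='running' created in the test above.
    def __init__(self, user, running_jobs=0):
        self.user = user
        self.running_jobs = running_jobs

    def can_delete(self, inventory):
        if self.running_jobs:
            # Raise rather than return False so the API layer can report a
            # conflict (409) instead of a permission error (403).
            raise StateConflict('Resource is being used by running jobs')
        return True
```

The point of the distinct exception is that the caller can tell "you may never do this" apart from "you may not do this right now."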

View File

@@ -149,7 +149,7 @@ class TestJobRelaunchAccess:
         assert not inventory_user.can_access(Job, 'start', job_with_links, validate_license=False)
 
     def test_job_relaunch_extra_credential_access(
-            self, post, inventory, project, credential, net_credential):
+            self, inventory, project, credential, net_credential):
         jt = JobTemplate.objects.create(name='testjt', inventory=inventory, project=project)
         jt.extra_credentials.add(credential)
         job = jt.create_unified_job()
@@ -164,6 +164,45 @@ class TestJobRelaunchAccess:
         job.extra_credentials.add(net_credential)
         assert not jt_user.can_access(Job, 'start', job, validate_license=False)
 
+    def test_prompted_extra_credential_relaunch_denied(
+            self, inventory, project, net_credential, rando):
+        jt = JobTemplate.objects.create(
+            name='testjt', inventory=inventory, project=project,
+            ask_credential_on_launch=True)
+        job = jt.create_unified_job()
+        jt.execute_role.members.add(rando)
+        # Job has prompted extra_credential, rando lacks permission to use it
+        job.extra_credentials.add(net_credential)
+        assert not rando.can_access(Job, 'start', job, validate_license=False)
+
+    def test_prompted_extra_credential_relaunch_allowed(
+            self, inventory, project, net_credential, rando):
+        jt = JobTemplate.objects.create(
+            name='testjt', inventory=inventory, project=project,
+            ask_credential_on_launch=True)
+        job = jt.create_unified_job()
+        jt.execute_role.members.add(rando)
+        # Job has prompted extra_credential, but rando can use it
+        net_credential.use_role.members.add(rando)
+        job.extra_credentials.add(net_credential)
+        assert rando.can_access(Job, 'start', job, validate_license=False)
+
+    def test_extra_credential_relaunch_recreation_permission(
+            self, inventory, project, net_credential, credential, rando):
+        jt = JobTemplate.objects.create(
+            name='testjt', inventory=inventory, project=project,
+            credential=credential, ask_credential_on_launch=True)
+        job = jt.create_unified_job()
+        project.admin_role.members.add(rando)
+        inventory.admin_role.members.add(rando)
+        credential.admin_role.members.add(rando)
+        # Relaunch blocked by the extra credential
+        job.extra_credentials.add(net_credential)
+        assert not rando.can_access(Job, 'start', job, validate_license=False)
+
 
 @pytest.mark.django_db
 class TestJobAndUpdateCancels:

View File

@@ -96,6 +96,25 @@ def test_job_template_access_org_admin(jt_linked, rando):
     assert access.can_delete(jt_linked)
 
 
+@pytest.mark.django_db
+def test_job_template_extra_credentials_prompts_access(
+        rando, post, inventory, project, machine_credential, vault_credential):
+    jt = JobTemplate.objects.create(
+        name = 'test-jt',
+        project = project,
+        playbook = 'helloworld.yml',
+        inventory = inventory,
+        credential = machine_credential,
+        ask_credential_on_launch = True
+    )
+    jt.execute_role.members.add(rando)
+    r = post(
+        reverse('api:job_template_launch', kwargs={'version': 'v2', 'pk': jt.id}),
+        {'vault_credential': vault_credential.pk}, rando
+    )
+    assert r.status_code == 403
+
+
 @pytest.mark.django_db
 class TestJobTemplateCredentials:

View File

@@ -7,6 +7,8 @@ from awx.main.access import (
     # WorkflowJobNodeAccess
 )
 
+from awx.main.models import InventorySource
+
 
 @pytest.fixture
 def wfjt(workflow_job_template_factory, organization):
@@ -102,6 +104,10 @@ class TestWorkflowJobAccess:
         access = WorkflowJobAccess(rando)
         assert access.can_cancel(workflow_job)
 
+
+@pytest.mark.django_db
+class TestWFJTCopyAccess:
+
     def test_copy_permissions_org_admin(self, wfjt, org_admin, org_member):
         admin_access = WorkflowJobTemplateAccess(org_admin)
         assert admin_access.can_copy(wfjt)
@@ -126,6 +132,20 @@ class TestWorkflowJobAccess:
         warnings = access.messages
         assert 'inventories_unable_to_copy' in warnings
 
+    def test_workflow_copy_no_start(self, wfjt, inventory, admin_user):
+        # Test that un-startable resource doesn't block copy
+        inv_src = InventorySource.objects.create(
+            inventory = inventory,
+            source = 'custom',
+            source_script = None
+        )
+        assert not inv_src.can_update
+        wfjt.workflow_job_template_nodes.create(unified_job_template=inv_src)
+        access = WorkflowJobTemplateAccess(admin_user, save_messages=True)
+        access.can_copy(wfjt)
+        assert not access.messages
+
     def test_workflow_copy_warnings_jt(self, wfjt, rando, job_template):
         wfjt.workflow_job_template_nodes.create(unified_job_template=job_template)
         access = WorkflowJobTemplateAccess(rando, save_messages=True)

View File

@@ -6,7 +6,7 @@ from django.utils.timezone import now, timedelta
 
 from awx.main.tasks import (
     RunProjectUpdate, RunInventoryUpdate,
-    tower_isolated_heartbeat,
+    awx_isolated_heartbeat,
     isolated_manager
 )
 from awx.main.models import (
@@ -121,7 +121,7 @@ class TestIsolatedManagementTask:
         original_isolated_instance = needs_updating.instances.all().first()
         with mock.patch('awx.main.tasks.settings', MockSettings()):
             with mock.patch.object(isolated_manager.IsolatedManager, 'health_check') as check_mock:
-                tower_isolated_heartbeat()
+                awx_isolated_heartbeat()
         iso_instance = Instance.objects.get(hostname='isolated')
         call_args, _ = check_mock.call_args
         assert call_args[0][0] == iso_instance
@@ -131,7 +131,7 @@ class TestIsolatedManagementTask:
     def test_does_not_take_action(self, control_instance, just_updated):
         with mock.patch('awx.main.tasks.settings', MockSettings()):
             with mock.patch.object(isolated_manager.IsolatedManager, 'health_check') as check_mock:
-                tower_isolated_heartbeat()
+                awx_isolated_heartbeat()
         iso_instance = Instance.objects.get(hostname='isolated')
         check_mock.assert_not_called()
         assert iso_instance.capacity == 103
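The heartbeat tests above verify call behavior by patching `IsolatedManager.health_check` with `mock.patch.object`. A minimal self-contained version of that pattern — the `heartbeat` function and class here are hypothetical stand-ins, not the AWX task — looks like:

```python
from unittest import mock


class IsolatedManager:
    def health_check(self, instances):
        # The real implementation would contact each isolated node;
        # tests patch this method out entirely.
        raise RuntimeError('should be patched out in tests')


def heartbeat(manager, needs_update):
    # Only run health checks for instances that are actually due for one.
    if needs_update:
        manager.health_check(needs_update)


# Due instance: the patched method must be called with it.
with mock.patch.object(IsolatedManager, 'health_check') as check_mock:
    heartbeat(IsolatedManager(), ['isolated'])
    check_mock.assert_called_once_with(['isolated'])

# Nothing due: the patched method must not be called at all.
with mock.patch.object(IsolatedManager, 'health_check') as check_mock:
    heartbeat(IsolatedManager(), [])
    check_mock.assert_not_called()
```

Note that patching on the class (rather than an instance) means every instance created inside the `with` block sees the mock, which is why the tests above can construct instances freely and still assert on `check_mock`.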

Some files were not shown because too many files have changed in this diff.