merge and resolve conflict

Dmytro Makovey
2018-09-18 11:35:35 -07:00
912 changed files with 56525 additions and 35552 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,41 @@
---
name: "\U0001F41B Bug report"
about: Create a report to help us improve
---
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- API
- UI
- Installer
##### SUMMARY
<!-- Briefly describe the problem. -->
##### ENVIRONMENT
* AWX version: X.Y.Z
* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker
* Ansible version: X.Y.Z
* Operating System:
* Web Browser:
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!-- What actually happened? -->
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->


@@ -0,0 +1,22 @@
---
name: "✨ Feature request"
about: Suggest an idea for this project
---
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!-- Pick the area of AWX for this issue, you can have multiple, delete the rest: -->
- API
- UI
- Installer
##### SUMMARY
<!-- Briefly describe the problem or desired enhancement. -->
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->


@@ -0,0 +1,9 @@
---
name: "\U0001F525 Security bug report"
about: How to report security vulnerabilities
---
For all security related bugs, email security@ansible.com instead of using this issue tracker and you will receive a prompt response.
For more information on the Ansible community's practices regarding responsible disclosure, see https://www.ansible.com/security

.gitignore

@@ -67,6 +67,7 @@ pep8.txt
 scratch
 testem.log
 awx/awx_test.sqlite3-journal
+.pytest_cache/
 # Mac OS X
 *.DS_Store


@@ -6,14 +6,14 @@ Have questions about this document or anything not covered here? Come chat with
 ## Table of contents
-* [Things to know prior to submitting code](#things-to-know-before-contributing-code)
+* [Things to know prior to submitting code](#things-to-know-prior-to-contributing-code)
 * [Setting up your development environment](#setting-up-your-development-environment)
 * [Prerequisites](#prerequisites)
 * [Docker](#docker)
 * [Docker compose](#docker-compose)
 * [Node and npm](#node-and-npm)
-* [Building the environment](#building-the-environment)
-* [Clone the AWX repo](#clone-the-awx-repo)
+* [Build the environment](#build-the-environment)
+* [Fork and clone the AWX repo](#fork-and-clone-the-awx-repo)
 * [Create local settings](#create-local-settings)
 * [Build the base image](#build-the-base-image)
 * [Build the user interface](#build-the-user-interface)


@@ -13,7 +13,7 @@ This tool does __not__ support export/import of the following:
 In terminal, pip install tower-cli (if you do not have pip already, install [here](https://pip.pypa.io/en/stable/installing/)):
 ```
-$ pip install ansible-tower-cli
+$ pip install --upgrade ansible-tower-cli
 ```
 The AWX host URL, user, and password must be set for the AWX instance to be exported:


@@ -62,8 +62,8 @@ Before you can run a deployment, you'll need the following installed in your loc
 - [docker-py](https://github.com/docker/docker-py) Python module
 - [GNU Make](https://www.gnu.org/software/make/)
 - [Git](https://git-scm.com/) Requires Version 1.8.4+
-- [Node 6.x LTS version](https://nodejs.org/en/download/)
-- [NPM 3.x LTS](https://docs.npmjs.com/)
+- [Node 8.x LTS version](https://nodejs.org/en/download/)
+- [NPM 6.x LTS](https://docs.npmjs.com/)
 ### System Requirements
@@ -318,7 +318,7 @@ The default resource requests per-pod requires:
 > Memory: 6GB
 > CPU: 3 cores
-This can be tuned by overriding the variables found in [/installer/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
+This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
 For more detail on how resource requests are formed see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
@@ -404,7 +404,7 @@ If you're installing using Docker Compose, you'll need [Docker Compose](https://
 #### Deploying to a remote host
-By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible; however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file, by commenting out `localhost`, and adding the remote host.
+By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file, by commenting out `localhost`, and adding the remote host.
 For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:


@@ -30,6 +30,8 @@ DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering
 # Comma separated list
 SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
+CURWD = $(shell pwd)
 # Determine appropriate shasum command
 UNAME_S := $(shell uname -s)
 ifeq ($(UNAME_S),Linux)
@@ -219,7 +221,7 @@ init:
 if [ "$(AWX_GROUP_QUEUES)" == "tower,thepentagon" ]; then \
 	$(MANAGEMENT_COMMAND) provision_instance --hostname=isolated; \
 	$(MANAGEMENT_COMMAND) register_queue --queuename='thepentagon' --hostnames=isolated --controller=tower; \
-	$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat > /root/.ssh/authorized_keys'; \
+	$(MANAGEMENT_COMMAND) generate_isolated_key | ssh -o "StrictHostKeyChecking no" root@isolated 'cat >> /root/.ssh/authorized_keys'; \
 fi;
 # Refresh development environment after pulling new code.
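The change from `cat >` to `cat >>` in this hunk means re-running the target appends the isolated node's key instead of wiping `authorized_keys`. A minimal sketch of the difference, using Python file modes as stand-ins for the two shell redirections (the key strings and temp file are placeholders):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, 'w') as f:   # 'w' mirrors the old '>': truncate, then write
    f.write('ssh-rsa AAAA... existing@host\n')
with open(path, 'a') as f:   # 'a' mirrors the new '>>': append, keep line 1
    f.write('ssh-rsa BBBB... isolated@awx\n')

with open(path) as f:
    lines = f.readlines()
os.remove(path)

print(len(lines))  # 2: a second 'w' write would have left only 1 line
```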
@@ -273,7 +275,7 @@ supervisor:
 	supervisord --configuration /supervisor.conf --pidfile=/tmp/supervisor_pid
 # Alternate approach to tmux to run all development tasks specified in
-# Procfile. https://youtu.be/OPMgaibszjk
+# Procfile.
 honcho:
 	@if [ "$(VENV_BASE)" ]; then \
 		. $(VENV_BASE)/awx/bin/activate; \
@@ -372,7 +374,7 @@ awx-link:
 	sed -i "s/placeholder/$(shell git describe --long | sed 's/\./\\./g')/" /awx_devel/awx.egg-info/PKG-INFO
 	cp /tmp/awx.egg-link /venv/awx/lib/python2.7/site-packages/awx.egg-link
-TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests awx/network_ui/tests/unit
+TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
 # Run all API unit tests.
 test:
@@ -380,6 +382,7 @@ test:
 		. $(VENV_BASE)/awx/bin/activate; \
 	fi; \
 	py.test -n auto $(TEST_DIRS)
+	awx-manage check_migrations --dry-run --check -n 'vNNN_missing_migration_file'
 test_combined: test_ansible test
@@ -387,7 +390,7 @@ test_unit:
 	@if [ "$(VENV_BASE)" ]; then \
 		. $(VENV_BASE)/awx/bin/activate; \
 	fi; \
-	py.test awx/main/tests/unit awx/conf/tests/unit awx/sso/tests/unit awx/network_ui/tests/unit
+	py.test awx/main/tests/unit awx/conf/tests/unit awx/sso/tests/unit
 test_ansible:
 	@if [ "$(VENV_BASE)" ]; then \
@@ -560,7 +563,7 @@ docker-isolated:
 	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml create
 	docker start tools_awx_1
 	docker start tools_isolated_1
-	echo "__version__ = '`python setup.py --version`'" | docker exec -i tools_isolated_1 /bin/bash -c "cat > /venv/awx/lib/python2.7/site-packages/awx.py"
+	echo "__version__ = '`git describe --long | cut -d - -f 1-1`'" | docker exec -i tools_isolated_1 /bin/bash -c "cat > /venv/awx/lib/python2.7/site-packages/awx.py"
 	if [ "`docker exec -i -t tools_isolated_1 cat /root/.ssh/authorized_keys`" == "`docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub`" ]; then \
 		echo "SSH keys already copied to isolated instance"; \
 	else \
@@ -607,6 +610,10 @@ docker-compose-elk: docker-auth
 docker-compose-cluster-elk: docker-auth
 	TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
+minishift-dev:
+	ansible-playbook -i localhost, -e devtree_directory=$(CURWD) tools/clusterdevel/start_minishift_dev.yml
 clean-elk:
 	docker stop tools_kibana_1
 	docker stop tools_logstash_1


@@ -23,7 +23,7 @@ Contributing
 Reporting Issues
 ----------------
-If you're experiencing a problem, we encourage you to open an issue, and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).
+If you're experiencing a problem that you feel is a bug in AWX, or have ideas for how to improve AWX, we encourage you to open an issue, and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).
 Code of Conduct
 ---------------
@@ -33,11 +33,10 @@ We ask all of our community members and contributors to adhere to the [Ansible c
 Get Involved
 ------------
-We welcome your feedback and ideas. Here's how to reach us:
+We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
 - Join the `#ansible-awx` channel on irc.freenode.net
 - Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
-- [Open an Issue](https://github.com/ansible/awx/issues)
 License
 -------


@@ -7,11 +7,18 @@ import sys
 import warnings
 from pkg_resources import get_distribution
-from .celery import app as celery_app  # noqa
 __version__ = get_distribution('awx').version
-__all__ = ['__version__', 'celery_app']
+__all__ = ['__version__']
+
+# Isolated nodes do not have celery installed
+try:
+    from .celery import app as celery_app  # noqa
+    __all__.append('celery_app')
+except ImportError:
+    pass
 # Check for the presence/absence of "devonly" module to determine if running
 # from a source code checkout or release packaage.


@@ -11,7 +11,7 @@ from django.utils.encoding import smart_text
 # Django REST Framework
 from rest_framework import authentication
-# Django OAuth Toolkit
+# Django-OAuth-Toolkit
 from oauth2_provider.contrib.rest_framework import OAuth2Authentication
 logger = logging.getLogger('awx.api.authentication')
@@ -25,7 +25,7 @@ class LoggedBasicAuthentication(authentication.BasicAuthentication):
 ret = super(LoggedBasicAuthentication, self).authenticate(request)
 if ret:
     username = ret[0].username if ret[0] else '<none>'
-    logger.debug(smart_text(u"User {} performed a {} to {} through the API".format(username, request.method, request.path)))
+    logger.info(smart_text(u"User {} performed a {} to {} through the API".format(username, request.method, request.path)))
 return ret

 def authenticate_header(self, request):
@@ -39,9 +39,6 @@ class SessionAuthentication(authentication.SessionAuthentication):
 def authenticate_header(self, request):
     return 'Session'
-
-def enforce_csrf(self, request):
-    return None

 class LoggedOAuth2Authentication(OAuth2Authentication):
@@ -50,8 +47,8 @@ class LoggedOAuth2Authentication(OAuth2Authentication):
 if ret:
     user, token = ret
     username = user.username if user else '<none>'
-    logger.debug(smart_text(
-        u"User {} performed a {} to {} through the API using OAuth token {}.".format(
+    logger.info(smart_text(
+        u"User {} performed a {} to {} through the API using OAuth 2 token {}.".format(
             username, request.method, request.path, token.pk
         )
     ))


@@ -47,3 +47,15 @@ register(
     category=_('Authentication'),
     category_slug='authentication',
 )
+register(
+    'ALLOW_OAUTH2_FOR_EXTERNAL_USERS',
+    field_class=fields.BooleanField,
+    default=False,
+    label=_('Allow External Users to Create OAuth2 Tokens'),
+    help_text=_('For security reasons, users from external auth providers (LDAP, SAML, '
+                'SSO, Radius, and others) are not allowed to create OAuth2 tokens. '
+                'To change this behavior, enable this setting. Existing tokens will '
+                'not be deleted when this setting is toggled off.'),
+    category=_('Authentication'),
+    category_slug='authentication',
+)


@@ -12,7 +12,11 @@ class ActiveJobConflict(ValidationError):
 status_code = 409

 def __init__(self, active_jobs):
-    super(ActiveJobConflict, self).__init__({
+    # During APIException.__init__(), Django Rest Framework
+    # turn everything in self.detail into string by using force_text.
+    # Declare detail afterwards circumvent this behavior.
+    super(ActiveJobConflict, self).__init__()
+    self.detail = {
         "error": _("Resource is being used by running jobs."),
         "active_jobs": active_jobs
-    })
+    }
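Why assigning `detail` after `__init__()` matters: DRF's `APIException.__init__` coerces the detail payload into strings. A pure-Python sketch of the behavior being worked around (the `BaseError` class is a hand-rolled stand-in mimicking that coercion, not DRF itself):

```python
class BaseError(Exception):
    """Stand-in for an exception base class that stringifies its detail."""
    def __init__(self, detail=None):
        # Mimics the force_text pass: every value becomes a string.
        if isinstance(detail, dict):
            detail = {k: str(v) for k, v in detail.items()}
        self.detail = detail

active_jobs = [{'id': 1, 'type': 'job'}]

class NaiveConflict(BaseError):
    def __init__(self):
        # Passing the dict through __init__ lets the base class mangle it.
        super().__init__({'active_jobs': active_jobs})

class FixedConflict(BaseError):
    def __init__(self):
        # Call __init__ first, then assign detail to skip the coercion.
        super().__init__()
        self.detail = {'active_jobs': active_jobs}

print(type(NaiveConflict().detail['active_jobs']))  # str: the list was flattened
print(type(FixedConflict().detail['active_jobs']))  # list: structure preserved
```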


@@ -97,7 +97,7 @@ class DeprecatedCredentialField(serializers.IntegerField):
 kwargs['allow_null'] = True
 kwargs['default'] = None
 kwargs['min_value'] = 1
-kwargs['help_text'] = 'This resource has been deprecated and will be removed in a future release'
+kwargs.setdefault('help_text', 'This resource has been deprecated and will be removed in a future release')
 super(DeprecatedCredentialField, self).__init__(**kwargs)

 def to_internal_value(self, pk):
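Switching to `kwargs.setdefault` means a caller-supplied `help_text` is no longer clobbered by the deprecation notice. A quick sketch of the difference (`make_field` is an illustrative helper, not AWX code):

```python
def make_field(**kwargs):
    # setdefault writes the key only when the caller did not supply one,
    # so an explicit help_text from the call site now wins.
    kwargs.setdefault('help_text', 'This resource has been deprecated')
    return kwargs

print(make_field()['help_text'])                    # the default notice
print(make_field(help_text='custom')['help_text'])  # the caller's value
```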


@@ -4,6 +4,7 @@
 # Python
 import re
 import json
+from functools import reduce

 # Django
 from django.core.exceptions import FieldError, ValidationError
@@ -238,7 +239,11 @@ class FieldLookupBackend(BaseFilterBackend):
 or_filters = []
 chain_filters = []
 role_filters = []
-search_filters = []
+search_filters = {}
+# Can only have two values: 'AND', 'OR'
+# If 'AND' is used, an iterm must satisfy all condition to show up in the results.
+# If 'OR' is used, an item just need to satisfy one condition to appear in results.
+search_filter_relation = 'OR'
 for key, values in request.query_params.lists():
     if key in self.RESERVED_NAMES:
         continue
@@ -262,11 +267,13 @@ class FieldLookupBackend(BaseFilterBackend):
 # Search across related objects.
 if key.endswith('__search'):
+    if values and ',' in values[0]:
+        search_filter_relation = 'AND'
+        values = reduce(lambda list1, list2: list1 + list2, [i.split(',') for i in values])
     for value in values:
         search_value, new_keys = self.value_to_python(queryset.model, key, force_text(value))
         assert isinstance(new_keys, list)
-        for new_key in new_keys:
-            search_filters.append((new_key, search_value))
+        search_filters[search_value] = new_keys
     continue
 # Custom chain__ and or__ filters, mutually exclusive (both can
@@ -355,11 +362,18 @@ class FieldLookupBackend(BaseFilterBackend):
 else:
     q |= Q(**{k:v})
 args.append(q)
-if search_filters:
+if search_filters and search_filter_relation == 'OR':
     q = Q()
-    for k,v in search_filters:
-        q |= Q(**{k:v})
+    for term, constrains in search_filters.iteritems():
+        for constrain in constrains:
+            q |= Q(**{constrain: term})
     args.append(q)
+elif search_filters and search_filter_relation == 'AND':
+    for term, constrains in search_filters.iteritems():
+        q_chain = Q()
+        for constrain in constrains:
+            q_chain |= Q(**{constrain: term})
+        queryset = queryset.filter(q_chain)
 for n,k,v in chain_filters:
     if n:
         q = ~Q(**{k:v})
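The comma handling above can be exercised without Django: this sketch reproduces just the relation detection and the `reduce`-based flattening of comma-separated `__search` terms (the function name is illustrative, not from the commit):

```python
from functools import reduce

def split_search_values(values):
    """Mirror of the hunk above: a comma in the first value switches the
    relation to AND and flattens every value on commas."""
    relation = 'OR'
    if values and ',' in values[0]:
        relation = 'AND'
        values = reduce(lambda list1, list2: list1 + list2,
                        [i.split(',') for i in values])
    return relation, values

print(split_search_values(['foo']))             # ('OR', ['foo'])
print(split_search_values(['foo,bar', 'baz']))  # ('AND', ['foo', 'bar', 'baz'])
```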


@@ -23,14 +23,14 @@ from django.utils.translation import ugettext_lazy as _
 from django.contrib.auth import views as auth_views

 # Django REST Framework
-from rest_framework.authentication import get_authorization_header
-from rest_framework.exceptions import PermissionDenied, AuthenticationFailed, ParseError
+from rest_framework.exceptions import PermissionDenied, AuthenticationFailed, ParseError, NotAcceptable, UnsupportedMediaType
 from rest_framework import generics
 from rest_framework.response import Response
 from rest_framework import status
 from rest_framework import views
 from rest_framework.permissions import AllowAny
-from rest_framework.renderers import JSONRenderer
+from rest_framework.renderers import StaticHTMLRenderer, JSONRenderer
+from rest_framework.negotiation import DefaultContentNegotiation

 # cryptography
 from cryptography.fernet import InvalidToken
@@ -64,21 +64,36 @@ analytics_logger = logging.getLogger('awx.analytics.performance')
 class LoggedLoginView(auth_views.LoginView):

+    def get(self, request, *args, **kwargs):
+        # The django.auth.contrib login form doesn't perform the content
+        # negotiation we've come to expect from DRF; add in code to catch
+        # situations where Accept != text/html (or */*) and reply with
+        # an HTTP 406
+        try:
+            DefaultContentNegotiation().select_renderer(
+                request,
+                [StaticHTMLRenderer],
+                'html'
+            )
+        except NotAcceptable:
+            resp = Response(status=status.HTTP_406_NOT_ACCEPTABLE)
+            resp.accepted_renderer = StaticHTMLRenderer()
+            resp.accepted_media_type = 'text/plain'
+            resp.renderer_context = {}
+            return resp
+        return super(LoggedLoginView, self).get(request, *args, **kwargs)

     def post(self, request, *args, **kwargs):
-        original_user = getattr(request, 'user', None)
         ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
         current_user = getattr(request, 'user', None)
-        if current_user and getattr(current_user, 'pk', None) and current_user != original_user:
-            logger.info("User {} logged in.".format(current_user.username))
         if request.user.is_authenticated:
-            logger.info(smart_text(u"User {} logged in".format(self.request.user.username)))
+            logger.info(smart_text(u"User {} logged in.".format(self.request.user.username)))
             ret.set_cookie('userLoggedIn', 'true')
             current_user = UserSerializer(self.request.user)
             current_user = JSONRenderer().render(current_user.data)
             current_user = urllib.quote('%s' % current_user, '')
             ret.set_cookie('current_user', current_user)
             return ret
         else:
             ret.status_code = 401
@@ -175,9 +190,13 @@ class APIView(views.APIView):
     request.drf_request_user = getattr(drf_request, 'user', False)
 except AuthenticationFailed:
     request.drf_request_user = None
-except ParseError as exc:
+except (PermissionDenied, ParseError) as exc:
     request.drf_request_user = None
     self.__init_request_error__ = exc
+except UnsupportedMediaType as exc:
+    exc.detail = _('You did not use correct Content-Type in your HTTP request. '
+                   'If you are using our REST API, the Content-Type must be application/json')
+    self.__init_request_error__ = exc
 return drf_request

 def finalize_response(self, request, response, *args, **kwargs):
@@ -190,6 +209,7 @@ class APIView(views.APIView):
 if hasattr(self, '__init_request_error__'):
     response = self.handle_exception(self.__init_request_error__)
     if response.status_code == 401:
+        response.data['detail'] += ' To establish a login session, visit /api/login/.'
         logger.info(status_msg)
     else:
         logger.warn(status_msg)
@@ -208,26 +228,35 @@ class APIView(views.APIView):
 return response

 def get_authenticate_header(self, request):
-    """
-    Determine the WWW-Authenticate header to use for 401 responses. Try to
-    use the request header as an indication for which authentication method
-    was attempted.
-    """
-    for authenticator in self.get_authenticators():
-        resp_hdr = authenticator.authenticate_header(request)
-        if not resp_hdr:
-            continue
-        req_hdr = get_authorization_header(request)
-        if not req_hdr:
-            continue
-        if resp_hdr.split()[0] and resp_hdr.split()[0] == req_hdr.split()[0]:
-            return resp_hdr
-    # If it can't be determined from the request, use the last
-    # authenticator (should be Basic).
-    try:
-        return authenticator.authenticate_header(request)
-    except NameError:
-        pass
+    # HTTP Basic auth is insecure by default, because the basic auth
+    # backend does not provide CSRF protection.
+    #
+    # If you visit `/api/v2/job_templates/` and we return
+    # `WWW-Authenticate: Basic ...`, your browser will prompt you for an
+    # HTTP basic auth username+password and will store it _in the browser_
+    # for subsequent requests. Because basic auth does not require CSRF
+    # validation (because it's commonly used with e.g., tower-cli and other
+    # non-browser clients), browsers that save basic auth in this way are
+    # vulnerable to cross-site request forgery:
+    #
+    # 1. Visit `/api/v2/job_templates/` and specify a user+pass for basic auth.
+    # 2. Visit a nefarious website and submit a
+    #    `<form action='POST' method='https://tower.example.org/api/v2/job_templates/N/launch/'>`
+    # 3. The browser will use your persisted user+pass and your login
+    #    session is effectively hijacked.
+    #
+    # To prevent this, we will _no longer_ send `WWW-Authenticate: Basic ...`
+    # headers in responses; this means that unauthenticated /api/v2/... requests
+    # will now return HTTP 401 in-browser, rather than popping up an auth dialog.
+    #
+    # This means that people who wish to use the interactive API browser
+    # must _first_ login in via `/api/login/` to establish a session (which
+    # _does_ enforce CSRF).
+    #
+    # CLI users can _still_ specify basic auth credentials explicitly via
+    # a header or in the URL e.g.,
+    # `curl https://user:pass@tower.example.org/api/v2/job_templates/N/launch/`
+    return 'Bearer realm=api authorization_url=/api/o/authorize/'
 def get_view_description(self, html=False):
     """
@@ -298,6 +327,12 @@ class APIView(views.APIView):
     kwargs.pop('version')
     return super(APIView, self).dispatch(request, *args, **kwargs)

+def check_permissions(self, request):
+    if request.method not in ('GET', 'OPTIONS', 'HEAD'):
+        if 'write' not in getattr(request.user, 'oauth_scopes', ['write']):
+            raise PermissionDenied()
+    return super(APIView, self).check_permissions(request)

 class GenericAPIView(generics.GenericAPIView, APIView):
 # Base class for all model-based views.
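The new `check_permissions` override leans on `getattr`'s default: a request authenticated without an OAuth2 token has no `oauth_scopes` attribute, so it falls through to `['write']` and is unaffected, while a read-only token is blocked from writes. A sketch with stand-in user classes (these classes are illustrative, not AWX models):

```python
class AnonUser:
    """Stand-in for request.user when no OAuth2 token is attached."""

class ReadOnlyOAuthUser:
    oauth_scopes = ['read']  # token granted read-only scope

def write_allowed(user):
    # Mirrors the check above: the default ['write'] lets non-OAuth
    # authentication (session, basic) through untouched.
    return 'write' in getattr(user, 'oauth_scopes', ['write'])

print(write_allowed(AnonUser()))           # True: session/basic auth unaffected
print(write_allowed(ReadOnlyOAuthUser()))  # False: read-only token cannot write
```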
@@ -355,7 +390,6 @@ class GenericAPIView(generics.GenericAPIView, APIView):
 ]:
     d[key] = self.metadata_class().get_serializer_info(serializer, method=method)
 d['settings'] = settings
-d['has_named_url'] = self.model in settings.NAMED_URL_GRAPH
 return d
@@ -726,6 +760,7 @@ class DeleteLastUnattachLabelMixin(object):
 when the last disassociate is called should inherit from this class. Further,
 the model should implement is_detached()
 '''

 def unattach(self, request, *args, **kwargs):
     (sub_id, res) = super(DeleteLastUnattachLabelMixin, self).unattach_validate(request)
     if res:
@@ -801,6 +836,10 @@ class CopyAPIView(GenericAPIView):
 new_in_330 = True
 new_in_api_v2 = True

+def v1_not_allowed(self):
+    return Response({'detail': 'Action only possible starting with v2 API.'},
+                    status=status.HTTP_404_NOT_FOUND)

 def _get_copy_return_serializer(self, *args, **kwargs):
     if not self.copy_return_serializer_class:
         return self.get_serializer(*args, **kwargs)
@@ -885,9 +924,11 @@ class CopyAPIView(GenericAPIView):
 # not work properly in non-request-response-cycle context.
 new_obj.created_by = creater
 new_obj.save()
-for m2m in m2m_to_preserve:
-    for related_obj in m2m_to_preserve[m2m].all():
-        getattr(new_obj, m2m).add(related_obj)
+from awx.main.signals import disable_activity_stream
+with disable_activity_stream():
+    for m2m in m2m_to_preserve:
+        for related_obj in m2m_to_preserve[m2m].all():
+            getattr(new_obj, m2m).add(related_obj)
 if not old_parent:
     sub_objects = []
     for o2m in o2m_to_preserve:
@@ -902,13 +943,21 @@ class CopyAPIView(GenericAPIView):
 return ret

 def get(self, request, *args, **kwargs):
+    if get_request_version(request) < 2:
+        return self.v1_not_allowed()
     obj = self.get_object()
+    if not request.user.can_access(obj.__class__, 'read', obj):
+        raise PermissionDenied()
     create_kwargs = self._build_create_dict(obj)
     for key in create_kwargs:
         create_kwargs[key] = getattr(create_kwargs[key], 'pk', None) or create_kwargs[key]
-    return Response({'can_copy': request.user.can_access(self.model, 'add', create_kwargs)})
+    can_copy = request.user.can_access(self.model, 'add', create_kwargs) and \
+        request.user.can_access(self.model, 'copy_related', obj)
+    return Response({'can_copy': can_copy})
def post(self, request, *args, **kwargs): def post(self, request, *args, **kwargs):
if get_request_version(request) < 2:
return self.v1_not_allowed()
obj = self.get_object() obj = self.get_object()
create_kwargs = self._build_create_dict(obj) create_kwargs = self._build_create_dict(obj)
create_kwargs_check = {} create_kwargs_check = {}
@@ -916,6 +965,8 @@ class CopyAPIView(GenericAPIView):
create_kwargs_check[key] = getattr(create_kwargs[key], 'pk', None) or create_kwargs[key] create_kwargs_check[key] = getattr(create_kwargs[key], 'pk', None) or create_kwargs[key]
if not request.user.can_access(self.model, 'add', create_kwargs_check): if not request.user.can_access(self.model, 'add', create_kwargs_check):
raise PermissionDenied() raise PermissionDenied()
if not request.user.can_access(self.model, 'copy_related', obj):
raise PermissionDenied()
serializer = self.get_serializer(data=request.data) serializer = self.get_serializer(data=request.data)
if not serializer.is_valid(): if not serializer.is_valid():
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@@ -937,4 +988,5 @@ class CopyAPIView(GenericAPIView):
permission_check_func=permission_check_func permission_check_func=permission_check_func
) )
serializer = self._get_copy_return_serializer(new_obj) serializer = self._get_copy_return_serializer(new_obj)
return Response(serializer.data, status=status.HTTP_201_CREATED) headers = {'Location': new_obj.get_absolute_url(request=request)}
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
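The hunk above wraps the m2m restore in `disable_activity_stream()` so that copying an object does not flood the activity stream with one entry per re-attached relation. As a rough sketch of the pattern (not AWX's actual implementation, which suspends Django signal handlers), a flag-based context manager behaves like this:

```python
from contextlib import contextmanager

# Hypothetical stand-in for AWX's disable_activity_stream: a module-level
# flag that the "signal handler" consults before recording an entry.
_activity_stream_enabled = True
activity_log = []

def record_activity(entry):
    # Signal-handler stand-in: only logs while the stream is enabled.
    if _activity_stream_enabled:
        activity_log.append(entry)

@contextmanager
def disable_activity_stream():
    global _activity_stream_enabled
    _activity_stream_enabled = False
    try:
        yield
    finally:
        # Re-enable even if the body raised.
        _activity_stream_enabled = True

record_activity('copy started')
with disable_activity_stream():
    for related in ('cred-1', 'cred-2'):
        record_activity('m2m add %s' % related)  # suppressed
record_activity('copy finished')
print(activity_log)  # ['copy started', 'copy finished']
```

The try/finally is what makes the suppression safe: the stream comes back on even when an attach raises mid-copy.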

View File

@@ -67,6 +67,8 @@ class Metadata(metadata.SimpleMetadata):
                 if field.field_name == model_field.name:
                     field_info['filterable'] = True
                     break
+            else:
+                field_info['filterable'] = False

         # Indicate if a field has a default value.
         # FIXME: Still isn't showing all default values?
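The `else:` added above hangs off the `for` loop, not an `if`: Python runs a loop's `else` clause only when the loop completes without hitting `break`, which makes it a natural place for the "no match found" fallback. A minimal standalone sketch of the same shape (names here are illustrative, not the actual Metadata class):

```python
def filterable_flag(field_name, model_field_names):
    # Mirrors the hunk above: the field is filterable only if a matching
    # model field exists; the for/else supplies the fallback.
    field_info = {}
    for name in model_field_names:
        if field_name == name:
            field_info['filterable'] = True
            break
    else:
        # Runs only when the loop finished without break.
        field_info['filterable'] = False
    return field_info

print(filterable_flag('name', ['id', 'name']))     # {'filterable': True}
print(filterable_flag('missing', ['id', 'name']))  # {'filterable': False}
```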

File diff suppressed because it is too large

View File

@@ -54,8 +54,6 @@ within all designated text fields of a model.
     ?search=findme

-_Added in AWX 1.4_
-
 (_Added in Ansible Tower 3.1.0_) Search across related fields:

     ?related__search=findme
@@ -84,7 +82,7 @@ To exclude results matching certain criteria, prefix the field parameter with

     ?not__field=value

-(_Added in AWX 1.4_) By default, all query string filters are AND'ed together, so
+By default, all query string filters are AND'ed together, so
 only the results matching *all* filters will be returned. To combine results
 matching *any* one of multiple criteria, prefix each query string parameter
 with `or__`:
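As a quick illustration of the filter grammar documented above (the field names are made up), such query strings can be assembled with `urllib.parse.urlencode`; the `not__` prefix negates one criterion, while prefixing every key with `or__` switches the combination from AND to OR:

```python
from urllib.parse import urlencode

def build_filter_query(filters, use_or=False):
    # filters is a list of (field, value) pairs; AND'ing is the default.
    prefix = 'or__' if use_or else ''
    return urlencode([(prefix + field, value) for field, value in filters])

print(build_filter_query([('name', 'demo'), ('not__status', 'failed')]))
# name=demo&not__status=failed
print(build_filter_query([('name', 'demo'), ('name', 'test')], use_or=True))
# or__name=demo&or__name=test
```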

View File

@@ -10,7 +10,7 @@ object containing groups, including the hosts, children and variables for each
 group. The response data is equivalent to that returned by passing the
 `--list` argument to an inventory script.

-_(Added in AWX 1.3)_ Specify a query string of `?hostvars=1` to retrieve the JSON
+Specify a query string of `?hostvars=1` to retrieve the JSON
 object above including all host variables. The `['_meta']['hostvars']` object
 in the response contains an entry for each host with its variables. This
 response format can be used with Ansible 1.3 and later to avoid making a
@@ -18,11 +18,16 @@ separate API request for each host. Refer to
 [Tuning the External Inventory Script](http://docs.ansible.com/developing_inventory.html#tuning-the-external-inventory-script)
 for more information on this feature.

-_(Added in AWX 1.4)_ By default, the inventory script will only return hosts that
+By default, the inventory script will only return hosts that
 are enabled in the inventory. This feature allows disabled hosts to be skipped
 when running jobs without removing them from the inventory. Specify a query
 string of `?all=1` to return all hosts, including disabled ones.

+Specify a query string of `?towervars=1` to add variables
+to the hostvars of each host that specifies its enabled state and database ID.
+
+To apply multiple query strings, join them with the `&` character, like `?hostvars=1&all=1`.
+
 ## Host Response

 Make a GET request to this resource with a query string similar to
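To illustrate the `?hostvars=1` shape the docs above describe, here is a minimal, made-up `--list` payload and the `['_meta']['hostvars']` lookup that avoids a separate API request per host (hostnames and variables are hypothetical):

```python
import json

# Minimal --list style payload with host variables inlined under _meta,
# as returned when `?hostvars=1` is passed.
payload = json.loads('''
{
    "web": {"hosts": ["web1.example.org"], "vars": {"tier": "frontend"}},
    "_meta": {"hostvars": {"web1.example.org": {"ansible_host": "10.0.0.5"}}}
}
''')

# Per-host variables come straight from _meta - one request for the whole
# inventory instead of one round-trip per host.
hostvars = payload['_meta']['hostvars']
print(hostvars['web1.example.org']['ansible_host'])  # 10.0.0.5
```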

View File

@@ -48,7 +48,7 @@ Here is a more comprehensive example showing the various question types and thei
         "min": 5,
         "max": "",
         "required": false,
-        "default": "yes"
+        "default": "Leeloo Minai Lekarariba-Laminai-Tchai Ekbat De Sebat"
     },
     {
         "type": "text",
@@ -57,9 +57,9 @@ Here is a more comprehensive example showing the various question types and thei
         "variable": "short_answer",
         "choices": "",
         "min": "",
-        "max": 5,
+        "max": 7,
         "required": false,
-        "default": "yes"
+        "default": "leeloo"
     },
     {
         "type": "text",
@@ -70,7 +70,7 @@ Here is a more comprehensive example showing the various question types and thei
         "min": "",
         "max": "",
         "required": true,
-        "default": "yes"
+        "default": "NOT OPTIONAL"
     },
     {
         "type": "multiplechoice",
@@ -81,7 +81,7 @@ Here is a more comprehensive example showing the various question types and thei
         "min": "",
         "max": "",
         "required": false,
-        "default": "yes"
+        "default": "one"
     },
     {
         "type": "multiselect",
@@ -92,7 +92,7 @@ Here is a more comprehensive example showing the various question types and thei
         "min": "",
         "max": "",
         "required": false,
-        "default": "yes"
+        "default": "one\nthree"
     },
     {
         "type": "integer",

View File

@@ -1,7 +1,3 @@
-{% if has_named_url %}
-### Note: starting from api v2, this resource object can be accessed via its named URL.
-{% endif %}
-
 # Retrieve {{ model_verbose_name|title|anora }}:

 Make GET request to this resource to retrieve a single {{ model_verbose_name }}

View File

@@ -1,7 +1,3 @@
-{% if has_named_url %}
-### Note: starting from api v2, this resource object can be accessed via its named URL.
-{% endif %}
-
 {% ifmeth GET %}
 # Retrieve {{ model_verbose_name|title|anora }}:

View File

@@ -1,7 +1,3 @@
-{% if has_named_url %}
-### Note: starting from api v2, this resource object can be accessed via its named URL.
-{% endif %}
-
 {% ifmeth GET %}
 # Retrieve {{ model_verbose_name|title|anora }}:

View File

@@ -1,7 +1,3 @@
-{% if has_named_url %}
-### Note: starting from api v2, this resource object can be accessed via its named URL.
-{% endif %}
-
 {% ifmeth GET %}
 # Retrieve {{ model_verbose_name|title|anora }}:

View File

@@ -1,18 +0,0 @@
-# Copyright (c) 2017 Ansible, Inc.
-# All Rights Reserved.
-
-from django.conf.urls import url
-from oauth2_provider.urls import base_urlpatterns
-
-from awx.api.views import (
-    ApiOAuthAuthorizationRootView,
-)
-
-
-urls = [
-    url(r'^$', ApiOAuthAuthorizationRootView.as_view(), name='oauth_authorization_root_view'),
-] + base_urlpatterns
-
-__all__ = ['urls']

View File

@@ -11,7 +11,6 @@ from awx.api.views import (
     OAuth2TokenList,
     OAuth2TokenDetail,
     OAuth2TokenActivityStreamList,
-    OAuth2PersonalTokenList
 )
@@ -42,8 +41,7 @@ urls = [
         r'^tokens/(?P<pk>[0-9]+)/activity_stream/$',
         OAuth2TokenActivityStreamList.as_view(),
         name='o_auth2_token_activity_stream_list'
     ),
-    url(r'^personal_tokens/$', OAuth2PersonalTokenList.as_view(), name='o_auth2_personal_token_list'),
 ]

 __all__ = ['urls']

View File

@@ -0,0 +1,31 @@
+# Copyright (c) 2017 Ansible, Inc.
+# All Rights Reserved.
+
+from django.conf.urls import url
+from oauthlib import oauth2
+from oauth2_provider import views
+
+from awx.api.views import (
+    ApiOAuthAuthorizationRootView,
+)
+
+
+class TokenView(views.TokenView):
+
+    def create_token_response(self, request):
+        try:
+            return super(TokenView, self).create_token_response(request)
+        except oauth2.AccessDeniedError as e:
+            return request.build_absolute_uri(), {}, str(e), '403'
+
+
+urls = [
+    url(r'^$', ApiOAuthAuthorizationRootView.as_view(), name='oauth_authorization_root_view'),
+    url(r"^authorize/$", views.AuthorizationView.as_view(), name="authorize"),
+    url(r"^token/$", TokenView.as_view(), name="token"),
+    url(r"^revoke_token/$", views.RevokeTokenView.as_view(), name="revoke-token"),
+]
+
+__all__ = ['urls']

View File

@@ -67,8 +67,8 @@ from .schedule import urls as schedule_urls
 from .activity_stream import urls as activity_stream_urls
 from .instance import urls as instance_urls
 from .instance_group import urls as instance_group_urls
-from .user_oauth import urls as user_oauth_urls
-from .oauth import urls as oauth_urls
+from .oauth2 import urls as oauth2_urls
+from .oauth2_root import urls as oauth2_root_urls

 v1_urls = [
@@ -130,7 +130,7 @@ v2_urls = [
     url(r'^applications/(?P<pk>[0-9]+)/$', OAuth2ApplicationDetail.as_view(), name='o_auth2_application_detail'),
     url(r'^applications/(?P<pk>[0-9]+)/tokens/$', ApplicationOAuth2TokenList.as_view(), name='application_o_auth2_token_list'),
     url(r'^tokens/$', OAuth2TokenList.as_view(), name='o_auth2_token_list'),
-    url(r'^', include(user_oauth_urls)),
+    url(r'^', include(oauth2_urls)),
 ]

 app_name = 'api'
@@ -145,7 +145,7 @@ urlpatterns = [
     url(r'^logout/$', LoggedLogoutView.as_view(
         next_page='/api/', redirect_field_name='next'
     ), name='logout'),
-    url(r'^o/', include(oauth_urls)),
+    url(r'^o/', include(oauth2_root_urls)),
 ]

 if settings.SETTINGS_MODULE == 'awx.settings.development':
     from awx.api.swagger import SwaggerSchemaView

View File

@@ -16,7 +16,7 @@ from awx.api.views import (
     UserAccessList,
     OAuth2ApplicationList,
     OAuth2UserTokenList,
-    OAuth2PersonalTokenList,
+    UserPersonalTokenList,
     UserAuthorizedTokenList,
 )
@@ -34,7 +34,7 @@ urls = [
     url(r'^(?P<pk>[0-9]+)/applications/$', OAuth2ApplicationList.as_view(), name='o_auth2_application_list'),
     url(r'^(?P<pk>[0-9]+)/tokens/$', OAuth2UserTokenList.as_view(), name='o_auth2_token_list'),
     url(r'^(?P<pk>[0-9]+)/authorized_tokens/$', UserAuthorizedTokenList.as_view(), name='user_authorized_token_list'),
-    url(r'^(?P<pk>[0-9]+)/personal_tokens/$', OAuth2PersonalTokenList.as_view(), name='o_auth2_personal_token_list'),
+    url(r'^(?P<pk>[0-9]+)/personal_tokens/$', UserPersonalTokenList.as_view(), name='user_personal_token_list'),
 ]

View File

@@ -24,7 +24,8 @@ from django.shortcuts import get_object_or_404
 from django.utils.encoding import smart_text
 from django.utils.safestring import mark_safe
 from django.utils.timezone import now
-from django.views.decorators.csrf import csrf_exempt
+from django.utils.decorators import method_decorator
+from django.views.decorators.csrf import csrf_exempt, ensure_csrf_cookie
 from django.template.loader import render_to_string
 from django.http import HttpResponse
 from django.contrib.contenttypes.models import ContentType
@@ -60,7 +61,7 @@ import pytz
 from wsgiref.util import FileWrapper

 # AWX
-from awx.main.tasks import send_notifications, handle_ha_toplogy_changes
+from awx.main.tasks import send_notifications
 from awx.main.access import get_user_queryset
 from awx.main.ha import is_ha_environment
 from awx.api.filters import V1CredentialFilterBackend
@@ -104,6 +105,8 @@ def api_exception_handler(exc, context):
         exc = ParseError(exc.args[0])
     if isinstance(exc, FieldError):
         exc = ParseError(exc.args[0])
+    if isinstance(context['view'], UnifiedJobStdout):
+        context['view'].renderer_classes = [BrowsableAPIRenderer, renderers.JSONRenderer]
     return exception_handler(exc, context)
@@ -112,9 +115,10 @@ class ActivityStreamEnforcementMixin(object):
     Mixin to check that license supports activity streams.
     '''
     def check_permissions(self, request):
+        ret = super(ActivityStreamEnforcementMixin, self).check_permissions(request)
         if not feature_enabled('activity_streams'):
             raise LicenseForbids(_('Your license does not allow use of the activity stream.'))
-        return super(ActivityStreamEnforcementMixin, self).check_permissions(request)
+        return ret


 class SystemTrackingEnforcementMixin(object):
@@ -122,9 +126,10 @@ class SystemTrackingEnforcementMixin(object):
     Mixin to check that license supports system tracking.
     '''
     def check_permissions(self, request):
+        ret = super(SystemTrackingEnforcementMixin, self).check_permissions(request)
         if not feature_enabled('system_tracking'):
             raise LicenseForbids(_('Your license does not permit use of system tracking.'))
-        return super(SystemTrackingEnforcementMixin, self).check_permissions(request)
+        return ret


 class WorkflowsEnforcementMixin(object):
@@ -132,9 +137,10 @@ class WorkflowsEnforcementMixin(object):
     Mixin to check that license supports workflows.
     '''
     def check_permissions(self, request):
+        ret = super(WorkflowsEnforcementMixin, self).check_permissions(request)
         if not feature_enabled('workflows') and request.method not in ('GET', 'OPTIONS', 'DELETE'):
             raise LicenseForbids(_('Your license does not allow use of workflows.'))
-        return super(WorkflowsEnforcementMixin, self).check_permissions(request)
+        return ret


 class UnifiedJobDeletionMixin(object):
@@ -176,29 +182,64 @@ class InstanceGroupMembershipMixin(object):
         sub_id, res = self.attach_validate(request)
         if status.is_success(response.status_code):
             if self.parent_model is Instance:
-                ig_obj = get_object_or_400(self.model, pk=sub_id)
                 inst_name = ig_obj.hostname
             else:
-                ig_obj = self.get_parent_object()
                 inst_name = get_object_or_400(self.model, pk=sub_id).hostname
-            if inst_name not in ig_obj.policy_instance_list:
-                ig_obj.policy_instance_list.append(inst_name)
-                ig_obj.save()
+            with transaction.atomic():
+                ig_qs = InstanceGroup.objects.select_for_update()
+                if self.parent_model is Instance:
+                    ig_obj = get_object_or_400(ig_qs, pk=sub_id)
+                else:
+                    # similar to get_parent_object, but selected for update
+                    parent_filter = {
+                        self.lookup_field: self.kwargs.get(self.lookup_field, None),
+                    }
+                    ig_obj = get_object_or_404(ig_qs, **parent_filter)
+                if inst_name not in ig_obj.policy_instance_list:
+                    ig_obj.policy_instance_list.append(inst_name)
+                    ig_obj.save(update_fields=['policy_instance_list'])
         return response

+    def is_valid_relation(self, parent, sub, created=False):
+        if sub.is_isolated():
+            return {'error': _('Isolated instances may not be added or removed from instances groups via the API.')}
+        if self.parent_model is InstanceGroup:
+            ig_obj = self.get_parent_object()
+            if ig_obj.controller_id is not None:
+                return {'error': _('Isolated instance group membership may not be managed via the API.')}
+        return None
+
+    def unattach_validate(self, request):
+        (sub_id, res) = super(InstanceGroupMembershipMixin, self).unattach_validate(request)
+        if res:
+            return (sub_id, res)
+        sub = get_object_or_400(self.model, pk=sub_id)
+        attach_errors = self.is_valid_relation(None, sub)
+        if attach_errors:
+            return (sub_id, Response(attach_errors, status=status.HTTP_400_BAD_REQUEST))
+        return (sub_id, res)
+
     def unattach(self, request, *args, **kwargs):
         response = super(InstanceGroupMembershipMixin, self).unattach(request, *args, **kwargs)
+        sub_id, res = self.attach_validate(request)
         if status.is_success(response.status_code):
-            sub_id = request.data.get('id', None)
             if self.parent_model is Instance:
-                ig_obj = get_object_or_400(self.model, pk=sub_id)
                 inst_name = self.get_parent_object().hostname
             else:
-                ig_obj = self.get_parent_object()
                 inst_name = get_object_or_400(self.model, pk=sub_id).hostname
-            if inst_name in ig_obj.policy_instance_list:
-                ig_obj.policy_instance_list.pop(ig_obj.policy_instance_list.index(inst_name))
-                ig_obj.save()
+            with transaction.atomic():
+                ig_qs = InstanceGroup.objects.select_for_update()
+                if self.parent_model is Instance:
+                    ig_obj = get_object_or_400(ig_qs, pk=sub_id)
+                else:
+                    # similar to get_parent_object, but selected for update
+                    parent_filter = {
+                        self.lookup_field: self.kwargs.get(self.lookup_field, None),
+                    }
+                    ig_obj = get_object_or_404(ig_qs, **parent_filter)
+                if inst_name in ig_obj.policy_instance_list:
+                    ig_obj.policy_instance_list.pop(ig_obj.policy_instance_list.index(inst_name))
+                    ig_obj.save(update_fields=['policy_instance_list'])
         return response
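The attach/unattach hunks above reduce to an idempotent add/remove on `policy_instance_list`, now performed inside `transaction.atomic()` with `select_for_update()` so two concurrent requests cannot overwrite each other's list update. The list logic itself can be sketched without Django (function names here are illustrative):

```python
def attach_instance(policy_instance_list, inst_name):
    # Idempotent: attaching an instance that is already present is a no-op.
    if inst_name not in policy_instance_list:
        policy_instance_list.append(inst_name)
    return policy_instance_list

def unattach_instance(policy_instance_list, inst_name):
    # Idempotent: removing an absent instance is a no-op.
    if inst_name in policy_instance_list:
        policy_instance_list.pop(policy_instance_list.index(inst_name))
    return policy_instance_list

group = ['awx-1']
attach_instance(group, 'awx-2')
attach_instance(group, 'awx-2')   # second attach changes nothing
print(group)                      # ['awx-1', 'awx-2']
unattach_instance(group, 'awx-1')
print(group)                      # ['awx-2']
```

The row lock matters because this is a read-modify-write: without `select_for_update()`, two requests could both read the old list and the second save would silently drop the first change.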
@@ -227,20 +268,20 @@ class ApiRootView(APIView):
     versioning_class = None
     swagger_topic = 'Versioning'

+    @method_decorator(ensure_csrf_cookie)
     def get(self, request, format=None):
         ''' List supported API versions '''
         v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
         v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
-        data = dict(
-            description = _('AWX REST API'),
-            current_version = v2,
-            available_versions = dict(v1 = v1, v2 = v2),
-        )
+        data = OrderedDict()
+        data['description'] = _('AWX REST API')
+        data['current_version'] = v2
+        data['available_versions'] = dict(v1 = v1, v2 = v2)
+        data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
         if feature_enabled('rebranding'):
             data['custom_logo'] = settings.CUSTOM_LOGO
             data['custom_login_info'] = settings.CUSTOM_LOGIN_INFO
-        data['oauth2'] = drf_reverse('api:oauth_authorization_root_view')
         return Response(data)
@@ -404,9 +445,9 @@ class ApiV1ConfigView(APIView):
         data.update(dict(
             project_base_dir = settings.PROJECTS_ROOT,
             project_local_paths = Project.get_local_path_choices(),
+            custom_virtualenvs = get_custom_venv_choices()
         ))
-
-        if JobTemplate.accessible_objects(request.user, 'admin_role').exists():
+        elif JobTemplate.accessible_objects(request.user, 'admin_role').exists():
             data['custom_virtualenvs'] = get_custom_venv_choices()

         return Response(data)
@@ -631,7 +672,6 @@ class InstanceDetail(RetrieveUpdateAPIView):
         else:
             obj.capacity = 0
         obj.save()
-        handle_ha_toplogy_changes.apply_async()
         r.data = InstanceSerializer(obj, context=self.get_serializer_context()).to_representation(obj)
         return r
@@ -640,7 +680,7 @@ class InstanceUnifiedJobsList(SubListAPIView):

     view_name = _("Instance Jobs")
     model = UnifiedJob
-    serializer_class = UnifiedJobSerializer
+    serializer_class = UnifiedJobListSerializer
     parent_model = Instance

     def get_queryset(self):
@@ -687,7 +727,7 @@ class InstanceGroupUnifiedJobsList(SubListAPIView):

     view_name = _("Instance Group Running Jobs")
     model = UnifiedJob
-    serializer_class = UnifiedJobSerializer
+    serializer_class = UnifiedJobListSerializer
     parent_model = InstanceGroup
     relationship = "unifiedjob_set"
@@ -720,6 +760,7 @@ class SchedulePreview(GenericAPIView):

     model = Schedule
     view_name = _('Schedule Recurrence Rule Preview')
     serializer_class = SchedulePreviewSerializer
+    permission_classes = (IsAuthenticated,)

     def post(self, request):
         serializer = self.get_serializer(data=request.data)
@@ -797,7 +838,7 @@ class ScheduleCredentialsList(LaunchConfigCredentialsBase):

 class ScheduleUnifiedJobsList(SubListAPIView):

     model = UnifiedJob
-    serializer_class = UnifiedJobSerializer
+    serializer_class = UnifiedJobListSerializer
     parent_model = Schedule
     relationship = 'unifiedjob_set'
     view_name = _('Schedule Jobs List')
@@ -1055,7 +1096,7 @@ class OrganizationProjectsList(SubListCreateAttachDetachAPIView):

 class OrganizationWorkflowJobTemplatesList(SubListCreateAttachDetachAPIView):

     model = WorkflowJobTemplate
-    serializer_class = WorkflowJobTemplateListSerializer
+    serializer_class = WorkflowJobTemplateSerializer
     parent_model = Organization
     relationship = 'workflows'
     parent_key = 'organization'
@@ -1144,11 +1185,6 @@ class TeamList(ListCreateAPIView):

     model = Team
     serializer_class = TeamSerializer

-    def get_queryset(self):
-        qs = Team.accessible_objects(self.request.user, 'read_role').order_by()
-        qs = qs.select_related('admin_role', 'read_role', 'member_role', 'organization')
-        return qs
-

 class TeamDetail(RetrieveUpdateDestroyAPIView):
@@ -1186,8 +1222,8 @@ class TeamRolesList(SubListAttachDetachAPIView):

         role = get_object_or_400(Role, pk=sub_id)
         org_content_type = ContentType.objects.get_for_model(Organization)
-        if role.content_type == org_content_type:
-            data = dict(msg=_("You cannot assign an Organization role as a child role for a Team."))
+        if role.content_type == org_content_type and role.role_field in ['member_role', 'admin_role']:
+            data = dict(msg=_("You cannot assign an Organization participation role as a child role for a Team."))
             return Response(data, status=status.HTTP_400_BAD_REQUEST)

         if role.is_singleton():
@@ -1377,7 +1413,7 @@ class ProjectNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView):

 class ProjectUpdatesList(SubListAPIView):

     model = ProjectUpdate
-    serializer_class = ProjectUpdateSerializer
+    serializer_class = ProjectUpdateListSerializer
     parent_model = Project
     relationship = 'project_updates'
@@ -1415,7 +1451,7 @@ class ProjectUpdateList(ListAPIView):

 class ProjectUpdateDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView):

     model = ProjectUpdate
-    serializer_class = ProjectUpdateSerializer
+    serializer_class = ProjectUpdateDetailSerializer


 class ProjectUpdateEventsList(SubListAPIView):
@@ -1488,7 +1524,7 @@ class ProjectUpdateScmInventoryUpdates(SubListCreateAPIView):

     view_name = _("Project Update SCM Inventory Updates")
     model = InventoryUpdate
-    serializer_class = InventoryUpdateSerializer
+    serializer_class = InventoryUpdateListSerializer
     parent_model = ProjectUpdate
     relationship = 'scm_inventory_updates'
     parent_key = 'source_project_update'
@@ -1568,6 +1604,10 @@ class OAuth2ApplicationDetail(RetrieveUpdateDestroyAPIView):

     serializer_class = OAuth2ApplicationSerializer
     swagger_topic = 'Authentication'

+    def update_raw_data(self, data):
+        data.pop('client_secret', None)
+        return super(OAuth2ApplicationDetail, self).update_raw_data(data)
+

 class ApplicationOAuth2TokenList(SubListCreateAPIView):
@@ -1610,29 +1650,14 @@ class OAuth2UserTokenList(SubListCreateAPIView):

     relationship = 'main_oauth2accesstoken'
     parent_key = 'user'
     swagger_topic = 'Authentication'

-
-class OAuth2AuthorizedTokenList(SubListCreateAPIView):
-
-    view_name = _("OAuth2 Authorized Access Tokens")
-
-    model = OAuth2AccessToken
-    serializer_class = OAuth2AuthorizedTokenSerializer
-    parent_model = OAuth2Application
-    relationship = 'oauth2accesstoken_set'
-    parent_key = 'application'
-    swagger_topic = 'Authentication'
-
-    def get_queryset(self):
-        return get_access_token_model().objects.filter(application__isnull=False, user=self.request.user)
-

 class UserAuthorizedTokenList(SubListCreateAPIView):

     view_name = _("OAuth2 User Authorized Access Tokens")

     model = OAuth2AccessToken
-    serializer_class = OAuth2AuthorizedTokenSerializer
+    serializer_class = UserAuthorizedTokenSerializer
     parent_model = User
     relationship = 'oauth2accesstoken_set'
     parent_key = 'user'
@@ -1640,12 +1665,12 @@ class UserAuthorizedTokenList(SubListCreateAPIView):

     def get_queryset(self):
         return get_access_token_model().objects.filter(application__isnull=False, user=self.request.user)


 class OrganizationApplicationList(SubListCreateAPIView):

     view_name = _("Organization OAuth2 Applications")

     model = OAuth2Application
     serializer_class = OAuth2ApplicationSerializer
     parent_model = Organization
@@ -1654,17 +1679,17 @@ class OrganizationApplicationList(SubListCreateAPIView):

     swagger_topic = 'Authentication'


-class OAuth2PersonalTokenList(SubListCreateAPIView):
+class UserPersonalTokenList(SubListCreateAPIView):

     view_name = _("OAuth2 Personal Access Tokens")

     model = OAuth2AccessToken
-    serializer_class = OAuth2PersonalTokenSerializer
+    serializer_class = UserPersonalTokenSerializer
     parent_model = User
     relationship = 'main_oauth2accesstoken'
     parent_key = 'user'
     swagger_topic = 'Authentication'

     def get_queryset(self):
         return get_access_token_model().objects.filter(application__isnull=True, user=self.request.user)
@@ -2233,6 +2258,12 @@ class HostDetail(RelatedJobsPreventDeleteMixin, ControlledByScmMixin, RetrieveUp
model = Host model = Host
serializer_class = HostSerializer serializer_class = HostSerializer
def delete(self, request, *args, **kwargs):
if self.get_object().inventory.pending_deletion:
return Response({"error": _("The inventory for this host is already being deleted.")},
status=status.HTTP_400_BAD_REQUEST)
return super(HostDetail, self).delete(request, *args, **kwargs)
class HostAnsibleFactsDetail(RetrieveAPIView): class HostAnsibleFactsDetail(RetrieveAPIView):
@@ -2842,7 +2873,7 @@ class InventorySourceGroupsList(SubListDestroyAPIView):
class InventorySourceUpdatesList(SubListAPIView): class InventorySourceUpdatesList(SubListAPIView):
model = InventoryUpdate model = InventoryUpdate
serializer_class = InventoryUpdateSerializer serializer_class = InventoryUpdateListSerializer
parent_model = InventorySource parent_model = InventorySource
relationship = 'inventory_updates' relationship = 'inventory_updates'
@@ -2855,17 +2886,14 @@ class InventorySourceCredentialsList(SubListAttachDetachAPIView):
relationship = 'credentials' relationship = 'credentials'
def is_valid_relation(self, parent, sub, created=False): def is_valid_relation(self, parent, sub, created=False):
# Inventory source credentials are exclusive with all other credentials
# subject to change for https://github.com/ansible/awx/issues/277
# or https://github.com/ansible/awx/issues/223
if parent.credentials.exists():
return {'msg': _("Source already has credential assigned.")}
error = InventorySource.cloud_credential_validation(parent.source, sub) error = InventorySource.cloud_credential_validation(parent.source, sub)
if error: if error:
return {'msg': error} return {'msg': error}
if sub.credential_type == 'vault':
# TODO: support this
return {"msg": _("Vault credentials are not yet supported for inventory sources.")}
else:
# Cloud credentials are exclusive with all other cloud credentials
cloud_cred_qs = parent.credentials.exclude(credential_type__kind='vault')
if cloud_cred_qs.exists():
return {'msg': _("Source already has cloud credential assigned.")}
return None return None
@@ -3011,12 +3039,12 @@ class JobTemplateLaunch(RetrieveAPIView):
if fd not in modern_data and id_fd in modern_data: if fd not in modern_data and id_fd in modern_data:
modern_data[fd] = modern_data[id_fd] modern_data[fd] = modern_data[id_fd]
# This block causes `extra_credentials` to _always_ be ignored for # This block causes `extra_credentials` to _always_ raise an error on
# the launch endpoint if we're accessing `/api/v1/` # the launch endpoint if we're accessing `/api/v1/`
if get_request_version(self.request) == 1 and 'extra_credentials' in modern_data: if get_request_version(self.request) == 1 and 'extra_credentials' in modern_data:
extra_creds = modern_data.pop('extra_credentials', None) raise ParseError({"extra_credentials": _(
if extra_creds is not None: "Field is not allowed for use with v1 API."
ignored_fields['extra_credentials'] = extra_creds )})
# Automatically convert legacy launch credential arguments into a list of `.credentials` # Automatically convert legacy launch credential arguments into a list of `.credentials`
if 'credentials' in modern_data and ( if 'credentials' in modern_data and (
@@ -3037,10 +3065,10 @@ class JobTemplateLaunch(RetrieveAPIView):
existing_credentials = obj.credentials.all() existing_credentials = obj.credentials.all()
template_credentials = list(existing_credentials) # save copy of existing template_credentials = list(existing_credentials) # save copy of existing
new_credentials = [] new_credentials = []
for key, conditional in ( for key, conditional, _type, type_repr in (
('credential', lambda cred: cred.credential_type.kind != 'ssh'), ('credential', lambda cred: cred.credential_type.kind != 'ssh', int, 'pk value'),
('vault_credential', lambda cred: cred.credential_type.kind != 'vault'), ('vault_credential', lambda cred: cred.credential_type.kind != 'vault', int, 'pk value'),
('extra_credentials', lambda cred: cred.credential_type.kind not in ('cloud', 'net')) ('extra_credentials', lambda cred: cred.credential_type.kind not in ('cloud', 'net'), Iterable, 'a list')
): ):
if key in modern_data: if key in modern_data:
# if a specific deprecated key is specified, remove all # if a specific deprecated key is specified, remove all
@@ -3049,6 +3077,13 @@ class JobTemplateLaunch(RetrieveAPIView):
existing_credentials = filter(conditional, existing_credentials) existing_credentials = filter(conditional, existing_credentials)
prompted_value = modern_data.pop(key) prompted_value = modern_data.pop(key)
# validate type, since these are not covered by a serializer
if not isinstance(prompted_value, _type):
msg = _(
"Incorrect type. Expected {}, received {}."
).format(type_repr, prompted_value.__class__.__name__)
raise ParseError({key: [msg], 'credentials': [msg]})
# add the deprecated credential specified in the request # add the deprecated credential specified in the request
if not isinstance(prompted_value, Iterable) or isinstance(prompted_value, basestring): if not isinstance(prompted_value, Iterable) or isinstance(prompted_value, basestring):
prompted_value = [prompted_value] prompted_value = [prompted_value]
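The `(key, conditional, _type, type_repr)` tuples added above drive a simple per-field type check before the deprecated credential arguments are merged. A minimal, standalone sketch of that check (Python 3 here for brevity; `validate_launch_fields` and `RULES` are illustrative names, not AWX API):

```python
# Sketch of the type validation the launch endpoint performs on the
# deprecated credential fields. Field names mirror the diff; everything
# else is illustrative. Note that strings also satisfy Iterable -- the
# real code handles that separately by wrapping scalars into a list.
from collections.abc import Iterable

RULES = [
    # (field, expected type, human-readable description)
    ('credential', int, 'pk value'),
    ('vault_credential', int, 'pk value'),
    ('extra_credentials', Iterable, 'a list'),
]


def validate_launch_fields(data):
    """Return {field: error message} for every ill-typed field."""
    errors = {}
    for key, _type, type_repr in RULES:
        if key in data and not isinstance(data[key], _type):
            errors[key] = 'Incorrect type. Expected {}, received {}.'.format(
                type_repr, data[key].__class__.__name__)
    return errors
```

In the view, a non-empty result would be raised as a `ParseError` keyed on both the deprecated field and `credentials`, as the diff shows.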
@@ -3108,7 +3143,8 @@ class JobTemplateLaunch(RetrieveAPIView):
data['job'] = new_job.id data['job'] = new_job.id
data['ignored_fields'] = self.sanitize_for_response(ignored_fields) data['ignored_fields'] = self.sanitize_for_response(ignored_fields)
data.update(JobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job)) data.update(JobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job))
return Response(data, status=status.HTTP_201_CREATED) headers = {'Location': new_job.get_absolute_url(request)}
return Response(data, status=status.HTTP_201_CREATED, headers=headers)
def sanitize_for_response(self, data): def sanitize_for_response(self, data):
@@ -3320,6 +3356,9 @@ class JobTemplateCredentialsList(SubListCreateAttachDetachAPIView):
if sub.unique_hash() in [cred.unique_hash() for cred in parent.credentials.all()]: if sub.unique_hash() in [cred.unique_hash() for cred in parent.credentials.all()]:
return {"error": _("Cannot assign multiple {credential_type} credentials.".format( return {"error": _("Cannot assign multiple {credential_type} credentials.".format(
credential_type=sub.unique_hash(display=True)))} credential_type=sub.unique_hash(display=True)))}
kind = sub.credential_type.kind
if kind not in ('ssh', 'vault', 'cloud', 'net'):
return {'error': _('Cannot assign a Credential of kind `{}`.').format(kind)}
return super(JobTemplateCredentialsList, self).is_valid_relation(parent, sub, created) return super(JobTemplateCredentialsList, self).is_valid_relation(parent, sub, created)
@@ -3713,7 +3752,7 @@ class WorkflowJobNodeAlwaysNodesList(WorkflowJobNodeChildrenBaseList):
class WorkflowJobTemplateList(WorkflowsEnforcementMixin, ListCreateAPIView): class WorkflowJobTemplateList(WorkflowsEnforcementMixin, ListCreateAPIView):
model = WorkflowJobTemplate model = WorkflowJobTemplate
serializer_class = WorkflowJobTemplateListSerializer serializer_class = WorkflowJobTemplateSerializer
always_allow_superuser = False always_allow_superuser = False
@@ -3730,7 +3769,11 @@ class WorkflowJobTemplateCopy(WorkflowsEnforcementMixin, CopyAPIView):
copy_return_serializer_class = WorkflowJobTemplateSerializer copy_return_serializer_class = WorkflowJobTemplateSerializer
def get(self, request, *args, **kwargs): def get(self, request, *args, **kwargs):
if get_request_version(request) < 2:
return self.v1_not_allowed()
obj = self.get_object() obj = self.get_object()
if not request.user.can_access(obj.__class__, 'read', obj):
raise PermissionDenied()
can_copy, messages = request.user.can_access_with_errors(self.model, 'copy', obj) can_copy, messages = request.user.can_access_with_errors(self.model, 'copy', obj)
data = OrderedDict([ data = OrderedDict([
('can_copy', can_copy), ('can_copy_without_user_input', can_copy), ('can_copy', can_copy), ('can_copy_without_user_input', can_copy),
@@ -3806,7 +3849,8 @@ class WorkflowJobTemplateLaunch(WorkflowsEnforcementMixin, RetrieveAPIView):
data['workflow_job'] = new_job.id data['workflow_job'] = new_job.id
data['ignored_fields'] = ignored_fields data['ignored_fields'] = ignored_fields
data.update(WorkflowJobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job)) data.update(WorkflowJobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job))
return Response(data, status=status.HTTP_201_CREATED) headers = {'Location': new_job.get_absolute_url(request)}
return Response(data, status=status.HTTP_201_CREATED, headers=headers)
class WorkflowJobRelaunch(WorkflowsEnforcementMixin, GenericAPIView): class WorkflowJobRelaunch(WorkflowsEnforcementMixin, GenericAPIView):
@@ -4022,7 +4066,8 @@ class SystemJobTemplateLaunch(GenericAPIView):
data = OrderedDict() data = OrderedDict()
data['system_job'] = new_job.id data['system_job'] = new_job.id
data.update(SystemJobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job)) data.update(SystemJobSerializer(new_job, context=self.get_serializer_context()).to_representation(new_job))
return Response(data, status=status.HTTP_201_CREATED) headers = {'Location': new_job.get_absolute_url(request)}
return Response(data, status=status.HTTP_201_CREATED, headers=headers)
class SystemJobTemplateSchedulesList(SubListCreateAPIView): class SystemJobTemplateSchedulesList(SubListCreateAPIView):
@@ -4094,7 +4139,30 @@ class JobDetail(UnifiedJobDeletionMixin, RetrieveUpdateDestroyAPIView):
model = Job model = Job
metadata_class = JobTypeMetadata metadata_class = JobTypeMetadata
serializer_class = JobSerializer serializer_class = JobDetailSerializer
# NOTE: When removing the V1 API in 3.4, delete the following four methods,
# and let this class inherit from RetrieveDestroyAPIView instead of
# RetrieveUpdateDestroyAPIView.
@property
def allowed_methods(self):
methods = super(JobDetail, self).allowed_methods
if get_request_version(getattr(self, 'request', None)) > 1:
methods.remove('PUT')
methods.remove('PATCH')
return methods
def put(self, request, *args, **kwargs):
if get_request_version(self.request) > 1:
return Response({"error": _("PUT not allowed for Job Details in version 2 of the API")},
status=status.HTTP_405_METHOD_NOT_ALLOWED)
return super(JobDetail, self).put(request, *args, **kwargs)
def patch(self, request, *args, **kwargs):
if get_request_version(self.request) > 1:
return Response({"error": _("PUT not allowed for Job Details in version 2 of the API")},
status=status.HTTP_405_METHOD_NOT_ALLOWED)
return super(JobDetail, self).patch(request, *args, **kwargs)
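The pattern above computes the verb list per request and drops the write verbs for v2+. A database-free sketch of the same idea (class names and the integer-version stand-in for `get_request_version(request)` are illustrative):

```python
# Sketch of version-gating HTTP verbs the way JobDetail does above:
# allowed_methods is recomputed per request, and API v2+ loses
# PUT/PATCH while v1 keeps them for backward compatibility.

class BaseDetail(object):
    @property
    def allowed_methods(self):
        return ['GET', 'PUT', 'PATCH', 'DELETE']


class VersionedJobDetail(BaseDetail):
    def __init__(self, version):
        # stand-in for get_request_version(self.request)
        self.version = version

    @property
    def allowed_methods(self):
        methods = list(super(VersionedJobDetail, self).allowed_methods)
        if self.version > 1:
            methods.remove('PUT')
            methods.remove('PATCH')
        return methods
```

The `put`/`patch` overrides in the diff then return 405 explicitly, so a v2 caller gets a clear error rather than a silent fall-through.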
def update(self, request, *args, **kwargs): def update(self, request, *args, **kwargs):
obj = self.get_object() obj = self.get_object()
@@ -4220,7 +4288,6 @@ class JobRelaunch(RetrieveAPIView):
data.pop('credential_passwords', None) data.pop('credential_passwords', None)
return data return data
@csrf_exempt
@transaction.non_atomic_requests @transaction.non_atomic_requests
def dispatch(self, *args, **kwargs): def dispatch(self, *args, **kwargs):
return super(JobRelaunch, self).dispatch(*args, **kwargs) return super(JobRelaunch, self).dispatch(*args, **kwargs)
@@ -4466,7 +4533,6 @@ class AdHocCommandList(ListCreateAPIView):
serializer_class = AdHocCommandListSerializer serializer_class = AdHocCommandListSerializer
always_allow_superuser = False always_allow_superuser = False
@csrf_exempt
@transaction.non_atomic_requests @transaction.non_atomic_requests
def dispatch(self, *args, **kwargs): def dispatch(self, *args, **kwargs):
return super(AdHocCommandList, self).dispatch(*args, **kwargs) return super(AdHocCommandList, self).dispatch(*args, **kwargs)
@@ -4538,7 +4604,7 @@ class HostAdHocCommandsList(AdHocCommandList, SubListCreateAPIView):
class AdHocCommandDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView): class AdHocCommandDetail(UnifiedJobDeletionMixin, RetrieveDestroyAPIView):
model = AdHocCommand model = AdHocCommand
serializer_class = AdHocCommandSerializer serializer_class = AdHocCommandDetailSerializer
class AdHocCommandCancel(RetrieveAPIView): class AdHocCommandCancel(RetrieveAPIView):
@@ -4564,7 +4630,6 @@ class AdHocCommandRelaunch(GenericAPIView):
# FIXME: Figure out why OPTIONS request still shows all fields. # FIXME: Figure out why OPTIONS request still shows all fields.
@csrf_exempt
@transaction.non_atomic_requests @transaction.non_atomic_requests
def dispatch(self, *args, **kwargs): def dispatch(self, *args, **kwargs):
return super(AdHocCommandRelaunch, self).dispatch(*args, **kwargs) return super(AdHocCommandRelaunch, self).dispatch(*args, **kwargs)
@@ -5041,8 +5106,8 @@ class RoleTeamsList(SubListAttachDetachAPIView):
role = Role.objects.get(pk=self.kwargs['pk']) role = Role.objects.get(pk=self.kwargs['pk'])
organization_content_type = ContentType.objects.get_for_model(Organization) organization_content_type = ContentType.objects.get_for_model(Organization)
if role.content_type == organization_content_type: if role.content_type == organization_content_type and role.role_field in ['member_role', 'admin_role']:
data = dict(msg=_("You cannot assign an Organization role as a child role for a Team.")) data = dict(msg=_("You cannot assign an Organization participation role as a child role for a Team."))
return Response(data, status=status.HTTP_400_BAD_REQUEST) return Response(data, status=status.HTTP_400_BAD_REQUEST)
credential_content_type = ContentType.objects.get_for_model(Credential) credential_content_type = ContentType.objects.get_for_model(Credential)


@@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
from awx.conf.migrations import _rename_setting
def copy_session_settings(apps, schema_editor):
_rename_setting.rename_setting(apps, schema_editor, old_key='AUTH_TOKEN_PER_USER', new_key='SESSIONS_PER_USER')
_rename_setting.rename_setting(apps, schema_editor, old_key='AUTH_TOKEN_EXPIRATION', new_key='SESSION_COOKIE_AGE')
def reverse_copy_session_settings(apps, schema_editor):
_rename_setting.rename_setting(apps, schema_editor, old_key='SESSION_COOKIE_AGE', new_key='AUTH_TOKEN_EXPIRATION')
_rename_setting.rename_setting(apps, schema_editor, old_key='SESSIONS_PER_USER', new_key='AUTH_TOKEN_PER_USER')
class Migration(migrations.Migration):
dependencies = [
('conf', '0004_v320_reencrypt'),
]
operations = [
migrations.RunPython(copy_session_settings, reverse_copy_session_settings),
]


@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import logging
from django.utils.timezone import now
from django.conf import settings
logger = logging.getLogger('awx.conf.settings')
__all__ = ['rename_setting']
def rename_setting(apps, schema_editor, old_key, new_key):
old_setting = None
Setting = apps.get_model('conf', 'Setting')
if Setting.objects.filter(key=new_key).exists() or hasattr(settings, new_key):
logger.info('Setting ' + new_key + ' unexpectedly exists before this migration; it will be replaced by the value of the ' + old_key + ' setting.')
Setting.objects.filter(key=new_key).delete()
# Look for db setting, which wouldn't be picked up by SettingsWrapper because the register method is gone
if Setting.objects.filter(key=old_key).exists():
old_setting = Setting.objects.filter(key=old_key).last().value
Setting.objects.filter(key=old_key).delete()
# Look for "on-disk" setting (/etc/tower/conf.d)
if hasattr(settings, old_key):
old_setting = getattr(settings, old_key)
if old_setting is not None:
Setting.objects.create(key=new_key,
value=old_setting,
created=now(),
modified=now()
)
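Because the helper reads the database row first and the on-disk setting second, the on-disk value wins when both exist. A pure-dict sketch of that precedence (no ORM; `db` stands in for the `Setting` table and `disk` for `/etc/tower/conf.d`):

```python
def rename_setting(db, disk, old_key, new_key):
    """Dict-based sketch of the migration helper above. Returns the
    new db contents after the rename; on-disk values take precedence
    over database rows because they are read last."""
    old_value, found = None, False
    db = dict(db)
    db.pop(new_key, None)          # replace any pre-existing new_key row
    if old_key in db:
        old_value, found = db.pop(old_key), True
    if old_key in disk:            # on-disk wins: it is read last
        old_value, found = disk[old_key], True
    if found:
        db[new_key] = old_value
    return db
```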


@@ -78,6 +78,14 @@ class Setting(CreatedModifiedModel):
def get_cache_id_key(self, key): def get_cache_id_key(self, key):
return '{}_ID'.format(key) return '{}_ID'.format(key)
def display_value(self):
if self.key == 'LICENSE' and 'license_key' in self.value:
# don't log the license key in activity stream
value = self.value.copy()
value['license_key'] = '********'
return value
return self.value
import awx.conf.signals # noqa import awx.conf.signals # noqa


@@ -15,7 +15,7 @@ from django.conf import LazySettings
from django.conf import settings, UserSettingsHolder from django.conf import settings, UserSettingsHolder
from django.core.cache import cache as django_cache from django.core.cache import cache as django_cache
from django.core.exceptions import ImproperlyConfigured from django.core.exceptions import ImproperlyConfigured
from django.db import ProgrammingError, OperationalError from django.db import ProgrammingError, OperationalError, transaction, connection
from django.utils.functional import cached_property from django.utils.functional import cached_property
# Django REST Framework # Django REST Framework
@@ -61,24 +61,66 @@ __all__ = ['SettingsWrapper', 'get_settings_to_cache', 'SETTING_CACHE_NOTSET']
@contextlib.contextmanager @contextlib.contextmanager
def _log_database_error(): def _ctit_db_wrapper(trans_safe=False):
'''
Wrapper to avoid undesired actions by Django ORM when managing settings
if only getting a setting, can use trans_safe=True, which will avoid
throwing errors if the prior context was a broken transaction.
Any database errors will be logged, but exception will be suppressed.
'''
rollback_set = None
is_atomic = None
try: try:
if trans_safe:
is_atomic = connection.in_atomic_block
if is_atomic:
rollback_set = transaction.get_rollback()
if rollback_set:
logger.debug('Obtaining database settings in spite of broken transaction.')
transaction.set_rollback(False)
yield yield
except (ProgrammingError, OperationalError): except (ProgrammingError, OperationalError):
if 'migrate' in sys.argv and get_tower_migration_version() < '310': if 'migrate' in sys.argv and get_tower_migration_version() < '310':
logger.info('Using default settings until version 3.1 migration.') logger.info('Using default settings until version 3.1 migration.')
else: else:
# Somewhat ugly - cramming the full stack trace into the log message # We want the _full_ traceback with the context
# the available exc_info does not give information about the real caller # First we get the current call stack, which constitutes the "top",
# TODO: replace in favor of stack_info kwarg in python 3 # it has the context up to the point where the context manager is used
sio = StringIO.StringIO() top_stack = StringIO.StringIO()
traceback.print_stack(file=sio) traceback.print_stack(file=top_stack)
sinfo = sio.getvalue() top_lines = top_stack.getvalue().strip('\n').split('\n')
sio.close() top_stack.close()
sinfo = sinfo.strip('\n') # Get "bottom" stack from the local error that happened
logger.warning('Database settings are not available, using defaults, logged from:\n{}'.format(sinfo)) # inside of the "with" block this wraps
exc_type, exc_value, exc_traceback = sys.exc_info()
bottom_stack = StringIO.StringIO()
traceback.print_tb(exc_traceback, file=bottom_stack)
bottom_lines = bottom_stack.getvalue().strip('\n').split('\n')
# Glue together top and bottom where overlap is found
bottom_cutoff = 0
for i, line in enumerate(bottom_lines):
if line in top_lines:
# start of overlapping section, take overlap from bottom
top_lines = top_lines[:top_lines.index(line)]
bottom_cutoff = i
break
bottom_lines = bottom_lines[bottom_cutoff:]
tb_lines = top_lines + bottom_lines
tb_string = '\n'.join(
['Traceback (most recent call last):'] +
tb_lines +
['{}: {}'.format(exc_type.__name__, str(exc_value))]
)
bottom_stack.close()
# Log the combined stack
if trans_safe:
logger.warning('Database settings are not available, using defaults, error:\n{}'.format(tb_string))
else:
logger.error('Error modifying something related to database settings.\n{}'.format(tb_string))
finally: finally:
pass if trans_safe and is_atomic and rollback_set:
transaction.set_rollback(rollback_set)
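The interesting part of `_ctit_db_wrapper` is the splice: it glues the call stack captured above the `with` block onto the traceback raised inside it, dropping the overlapping frames. The splice itself, isolated on plain line lists (function name is illustrative):

```python
def splice_stacks(top_lines, bottom_lines):
    """Sketch of the stack-glueing done in _ctit_db_wrapper above:
    find the first bottom line that also occurs in the top stack,
    truncate the top at that point, and keep the bottom from there on,
    so the overlap appears only once."""
    cutoff = 0
    for i, line in enumerate(bottom_lines):
        if line in top_lines:
            # start of the overlapping section; take it from the bottom
            top_lines = top_lines[:top_lines.index(line)]
            cutoff = i
            break
    return top_lines + bottom_lines[cutoff:]
```

With no overlap the two stacks are simply concatenated, which matches the `bottom_cutoff = 0` default in the diff.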
def filter_sensitive(registry, key, value): def filter_sensitive(registry, key, value):
@@ -398,7 +440,7 @@ class SettingsWrapper(UserSettingsHolder):
def __getattr__(self, name): def __getattr__(self, name):
value = empty value = empty
if name in self.all_supported_settings: if name in self.all_supported_settings:
with _log_database_error(): with _ctit_db_wrapper(trans_safe=True):
value = self._get_local(name) value = self._get_local(name)
if value is not empty: if value is not empty:
return value return value
@@ -430,7 +472,7 @@ class SettingsWrapper(UserSettingsHolder):
def __setattr__(self, name, value): def __setattr__(self, name, value):
if name in self.all_supported_settings: if name in self.all_supported_settings:
with _log_database_error(): with _ctit_db_wrapper():
self._set_local(name, value) self._set_local(name, value)
else: else:
setattr(self.default_settings, name, value) setattr(self.default_settings, name, value)
@@ -446,14 +488,14 @@ class SettingsWrapper(UserSettingsHolder):
def __delattr__(self, name): def __delattr__(self, name):
if name in self.all_supported_settings: if name in self.all_supported_settings:
with _log_database_error(): with _ctit_db_wrapper():
self._del_local(name) self._del_local(name)
else: else:
delattr(self.default_settings, name) delattr(self.default_settings, name)
def __dir__(self): def __dir__(self):
keys = [] keys = []
with _log_database_error(): with _ctit_db_wrapper(trans_safe=True):
for setting in Setting.objects.filter( for setting in Setting.objects.filter(
key__in=self.all_supported_settings, user__isnull=True): key__in=self.all_supported_settings, user__isnull=True):
# Skip returning settings that have been overridden but are # Skip returning settings that have been overridden but are
@@ -470,7 +512,7 @@ class SettingsWrapper(UserSettingsHolder):
def is_overridden(self, setting): def is_overridden(self, setting):
set_locally = False set_locally = False
if setting in self.all_supported_settings: if setting in self.all_supported_settings:
with _log_database_error(): with _ctit_db_wrapper(trans_safe=True):
set_locally = Setting.objects.filter(key=setting, user__isnull=True).exists() set_locally = Setting.objects.filter(key=setting, user__isnull=True).exists()
set_on_default = getattr(self.default_settings, 'is_overridden', lambda s: False)(setting) set_on_default = getattr(self.default_settings, 'is_overridden', lambda s: False)(setting)
return (set_locally or set_on_default) return (set_locally or set_on_default)


@@ -24,7 +24,12 @@ import os
import pwd import pwd
# PSUtil # PSUtil
import psutil try:
import psutil
except ImportError:
raise ImportError('psutil is missing; {}bin/pip install psutil'.format(
os.environ['VIRTUAL_ENV']
))
__all__ = [] __all__ = []


@@ -27,7 +27,13 @@ import os
import stat import stat
import threading import threading
import uuid import uuid
import memcache
try:
import memcache
except ImportError:
raise ImportError('python-memcached is missing; {}bin/pip install python-memcached'.format(
os.environ['VIRTUAL_ENV']
))
from six.moves import xrange from six.moves import xrange


@@ -308,7 +308,8 @@ class BaseCallbackModule(CallbackBase):
if custom_artifact_data: if custom_artifact_data:
# create the directory for custom stats artifacts to live in (if it doesn't exist) # create the directory for custom stats artifacts to live in (if it doesn't exist)
custom_artifacts_dir = os.path.join(os.getenv('AWX_PRIVATE_DATA_DIR'), 'artifacts') custom_artifacts_dir = os.path.join(os.getenv('AWX_PRIVATE_DATA_DIR'), 'artifacts')
os.makedirs(custom_artifacts_dir, mode=stat.S_IXUSR + stat.S_IWUSR + stat.S_IRUSR) if not os.path.isdir(custom_artifacts_dir):
os.makedirs(custom_artifacts_dir, mode=stat.S_IXUSR + stat.S_IWUSR + stat.S_IRUSR)
custom_artifacts_path = os.path.join(custom_artifacts_dir, 'custom') custom_artifacts_path = os.path.join(custom_artifacts_dir, 'custom')
with codecs.open(custom_artifacts_path, 'w', encoding='utf-8') as f: with codecs.open(custom_artifacts_path, 'w', encoding='utf-8') as f:
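The isdir-then-makedirs guard added above still has a small race window between the check and the creation. A sketch of an idempotent variant (function name is illustrative; `stat.S_IRWXU` equals the `S_IXUSR + S_IWUSR + S_IRUSR` sum used in the diff):

```python
import errno
import os
import stat


def ensure_private_dir(path):
    """Idempotent version of the guard above: attempt the creation and
    swallow only EEXIST, so two concurrent callers cannot both pass an
    isdir() check and then collide. On Python 3,
    os.makedirs(path, exist_ok=True) achieves the same thing."""
    try:
        os.makedirs(path, mode=stat.S_IRWXU)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise
```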

File diff suppressed because it is too large



@@ -5,6 +5,8 @@
import os import os
import sys import sys
import logging import logging
import six
from functools import reduce
# Django # Django
from django.conf import settings from django.conf import settings
@@ -96,8 +98,6 @@ def check_user_access(user, model_class, action, *args, **kwargs):
Return True if user can perform action against model_class with the Return True if user can perform action against model_class with the
provided parameters. provided parameters.
''' '''
if 'write' not in getattr(user, 'oauth_scopes', ['write']) and action != 'read':
return False
access_class = access_registry[model_class] access_class = access_registry[model_class]
access_instance = access_class(user) access_instance = access_class(user)
access_method = getattr(access_instance, 'can_%s' % action) access_method = getattr(access_instance, 'can_%s' % action)
@@ -217,6 +217,15 @@ class BaseAccess(object):
def can_copy(self, obj): def can_copy(self, obj):
return self.can_add({'reference_obj': obj}) return self.can_add({'reference_obj': obj})
def can_copy_related(self, obj):
'''
can_copy_related() should only be used to check whether the user has access to the
related many-to-many credentials when copying the object. It does not check whether
the user has permission for any other related objects. Therefore, when checking
whether the user can copy an object, it should always be used in conjunction with can_add().
'''
return True
def can_attach(self, obj, sub_obj, relationship, data, def can_attach(self, obj, sub_obj, relationship, data,
skip_sub_obj_read_check=False): skip_sub_obj_read_check=False):
if skip_sub_obj_read_check: if skip_sub_obj_read_check:
@@ -391,21 +400,24 @@ class BaseAccess(object):
return user_capabilities return user_capabilities
def get_method_capability(self, method, obj, parent_obj): def get_method_capability(self, method, obj, parent_obj):
if method in ['change']: # 3 args try:
return self.can_change(obj, {}) if method in ['change']: # 3 args
elif method in ['delete', 'run_ad_hoc_commands', 'copy']: return self.can_change(obj, {})
access_method = getattr(self, "can_%s" % method) elif method in ['delete', 'run_ad_hoc_commands', 'copy']:
return access_method(obj) access_method = getattr(self, "can_%s" % method)
elif method in ['start']: return access_method(obj)
return self.can_start(obj, validate_license=False) elif method in ['start']:
elif method in ['attach', 'unattach']: # parent/sub-object call return self.can_start(obj, validate_license=False)
access_method = getattr(self, "can_%s" % method) elif method in ['attach', 'unattach']: # parent/sub-object call
if type(parent_obj) == Team: access_method = getattr(self, "can_%s" % method)
relationship = 'parents' if type(parent_obj) == Team:
parent_obj = parent_obj.member_role relationship = 'parents'
else: parent_obj = parent_obj.member_role
relationship = 'members' else:
return access_method(obj, parent_obj, relationship, skip_sub_obj_read_check=True, data={}) relationship = 'members'
return access_method(obj, parent_obj, relationship, skip_sub_obj_read_check=True, data={})
except (ParseError, ObjectDoesNotExist):
return False
return False return False
@@ -516,29 +528,28 @@ class UserAccess(BaseAccess):
return False return False
return bool(self.user == obj or self.can_admin(obj, data)) return bool(self.user == obj or self.can_admin(obj, data))
def user_membership_roles(self, u): @staticmethod
return Role.objects.filter( def user_organizations(u):
content_type=ContentType.objects.get_for_model(Organization), '''
role_field__in=[ Returns all organizations that count `u` as a member
'admin_role', 'member_role', '''
'execute_role', 'project_admin_role', 'inventory_admin_role', return Organization.accessible_objects(u, 'member_role')
'credential_admin_role', 'workflow_admin_role',
'notification_admin_role'
],
members=u
)
def is_all_org_admin(self, u): def is_all_org_admin(self, u):
return not self.user_membership_roles(u).exclude( '''
ancestors__in=self.user.roles.filter(role_field='admin_role') returns True only if every organization that counts
`u` as a member is also one that `self.user` admins
'''
return not self.user_organizations(u).exclude(
pk__in=Organization.accessible_pk_qs(self.user, 'admin_role')
).exists() ).exists()
def user_is_orphaned(self, u): def user_is_orphaned(self, u):
return not self.user_membership_roles(u).exists() return not self.user_organizations(u).exists()
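The rewritten checks above reduce to simple set arithmetic once the querysets are materialized: `is_all_org_admin` is a subset test and `user_is_orphaned` an emptiness test. A set-based sketch (function signatures are illustrative, not the access-class API):

```python
def is_all_org_admin(member_orgs, admin_orgs):
    """Set-based sketch of the access check above: True only when every
    organization that counts the target user as a member is also one
    the requesting user administers. Mirrors the
    not-queryset.exclude(...).exists() construction in the diff."""
    return not (set(member_orgs) - set(admin_orgs))


def user_is_orphaned(member_orgs):
    """True when the user belongs to no organization at all."""
    return not set(member_orgs)
```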
@check_superuser @check_superuser
def can_admin(self, obj, data, allow_orphans=False): def can_admin(self, obj, data, allow_orphans=False, check_setting=True):
if not settings.MANAGE_ORGANIZATION_AUTH: if check_setting and (not settings.MANAGE_ORGANIZATION_AUTH):
return False return False
if obj.is_superuser or obj.is_system_auditor: if obj.is_superuser or obj.is_system_auditor:
# must be superuser to admin users with system roles # must be superuser to admin users with system roles
@@ -600,7 +611,8 @@ class OAuth2ApplicationAccess(BaseAccess):
select_related = ('user',) select_related = ('user',)
def filtered_queryset(self): def filtered_queryset(self):
return self.model.objects.filter(organization__in=self.user.organizations) org_access_qs = Organization.accessible_objects(self.user, 'member_role')
return self.model.objects.filter(organization__in=org_access_qs)
def can_change(self, obj, data): def can_change(self, obj, data):
return self.user.is_superuser or self.check_related('organization', Organization, data, obj=obj, return self.user.is_superuser or self.check_related('organization', Organization, data, obj=obj,
@@ -742,12 +754,13 @@ class InventoryAccess(BaseAccess):
# If no data is specified, just checking for generic add permission? # If no data is specified, just checking for generic add permission?
if not data: if not data:
return Organization.accessible_objects(self.user, 'inventory_admin_role').exists() return Organization.accessible_objects(self.user, 'inventory_admin_role').exists()
return (self.check_related('organization', Organization, data, role_field='inventory_admin_role') and
return self.check_related('organization', Organization, data, role_field='inventory_admin_role') self.check_related('insights_credential', Credential, data, role_field='use_role'))
@check_superuser @check_superuser
def can_change(self, obj, data): def can_change(self, obj, data):
return self.can_admin(obj, data) return (self.can_admin(obj, data) and
self.check_related('insights_credential', Credential, data, obj=obj, role_field='use_role'))
@check_superuser @check_superuser
def can_admin(self, obj, data): def can_admin(self, obj, data):
@@ -1071,7 +1084,7 @@ class CredentialAccess(BaseAccess):
return True return True
if data and data.get('user', None): if data and data.get('user', None):
user_obj = get_object_from_data('user', User, data) user_obj = get_object_from_data('user', User, data)
return check_user_access(self.user, User, 'change', user_obj, None) return bool(self.user == user_obj or UserAccess(self.user).can_admin(user_obj, None, check_setting=False))
if data and data.get('team', None): if data and data.get('team', None):
team_obj = get_object_from_data('team', Team, data) team_obj = get_object_from_data('team', Team, data)
return check_user_access(self.user, Team, 'change', team_obj, None) return check_user_access(self.user, Team, 'change', team_obj, None)
@@ -1114,6 +1127,9 @@ class TeamAccess(BaseAccess):
select_related = ('created_by', 'modified_by', 'organization',) select_related = ('created_by', 'modified_by', 'organization',)
def filtered_queryset(self): def filtered_queryset(self):
if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and \
(self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
return self.model.objects.all()
return self.model.accessible_objects(self.user, 'read_role') return self.model.accessible_objects(self.user, 'read_role')
@check_superuser @check_superuser
@@ -1197,14 +1213,15 @@ class ProjectAccess(BaseAccess):
@check_superuser @check_superuser
def can_add(self, data): def can_add(self, data):
if not data: # So the browseable API will work if not data: # So the browseable API will work
return Organization.accessible_objects(self.user, 'project_admin_role').exists() return Organization.accessible_objects(self.user, 'admin_role').exists()
return self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) return (self.check_related('organization', Organization, data, role_field='project_admin_role', mandatory=True) and
self.check_related('credential', Credential, data, role_field='use_role'))
@check_superuser @check_superuser
def can_change(self, obj, data): def can_change(self, obj, data):
if not self.check_related('organization', Organization, data, obj=obj, role_field='project_admin_role'): return (self.check_related('organization', Organization, data, obj=obj, role_field='project_admin_role') and
return False self.user in obj.admin_role and
return self.user in obj.admin_role self.check_related('credential', Credential, data, obj=obj, role_field='use_role'))
@check_superuser @check_superuser
def can_start(self, obj, validate_license=True): def can_start(self, obj, validate_license=True):
@@ -1320,6 +1337,17 @@ class JobTemplateAccess(BaseAccess):
return self.user in project.use_role return self.user in project.use_role
else: else:
return False return False
@check_superuser
def can_copy_related(self, obj):
'''
Check if we have access to all the credentials related to Job Templates.
Does not verify the user's permission for any other related fields (projects, inventories, etc).
'''
# obj.credentials.all() is accessible ONLY when object is saved (has valid id)
credential_manager = getattr(obj, 'credentials', None) if getattr(obj, 'id', False) else Credential.objects.none()
return reduce(lambda prev, cred: prev and self.user in cred.use_role, credential_manager.all(), True)
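The `reduce()` in `can_copy_related` can be sketched in isolation with stand-in role objects; `Role`, `Cred`, and `can_copy_credentials` below are illustrative names, not AWX models:

```python
from functools import reduce

class Role:
    # Minimal stand-in for an AWX role: membership test via `in`
    def __init__(self, members):
        self.members = set(members)

    def __contains__(self, user):
        return user in self.members

class Cred:
    # Stand-in credential exposing only the use_role checked above
    def __init__(self, members):
        self.use_role = Role(members)

def can_copy_credentials(user, creds):
    # Same shape as the reduce() in can_copy_related: access is granted
    # only if the user holds use_role on every related credential
    # (vacuously True for an empty credential list).
    return reduce(lambda prev, cred: prev and user in cred.use_role, creds, True)

print(can_copy_credentials('alice', [Cred(['alice']), Cred(['alice', 'bob'])]))
```

An equivalent and arguably clearer form is `all(user in cred.use_role for cred in creds)`, which also short-circuits on the first failure.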
def can_start(self, obj, validate_license=True): def can_start(self, obj, validate_license=True):
# Check license. # Check license.
@@ -1488,7 +1516,7 @@ class JobAccess(BaseAccess):
# Obtain prompts used to start original job # Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try: try:
config = obj.launch_config config = JobLaunchConfig.objects.prefetch_related('credentials').get(job=obj)
except JobLaunchConfig.DoesNotExist: except JobLaunchConfig.DoesNotExist:
config = None config = None
@@ -1496,6 +1524,12 @@ class JobAccess(BaseAccess):
if obj.job_template is not None: if obj.job_template is not None:
if config is None: if config is None:
prompts_access = False prompts_access = False
elif not config.has_user_prompts(obj.job_template):
prompts_access = True
elif obj.created_by_id != self.user.pk:
prompts_access = False
if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts provided by another user.')
else: else:
prompts_access = ( prompts_access = (
JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}) and JobLaunchConfigAccess(self.user).can_add({'reference_obj': config}) and
@@ -1507,13 +1541,13 @@ class JobAccess(BaseAccess):
elif not jt_access: elif not jt_access:
return False return False
org_access = obj.inventory and self.user in obj.inventory.organization.inventory_admin_role org_access = bool(obj.inventory) and self.user in obj.inventory.organization.inventory_admin_role
project_access = obj.project is None or self.user in obj.project.admin_role project_access = obj.project is None or self.user in obj.project.admin_role
credential_access = all([self.user in cred.use_role for cred in obj.credentials.all()]) credential_access = all([self.user in cred.use_role for cred in obj.credentials.all()])
# job can be relaunched if user could make an equivalent JT # job can be relaunched if user could make an equivalent JT
ret = org_access and credential_access and project_access ret = org_access and credential_access and project_access
if not ret and self.save_messages: if not ret and self.save_messages and not self.messages:
if not obj.job_template: if not obj.job_template:
pretext = _('Job has been orphaned from its job template.') pretext = _('Job has been orphaned from its job template.')
elif config is None: elif config is None:
@@ -1918,12 +1952,22 @@ class WorkflowJobAccess(BaseAccess):
if not wfjt: if not wfjt:
return False return False
# execute permission to WFJT is mandatory for any relaunch # If job was launched by another user, it could have survey passwords
if self.user not in wfjt.execute_role: if obj.created_by_id != self.user.pk:
return False # Obtain prompts used to start original job
JobLaunchConfig = obj._meta.get_field('launch_config').related_model
try:
config = JobLaunchConfig.objects.get(job=obj)
except JobLaunchConfig.DoesNotExist:
config = None
# user's WFJT access doesn't guarantee permission to launch, introspect nodes if config is None or config.prompts_dict():
return self.can_recreate(obj) if self.save_messages:
self.messages['detail'] = _('Job was launched with prompts provided by another user.')
return False
# execute permission to WFJT is mandatory for any relaunch
return (self.user in wfjt.execute_role)
def can_recreate(self, obj): def can_recreate(self, obj):
node_qs = obj.workflow_job_nodes.all().prefetch_related('inventory', 'credentials', 'unified_job_template') node_qs = obj.workflow_job_nodes.all().prefetch_related('inventory', 'credentials', 'unified_job_template')
@@ -2342,9 +2386,7 @@ class LabelAccess(BaseAccess):
prefetch_related = ('modified_by', 'created_by', 'organization',) prefetch_related = ('modified_by', 'created_by', 'organization',)
def filtered_queryset(self): def filtered_queryset(self):
return self.model.objects.filter( return self.model.objects.all()
organization__in=Organization.accessible_pk_qs(self.user, 'read_role')
)
@check_superuser @check_superuser
def can_read(self, obj): def can_read(self, obj):
@@ -2531,7 +2573,11 @@ class RoleAccess(BaseAccess):
# administrators of that Organization the ability to edit that user. To prevent # administrators of that Organization the ability to edit that user. To prevent
# unwanted escalations let's ensure that the Organization administrator has the ability # unwanted escalations let's ensure that the Organization administrator has the ability
# to admin the user being added to the role. # to admin the user being added to the role.
if isinstance(obj.content_object, Organization) and obj.role_field in ['member_role', 'admin_role']: if (isinstance(obj.content_object, Organization) and
obj.role_field in (Organization.member_role.field.parent_role + ['member_role'])):
if not isinstance(sub_obj, User):
logger.error(six.text_type('Unexpected attempt to associate {} with organization role.').format(sub_obj))
return False
if not UserAccess(self.user).can_admin(sub_obj, None, allow_orphans=True): if not UserAccess(self.user).can_admin(sub_obj, None, allow_orphans=True):
return False return False

View File

@@ -38,7 +38,8 @@ register(
'ORG_ADMINS_CAN_SEE_ALL_USERS', 'ORG_ADMINS_CAN_SEE_ALL_USERS',
field_class=fields.BooleanField, field_class=fields.BooleanField,
label=_('All Users Visible to Organization Admins'), label=_('All Users Visible to Organization Admins'),
help_text=_('Controls whether any Organization Admin can view all users, even those not associated with their Organization.'), help_text=_('Controls whether any Organization Admin can view all users and teams, '
'even those not associated with their Organization.'),
category=_('System'), category=_('System'),
category_slug='system', category_slug='system',
) )
@@ -81,7 +82,7 @@ register(
help_text=_('HTTP headers and meta keys to search to determine remote host ' help_text=_('HTTP headers and meta keys to search to determine remote host '
'name or IP. Add additional items to this list, such as ' 'name or IP. Add additional items to this list, such as '
'"HTTP_X_FORWARDED_FOR", if behind a reverse proxy. ' '"HTTP_X_FORWARDED_FOR", if behind a reverse proxy. '
'See the "Proxy Support" section of the Administrator guide for' 'See the "Proxy Support" section of the Administrator guide for '
'more details.'), 'more details.'),
category=_('System'), category=_('System'),
category_slug='system', category_slug='system',
@@ -277,6 +278,16 @@ register(
placeholder={'HTTP_PROXY': 'myproxy.local:8080'}, placeholder={'HTTP_PROXY': 'myproxy.local:8080'},
) )
register(
'AWX_ROLES_ENABLED',
field_class=fields.BooleanField,
default=True,
label=_('Enable Role Download'),
help_text=_('Allows roles to be dynamically downloaded from a requirements.yml file for SCM projects.'),
category=_('Jobs'),
category_slug='jobs',
)
register( register(
'STDOUT_MAX_BYTES_DISPLAY', 'STDOUT_MAX_BYTES_DISPLAY',
field_class=fields.IntegerField, field_class=fields.IntegerField,
@@ -472,10 +483,12 @@ register(
register( register(
'LOG_AGGREGATOR_PROTOCOL', 'LOG_AGGREGATOR_PROTOCOL',
field_class=fields.ChoiceField, field_class=fields.ChoiceField,
choices=[('https', 'HTTPS'), ('tcp', 'TCP'), ('udp', 'UDP')], choices=[('https', 'HTTPS/HTTP'), ('tcp', 'TCP'), ('udp', 'UDP')],
default='https', default='https',
label=_('Logging Aggregator Protocol'), label=_('Logging Aggregator Protocol'),
help_text=_('Protocol used to communicate with log aggregator.'), help_text=_('Protocol used to communicate with log aggregator. '
'HTTPS/HTTP assumes HTTPS unless http:// is explicitly used in '
'the Logging Aggregator hostname.'),
category=_('Logging'), category=_('Logging'),
category_slug='logging', category_slug='logging',
) )

View File

@@ -7,7 +7,7 @@ from django.utils.translation import ugettext_lazy as _
__all__ = [ __all__ = [
'CLOUD_PROVIDERS', 'SCHEDULEABLE_PROVIDERS', 'PRIVILEGE_ESCALATION_METHODS', 'CLOUD_PROVIDERS', 'SCHEDULEABLE_PROVIDERS', 'PRIVILEGE_ESCALATION_METHODS',
'ANSI_SGR_PATTERN', 'CAN_CANCEL', 'ACTIVE_STATES' 'ANSI_SGR_PATTERN', 'CAN_CANCEL', 'ACTIVE_STATES', 'STANDARD_INVENTORY_UPDATE_ENV'
] ]
@@ -20,6 +20,12 @@ PRIVILEGE_ESCALATION_METHODS = [
] ]
CHOICES_PRIVILEGE_ESCALATION_METHODS = [('', _('None'))] + PRIVILEGE_ESCALATION_METHODS CHOICES_PRIVILEGE_ESCALATION_METHODS = [('', _('None'))] + PRIVILEGE_ESCALATION_METHODS
ANSI_SGR_PATTERN = re.compile(r'\x1b\[[0-9;]*m') ANSI_SGR_PATTERN = re.compile(r'\x1b\[[0-9;]*m')
STANDARD_INVENTORY_UPDATE_ENV = {
# Failure to parse inventory should always be fatal
'ANSIBLE_INVENTORY_UNPARSED_FAILED': 'True',
# Always use the --export option for ansible-inventory
'ANSIBLE_INVENTORY_EXPORT': 'True'
}
CAN_CANCEL = ('new', 'pending', 'waiting', 'running') CAN_CANCEL = ('new', 'pending', 'waiting', 'running')
ACTIVE_STATES = CAN_CANCEL ACTIVE_STATES = CAN_CANCEL
TOKEN_CENSOR = '************' CENSOR_VALUE = '************'
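The inventory_import change later in this diff merges these defaults into the update environment without clobbering values the caller already set. A minimal sketch of that merge (`apply_standard_env` is a hypothetical helper name, not an AWX function):

```python
STANDARD_INVENTORY_UPDATE_ENV = {
    # Failure to parse inventory should always be fatal
    'ANSIBLE_INVENTORY_UNPARSED_FAILED': 'True',
    # Always use the --export option for ansible-inventory
    'ANSIBLE_INVENTORY_EXPORT': 'True',
}

def apply_standard_env(env):
    # Copy so the caller's dict is untouched; setdefault keeps any
    # value the caller already configured, mirroring the
    # "if key not in env" loop in inventory_import.
    merged = dict(env)
    for key, value in STANDARD_INVENTORY_UPDATE_ENV.items():
        merged.setdefault(key, value)
    return merged

env = apply_standard_env({'ANSIBLE_INVENTORY_EXPORT': 'False'})
```

Here the caller's explicit `'False'` survives, while the unset `ANSIBLE_INVENTORY_UNPARSED_FAILED` default is filled in.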

View File

@@ -4,10 +4,12 @@ import logging
from channels import Group from channels import Group
from channels.auth import channel_session_user_from_http, channel_session_user from channels.auth import channel_session_user_from_http, channel_session_user
from django.http.cookie import parse_cookie
from django.core.serializers.json import DjangoJSONEncoder from django.core.serializers.json import DjangoJSONEncoder
logger = logging.getLogger('awx.main.consumers') logger = logging.getLogger('awx.main.consumers')
XRF_KEY = '_auth_user_xrf'
def discard_groups(message): def discard_groups(message):
@@ -18,12 +20,20 @@ def discard_groups(message):
@channel_session_user_from_http @channel_session_user_from_http
def ws_connect(message): def ws_connect(message):
headers = dict(message.content.get('headers', ''))
message.reply_channel.send({"accept": True}) message.reply_channel.send({"accept": True})
message.content['method'] = 'FAKE' message.content['method'] = 'FAKE'
if message.user.is_authenticated(): if message.user.is_authenticated():
message.reply_channel.send( message.reply_channel.send(
{"text": json.dumps({"accept": True, "user": message.user.id})} {"text": json.dumps({"accept": True, "user": message.user.id})}
) )
# store the valid CSRF token from the cookie so we can compare it later
# on ws_receive
cookie_token = parse_cookie(
headers.get('cookie')
).get('csrftoken')
if cookie_token:
message.channel_session[XRF_KEY] = cookie_token
else: else:
logger.error("Request user is not authenticated to use websocket.") logger.error("Request user is not authenticated to use websocket.")
message.reply_channel.send({"close": True}) message.reply_channel.send({"close": True})
@@ -42,6 +52,20 @@ def ws_receive(message):
raw_data = message.content['text'] raw_data = message.content['text']
data = json.loads(raw_data) data = json.loads(raw_data)
xrftoken = data.get('xrftoken')
if (
not xrftoken or
XRF_KEY not in message.channel_session or
xrftoken != message.channel_session[XRF_KEY]
):
logger.error(
"access denied to channel, XRF mismatch for {}".format(user.username)
)
message.reply_channel.send({
"text": json.dumps({"error": "access denied to channel"})
})
return
if 'groups' in data: if 'groups' in data:
discard_groups(message) discard_groups(message)
groups = data['groups'] groups = data['groups']
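The connect/receive handshake above can be sketched without Channels: store the CSRF token from the cookie at connect time, then require each received message to echo it back. `parse_cookie` here is a minimal stand-in for `django.http.cookie.parse_cookie`, and `ws_message_allowed` is an illustrative name:

```python
XRF_KEY = '_auth_user_xrf'

def parse_cookie(header):
    # Minimal cookie parser standing in for django's parse_cookie
    cookies = {}
    for part in (header or '').split(';'):
        if '=' in part:
            name, _, value = part.strip().partition('=')
            cookies[name] = value
    return cookies

def ws_message_allowed(channel_session, data):
    # Mirror of the ws_receive guard: the client must echo the CSRF
    # token that ws_connect stored in the channel session.
    xrftoken = data.get('xrftoken')
    return bool(
        xrftoken and
        XRF_KEY in channel_session and
        xrftoken == channel_session[XRF_KEY]
    )

# ws_connect side: stash the token from the request cookie
session = {XRF_KEY: parse_cookie('csrftoken=abc123; sessionid=xyz').get('csrftoken')}
```

On the receive side, `ws_message_allowed(session, {'xrftoken': 'abc123'})` passes, while a missing or mismatched token is rejected before any group subscription is processed.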

View File

@@ -117,10 +117,10 @@ class IsolatedManager(object):
@classmethod @classmethod
def awx_playbook_path(cls): def awx_playbook_path(cls):
return os.path.join( return os.path.abspath(os.path.join(
os.path.dirname(awx.__file__), os.path.dirname(awx.__file__),
'playbooks' 'playbooks'
) ))
def path_to(self, *args): def path_to(self, *args):
return os.path.join(self.private_data_dir, *args) return os.path.join(self.private_data_dir, *args)
@@ -318,7 +318,7 @@ class IsolatedManager(object):
path = self.path_to('artifacts', 'stdout') path = self.path_to('artifacts', 'stdout')
if os.path.exists(path): if os.path.exists(path):
with codecs.open(path, 'r', encoding='utf-8') as f: with open(path, 'r') as f:
f.seek(seek) f.seek(seek)
for line in f: for line in f:
self.stdout_handle.write(line) self.stdout_handle.write(line)
@@ -434,6 +434,7 @@ class IsolatedManager(object):
task_result = {} task_result = {}
if 'capacity_cpu' in task_result and 'capacity_mem' in task_result: if 'capacity_cpu' in task_result and 'capacity_mem' in task_result:
cls.update_capacity(instance, task_result, awx_application_version) cls.update_capacity(instance, task_result, awx_application_version)
logger.debug('Isolated instance {} successful heartbeat'.format(instance.hostname))
elif instance.capacity == 0: elif instance.capacity == 0:
logger.debug('Isolated instance {} previously marked as lost, could not re-join.'.format( logger.debug('Isolated instance {} previously marked as lost, could not re-join.'.format(
instance.hostname)) instance.hostname))
@@ -468,13 +469,11 @@ class IsolatedManager(object):
return OutputEventFilter(job_event_callback) return OutputEventFilter(job_event_callback)
def run(self, instance, host, private_data_dir, proot_temp_dir): def run(self, instance, private_data_dir, proot_temp_dir):
""" """
Run a job on an isolated host. Run a job on an isolated host.
:param instance: a `model.Job` instance :param instance: a `model.Job` instance
:param host: the hostname (or IP address) to run the
isolated job on
:param private_data_dir: an absolute path on the local file system :param private_data_dir: an absolute path on the local file system
where job-specific data should be written where job-specific data should be written
(i.e., `/tmp/ansible_awx_xyz/`) (i.e., `/tmp/ansible_awx_xyz/`)
@@ -486,14 +485,11 @@ class IsolatedManager(object):
`ansible-playbook` run. `ansible-playbook` run.
""" """
self.instance = instance self.instance = instance
self.host = host self.host = instance.execution_node
self.private_data_dir = private_data_dir self.private_data_dir = private_data_dir
self.proot_temp_dir = proot_temp_dir self.proot_temp_dir = proot_temp_dir
status, rc = self.dispatch() status, rc = self.dispatch()
if status == 'successful': if status == 'successful':
status, rc = self.check() status, rc = self.check()
else:
# If dispatch fails, attempt to consume artifacts that *might* exist
self.check()
self.cleanup() self.cleanup()
return status, rc return status, rc

View File

@@ -4,7 +4,7 @@ import argparse
import base64 import base64
import codecs import codecs
import collections import collections
import cStringIO import StringIO
import logging import logging
import json import json
import os import os
@@ -18,6 +18,7 @@ import time
import pexpect import pexpect
import psutil import psutil
import six
logger = logging.getLogger('awx.main.utils.expect') logger = logging.getLogger('awx.main.utils.expect')
@@ -99,6 +100,12 @@ def run_pexpect(args, cwd, env, logfile,
password_patterns = expect_passwords.keys() password_patterns = expect_passwords.keys()
password_values = expect_passwords.values() password_values = expect_passwords.values()
# pexpect needs all env vars to be utf-8 encoded strings
# https://github.com/pexpect/pexpect/issues/512
for k, v in env.items():
if isinstance(v, six.text_type):
env[k] = v.encode('utf-8')
child = pexpect.spawn( child = pexpect.spawn(
args[0], args[1:], cwd=cwd, env=env, ignore_sighup=True, args[0], args[1:], cwd=cwd, env=env, ignore_sighup=True,
encoding='utf-8', echo=False, use_poll=True encoding='utf-8', echo=False, use_poll=True
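The env-encoding workaround above can be sketched on its own. A non-mutating variant under the same assumption (`encode_env` is an illustrative name; on Python 3, `type(u'')` is `str`, so every text value gets encoded, which only matters for the Python 2 pexpect case the patch targets):

```python
def encode_env(env, text_type=type(u'')):
    # pexpect needs all env vars to be utf-8 encoded byte strings on
    # Python 2 (https://github.com/pexpect/pexpect/issues/512);
    # non-text values are passed through unchanged.
    return {
        k: v.encode('utf-8') if isinstance(v, text_type) else v
        for k, v in env.items()
    }

env = encode_env({'ANSIBLE_FORCE_COLOR': u'true', 'COUNT': 3})
```

Returning a new dict rather than mutating in place (as the patch does) keeps the caller's environment intact if the spawn is retried.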
@@ -201,6 +208,12 @@ def run_isolated_job(private_data_dir, secrets, logfile=sys.stdout):
env['AWX_ISOLATED_DATA_DIR'] = private_data_dir env['AWX_ISOLATED_DATA_DIR'] = private_data_dir
env['PYTHONPATH'] = env.get('PYTHONPATH', '') + callback_dir + ':' env['PYTHONPATH'] = env.get('PYTHONPATH', '') + callback_dir + ':'
venv_path = env.get('VIRTUAL_ENV')
if venv_path and not os.path.exists(venv_path):
raise RuntimeError(
'a valid Python virtualenv does not exist at {}'.format(venv_path)
)
return run_pexpect(args, cwd, env, logfile, return run_pexpect(args, cwd, env, logfile,
expect_passwords=expect_passwords, expect_passwords=expect_passwords,
idle_timeout=idle_timeout, idle_timeout=idle_timeout,
@@ -240,7 +253,7 @@ def handle_termination(pid, args, proot_cmd, is_cancel=True):
def __run__(private_data_dir): def __run__(private_data_dir):
buff = cStringIO.StringIO() buff = StringIO.StringIO()
with open(os.path.join(private_data_dir, 'env'), 'r') as f: with open(os.path.join(private_data_dir, 'env'), 'r') as f:
for line in f: for line in f:
buff.write(line) buff.write(line)

View File

@@ -218,6 +218,7 @@ class ImplicitRoleField(models.ForeignKey):
kwargs.setdefault('to', 'Role') kwargs.setdefault('to', 'Role')
kwargs.setdefault('related_name', '+') kwargs.setdefault('related_name', '+')
kwargs.setdefault('null', 'True') kwargs.setdefault('null', 'True')
kwargs.setdefault('editable', False)
super(ImplicitRoleField, self).__init__(*args, **kwargs) super(ImplicitRoleField, self).__init__(*args, **kwargs)
def deconstruct(self): def deconstruct(self):

View File

@@ -0,0 +1,12 @@
from django.db import connections
from django.db.backends.sqlite3.base import DatabaseWrapper
from django.core.management.commands.makemigrations import Command as MakeMigrations
class Command(MakeMigrations):
def execute(self, *args, **options):
settings = connections['default'].settings_dict.copy()
settings['ENGINE'] = 'sqlite3'
connections['default'] = DatabaseWrapper(settings)
return MakeMigrations().execute(*args, **options)

View File

@@ -24,5 +24,11 @@ class Command(BaseCommand):
raise CommandError('The user does not exist.') raise CommandError('The user does not exist.')
config = {'user': user, 'scope':'write'} config = {'user': user, 'scope':'write'}
serializer_obj = OAuth2TokenSerializer() serializer_obj = OAuth2TokenSerializer()
token_record = serializer_obj.create(config, True)
class FakeRequest(object):
def __init__(self):
self.user = user
serializer_obj.context['request'] = FakeRequest()
token_record = serializer_obj.create(config)
self.stdout.write(token_record.token) self.stdout.write(token_record.token)

View File

@@ -4,6 +4,7 @@
from django.core.management.base import BaseCommand from django.core.management.base import BaseCommand
from crum import impersonate from crum import impersonate
from awx.main.models import User, Organization, Project, Inventory, CredentialType, Credential, Host, JobTemplate from awx.main.models import User, Organization, Project, Inventory, CredentialType, Credential, Host, JobTemplate
from awx.main.signals import disable_computed_fields
class Command(BaseCommand): class Command(BaseCommand):
@@ -22,33 +23,34 @@ class Command(BaseCommand):
except IndexError: except IndexError:
superuser = None superuser = None
with impersonate(superuser): with impersonate(superuser):
o = Organization.objects.create(name='Default') with disable_computed_fields():
p = Project(name='Demo Project', o = Organization.objects.create(name='Default')
scm_type='git', p = Project(name='Demo Project',
scm_url='https://github.com/ansible/ansible-tower-samples', scm_type='git',
scm_update_on_launch=True, scm_url='https://github.com/ansible/ansible-tower-samples',
scm_update_cache_timeout=0, scm_update_on_launch=True,
organization=o) scm_update_cache_timeout=0,
p.save(skip_update=True) organization=o)
ssh_type = CredentialType.from_v1_kind('ssh') p.save(skip_update=True)
c = Credential.objects.create(credential_type=ssh_type, ssh_type = CredentialType.from_v1_kind('ssh')
name='Demo Credential', c = Credential.objects.create(credential_type=ssh_type,
inputs={ name='Demo Credential',
'username': superuser.username inputs={
}, 'username': superuser.username
created_by=superuser) },
c.admin_role.members.add(superuser) created_by=superuser)
i = Inventory.objects.create(name='Demo Inventory', c.admin_role.members.add(superuser)
organization=o, i = Inventory.objects.create(name='Demo Inventory',
created_by=superuser) organization=o,
Host.objects.create(name='localhost', created_by=superuser)
inventory=i, Host.objects.create(name='localhost',
variables="ansible_connection: local", inventory=i,
created_by=superuser) variables="ansible_connection: local",
jt = JobTemplate.objects.create(name='Demo Job Template', created_by=superuser)
playbook='hello_world.yml', jt = JobTemplate.objects.create(name='Demo Job Template',
project=p, playbook='hello_world.yml',
inventory=i) project=p,
jt.credentials.add(c) inventory=i)
jt.credentials.add(c)
print('Default organization added.') print('Default organization added.')
print('Demo Credential, Inventory, and Job Template added.') print('Demo Credential, Inventory, and Job Template added.')

View File

@@ -0,0 +1,37 @@
# Python
from importlib import import_module
# Django
from django.utils import timezone
from django.conf import settings
from django.contrib.auth import logout
from django.http import HttpRequest
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
from django.contrib.sessions.models import Session
from django.core.exceptions import ObjectDoesNotExist
class Command(BaseCommand):
"""Expire Django auth sessions for a user/all users"""
help='Expire Django auth sessions. Will expire all auth sessions if --user option is not supplied.'
def add_arguments(self, parser):
parser.add_argument('--user', dest='user', type=str)
def handle(self, *args, **options):
# Try to see if the user exists
try:
user = User.objects.get(username=options['user']) if options['user'] else None
except ObjectDoesNotExist:
raise CommandError('The user does not exist.')
# We use the following hack to filter out sessions that are still active,
# with consideration for timezones.
start = timezone.now()
sessions = Session.objects.filter(expire_date__gte=start).iterator()
request = HttpRequest()
for session in sessions:
user_id = session.get_decoded().get('_auth_user_id')
if (user is None) or (user_id and user.id == int(user_id)):
request.session = import_module(settings.SESSION_ENGINE).SessionStore(session.session_key)
logout(request)

View File

@@ -30,6 +30,7 @@ from awx.main.utils import (
) )
from awx.main.utils.mem_inventory import MemInventory, dict_to_mem_data from awx.main.utils.mem_inventory import MemInventory, dict_to_mem_data
from awx.main.signals import disable_activity_stream from awx.main.signals import disable_activity_stream
from awx.main.constants import STANDARD_INVENTORY_UPDATE_ENV
logger = logging.getLogger('awx.main.commands.inventory_import') logger = logging.getLogger('awx.main.commands.inventory_import')
@@ -82,7 +83,10 @@ class AnsibleInventoryLoader(object):
env = dict(os.environ.items()) env = dict(os.environ.items())
env['VIRTUAL_ENV'] = settings.ANSIBLE_VENV_PATH env['VIRTUAL_ENV'] = settings.ANSIBLE_VENV_PATH
env['PATH'] = os.path.join(settings.ANSIBLE_VENV_PATH, "bin") + ":" + env['PATH'] env['PATH'] = os.path.join(settings.ANSIBLE_VENV_PATH, "bin") + ":" + env['PATH']
env['ANSIBLE_INVENTORY_UNPARSED_FAILED'] = '1' # Set configuration items that should always be used for updates
for key, value in STANDARD_INVENTORY_UPDATE_ENV.items():
if key not in env:
env[key] = value
venv_libdir = os.path.join(settings.ANSIBLE_VENV_PATH, "lib") venv_libdir = os.path.join(settings.ANSIBLE_VENV_PATH, "lib")
env.pop('PYTHONPATH', None) # default to none if no python_ver matches env.pop('PYTHONPATH', None) # default to none if no python_ver matches
if os.path.isdir(os.path.join(venv_libdir, "python2.7")): if os.path.isdir(os.path.join(venv_libdir, "python2.7")):
@@ -487,7 +491,7 @@ class Command(BaseCommand):
for host in hosts_qs.filter(pk__in=del_pks): for host in hosts_qs.filter(pk__in=del_pks):
host_name = host.name host_name = host.name
host.delete() host.delete()
logger.info('Deleted host "%s"', host_name) logger.debug('Deleted host "%s"', host_name)
if settings.SQL_DEBUG: if settings.SQL_DEBUG:
logger.warning('host deletions took %d queries for %d hosts', logger.warning('host deletions took %d queries for %d hosts',
len(connection.queries) - queries_before, len(connection.queries) - queries_before,
@@ -524,7 +528,7 @@ class Command(BaseCommand):
group_name = group.name group_name = group.name
with ignore_inventory_computed_fields(): with ignore_inventory_computed_fields():
group.delete() group.delete()
logger.info('Group "%s" deleted', group_name) logger.debug('Group "%s" deleted', group_name)
if settings.SQL_DEBUG: if settings.SQL_DEBUG:
logger.warning('group deletions took %d queries for %d groups', logger.warning('group deletions took %d queries for %d groups',
len(connection.queries) - queries_before, len(connection.queries) - queries_before,
@@ -545,7 +549,7 @@ class Command(BaseCommand):
db_groups = self.inventory_source.groups db_groups = self.inventory_source.groups
for db_group in db_groups.all(): for db_group in db_groups.all():
if self.inventory_source.deprecated_group_id == db_group.id: # TODO: remove in 3.3 if self.inventory_source.deprecated_group_id == db_group.id: # TODO: remove in 3.3
logger.info( logger.debug(
'Group "%s" from v1 API child group/host connections preserved', 'Group "%s" from v1 API child group/host connections preserved',
db_group.name db_group.name
) )
@@ -562,8 +566,8 @@ class Command(BaseCommand):
for db_child in db_children.filter(pk__in=child_group_pks): for db_child in db_children.filter(pk__in=child_group_pks):
group_group_count += 1 group_group_count += 1
db_group.children.remove(db_child) db_group.children.remove(db_child)
logger.info('Group "%s" removed from group "%s"', logger.debug('Group "%s" removed from group "%s"',
db_child.name, db_group.name) db_child.name, db_group.name)
# FIXME: Inventory source group relationships # FIXME: Inventory source group relationships
# Delete group/host relationships not present in imported data. # Delete group/host relationships not present in imported data.
db_hosts = db_group.hosts db_hosts = db_group.hosts
@@ -590,8 +594,8 @@ class Command(BaseCommand):
if db_host not in db_group.hosts.all(): if db_host not in db_group.hosts.all():
continue continue
db_group.hosts.remove(db_host) db_group.hosts.remove(db_host)
logger.info('Host "%s" removed from group "%s"', logger.debug('Host "%s" removed from group "%s"',
db_host.name, db_group.name) db_host.name, db_group.name)
if settings.SQL_DEBUG: if settings.SQL_DEBUG:
logger.warning('group-group and group-host deletions took %d queries for %d relationships', logger.warning('group-group and group-host deletions took %d queries for %d relationships',
len(connection.queries) - queries_before, len(connection.queries) - queries_before,
@@ -610,9 +614,9 @@ class Command(BaseCommand):
if db_variables != all_obj.variables_dict: if db_variables != all_obj.variables_dict:
all_obj.variables = json.dumps(db_variables) all_obj.variables = json.dumps(db_variables)
all_obj.save(update_fields=['variables']) all_obj.save(update_fields=['variables'])
logger.info('Inventory variables updated from "all" group') logger.debug('Inventory variables updated from "all" group')
else: else:
logger.info('Inventory variables unmodified') logger.debug('Inventory variables unmodified')
def _create_update_groups(self): def _create_update_groups(self):
''' '''
@@ -644,11 +648,11 @@ class Command(BaseCommand):
group.variables = json.dumps(db_variables) group.variables = json.dumps(db_variables)
group.save(update_fields=['variables']) group.save(update_fields=['variables'])
if self.overwrite_vars: if self.overwrite_vars:
logger.info('Group "%s" variables replaced', group.name) logger.debug('Group "%s" variables replaced', group.name)
else: else:
logger.info('Group "%s" variables updated', group.name) logger.debug('Group "%s" variables updated', group.name)
else: else:
logger.info('Group "%s" variables unmodified', group.name) logger.debug('Group "%s" variables unmodified', group.name)
existing_group_names.add(group.name) existing_group_names.add(group.name)
self._batch_add_m2m(self.inventory_source.groups, group) self._batch_add_m2m(self.inventory_source.groups, group)
for group_name in all_group_names: for group_name in all_group_names:
@@ -662,7 +666,7 @@ class Command(BaseCommand):
'description':'imported' 'description':'imported'
} }
)[0] )[0]
logger.info('Group "%s" added', group.name) logger.debug('Group "%s" added', group.name)
self._batch_add_m2m(self.inventory_source.groups, group) self._batch_add_m2m(self.inventory_source.groups, group)
self._batch_add_m2m(self.inventory_source.groups, flush=True) self._batch_add_m2m(self.inventory_source.groups, flush=True)
if settings.SQL_DEBUG: if settings.SQL_DEBUG:
@@ -701,24 +705,24 @@ class Command(BaseCommand):
if update_fields: if update_fields:
db_host.save(update_fields=update_fields) db_host.save(update_fields=update_fields)
if 'name' in update_fields: if 'name' in update_fields:
logger.info('Host renamed from "%s" to "%s"', old_name, mem_host.name) logger.debug('Host renamed from "%s" to "%s"', old_name, mem_host.name)
if 'instance_id' in update_fields: if 'instance_id' in update_fields:
if old_instance_id: if old_instance_id:
logger.info('Host "%s" instance_id updated', mem_host.name) logger.debug('Host "%s" instance_id updated', mem_host.name)
else: else:
logger.info('Host "%s" instance_id added', mem_host.name) logger.debug('Host "%s" instance_id added', mem_host.name)
if 'variables' in update_fields: if 'variables' in update_fields:
if self.overwrite_vars: if self.overwrite_vars:
logger.info('Host "%s" variables replaced', mem_host.name) logger.debug('Host "%s" variables replaced', mem_host.name)
else: else:
logger.info('Host "%s" variables updated', mem_host.name) logger.debug('Host "%s" variables updated', mem_host.name)
else: else:
logger.info('Host "%s" variables unmodified', mem_host.name) logger.debug('Host "%s" variables unmodified', mem_host.name)
if 'enabled' in update_fields: if 'enabled' in update_fields:
if enabled: if enabled:
logger.info('Host "%s" is now enabled', mem_host.name) logger.debug('Host "%s" is now enabled', mem_host.name)
else: else:
logger.info('Host "%s" is now disabled', mem_host.name) logger.debug('Host "%s" is now disabled', mem_host.name)
self._batch_add_m2m(self.inventory_source.hosts, db_host) self._batch_add_m2m(self.inventory_source.hosts, db_host)
def _create_update_hosts(self): def _create_update_hosts(self):
@@ -792,9 +796,9 @@ class Command(BaseCommand):
             host_attrs['instance_id'] = instance_id
             db_host = self.inventory.hosts.update_or_create(name=mem_host_name, defaults=host_attrs)[0]
             if enabled is False:
-                logger.info('Host "%s" added (disabled)', mem_host_name)
+                logger.debug('Host "%s" added (disabled)', mem_host_name)
             else:
-                logger.info('Host "%s" added', mem_host_name)
+                logger.debug('Host "%s" added', mem_host_name)
             self._batch_add_m2m(self.inventory_source.hosts, db_host)
         self._batch_add_m2m(self.inventory_source.hosts, flush=True)
@@ -823,10 +827,10 @@ class Command(BaseCommand):
                 child_names = all_child_names[offset2:(offset2 + self._batch_size)]
                 db_children_qs = self.inventory.groups.filter(name__in=child_names)
                 for db_child in db_children_qs.filter(children__id=db_group.id):
-                    logger.info('Group "%s" already child of group "%s"', db_child.name, db_group.name)
+                    logger.debug('Group "%s" already child of group "%s"', db_child.name, db_group.name)
                 for db_child in db_children_qs.exclude(children__id=db_group.id):
                     self._batch_add_m2m(db_group.children, db_child)
-                    logger.info('Group "%s" added as child of "%s"', db_child.name, db_group.name)
+                    logger.debug('Group "%s" added as child of "%s"', db_child.name, db_group.name)
             self._batch_add_m2m(db_group.children, flush=True)
         if settings.SQL_DEBUG:
             logger.warning('Group-group updates took %d queries for %d group-group relationships',
@@ -850,19 +854,19 @@ class Command(BaseCommand):
                 host_names = all_host_names[offset2:(offset2 + self._batch_size)]
                 db_hosts_qs = self.inventory.hosts.filter(name__in=host_names)
                 for db_host in db_hosts_qs.filter(groups__id=db_group.id):
-                    logger.info('Host "%s" already in group "%s"', db_host.name, db_group.name)
+                    logger.debug('Host "%s" already in group "%s"', db_host.name, db_group.name)
                 for db_host in db_hosts_qs.exclude(groups__id=db_group.id):
                     self._batch_add_m2m(db_group.hosts, db_host)
-                    logger.info('Host "%s" added to group "%s"', db_host.name, db_group.name)
+                    logger.debug('Host "%s" added to group "%s"', db_host.name, db_group.name)
             all_instance_ids = sorted([h.instance_id for h in mem_group.hosts if h.instance_id])
             for offset2 in xrange(0, len(all_instance_ids), self._batch_size):
                 instance_ids = all_instance_ids[offset2:(offset2 + self._batch_size)]
                 db_hosts_qs = self.inventory.hosts.filter(instance_id__in=instance_ids)
                 for db_host in db_hosts_qs.filter(groups__id=db_group.id):
-                    logger.info('Host "%s" already in group "%s"', db_host.name, db_group.name)
+                    logger.debug('Host "%s" already in group "%s"', db_host.name, db_group.name)
                 for db_host in db_hosts_qs.exclude(groups__id=db_group.id):
                     self._batch_add_m2m(db_group.hosts, db_host)
-                    logger.info('Host "%s" added to group "%s"', db_host.name, db_group.name)
+                    logger.debug('Host "%s" added to group "%s"', db_host.name, db_group.name)
             self._batch_add_m2m(db_group.hosts, flush=True)
         if settings.SQL_DEBUG:
             logger.warning('Group-host updates took %d queries for %d group-host relationships',
@@ -1001,37 +1005,43 @@ class Command(BaseCommand):
             self.all_group.debug_tree()

         with batch_role_ancestor_rebuilding():
-            # Ensure that this is managed as an atomic SQL transaction,
-            # and thus properly rolled back if there is an issue.
-            with transaction.atomic():
-                # Merge/overwrite inventory into database.
-                if settings.SQL_DEBUG:
-                    logger.warning('loading into database...')
-                with ignore_inventory_computed_fields():
-                    if getattr(settings, 'ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC', True):
-                        self.load_into_database()
-                    else:
-                        with disable_activity_stream():
-                            self.load_into_database()
-                if settings.SQL_DEBUG:
-                    queries_before2 = len(connection.queries)
-                self.inventory.update_computed_fields()
-                if settings.SQL_DEBUG:
-                    logger.warning('update computed fields took %d queries',
-                                   len(connection.queries) - queries_before2)
-            try:
-                self.check_license()
-            except CommandError as e:
-                self.mark_license_failure(save=True)
-                raise e
+            # If using with transaction.atomic() with try ... catch,
+            # with transaction.atomic() must be inside the try section of the code as per Django docs
+            try:
+                # Ensure that this is managed as an atomic SQL transaction,
+                # and thus properly rolled back if there is an issue.
+                with transaction.atomic():
+                    # Merge/overwrite inventory into database.
+                    if settings.SQL_DEBUG:
+                        logger.warning('loading into database...')
+                    with ignore_inventory_computed_fields():
+                        if getattr(settings, 'ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC', True):
+                            self.load_into_database()
+                        else:
+                            with disable_activity_stream():
+                                self.load_into_database()
+                    if settings.SQL_DEBUG:
+                        queries_before2 = len(connection.queries)
+                    self.inventory.update_computed_fields()
+                    if settings.SQL_DEBUG:
+                        logger.warning('update computed fields took %d queries',
+                                       len(connection.queries) - queries_before2)
+                    # Check if the license is valid.
+                    # If the license is not valid, a CommandError will be thrown,
+                    # and inventory update will be marked as invalid.
+                    # with transaction.atomic() will roll back the changes.
+                    self.check_license()
+            except CommandError as e:
+                self.mark_license_failure()
+                raise e

             if settings.SQL_DEBUG:
                 logger.warning('Inventory import completed for %s in %0.1fs',
                                self.inventory_source.name, time.time() - begin)
             else:
                 logger.info('Inventory import completed for %s in %0.1fs',
                             self.inventory_source.name, time.time() - begin)
             status = 'successful'

         # If we're in debug mode, then log the queries and time
         # used to do the operation.
@@ -1058,6 +1068,8 @@ class Command(BaseCommand):
             self.inventory_update.result_traceback = tb
             self.inventory_update.status = status
             self.inventory_update.save(update_fields=['status', 'result_traceback'])
+            self.inventory_source.status = status
+            self.inventory_source.save(update_fields=['status'])
         if exc and isinstance(exc, CommandError):
             sys.exit(1)


@@ -6,6 +6,22 @@ from django.core.management.base import BaseCommand
 import six


+class Ungrouped(object):
+    name = 'ungrouped'
+    policy_instance_percentage = None
+    policy_instance_minimum = None
+    controller = None
+
+    @property
+    def instances(self):
+        return Instance.objects.filter(rampart_groups__isnull=True)
+
+    @property
+    def capacity(self):
+        return sum([x.capacity for x in self.instances])
+
+
 class Command(BaseCommand):
     """List instances from the Tower database
     """
@@ -13,12 +29,28 @@ class Command(BaseCommand):
     def handle(self, *args, **options):
         super(Command, self).__init__()
-        for instance in Instance.objects.all():
-            print(six.text_type(
-                "hostname: {0.hostname}; created: {0.created}; "
-                "heartbeat: {0.modified}; capacity: {0.capacity}").format(instance))
-        for instance_group in InstanceGroup.objects.all():
-            print(six.text_type(
-                "Instance Group: {0.name}; created: {0.created}; "
-                "capacity: {0.capacity}; members: {1}").format(instance_group,
-                                                              [x.hostname for x in instance_group.instances.all()]))
+        groups = list(InstanceGroup.objects.all())
+        ungrouped = Ungrouped()
+        if len(ungrouped.instances):
+            groups.append(ungrouped)
+
+        for instance_group in groups:
+            fmt = '[{0.name} capacity={0.capacity}'
+            if instance_group.policy_instance_percentage:
+                fmt += ' policy={0.policy_instance_percentage}%'
+            if instance_group.policy_instance_minimum:
+                fmt += ' policy>={0.policy_instance_minimum}'
+            if instance_group.controller:
+                fmt += ' controller={0.controller.name}'
+            print(six.text_type(fmt + ']').format(instance_group))
+            for x in instance_group.instances.all():
+                color = '\033[92m'
+                if x.capacity == 0 or x.enabled is False:
+                    color = '\033[91m'
+                fmt = '\t' + color + '{0.hostname} capacity={0.capacity} version={1}'
+                if x.last_isolated_check:
+                    fmt += ' last_isolated_check="{0.last_isolated_check:%Y-%m-%d %H:%M:%S}"'
+                if x.capacity:
+                    fmt += ' heartbeat="{0.modified:%Y-%m-%d %H:%M:%S}"'
+                print(six.text_type(fmt + '\033[0m').format(x, x.version or '?'))
+            print('')


@@ -22,7 +22,7 @@ class Command(BaseCommand):
         parser.add_argument('--queuename', dest='queuename', type=lambda s: six.text_type(s, 'utf8'),
                             help='Queue to create/update')
         parser.add_argument('--hostnames', dest='hostnames', type=lambda s: six.text_type(s, 'utf8'),
-                            help='Comma-Delimited Hosts to add to the Queue')
+                            help='Comma-Delimited Hosts to add to the Queue (will not remove already assigned instances)')
         parser.add_argument('--controller', dest='controller', type=lambda s: six.text_type(s, 'utf8'),
                             default='', help='The controlling group (makes this an isolated group)')
         parser.add_argument('--instance_percent', dest='instance_percent', type=int, default=0,
@@ -44,6 +44,9 @@ class Command(BaseCommand):
             ig.policy_instance_minimum = instance_min
             changed = True
+        if changed:
+            ig.save()
         return (ig, created, changed)

     def update_instance_group_controller(self, ig, controller):
@@ -72,16 +75,16 @@ class Command(BaseCommand):
             else:
                 raise InstanceNotFound(six.text_type("Instance does not exist: {}").format(inst_name), changed)
-        ig.instances = instances
-        instance_list_before = set(ig.policy_instance_list)
-        instance_list_after = set(instance_list_unique)
-        if len(instance_list_before) != len(instance_list_after) or \
-           len(set(instance_list_before) - set(instance_list_after)) != 0:
+        ig.instances.add(*instances)
+        instance_list_before = ig.policy_instance_list
+        instance_list_after = instance_list_unique
+        new_instances = set(instance_list_after) - set(instance_list_before)
+        if new_instances:
             changed = True
-            ig.policy_instance_list = list(instance_list_unique)
-            ig.save()
+            ig.policy_instance_list = ig.policy_instance_list + list(new_instances)
+            ig.save()
         return (instances, changed)

     def handle(self, **options):
@@ -97,25 +100,27 @@ class Command(BaseCommand):
         hostname_list = options.get('hostnames').split(",")

         with advisory_lock(six.text_type('instance_group_registration_{}').format(queuename)):
-            (ig, created, changed) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
+            changed2 = False
+            changed3 = False
+            (ig, created, changed1) = self.get_create_update_instance_group(queuename, inst_per, inst_min)
             if created:
                 print(six.text_type("Creating instance group {}".format(ig.name)))
             elif not created:
                 print(six.text_type("Instance Group already registered {}").format(ig.name))

             if ctrl:
-                (ig_ctrl, changed) = self.update_instance_group_controller(ig, ctrl)
-                if changed:
+                (ig_ctrl, changed2) = self.update_instance_group_controller(ig, ctrl)
+                if changed2:
                     print(six.text_type("Set controller group {} on {}.").format(ctrl, queuename))

             try:
-                (instances, changed) = self.add_instances_to_group(ig, hostname_list)
+                (instances, changed3) = self.add_instances_to_group(ig, hostname_list)
                 for i in instances:
                     print(six.text_type("Added instance {} to {}").format(i.hostname, ig.name))
             except InstanceNotFound as e:
                 instance_not_found_err = e

-            if changed:
+            if any([changed1, changed2, changed3]):
                 print('(changed: True)')

             if instance_not_found_err:
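The new policy-list handling above appends only the instances that are not already present instead of replacing the list wholesale, which is why the `--hostnames` help text now notes that already-assigned instances are kept. The merge rule in isolation (a sketch; `merge_policy_list` is an illustrative name, and `sorted()` is used here only to make the result deterministic, while the real code appends the set in arbitrary order):

```python
# Extend the existing policy list with anything new from `after`;
# report whether the list actually changed.
def merge_policy_list(before, after):
    new_instances = set(after) - set(before)
    return list(before) + sorted(new_instances), bool(new_instances)


merged, changed = merge_policy_list(['a', 'b'], ['b', 'c'])
# merged == ['a', 'b', 'c'] and changed is True: 'b' is kept, 'c' is added
```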


@@ -95,7 +95,7 @@ class ReplayJobEvents():
             raise RuntimeError("Job is of type {} and replay is not yet supported.".format(type(job)))
             sys.exit(1)

-    def run(self, job_id, speed=1.0, verbosity=0, skip=0):
+    def run(self, job_id, speed=1.0, verbosity=0, skip_range=[]):
         stats = {
             'events_ontime': {
                 'total': 0,
@@ -127,7 +127,7 @@ class ReplayJobEvents():
         je_previous = None

         for n, je_current in enumerate(job_events):
-            if n < skip:
+            if je_current.counter in skip_range:
                 continue

             if not je_previous:
@@ -193,19 +193,29 @@ class Command(BaseCommand):
     help = 'Replay job events over websockets ordered by created on date.'

+    def _parse_slice_range(self, slice_arg):
+        slice_arg = tuple([int(n) for n in slice_arg.split(':')])
+        slice_obj = slice(*slice_arg)
+
+        start = slice_obj.start or 0
+        stop = slice_obj.stop or -1
+        step = slice_obj.step or 1
+
+        return range(start, stop, step)
+
     def add_arguments(self, parser):
         parser.add_argument('--job_id', dest='job_id', type=int, metavar='j',
                             help='Id of the job to replay (job or adhoc)')
         parser.add_argument('--speed', dest='speed', type=int, metavar='s',
                             help='Speedup factor.')
-        parser.add_argument('--skip', dest='skip', type=int, metavar='k',
-                            help='Number of events to skip.')
+        parser.add_argument('--skip-range', dest='skip_range', type=str, metavar='k',
+                            default='0:-1:1', help='Range of events to skip')

     def handle(self, *args, **options):
         job_id = options.get('job_id')
         speed = options.get('speed') or 1
         verbosity = options.get('verbosity') or 0
-        skip = options.get('skip') or 0
+        skip = self._parse_slice_range(options.get('skip_range'))

         replayer = ReplayJobEvents()
         replayer.run(job_id, speed, verbosity, skip)
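`_parse_slice_range` turns a `start:stop:step` string into a range of event counters to skip. A standalone copy of that parsing logic, for illustration (note that the default `'0:-1:1'` yields an empty range, i.e. nothing is skipped):

```python
# Parse 'start:stop:step' into a range; missing parts fall back to
# 0 / -1 / 1, mirroring the management command's helper above.
def parse_slice_range(slice_arg):
    # slice(*(4,)) sets only the stop value; slice(4, 10, 2) otherwise
    slice_obj = slice(*tuple(int(n) for n in slice_arg.split(':')))
    start = slice_obj.start or 0
    stop = slice_obj.stop or -1
    step = slice_obj.step or 1
    return range(start, stop, step)


skip = parse_slice_range('4:10:2')
# list(skip) == [4, 6, 8] -> events with those counters are skipped
```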


@@ -64,15 +64,22 @@ class CallbackBrokerWorker(ConsumerMixin):
             return _handler

         if use_workers:
-            django_connection.close()
-            django_cache.close()
             for idx in range(settings.JOB_EVENT_WORKERS):
                 queue_actual = MPQueue(settings.JOB_EVENT_MAX_QUEUE_SIZE)
                 w = Process(target=self.callback_worker, args=(queue_actual, idx,))
-                w.start()
                 if settings.DEBUG:
-                    logger.info('Started worker %s' % str(idx))
+                    logger.info('Starting worker %s' % str(idx))
                 self.worker_queues.append([0, queue_actual, w])
+
+            # It's important to close these _right before_ we fork; we
+            # don't want the forked processes to inherit the open sockets
+            # for the DB and memcached connections (that way lies race
+            # conditions)
+            django_connection.close()
+            django_cache.close()
+            for _, _, w in self.worker_queues:
+                w.start()
         elif settings.DEBUG:
             logger.warn('Started callback receiver (no workers)')
@@ -162,6 +169,7 @@ class CallbackBrokerWorker(ConsumerMixin):
                 if body.get('event') == 'EOF':
                     try:
+                        final_counter = body.get('final_counter', 0)
                         logger.info('Event processing is finished for Job {}, sending notifications'.format(job_identifier))
                         # EOF events are sent when stdout for the running task is
                         # closed. don't actually persist them to the database; we
                         # approximation for when a job is "done"
                         emit_channel_notification(
                             'jobs-summary',
-                            dict(group_name='jobs', unified_job_id=job_identifier)
+                            dict(group_name='jobs', unified_job_id=job_identifier, final_counter=final_counter)
                         )
                         # Additionally, when we've processed all events, we should
                         # have all the data we need to send out success/failure


@@ -3,7 +3,6 @@ import shutil
 import subprocess
 import sys
 import tempfile
-from optparse import make_option

 from django.conf import settings
 from django.core.management.base import BaseCommand, CommandError
@@ -15,10 +14,9 @@ class Command(BaseCommand):
"""Tests SSH connectivity between a controller and target isolated node""" """Tests SSH connectivity between a controller and target isolated node"""
help = 'Tests SSH connectivity between a controller and target isolated node' help = 'Tests SSH connectivity between a controller and target isolated node'
option_list = BaseCommand.option_list + ( def add_arguments(self, parser):
make_option('--hostname', dest='hostname', type='string', parser.add_argument('--hostname', dest='hostname', type=str,
help='Hostname of an isolated node'), help='Hostname of an isolated node')
)
def handle(self, *args, **options): def handle(self, *args, **options):
hostname = options.get('hostname') hostname = options.get('hostname')
@@ -30,7 +28,7 @@ class Command(BaseCommand):
         args = [
             'ansible', 'all', '-i', '{},'.format(hostname), '-u',
             settings.AWX_ISOLATED_USERNAME, '-T5', '-m', 'shell',
-            '-a', 'hostname', '-vvv'
+            '-a', 'awx-expect -h', '-vvv'
         ]
         if all([
             getattr(settings, 'AWX_ISOLATED_KEY_GENERATION', False) is True,

View File

@@ -0,0 +1,66 @@
+import datetime
+import os
+import signal
+import subprocess
+import sys
+import time
+
+from celery import Celery
+
+from django.core.management.base import BaseCommand
+from django.conf import settings
+
+
+class Command(BaseCommand):
+    """Watch local celery workers"""
+    help=("Sends a periodic ping to the local celery process over AMQP to ensure "
+          "it's responsive; this command is only intended to run in an environment "
+          "where celeryd is running")
+
+    #
+    # Just because celery is _running_ doesn't mean it's _working_; it's
+    # imperative that celery workers are _actually_ handling AMQP messages on
+    # their appropriate queues for awx to function.  Unfortunately, we've been
+    # plagued by a variety of bugs in celery that cause it to hang and become
+    # an unresponsive zombie, such as:
+    #
+    # https://github.com/celery/celery/issues/4185
+    # https://github.com/celery/celery/issues/4457
+    #
+    # The goal of this code is periodically send a broadcast AMQP message to
+    # the celery process on the local host via celery.app.control.ping;
+    # If that _fails_, we attempt to determine the pid of the celery process
+    # and send SIGHUP (which tends to resolve these sorts of issues for us).
+    #
+
+    INTERVAL = 60
+
+    def _log(self, msg):
+        sys.stderr.write(datetime.datetime.utcnow().isoformat())
+        sys.stderr.write(' ')
+        sys.stderr.write(msg)
+        sys.stderr.write('\n')
+
+    def handle(self, **options):
+        app = Celery('awx')
+        app.config_from_object('django.conf:settings')
+        while True:
+            try:
+                pongs = app.control.ping(['celery@{}'.format(settings.CLUSTER_HOST_ID)], timeout=30)
+            except Exception:
+                pongs = []
+            if not pongs:
+                self._log('celery is not responsive to ping over local AMQP')
+                pid = self.getpid()
+                if pid:
+                    self._log('sending SIGHUP to {}'.format(pid))
+                    os.kill(pid, signal.SIGHUP)
+            time.sleep(self.INTERVAL)
+
+    def getpid(self):
+        cmd = 'supervisorctl pid tower-processes:awx-celeryd'
+        if os.path.exists('/supervisor_task.conf'):
+            cmd = 'supervisorctl -c /supervisor_task.conf pid tower-processes:celery'
+        try:
+            return int(subprocess.check_output(cmd, shell=True))
+        except Exception:
+            self._log('could not detect celery pid')
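One iteration of the watchdog loop above can be factored out so the decision logic is exercisable without celery or supervisord. The injected callables are stand-ins for `app.control.ping`, `self.getpid`, and `os.kill`; this is a sketch of the decision flow, not the command's actual structure:

```python
# ping -> no pongs -> find pid -> SIGHUP; any ping error counts as no pongs.
def check_once(ping, getpid, kill):
    try:
        pongs = ping()
    except Exception:
        pongs = []
    if not pongs:
        pid = getpid()
        if pid:
            kill(pid)
            return 'signalled'
        return 'no-pid'
    return 'responsive'


killed = []
outcome = check_once(lambda: [], lambda: 123, killed.append)
# outcome == 'signalled' and killed == [123]
```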


@@ -96,7 +96,7 @@ class InstanceManager(models.Manager):
         instance = self.filter(hostname=hostname)
         if instance.exists():
             return (False, instance[0])
-        instance = self.create(uuid=uuid, hostname=hostname)
+        instance = self.create(uuid=uuid, hostname=hostname, capacity=0)
         return (True, instance)

     def get_or_register(self):


@@ -1,6 +1,8 @@
 # Copyright (c) 2015 Ansible, Inc.
 # All Rights Reserved.

+import base64
+import json
 import logging
 import threading
 import uuid
@@ -9,12 +11,15 @@ import time
 import cProfile
 import pstats
 import os
+import re

 from django.conf import settings
 from django.contrib.auth.models import User
+from django.core.exceptions import ObjectDoesNotExist
 from django.db.models.signals import post_save
 from django.db.migrations.executor import MigrationExecutor
 from django.db import IntegrityError, connection
+from django.http import HttpResponse
 from django.utils.functional import curry
 from django.shortcuts import get_object_or_404, redirect
 from django.apps import apps
@@ -119,6 +124,21 @@ class ActivityStreamMiddleware(threading.local):
             self.instance_ids.append(instance.id)


+class SessionTimeoutMiddleware(object):
+    """
+    Resets the session timeout for both the UI and the actual session for the API
+    to the value of SESSION_COOKIE_AGE on every request if there is a valid session.
+    """
+
+    def process_response(self, request, response):
+        req_session = getattr(request, 'session', None)
+        if req_session and not req_session.is_empty():
+            expiry = int(settings.SESSION_COOKIE_AGE)
+            request.session.set_expiry(expiry)
+            response['Session-Timeout'] = expiry
+        return response
+
+
 def _customize_graph():
     from awx.main.models import Instance, Schedule, UnifiedJobTemplate
     for model in [Schedule, UnifiedJobTemplate]:
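The effect of `SessionTimeoutMiddleware.process_response` can be sketched with plain test doubles standing in for Django's request and response objects. `FakeSession`, the dict-as-response, and the `1800` value are all illustrative; the real code reads `settings.SESSION_COOKIE_AGE` and receives a full request:

```python
# Any request carrying a non-empty session gets its expiry reset and a
# Session-Timeout header; empty/missing sessions pass through untouched.
SESSION_COOKIE_AGE = 1800  # stand-in for the Django setting


class FakeSession(object):
    def __init__(self, empty=False):
        self._empty = empty
        self.expiry = None

    def is_empty(self):
        return self._empty

    def set_expiry(self, value):
        self.expiry = value


def process_response(session, response):
    if session and not session.is_empty():
        expiry = int(SESSION_COOKIE_AGE)
        session.set_expiry(expiry)
        response['Session-Timeout'] = expiry
    return response


session = FakeSession()
response = process_response(session, {})
# response['Session-Timeout'] == 1800 and session.expiry == 1800
```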
@@ -189,6 +209,56 @@ class URLModificationMiddleware(object):
         request.path_info = new_path


+class DeprecatedAuthTokenMiddleware(object):
+    """
+    Used to emulate support for the old Auth Token endpoint to ease the
+    transition to OAuth2.0.  Specifically, this middleware:
+
+    1. Intercepts POST requests to `/api/v2/authtoken/` (which now no longer
+       _actually_ exists in our urls.py)
+    2. Rewrites `request.path` to `/api/v2/users/N/personal_tokens/`
+    3. Detects the username and password in the request body (either in JSON,
+       or form-encoded variables) and builds an appropriate HTTP_AUTHORIZATION
+       Basic header
+    """
+
+    def process_request(self, request):
+        if re.match('^/api/v[12]/authtoken/?$', request.path):
+            if request.method != 'POST':
+                return HttpResponse('HTTP {} is not allowed.'.format(request.method), status=405)
+            try:
+                payload = json.loads(request.body)
+            except (ValueError, TypeError):
+                payload = request.POST
+            if 'username' not in payload or 'password' not in payload:
+                return HttpResponse('Unable to login with provided credentials.', status=401)
+            username = payload['username']
+            password = payload['password']
+            try:
+                pk = User.objects.get(username=username).pk
+            except ObjectDoesNotExist:
+                return HttpResponse('Unable to login with provided credentials.', status=401)
+            new_path = reverse('api:user_personal_token_list', kwargs={
+                'pk': pk,
+                'version': 'v2'
+            })
+            request._body = ''
+            request.META['CONTENT_TYPE'] = 'application/json'
+            request.path = request.path_info = new_path
+            auth = ' '.join([
+                'Basic',
+                base64.b64encode(
+                    six.text_type('{}:{}').format(username, password)
+                )
+            ])
+            request.environ['HTTP_AUTHORIZATION'] = auth
+            logger.warn(
+                'The Auth Token API (/api/v2/authtoken/) is deprecated and will '
+                'be replaced with OAuth2.0 in the next version of Ansible Tower '
+                '(see /api/o/ for more details).'
+            )
+
+
 class MigrationRanCheckMiddleware(object):

     def process_request(self, request):
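Step 3 of the shim above builds an HTTP Basic `Authorization` header from the submitted credentials: base64 over `user:pass`, prefixed with `Basic`. The construction in isolation (pure stdlib; `basic_auth_header` is an illustrative name, and the encode/decode steps make it work on Python 3, whereas the middleware itself is Python 2 code):

```python
import base64


# RFC 7617 Basic scheme: 'Basic ' + base64(username + ':' + password)
def basic_auth_header(username, password):
    token = base64.b64encode('{}:{}'.format(username, password).encode('utf-8'))
    return ' '.join(['Basic', token.decode('ascii')])


header = basic_auth_header('admin', 'secret')
# header == 'Basic YWRtaW46c2VjcmV0'
```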


@@ -157,7 +157,7 @@ class Migration(migrations.Migration):
                 ('status', models.CharField(default=b'pending', max_length=20, editable=False, choices=[(b'pending', 'Pending'), (b'successful', 'Successful'), (b'failed', 'Failed')])),
                 ('error', models.TextField(default=b'', editable=False, blank=True)),
                 ('notifications_sent', models.IntegerField(default=0, editable=False)),
-                ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'mattermost', 'Mattermost'), (b'irc', 'IRC')])),
+                ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'mattermost', 'Mattermost'), (b'rocketchat', 'Rocket.Chat'), (b'irc', 'IRC')])),
                 ('recipients', models.TextField(default=b'', editable=False, blank=True)),
                 ('subject', models.TextField(default=b'', editable=False, blank=True)),
                 ('body', jsonfield.fields.JSONField(default=dict, blank=True)),
@@ -174,7 +174,7 @@ class Migration(migrations.Migration):
                 ('modified', models.DateTimeField(default=None, editable=False)),
                 ('description', models.TextField(default=b'', blank=True)),
                 ('name', models.CharField(unique=True, max_length=512)),
-                ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'mattermost', 'Mattermost'), (b'irc', 'IRC')])),
+                ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'mattermost', 'Mattermost'), (b'rocketchat', 'Rocket.Chat'), (b'irc', 'IRC')])),
                 ('notification_configuration', jsonfield.fields.JSONField(default=dict)),
                 ('created_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
                 ('modified_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),


@@ -484,7 +484,7 @@ class Migration(migrations.Migration):
         migrations.AddField(
             model_name='instance',
             name='last_isolated_check',
-            field=models.DateTimeField(auto_now_add=True, null=True),
+            field=models.DateTimeField(editable=False, null=True),
         ),
         # Migrations that don't change db schema but simply to make Django ORM happy.
         # e.g. Choice updates, help_text updates, etc.


@@ -2,6 +2,7 @@
 from __future__ import unicode_literals

 # AWX
+from awx.main.migrations import _migration_utils as migration_utils
 from awx.main.migrations import _credentialtypes as credentialtypes

 from django.db import migrations, models
@@ -14,6 +15,7 @@ class Migration(migrations.Migration):
     ]

     operations = [
+        migrations.RunPython(migration_utils.set_current_apps_for_migrations),
         migrations.RunPython(credentialtypes.create_rhv_tower_credtype),
         migrations.AlterField(
             model_name='inventorysource',

View File

@@ -2,6 +2,7 @@
 from __future__ import unicode_literals

 # AWX
+from awx.main.migrations import _migration_utils as migration_utils
 from awx.main.migrations import _credentialtypes as credentialtypes

 from django.db import migrations
@@ -14,5 +15,6 @@ class Migration(migrations.Migration):
     ]

     operations = [
+        migrations.RunPython(migration_utils.set_current_apps_for_migrations),
         migrations.RunPython(credentialtypes.add_azure_cloud_environment_field),
     ]

View File

@@ -5,7 +5,7 @@ from django.db import migrations, models

 from awx.main.migrations import _migration_utils as migration_utils
 from awx.main.migrations import _credentialtypes as credentialtypes
-from awx.main.migrations._multi_cred import migrate_to_multi_cred
+from awx.main.migrations._multi_cred import migrate_to_multi_cred, migrate_back_from_multi_cred


 class Migration(migrations.Migration):
@@ -13,6 +13,13 @@ class Migration(migrations.Migration):
     dependencies = [
         ('main', '0012_v322_update_cred_types'),
     ]
+    run_before = [
+        # Django-vendored migrations will make reference to settings
+        # this migration was introduced in Django 1.11 / Tower 3.3 upgrade
+        # migration main-0009 changed the setting model and is not backward compatible,
+        # so we assure that at least all of Tower 3.2 migrations are finished before running it
+        ('auth', '0008_alter_user_username_max_length')
+    ]

     operations = [
         migrations.AddField(
@@ -25,8 +32,8 @@ class Migration(migrations.Migration):
             name='credentials',
             field=models.ManyToManyField(related_name='unifiedjobtemplates', to='main.Credential'),
         ),
-        migrations.RunPython(migration_utils.set_current_apps_for_migrations),
-        migrations.RunPython(migrate_to_multi_cred),
+        migrations.RunPython(migration_utils.set_current_apps_for_migrations, migrate_back_from_multi_cred),
+        migrations.RunPython(migrate_to_multi_cred, migration_utils.set_current_apps_for_migrations),
         migrations.RemoveField(
             model_name='job',
             name='credential',
@@ -51,5 +58,6 @@ class Migration(migrations.Migration):
             model_name='jobtemplate',
             name='vault_credential',
         ),
-        migrations.RunPython(credentialtypes.add_vault_id_field)
+        migrations.RunPython(migration_utils.set_current_apps_for_migrations, credentialtypes.remove_vault_id_field),
+        migrations.RunPython(credentialtypes.add_vault_id_field, migration_utils.set_current_apps_for_migrations)
     ]
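The change above replaces one-way `RunPython` operations with forward/reverse pairs so the whole migration can be unapplied. The contract can be sketched without Django; the tiny dict "database" and function names below are illustrative only, not AWX code:

```python
# Minimal sketch of paired forward/reverse data migrations (illustrative):
# each forward step carries a reverse callable that undoes it, and rollback
# replays the reverses in the opposite order.

def to_multi_cred(db):
    # forward: fold the single credential field into the multi-cred list
    if db.get('credential') is not None:
        db.setdefault('credentials', []).append(db.pop('credential'))

def from_multi_cred(db):
    # reverse: restore the single-credential field
    creds = db.get('credentials', [])
    db['credential'] = creds.pop() if creds else None

def apply(db, steps):
    for forward, _ in steps:
        forward(db)

def unapply(db, steps):
    for _, reverse in reversed(steps):
        reverse(db)

steps = [(to_multi_cred, from_multi_cred)]
db = {'credential': 'ssh-cred'}
apply(db, steps)
assert db['credentials'] == ['ssh-cred']
unapply(db, steps)
assert db['credential'] == 'ssh-cred'
```

This mirrors why each `RunPython` in the diff now names a second callable: without it, Django treats the operation as irreversible and refuses to migrate backwards past it.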

View File

@@ -27,7 +27,7 @@ class Migration(migrations.Migration):
                 ('verbosity', models.PositiveIntegerField(default=0, editable=False)),
                 ('start_line', models.PositiveIntegerField(default=0, editable=False)),
                 ('end_line', models.PositiveIntegerField(default=0, editable=False)),
-                ('inventory_update', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='generic_command_events', to='main.InventoryUpdate')),
+                ('inventory_update', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='inventory_update_events', to='main.InventoryUpdate')),
             ],
             options={
                 'ordering': ('-pk',),
@@ -53,7 +53,7 @@ class Migration(migrations.Migration):
                 ('verbosity', models.PositiveIntegerField(default=0, editable=False)),
                 ('start_line', models.PositiveIntegerField(default=0, editable=False)),
                 ('end_line', models.PositiveIntegerField(default=0, editable=False)),
-                ('project_update', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='generic_command_events', to='main.ProjectUpdate')),
+                ('project_update', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='project_update_events', to='main.ProjectUpdate')),
             ],
             options={
                 'ordering': ('pk',),
@@ -72,12 +72,24 @@ class Migration(migrations.Migration):
                 ('verbosity', models.PositiveIntegerField(default=0, editable=False)),
                 ('start_line', models.PositiveIntegerField(default=0, editable=False)),
                 ('end_line', models.PositiveIntegerField(default=0, editable=False)),
-                ('system_job', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='generic_command_events', to='main.SystemJob')),
+                ('system_job', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='system_job_events', to='main.SystemJob')),
             ],
             options={
                 'ordering': ('-pk',),
             },
         ),
+        migrations.AlterIndexTogether(
+            name='inventoryupdateevent',
+            index_together=set([('inventory_update', 'start_line'), ('inventory_update', 'uuid'), ('inventory_update', 'end_line')]),
+        ),
+        migrations.AlterIndexTogether(
+            name='projectupdateevent',
+            index_together=set([('project_update', 'event'), ('project_update', 'end_line'), ('project_update', 'start_line'), ('project_update', 'uuid')]),
+        ),
+        migrations.AlterIndexTogether(
+            name='systemjobevent',
+            index_together=set([('system_job', 'end_line'), ('system_job', 'uuid'), ('system_job', 'start_line')]),
+        ),
         migrations.RemoveField(
             model_name='unifiedjob',
             name='result_stdout_file',

View File

@@ -20,6 +20,11 @@ class Migration(migrations.Migration):
             name='execute_role',
             field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
         ),
+        migrations.AddField(
+            model_name='organization',
+            name='job_template_admin_role',
+            field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=b'admin_role', related_name='+', to='main.Role'),
+        ),
         migrations.AddField(
             model_name='organization',
             name='credential_admin_role',
@@ -73,7 +78,7 @@
         migrations.AlterField(
             model_name='jobtemplate',
             name='admin_role',
-            field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'project.organization.project_admin_role', b'inventory.organization.inventory_admin_role'], related_name='+', to='main.Role'),
+            field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'project.organization.job_template_admin_role', b'inventory.organization.job_template_admin_role'], related_name='+', to='main.Role'),
         ),
         migrations.AlterField(
             model_name='jobtemplate',
@@ -83,6 +88,7 @@
         migrations.AlterField(
             model_name='organization',
             name='member_role',
-            field=awx.main.fields.ImplicitRoleField(null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'admin_role', b'project_admin_role', b'inventory_admin_role', b'workflow_admin_role', b'notification_admin_role', b'credential_admin_role', b'execute_role'], related_name='+', to='main.Role'),
+            field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'admin_role', b'execute_role', b'project_admin_role', b'inventory_admin_role', b'workflow_admin_role', b'notification_admin_role', b'credential_admin_role', b'job_template_admin_role'], related_name='+', to='main.Role'),
         ),
     ]

View File

@@ -19,8 +19,8 @@ class Migration(migrations.Migration):
     operations = [
         # Run data migration before removing the old credential field
-        migrations.RunPython(migration_utils.set_current_apps_for_migrations, migrations.RunPython.noop),
-        migrations.RunPython(migrate_inventory_source_cred, migrate_inventory_source_cred_reverse),
+        migrations.RunPython(migration_utils.set_current_apps_for_migrations, migrate_inventory_source_cred_reverse),
+        migrations.RunPython(migrate_inventory_source_cred, migration_utils.set_current_apps_for_migrations),
         migrations.RemoveField(
             model_name='inventorysource',
             name='credential',

View File

@@ -13,6 +13,12 @@ class Migration(migrations.Migration):
     dependencies = [
         ('main', '0024_v330_create_user_session_membership'),
     ]
+    run_before = [
+        # As of this migration, OAuth2Application and OAuth2AccessToken are models in main app
+        # Grant and RefreshToken models are still in the oauth2_provider app and reference
+        # the app and token models, so these must be created before the oauth2_provider models
+        ('oauth2_provider', '0001_initial')
+    ]

     operations = [
@@ -58,12 +64,12 @@
         migrations.AddField(
             model_name='activitystream',
             name='o_auth2_access_token',
-            field=models.ManyToManyField(to='main.OAuth2AccessToken', blank=True, related_name='main_o_auth2_accesstoken'),
+            field=models.ManyToManyField(to='main.OAuth2AccessToken', blank=True),
         ),
         migrations.AddField(
             model_name='activitystream',
             name='o_auth2_application',
-            field=models.ManyToManyField(to='main.OAuth2Application', blank=True, related_name='main_o_auth2_application'),
+            field=models.ManyToManyField(to='main.OAuth2Application', blank=True),
         ),
     ]
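`run_before` in the hunk above is simply a reversed dependency edge: instead of this migration depending on `oauth2_provider.0001_initial`, that migration is made to depend on this one. A hedged sketch of the resulting ordering, using a plain topological sort with illustrative migration names:

```python
from collections import deque

# Illustrative sketch (not Django's planner): run_before on main.0025
# targeting oauth2_provider.0001 induces the dependency edge
# oauth2_provider.0001 -> main.0025, so main.0025 is applied first.
def order(migrations):
    # migrations: name -> set of names it depends on
    indeg = {m: len(deps) for m, deps in migrations.items()}
    dependents = {m: [] for m in migrations}
    for m, deps in migrations.items():
        for d in deps:
            dependents[d].append(m)
    queue = deque(sorted(m for m, d in indeg.items() if d == 0))
    out = []
    while queue:
        m = queue.popleft()
        out.append(m)
        for n in dependents[m]:
            indeg[n] -= 1
            if indeg[n] == 0:
                queue.append(n)
    return out

graph = {
    'main.0024': set(),
    'main.0025': {'main.0024'},             # explicit dependency
    'oauth2_provider.0001': {'main.0025'},  # edge induced by run_before
}
seq = order(graph)
assert seq.index('main.0025') < seq.index('oauth2_provider.0001')
```

That inversion is why the comment in the diff talks about the token models needing to exist "before the oauth2_provider models": the edge is declared here because the oauth2_provider app's own migrations cannot be edited.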

View File

@@ -20,4 +20,8 @@ class Migration(migrations.Migration):
             name='organization',
             field=models.ForeignKey(help_text='Organization containing this application.', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='applications', to='main.Organization'),
         ),
+        migrations.AlterUniqueTogether(
+            name='oauth2application',
+            unique_together=set([('name', 'organization')]),
+        ),
     ]
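The `AlterUniqueTogether` added above enforces at the database level that an application name is unique per organization, while the same name may recur across organizations. The effect of the composite key can be sketched as (illustrative data, not AWX code):

```python
# Illustrative sketch of a (name, organization) composite uniqueness check,
# mirroring what the unique_together constraint enforces in the database.
def check_unique(rows, name, organization):
    return (name, organization) not in {(r['name'], r['organization']) for r in rows}

apps = [{'name': 'ci-bot', 'organization': 1}]
assert check_unique(apps, 'ci-bot', 2)       # same name, different org: allowed
assert not check_unique(apps, 'ci-bot', 1)   # duplicate (name, org) pair: rejected
```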

View File

@@ -20,7 +20,7 @@ class Migration(migrations.Migration):
         migrations.AlterField(
             model_name='oauth2accesstoken',
             name='scope',
-            field=models.TextField(blank=True, help_text="Allowed scopes, further restricts user's permissions."),
+            field=models.TextField(blank=True, default=b'write', help_text="Allowed scopes, further restricts user's permissions."),
         ),
         migrations.AlterField(
             model_name='oauth2accesstoken',

View File

@@ -16,6 +16,6 @@ class Migration(migrations.Migration):
         migrations.AlterField(
             model_name='oauth2accesstoken',
             name='scope',
-            field=models.TextField(blank=True, help_text="Allowed scopes, further restricts user's permissions. Must be a simple space-separated string with allowed scopes ['read', 'write']."),
+            field=models.TextField(blank=True, default=b'write', help_text="Allowed scopes, further restricts user's permissions. Must be a simple space-separated string with allowed scopes ['read', 'write']."),
         ),
     ]

View File

@@ -0,0 +1,24 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-05-21 19:51
from __future__ import unicode_literals
import awx.main.fields
import awx.main.models.activity_stream
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0037_v330_remove_legacy_fact_cleanup'),
]
operations = [
migrations.AddField(
model_name='activitystream',
name='deleted_actor',
field=awx.main.fields.JSONField(null=True),
),
]

View File

@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-05-23 20:17
from __future__ import unicode_literals
import awx.main.fields
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0038_v330_add_deleted_activitystream_actor'),
]
operations = [
migrations.AlterField(
model_name='jobtemplate',
name='custom_virtualenv',
field=models.CharField(blank=True, default=None, help_text='Local absolute file path containing a custom Python virtualenv to use', max_length=100, null=True),
),
migrations.AlterField(
model_name='organization',
name='custom_virtualenv',
field=models.CharField(blank=True, default=None, help_text='Local absolute file path containing a custom Python virtualenv to use', max_length=100, null=True),
),
migrations.AlterField(
model_name='project',
name='custom_virtualenv',
field=models.CharField(blank=True, default=None, help_text='Local absolute file path containing a custom Python virtualenv to use', max_length=100, null=True),
),
]

View File

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-05-25 18:58
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0039_v330_custom_venv_help_text'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='controller_node',
field=models.TextField(blank=True, default=b'', editable=False, help_text='The instance that managed the isolated execution environment.'),
),
]

View File

@@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-06-14 21:03
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL),
('main', '0040_v330_unifiedjob_controller_node'),
]
operations = [
migrations.AddField(
model_name='oauth2accesstoken',
name='source_refresh_token',
field=models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='refreshed_access_token', to=settings.OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL),
),
]

View File

@@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-07-02 13:47
from __future__ import unicode_literals
import awx.main.fields
from django.db import migrations
import django.db.models.deletion
from awx.main.migrations._rbac import rebuild_role_hierarchy
class Migration(migrations.Migration):
dependencies = [
('main', '0041_v330_update_oauth_refreshtoken'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='member_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'admin_role'], related_name='+', to='main.Role'),
),
migrations.AlterField(
model_name='organization',
name='read_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null=b'True', on_delete=django.db.models.deletion.CASCADE, parent_role=[b'member_role', b'auditor_role', b'execute_role', b'project_admin_role', b'inventory_admin_role', b'workflow_admin_role', b'notification_admin_role', b'credential_admin_role', b'job_template_admin_role'], related_name='+', to='main.Role'),
),
migrations.RunPython(rebuild_role_hierarchy),
]

View File

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-07-10 14:02
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0042_v330_org_member_role_deparent'),
]
operations = [
migrations.AddField(
model_name='oauth2accesstoken',
name='modified',
field=models.DateTimeField(editable=False, auto_now=True),
),
]

View File

@@ -0,0 +1,21 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-07-17 03:57
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0043_v330_oauth2accesstoken_modified'),
]
operations = [
migrations.AddField(
model_name='inventoryupdate',
name='inventory',
field=models.ForeignKey(default=None, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='inventory_updates', to='main.Inventory'),
),
]

View File

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-07-25 17:42
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0044_v330_add_inventory_update_inventory'),
]
operations = [
migrations.AddField(
model_name='instance',
name='managed_by_policy',
field=models.BooleanField(default=True),
)
]

View File

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-07-25 21:24
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0045_v330_instance_managed_by_policy'),
]
operations = [
migrations.AlterField(
model_name='oauth2application',
name='authorization_grant_type',
field=models.CharField(choices=[(b'authorization-code', 'Authorization code'), (b'implicit', 'Implicit'), (b'password', 'Resource owner password-based')], help_text='The Grant type the user must use for acquire tokens for this application.', max_length=32),
),
]

View File

@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-07-25 20:19
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0046_v330_remove_client_credentials_grant'),
]
operations = [
migrations.AddField(
model_name='activitystream',
name='instance',
field=models.ManyToManyField(blank=True, to='main.Instance'),
),
]

View File

@@ -0,0 +1,147 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-08-16 16:46
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0047_v330_activitystream_instance'),
]
operations = [
migrations.AlterField(
model_name='credential',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'credential', u'model_name': 'credential'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='credential',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'credential', u'model_name': 'credential'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='credentialtype',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'credentialtype', u'model_name': 'credentialtype'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='credentialtype',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'credentialtype', u'model_name': 'credentialtype'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='custominventoryscript',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'custominventoryscript', u'model_name': 'custominventoryscript'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='custominventoryscript',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'custominventoryscript', u'model_name': 'custominventoryscript'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='group',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'group', u'model_name': 'group'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='group',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'group', u'model_name': 'group'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='host',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'host', u'model_name': 'host'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='host',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'host', u'model_name': 'host'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventory',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'inventory', u'model_name': 'inventory'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='inventory',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'inventory', u'model_name': 'inventory'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='label',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'label', u'model_name': 'label'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='label',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'label', u'model_name': 'label'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='notificationtemplate',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'notificationtemplate', u'model_name': 'notificationtemplate'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='notificationtemplate',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'notificationtemplate', u'model_name': 'notificationtemplate'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='organization',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'organization', u'model_name': 'organization'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='organization',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'organization', u'model_name': 'organization'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='schedule',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'schedule', u'model_name': 'schedule'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='schedule',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'schedule', u'model_name': 'schedule'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='team',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'team', u'model_name': 'team'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='team',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'team', u'model_name': 'team'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='unifiedjob',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'unifiedjob', u'model_name': 'unifiedjob'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='unifiedjob',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'unifiedjob', u'model_name': 'unifiedjob'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='unifiedjobtemplate',
name='created_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'unifiedjobtemplate', u'model_name': 'unifiedjobtemplate'}(class)s_created+", to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='unifiedjobtemplate',
name='modified_by',
field=models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{u'app_label': 'main', u'class': 'unifiedjobtemplate', u'model_name': 'unifiedjobtemplate'}(class)s_modified+", to=settings.AUTH_USER_MODEL),
),
]

View File

@@ -0,0 +1,22 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2018-08-17 16:13
from __future__ import unicode_literals
from decimal import Decimal
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0048_v330_django_created_modified_by_model_name'),
]
operations = [
migrations.AlterField(
model_name='instance',
name='capacity_adjustment',
field=models.DecimalField(decimal_places=2, default=Decimal('1'), max_digits=3, validators=[django.core.validators.MinValueValidator(0)]),
),
]

View File

@@ -180,6 +180,17 @@ def add_vault_id_field(apps, schema_editor):
     vault_credtype.save()


+def remove_vault_id_field(apps, schema_editor):
+    vault_credtype = CredentialType.objects.get(kind='vault')
+    idx = 0
+    for i, input in enumerate(vault_credtype.inputs['fields']):
+        if input['id'] == 'vault_id':
+            idx = i
+            break
+    vault_credtype.inputs['fields'].pop(idx)
+    vault_credtype.save()
+
+
 def create_rhv_tower_credtype(apps, schema_editor):
     CredentialType.setup_tower_managed_defaults()
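The `remove_vault_id_field` reverse above scans `inputs['fields']` for the entry whose `id` is `'vault_id'` and pops it. The same list manipulation on sample data, in a slightly safer variant that only pops when a match is found (the field layout here is illustrative; the real code operates on a CredentialType row):

```python
# Illustrative sketch of the remove_vault_id_field list manipulation on a
# sample inputs dict. Unlike the migration helper, this variant leaves the
# list untouched when no field matches, instead of popping index 0.
def remove_field(inputs, field_id):
    fields = inputs['fields']
    for i, field in enumerate(fields):
        if field['id'] == field_id:
            fields.pop(i)
            break
    return inputs

inputs = {'fields': [{'id': 'vault_password'}, {'id': 'vault_id'}]}
remove_field(inputs, 'vault_id')
assert [f['id'] for f in inputs['fields']] == ['vault_password']
```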

View File

@@ -1,56 +1,124 @@
import logging
logger = logging.getLogger('awx.main.migrations')
def migrate_to_multi_cred(app, schema_editor): def migrate_to_multi_cred(app, schema_editor):
Job = app.get_model('main', 'Job') Job = app.get_model('main', 'Job')
JobTemplate = app.get_model('main', 'JobTemplate') JobTemplate = app.get_model('main', 'JobTemplate')
ct = 0
for cls in (Job, JobTemplate): for cls in (Job, JobTemplate):
for j in cls.objects.iterator(): for j in cls.objects.iterator():
if j.credential: if j.credential:
ct += 1
logger.debug('Migrating cred %s to %s %s multi-cred relation.', j.credential_id, cls, j.id)
j.credentials.add(j.credential) j.credentials.add(j.credential)
if j.vault_credential: if j.vault_credential:
ct += 1
logger.debug('Migrating cred %s to %s %s multi-cred relation.', j.vault_credential_id, cls, j.id)
j.credentials.add(j.vault_credential) j.credentials.add(j.vault_credential)
for cred in j.extra_credentials.all(): for cred in j.extra_credentials.all():
ct += 1
logger.debug('Migrating cred %s to %s %s multi-cred relation.', cred.id, cls, j.id)
j.credentials.add(cred) j.credentials.add(cred)
if ct:
logger.info('Finished migrating %s credentials to multi-cred', ct)
def migrate_back_from_multi_cred(app, schema_editor):
Job = app.get_model('main', 'Job')
JobTemplate = app.get_model('main', 'JobTemplate')
CredentialType = app.get_model('main', 'CredentialType')
vault_credtype = CredentialType.objects.get(kind='vault')
ssh_credtype = CredentialType.objects.get(kind='ssh')
ct = 0
for cls in (Job, JobTemplate):
for j in cls.objects.iterator():
for cred in j.credentials.iterator():
changed = False
if cred.credential_type_id == vault_credtype.id:
changed = True
ct += 1
logger.debug('Reverse migrating vault cred %s for %s %s', cred.id, cls, j.id)
j.vault_credential = cred
elif cred.credential_type_id == ssh_credtype.id:
changed = True
ct += 1
logger.debug('Reverse migrating ssh cred %s for %s %s', cred.id, cls, j.id)
j.credential = cred
else:
changed = True
ct += 1
logger.debug('Reverse migrating cloud cred %s for %s %s', cred.id, cls, j.id)
j.extra_credentials.add(cred)
if changed:
j.save()
if ct:
logger.info('Finished reverse migrating %s credentials from multi-cred', ct)


def migrate_workflow_cred(app, schema_editor):
    WorkflowJobTemplateNode = app.get_model('main', 'WorkflowJobTemplateNode')
    WorkflowJobNode = app.get_model('main', 'WorkflowJobNode')
    ct = 0
    for cls in (WorkflowJobNode, WorkflowJobTemplateNode):
        for node in cls.objects.iterator():
            if node.credential:
                logger.debug('Migrating prompted credential %s for %s %s', node.credential_id, cls, node.id)
                ct += 1
                node.credentials.add(node.credential)
    if ct:
        logger.info('Finished migrating total of %s workflow prompted credentials', ct)


def migrate_workflow_cred_reverse(app, schema_editor):
    WorkflowJobTemplateNode = app.get_model('main', 'WorkflowJobTemplateNode')
    WorkflowJobNode = app.get_model('main', 'WorkflowJobNode')
    ct = 0
    for cls in (WorkflowJobNode, WorkflowJobTemplateNode):
        for node in cls.objects.iterator():
            cred = node.credentials.first()
            if cred:
                node.credential = cred
                logger.debug('Reverse migrating prompted credential %s for %s %s', node.credential_id, cls, node.id)
                ct += 1
                node.save(update_fields=['credential'])
    if ct:
        logger.info('Finished reverse migrating total of %s workflow prompted credentials', ct)


def migrate_inventory_source_cred(app, schema_editor):
    InventoryUpdate = app.get_model('main', 'InventoryUpdate')
    InventorySource = app.get_model('main', 'InventorySource')
    ct = 0
    for cls in (InventoryUpdate, InventorySource):
        for obj in cls.objects.iterator():
            if obj.credential:
                ct += 1
                logger.debug('Migrating credential %s for %s %s', obj.credential_id, cls, obj.id)
                obj.credentials.add(obj.credential)
    if ct:
        logger.info('Finished migrating %s inventory source credentials to multi-cred', ct)


def migrate_inventory_source_cred_reverse(app, schema_editor):
    InventoryUpdate = app.get_model('main', 'InventoryUpdate')
    InventorySource = app.get_model('main', 'InventorySource')
    ct = 0
    for cls in (InventoryUpdate, InventorySource):
        for obj in cls.objects.iterator():
            cred = obj.credentials.first()
            if cred:
                ct += 1
                logger.debug('Reverse migrating credential %s for %s %s', cred.id, cls, obj.id)
                obj.credential = cred
                obj.save()
    if ct:
        logger.info('Finished reverse migrating %s inventory source credentials from multi-cred', ct)
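The forward/reverse pairs above all follow the same shape: fan the old dedicated credential slots into the new `credentials` relation, and on the way back route one credential per slot by kind. A minimal stand-alone sketch of that shape, using hypothetical plain-Python stand-ins (`FakeCred`, `FakeJob`) rather than the real Django models:

```python
class FakeCred(object):
    """Hypothetical stand-in for a Credential row."""
    def __init__(self, pk, kind):
        self.pk = pk
        self.kind = kind


class FakeJob(object):
    """Hypothetical stand-in for a Job/JobTemplate row."""
    def __init__(self, credential=None, vault_credential=None, extra=()):
        self.credential = credential              # old dedicated ssh slot
        self.vault_credential = vault_credential  # old dedicated vault slot
        self.extra_credentials = list(extra)      # old cloud/extra creds
        self.credentials = set()                  # the new multi-cred relation


def migrate_forward(job):
    # consolidate every dedicated slot into the multi-cred relation
    for cred in [job.credential, job.vault_credential] + job.extra_credentials:
        if cred is not None:
            job.credentials.add(cred)


def migrate_back(job):
    # route each credential back to its dedicated slot by kind
    for cred in job.credentials:
        if cred.kind == 'ssh':
            job.credential = cred
        elif cred.kind == 'vault':
            job.vault_credential = cred
        elif cred not in job.extra_credentials:
            job.extra_credentials.append(cred)
```

Note one asymmetry visible in the real reverse migrations as well: when several credentials of the same kind exist (e.g. `credentials.first()` in the workflow/inventory reversals), only one survives the round trip.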

View File

@@ -33,6 +33,7 @@ class ActivityStream(models.Model):
    operation = models.CharField(max_length=13, choices=OPERATION_CHOICES)
    timestamp = models.DateTimeField(auto_now_add=True)
    changes = models.TextField(blank=True)
    deleted_actor = JSONField(null=True)

    object_relationship_type = models.TextField(blank=True)
    object1 = models.TextField()
@@ -65,6 +66,7 @@ class ActivityStream(models.Model):
    notification = models.ManyToManyField("Notification", blank=True)
    label = models.ManyToManyField("Label", blank=True)
    role = models.ManyToManyField("Role", blank=True)
    instance = models.ManyToManyField("Instance", blank=True)
    instance_group = models.ManyToManyField("InstanceGroup", blank=True)
    o_auth2_application = models.ManyToManyField("OAuth2Application", blank=True)
    o_auth2_access_token = models.ManyToManyField("OAuth2AccessToken", blank=True)
@@ -77,6 +79,18 @@ class ActivityStream(models.Model):
        return reverse('api:activity_stream_detail', kwargs={'pk': self.pk}, request=request)

    def save(self, *args, **kwargs):
        # Store denormalized actor metadata so that we retain it for accounting
        # purposes when the User row is deleted.
        if self.actor:
            self.deleted_actor = {
                'id': self.actor_id,
                'username': self.actor.username,
                'first_name': self.actor.first_name,
                'last_name': self.actor.last_name,
            }
            if 'update_fields' in kwargs and 'deleted_actor' not in kwargs['update_fields']:
                kwargs['update_fields'].append('deleted_actor')
        # For compatibility with Django 1.4.x, attempt to handle any calls to
        # save that pass update_fields.
        try:

View File

@@ -221,7 +221,46 @@ class PasswordFieldsModel(BaseModel):
            update_fields.append(field)


class HasEditsMixin(BaseModel):
    """Mixin which will keep the versions of field values from last edit
    so we can tell if current model has unsaved changes.
    """

    class Meta:
        abstract = True

    @classmethod
    def _get_editable_fields(cls):
        fds = set([])
        for field in cls._meta.concrete_fields:
            if hasattr(field, 'attname'):
                if field.attname == 'id':
                    continue
                elif field.attname.endswith('ptr_id'):
                    # polymorphic fields should always be non-editable, see:
                    # https://github.com/django-polymorphic/django-polymorphic/issues/349
                    continue
                if getattr(field, 'editable', True):
                    fds.add(field.attname)
        return fds

    def _get_fields_snapshot(self, fields_set=None):
        new_values = {}
        if fields_set is None:
            fields_set = self._get_editable_fields()
        for attr, val in self.__dict__.items():
            if attr in fields_set:
                new_values[attr] = val
        return new_values

    def _values_have_edits(self, new_values):
        return any(
            new_values.get(fd_name, None) != self._prior_values_store.get(fd_name, None)
            for fd_name in new_values.keys()
        )
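The snapshot-and-compare approach of `HasEditsMixin` can be illustrated with a small stand-alone sketch. This is plain Python, not the Django mixin itself; the `TrackedConfig` class and its attribute names are hypothetical:

```python
class TrackedConfig(object):
    """Toy object using the same idea as HasEditsMixin: snapshot the
    "editable" attribute values at load time, then diff them against a
    fresh snapshot to decide whether anything was actually edited."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        # snapshot of editable values as they were at load time
        self._prior_values_store = self._get_fields_snapshot()

    def _get_fields_snapshot(self):
        # underscore-prefixed attributes play the role of non-editable fields
        return {k: v for k, v in self.__dict__.items() if not k.startswith('_')}

    def _values_have_edits(self, new_values):
        return any(
            new_values.get(name) != self._prior_values_store.get(name)
            for name in new_values
        )

    def has_unsaved_changes(self):
        return self._values_have_edits(self._get_fields_snapshot())
```

Changing `capacity` flips `has_unsaved_changes()` to `True`; `PrimordialModel.save()` below uses the same check so that `modified_by` is only bumped when an editable field really changed, rather than on every `save()` call.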


class PrimordialModel(HasEditsMixin, CreatedModifiedModel):
    '''
    Common model for all object types that have these standard fields
    must use a subclass CommonModel or CommonModelNameNotUnique though
@@ -254,9 +293,13 @@ class PrimordialModel(CreatedModifiedModel):
    tags = TaggableManager(blank=True)

    def __init__(self, *args, **kwargs):
        r = super(PrimordialModel, self).__init__(*args, **kwargs)
        self._prior_values_store = self._get_fields_snapshot()
        return r

    def save(self, *args, **kwargs):
        update_fields = kwargs.get('update_fields', [])
        user = get_current_user()
        if user and not user.id:
            user = None
@@ -264,15 +307,14 @@ class PrimordialModel(CreatedModifiedModel):
            self.created_by = user
            if 'created_by' not in update_fields:
                update_fields.append('created_by')
        # Update modified_by if any editable fields have changed
        new_values = self._get_fields_snapshot()
        if (not self.pk and not self.modified_by) or self._values_have_edits(new_values):
            self.modified_by = user
            if 'modified_by' not in update_fields:
                update_fields.append('modified_by')
        super(PrimordialModel, self).save(*args, **kwargs)
        self._prior_values_store = new_values
    def clean_description(self):
        # Description should always be empty string, never null.

View File

@@ -14,7 +14,7 @@ from jinja2 import Template
# Django
from django.db import models
from django.utils.translation import ugettext_lazy as _, ugettext_noop
from django.core.exceptions import ValidationError
from django.utils.encoding import force_text
@@ -419,7 +419,7 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
            else:
                fmt_str = six.text_type('{}_{}')
            return fmt_str.format(type_alias, self.inputs.get('vault_id'))
        return six.text_type(type_alias)
    @staticmethod
    def unique_dict(cred_qs):
@@ -623,6 +623,11 @@ class CredentialType(CommonModelNameNotUnique):
            if len(value):
                namespace[field_name] = value

        # default missing boolean fields to False
        for field in self.inputs.get('fields', []):
            if field['type'] == 'boolean' and field['id'] not in credential.inputs.keys():
                namespace[field['id']] = safe_namespace[field['id']] = False

        file_tmpls = self.injectors.get('file', {})
        # If any file templates are provided, render the files and update the
        # special `tower` template namespace so the filename can be
@@ -673,46 +678,46 @@ class CredentialType(CommonModelNameNotUnique):
    def ssh(cls):
        return cls(
            kind='ssh',
            name=ugettext_noop('Machine'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                    'ask_at_runtime': True
                }, {
                    'id': 'ssh_key_data',
                    'label': ugettext_noop('SSH Private Key'),
                    'type': 'string',
                    'format': 'ssh_private_key',
                    'secret': True,
                    'multiline': True
                }, {
                    'id': 'ssh_key_unlock',
                    'label': ugettext_noop('Private Key Passphrase'),
                    'type': 'string',
                    'secret': True,
                    'ask_at_runtime': True
                }, {
                    'id': 'become_method',
                    'label': ugettext_noop('Privilege Escalation Method'),
                    'type': 'become_method',
                    'help_text': ugettext_noop('Specify a method for "become" operations. This is '
                                               'equivalent to specifying the --become-method '
                                               'Ansible parameter.')
                }, {
                    'id': 'become_username',
                    'label': ugettext_noop('Privilege Escalation Username'),
                    'type': 'string',
                }, {
                    'id': 'become_password',
                    'label': ugettext_noop('Privilege Escalation Password'),
                    'type': 'string',
                    'secret': True,
                    'ask_at_runtime': True
@@ -728,28 +733,28 @@ def ssh(cls):
    def scm(cls):
        return cls(
            kind='scm',
            name=ugettext_noop('Source Control'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True
                }, {
                    'id': 'ssh_key_data',
                    'label': ugettext_noop('SCM Private Key'),
                    'type': 'string',
                    'format': 'ssh_private_key',
                    'secret': True,
                    'multiline': True
                }, {
                    'id': 'ssh_key_unlock',
                    'label': ugettext_noop('Private Key Passphrase'),
                    'type': 'string',
                    'secret': True
                }],
@@ -764,25 +769,25 @@ def scm(cls):
    def vault(cls):
        return cls(
            kind='vault',
            name=ugettext_noop('Vault'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'vault_password',
                    'label': ugettext_noop('Vault Password'),
                    'type': 'string',
                    'secret': True,
                    'ask_at_runtime': True
                }, {
                    'id': 'vault_id',
                    'label': ugettext_noop('Vault Identifier'),
                    'type': 'string',
                    'format': 'vault_id',
                    'help_text': ugettext_noop('Specify an (optional) Vault ID. This is '
                                               'equivalent to specifying the --vault-id '
                                               'Ansible parameter for providing multiple Vault '
                                               'passwords. Note: this feature only works in '
                                               'Ansible 2.4+.')
                }],
                'required': ['vault_password'],
            }
@@ -793,37 +798,37 @@ def vault(cls):
    def net(cls):
        return cls(
            kind='net',
            name=ugettext_noop('Network'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'ssh_key_data',
                    'label': ugettext_noop('SSH Private Key'),
                    'type': 'string',
                    'format': 'ssh_private_key',
                    'secret': True,
                    'multiline': True
                }, {
                    'id': 'ssh_key_unlock',
                    'label': ugettext_noop('Private Key Passphrase'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'authorize',
                    'label': ugettext_noop('Authorize'),
                    'type': 'boolean',
                }, {
                    'id': 'authorize_password',
                    'label': ugettext_noop('Authorize Password'),
                    'type': 'string',
                    'secret': True,
                }],
@@ -840,27 +845,27 @@ def net(cls):
    def aws(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Amazon Web Services'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Access Key'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Secret Key'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'security_token',
                    'label': ugettext_noop('STS Token'),
                    'type': 'string',
                    'secret': True,
                    'help_text': ugettext_noop('Security Token Service (STS) is a web service '
                                               'that enables you to request temporary, '
                                               'limited-privilege credentials for AWS Identity '
                                               'and Access Management (IAM) users.'),
                }],
                'required': ['username', 'password']
            }
@@ -871,36 +876,36 @@ def aws(cls):
    def openstack(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('OpenStack'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password (API Key)'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'host',
                    'label': ugettext_noop('Host (Authentication URL)'),
                    'type': 'string',
                    'help_text': ugettext_noop('The host to authenticate with. For example, '
                                               'https://openstack.business.com/v2.0/')
                }, {
                    'id': 'project',
                    'label': ugettext_noop('Project (Tenant Name)'),
                    'type': 'string',
                }, {
                    'id': 'domain',
                    'label': ugettext_noop('Domain Name'),
                    'type': 'string',
                    'help_text': ugettext_noop('OpenStack domains define administrative boundaries. '
                                               'It is only needed for Keystone v3 authentication '
                                               'URLs. Refer to Ansible Tower documentation for '
                                               'common scenarios.')
                }],
                'required': ['username', 'password', 'host', 'project']
            }
@@ -911,22 +916,22 @@ def openstack(cls):
    def vmware(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('VMware vCenter'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'host',
                    'label': ugettext_noop('VCenter Host'),
                    'type': 'string',
                    'help_text': ugettext_noop('Enter the hostname or IP address that corresponds '
                                               'to your VMware vCenter.')
                }, {
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }],
@@ -939,22 +944,22 @@ def vmware(cls):
    def satellite6(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Red Hat Satellite 6'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'host',
                    'label': ugettext_noop('Satellite 6 URL'),
                    'type': 'string',
                    'help_text': ugettext_noop('Enter the URL that corresponds to your Red Hat '
                                               'Satellite 6 server. For example, https://satellite.example.org')
                }, {
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }],
@@ -967,23 +972,23 @@ def satellite6(cls):
    def cloudforms(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Red Hat CloudForms'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'host',
                    'label': ugettext_noop('CloudForms URL'),
                    'type': 'string',
                    'help_text': ugettext_noop('Enter the URL for the virtual machine that '
                                               'corresponds to your CloudForm instance. '
                                               'For example, https://cloudforms.example.org')
                }, {
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }],
@@ -996,32 +1001,32 @@ def cloudforms(cls):
    def gce(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Google Compute Engine'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Service Account Email Address'),
                    'type': 'string',
                    'help_text': ugettext_noop('The email address assigned to the Google Compute '
                                               'Engine service account.')
                }, {
                    'id': 'project',
                    'label': 'Project',
                    'type': 'string',
                    'help_text': ugettext_noop('The Project ID is the GCE assigned identification. '
                                               'It is often constructed as three words or two words '
                                               'followed by a three-digit number. Examples: project-id-000 '
                                               'and another-project-id')
                }, {
                    'id': 'ssh_key_data',
                    'label': ugettext_noop('RSA Private Key'),
                    'type': 'string',
                    'format': 'ssh_private_key',
                    'secret': True,
                    'multiline': True,
                    'help_text': ugettext_noop('Paste the contents of the PEM file associated '
                                               'with the service account email.')
                }],
                'required': ['username', 'ssh_key_data'],
            }
@@ -1032,43 +1037,43 @@ def gce(cls):
    def azure_rm(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Microsoft Azure Resource Manager'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'subscription',
                    'label': ugettext_noop('Subscription ID'),
                    'type': 'string',
                    'help_text': ugettext_noop('Subscription ID is an Azure construct, which is '
                                               'mapped to a username.')
                }, {
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'client',
                    'label': ugettext_noop('Client ID'),
                    'type': 'string'
                }, {
                    'id': 'secret',
                    'label': ugettext_noop('Client Secret'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'tenant',
                    'label': ugettext_noop('Tenant ID'),
                    'type': 'string'
                }, {
                    'id': 'cloud_environment',
                    'label': ugettext_noop('Azure Cloud Environment'),
                    'type': 'string',
                    'help_text': ugettext_noop('Environment variable AZURE_CLOUD_ENVIRONMENT when'
                                               ' using Azure GovCloud or Azure stack.')
                }],
                'required': ['subscription'],
            }
@@ -1079,16 +1084,16 @@ def azure_rm(cls):
    def insights(cls):
        return cls(
            kind='insights',
            name=ugettext_noop('Insights'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True
                }],
@@ -1107,28 +1112,28 @@ def insights(cls):
    def rhv(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Red Hat Virtualization'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'host',
                    'label': ugettext_noop('Host (Authentication URL)'),
                    'type': 'string',
                    'help_text': ugettext_noop('The host to authenticate with.')
                }, {
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'ca_file',
                    'label': ugettext_noop('CA File'),
                    'type': 'string',
                    'help_text': ugettext_noop('Absolute file path to the CA file to use (optional)')
                }],
                'required': ['host', 'username', 'password'],
            },
@@ -1159,26 +1164,26 @@ def rhv(cls):
    def tower(cls):
        return cls(
            kind='cloud',
            name=ugettext_noop('Ansible Tower'),
            managed_by_tower=True,
            inputs={
                'fields': [{
                    'id': 'host',
                    'label': ugettext_noop('Ansible Tower Hostname'),
                    'type': 'string',
                    'help_text': ugettext_noop('The Ansible Tower base URL to authenticate with.')
                }, {
                    'id': 'username',
                    'label': ugettext_noop('Username'),
                    'type': 'string'
                }, {
                    'id': 'password',
                    'label': ugettext_noop('Password'),
                    'type': 'string',
                    'secret': True,
                }, {
                    'id': 'verify_ssl',
                    'label': ugettext_noop('Verify SSL'),
                    'type': 'boolean',
                    'secret': False
                }],

View File

@@ -1,5 +1,6 @@
import datetime
import logging
from collections import defaultdict

from django.conf import settings
from django.db import models, DatabaseError
@@ -34,11 +35,26 @@ def sanitize_event_keys(kwargs, valid_keys):
    for key in [
        'play', 'role', 'task', 'playbook'
    ]:
        if isinstance(kwargs.get('event_data', {}).get(key), six.string_types):
            if len(kwargs['event_data'][key]) > 1024:
                kwargs['event_data'][key] = Truncator(kwargs['event_data'][key]).chars(1024)


def create_host_status_counts(event_data):
    host_status = {}
    host_status_keys = ['skipped', 'ok', 'changed', 'failures', 'dark']

    for key in host_status_keys:
        for host in event_data.get(key, {}):
            host_status[host] = key

    host_status_counts = defaultdict(lambda: 0)

    for value in host_status.values():
        host_status_counts[value] += 1

    return dict(host_status_counts)
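The `create_host_status_counts` helper is pure Python and easy to exercise in isolation. One subtlety worth noting: a host listed under several status keys ends up counted only once, under the last matching key in `host_status_keys`, because later keys overwrite earlier ones in the `host_status` dict. A standalone sketch of the same logic with made-up event data:

```python
from collections import defaultdict


def create_host_status_counts(event_data):
    # map each host to its (last listed) status, then count hosts per status
    host_status = {}
    host_status_keys = ['skipped', 'ok', 'changed', 'failures', 'dark']
    for key in host_status_keys:
        for host in event_data.get(key, {}):
            host_status[host] = key
    host_status_counts = defaultdict(lambda: 0)
    for value in host_status.values():
        host_status_counts[value] += 1
    return dict(host_status_counts)


# hypothetical event_data payload, in the shape Ansible stats events use
event_data = {
    'ok': {'web1': 2, 'web2': 1},
    'changed': {'web2': 1},   # web2 moves from 'ok' to 'changed'
    'dark': {'db1': 1},
}
print(create_host_status_counts(event_data))  # {'ok': 1, 'changed': 1, 'dark': 1}
```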


class BasePlaybookEvent(CreatedModifiedModel):
    '''
@@ -194,6 +210,9 @@ class BasePlaybookEvent(CreatedModifiedModel):
    def event_level(self):
        return self.LEVEL_FOR_EVENT.get(self.event, 0)

    def get_host_status_counts(self):
        return create_host_status_counts(getattr(self, 'event_data', {}))

    def get_event_display2(self):
        msg = self.get_event_display()
        if self.event == 'playbook_on_play_start':
@@ -588,6 +607,9 @@ class BaseCommandEvent(CreatedModifiedModel):
        '''
        return self.event

    def get_host_status_counts(self):
        return create_host_status_counts(getattr(self, 'event_data', {}))


class AdHocCommandEvent(BaseCommandEvent):

View File

@@ -1,8 +1,12 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.
import six
import random

from decimal import Decimal

from django.core.exceptions import ValidationError
from django.core.validators import MinValueValidator
from django.db import models, connection
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
@@ -16,6 +20,7 @@ from awx import __version__ as awx_application_version
from awx.api.versioning import reverse
from awx.main.managers import InstanceManager, InstanceGroupManager
from awx.main.fields import JSONField
from awx.main.models.base import BaseModel, HasEditsMixin
from awx.main.models.inventory import InventoryUpdate
from awx.main.models.jobs import Job
from awx.main.models.projects import ProjectUpdate
@@ -26,7 +31,37 @@ from awx.main.models.mixins import RelatedJobsMixin
__all__ = ('Instance', 'InstanceGroup', 'JobOrigin', 'TowerScheduleState',) __all__ = ('Instance', 'InstanceGroup', 'JobOrigin', 'TowerScheduleState',)
class Instance(models.Model): def validate_queuename(v):
# celery and kombu don't play nice with unicode in queue names
if v:
try:
'{}'.format(v.decode('utf-8'))
except UnicodeEncodeError:
raise ValidationError(_(six.text_type('{} contains unsupported characters')).format(v))
class HasPolicyEditsMixin(HasEditsMixin):
class Meta:
abstract = True
def __init__(self, *args, **kwargs):
r = super(BaseModel, self).__init__(*args, **kwargs)
self._prior_values_store = self._get_fields_snapshot()
return r
def save(self, *args, **kwargs):
super(BaseModel, self).save(*args, **kwargs)
self._prior_values_store = self._get_fields_snapshot()
def has_policy_changes(self):
if not hasattr(self, 'POLICY_FIELDS'):
raise RuntimeError('HasPolicyEditsMixin Model needs to set POLICY_FIELDS')
new_values = self._get_fields_snapshot(fields_set=self.POLICY_FIELDS)
return self._values_have_edits(new_values)
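The snapshot-and-compare pattern that `HasPolicyEditsMixin` relies on can be sketched standalone. This is a minimal illustration with a hypothetical `PolicyEditTracker` class, not the actual AWX base-model code: record the policy-relevant fields on load/save, then compare a fresh snapshot against the stored one to detect edits.

```python
# Standalone sketch (assumption: not the real AWX mixin) of the
# snapshot-and-compare change detection used above.
class PolicyEditTracker:
    POLICY_FIELDS = frozenset(('managed_by_policy', 'hostname', 'capacity_adjustment'))

    def __init__(self, **fields):
        for name, value in fields.items():
            setattr(self, name, value)
        self._prior = self._snapshot()

    def _snapshot(self):
        # Capture the current value of every policy-relevant field.
        return {f: getattr(self, f, None) for f in self.POLICY_FIELDS}

    def save(self):
        # A real model would persist to the database here; this sketch
        # only refreshes the stored snapshot.
        self._prior = self._snapshot()

    def has_policy_changes(self):
        # True if any tracked field differs from the stored snapshot.
        return self._snapshot() != self._prior


t = PolicyEditTracker(managed_by_policy=True, hostname='node1', capacity_adjustment=1.0)
assert not t.has_policy_changes()
t.hostname = 'node2'
assert t.has_policy_changes()
t.save()
assert not t.has_policy_changes()
```

This is why the signal handlers further down only schedule `apply_cluster_membership_policies` when `created or instance.has_policy_changes()` — ordinary saves that touch no policy field trigger no cluster re-balancing.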
+
+class Instance(HasPolicyEditsMixin, BaseModel):
     """A model representing an AWX instance running against this database."""
 
     objects = InstanceManager()
@@ -37,7 +72,6 @@ class Instance(models.Model):
     last_isolated_check = models.DateTimeField(
         null=True,
         editable=False,
-        auto_now_add=True
     )
     version = models.CharField(max_length=24, blank=True)
     capacity = models.PositiveIntegerField(
@@ -48,10 +82,14 @@ class Instance(models.Model):
         default=Decimal(1.0),
         max_digits=3,
         decimal_places=2,
+        validators=[MinValueValidator(0)]
     )
     enabled = models.BooleanField(
         default=True
     )
+    managed_by_policy = models.BooleanField(
+        default=True
+    )
     cpu = models.IntegerField(
         default=0,
         editable=False,
@@ -72,6 +110,8 @@ class Instance(models.Model):
     class Meta:
         app_label = 'main'
 
+    POLICY_FIELDS = frozenset(('managed_by_policy', 'hostname', 'capacity_adjustment'))
+
     def get_absolute_url(self, request=None):
         return reverse('api:instance_detail', kwargs={'pk': self.pk}, request=request)
 
@@ -80,6 +120,10 @@ class Instance(models.Model):
         return sum(x.task_impact for x in UnifiedJob.objects.filter(execution_node=self.hostname,
                                                                     status__in=('running', 'waiting')))
 
+    @property
+    def remaining_capacity(self):
+        return self.capacity - self.consumed_capacity
+
     @property
     def role(self):
         # NOTE: TODO: Likely to repurpose this once standalone ramparts are a thing
@@ -89,6 +133,10 @@ class Instance(models.Model):
     def jobs_running(self):
         return UnifiedJob.objects.filter(execution_node=self.hostname, status__in=('running', 'waiting',)).count()
 
+    @property
+    def jobs_total(self):
+        return UnifiedJob.objects.filter(execution_node=self.hostname).count()
+
     def is_lost(self, ref_time=None, isolated=False):
         if ref_time is None:
             ref_time = now()
@@ -100,6 +148,8 @@ class Instance(models.Model):
     def is_controller(self):
         return Instance.objects.filter(rampart_groups__controller__instances=self).exists()
 
+    def is_isolated(self):
+        return self.rampart_groups.filter(controller__isnull=False).exists()
+
     def refresh_capacity(self):
         cpu = get_cpu_capacity()
@@ -113,9 +163,13 @@ class Instance(models.Model):
         self.save(update_fields=['capacity', 'version', 'modified', 'cpu',
                                  'memory', 'cpu_capacity', 'mem_capacity'])
 
+    def clean_hostname(self):
+        validate_queuename(self.hostname)
+        return self.hostname
+
 
-class InstanceGroup(models.Model, RelatedJobsMixin):
+class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
     """A model representing a Queue/Group of AWX Instances."""
 
     objects = InstanceGroupManager()
@@ -150,6 +204,10 @@ class InstanceGroup(models.Model, RelatedJobsMixin):
         help_text=_("List of exact-match Instances that will always be automatically assigned to this group")
     )
 
+    POLICY_FIELDS = frozenset((
+        'policy_instance_list', 'policy_instance_minimum', 'policy_instance_percentage', 'controller'
+    ))
+
     def get_absolute_url(self, request=None):
         return reverse('api:instance_group_detail', kwargs={'pk': self.pk}, request=request)
 
@@ -157,6 +215,15 @@ class InstanceGroup(models.Model, RelatedJobsMixin):
     def capacity(self):
         return sum([inst.capacity for inst in self.instances.all()])
 
+    @property
+    def jobs_running(self):
+        return UnifiedJob.objects.filter(status__in=('running', 'waiting'),
+                                         instance_group=self).count()
+
+    @property
+    def jobs_total(self):
+        return UnifiedJob.objects.filter(instance_group=self).count()
+
     '''
     RelatedJobsMixin
     '''
@@ -167,6 +234,37 @@ class InstanceGroup(models.Model, RelatedJobsMixin):
     class Meta:
         app_label = 'main'
 
+    def clean_name(self):
+        validate_queuename(self.name)
+        return self.name
+
+    def fit_task_to_most_remaining_capacity_instance(self, task):
+        instance_most_capacity = None
+        for i in self.instances.filter(capacity__gt=0).order_by('hostname'):
+            if not i.enabled:
+                continue
+            if i.remaining_capacity >= task.task_impact and \
+                    (instance_most_capacity is None or
+                     i.remaining_capacity > instance_most_capacity.remaining_capacity):
+                instance_most_capacity = i
+        return instance_most_capacity
+
+    def find_largest_idle_instance(self):
+        largest_instance = None
+        for i in self.instances.filter(capacity__gt=0).order_by('hostname'):
+            if i.jobs_running == 0:
+                if largest_instance is None:
+                    largest_instance = i
+                elif i.capacity > largest_instance.capacity:
+                    largest_instance = i
+        return largest_instance
+
+    def choose_online_controller_node(self):
+        return random.choice(list(self.controller
+                                      .instances
+                                      .filter(capacity__gt=0)
+                                      .values_list('hostname', flat=True)))
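The fit-to-most-remaining-capacity selection added to `InstanceGroup` can be illustrated with a rough standalone sketch, using plain dicts instead of ORM instances (field names here are illustrative, not the real model API): skip disabled or zero-capacity nodes, and among those with enough free room for the task, keep the one with the most remaining capacity.

```python
# Illustrative sketch (not the ORM version) of picking the instance with
# the most remaining capacity that still fits the task's impact.
def fit_task(instances, task_impact):
    best = None  # (instance, remaining_capacity)
    for inst in sorted(instances, key=lambda i: i['hostname']):
        if not inst['enabled'] or inst['capacity'] <= 0:
            continue
        remaining = inst['capacity'] - inst['consumed']
        # Keep the candidate with the most remaining room that fits.
        if remaining >= task_impact and (best is None or remaining > best[1]):
            best = (inst, remaining)
    return best[0] if best else None


nodes = [
    {'hostname': 'a', 'enabled': True, 'capacity': 10, 'consumed': 8},   # 2 free
    {'hostname': 'b', 'enabled': True, 'capacity': 10, 'consumed': 3},   # 7 free
    {'hostname': 'c', 'enabled': False, 'capacity': 20, 'consumed': 0},  # disabled
]
assert fit_task(nodes, 5)['hostname'] == 'b'  # only 'b' has >= 5 free
assert fit_task(nodes, 8) is None             # disabled 'c' is never considered
```

`find_largest_idle_instance` is the complementary strategy: ignore free-capacity arithmetic entirely and prefer the biggest node that is currently running nothing.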
 class TowerScheduleState(SingletonModel):
     schedule_last_run = models.DateTimeField(auto_now_add=True)
@@ -190,29 +288,31 @@ class JobOrigin(models.Model):
         app_label = 'main'
 
-@receiver(post_save, sender=InstanceGroup)
-def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs):
+def schedule_policy_task():
     from awx.main.tasks import apply_cluster_membership_policies
     connection.on_commit(lambda: apply_cluster_membership_policies.apply_async())
 
+@receiver(post_save, sender=InstanceGroup)
+def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs):
+    if created or instance.has_policy_changes():
+        schedule_policy_task()
+
 @receiver(post_save, sender=Instance)
 def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):
-    if created:
-        from awx.main.tasks import apply_cluster_membership_policies
-        connection.on_commit(lambda: apply_cluster_membership_policies.apply_async())
+    if created or instance.has_policy_changes():
+        schedule_policy_task()
 
 @receiver(post_delete, sender=InstanceGroup)
 def on_instance_group_deleted(sender, instance, using, **kwargs):
-    from awx.main.tasks import apply_cluster_membership_policies
-    connection.on_commit(lambda: apply_cluster_membership_policies.apply_async())
+    schedule_policy_task()
 
 @receiver(post_delete, sender=Instance)
 def on_instance_deleted(sender, instance, using, **kwargs):
-    from awx.main.tasks import apply_cluster_membership_policies
-    connection.on_commit(lambda: apply_cluster_membership_policies.apply_async())
+    schedule_policy_task()
 
 # Unfortunately, the signal can't just be connected against UnifiedJob; it


@@ -1262,6 +1262,11 @@ class InventorySourceOptions(BaseModel):
                 'Credentials of type machine, source control, insights and vault are '
                 'disallowed for custom inventory sources.'
             )
+        elif source == 'scm' and cred and cred.credential_type.kind in ('insights', 'vault'):
+            return _(
+                'Credentials of type insights and vault are '
+                'disallowed for scm inventory sources.'
+            )
         return None
 
     def get_inventory_plugin_name(self):
@@ -1420,7 +1425,7 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, RelatedJobsMix
     @classmethod
     def _get_unified_job_field_names(cls):
         return set(f.name for f in InventorySourceOptions._meta.fields) | set(
-            ['name', 'description', 'schedule', 'credentials']
+            ['name', 'description', 'schedule', 'credentials', 'inventory']
         )
 
     def save(self, *args, **kwargs):
@@ -1599,6 +1604,13 @@ class InventoryUpdate(UnifiedJob, InventorySourceOptions, JobNotificationMixin,
     class Meta:
         app_label = 'main'
 
+    inventory = models.ForeignKey(
+        'Inventory',
+        related_name='inventory_updates',
+        null=True,
+        default=None,
+        on_delete=models.DO_NOTHING,
+    )
+
     inventory_source = models.ForeignKey(
         'InventorySource',
         related_name='inventory_updates',


@@ -33,7 +33,7 @@ from awx.main.models.notifications import (
     NotificationTemplate,
     JobNotificationMixin,
 )
-from awx.main.utils import parse_yaml_or_json
+from awx.main.utils import parse_yaml_or_json, getattr_dne
 from awx.main.fields import ImplicitRoleField
 from awx.main.models.mixins import (
     ResourceMixin,
@@ -238,11 +238,11 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
         app_label = 'main'
         ordering = ('name',)
 
-    host_config_key = models.CharField(
+    host_config_key = prevent_search(models.CharField(
         max_length=1024,
         blank=True,
         default='',
-    )
+    ))
     ask_diff_mode_on_launch = AskForField(
         blank=True,
         default=False,
@@ -278,7 +278,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
         allows_field='credentials'
     )
     admin_role = ImplicitRoleField(
-        parent_role=['project.organization.project_admin_role', 'inventory.organization.inventory_admin_role']
+        parent_role=['project.organization.job_template_admin_role', 'inventory.organization.job_template_admin_role']
     )
     execute_role = ImplicitRoleField(
         parent_role=['admin_role', 'project.organization.execute_role', 'inventory.organization.execute_role'],
@@ -343,11 +343,6 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
         # not block a provisioning callback from creating/launching jobs.
         if callback_extra_vars is None:
             for ask_field_name in set(self.get_ask_mapping().values()):
-                if ask_field_name == 'ask_credential_on_launch':
-                    # if ask_credential_on_launch is True, it just means it can
-                    # optionally be specified at launch time, not that it's *required*
-                    # to launch
-                    continue
                 if getattr(self, ask_field_name):
                     prompting_needed = True
                     break
@@ -402,7 +397,9 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
             if 'prompts' not in exclude_errors:
                 errors_dict[field_name] = _('Field is not configured to prompt on launch.').format(field_name=field_name)
 
-        if 'prompts' not in exclude_errors and self.passwords_needed_to_start:
+        if ('prompts' not in exclude_errors and
+                (not getattr(self, 'ask_credential_on_launch', False)) and
+                self.passwords_needed_to_start):
             errors_dict['passwords_needed_to_start'] = _(
                 'Saved launch configurations cannot provide passwords needed to start.')
 
@@ -772,9 +769,13 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
             if not os.path.realpath(filepath).startswith(destination):
                 system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
                 continue
-            with codecs.open(filepath, 'w', encoding='utf-8') as f:
-                os.chmod(f.name, 0o600)
-                json.dump(host.ansible_facts, f)
+            try:
+                with codecs.open(filepath, 'w', encoding='utf-8') as f:
+                    os.chmod(f.name, 0o600)
+                    json.dump(host.ansible_facts, f)
+            except IOError:
+                system_tracking_logger.error('facts for host {} could not be cached'.format(smart_str(host.name)))
+                continue
             # make note of the time we wrote the file so we can check if it changed later
             modification_times[filepath] = os.path.getmtime(filepath)
@@ -957,24 +958,37 @@ class JobLaunchConfig(LaunchTimeConfig):
         editable=False,
     )
 
+    def has_user_prompts(self, template):
+        '''
+        Returns True if any fields exist in the launch config that are
+        not permissions exclusions
+        (has to exist because of callback relaunch exception)
+        '''
+        return self._has_user_prompts(template, only_unprompted=False)
+
     def has_unprompted(self, template):
         '''
-        returns False if the template has set ask_ fields to False after
+        returns True if the template has set ask_ fields to False after
         launching with those prompts
         '''
+        return self._has_user_prompts(template, only_unprompted=True)
+
+    def _has_user_prompts(self, template, only_unprompted=True):
         prompts = self.prompts_dict()
         ask_mapping = template.get_ask_mapping()
         if template.survey_enabled and (not template.ask_variables_on_launch):
             ask_mapping.pop('extra_vars')
-            provided_vars = set(prompts['extra_vars'].keys())
+            provided_vars = set(prompts.get('extra_vars', {}).keys())
             survey_vars = set(
                 element.get('variable') for element in
                 template.survey_spec.get('spec', {}) if 'variable' in element
             )
-            if provided_vars - survey_vars:
+            if (provided_vars and not only_unprompted) or (provided_vars - survey_vars):
                 return True
         for field_name, ask_field_name in ask_mapping.items():
-            if field_name in prompts and not getattr(template, ask_field_name):
+            if field_name in prompts and not (getattr(template, ask_field_name) and only_unprompted):
+                if field_name == 'limit' and self.job and self.job.launch_type == 'callback':
+                    continue  # exception for relaunching callbacks
                 return True
         else:
             return False
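The `only_unprompted` split introduced above can be illustrated with a plain-Python sketch (hypothetical stand-in, not the real `JobLaunchConfig` ORM method): with `only_unprompted=True` a provided field counts only when the template no longer allows prompting for it, while with `only_unprompted=False` any provided prompt counts.

```python
# Hypothetical stand-in for JobLaunchConfig._has_user_prompts:
# `prompts` is what the saved launch config provides; `ask_flags` maps
# each field to whether the template still allows prompting for it.
def has_user_prompts(prompts, ask_flags, only_unprompted=True):
    for field, allowed in ask_flags.items():
        # only_unprompted=True: flag only fields the template no longer
        # permits prompting for. only_unprompted=False: flag any prompt.
        if field in prompts and not (allowed and only_unprompted):
            return True
    return False


prompts = {'limit': 'webservers'}
ask_flags = {'limit': True, 'job_type': False}
# 'limit' is still promptable, so nothing is "unprompted"...
assert not has_user_prompts(prompts, ask_flags, only_unprompted=True)
# ...but it is still a user-supplied prompt.
assert has_user_prompts(prompts, ask_flags, only_unprompted=False)
```

The real method adds two wrinkles the sketch omits: survey variables are exempted separately, and a `limit` prompt is ignored when relaunching provisioning callbacks.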
@@ -1019,7 +1033,8 @@ class JobHostSummary(CreatedModifiedModel):
     failed = models.BooleanField(default=False, editable=False)
 
     def __unicode__(self):
-        hostname = self.host.name if self.host else 'N/A'
+        host = getattr_dne(self, 'host')
+        hostname = host.name if host else 'N/A'
         return '%s changed=%d dark=%d failures=%d ok=%d processed=%d skipped=%s' % \
             (hostname, self.changed, self.dark, self.failures, self.ok,
              self.processed, self.skipped)


@@ -436,7 +436,8 @@ class CustomVirtualEnvMixin(models.Model):
         blank=True,
         null=True,
         default=None,
-        max_length=100
+        max_length=100,
+        help_text=_('Local absolute file path containing a custom Python virtualenv to use')
     )
 
     def clean_custom_virtualenv(self):
@@ -465,7 +466,7 @@ class RelatedJobsMixin(object):
         return self._get_related_jobs().filter(status__in=ACTIVE_STATES)
 
     '''
-    Returns [{'id': '1', 'type': 'job'}, {'id': 2, 'type': 'project_update'}, ...]
+    Returns [{'id': 1, 'type': 'job'}, {'id': 2, 'type': 'project_update'}, ...]
     '''
     def get_active_jobs(self):
         UnifiedJob = apps.get_model('main', 'UnifiedJob')
@@ -474,5 +475,5 @@ class RelatedJobsMixin(object):
         if not isinstance(jobs, QuerySet):
             raise RuntimeError("Programmer error. Expected _get_active_jobs() to return a QuerySet.")
-        return [dict(id=str(t[0]), type=mapping[t[1]]) for t in jobs.values_list('id', 'polymorphic_ctype_id')]
+        return [dict(id=t[0], type=mapping[t[1]]) for t in jobs.values_list('id', 'polymorphic_ctype_id')]


@@ -11,7 +11,9 @@ from django.conf import settings
 # Django OAuth Toolkit
 from oauth2_provider.models import AbstractApplication, AbstractAccessToken
 from oauth2_provider.generators import generate_client_secret
+from oauthlib import oauth2
 
+from awx.main.utils import get_external_account
 from awx.main.fields import OAuth2ClientSecretField
 
@@ -25,6 +27,7 @@ class OAuth2Application(AbstractApplication):
     class Meta:
         app_label = 'main'
         verbose_name = _('application')
+        unique_together = (("name", "organization"),)
 
     CLIENT_CONFIDENTIAL = "confidential"
     CLIENT_PUBLIC = "public"
@@ -36,12 +39,10 @@ class OAuth2Application(AbstractApplication):
     GRANT_AUTHORIZATION_CODE = "authorization-code"
     GRANT_IMPLICIT = "implicit"
     GRANT_PASSWORD = "password"
-    GRANT_CLIENT_CREDENTIALS = "client-credentials"
     GRANT_TYPES = (
         (GRANT_AUTHORIZATION_CODE, _("Authorization code")),
         (GRANT_IMPLICIT, _("Implicit")),
         (GRANT_PASSWORD, _("Resource owner password-based")),
-        (GRANT_CLIENT_CREDENTIALS, _("Client credentials")),
     )
 
     description = models.TextField(
@@ -109,8 +110,13 @@ class OAuth2AccessToken(AbstractAccessToken):
     )
     scope = models.TextField(
         blank=True,
+        default='write',
         help_text=_('Allowed scopes, further restricts user\'s permissions. Must be a simple space-separated string with allowed scopes [\'read\', \'write\'].')
     )
+    modified = models.DateTimeField(
+        editable=False,
+        auto_now=True
+    )
 
     def is_valid(self, scopes=None):
         valid = super(OAuth2AccessToken, self).is_valid(scopes)
@@ -119,3 +125,11 @@ class OAuth2AccessToken(AbstractAccessToken):
             self.save(update_fields=['last_used'])
         return valid
 
+    def save(self, *args, **kwargs):
+        if self.user and settings.ALLOW_OAUTH2_FOR_EXTERNAL_USERS is False:
+            external_account = get_external_account(self.user)
+            if external_account is not None:
+                raise oauth2.AccessDeniedError(_(
+                    'OAuth2 Tokens cannot be created by users associated with an external authentication provider ({})'
+                ).format(external_account))
+        super(OAuth2AccessToken, self).save(*args, **kwargs)


@@ -60,16 +60,21 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
     notification_admin_role = ImplicitRoleField(
         parent_role='admin_role',
     )
+    job_template_admin_role = ImplicitRoleField(
+        parent_role='admin_role',
+    )
     auditor_role = ImplicitRoleField(
         parent_role='singleton:' + ROLE_SINGLETON_SYSTEM_AUDITOR,
     )
     member_role = ImplicitRoleField(
-        parent_role=['admin_role', 'execute_role', 'project_admin_role',
-                     'inventory_admin_role', 'workflow_admin_role',
-                     'notification_admin_role', 'credential_admin_role']
+        parent_role=['admin_role']
     )
     read_role = ImplicitRoleField(
-        parent_role=['member_role', 'auditor_role'],
+        parent_role=['member_role', 'auditor_role',
+                     'execute_role', 'project_admin_role',
+                     'inventory_admin_role', 'workflow_admin_role',
+                     'notification_admin_role', 'credential_admin_role',
+                     'job_template_admin_role',],
     )

Some files were not shown because too many files have changed in this diff.